
WO2024219355A1 - Mask, image capturing device, image capturing system, and data generation method - Google Patents

Mask, image capturing device, image capturing system, and data generation method

Info

Publication number
WO2024219355A1
WO2024219355A1 PCT/JP2024/014965 JP2024014965W WO2024219355A1 WO 2024219355 A1 WO2024219355 A1 WO 2024219355A1 JP 2024014965 W JP2024014965 W JP 2024014965W WO 2024219355 A1 WO2024219355 A1 WO 2024219355A1
Authority
WO
WIPO (PCT)
Prior art keywords
mask
pattern
image
dimensional
mask according
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/014965
Other languages
English (en)
Inventor
Ryosuke Uemura
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Semiconductor Solutions Corp
Original Assignee
Sony Semiconductor Solutions Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corp filed Critical Sony Semiconductor Solutions Corp
Priority to CN202480025714.XA priority Critical patent/CN121002890A/zh
Publication of WO2024219355A1 publication Critical patent/WO2024219355A1/fr
Anticipated expiration legal-status Critical
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/955 Computational photography systems, e.g. light-field imaging systems for lensless imaging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof

Definitions

  • the present disclosure relates to a mask, an image capturing device, an image capturing system, and a data generation method.
  • There is known a lensless image sensor equipped with an optical modulator (hereinafter referred to as a "mask") on which a known two-dimensional pattern including an optical amplitude modulation region or a phase modulation region is formed, provided at the front stage of an image capturing element in the optical axis direction (for example, see PTL 1).
  • The lensless image sensor generates a decoded image of a subject by projecting light encoded by the effect of the characteristic two-dimensional pattern of the mask onto the image capturing element and decoding the mask effect by inverse calculation.
  • the present disclosure proposes a mask, an image capturing device, an image capturing system, and a data generation method capable of suppressing deterioration in the image quality of the decoded image.
  • the mask according to an exemplary embodiment includes a two-dimensional pattern including an optical amplitude modulation region or a phase modulation region having a symmetrical component that is symmetrical in a rotation direction about a preset rotation reference point, wherein the mask guides, based on the two-dimensional pattern, incident light to an image sensor.
  • Fig. 1 is an explanatory diagram illustrating an overview of the configuration and operation of an image sensor according to embodiments.
  • Fig. 2 is an explanatory diagram illustrating an example of occurrence of a rotation error according to a first embodiment.
  • Fig. 3 is a graph illustrating SSIM of a decoded image illustrated in Fig. 2.
  • Fig. 4A is an explanatory diagram illustrating a mask generation procedure according to the first embodiment.
  • Fig. 4B is an explanatory diagram illustrating one example of a phase modulation mask according to the first embodiment.
  • Fig. 5 is an explanatory diagram illustrating an image employing a robust mask according to the first embodiment.
  • Fig. 6 is a graph illustrating the SSIM of the decoded image illustrated in Fig. 5.
  • Fig. 7A is an explanatory diagram of a symmetrical component in the robust mask according to the first embodiment.
  • Fig. 7B is an explanatory diagram of the symmetrical component in the robust mask according to the first embodiment.
  • Fig. 7C is an explanatory diagram of the symmetrical component in the robust mask according to the first embodiment.
  • Fig. 8 is an explanatory diagram of a rotation reference point according to the first embodiment.
  • Fig. 9 is an explanatory diagram illustrating images employing the robust mask with the shifted rotation reference point according to the first embodiment.
  • Fig. 10 is a graph illustrating the SSIM of the decoded image illustrated in Fig. 9.
  • Fig. 11 is an explanatory diagram illustrating an example of occurrence of a crop error according to a second embodiment.
  • Fig. 12 is an explanatory diagram illustrating an example of occurrence of the crop error according to the second embodiment.
  • Fig. 13 is an explanatory diagram illustrating an example of occurrence of the crop error according to the second embodiment.
  • Fig. 14 is a graph illustrating the SSIM of the decoded image illustrated in Fig. 13.
  • Fig. 15 is an explanatory diagram of a robust mask according to the second embodiment.
  • Fig. 16 is an explanatory diagram of effects of the robust mask according to the second embodiment.
  • Fig. 17 is an explanatory diagram illustrating an example of a pattern of the robust mask according to the second embodiment.
  • Fig. 18 is an explanatory diagram illustrating an example of a pattern of the robust mask according to the second embodiment.
  • Fig. 19 is an explanatory diagram illustrating an example of a pattern of the robust mask according to the second embodiment.
  • Fig. 20 is an explanatory diagram illustrating an example of a pattern of the robust mask according to the second embodiment.
  • Fig. 21 is an explanatory diagram illustrating a modification of the image sensor.
  • Fig. 22 is an explanatory diagram illustrating a modification of the image sensor.
  • Fig. 23 is an explanatory diagram illustrating a modification of the image sensor.
  • Fig. 24 is an explanatory diagram illustrating a modification of the image sensor.
  • Configuration of image sensor: Fig. 1 is an explanatory diagram illustrating an overview of the configuration and operation of an image sensor 1 according to the embodiments.
  • the image sensor 1 illustrated in Fig. 1 is a so-called lensless camera that does not need a lens to condense light emitted from each point of a subject onto each corresponding point on a sensor for image capturing.
  • the image sensor 1 includes an image capturing device 10 and a signal processing device 20.
  • the image capturing device 10 includes a mask M and an image capturing element S.
  • the mask M is provided at the front stage of the image capturing element S in the optical axis direction.
  • a known two-dimensional pattern including an optical amplitude modulation region or a phase modulation region is formed on the mask M.
  • the optical amplitude modulation is implemented using light-blocking members or members with varying degrees of transmittance.
  • The optical phase modulation is implemented by controlling an uneven structure within the mask plane, by controlling the refractive index distribution of the mask, by controlling the effective refractive index obtained by varying the density of members having a plurality of refractive indices at size orders smaller than the wavelength, and the like.
  • the example illustrated in Fig. 1 is an example of amplitude modulation.
  • When capturing an image of a subject T, the mask M modulates light L that is incident from the direction of the subject with the known two-dimensional amplitude or phase pattern, and then guides the light to the subsequent image capturing element placed at a certain distance away. As a result, a light intensity distribution corresponding to the shape of the subject T and the shape of the two-dimensional pattern of the mask M is projected onto a light-receiving surface of the image capturing element S.
  • the image capturing element S has a plurality of photo detectors PD arranged in a two-dimensional matrix on the light-receiving surface. By photoelectrically converting the light L received by each photo detector PD, the image capturing element S generates an encoded image DA in which the image of the subject T is encoded by the two-dimensional pattern of the mask M, and outputs the encoded image to the signal processing device 20.
  • The encoded image DA, which is an image obtained by photoelectrically converting the light L incident via the mask M, is an image in which the subject T cannot be recognized with the naked eye.
  • The image capturing principle of the encoded image DA by the image capturing element S is expressed by the following Formula (1).
  • Y = F · X + N ... (1)
  • Y: Light-receiving signal of each photo detector PD (one-dimensional data)
  • X: Scene vector (value of light L incident on the mask M during image capturing (one-dimensional data))
  • N: Noise
  • F: Imaging matrix (matrix defined by the two-dimensional pattern of the mask M and the distance between the mask and the image capturing element)
  • the light-receiving signal Y of each photo detector PD is a signal obtained by adding the noise N to light obtained by modulating the scene vector X during image capturing by the imaging matrix F.
  • the signal processing device 20 executes predetermined decoding processing on the encoded image DA, thereby generating a two-dimensional image that allows the subject T to be recognized in a similar manner to a general camera (hereinafter described as "decoded image DB"). For example, the signal processing device 20 multiplies the light-receiving signal Y of each photo detector PD described above by an inverse matrix F -1 of the imaging matrix F to restore the scene vector X, thereby generating and outputting the decoded image DB.
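  • As a minimal numerical sketch of Formula (1) and the inverse-matrix decoding described above, the following Python snippet builds a toy imaging matrix F and restores the scene vector X with a pseudo-inverse; the array sizes and random values are illustrative assumptions, not the actual mask data of the embodiments.

```python
# Toy sketch of Formula (1), Y = F.X + N, and decoding by the (pseudo-)inverse of F.
# All sizes and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 32 * 32          # photo detectors PD on the image capturing element S
n_scene = 32 * 32           # elements of the scene vector X

F = rng.random((n_pixels, n_scene))        # imaging matrix defined by the mask pattern
X = rng.random(n_scene)                    # scene vector (light incident on the mask M)
N = 0.01 * rng.standard_normal(n_pixels)   # noise

Y = F @ X + N                              # Formula (1)

X_hat = np.linalg.pinv(F) @ Y              # restore the scene vector
print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```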
  • the point spread function is the intensity distribution when light from a single point on the subject passes through the mask and reaches an image capturing surface.
  • The assumption of shift invariance means that, even if the light originates from a different single point on the subject, the above-described intensity distribution merely shifts without the intensity distribution itself being altered.
  • In other words, the point spread function corresponds to each row of the imaging matrix F.
  • The assumption of shift invariance means that, when the rows of the imaging matrix F are compared, only a shift occurs between them.
  • Formula (1) can be transformed as shown below, and the memory and calculation amount can be significantly compressed.
  • Y' = F' * X' + N' ... (2)
  • X': Scene matrix (value of light L incident on the mask M during image capturing (two-dimensional data))
  • F': Imaging matrix (point spread function)
  • *: Convolution operation
  • The imaging matrix F' has a matrix size similar to that of Y', so a huge memory is not required. Furthermore, the convolution operation can be processed at high speed by using the fast Fourier transform (FFT) algorithm.
  • Wiener filter processing may be executed, and a regularization term that incorporates various known assumptions regarding scenes and the like may be added.
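  • A minimal sketch of such shift-invariant decoding, assuming a toy binary point spread function and a simple Wiener filter (the snr constant and array sizes are illustrative assumptions), is shown below; Formula (2) is simulated as a circular convolution via the FFT and then inverted.

```python
# Sketch of FFT-based Wiener decoding under the shift-invariance assumption of
# Formula (2): Y' = F' * X' + N'. The PSF, scene, and SNR are illustrative only.
import numpy as np

def wiener_decode(encoded, psf, snr=100.0):
    """Restore the scene matrix X' from the encoded image Y' by Wiener filtering."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=encoded.shape)   # transfer function of F'
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)             # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(encoded) * G))

rng = np.random.default_rng(1)
scene = np.zeros((128, 128))                                  # toy scene matrix X'
scene[40:88, 40:88] = 1.0
psf = (rng.random((128, 128)) > 0.5).astype(float)            # toy binary mask PSF
psf /= psf.sum()

# Circular convolution of the scene with the PSF, plus noise (Formula (2)).
encoded = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))
encoded += 1e-3 * rng.standard_normal(encoded.shape)

decoded = wiener_decode(encoded, psf)      # approximation of the scene matrix X'
```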
  • the relative positional relationship between the mask M and the image capturing element S may deviate from the predetermined relative position.
  • For example, as a rotation error, an error may occur in the relative positional relationship between the mask M and the image capturing element S.
  • In addition, as a crop error, an error may occur in which not all of the desired light L is incident on the image capturing element S.
  • In this way, an error may occur due to a disturbance, such as a deviation in the relative positional relationship between the mask M and the image capturing element S, the size relationship between the mask M and the image capturing element S, or an increase in the incident angle of the light L.
  • the light-receiving signal Y of each photo detector PD in the encoded image DA becomes a signal modulated by a matrix different from the known imaging matrix F defined by the two-dimensional pattern of the mask M.
  • In this case, the signal processing device 20 cannot restore the correct scene vector X. As a result, decoding fails and the subject T cannot be recognized in the decoded image.
  • the image capturing device 10 is configured to suppress deterioration in the image quality of the decoded image by providing the mask M with characteristics according to the type of disturbance such as the rotation error and the crop error.
  • Thereby, the robustness can be improved at each stage with lightweight decoding processing.
  • a first embodiment that solves problems caused by the rotation error and a second embodiment that solves problems caused by the crop error will be described below.
  • Fig. 2 is an explanatory diagram illustrating an example of occurrence of the rotation error according to the first embodiment.
  • the rotation angle illustrated in Fig. 2 is a rotation angle of the mask when the center of gravity of the two-dimensional pattern on the mask M is set as a rotation reference point.
  • Fig. 3 is a graph illustrating structural similarity index measure (SSIM) of the decoded image illustrated in Fig. 2.
  • the SSIM indicates that the closer the value is to 1, the better the image quality.
  • the evaluation index may be other than the SSIM.
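  • As one illustrative way to compute such an index, the SSIM between a decoded image and a reference image can be evaluated, for example, with scikit-image (assuming that library is available); the arrays below are placeholders rather than actual results of Figs. 2 and 3.

```python
# Evaluating decoded-image quality with SSIM (a value closer to 1 is better).
# Assumes scikit-image; the arrays are placeholders, not the embodiment data.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(2)
reference = rng.random((128, 128))                              # ground-truth scene
decoded = reference + 0.05 * rng.standard_normal((128, 128))    # decoded image DB

score = ssim(reference, decoded, data_range=decoded.max() - decoded.min())
print(f"SSIM = {score:.3f}")
```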
  • When the rotation angle is 0°, the signal processing device 20 can generate the clear decoded image DB. However, as the rotation angle of the mask M gradually increases to 0.5° and 1°, the image quality of the decoded image DB gradually deteriorates.
  • Fig. 4A is an explanatory diagram illustrating a mask generation procedure according to the first embodiment.
  • Fig. 4B is an explanatory diagram illustrating one example of a phase modulation mask according to the first embodiment.
  • the case of optical amplitude modulation will be described as an example.
  • As illustrated in Fig. 4A, when generating the mask according to the first embodiment (hereinafter described as "robust mask M1"), image data of the mask M is acquired while the mask M is sequentially rotated by a predetermined rotation angle.
  • the mask M is rotated using the center of gravity of the two-dimensional pattern of the mask M as the rotation reference point.
  • the example illustrated in Fig. 4A illustrates a case where the mask M is rotated from 0° to 3°. Then, by integrating (synthesizing) the image data of each mask M, the two-dimensional pattern of the robust mask M1 is acquired.
  • the integration (synthesis) range of image data may change depending on the distance from the center.
  • The integrated (synthesized) value of the image data may be binarized using a certain threshold to create a new two-dimensional pattern for a robust mask M1'.
  • In the two-dimensional pattern of the robust mask M1, for example, like a star-trail image captured by aligning the camera's optical axis with the pole star and taking a long exposure, the length of an arc-shaped pattern of a symmetrical component that is symmetrical in the rotation direction about the rotation reference point increases as the distance from the rotation reference point increases.
  • the robust mask M1 is completed by forming the acquired two-dimensional pattern of the robust mask M1 on a transparent substrate such as a glass plate by a known method.
  • the two-dimensional pattern of the robust mask M1 becomes a two-dimensional pattern including the optical amplitude modulation region having a symmetrical component that is symmetrical in the rotation direction about the preset rotation reference point.
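  • A sketch of the generation procedure of Fig. 4A, assuming an arbitrary toy base mask, a 0° to 3° rotation range, and a 0.5 binarization threshold (all illustrative parameters), is as follows.

```python
# Sketch of the robust-mask generation of Fig. 4A: rotate the mask about the
# centre of gravity of its pattern in small steps, integrate (synthesize) the
# rotated copies, and binarize to obtain the pattern of a robust mask M1'.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(4)
base_mask = (rng.random((256, 256)) > 0.5).astype(float)    # toy binary mask M

angles = np.arange(0.0, 3.0 + 1e-9, 0.25)                   # rotate from 0° to 3°
accumulated = np.zeros_like(base_mask)
for angle in angles:
    # rotation about the array centre, standing in for the pattern's centre of gravity
    accumulated += rotate(base_mask, angle, reshape=False, order=1, mode="constant")

robust_m1 = accumulated / len(angles)                        # integrated (synthesized) pattern
robust_m1_binary = (robust_m1 > 0.5).astype(float)           # binarized robust mask M1'
```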
  • Fig. 4B illustrates an example in which phase modulation masks M1-p and M1'-p are created based on the robust masks M1 and M1', respectively.
  • the robust mask is completed by forming the new two-dimensional pattern on a transparent substrate such as a glass plate by using a known method.
  • Fig. 5 is an explanatory diagram illustrating the image employing the robust mask M1 according to the first embodiment.
  • the rotation angle illustrated in Fig. 5 is a rotation angle of the mask when the center of gravity of the two-dimensional pattern on the robust mask M1 is set as the rotation reference point.
  • Fig. 6 is a graph illustrating the SSIM of the decoded image illustrated in Fig. 5.
  • The SSIM of the decoded image DB obtained when the robust mask M1 is employed has a value higher than the SSIM, illustrated in Fig. 3, of the decoded image DB obtained when the mask M is employed. This also shows that, by employing the robust mask M1, it is possible to suppress the deterioration in the image quality of the decoded image DB.
  • Figs. 7A, 7B, and 7C are explanatory diagrams of symmetrical components in the two-dimensional pattern of the robust mask M1 according to the first embodiment.
  • the vertical axis of the graphs illustrated in Figs. 7A, 7B, and 7C is the distance from the rotation reference point of the robust mask M1 to the symmetrical component.
  • the horizontal axis is the integral angle range of the symmetrical component.
  • the symmetrical component that is symmetrical in the rotation direction about the rotation reference point may have at least a portion in which the integral angle range increases as the distance from the rotation reference point increases.
  • In other words, the symmetrical component has a portion in which the arc length of an arc-shaped pattern becomes longer from the rotation reference point toward the periphery of the pattern forming region where the two-dimensional pattern is provided.
  • the symmetrical component that is symmetrical in the rotation direction about the rotation reference point may be configured such that the integral angle range increases as the distance from the rotation reference point increases.
  • the arc length of the arc-shaped pattern increases from the rotation reference point toward the periphery of the pattern forming region where the two-dimensional pattern is provided.
  • the symmetrical component that is symmetrical in the rotation direction about the rotation reference point may be configured to have a constant integral angle range regardless of the distance from the rotation reference point.
  • the arc length of the arc-shaped pattern increases in proportion to the distance from the rotation reference point.
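  • In geometric terms, an arc at a distance r from the rotation reference point spanning an integral angle range Δθ (in radians) has an arc length of s = r · Δθ; with the constant Δθ of Fig. 7C, the arc length therefore grows in proportion to r, whereas in the configurations of Figs. 7A and 7B, Δθ itself also grows with r over at least part of the pattern.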
  • the example illustrated in Fig. 7A has the highest degree of freedom, and the example illustrated in Fig. 7C has the lowest degree of freedom. From the viewpoint of robustness against the rotation error, the example illustrated in Fig. 7C has the highest robustness, and the example illustrated in Fig. 7A has the lowest robustness.
  • The explanatory diagrams of the first embodiment present three configurations as examples, but there is no restriction on the upper and lower limits; the configuration can be designed case by case depending on the manufacturing capability regarding mask alignment, the desired image quality for each use case, and the required degree of freedom of the two-dimensional pattern.
  • Fig. 8 is an explanatory diagram of the rotation reference point according to the first embodiment.
  • Fig. 8 illustrates an X-Y orthogonal coordinate system for convenience. So far, the case has been described in which the rotation reference point for rotating the mask M to acquire the robust mask M1 is set at the center of gravity of the pattern forming region where the two-dimensional pattern is formed, as illustrated in the upper figure of Fig. 8, but this is one example.
  • the position of the rotation reference point may be set, for example, at a position shifted from the center of gravity of the pattern forming region, as illustrated in the lower figure of Fig. 8.
  • the lower figure of Fig. 8 illustrates a robust mask M2 when the rotation reference point is set to be shifted by 20% from the center of gravity of the pattern forming region in the negative direction of the X-axis. If the rotation reference point is shifted 100% from the center of gravity of the pattern forming region, the rotation reference point will be set at the periphery of the pattern forming region.
  • the rotation reference point may be set at an arbitrary position within the mask M, but is preferably set inside the pattern forming region from the viewpoint of robustness against the rotation error, and is more preferably provided at a position shifted within 50% from the center of gravity of the pattern forming region.
  • the rotation reference point is more preferably set at a position where the distance from the center of gravity of the pattern forming region is equal to or less than half the distance from the center of gravity of the pattern forming region to the periphery of the pattern forming region.
  • the best position for setting the rotation reference point is the center of gravity of the pattern forming region.
  • The SSIM of the decoded image DB obtained when the robust mask M2 is employed has a value higher than the SSIM, illustrated in Fig. 3, of the decoded image DB obtained when the mask M is employed. This also shows that, by employing the robust mask M2, it is possible to suppress the deterioration in the image quality of the decoded image DB.
  • Figs. 11 to 13 are explanatory diagrams illustrating examples of occurrence of the crop error according to the second embodiment.
  • the crop rate illustrated in Fig. 13 is an index indicating what percentage of light arriving from an assumed angle of view falls outside the image capturing element S in the direction in which cropping is most likely to occur.
  • the assumed angle of view here is the incident angle of light defined by the incident characteristics of the image capturing element S.
  • Fig. 14 is a graph illustrating the SSIM of the decoded image illustrated in Fig. 13.
  • Figures (a) and (b) illustrated in the upper figure of Fig. 15 illustrate how information becomes missing due to the crop, and figures (c) and (d) illustrated in the lower figure illustrate how missing information due to the crop is compensated for by the robust mask.
  • the white rectangular frame XS illustrated in each figure indicates the size of the image capturing element S.
  • the pattern XM illustrated in each figure indicates a two-dimensional intensity pattern (point spread function) formed on the image capturing element S by the mask.
  • Although a robust mask M6 (for example, see Fig. 16) itself is not illustrated, the robust mask M6 is provided with a two-dimensional pattern including an optical amplitude modulation region or a phase modulation region having translational symmetry.
  • Here, the center of the basic pattern of the mask and the center of the image capturing element S coincide with each other.
  • Some basic pattern information is not received by the image capturing element S, but since the two-dimensional intensity pattern (point spread function) has translational symmetry, the missing information is compensated for on the opposite side of the image capturing element S.
  • Such processing would lead to an increase in the memory and calculation amount; however, since this method uses the properties of circular convolution inversely, the need for such processing is eliminated, which is advantageous from the viewpoint of the memory and calculation amount.
  • As preprocessing for the signal processing, it is necessary to crop the basic pattern portion of the two-dimensional intensity pattern (point spread function) formed on the image capturing element S, but this tends to further reduce the memory and calculation amount.
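  • A toy illustration of this wrap-around property (the array sizes and misalignment below are assumed for the example) is shown next: tiling a basic point spread function and cutting out a shifted, sensor-sized window yields only a circular shift of the basic pattern, so no information is lost and circular-convolution decoding still applies.

```python
# Toy check that a translationally symmetric (tiled) PSF is robust to crop errors:
# a shifted, sensor-sized crop of the tiled PSF equals a circular shift of the
# basic PSF, so the measurement remains a circular convolution with pattern P.
import numpy as np

rng = np.random.default_rng(5)
basic_psf = rng.random((64, 64))                 # PSF of one basic pattern P

tiled = np.tile(basic_psf, (3, 3))               # translationally symmetric PSF
shift = (10, -7)                                 # assumed mask/sensor misalignment

r0, c0 = 64 + shift[0], 64 + shift[1]            # shifted sensor window in the tiling
cropped = tiled[r0:r0 + 64, c0:c0 + 64]

# The crop is only a circular shift of the basic PSF: nothing is lost.
assert np.allclose(cropped, np.roll(basic_psf, (-shift[0], -shift[1]), axis=(0, 1)))
```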
  • Fig. 16 is an explanatory diagram of effects of the robust mask according to the second embodiment. For example, as illustrated in Fig. 16, if the mask M3 illustrated in Fig. 11 is employed and the light L incident from the periphery of the mask M3 is not received by the image capturing element S, the signal processing device 20 cannot generate the clear decoded image DB from the encoded image DA.
  • When the robust mask M6 with translational symmetry is employed, the image of light that is incident from a first periphery in the two-dimensional pattern and is not received by the image capturing element S appears in the encoded image DA at a second periphery opposite to the first periphery.
  • the signal processing device 20 can generate the clear decoded image DB from the encoded image DA.
  • Fig. 17 illustrates a first pattern example
  • Fig. 18 illustrates a second pattern example
  • Fig. 19 illustrates a third pattern example
  • Fig. 20 illustrates a fourth pattern example.
  • Figs. 17 to 18 illustrate the X-Y orthogonal coordinate system for convenience.
  • the description will be given in which the X-axis positive direction in the X-Y orthogonal coordinate system is defined as right, the X-axis negative direction is defined as left, the Y-axis positive direction is defined as up, and the Y-axis negative direction is defined as down.
  • the first pattern of the two-dimensional pattern has one-dimensional translational symmetry.
  • a partial pattern A1 at the right end of the central basic pattern P is placed outside the left end of the basic pattern P.
  • a partial pattern A2 at the left end of the central basic pattern P is placed outside the right end of the basic pattern P.
  • a robust mask M7 illustrated in the lower figure of Fig. 17 can be formed.
  • the signal processing device 20 can interpolate the image of light that is incident on the partial pattern A1 at the right end of the central basic pattern P and leaks from the image capturing element S with the image of incident light on the partial pattern A1 placed outside the left end of the basic pattern P.
  • the signal processing device 20 can interpolate the image of light that is incident on the partial pattern A2 at the left end of the central basic pattern P and leaks from the image capturing element S with the image of incident light on the partial pattern A2 placed outside the right end of the basic pattern P. Therefore, the signal processing device 20 can generate the clear decoded image DB from the encoded image DA.
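  • One simple way to construct such a pattern, assuming an arbitrary toy basic pattern P and an illustrative strip width, is to wrap the edge strips of P to the opposite sides, which is what numpy's "wrap" padding does; the sketch below builds a mask with the structure of the robust mask M7.

```python
# Sketch of the first pattern example (Fig. 17): the right-end strip (A1) of the
# basic pattern P is placed outside its left end and the left-end strip (A2)
# outside its right end, giving one-dimensional translational symmetry.
import numpy as np

rng = np.random.default_rng(6)
basic_pattern = (rng.random((128, 128)) > 0.5).astype(float)   # basic pattern P

strip = 16  # assumed width of the partial patterns A1 / A2

robust_m7 = np.pad(basic_pattern, ((0, 0), (strip, strip)), mode="wrap")
print(robust_m7.shape)   # (128, 160): basic pattern plus wrapped partial patterns
```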
  • the second pattern of the two-dimensional pattern has two-dimensional translational symmetry.
  • the partial pattern A1 at the right end of the central basic pattern P is placed outside the left end of the basic pattern P.
  • the partial pattern A2 at the left end of the central basic pattern P is placed outside the right end of the basic pattern P.
  • a partial pattern A3 at the upper end of the central basic pattern P is placed outside the lower end of the basic pattern P.
  • a partial pattern A4 at the lower end of the central basic pattern P is placed outside the upper end of the basic pattern P.
  • a partial pattern A5 at the upper right end of the central basic pattern P is placed outside the lower left end of the basic pattern P
  • a partial pattern A6 at the lower right end of the central basic pattern P is placed outside the upper left end of the basic pattern P.
  • a partial pattern A7 at the upper left end of the central basic pattern P is placed outside the lower right end of the basic pattern P, and a partial pattern A8 at the lower left end of the central basic pattern P is placed outside the upper right end of the basic pattern P.
  • a robust mask M8 illustrated in the lower figure of Fig. 18 can be formed.
  • the signal processing device 20 can similarly interpolate the image of light that is incident from the periphery of the central basic pattern P and leaks from the image capturing element S with the image of incident light on the partial patterns A1 to A8 placed outside the basic pattern P. Therefore, the signal processing device 20 can generate the clear decoded image DB from the encoded image DA.
  • the aspect ratio of the formation region of the basic pattern P does not have to be 1:1. That is, the shape of the formation region of the basic pattern P does not have to be square.
  • the shape may be rectangular like a robust mask M9 of the third pattern illustrated in Fig. 19. With the robust mask M9 as well, the signal processing device 20 can generate the clear decoded image DB from the encoded image DA.
  • a plurality of the basic patterns P is arranged in a matrix in the center.
  • nine basic patterns P are arranged in a 3 ⁇ 3 matrix.
  • The partial patterns A1 to A8 are arranged at various locations outside the periphery of the group of nine basic patterns P.
  • a robust mask M10 illustrated in the lower figure of Fig. 20 can be formed.
  • the signal processing device 20 can generate the clear decoded image DB from the encoded image DA.
  • the same pattern repeatedly appears two times or less in the one-dimensional direction.
  • the area of the basic pattern P or the basic pattern P group is smaller than the area of the light-receiving region of the image capturing element S. This makes it possible to suppress light incident from the periphery of the basic pattern P or the basic pattern P group from leaking from the light-receiving region of the image capturing element S.
  • a gap may be provided between the basic pattern P or the basic pattern P group, and the partial patterns A1 to A8 arranged outside the basic pattern P or the basic pattern P group. That is, in the two-dimensional pattern, a gap may be provided between the same patterns that repeatedly appear in the one-dimensional direction. This facilitates the process of forming the two-dimensional patterns of the first to fourth patterns. In this case, if the basic pattern including the gap is considered, the above description of the effects applies as is.
  • In the mask M, the entire pattern is the basic pattern; in the robust masks, the repeating unit is the basic pattern.
  • the basic pattern can take an arbitrary shape including the internal two-dimensional pattern and its external shape. That is, the internal two-dimensional pattern presents a Lissajous pattern, but this is just one example, and the external shape does not necessarily even have to be rectangular.
  • Figs. 21 to 24 are explanatory diagrams illustrating modifications of the image sensor.
  • Fig. 21 illustrates an image sensor 1A according to a first modification.
  • Fig. 22 illustrates an image sensor 1B according to a second modification.
  • Fig. 23 illustrates an image capturing system 100 including an image sensor 1C according to a third modification.
  • Fig. 24 illustrates an image sensor 1D according to a fourth modification.
  • the image sensor 1A according to the first modification includes the image capturing device 10.
  • Since the image sensor 1A does not include the signal processing device 20, the height of the chip can be reduced.
  • Such an image sensor 1A outputs the encoded image DA obtained by capturing an image of a subject.
  • the encoded image DA output from the image sensor 1A is decoded by a signal processing device provided on a separate chip or cloud.
  • the image sensor 1B includes the image capturing device 10 and a signal processing device 20B.
  • the image capturing device 10 outputs the encoded image DA obtained by capturing an image of the subject to the signal processing device 20B.
  • the signal processing device 20B does not execute decoding processing, but executes signal processing on the encoded image DA to facilitate decoding processing. For example, if the encoded image DA includes the rotation error, the signal processing device 20B executes calibration processing to correct the rotation error. Then, the signal processing device 20B outputs the encoded image DA1 after the calibration processing.
  • The signal processing device that decodes the encoded image DA1, whether provided on a separate chip or on the cloud, can generate the clear decoded image DB by executing decoding processing common to all the image capturing devices 10. Therefore, the cost of the information processing device can be reduced.
  • the image capturing system 100 including the image sensor 1C according to the third modification includes the image sensor 1C, a signal processing device 20C, and an application 30.
  • the signal processing device 20C may be provided on a separate chip or may be provided on the cloud.
  • the image sensor 1C includes the image capturing device 10.
  • the image capturing device 10 outputs the encoded image DA obtained by capturing an image of the subject to the signal processing device 20C.
  • the signal processing device 20C outputs the encoded image DA1, which has undergone calibration processing on the encoded image DA, to the application 30, in a similar manner to the signal processing device 20B.
  • the application 30 executes decoding processing on the encoded image DA1 to generate and output the decoded image DB.
  • the signal processing device 20C may be configured to generate the decoded image DB from the encoded image DA and output the decoded image to the application 30.
  • In this case, the application 30 executes some kind of calibration processing on the decoded image DB input from the signal processing device 20C and outputs the result.
  • In the image capturing system 100, since the calibration processing and the decoding processing are executed at different locations, security against information leakage can be improved.
  • An image capturing system 100A including the image sensor 1D according to the fourth modification includes the image sensor 1D and a signal processing device 20D.
  • the image sensor 1D includes the image capturing device 10.
  • the image capturing device 10 includes any one of the robust masks described in the present disclosure.
  • the image capturing device 10 outputs the encoded image DA obtained by capturing an image of the subject to the signal processing device 20D.
  • the signal processing device 20D executes decoding processing on the encoded image DA to output the decoded image DB.
  • In the image capturing system 100A, since the generation processing of the encoded image DA and the decoding processing are executed at different locations, security against information leakage can be improved.
  • the configuration of the mask according to the first embodiment and the configuration of the mask according to the second embodiment can be combined. That is, the mask according to the embodiments may be provided with a two-dimensional pattern including the optical amplitude modulation region or the phase modulation region having the symmetrical component that is symmetrical in the rotation direction about the preset rotation reference point, and the optical amplitude modulation region or the phase modulation region with translational symmetry, and may be configured to guide incident light to the image capturing element. This makes it possible to provide the mask with improved robustness against both the rotation error and the crop error.
  • a mask comprising: a two-dimensional pattern including an optical amplitude modulation region or a phase modulation region having a symmetrical component that is symmetrical in a rotation direction about a preset rotation reference point, wherein the mask guides, based on the two-dimensional pattern, incident light to an image sensor.
  • the rotation reference point is set inside a pattern forming region where the two-dimensional pattern is provided.
  • the rotation reference point is set at a position where a distance from a center of gravity of the pattern forming region is equal to or less than half a distance from the center of gravity to a periphery of the pattern forming region.
  • a mask comprising: a two-dimensional pattern including an optical amplitude modulation region or a phase modulation region having translational symmetry, an area of a repeating portion of the two-dimensional pattern being smaller than an area of a light-receiving region in an image sensor, wherein the mask guides, based on the two-dimensional pattern, incident light to the image sensor.
  • the mask according to (8), wherein the two-dimensional pattern has one-dimensional translational symmetry.
  • a mask comprising: a two-dimensional pattern including an optical amplitude modulation region or a phase modulation region having a symmetrical component that is symmetrical in a rotation direction about a preset rotation reference point, and an optical amplitude modulation region or a phase modulation region having translational symmetry, wherein the mask guides, based on the two-dimensional pattern, incident light to an image sensor.
  • An image capturing device including: the mask according to any one of (1), (8), and (13); and an image sensor that receives incident light guided by the mask.
  • An image capturing system including: an image capturing device including the mask according to any one of (1), (8), and (13), and an image capturing element that receives light incident via the mask; and a signal processing device that restores an image acquired by the image capturing element.
  • a data generation method including: receiving, by an image sensor, incident light guided by the mask according to any of (1), (8), and (13), and generating an encoded image of an image corresponding to the light.
  • the mask according to (1), wherein the optical amplitude modulation region includes members having varying degrees of light transmittance.
  • The mask described above, wherein the phase modulation region controls optical phase modulation based on an uneven structure within the mask plane and on a refractive index based on a density of members having a plurality of refractive indices at size orders smaller than a wavelength of the incident light.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to a mask including a two-dimensional pattern that includes an optical amplitude modulation region or an optical phase modulation region having a symmetrical component that is symmetrical in a rotation direction about a preset rotation reference point, the mask guiding, based on the two-dimensional pattern, incident light to an image sensor. In addition, the rotation reference point is set inside a pattern forming region where the two-dimensional pattern is provided.
PCT/JP2024/014965 2023-04-21 2024-04-15 Mask, image capturing device, image capturing system, and data generation method Pending WO2024219355A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202480025714.XA CN121002890A (zh) 2023-04-21 2024-04-15 掩模、图像捕获装置、图像捕获系统及数据生成方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023-070363 2023-04-21
JP2023070363A JP2024155556A (ja) 2023-04-21 2023-04-21 マスク、撮像装置、撮像システム、およびデータ生成方法

Publications (1)

Publication Number Publication Date
WO2024219355A1 true WO2024219355A1 (fr) 2024-10-24

Family

ID=90922396

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/014965 Pending WO2024219355A1 (fr) 2023-04-21 2024-04-15 Mask, image capturing device, image capturing system, and data generation method

Country Status (3)

Country Link
JP (1) JP2024155556A (fr)
CN (1) CN121002890A (fr)
WO (1) WO2024219355A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150373265A1 (en) * 2013-03-05 2015-12-24 Rambus Inc. Phase Gratings with Odd Symmetry for High-Resolution Lensed and Lensless Optical Sensing
WO2019176349A1 2018-03-14 2019-09-19 ソニー株式会社 Image processing device, imaging device, and image processing method

Also Published As

Publication number Publication date
CN121002890A (zh) 2025-11-21
JP2024155556A (ja) 2024-10-31

Similar Documents

Publication Publication Date Title
US10670829B2 (en) Imaging device
US8436909B2 (en) Compound camera sensor and related method of processing digital images
US9338380B2 (en) Image processing methods for image sensors with phase detection pixels
CN110140348B (zh) 摄像装置及摄像模块
CN108616677B (zh) 摄像装置
Nagahara et al. Programmable aperture camera using LCoS
US9432568B2 (en) Pixel arrangements for image sensors with phase detection pixels
CN110692233B (zh) 摄像装置、图像处理装置、摄像系统、图像处理方法及记录介质
CN101430426A (zh) 用于获取场景的4d光场的设备和方法
US9473700B2 (en) Camera systems and methods for gigapixel computational imaging
WO2017145348A1 (fr) Dispositif d'imagerie
CN103916574B (zh) 摄像装置
US10506124B2 (en) Image reading apparatus
JP7373015B2 (ja) 撮像方法
JP4945806B2 (ja) 画像処理装置、画像処理方法、撮像装置、撮像方法、およびプログラム
WO2024219355A1 (fr) Masque, dispositif de capture d'image, système de capture d'image et procédé de génération de données
Lee et al. Robust all-in-focus super-resolution for focal stack photography
CN111815512B (zh) 用于检测失真图像中的对象的方法、系统和设备
WO2021093537A1 (fr) Procédé et dispositif d'acquisition de différence de phase, appareil électronique et support d'enregistrement lisible par ordinateur
WO2020059181A1 (fr) Dispositif d'imagerie, et procédé d'imagerie
JP2023016864A (ja) 撮像装置および方法
JP2012015982A (ja) 映像間のシフト量の決定方法
Wang et al. UDAC: Under-Display Array Cameras
US20210314546A1 (en) Apparatus, method and system for generating three-dimensional image using a coded phase mask
JP2025117421A (ja) Image processing device, three-dimensional data generation program, and three-dimensional data generation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24722353

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: CN202480025714X

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 2024722353

Country of ref document: EP