WO2012053908A2 - Method and apparatus for non-contact imaging - Google Patents
Method and apparatus for non-contact imaging
- Publication number
- WO2012053908A2 (PCT/NZ2011/000216)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- image
- modulated light
- capture device
- light beam
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
Definitions
- the present invention relates to a method and apparatus for non-contact imaging.
- Diffuse Optical Tomography is a medical imaging technique that has been under research and development since the early 1990s.
- This imaging technique has the advantages of being non-invasive, non-ionising and low cost, and has one order of magnitude faster temporal resolution than that of Magnetic Resonance Imaging (MRI).
- DOT has the ability to produce information about the oxy-haemoglobin and deoxy-haemoglobin concentrations through measurement of the optical properties (absorption and reduced scattering) of tissues. This makes it ideal for use in functional brain imaging and optical mammography.
- the time domain technique passes pulsed light through the area under test and measures the time delay between the incident and received pulse.
- the continuous wave technique measures only the attenuated light intensity passed through the tissue, whereas the frequency domain technique passes intensity modulated light through the area under test and measures both phase and intensity of the received modulated light relative to the incident.
- the continuous wave technique lacks the ability to determine tissue optical properties because the received intensity does not include time information.
- a method of non-contact imaging including the steps of: illuminating a target to be imaged with a modulated light beam; capturing a first image of the illuminated target with an image capture device, the first image having at least one pixel containing amplitude and phase information; illuminating the target with a diverging modulated light; capturing a second image of the illuminated target with an image capture device, the second image having at least one pixel containing amplitude and phase information; determining distance information relating to position of the image capture device relative to the target using information contained in the second image; and determining internal characteristics of the target using information contained in the first image in combination with the distance information.
- an apparatus for non- contact imaging including: at least one light source configured to illuminate a target to be imaged with a modulated light beam, and a diverging modulated light; at least one image capture device configured to capture a first image of the target illuminated by the modulated light beam, and a second image of the target illuminated with the diverging modulated light, wherein the first and second image have at least one pixel containing amplitude and phase information; and at least one processor configured to: determine distance information relating to position of the image capture device relative to the target using information contained in the second image; and determine internal characteristics of the target using information contained in the first image in combination with the distance information.
- a method of non-contact imaging including the steps of: illuminating a target to be imaged with a modulated light having a wavelength in the near-infrared region; capturing a first image of the illuminated target with an image capture device, the first image having at least one pixel containing amplitude and phase information; illuminating the target with a modulated light having a wavelength which penetrates the target to a lesser degree than the light having a wavelength in the near-infrared region; capturing a second image of the illuminated target with an image capture device, the second image having at least one pixel containing amplitude and phase information; determining distance information relating to position of the image capture device relative to the target using information contained in the second image; and determining internal characteristics of the target using information contained in the first image in combination with the distance information.
- an apparatus for non- contact imaging including: at least one light source configured to illuminate a target to be imaged with a modulated light having a wavelength in the near-infrared region, and a modulated light having a wavelength which penetrates the target to a lesser degree than the light having a wavelength in the near-infrared region; at least one image capture device configured to capture a first image of the target illuminated by the near-infrared light, and a second image of the target illuminated with the light having a wavelength which penetrates the target to a lesser degree than the near-infrared light, wherein the first and second image have at least one pixel containing amplitude and phase information; and at least one processor configured to: determine distance information relating to position of the image capture device relative to the target using information contained in the second image; and determine internal characteristics of the target using information contained in the first image in combination with the distance information.
- the modulated light beam may be a collimated laser beam.
- the light beam may have a wavelength within the near-infrared region (between approximately 600 nm and 950 nm, preferably 850 nm).
- absorption in the near-infrared region may be orders of magnitude lower than that in the visible region - thereby achieving a greater depth of penetration.
- the modulated light having a wavelength in the near-infrared region need not be a beam, although it is anticipated that this will assist in achieving greater penetration and subsequent clarity of measurements.
- the steps of illuminating the target with the modulated light beam, and capturing the first image may be repeated a number of times, directing the beam to different locations on the target. It should be appreciated that the necessity of doing so may be determined by the size of the target, or the size of a specific region within the target.
- the wavelength of the light having a wavelength which penetrates the target to a lesser degree than the light having a wavelength in the near-infrared region is in the visible region (between 400nm and 600nm).
- while such wavelengths may be in the visible region (between 400 nm and 600 nm), this should not be considered limiting.
- the wavelength of such a light may be any wavelength which does not penetrate as much as that of the light having a wavelength in the near-infrared region - including wavelengths within that region.
- the wavelength may be selected based on the composition of the target.
- the light having a wavelength which penetrates the target to a lesser degree than the light having a wavelength in the near-infrared region will be referred to as visible light.
- the diverging and/or visible light source simultaneously illuminates the target while being intensity modulated.
- a person skilled in the art should appreciate that it may be advantageous to use a light source (or sources) which does not penetrate appreciably into the target (particularly where the target is tissue), enabling accurate surface or distance measurements e.g. having a wavelength in the visible region (between 400nm and 600nm).
- a number of second images may be captured - directing the beam to different locations on the target.
- the diverging and/or visible light is transmitted on average from the same viewpoint as the capture device (that is, coaxially with the capture device's optical axis).
- performance of a step using the modulated light beam may be performed using the modulated light having a wavelength in the near-infrared region, and similarly that performance of a step using the modulated diverging light may be performed using the modulated light having a wavelength in the visible region.
- the steps of illuminating the target with the modulated light beam or diverging modulated light, and capturing the first image or second image respectively may be repeated with different modulation frequencies or coding techniques in order to improve quality of the measurements.
- Multiple frequencies may, for example, be used to reduce the distortion effects of stray or scattered multi-path light, and provide additional information to be used by the internal characteristics processor's algorithm.
- Additional coding techniques such as Hadamard and pseudo-random coding, can be used to improve signal to noise ratio and allow the simultaneous acquisition of images from multiple beam illumination and diverging illumination.
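As a hedged illustration of how orthogonal codes can allow simultaneous acquisition from two modulated sources (the code assignment, signal values, and correlation step are hypothetical; the patent does not specify an implementation):

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of two).
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Hypothetical example: two light sources share the sensor by switching
# according to orthogonal +/-1 rows of a Hadamard matrix.
H = hadamard(8)
code_a, code_b = H[1], H[2]

signal_a, signal_b = 3.0, 5.0                   # per-pixel contributions (made up)
mixed = signal_a * code_a + signal_b * code_b   # simultaneous acquisition

# Correlating the mixed measurement with each code recovers each source
# independently, because the codes are mutually orthogonal.
recovered_a = mixed @ code_a / len(code_a)
recovered_b = mixed @ code_b / len(code_b)
```

Orthogonality is what separates the sources: each code correlates to zero against the other, so the cross terms vanish.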
- the same image capture device is used to capture both the first image and the second image.
- the image capture device may be a Charge Coupled Device (CCD) camera, with a separate shutter device such as an image intensifier enabling modulation.
- the image capture device may be an internally modulated custom image sensor where shuttering is integrated on-chip as part of the pixel architecture.
- Such a device will have the ability to incorporate gain switching or gain modulation features internal to the pixel structure, eliminating the need for an external image modulation device.
- the image capture device includes a Complementary Metal Oxide Semiconductor (CMOS) image sensor.
- Such sensors have the potential to be modulated on chip, and are generally less expensive to manufacture than CCD devices.
- An internally modulated image sensor has advantages over a CCD camera in that the custom sensors are smaller, lower power, and less complicated to integrate.
- the image capture device may be a Single Photon Avalanche Diode (SPAD) array sensor, which has advantages of increased sensitivity, faster frame rates, and faster modulation rates, but lower still resolution.
- a single image capture device may be used to capture the first image and the second image simultaneously, for example using Hadamard and pseudo-random coding techniques.
- the present invention includes illuminating a plurality of positions on the target with the modulated light beam and capturing a first image at each position.
- determining the internal characteristics of the target includes determining absorption of photons of the modulated light beam by the target from the at least one pixel containing amplitude and phase information.
- determining the internal characteristics of the target includes determining scattering of photons of the modulated light beam by the target from the at least one pixel containing amplitude and phase information.
- determining the internal characteristics of the target includes generating a tomographic image of the target.
- Numerous models have been developed to predict the propagation of light within biological tissue, such as forward models based on the radiative transport equation or diffusion approximation, or the inverse model, to derive either the intrinsic optical properties of the tissue at the applied wavelengths, or to estimate functional information such as total hemoglobin content and water fraction from measurements at multiple wavelengths.
- phase delay created by capturing the first image in the non-contact scenario of the present invention may distort the image and ultimately lead to inaccuracies in modelling the internal characteristics of the target.
- the present invention includes the step of determining phase delay of the information contained in the first image using the distance information, and correcting the first image information prior to determining the internal characteristics of the target.
- the phase delay associated with that distance may be used to adjust the measurements derived from the first image (of the beam).
- the distance information includes distance of the image capture device to the target. It should be appreciated that phase delay may be compensated for using the variation in distance from the image capture device to the different points on the target - particularly where the target is curved or angled relative to the image capture device.
- the distance information includes variation of distance of the image capture device to the target between two points on the target. Determining distance information relating to position of the image capture device relative to the target using information contained in the second image may be achieved by any number of techniques, such as the well known full field Amplitude Modulated Continuous Wave (AMCW) indirect method of time-of-flight range imaging.
- a range image is produced by illuminating a target with a diverging light and measuring the time of flight (or propagation delay) of the light as it is reflected from objects back to an image sensor.
- the flight time is given by 2r/c where r is the target range, c is the speed of light, and the factor of 2 allows for the round-trip distance travelled both to and from the target.
- an indirect measurement is performed where the illumination source is amplitude modulated and the propagation delay is manifested as a phase shift of the modulation envelope of the reflected light.
- An image sensor synchronously samples the reflected light by modulating the gain of each pixel at the same frequency as that of the illumination source (homodyne detection), integrating the result over an integration period.
- the integration period allows a significant number (tens of thousands) of modulation cycles to be combined to achieve a high signal-to-noise ratio (SNR) before reading the output from the sensor, where the output voltage represents a correlation between the received illumination and the sensor modulation signals.
- for targets or points on a target close to the camera, the phase shift of the modulation envelope will be small, producing a strong correlation between the illumination and the image sensor modulation waveforms, which is seen as a bright pixel value.
- the illumination and sensor modulation waveforms are out of phase for distant targets or points on a target, producing dull pixel values.
- the pixel intensity is not only dependent on the phase shift due to the propagation time, but is also affected by object surface properties (colour, reflectivity, etc.), as well as ambient lighting.
- the amplitude A, phase φ, and offset B of the correlation function can then be determined for each pixel by: φ = arctan((A3 − A1)/(A0 − A2)), A = (1/2)√((A3 − A1)² + (A0 − A2)²), and B = (A0 + A1 + A2 + A3)/4, where A0 to A3 are four samples of the correlation function taken 90 degrees apart.
- the range r is calculated for each pixel from the measured phase φ as r = cφ/(4πf), where c is the speed of light and f is the amplitude modulation frequency of the illumination source and image sensor. It should be appreciated that this is not intended to be limiting, and that other range detection or determination techniques may be implemented with the present invention.
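A sketch of the range calculation just described, assuming the standard AMCW relation r = cφ/(4πf) (the factor of 4π accounts for the round-trip path; the modulation frequency used here is illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_phase(phase_rad, mod_freq_hz):
    """Convert a measured modulation-envelope phase shift to target range.

    The light travels to the target and back, doubling the path length,
    hence 4*pi rather than 2*pi in the denominator: r = c * phi / (4*pi*f).
    """
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# e.g. a phase shift of pi/2 at an assumed 30 MHz modulation frequency
r = range_from_phase(math.pi / 2, 30e6)  # roughly 1.25 m
```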
- the equation may then be rearranged and applied to the frequency or wavelength of the modulated light beam to determine the phase delay of each pixel in the first image captured by the image capture device.
- the pixels of the respective images may be easily matched to determine the phase delay associated with that particular pixel or group of pixels. This may reduce the complexity associated with calibrating separate image sensors.
- the distance information may also be combined with prior radial lens calibration information to calculate the three dimensional location of the surface of the target, providing the so-called grid or mesh data describing the relative three-dimensional location of detection points on the object, for the tomography calculations.
- the distance information relating to position of the image capture device relative to the target may also enable prediction of light behaviour in the first image, accounting for the external geometry of the target. As such, it is envisaged that the distance information may be used to generate a three dimensional boundary model.
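A minimal sketch of how a range image plus intrinsic lens calibration could yield the three-dimensional surface points described above. The pinhole camera model and all parameter values are assumptions for illustration; the patent does not specify a camera model:

```python
import numpy as np

def range_image_to_points(r, fx, fy, cx, cy):
    """Back-project an (H, W) range image to 3D surface points.

    r holds per-pixel radial distances; fx, fy, cx, cy are pinhole
    intrinsics (focal lengths and principal point, illustrative values).
    """
    h, w = r.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Build a viewing ray for each pixel and normalise it to unit length,
    # so scaling by the radial range lands on the target surface.
    rays = np.dstack([(u - cx) / fx, (v - cy) / fy, np.ones((h, w))])
    rays /= np.linalg.norm(rays, axis=2, keepdims=True)
    return rays * r[..., None]  # (H, W, 3) mesh of surface points

# Toy 4x4 range image with every pixel at 2 m; principal point at (2, 2).
pts = range_image_to_points(np.full((4, 4), 2.0), fx=500, fy=500, cx=2, cy=2)
```

The resulting grid of 3D points is the kind of mesh data that could feed the tomography calculations.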
- a processor as referred to within this specification may be implemented within one or more processors, micro-controllers, micro-processors, application specific integrated circuits (ASICs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), digital signal processors (DSPs) or any other electronic device known to one skilled in the art and designed to perform the functions described herein, or a combination thereof.
- a processor may also be implemented as a combination of computing devices, for example, a combination of a digital signal processor (DSP) and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- the processor may be configured to undertake control of the light source(s) and image capture device(s), and facilitate transfer of the images or data contained therein to a separate processor or computing device for processing.
- the processors may function in conjunction with servers and network connections as known in the art.
- firmware and/or software (also known as a computer program) may be used to implement the techniques described herein.
- the techniques of the present invention may be implemented as instructions (for example, procedures, functions, and so on) that perform the functions described. It should be appreciated that the present invention is not described with reference to any particular programming languages, and that a variety of programming languages could be used to implement the present invention.
- the firmware and/or software codes may be stored in a memory, or embodied in any other processor readable medium, and executed by a processor or processors.
- the memory may be implemented within the processor or external to the processor.
- a computer program may be accessible from any computer-readable or processor-readable device, carrier, or media.
- examples of a computer-readable medium include, but are not limited to, magnetic storage devices, optical disks, digital versatile discs (DVD), smart cards, and flash memory devices.
- steps of a method, process, or algorithm described in connection with the present invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two.
- the various steps or acts in a method or process may be performed in the order shown, or may be performed in another order. Additionally, one or more process or method steps may be omitted or one or more process or method steps may be added to the methods and processes. An additional step, block, or action may be added in the beginning, end, or intervening existing elements of the methods and processes.
- Figure 1 provides a diagrammatic view of an apparatus for non-contact imaging according to one embodiment of the present invention
- Figure 2 is an overhead view of a target to be imaged according to an embodiment of the present invention
- Figure 3 is an overhead view of collection of images according to an embodiment of the present invention.
- Figures 4a-c are images collected by an apparatus for non-contact imaging according to an embodiment of the present invention
- Figure 5 is an overhead view of collection of an image according to an embodiment of the present invention
- Figure 6 shows the results of distance measurement according to one embodiment of the present invention
- Figures 7a, b are reconstructed images of a target according to an embodiment of the present invention.
- Figure 8 provides a diagrammatic view of an alternative configuration of the apparatus for non-contact imaging according to one embodiment of the present invention.
- Figure 1 illustrates an apparatus (generally indicated by arrow 1) for non-contact imaging in the form of diffuse optical tomography (DOT).
- the apparatus (1) includes a radiation source in the form of a near-infrared laser (2) operating at 850 nm configured to radiate a target to be imaged (3) with an amplitude-modulated collimated laser beam.
- the laser (2) is mounted to a track (4) on which the laser may be moved relative to the target (3).
- An imaging device in the form of a camera (5) such as the XZ422 Demonstrator made by Canesta Inc., is configured to capture a first image of the target (3) radiated by the modulated laser beam.
- the apparatus (1) also includes a second light source in the form of an LED array (6), configured to illuminate the target (3) with a diverging modulated light.
- the light produced by the LED array (6) is selected to have a wavelength that minimises penetration of the target (3), for example between 400-600 nm (wavelength of green light).
- the camera (5) is also configured to capture a second image of the target illuminated with the diverging modulated light.
- the apparatus (1) includes a processor (7) configured to control operation of the laser (2), LED array (6), and camera (5) in accordance with the modulation scheme of choice, for example homodyne amplitude modulation.
- the processor (7) is also configured to transmit the first and second images to a computer (8) for determining distance information relating to position of the camera (5) relative to the target (3) using information contained in the second image, and determine internal characteristics of the target (3) using information contained in the first image in combination with the distance information.
- Figure 2 shows a target (3) which will be used to illustrate application of the present invention.
- the target (3) is a tank with transparent plastic walls (20) filled with Intralipid™ (21).
- Intralipid™ is a commercially available milky solution containing soybean oil, lecithin, glycerine and water, and is used to simulate body tissue when conducting experiments.
- a 6.3 mm diameter metal rod (22) is immersed in the Intralipid™.
- Figure 4a shows an example of a second image of the phase component of the light from the LED array (6) as reflected from the target (3).
- Each of the black squares (for example at point 40) is a block of pixels from which information is obtained.
- Figures 4b and 4c show the magnitude and phase, respectively, of a series of thirteen images captured by the camera (5) of the laser (2) at thirteen positions along the track (4).
- the pixel blocks e.g. block (40) may be seen in each image. From these pixel blocks, amplitude and phase information is sampled in order to reconstruct a tomographic image of the target (3) using NIRFAST, an open source reconstruction software which simulates light or radiation propagation in tissue based on the finite element method.
- the distance from the camera (5) to the target (3) may be used to calculate the phase delay using the equation φd = 2πfd/c, where d is the distance, f is the modulation frequency, and c is the speed of light.
- the surface of the target (3) is illuminated with the diverging light of the LED array (6), and second images are captured.
- Points on the target (3) surface closer to the camera (5) will produce less phase shift than those further away from the camera (5). Consequently, intensity or active brightness on each pixel is a function of distance. Points on the target (3) closer to the camera (5) will produce brighter pixels, because light attenuates the further it travels. In addition, pixel brightness is affected by the background light. For that reason, more than one capture (ideally four) is used to determine an accurate phase and brightness for each pixel.
- the relative phase φ, the active brightness a, and the background intensity o are calculated with: φ = arctan((A3 − A1)/(A0 − A2)), a = (1/2)√((A3 − A1)² + (A0 − A2)²), and o = (A0 + A1 + A2 + A3)/4, where A0, A1, A2, and A3 are pixel brightness values recorded from four captures 90 degrees apart from each other.
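The four-capture calculation above can be sketched as follows. This is standard four-bucket AMCW demodulation; the sample convention Ai = o + a·cos(φ + iπ/2) is an assumption about how the four captures are ordered:

```python
import math

def demodulate(a0, a1, a2, a3):
    """Recover relative phase, active brightness, and background offset
    from four pixel samples taken 90 degrees apart."""
    phase = math.atan2(a3 - a1, a0 - a2)
    active = 0.5 * math.hypot(a3 - a1, a0 - a2)
    background = 0.25 * (a0 + a1 + a2 + a3)
    return phase, active, background

# Synthetic pixel: background 5.0, amplitude 2.0, true phase 0.7 rad.
samples = [5.0 + 2.0 * math.cos(0.7 + i * math.pi / 2) for i in range(4)]
phi, a, o = demodulate(*samples)  # recovers 0.7, 2.0, 5.0
```

Note how the differencing (A3 − A1, A0 − A2) cancels the background term, which is why ambient light does not corrupt the phase estimate.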
- the distance (d) from the camera (5) to the target (3) between the pixel blocks on the captured images corresponding to points of the target (3) may then be determined.
- the same pixel blocks are used for the range measurement as were used in the phase and amplitude measurement from the first images.
- the phase information obtained from the first images may be adjusted. In doing so, the pixel blocks on the image become virtual sensors as if located on the boundary of the target (3).
- the phase delays between the laser (2) and the target (3), and also through the target (3) may also be determined - whether through measurement using a non-contact range detector, or estimated.
- the corrected phase of the first pixel block may then be calculated by subtracting the phase delay associated with the distance from that point on the target (3) to the camera (5) from the measured phase of the pixel block.
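As an illustrative sketch only (the patent's exact correction equation is not reproduced here; the one-way free-space delay d/c and the symbol names are assumptions):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def corrected_phase(measured_phase, distance_m, mod_freq_hz):
    """Subtract the phase delay accrued over the target-to-camera path.

    Assumes a one-way free-space delay of d/c, so the delay phase is
    2*pi*f*d/c; the result is wrapped back into [0, 2*pi).
    """
    delay_phase = 2.0 * math.pi * mod_freq_hz * distance_m / C
    return (measured_phase - delay_phase) % (2.0 * math.pi)
```

With distance zero the measurement is returned unchanged; at the measured distance of each pixel block, the block behaves like a virtual sensor on the target boundary.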
- if the phase is not corrected, reconstruction software such as NIRFAST may calculate a negative absorption or scattering coefficient, producing results as illustrated in Table 1.
- Figures 7a and 7b show a NIRFAST reconstruction of a slice of the target.
- Figure 7a is a reconstruction based on the absorption coefficient, with Figure 7b based on reduced scattering.
- the presence of the metal rod (22) of Figure 2 may be seen in both reconstructions.
- the minimal presence of the rod (22) in Figure 7b is to be expected given that it is a black object when submerged in the tank, and therefore little to no scattering occurs.
- Figure 8 illustrates an apparatus (generally indicated by arrow 80) for non-contact imaging in the form of diffuse optical tomography (DOT).
- the near-infrared laser (2) is positioned on the same side of the target (3) as the camera (5). This way, data collection may be performed from one side of the target - reducing complexity of positioning the target relative to the apparatus. It is envisaged that this will be particularly important in using the present invention in applications such as mammography.
- the pixels surrounding the point of illumination may be particularly useful for obtaining information regarding the internal characteristics of the target, as these will collect light which has penetrated the target and returned to the surface.
- the point of illumination and the surrounding pixels will likely contain a large dynamic range. Such a large dynamic range can be detrimental to the detected signal or image quality for most image sensors. This may be addressed using High Dynamic Range Imaging, where at least two images are acquired at different sensitivities (achieved with changes in sensor gain, lens aperture, or image integration time), and are combined to produce an image that is of higher dynamic range than any single image.
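A minimal sketch of the two-exposure merge described above (the saturation threshold, gain ratio, and normalised pixel values are illustrative assumptions):

```python
import numpy as np

def combine_hdr(short_exp, long_exp, gain_ratio, saturation=0.95):
    """Merge two exposures of different sensitivity into one image.

    Where the long (more sensitive) exposure saturates, fall back to the
    short exposure scaled by the known gain / integration-time ratio.
    """
    saturated = long_exp >= saturation
    merged = long_exp.copy()
    merged[saturated] = short_exp[saturated] * gain_ratio
    return merged

# Hypothetical normalised pixels: the second pixel saturates the long
# exposure, so its value comes from the scaled short exposure instead.
merged = combine_hdr(np.array([0.1, 0.3]), np.array([0.8, 1.0]), gain_ratio=8.0)
```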
Abstract
The present invention relates to a method and apparatus for non-contact imaging, in which a target to be imaged is illuminated with a modulated light beam and with a diverging modulated light, and a first and a second image of the respectively illuminated target are captured with an image capture device - the images having at least one pixel containing amplitude and phase information. Distance information relating to the position of the image capture device relative to the target is determined using the information contained in the second image, and internal characteristics of the target are determined using the information contained in the first image in combination with the distance information.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| NZ588636 | 2010-10-18 | ||
| NZ58863610A NZ588636A (en) | 2010-10-18 | 2010-10-18 | A method and apparatus for non-contact diffuse optical tomographic imaging |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| WO2012053908A2 true WO2012053908A2 (fr) | 2012-04-26 |
| WO2012053908A8 WO2012053908A8 (fr) | 2012-07-05 |
| WO2012053908A3 WO2012053908A3 (fr) | 2012-08-23 |
Family
ID=45975776
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/NZ2011/000216 Ceased WO2012053908A2 (fr) | 2011-10-18 | Method and apparatus for non-contact imaging |
Country Status (2)
| Country | Link |
|---|---|
| NZ (1) | NZ588636A (fr) |
| WO (1) | WO2012053908A2 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2015133910A2 | 2014-03-06 | 2015-09-11 | University Of Waikato | Time of flight camera system which resolves direct and multi-path radiation components |
| US9861319B2 (en) | 2015-03-23 | 2018-01-09 | University Of Kentucky Research Foundation | Noncontact three-dimensional diffuse optical imaging of deep tissue blood flow distribution |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7711087B2 (en) * | 2006-04-07 | 2010-05-04 | Varian Medical Systems, Inc. | Patient setup using tomosynthesis techniques |
| DE102006024425A1 (de) * | 2006-05-24 | 2007-11-29 | Siemens Ag | Method for locating a medical instrument during an intervention in the human body |
| BRPI0719141A2 (pt) * | 2006-11-21 | 2014-03-04 | Koninkl Philips Electronics Nv | System and method for imaging prostate cancer, computer-readable medium, and uses of the system and of diffuse optical tomography |
| US7973912B2 (en) * | 2008-09-02 | 2011-07-05 | Basis Software, Inc. | Binary modulation rangefinder |
- 2010-10-18: NZ application NZ58863610A (granted as NZ588636A; IP right now ceased)
- 2011-10-18: PCT application PCT/NZ2011/000216 (published as WO2012053908A2; ceased)
Also Published As
| Publication number | Publication date |
|---|---|
| WO2012053908A8 (fr) | 2012-07-05 |
| WO2012053908A3 (fr) | 2012-08-23 |
| NZ588636A (en) | 2013-02-22 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11834689; Country of ref document: EP; Kind code of ref document: A2 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 11834689; Country of ref document: EP; Kind code of ref document: A2 |