WO2020090439A1 - Image processing apparatus, image processing method, and program - Google Patents
- Publication number
- WO2020090439A1 (PCT/JP2019/040461)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- data
- dimensional
- alignment
- processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/12—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
Definitions
- the disclosed technology relates to an image processing device, an image processing method, and a program.
- OCT angiography (OCTA) is angiography based on optical coherence tomography (OCT).
- a blood vessel image (hereinafter referred to as an OCTA image) is generated by projecting three-dimensional motion contrast data acquired by OCT onto a two-dimensional plane.
- the motion contrast data is data in which the same cross section of the measurement target is repeatedly imaged by OCT and a temporal change of the measurement target between the imagings is detected.
- the motion contrast data is obtained, for example, by calculating the temporal change of the phase or vector of the complex OCT signal, the difference, the ratio, the correlation, or the like.
- Registration of three-dimensional data is useful for creating a composite image with reduced noise, but it requires a larger amount of calculation than registration of two-dimensional data, so the waiting time until the registered image is displayed increases.
- One object of the present invention is to achieve both noise reduction and a reduction in the amount of calculation required for the registration processing performed until the composite image is generated.
- The present invention is not limited to the above object; achieving effects that are obtained by each configuration shown in the embodiments for carrying out the invention described later, and that are not obtained by the conventional technique, can also be regarded as one of its objects.
- The disclosed image processing apparatus includes a first mode in which a composite image of a plurality of images of the eye to be examined is generated based on first image registration processing, and a second mode in which a composite image of a plurality of images of the eye to be examined is generated based on second image registration processing whose processing amount is larger than that of the first image registration processing.
- FIG. 13 is a diagram for explaining an example of an image update display according to the second embodiment.
- Flow charts showing an example of the flow of processing in the image processing system in the third embodiment.
- Diagrams for explaining an example of an image selection screen in the third embodiment.
- A flow chart showing an example of the flow of processing in the image processing system in the fourth embodiment.
- Diagrams for explaining an example of a screen that displays an image in the fourth embodiment.
- The image processing apparatus according to this embodiment aligns a plurality of pieces of motion contrast data to generate motion contrast data with reduced artifacts. Three-dimensional or two-dimensional motion contrast data with reduced artifacts is created according to the operator's request.
- According to the present embodiment, it is possible to obtain high-quality two-dimensional or three-dimensional motion contrast data even when the motion contrast data contains artifacts caused by slight eye movement.
- Here, high image quality refers to an image having an improved S/N ratio compared with a single shot, or an image in which the amount of information necessary for diagnosis is increased.
- the three-dimensional data refers to three-dimensional tomographic image data including brightness values and three-dimensional motion contrast data including decorrelation values.
- FIG. 1 is a diagram showing a configuration of an image processing system 100 including an image processing apparatus 300 according to this embodiment.
- The image processing apparatus 300 is configured by being connected, via interfaces, to a tomographic image capturing apparatus (also referred to as OCT) 200, a fundus image capturing apparatus 400, an external storage unit 500, a display unit 600, and an input unit 700.
- OCT tomographic image capturing apparatus
- the tomographic image capturing apparatus 200 is an apparatus that captures a tomographic image of the eye.
- the apparatus used for the tomographic image capturing apparatus is, for example, SD-OCT or SS-OCT. Since the tomographic image capturing apparatus 200 is a known device, detailed description thereof will be omitted, and here, the capturing of a tomographic image performed according to an instruction from the image processing apparatus 300 will be described.
- The galvanometer mirror 201 scans the fundus with the measurement light and defines the range of the fundus imaged by OCT.
- the drive control unit 202 also controls the drive range and speed of the galvanometer mirror 201 to define the imaging range and the number of scanning lines (scanning speed in the plane direction) in the plane of the fundus.
- Although the galvanometer mirror is shown as a single unit, it is actually composed of two mirrors, an X-scan mirror and a Y-scan mirror, and can scan a desired range on the fundus with the measurement light.
- the focus 203 is for focusing on the retinal layer of the fundus through the anterior segment of the subject eye.
- the measurement light is focused on the retinal layer of the fundus by an unillustrated focus lens via the anterior segment of the subject eye.
- the measurement light emitted to the fundus reflects and scatters at each retinal layer and returns.
- the internal fixation lamp 204 includes a display unit 241 and a lens 242.
- As the display unit 241, one in which a plurality of light-emitting diodes (LEDs) are arranged in a matrix is used.
- the lighting position of the light emitting diode is changed under the control of the drive control unit 202 in accordance with the region to be imaged.
- the light from the display unit 241 is guided to the subject's eye via the lens 242.
- The wavelength of the light emitted from the display unit 241 is 520 nm, and a desired pattern is displayed under the control of the drive control unit 202.
- the coherence gate stage 205 is controlled by the drive control unit 202 in order to deal with the difference in the axial length of the eye to be inspected.
- The coherence gate is the position at which the optical path lengths of the measurement light and the reference light in OCT are equal. Furthermore, by controlling the position of the coherence gate, the imaging method is switched between imaging on the retinal layer side and imaging on the side deeper than the retinal layer.
- an eye structure and an image acquired by the image processing system will be described with reference to FIG.
- Figure 2A shows a schematic diagram of the eyeball.
- C is the cornea
- CL is the lens
- V is the vitreous body
- M is the macula (the central part of the macula represents the fovea)
- D represents the optic papilla.
- This embodiment mainly describes the case where the tomographic image capturing apparatus 200 captures the posterior pole of the retina, including the vitreous body, the macula, and the optic papilla. Although not described in this embodiment, the tomographic image capturing apparatus 200 can also image the anterior segment, such as the cornea and the crystalline lens.
- FIG. 2B shows an example of a tomographic image of the retina acquired by the tomographic image capturing apparatus 200.
- AS represents a unit of image acquisition in an OCT tomographic image called A scan.
- a plurality of A scans are collected to form one B scan.
- The B scan is called a tomographic image.
- Ve represents a blood vessel
- V represents the vitreous body
- M represents the macula
- D represents the optic disc.
- L1 is the boundary between the inner limiting membrane (ILM) and the nerve fiber layer (NFL)
- L2 is the boundary between the nerve fiber layer and the ganglion cell layer (GCL)
- L3 is the photoreceptor inner segment-outer segment junction (IS/OS).
- L4 represents the retinal pigment epithelium layer (RPE)
- L5 represents the Bruch's membrane (BM)
- L6 represents the choroid.
- the horizontal axis (the main scanning direction of OCT) is the x axis
- the vertical axis (the depth direction) is the z axis.
- FIG. 2C shows an example of a fundus image acquired by the fundus image capturing apparatus 400.
- the fundus image capturing device 400 is a device that captures a fundus image of an eye part, and examples of the device include a fundus camera and an SLO (Scanning Laser Ophthalmoscope).
- M represents the macula
- D represents the optic papilla
- the thick curve represents the blood vessels of the retina.
- the horizontal axis (OCT main scanning direction) is the x axis
- the vertical axis (OCT sub scanning direction) is the y axis.
- the tomographic image capturing apparatus 200 and the fundus image capturing apparatus 400 may be integrated or may be separate types.
- the image processing device 300 includes an image acquisition unit 301, a storage unit 302, an image processing unit 303, an instruction unit 304, a display control unit 305, and a determination unit 306.
- The image acquisition unit 301 includes a tomographic image generation unit 311 and a motion contrast data generation unit 312; it acquires the signal data of tomographic images captured by the tomographic image capturing apparatus 200 and performs signal processing to generate tomographic images and motion contrast data.
- The image acquisition unit 301 also acquires the fundus image data captured by the fundus image capturing apparatus 400. The generated tomographic image and the fundus image are stored in the storage unit 302.
- The image processing unit 303 includes a preprocessing unit 331, an image generation unit 332, a detection unit 333, a first alignment unit 334, a selection unit 335, a second alignment unit 336, a third alignment unit 337, a fourth alignment unit 338, and an image composition unit 339.
- the pre-processing unit 331 performs a process of removing artifacts from the motion contrast data.
- the image generation unit 332 generates a two-dimensional motion contrast front image (also referred to as an OCTA image) from the three-dimensional motion contrast data.
- the detection unit 333 detects the boundary line of each layer from the retina.
- the first alignment unit 334 aligns the two-dimensional front image.
- the selection unit 335 selects the reference data from the result of the first alignment unit 334.
- the second alignment unit 336 aligns the lateral direction (x axis) of the retina using the OCTA image.
- the third alignment unit 337 aligns the retina in the depth direction (z axis).
- the fourth alignment unit 338 sets a plurality of regions for alignment in a characteristic portion inside the tomographic image, and aligns in the depth direction (z axis) of the retina for each region unit.
- the image composition unit 339 averages the three-dimensional data aligned by the first to fourth alignment units.
- The determination unit 306 determines the method of averaging processing for the motion contrast data.
- the external storage unit 500 holds information relating to the eye to be inspected (name, age, sex, etc. of the patient), the image data taken, the imaging parameters, the image analysis parameters, and the parameters set by the operator in association with each other.
- the input unit 700 is, for example, a mouse, a keyboard, a touch operation screen, or the like, and the operator gives instructions to the image processing device 300, the tomographic image capturing device 200, and the fundus image capturing device 400 via the input unit 700.
- FIG. 3 is a flowchart showing the operation processing of the entire system in this embodiment.
- In step S301, an eye information acquisition unit (not shown) acquires the subject identification number from the outside as information for identifying the eye to be examined. Then, based on the subject identification number, the information about the subject's eye held in the external storage unit 500 is acquired and stored in the storage unit 302.
- In step S302, the eye to be examined is scanned and imaged.
- the tomographic image capturing apparatus 200 controls the drive control unit 202 and operates the galvanometer mirror 201 to scan the tomographic image.
- the galvanometer mirror 201 is composed of an X scanner for the horizontal direction and a Y scanner for the vertical direction. Therefore, by changing the orientation of each of these scanners, scanning can be performed in the horizontal direction (X) and the vertical direction (Y) in the device coordinate system. By changing the orientations of these scanners at the same time, scanning can be performed in a combined direction of the horizontal direction and the vertical direction, so that it is possible to perform scanning in any direction on the plane of the fundus oculi.
- the drive control unit 202 controls the light emitting diodes of the display unit 241 to control the position of the internal fixation lamp 204 so as to perform imaging at the center of the macula and the optic disc.
- As the scan pattern, a pattern for capturing a three-dimensional volume, such as a raster scan, a radial scan, or a cross scan, is set.
- In this embodiment, the scan pattern is a three-dimensional volume acquired by raster scanning, and the three-dimensional volume is photographed N times (N is 2 or more) to generate high-quality image data.
- The data photographed repeatedly N times are captured over the same scanning range with the same scan pattern. For example, a 3 mm × 3 mm range is repeatedly photographed at a sampling of 300 × 300 (main scanning × sub-scanning).
- The same line portion is repeatedly photographed M times (M is 2 or more) in order to calculate the motion contrast. For example, when M is 2, 300 × 600 B-scans are actually photographed, and 300 × 300 three-dimensional motion contrast data is generated from them.
- the term “identical” includes not only the case of being completely the same but also the case of not being completely the same but substantially the same due to the imperfection of the function of tracking the movement of the eye to be inspected.
- In this embodiment, the tomographic image capturing apparatus 200 scans the eye to be examined while tracking it, so that the same location can be captured for averaging while the influence of involuntary eye movement is reduced. Furthermore, when a motion that would become an artifact in the image, such as blinking, is detected, rescanning is automatically performed at the location where the artifact occurred.
- In step S303, a tomographic image is generated.
- the tomographic image generation unit 311 generates a tomographic image by performing a general reconstruction process on each interference signal.
- the tomographic image generation unit 311 removes fixed pattern noise from the interference signal. Fixed pattern noise removal is performed by averaging a plurality of detected A scan signals to extract fixed pattern noise, and subtracting the fixed pattern noise from the input interference signal.
- the tomographic image generation unit 311 performs a desired window function process in order to optimize the depth resolution and the dynamic range that have a trade-off relationship when Fourier transform is performed in a finite section. Then, FFT processing is performed to generate a tomographic signal.
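The reconstruction steps above (fixed-pattern noise removal by A-scan averaging and subtraction, windowing, then an FFT) can be sketched as follows. This is a minimal illustration under assumed array shapes, not the apparatus's actual implementation; the function name is hypothetical.

```python
import numpy as np

def reconstruct_bscan(interference, window=None):
    """Reconstruct a B-scan from raw spectral interference data.

    interference: (n_ascans, n_samples) array of spectral fringes.
    The fixed-pattern noise is estimated as the mean A-scan and
    subtracted, a window function is applied, and an FFT yields the
    depth profile (only the magnitude of the positive half is kept).
    """
    # Fixed-pattern noise: average of all detected A-scan spectra.
    fixed_pattern = interference.mean(axis=0, keepdims=True)
    fringes = interference - fixed_pattern
    # Window to trade off depth resolution against dynamic range.
    if window is None:
        window = np.hanning(interference.shape[1])
    fringes = fringes * window
    # FFT along the spectral axis gives the tomographic signal.
    depth = np.fft.fft(fringes, axis=1)
    half = interference.shape[1] // 2
    return np.abs(depth[:, :half])
```

A Hann window is used here only as a placeholder for the "desired window function" mentioned in the text.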
- In step S304, the motion contrast data generation unit 312 generates motion contrast data.
- This data generation will be described with reference to FIG. MC indicates three-dimensional motion contrast data, and LMC indicates two-dimensional motion contrast data forming the three-dimensional motion contrast data.
- a method of generating this LMC will be described.
- the motion contrast data generation unit 312 first corrects the positional deviation between a plurality of tomographic images taken in the same range of the eye to be inspected.
- the method of correcting the positional deviation may be any method.
- In this embodiment, the motion contrast generation unit 312 images the same range M times and aligns the tomographic image data corresponding to the same location using features such as the fundus shape. Specifically, one of the M pieces of tomographic image data is selected as a template, the degree of similarity with the other tomographic image data is obtained while changing the position and angle of the template, and the amount of positional deviation from the template is obtained. After that, the motion contrast generation unit 312 corrects each piece of tomographic image data based on the obtained positional deviation amount.
- the motion contrast generation unit 312 obtains a decorrelation value M (x, z) between two pieces of tomographic image data, in which the photographing times of the respective pieces of tomographic image data are continuous with each other, using Expression 1.
- the decorrelation value calculation method is not limited to the following, and other calculation methods may be used.
- a (x, z) indicates the luminance at the position (x, z) of the tomographic image data A
- B (x, z) indicates the luminance at the same position (x, z) of the tomographic image data B.
- the decorrelation value M (x, z) becomes a value of 0 to 1, and the larger the difference between the two luminances, the larger the value of M (x, z).
- The motion contrast generation unit 312 can obtain a plurality of decorrelation values M (x, z) at the same position (x, z) when the number M of repeated acquisitions at the same position is 3 or more.
- The motion contrast generation unit 312 can generate the final motion contrast data by performing statistical processing, such as taking the maximum value or the average, on the obtained plurality of decorrelation values M (x, z).
- In other words, the decorrelation value M (x, z) of two temporally adjacent tomographic images A and B is the motion contrast value at the position (x, z).
- the calculation formula of motion contrast shown in Formula 1 tends to be affected by noise. For example, when there is noise in the non-signal portion of a plurality of tomographic image data and the values are different from each other, the decorrelation value becomes high, and noise is also superimposed on the motion contrast image.
- the motion contrast generation unit 312 can regard the tomographic data below a predetermined threshold as noise and replace it with zero as preprocessing. Thereby, the image generation unit 332 can generate a motion contrast image in which the influence of noise is reduced based on the generated motion contrast data.
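As a rough illustration of the decorrelation and noise preprocessing above: Equation 1 itself is not reproduced in this text, so the sketch below assumes a decorrelation formula commonly used in the OCTA literature, M = 1 − 2AB/(A² + B²), which satisfies the stated properties (values from 0 to 1, growing with the luminance difference). Function names are hypothetical.

```python
import numpy as np

def decorrelation(a, b):
    """Decorrelation between two repeated B-scans, in one common form:
    M = 1 - 2*A*B / (A^2 + B^2). M is 0 for identical intensities and
    grows toward 1 as the two luminances differ (an assumed stand-in
    for the patent's Equation 1)."""
    num = 2.0 * a * b
    den = a * a + b * b
    with np.errstate(invalid="ignore", divide="ignore"):
        m = 1.0 - num / den
    return np.nan_to_num(m)  # 0/0 where both intensities are zero -> 0

def motion_contrast(bscans, noise_threshold=0.0):
    """Average the decorrelation of temporally adjacent pairs of the M
    repeated B-scans; tomographic values below the threshold are
    treated as noise and replaced with zero beforehand."""
    stack = np.where(bscans < noise_threshold, 0.0, bscans.astype(float))
    pairs = [decorrelation(stack[i], stack[i + 1])
             for i in range(len(stack) - 1)]
    return np.mean(pairs, axis=0)
```

With M repeated B-scans, the M−1 adjacent-pair decorrelations are averaged here; the text also mentions taking the maximum as an alternative statistic.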
- In step S305, it is determined whether or not the repeated imaging is finished.
- information for selecting whether or not to end repeated shooting may be displayed on the display unit 600, and the operator may select whether or not to continue shooting.
- the photographing may be automatically repeated as many times as desired.
- the three-dimensional volume is photographed N times (N is 2 or more) in order to generate high quality image data. Therefore, in the case of repeatedly performing shooting for generating high-quality image data, the process returns to shooting in step S302. If the desired number of shots has been taken, the process proceeds to step S306.
- In step S306, the data used to generate the high-quality image data are selected.
- the display unit 600 displays the data candidates used for the high-quality data generation, and the operator selects from the data candidates.
- the image processing unit 303 uses the data selected from the data candidates to generate high quality image data.
- the screen display on the display unit 600 will be described in detail in step S311.
- the present invention is not limited to this. It is also possible to automatically select all the data shot repeatedly N times under the same shooting conditions as data candidates for generating high quality image data. In that case, it is not necessary to display the data candidates on the display unit 600.
- the same photographing condition means that the same photographing range is photographed with the same scan pattern.
- the XYZ shooting positions are not always the same because they are displaced even when tracking is performed.
- In step S307, the image processing unit 303 performs alignment processing on the front images; that is, the image processing unit 303 performs alignment in two-dimensional space.
- the process of step S307 corresponds to an example of the first image alignment process.
- the processing of the image processing unit 303 will be described with reference to the flowchart of FIG. 4 and FIGS. 8 to 10.
- FIG. 4A is a flowchart showing the flow of front image alignment in this embodiment.
- FIG. 4B is a flowchart showing the flow of the first alignment.
- In step S371, the detection unit 333 detects the boundary lines of the retinal layers in the plurality of tomographic images captured by the tomographic image capturing apparatus 200.
- the detection unit 333 detects each boundary of L1 to L6, or any of GCL / IPL, IPL / INL, INL / OPL, and OPL / ONL boundaries (not shown) in the tomographic image of FIG. 2B.
- Images are created by applying a median filter and a Sobel filter to the tomographic image to be processed (hereinafter referred to as the median image and the Sobel image, respectively).
- a profile is created for each A scan from the created median image and Sobel image.
- the median image has a brightness value profile
- the Sobel image has a gradient profile. Then, the peak in the profile created from the Sobel image is detected.
- the boundary of each region of the retinal layer is detected by referring to the profile of the median image corresponding to before and after the detected peak or between the peaks.
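The per-A-scan profile and peak search described above can be sketched as follows; a simple depth gradient stands in for the Sobel image, the median-image refinement step is omitted, and all names are assumptions.

```python
import numpy as np

def ascan_boundaries(tomogram, n_peaks=2, min_separation=3):
    """For each A-scan (column) of a B-scan, build a gradient profile
    (a stand-in for the Sobel image) and return the depth indices of
    its strongest peaks, which approximate layer boundaries such as
    the ILM and the RPE."""
    grad = np.abs(np.gradient(tomogram.astype(float), axis=0))
    boundaries = []
    for col in grad.T:                      # one profile per A-scan
        order = np.argsort(col)[::-1]       # strongest gradients first
        picked = []
        for z in order:
            if all(abs(z - p) >= min_separation for p in picked):
                picked.append(int(z))
            if len(picked) == n_peaks:
                break
        boundaries.append(sorted(picked))
    return boundaries
```

Real layer segmentation, as the text notes, also consults the median-image brightness profile around and between the detected peaks, which is omitted here.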
- In step S372, the image generation unit 332 generates an OCTA image by projecting, onto a two-dimensional plane, the motion contrast data corresponding to the range between the specified upper and lower ends of the generation range in the three-dimensional motion contrast data. Specifically, the image generation unit 332 generates the OCTA image, which is a front image of the motion contrast data, by performing processing such as average intensity projection (AIP) or maximum intensity projection (MIP) on the motion contrast data corresponding to the range between the upper end and the lower end of the generation range. The method of generating the OCTA image is not limited to the average value or the maximum value; the minimum value, the median value, the variance, the standard deviation, the sum, or the like may be used. Since the tomographic images from which the boundary lines are detected are, for example, the tomographic images used to obtain the three-dimensional motion contrast data, the positions of the boundary lines detected from the tomographic images can be associated with the three-dimensional motion contrast data.
- the upper end of the generation range is the ILM / NFL boundary line
- The lower end of the generation range is a boundary located 50 μm below the GCL/IPL boundary in the depth direction.
- the motion contrast generation unit 312 may be configured to generate the motion contrast data using the tomographic data in the range between the upper end of the generation range and the lower end of the generation range.
- the image generating unit 332 can generate an OCTA image based on the tomographic data in the designated depth range by generating an OCTA image based on the generated motion contrast data.
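The projection in step S372 can be illustrated as follows, assuming per-pixel upper/lower boundary depth indices obtained from the detected layer boundaries; the function name and the (z, y, x) volume layout are assumptions.

```python
import numpy as np

def octa_enface(mc_volume, upper, lower, mode="aip"):
    """Project 3-D motion contrast data (z, y, x) onto the xy plane
    between per-pixel upper/lower depth boundaries (z indices), using
    average (AIP) or maximum (MIP) intensity projection."""
    nz, ny, nx = mc_volume.shape
    z = np.arange(nz)[:, None, None]
    # Slab between the two boundary surfaces, per (y, x) position.
    mask = (z >= upper[None]) & (z < lower[None])
    masked = np.where(mask, mc_volume, np.nan)
    if mode == "aip":
        return np.nanmean(masked, axis=0)
    if mode == "mip":
        return np.nanmax(masked, axis=0)
    raise ValueError(mode)
```

Out-of-slab voxels are masked with NaN so that `nanmean`/`nanmax` restrict the projection to the designated depth range; other statistics (minimum, median, etc.) could be substituted the same way.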
- In step S373, alignment of the N OCTA images is performed in the horizontal direction (x axis), the vertical direction (y axis), and rotationally within the xy plane. This will be described with reference to the flowchart of FIG. 4B.
- In step S3731, the preprocessing unit 331 detects and removes artifacts such as black bands and white lines from the OCTA images generated by the image generation unit 332.
- In FIGS. 8A and 8B, a black region of the OCTA image represents a place where the decorrelation value is high, that is, a place where blood flow is detected (corresponding to a blood vessel), and a white region represents a place where the decorrelation value is low.
- FIG. 8A shows an example of a black band
- WL of FIG. 8B shows an example of a white line.
- A black band occurs when the decorrelation value becomes low, for example because the retina moves away from the high-sensitivity position, which reduces the luminance of the retinal tomographic image, or because the entire image becomes dark due to blinking.
- A white line occurs when the decorrelation value of an entire line becomes high because the alignment of the M tomographic images performed for the decorrelation calculation does not work well, or because the positional deviation cannot be completely corrected by the alignment. Since these artifacts arise in the decorrelation calculation, they appear line by line in the main scanning direction. Therefore, the preprocessing unit 331 detects the artifacts on a line-by-line basis.
- For black band detection, a line is classified as a black band when the average decorrelation value of the line is equal to or smaller than a threshold value TH_AVG_B.
- A line is detected as a white line when the average of the decorrelation values in the line is equal to or greater than a threshold value TH_AVG_W and the standard deviation (or the variance) is equal to or less than TH_SD_W.
- However, the decorrelation value may also be high because of large blood vessels or the like, so a region containing blood vessels with high decorrelation values could be mistakenly detected as a white line.
- Therefore, the judgment is made in combination with an index, such as the standard deviation or the variance, that evaluates the variation of the values. That is, a line including a blood vessel with a high decorrelation value has a high average value and a high standard deviation, whereas a white-line artifact line has a high average value but little variation in the values, and therefore a low standard deviation.
- The decorrelation values also differ between a healthy eye and a diseased eye, and depending on the type of disease. Therefore, it is desirable to set the threshold values for each image, for example according to the brightness of the OCTA image, using a dynamic threshold method such as the P-tile method or discriminant analysis.
- In the dynamic threshold method, an upper-limit threshold and a lower-limit threshold are set in advance, and when the computed value exceeds the upper limit or falls below the lower limit, the upper-limit or lower-limit threshold is used as the threshold.
- the preprocessing unit 331 stores the artifact area obtained above in the Mask image corresponding to the OCTA image.
- In the Mask image shown in the figure, an example is shown in which white areas are set to 1 and black areas are set to 0.
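The line-by-line artifact detection and Mask image described above can be sketched as follows; the fixed thresholds shown here are placeholders for the per-image dynamic thresholds the text recommends, and the function name is an assumption.

```python
import numpy as np

def artifact_mask(octa, th_avg_b=0.05, th_avg_w=0.6, th_sd_w=0.1):
    """Detect black-band / white-line artifacts line by line along the
    main scanning direction. A line is a black band when its mean
    decorrelation is <= th_avg_b; it is a white line when its mean is
    >= th_avg_w AND its standard deviation is <= th_sd_w (a
    vessel-rich line also has a high mean, but its values vary, so
    its standard deviation stays high). Returns a mask with valid
    pixels set to 1 and artifact lines set to 0."""
    mean = octa.mean(axis=1)
    sd = octa.std(axis=1)
    black = mean <= th_avg_b
    white = (mean >= th_avg_w) & (sd <= th_sd_w)
    mask = np.ones_like(octa, dtype=np.uint8)
    mask[black | white, :] = 0
    return mask
```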
- In step S3732, the first alignment unit 334 initializes a two-dimensional matrix for storing the alignment parameters used when aligning each OCTA image.
- In each matrix element, the information necessary for image quality improvement, such as the deformation parameters and the image similarity at the time of alignment, is stored collectively.
- In step S3733, the first alignment unit 334 selects the alignment target.
- In this embodiment, each OCTA image is in turn set as the reference image and aligned with the remaining OCTA images. That is, in step S3733, when the OCTA image of Data0 is used as the reference, alignment is performed with Data1 to Data(N-1); when the OCTA image of Data1 is used as the reference, alignment is performed with Data2 to Data(N-1); when the OCTA image of Data2 is used as the reference, alignment is performed with Data3 to Data(N-1); and so on. An example of this is shown in FIG. 9A. In FIG. 9, Data0 to Data2 are shown for simplicity, but when the three-dimensional volume is photographed N times, N OCTA images are aligned.
- In step S3734, the first alignment unit 334 aligns the plurality of OCTA images in the horizontal direction (x axis), the vertical direction (y axis), and rotationally within the xy plane. That is, the first alignment unit 334 performs two-dimensional alignment of a first image and a second image.
- In this embodiment, the OCTA images are enlarged before alignment, since sub-pixel alignment is expected to be more accurate than pixel-level alignment. For example, when the captured OCTA image size is 300 × 300, it is enlarged to 600 × 600.
- an interpolation method such as the Bicubic or Lanczos (n) method is used.
- For the alignment, an evaluation function representing the degree of similarity between two OCTA images is defined in advance, and the evaluation value is calculated while shifting and rotating the OCTA image positions; the position with the best evaluation value is taken as the alignment result.
- Examples of the evaluation function include methods that evaluate pixel values (for example, evaluation using a correlation coefficient).
- Equation 2 shows an equation when a correlation coefficient is used as an evaluation function representing the degree of similarity.
- Here, the region is the image area used for alignment; an area equal to or smaller than the size of the OCTA image is set as the ROI size described above.
- the evaluation function is not limited to this, and may be SSD (Sum of Squared Difference) or SAD (Sum of Absolute Difference) as long as it can evaluate the similarity or difference of images.
- the alignment may be performed by a method such as POC (Phase Only Correlation). By this processing, global alignment in the XY plane is performed.
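A minimal sketch of the similarity search in step S3734, using the correlation coefficient as the evaluation function: only integer x/y shifts are searched here, with rotation and sub-pixel refinement omitted; the names and the shift convention are assumptions.

```python
import numpy as np

def corrcoef2d(a, b):
    """Correlation coefficient between two equally sized image regions."""
    a = a - a.mean()
    b = b - b.mean()
    den = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if den == 0 else float((a * b).sum() / den)

def align_xy(ref, mov, max_shift=5):
    """Exhaustive shift search: evaluate the correlation coefficient
    of the overlapping regions for every integer shift and return the
    translation (dy, dx) to apply to mov so that it best overlays ref,
    together with the best evaluation value."""
    best, best_shift = -2.0, (0, 0)
    h, w = ref.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            r = ref[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            m = mov[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            score = corrcoef2d(r, m)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift, best
```

SSD, SAD, or phase-only correlation could replace `corrcoef2d` as the evaluation function, as the text notes.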
- an example in which the OCTA image is enlarged before alignment has been shown, but the method is not limited to this. In the case of a high-density scan in which the input OCTA image size is 900 × 900, enlargement is not always necessary. Further, in order to perform the alignment at high speed, pyramid-structure data may be generated and aligned.
- step S3735 the first alignment unit 334 calculates the image evaluation value of the OCTA image.
- the image evaluation value is calculated on the common image area, excluding the invalid areas generated by the alignment, of the OCTA images that were two-dimensionally aligned in step S3734.
- the image evaluation value Q can be obtained by Expression 3.
- the area of the OCTA image of Data0 is f (x, y), and the area of the OCTA image of Data1 is g (x, y).
- the first term represents the correlation coefficient and is similar to Equation 2; therefore, σ_f and σ_g in the equation correspond to those shown in Equation 2, respectively.
- the second term is the term that evaluates brightness
- the third term is a term for evaluating contrast.
- each term has a minimum value of 0 and a maximum value of 1.
- When the two images are identical, the evaluation value is 1. Therefore, the evaluation value is high when an average image among the N OCTA images is used as the reference, and low when an OCTA image that differs from the other OCTA images is used as the reference.
- here, “different from the other OCTA images” refers to cases where the photographing position is different, the image is distorted, the image is too dark or too bright as a whole, or an artifact such as a white line or a black band is included. Note that the image evaluation value does not necessarily have to be the expression shown here; each term may be evaluated independently, or the combination may be changed.
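One plausible form of such an evaluation value, consistent with the description above (a correlation term, a brightness term, and a contrast term, each with maximum 1), is the Wang-Bovik universal image quality index sketched below. Equation 3 itself is not reproduced in this text, so the exact expression is an assumption:

```python
import numpy as np

def image_quality_q(f, g):
    mf, mg = f.mean(), g.mean()
    sf, sg = f.std(), g.std()
    cov = ((f - mf) * (g - mg)).mean()
    corr = cov / (sf * sg)               # 1st term: correlation coefficient
    lum = 2 * mf * mg / (mf**2 + mg**2)  # 2nd term: brightness
    con = 2 * sf * sg / (sf**2 + sg**2)  # 3rd term: contrast
    return corr * lum * con              # 1.0 only for identical images

rng = np.random.default_rng(1)
a = rng.random((32, 32))
print(round(image_quality_q(a, a), 6))   # identical images -> 1.0
print(image_quality_q(a, 0.5 * a) < 1.0) # a darker copy scores lower
```

A globally dark image or one containing a white-line artifact lowers the brightness or contrast term, which matches the behaviour described above.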
- the first alignment unit 334 stores the values in the two-dimensional matrix, initialized in step S3732, that holds the parameters necessary for image quality improvement, such as the alignment parameters and the image similarity.
- the horizontal alignment parameter X, the vertical alignment parameter Y, and the rotation parameter θ in the XY plane are assigned to the elements (0, 1) of the two-dimensional matrix.
- the image evaluation value and the image similarity are saved.
- the Mask image shown in FIG. 8 is stored in association with the OCTA image. Further, although not described in this embodiment, the magnification may be stored when the magnification is corrected.
- step S3737 the first alignment unit 334 determines whether or not alignment has been performed with the remaining target images using all the images as reference images. If all images have not been processed as a reference, the process returns to step S3733. If all the images have been processed as a reference, the process advances to step S3738.
- step S3738 the first alignment unit 334 updates the remaining elements of the two-dimensional matrix.
- the parameter of the element (0,1) of the two-dimensional matrix is copied to the element (1,0). That is, element (i, j) is copied to element (j, i).
- since the alignment parameters X and Y and the rotation parameter θ describe the opposite direction, they are multiplied by -1 when copied.
- the image similarity is unaffected by reversing the direction, so the same value is copied as it is.
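The matrix update in step S3738 can be sketched as follows (the stored parameter values are hypothetical):

```python
import numpy as np

N = 3
# element (i, j): parameters (X, Y, theta, similarity) for aligning Data j
# when Data i is the reference; the upper triangle is filled in step S3733.
params = np.zeros((N, N, 4))
params[0, 1] = (4.0, -2.0, 1.5, 0.93)

# step S3738: copy element (i, j) to element (j, i); the shift and rotation
# describe the opposite direction, so they are multiplied by -1, while the
# similarity is copied unchanged.
for i in range(N):
    for j in range(i + 1, N):
        x, y, th, sim = params[i, j]
        params[j, i] = (-x, -y, -th, sim)

print(params[1, 0])  # X, Y and theta negated; similarity unchanged
```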
- the OCTA image alignment is performed by these processes. The description now returns to the processing flow of FIG. 4A.
- step S374 the selection unit 335 selects a reference image.
- a reference image is selected based on the result of the alignment performed in step S373.
- step S373 a two-dimensional matrix is created, and each element of the matrix stores information necessary for generating a high quality image. Therefore, the reference image is selected by using the information.
- selection is performed using the image evaluation value, the alignment parameter evaluation value, and the artifact area evaluation value.
- the image evaluation value the value obtained in step S3735 is used.
- the alignment parameter evaluation value is computed from X and Y of the alignment result obtained in step S3734, for example using Equation 4. In Equation 4, the larger the movement amount, the larger the value.
- the artifact area evaluation value is an evaluation value using, for example, Expression 5 using the Mask image obtained in step S3731.
- T (x, y) represents a pixel in a non-artifact area in the Mask image
- a (x, y) represents all pixels in the Mask image. Therefore, if there is no artifact, the maximum value is 1.
- since the image evaluation value and the alignment parameter evaluation value are obtained in relation to the other images when a certain image is used as the reference, each is a total of N-1 values. Since these evaluation values have different scales, the candidates are sorted by each value, and the reference image is selected based on the total of the sorted indexes. For example, the image evaluation value and the artifact area evaluation value are sorted such that the larger the value, the smaller the index after sorting, while the alignment parameter evaluation value is sorted such that the smaller the value, the smaller the index after sorting. The image having the smallest total index after sorting is selected as the reference image.
- the evaluation value may be calculated by weighting the sorted index of each evaluation value.
- each evaluation value may be normalized so that the evaluation value becomes 1.
- the image evaluation value itself is normalized to 1, but since in the present embodiment it is a total of N-1 values, the average value may be used instead.
- the alignment parameter evaluation value can be normalized to 1 if it is defined as in Equation 6. In this case, the evaluation value closer to 1 is the better evaluation value.
- SV_n is the total of the N-1 values obtained by Equation 4, and the subscript n corresponds to the Data number; for Data0, it is SV_0.
- SV_max is the maximum alignment parameter evaluation value among Data0 to Data(N-1).
- the weight in Equation 6 is a parameter for adjusting what value NSV_n takes when SV_n and SV_max are equal.
- the maximum value SV_max may be determined from actual data as described above, or may be defined in advance as a threshold value.
- the artifact area evaluation value is already normalized to the range 0 to 1, so it can be used as it is.
- by the above selection, the reference image is the most average image among the N images, the one that best satisfies the conditions that the amount of movement when the other images are aligned to it is small and that it contains few artifacts.
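The sort-based selection in step S374 can be sketched as follows (the evaluation values are hypothetical; the ranking directions follow the description above):

```python
import numpy as np

def select_reference(image_eval, align_eval, artifact_eval):
    # Rank the candidates by each evaluation value and pick the smallest
    # total rank.  Larger is better for the image and artifact-area
    # evaluation values; smaller is better for the alignment parameter
    # evaluation value, so its sort direction differs.
    rank = lambda v: np.argsort(np.argsort(np.asarray(v, float)))
    total = (rank([-v for v in image_eval])      # large value -> small index
             + rank(align_eval)                  # small value -> small index
             + rank([-v for v in artifact_eval]))
    return int(np.argmin(total))

# Data1 is the most average image, moves little, and has the fewest artifacts
idx = select_reference(image_eval=[0.70, 0.95, 0.60],
                       align_eval=[30.0, 5.0, 40.0],
                       artifact_eval=[0.90, 1.00, 0.75])
print(idx)  # 1
```

Weighting or normalizing the individual ranks, as described above, would be a straightforward extension of `total`.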
- An example of the reference image selected by this example is shown in FIG. 9B.
- Data1 is selected as the reference image.
- Data0 and Data2 are moved based on the alignment parameters obtained by the first alignment unit 334.
- step S375 the second alignment unit 336 aligns the retina in the lateral direction (x axis) using the OCTA image.
- FIG. 10A shows an example in which the reference image is Data1 and the alignment target is the horizontal alignment with Data2.
- in the Mask, 0 is set both for the artifact included in Data2 (horizontal black line in the figure) and for the invalid area (vertical black line in the figure) generated by the movement of Data2 as a result of alignment.
- the reference image and the alignment target image are aligned laterally on their respective lines, and the degree of similarity is calculated for each line. Equation 2 is used to calculate the similarity, for example. Then, the line is moved to a position where the degree of similarity is maximum.
- the similarity to the reference image is calculated for each line, and a weight is set in the Mask according to the similarity.
- an example of the alignment result by the second alignment unit 336 is shown in FIG. 10B.
- FIG. 10B shows an example in which it is determined that the upper end of the image and the vicinity of the center of the image are not similar to the reference image, and a black line in the horizontal direction is set in the Mask image as a line not used for superimposition. Further, in the vicinity of the center of the image and the lower end of the image, as a result of the alignment in units of lines, an example is shown in which the position is shifted to the left near the center and to the right at the lower end of the image. Since the invalid area is generated by shifting the image, 0 is set in the invalid area in Mask. By this processing, local alignment in the XY plane is performed.
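The line-by-line alignment and Mask weighting described above can be sketched as follows; normalised correlation stands in for Equation 2, and the data and function name are hypothetical:

```python
import numpy as np

def linewise_align(ref, tgt, search=3):
    # Shift each row of the target image laterally to the offset that
    # maximises its similarity to the matching row of the reference; the
    # similarity itself becomes the row's weight in the Mask, so rows that
    # do not resemble the reference contribute little to the average.
    h, w = ref.shape
    aligned = np.zeros_like(tgt)
    weights = np.zeros(h)
    for y in range(h):
        best_s, best_dx = -np.inf, 0
        for dx in range(-search, search + 1):
            shifted = np.roll(tgt[y], dx)
            r, s = ref[y, search:w - search], shifted[search:w - search]
            num = ((r - r.mean()) * (s - s.mean())).sum()
            den = np.sqrt(((r - r.mean())**2).sum() * ((s - s.mean())**2).sum())
            sim = num / den if den > 0 else 0.0
            if sim > best_s:
                best_s, best_dx = sim, dx
        aligned[y] = np.roll(tgt[y], best_dx)
        weights[y] = max(best_s, 0.0)  # dissimilar rows get low weight
    return aligned, weights

rng = np.random.default_rng(2)
ref = rng.random((8, 32))
tgt = np.roll(ref, 2, axis=1)          # every row shifted right by 2
aligned, w = linewise_align(ref, tgt)
print(np.allclose(aligned, ref), w.min() > 0.99)
```

In the actual processing, invalid pixels produced by the per-line shift would additionally be set to 0 in the Mask.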
- the rotation parameter θ obtained in the first alignment may be applied to each image either before or after performing the second alignment.
- step S308 the processing in the image processing unit 303 is changed according to the high image quality data generation method determined by the determination unit 306.
- the processing is determined by displaying input elements such as radio buttons, pull-down menus, and check boxes on the display unit 600, with which the operator makes the selection.
- the mode input itself may be performed in advance before shooting, or after shooting is completed. Note that it need not be changed for each eye to be inspected; a standard method may be input in advance and used by default, and the processing method may then be changed according to the imaging state of the eye to be inspected.
- when the high image quality data is to be generated from the two-dimensional data, the process proceeds to step S309, and when the high image quality data is to be generated from the three-dimensional data, the process proceeds to step S310.
- step S309 a case where high quality image data of an OCTA image that is front image data is generated as the first high quality image data generation will be described with reference to the flowchart of FIG.
- the process of generating the composite image (high quality image data) in steps S307 and S309 corresponds to an example of the first mode.
- In step S391, the boundary line information of the retinal layers detected by the detection unit 333 is acquired. Since the boundary line information is needed to generate the two-dimensional OCTA image in step S392, the boundary line at the upper end of the generation range and the boundary line at the lower end of the generation range are acquired as a set. Further, an offset value for each boundary line, for example -10 μm or +50 μm, is acquired together with the upper-end and lower-end boundary lines.
- the boundary line information does not have to be one set, and multiple sets may be acquired at the same time.
- the type of the retina range to be generated as the OCTA image such as the surface layer portion, the deep layer portion, the choroid portion, and the entire retina layer, is determined to some extent. Therefore, the upper and lower ends of the generation range of the boundary lines necessary for generating these OCTA images, and the offset values for them may be acquired.
- the upper end of the generation range is the ILM/NFL boundary line,
- and the lower end of the generation range is a boundary 50 μm in the depth direction below the GCL/IPL boundary. Therefore, here, the boundary line information of such a range is acquired.
- In step S392, the image generation unit 332 generates the OCTA image by projecting, onto a two-dimensional plane, the motion contrast data corresponding to the range between the designated upper end and lower end of the generation range in the three-dimensional motion contrast data.
- the OCTA image is generated in the same manner as in step S372.
- step S393 the alignment parameter result of the front image alignment performed in step S307 is applied to the OCTA image created in step S392.
- the registration process itself is not performed, but only the parameters are applied to deform the image, so that the processing load is lightened even if a plurality of planar images are handled.
- step S394 the image combining unit 339 averages the reference OCTA image selected by the selection unit 335 and the plurality of OCTA images.
- the sum SUM_A of the values obtained by multiplying the values of the plurality of OCTA images and the Mask image and the sum SUM_B of the values of the plurality of Mask images are held for each pixel. Since the invalid area removed as an artifact and the invalid area in which data does not exist due to alignment are stored as 0 in the Mask image, the sum value SUM_B of the Mask image holds a different value for each pixel.
- in the arithmetic mean processing, the averaged OCTA image is obtained by dividing SUM_A by SUM_B.
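The Mask-weighted averaging described above can be sketched as follows with toy 1 × 2 images (the data are hypothetical):

```python
import numpy as np

def masked_average(images, masks):
    # Per pixel, SUM_A accumulates image * mask and SUM_B accumulates the
    # mask values, so each output pixel is averaged only over the inputs
    # that are valid there (artifact and invalid areas hold 0 in the Mask).
    images = np.asarray(images, float)
    masks = np.asarray(masks, float)
    sum_a = (images * masks).sum(axis=0)
    sum_b = masks.sum(axis=0)
    out = np.zeros_like(sum_a)
    valid = sum_b > 0
    out[valid] = sum_a[valid] / sum_b[valid]
    return out

imgs = np.array([[[2.0, 4.0]], [[4.0, 8.0]]])  # two 1 x 2 "OCTA images"
msks = np.array([[[1.0, 1.0]], [[1.0, 0.0]]])  # 2nd image invalid at (0, 1)
print(masked_average(imgs, msks))  # [[3. 4.]]
```

The same per-voxel computation applies to the three-dimensional averaging of step S3103.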
- a high quality OCTA image that is a front image of the motion contrast image is generated.
- a high quality OCTA image is created with respect to the OCTA images created in steps S372 and S392.
- step S310 the second high image quality data is generated by performing the alignment using the three-dimensional tomographic image.
- the alignment in step S309 corresponds to an example of the first image alignment processing, and the alignment in step S310 corresponds to an example of the second image alignment processing, which has a larger processing amount than the first. Further, the process of generating the composite image in steps S309 and S310 corresponds to an example of the second mode.
- FIG. 6A is a flowchart related to the second high-quality image data generation
- FIG. 6B is a flowchart showing a third alignment flow
- FIG. 6C is a flowchart showing a fourth alignment flow.
- step S3101 the third alignment unit 335 aligns the reference three-dimensional data with other three-dimensional data in the depth direction (Z direction). That is, the three-dimensional alignment of the first image and the second image is performed. This processing will be described with reference to the flowchart of FIG. 6B.
- step S31011 the third alignment unit 335 stores the reference three-dimensional motion contrast data and the reference three-dimensional tomographic image data, respectively.
- the three-dimensional motion contrast data of Data1 and the three-dimensional tomographic image data are stored.
- step S31012 the third alignment unit 335 acquires the boundary line information detected in step S371.
- the boundary line used for the depth direction alignment in this embodiment is L1.
- step S31013 the third alignment unit 335 aligns the position and inclination in the depth direction for each three-dimensional data.
- the eye may move while a three-dimensional tomographic image is being taken.
- as for movement in the XY plane, since shooting is performed while tracking in real time, most of the alignment is achieved during the shooting.
- however, since real-time tracking is not performed in the depth direction, it is necessary to perform alignment within the data. That is, the description here relates to alignment within one piece of three-dimensional data. This will be described with reference to FIGS. 11A to 11C.
- FIG. 11A shows an example of a boundary line of a tomographic image used for alignment, here the ILM boundary line L1.
- in this embodiment, an example of using the boundary line L1 will be described, but the type of boundary line is not limited to this.
- other boundary lines may be used, or a plurality of boundary lines may be combined.
- the reference data is Index_c and the target data is Index_c-1.
- the first reference data is the tomographic image at the center of the three-dimensional data
- the target data is the boundary line of the tomographic image adjacent to the reference data in the sub-scanning direction.
- the boundary line L1 of the reference data and the boundary line L1 'to be aligned are displayed at the same time for the sake of explanation.
- the boundary line is divided into 12 areas in the horizontal direction.
- the areas are Area 0 to Area 11, respectively.
- the divided area is not drawn in the center of the image, but the entire image is actually divided into areas.
- the up / down arrow Difference1 represents the difference between L1 and L1 '. These differences are obtained for each of the areas Area0 to Area11.
- the number of divisions may be changed according to the image size in the horizontal direction. Alternatively, it may be changed according to the size of the width of the boundary line that is commonly detected.
- in FIG. 11B, the horizontal extent of the boundary lines is displayed as the same, but in reality the retinal layer may be displaced upward in the image (toward 0 on the Z axis), and a part of the retinal layer may be missing from the image. In that case, the boundary line cannot be detected over the entire image. Therefore, in the alignment of the boundary lines, it is desirable to divide, for alignment, only the range in which both the reference boundary line L1 and the alignment target boundary line L1' are detected.
- the average of Difference1 in each area is taken as D_0 to D_11. That is, the average of the ILM differences is used as the representative value of the differences in that region.
- next, the representative values D_0 to D_11 obtained in the areas are sorted in ascending order.
- the average and variance are calculated using eight sorted representative values in ascending order. In this embodiment, the number of selections is eight. However, the number is not limited to this. The number to be selected may be smaller than the number of divisions.
- the average and variance are calculated by shifting the sorted representative values one by one. That is, in the present embodiment, since the calculation is performed using eight representative values of the 12 divided areas, a total of five types of average values and variance values can be obtained.
- FIG. 11C is a graph in which the horizontal axis represents the center x coordinate of the divided area and the vertical axis represents the representative value of the difference.
- black circles are examples of representative values of the difference of the combination having the smallest variance value
- black triangles are examples of representative values of the non-selected differences.
- Expression 7 is calculated using the representative value of the difference of the combination having the smallest variance (black circle in FIG. 11C).
- D is the shift value in the depth direction
- x is the x coordinate, that is, the A scan position.
- the coefficients a and b in Equation 7 are given by Equations 8 and 9.
- x_i is the center x coordinate of each selected divided area,
- D_i is the representative value of each selected difference, and
- n is the number of selected representative values. Therefore, n is 8 in this embodiment.
- although the average value is used here as the representative value in the depth direction of each region, any representative statistic, such as the median, may be used.
- likewise, although the variance is used, any index capable of evaluating the spread of the values, such as the standard deviation, may be used.
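The outlier-resistant line fit of Equations 7 to 9 can be sketched as follows. The data are synthetic: a tilted retina whose ILM difference follows D = 0.1x + 2, with four areas corrupted by detection errors; the eight-value window with the smallest variance excludes the outliers, and the least-squares fit recovers the line:

```python
import numpy as np

def depth_shift_line(xs, diffs, n_select=8):
    # Sort the per-area representative differences in ascending order,
    # slide a window of n_select values over them, keep the window with
    # the smallest variance, and least-squares fit D = a*x + b through the
    # surviving points (cf. Equations 7-9).  Returns (a, b).
    order = np.argsort(diffs)
    best_var, best_idx = np.inf, None
    for s in range(len(diffs) - n_select + 1):
        win = order[s:s + n_select]
        v = np.var(np.asarray(diffs)[win])
        if v < best_var:
            best_var, best_idx = v, win
    x = np.asarray(xs, float)[best_idx]
    d = np.asarray(diffs, float)[best_idx]
    n = n_select
    a = (n * (x * d).sum() - x.sum() * d.sum()) / (n * (x * x).sum() - x.sum()**2)
    b = (d.sum() - a * x.sum()) / n
    return a, b

xs = np.arange(12) * 25 + 12          # centre x of the 12 divided areas
diffs = 0.1 * xs + 2.0                # true tilt of the retina
diffs[[3, 5, 9, 11]] += 50.0          # e.g. vessel-induced detection errors
a, b = depth_shift_line(xs, diffs)
print(round(a, 3), round(b, 3))  # 0.1 2.0
```

Each A scan is then shifted by D = a·x + b in the Z direction, which removes both the offset and the tilt between adjacent B scans.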
- the first reference data was the center boundary line of the three-dimensional data
- the target data was the boundary line data next to the reference data.
- next alignment is performed using the data that was previously the target data as reference data and the data next to it as the target data.
- after one edge of the image is reached, the central boundary line is again used as the reference data,
- and the adjacent boundary line data on the opposite side to the initial alignment is used as the target data. This process is performed up to the opposite edge of the image. In the unlikely event that there is data for which layer detection is not possible, correction is performed using the previous alignment parameters, and the process proceeds to the next data.
- FIG. 12A is a DepthMap expressing the Z coordinate of the boundary line L1 as a luminance value. That is, when the depth map is bright, the Z coordinate value is large, and when the depth map is dark, the Z coordinate value is small.
- FIG. 12A shows Data0 to Data2, where the upper DepthMap is before alignment and the lower DepthMap is after alignment. The DepthMap before alignment has uneven brightness in the horizontal direction in all Data, which indicates that the retina was moving in the Z direction during shooting.
- the DepthMap after alignment has no such unevenness in the horizontal direction, indicating that adjacent data are aligned in the Z direction.
- the first reference data may be the same, and the processes on both sides may be executed in parallel.
- the third alignment unit 335 stores the amount of movement of the reference data (Data1 in this embodiment) in the depth direction of each A scan.
- step S31014 the third alignment unit 335 aligns the position and the inclination in the depth direction among the plurality of three-dimensional data.
- the alignment between the three-dimensional data is performed using the data aligned in the depth direction within the three-dimensional data in step S31013.
- the alignment is performed using the boundary line L1 as in the previous case.
- the calculation method is the same as in step S31013, but the calculation target is not within a piece of data but between pieces of data; that is, the reference data and the target data are aligned with each other. This will be described with reference to FIG. 12B.
- the reference data is Data1 and the alignment target data is Data0 and Data2.
- in FIG. 12B, Data0 to Data2 are shown, where the upper DepthMap is after alignment within each piece of data and the lower DepthMap is after alignment between the pieces of data.
- in the upper maps, the brightness of the DepthMap differs because the Z position of the retina differs among Data0 to Data2.
- in the lower maps, the Z positions of the retina are aligned among Data0 to Data2, which appears as a uniform brightness of the DepthMap.
- In step S31015, the third alignment unit 335 deforms the three-dimensional data by applying the deformation parameters for X, Y, rotation, and Z obtained in the first alignment, the second alignment, and the third alignment.
- in deforming the three-dimensional data, both the tomographic image data and the motion contrast data are deformed.
- when the three-dimensional data is deformed, the deformation parameters are converted back to the original size. That is, when an xy in-plane alignment parameter obtained on the image magnified 2 times is 1, it is treated here as 0.5, and the three-dimensional data is deformed at the original size.
- when the movement amount is a sub-pixel or sub-voxel value, the three-dimensional data is deformed by interpolation processing.
- a sub-pixel or sub-voxel movement means a case where the movement amount is a real value such as 0.5, or a case where the rotation parameter is non-zero and the data is rotated.
- the Bicubic or Lanczos (n) method is used for the interpolation of the shape data.
- FIG. 13 shows three-dimensional tomographic images of Data0 to Data2.
- the upper three-dimensional tomographic images are those before alignment,
- and the lower three-dimensional tomographic images are those deformed after performing the first alignment, the second alignment, and the third alignment.
- the three-dimensional tomographic images after the alignment within and between the data show that the alignment of the retina in X, Y, and Z has been achieved in Data0 to Data2.
- In step S31016, the third alignment unit 335 detects the difference between the reference data and the target data in the DepthMap after the Z alignment between the data. At a location (x, y) where the absolute value of the difference is equal to or larger than a threshold value, the alignment accuracy is judged to be low and that location is not used; therefore, 0 is set there as an invalid area in the Mask image of the target data.
- In step S3102, the fourth alignment unit 338 sets a plurality of alignment regions between the reference data and the target data at portions having features inside the tomographic image, and performs alignment in units of those regions in the lateral direction of the retina (x axis) and in the depth direction (z axis). The alignment here is described as local alignment in the Z direction. The local alignment performed by the fourth alignment unit 338 will be described with reference to the flowchart in FIG. 6C.
- step S31021 the fourth alignment unit 338 acquires the boundary line information detected in step S371.
- the boundary lines used for the depth direction alignment are L1 and L3.
- step S31022 the fourth alignment unit 338 sets a region for alignment so as to include the characteristic region of the target image. This will be described with reference to FIG.
- FIG. 14 shows a tomographic image in the three-dimensional tomographic image of the reference data and a tomographic image in the three-dimensional tomographic image to be aligned.
- an example of a plurality of alignment regions (ROI: Region Of Interest) set based on the boundary line information L1 and L3 of the reference tomographic image is shown in the alignment target image 1.
- the size of the ROI in the depth direction is set several tens of pixels wider than L1 and L3 in the upward and downward directions, respectively.
- the ROI parameters may be corrected using the result of the global alignment, because, as shown in the target image 1 in FIG. 14, an invalid area may exist after the global alignment, and the ROI must be set so as not to include it.
- the horizontal size of the ROI is set from the size obtained by dividing the image.
- the number of divisions is set according to the shooting parameters, such as the size of the image (the number of A scans) and the imaging size of the image (e.g., 3 mm). For example, in the present embodiment, when the number of A scans is 300 and the imaging size is 3 mm, the number of divisions is 10. Note that the horizontal size and the ROI setting values are also corrected using the result of the global alignment. Since an invalid region may exist in the horizontal direction as well as in the vertical direction, the range in which the ROI is set and its search region must be set so as not to include the invalid region.
- ROIs for local alignment are set so as to overlap each other. This is because, if the ROI size is reduced without overlapping the ROIs, some ROIs may not include any characteristic site. For example, when the retina is imaged at a narrow angle of view, flat tissue may occupy a wide range in the image. On the other hand, if the ROI range is set wide enough to include features without overlapping, the number of sampling points for local alignment decreases, resulting in coarse alignment. Therefore, to solve both problems, the size of the ROI in the X direction is widened and the ROIs are set to overlap each other.
- in FIG. 14, no ROI is drawn at the center of the image, but ROIs are actually set on the retina from the left end to the right end of the image. Furthermore, it is desirable to take the search range of the ROI alignment into account when setting the ROI spacing. Specifically, when the horizontal search range at the time of ROI alignment is XR, the distance between the center coordinates of adjacent ROIs is set to 2XR or more, because if it is less than 2XR, the center positions of adjacent ROIs may swap.
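The spacing rule above can be sketched as follows (a pure illustration: the numbers follow the example of 300 A scans and 10 divisions, and the helper name is hypothetical):

```python
def roi_centers(width, n_rois, search_xr):
    # Place n_rois ROI centres evenly from the left end to the right end.
    # The centre spacing must be at least 2 * XR, otherwise two adjacent
    # centres could swap order after a +/-XR lateral search.
    spacing = (width - 1) / (n_rois - 1)
    assert spacing >= 2 * search_xr, "adjacent ROI centres could swap"
    return [round(i * spacing) for i in range(n_rois)]

centers = roi_centers(width=300, n_rois=10, search_xr=10)
gaps = [b - a for a, b in zip(centers, centers[1:])]
print(len(centers), min(gaps))  # 10 33
```

With 10 centres over 300 A scans the minimum spacing is 33 pixels, which satisfies the 2XR = 20 condition for a ±10-pixel search.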
- step S31023 the fourth alignment unit 338 performs region alignment using the ROI.
- this area alignment is performed using tomographic images. Therefore, similarly to the OCTA image alignment shown in step S3734, alignment according to the image similarity is performed using Equation 1.
- the evaluation value of the similarity is not limited to this, and may be SSD (Sum of Squared Difference), SAD (Sum of Absolute Difference), or the like.
- the alignment may be performed by a method such as POC (Phase Only Correlation).
- the alignment search range in the reference image may be a few to several tens of pixels vertically and horizontally from the initial position of the ROI, and the most similar place is the alignment result.
- the search area may be fixed, or may be changed according to the shooting angle of view, the shooting region, and the location (edge or center) of the image.
- when the shooting angle of view is narrow and the scan speed is fast, the amount of eye movement while shooting one image is small, but when the shooting angle of view is wide, the amount of eye movement is larger. Therefore, when the shooting angle of view is large, the search range may be widened.
- the search range of the peripheral portion may be widened.
- step S31024 the fourth alignment unit 338 calculates the movement amount of each A scan by interpolating the alignment parameter obtained in step S31023.
- FIG. 15A shows ROI1 to ROI3 in the initially set area.
- the lower triangles of C1 to C3 represent the center positions of ROI1 to ROI3.
- FIG. 15B shows an example of moving the ROI after the alignment in step S31023.
- ROI1 and ROI3 are moved to the right side, respectively, and ROI2 is not moved. Therefore, the centers C1 and C3 of the ROI have moved to C1 ′ and C3 ′, respectively.
- the calculation is performed based on the movement amount of the adjacent ROI and the center position of the ROI. For example, the center position of ROI1 has moved from C1 to C1 ′, and the center position of ROI2 remains C2.
- the equations for obtaining the X-direction movement amount of each A scan between C1 and C2 before deformation are shown in equations 10-12.
- X1 and X2 are initial center coordinates of each ROI
- ⁇ X1 and ⁇ X2 are movement amounts in the X direction of the center coordinates of each ROI
- A_before is the value of an A scan index in the deformed data, and
- A_after is the value of the A scan index in the data before deformation that A_before refers to.
- for example, when A_before is 55 and A_after is 56 as a result of the calculation,
- the A scan index 55 stores the A scan data of the A scan index 56.
- the amount of movement in the Z direction can also be obtained from the amount of movement of the center position of each ROI, based on the same idea as Equations 10 to 12, and the data is moved by the corresponding number of pixels in the vertical direction.
- the value of A_after may be a real number or an integer.
- when it is a real number, new A scan data is created from a plurality of A scans by an interpolation method (Bilinear, Bicubic, etc.);
- when it is an integer, the data of the corresponding A scan index is referred to as it is.
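The per-A-scan interpolation of Equations 10 to 12 can be sketched as follows. The centre coordinates and movement amounts are hypothetical, and the sign convention is an assumption chosen so that the document's example (index 55 referring to index 56) is reproduced:

```python
def a_scan_source(a_before, c1, c2, dx1, dx2):
    # Linearly interpolate the lateral shift at column a_before from the
    # shifts dx1, dx2 of the two neighbouring ROI centres c1, c2, and
    # return the source index a_after that this column refers to.
    t = (a_before - c1) / (c2 - c1)
    dx = dx1 + t * (dx2 - dx1)
    return a_before - dx  # may be a real number -> interpolate A scans

# ROI1 centre at 40 moved left by 2 (dx1 = -2); ROI2 centre at 70 unmoved.
a_after = a_scan_source(a_before=55, c1=40, c2=70, dx1=-2.0, dx2=0.0)
print(a_after)  # 56.0: A scan index 55 stores the data of index 56
```

The same linear interpolation applied to the Z movements of the ROI centres gives the per-column vertical shift.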
- an example has been shown in which both the X direction and the Z direction are locally aligned, but the invention is not limited to this. For example, only one of the X direction and the Z direction may be locally deformed. It should be noted that the X direction is aligned by the tracking at the time of photographing, so that the local alignment may be performed only in the Z direction in order to reduce the processing load.
- step S31025 the fourth alignment unit 338 moves in the X and Z directions for each A scan based on the A scan movement amount obtained in step S31024. Thereby, it is possible to generate a tomographic image deformed in units of A scans. In addition, the three-dimensional data to be deformed is deformed in both the tomographic image data and the motion contrast data.
- In step S31026, it is determined whether or not all of the data to be aligned have been locally aligned with all the tomographic images of the reference three-dimensional data. If all the data have not been processed, the process returns to step S31021. When all the data have been locally aligned, the local alignment process ends.
- step S3103 the image composition unit 339 generates a composite image by arithmetically averaging the reference three-dimensional motion contrast data selected by the selection unit 335 and a plurality of three-dimensional motion contrast data.
- For each voxel, a total value SUM_A of the motion contrast values multiplied by the corresponding Mask image values, and a total value SUM_B of the Mask image values themselves, are held. Since the invalid areas removed as artifacts and the invalid areas in which no data exists after alignment are stored as 0 in the Mask image, the sum SUM_B holds a different value for each voxel.
- The arithmetically averaged motion contrast data is then obtained by dividing SUM_A by SUM_B.
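The SUM_A / SUM_B computation can be sketched as follows. This is an illustrative NumPy implementation under the assumption that each Mask volume stores 1 for valid voxels and 0 for artifact or out-of-alignment voxels; the function name is hypothetical.

```python
import numpy as np

def masked_average(volumes, masks, eps=1e-8):
    # volumes, masks: lists of equally shaped motion contrast volumes
    # and their 0/1 validity masks. Each voxel is averaged only over
    # the volumes that actually contribute valid data to it.
    sum_a = np.zeros_like(volumes[0], dtype=float)
    sum_b = np.zeros_like(volumes[0], dtype=float)
    for vol, mask in zip(volumes, masks):
        sum_a += vol * mask   # SUM_A: masked data values
        sum_b += mask         # SUM_B: per-voxel valid count
    # Voxels with no valid contribution at all are left as 0.
    return np.where(sum_b > 0, sum_a / np.maximum(sum_b, eps), 0.0)
```

Because SUM_B varies per voxel, each voxel is divided by its own number of valid contributions rather than by the total number of volumes.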
- The motion contrast data before and after these arithmetic operations are shown in FIGS. 16A to 18B.
- FIGS. 16A and 16B are XZ planes
- FIGS. 17A and 17B are OCTA images
- FIGS. 18A and 18B are examples in which three-dimensional motion contrast data is volume-rendered and displayed.
- FIG. 16A shows the XZ plane of the 3D motion contrast data before the arithmetic mean
- FIG. 16B shows the XZ plane of the 3D motion contrast data after the arithmetic mean
- FIG. 17A shows an OCTA image of the retinal surface layer generated from the three-dimensional motion contrast data before the arithmetic mean
- FIG. 17B shows an OCTA image of the retinal surface layer generated from the three-dimensional motion contrast data after the arithmetic mean
- FIG. 18A is an example of volume rendering data of three-dimensional motion contrast data before arithmetic averaging
- FIG. 18B is an example of volume rendering data of three-dimensional motion contrast data after arithmetic averaging.
- As shown in these figures, three-dimensional motion contrast data with improved contrast can be obtained by the averaging process.
- When volume rendering of the motion contrast data is performed as in FIG. 18B, the vertical relationship of blood vessels in the depth direction, which is difficult to recognize in a two-dimensional OCTA image, becomes easy to understand.
- arithmetic averaging is also performed on multiple 3D tomographic image data.
- In step S3104, the third alignment unit 337 takes the input three-dimensional motion contrast data and three-dimensional tomographic image data stored in step S31011 and, based on the depth direction movement amount of each A scan stored in step S31013, returns the retina position of the reference data (Data1 in this embodiment) to the depth position at the time of input. Specifically, the three-dimensional motion contrast data and three-dimensional tomographic image data after the averaging in step S3103 are returned to their original state using the depth direction movement amount of each A scan stored in step S31013. For example, if a certain A scan was moved 5 pixels downward, it is moved 5 pixels upward here. Moving it 5 pixels upward creates an invalid area at the bottom of the data, so the data at the same coordinate positions in the input three-dimensional motion contrast data and three-dimensional tomographic image data stored in step S31011 are copied into the invalid area.
- the present invention is not limited to this.
- data in the range corresponding to the original coordinate position may be cut out from the three-dimensional data after the averaging and copied.
- Whereas the above processing is performed in two steps (copying into the invalid area after moving the data), in this case only a single copying step is performed, so the processing load can be reduced.
- The data held by the third alignment unit 337 becomes the final output data.
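A minimal sketch of the step S3104 restoration, assuming a per-A-scan integer depth shift dz was recorded during alignment (positive meaning the A scan was moved down); the function and argument names are illustrative, not from the patent.

```python
import numpy as np

def undo_z_shift(avg, original, dz):
    # avg:      averaged volume slice, (X, Z), one A scan per row
    # original: the stored input data at the same coordinates
    # dz[i]:    pixels A scan i was moved down during alignment
    out = original.copy()  # invalid regions default to the input data
    x, z = avg.shape
    for i in range(x):
        d = dz[i]
        if d > 0:        # was moved down; shift back up by d
            out[i, :z - d] = avg[i, d:]
        elif d < 0:      # was moved up; shift back down by -d
            out[i, -d:] = avg[i, :z + d]
        else:
            out[i, :] = avg[i, :]
    return out
```

Starting from a copy of the original data collapses the "move, then fill the invalid area" sequence into a single copy per A scan, matching the reduced-load variant described above.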
- In step S311, at least one of the OCTA image after the arithmetic mean and the OCTA image generated in step S394 from the three-dimensional motion contrast data after the arithmetic mean is displayed on the display unit 600.
- the display unit 600 may display the high-quality three-dimensional motion contrast data created by averaging or the result of the high-quality three-dimensional tomographic image.
- Examples of screens displayed on the display unit 600 are shown in FIGS. 19A and 19B.
- a basic screen will be described with reference to FIG. 19A.
- Reference numeral 1900 denotes the entire screen, 1901 a patient tab, 1902 an imaging tab, 1903 a report tab, 1904 a setting tab, and the diagonal lines in the report tab 1903 represent the active state of the report screen.
- a report screen is displayed for image confirmation after the shooting process is completed.
- Reference numeral 1905 represents a patient information display unit
- 1906 represents an examination sort tab
- 1907 represents an examination list
- 1908 represents a selection of the examination list, and the selected examination data is displayed on the screen.
- In FIG. 19A, thumbnails of SLO and tomographic images are displayed.
- a thumbnail of OCTA may be displayed.
- FIG. 19B shows an example in which thumbnails of SLO and OCTA are displayed in the examination list 1907, but the present invention is not limited to this. It may be a thumbnail of OCTA only, or may be a thumbnail of a tomographic image and OCTA.
- the inspection data acquired by photographing and the inspection data generated by the high image quality processing are displayed in the inspection list 1907 as a list.
- 1909 is an SLO image
- 1910 is a first OCTA image
- 1911 is a first tomographic image
- 1912 is a front image
- 1913 is a second OCTA image
- 1914 is a second tomographic image.
- Reference numeral 1915 indicates an image superimposed on the SLO image of 1909
- 1916 indicates a tab for switching the type of image of 1915.
- Reference numeral 1917 represents the type of OCTA image displayed as the first OCTA image.
- the types of OCTA include a surface layer, a deep layer, a choroid, and an OCTA image created in an arbitrary range.
- Reference numeral 1929 represents a button for executing high-quality motion contrast generation. In the present embodiment, pressing the 1929 button executes the high image quality data generation processing shown in steps S306 to S310. Pressing this button displays the data candidates for the data selection in step S306. When the data selected in the inspection list 1907 and the data captured repeatedly under the same imaging conditions are selected automatically, the high image quality data generation may be executed without displaying the data candidates. The generated high image quality data is displayed in the inspection list 1907.
- Tabs 1930 and 1931 are view mode tabs.
- The tab 1930 displays a two-dimensional OCTA image generated from the three-dimensional motion contrast data
- The tab 1931 displays the three-dimensional motion contrast data as shown in FIG.
- Reference numeral 1921 represents a list for switching the type of OCTA image displayed as the second OCTA image
- reference numeral 1922 represents a selection state of images in the list.
- In step S2001, it is determined whether the image data is high quality data. If it is high image quality data, the process proceeds to step S2002; if not, the process proceeds to step S2004.
- In step S2002, it is determined whether the high image quality data generation process was performed on two-dimensional data or three-dimensional data. In the case of high quality data generation with two-dimensional data, the process proceeds to step S2003; in the case of three-dimensional data, the process proceeds to step S2004.
- In step S2003, when the high-quality image data was generated with two-dimensional data, the first high-quality image data generation process described in the flowchart of FIG. 5 is executed. Thereby, a high-quality OCTA image can be generated in the range specified by the boundary lines at the upper and lower ends of the generation range.
- In step S2004, boundary line information is acquired. This is similar to step S391 of FIG.
- In step S2005, the image generation unit 332 generates an OCTA image.
- the generation method is the same as step S392 in FIG.
- Here, a high-quality OCTA image is created from the high-quality three-dimensional motion contrast data (the arithmetically averaged three-dimensional motion contrast data) based on the boundary line information acquired in step S2004.
- the OCTA image may be generated from the three-dimensional motion contrast data.
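The projection between the upper and lower boundary lines might look like the following sketch. It assumes integer boundary positions per (x, y) with upper < lower, and uses mean projection; the patent does not fix the projection operator or these names, so both are illustrative assumptions.

```python
import numpy as np

def project_octa(mc, upper, lower):
    # mc: (X, Y, Z) motion contrast volume; upper/lower: (X, Y)
    # integer boundary-line depths delimiting the generation range.
    x, y, _ = mc.shape
    octa = np.zeros((x, y))
    for i in range(x):
        for j in range(y):
            # Average the motion contrast between the two boundary
            # surfaces to obtain one en-face pixel.
            octa[i, j] = mc[i, j, upper[i, j]:lower[i, j]].mean()
    return octa
```

Switching the surface layer, deep layer, or choroid views then amounts to feeding different boundary-line pairs into the same projection.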
- In step S2006, the generated OCTA image is displayed.
- the newly generated OCTA image is displayed in 1913.
- the process of switching OCTA images by selecting the type of image displayed in the list has been described, but the present invention is not limited to this.
- the operator can also switch the OCTA image to be displayed by designating the location of the boundary line. For example, the operator may switch the type of the upper boundary line 1923 or change the number of offset values to update the OCTA image.
- an example has been shown in which an OCTA image is generated each time the image type is switched, but the invention is not limited to this.
- the preset OCTA images displayed in the list may be created and stored in advance, and the images may be read out.
- In this way, a high-quality OCTA image can be displayed regardless of whether the two-dimensional high-quality image data generation with a small calculation amount or the three-dimensional high-quality image data generation process with a large calculation amount is executed.
- Although the OCTA image displays a high-quality image, the three-dimensional motion contrast data displayed when the tab 1931 is pressed is volume rendering of the non-averaged three-dimensional motion contrast data, as shown in FIG. 18A.
- high-quality image data can be displayed for both the OCTA image and the three-dimensional motion contrast data.
- In step S312, the instruction acquisition unit (not shown) externally acquires an instruction as to whether or not to end the high image quality data generation processing by the image processing system 100. This instruction is input by the operator using the input unit 700. When the instruction to end the process is acquired, the image processing system 100 ends the process. On the other hand, if the high image quality data generation processing is to be continued or redone, the processing continues from step S306. As described above, the processing of the image processing system 100 is performed.
- In the present embodiment, it is possible to switch between high-quality image data generation with a small amount of calculation and high-quality image data generation with a large amount of calculation. Therefore, when it is desired to reduce the waiting time, selecting the generation with a small amount of calculation makes it possible to generate high-quality image data only at the necessary portion and reduce the processing load.
- FIG. 21 is a diagram showing the configuration of an image processing system 1021 including an image processing device 3021 according to this embodiment.
- the updating unit 3322 is different from that of the first embodiment.
- the updating unit 3322 updates the data created by the two-dimensional high image quality data generation processing to the data created by the three-dimensional high image quality data generation processing.
- FIG. 22 is a flowchart showing the flow of operation processing of the entire system in this embodiment. Note that, here, steps S2208 to S2212, which are processes different from those in the first embodiment, will be described. The other steps are the same as those in the first embodiment.
- In step S2208, the first high image quality data generation is performed.
- a high quality OCTA image that is a front image of a motion contrast image is generated. This process is similar to the flowchart shown in FIG.
- In step S2209, the high quality OCTA image created in step S2208 is displayed.
- An example of the display screen displayed here is the same as that shown in FIGS. 19A and 19B. At this point, the screen display is the same as in the case where the two-dimensional high-quality image data generation process with a small amount of calculation is executed in the first embodiment.
- In step S2210, the second high quality image data generation is performed.
- In the second high quality data generation process, high quality three-dimensional motion contrast data is generated.
- The second high-quality image data generation processing is executed in the background while the image is displayed in step S2209.
- FIG. 3 illustrates an example in which the data generation process is performed after the display in step S2209, but the present invention is not limited to this.
- the high-quality image data generation process may be executed first, and the image may be displayed during the background process. That is, the high quality OCTA image created in step S2208 is displayed in step S2209 before the completion of the alignment in the flowchart shown in FIG.
- In step S2211, after the second high image quality data generation process is completed in step S2210, the update unit 3322 updates the data created by the two-dimensional high image quality data generation processing to the data created by the three-dimensional high image quality data generation processing.
- Specifically, the updating unit 3322 executes processing for replacing the three-dimensional motion contrast data with the high-quality three-dimensional motion contrast data generated by the second high-quality image data generation processing.
- the OCTA image is also updated by updating the three-dimensional motion contrast data.
- the OCTA image is generated by projecting on the two-dimensional plane the motion contrast data corresponding to the range between the upper limit of the generation range and the lower limit of the generation range specified for the three-dimensional motion contrast data after the averaging.
- the OCTA image is displayed.
- FIG. 23A shows an example in which a dialog 2301 notifying that the data has been updated is displayed on the display unit 600.
- FIG. 23B shows an example in which an index 2302 indicating whether high-quality data generation is generated in two dimensions or three dimensions is displayed on the screen 1900.
- When the data is updated, the display is switched from the index indicating generation in two dimensions to the index indicating generation in three dimensions.
- As shown in FIGS. 23A and 23B, when the displayed data is updated while the OCTA image is being displayed, information indicating the data update is presented to the operator.
- The examples of notification by the dialog and by the index update have been shown, but the notification is not limited to these. It may be indicated by displaying a message on the screen 1900, by automatically switching the OCTA images 1910 and 1913, or by changing a color. As an example of changing the color, the frames of the OCTA images 1910 and 1913 may be displayed in different colors for the data generated by the first high quality image data generation and the data generated by the second high quality image data generation. Alternatively, the color used when the threshold-processed motion contrast data is superimposed on the luminance tomographic images 1911 and 1914 may be changed.
- the timing of updating the data is not always the time when the first high-quality image data is displayed on the display unit 600. Therefore, if the data is not updated at the timing when the first high-quality image data is displayed, it is not necessary to display the dialog for notifying that the data has been updated.
- step S2212 an instruction acquisition unit (not shown) externally acquires an instruction as to whether or not to end the high image quality data generation processing by the image processing system 1021. This process is similar to step S312 of the first embodiment. As described above, the processing of the image processing system 1021 is performed.
- high-quality image data with a small amount of calculation is generated first and the result is displayed to the operator.
- While the result is displayed, high-quality image data with a large amount of calculation is generated, and that result is then displayed to the operator.
- FIG. 24A is a flowchart showing the flow of operation processing of the entire system in this embodiment.
- steps S2411 to S2413 which are processes different from those in the first embodiment, will be described.
- the other steps are the same as those in the first embodiment.
- This embodiment will be described based on the first embodiment, but the present invention is not limited to this, and can be applied to the second embodiment.
- the display processing of step S2209 of FIG. 22 is replaced with the processing of steps S2411 to S2413 of this embodiment.
- In step S2411, the data generated by the first high-quality image generation or the second high-quality image generation is displayed.
- As the screen to be displayed, for example, the screens shown in FIGS. 19A and 19B are displayed.
- In step S2412, the data selection screen 2500 as shown in FIGS. 25A and 25B is displayed, and the operator selects data from the screen.
- the operator may operate the user interface (not shown) to display it, or the display control unit 305 may display it automatically.
- FIG. 25A shows an example of the OCTA image generated in step S307 to step S309 or step S310 and a selection screen for generating the OCTA image.
- 2501 is a generated high-quality OCTA image
- 2502 to 2504 are a plurality of photographed OCTA images
- black frames 2510 to 2512 indicate the selection state of the OCTA images used to generate the high-quality OCTA image.
- Black lines 2520 and 2521 indicate an example of artifacts contained in the OCTA image.
- In steps S307 to S309 or step S310, the image processing unit 303 identifies and removes artifacts.
- some artifacts may not be completely removed and may remain slightly.
- the operator selects the OCTA image to be combined while confirming the high-quality OCTA image.
- FIG. 25B shows an example of the case where an image is selected.
- FIG. 25B shows an example in which the OCTA image 2502 with the artifact 2521 is deselected.
- the image is updated by selecting or not selecting the OCTA image.
- the image update processing executed by the OCTA image selection will be described with reference to the flowchart of FIG. 24B.
- In step S24131, it is determined whether the high image quality data generation processing was performed on two-dimensional data or three-dimensional data. In the case of high quality data generation with two-dimensional data, the process proceeds to step S24132; in the case of three-dimensional data, the process proceeds to step S24134.
- In step S24132, the OCTA images to which the two-dimensional alignment parameters have been applied are acquired.
- In step S24133, high quality image data generation is performed using the OCTA images selected on the data selection screen.
- the selected OCTA image may be used to calculate an arithmetic mean to generate high quality image data.
- Alternatively, an addition OCTA image may be created in advance using all the OCTA images to which the registration parameters have been applied; the OCTA images deselected by data selection are then subtracted from the addition OCTA image, and the average is calculated to generate the high quality data.
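The subtraction shortcut can be sketched as follows, assuming a precomputed sum of all aligned OCTA images (the function and argument names are hypothetical): instead of re-summing on every selection change, the deselected images are subtracted from the running sum before dividing by the number of selected images.

```python
import numpy as np

def average_with_deselection(sum_image, images, selected):
    # sum_image: precomputed sum of ALL aligned OCTA images
    # images:    the individual aligned OCTA images
    # selected:  per-image booleans from the data selection screen
    total = sum_image.copy()
    n = 0
    for img, sel in zip(images, selected):
        if sel:
            n += 1
        else:
            total -= img  # remove deselected data from the sum
    return total / max(n, 1)
```

Each toggle on the selection screen then costs one subtraction and one division rather than a full re-accumulation, which is why the update feels immediate to the operator.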
- In step S24134, the motion contrast data to which the three-dimensional alignment parameters have been applied are acquired.
- In step S24135, high quality image data generation is performed using the three-dimensional motion contrast data corresponding to the OCTA images selected on the data selection screen.
- High quality image data may be generated by calculating an arithmetic mean using the selected motion contrast data, or, as in the two-dimensional case, by subtracting the deselected data from a precomputed sum and then calculating the average.
- Then, the motion contrast data corresponding to the range between the upper and lower generation boundaries of the three-dimensional motion contrast data is projected onto the two-dimensional plane to generate an OCTA image.
- In step S24136, the OCTA image generated by the high image quality data generation processing in step S24133 or step S24135 is displayed on the data selection screen 2500. By performing these processes, the artifacts are removed from the high-quality OCTA image 2506 in FIG. 25B.
- the operator can select data while checking the high quality OCTA image.
- Since the image is generated using the alignment parameters calculated in advance, an image can be created with a small amount of calculation each time data is selected or deselected. Therefore, the waiting time of the operator can be reduced.
- FIG. 26 is a flowchart showing the flow of operation processing of the entire system according to this embodiment. It should be noted that here, steps S2611 to S2614, which are processes different from those in the first embodiment, will be described. The other steps are the same as those in the first embodiment. In the present embodiment, the description will be given based on the first embodiment, but the present invention is not limited to this, and can be applied to the second embodiment and the third embodiment.
- In step S2611, a confirmation screen is displayed after the shooting.
- FIG. 27 shows an example of the display of this confirmation screen.
- reference numeral 2705 represents left and right eye display
- 2706 represents a photographing mode
- 2707 represents patient information.
- Reference numeral 2708 is an SLO image
- 2709 is an OCTA image
- 2710 to 2712 are tomographic images
- 2713 is a shooting range
- 2714 is a repeat button for executing repeated shooting
- 2715 is an OK button
- 2716 is an NG button.
- the OCTA image generated by the high quality image data generation processing is shown at 2709.
- the OCTA image 2709 is an image that has not been subjected to high image quality processing.
- In step S2612, it is determined whether or not the repeated shooting ends.
- If the repeated shooting does not end, the processing is performed again from the shooting in step S302.
- When the OK button 2715 or the NG button 2716 is pressed, the repeated image capturing process is ended. Then, the report tab 1903 is pressed and the process proceeds to step S2613.
- In step S2613, the high quality OCTA image created in steps S307 to S309 or step S310 is displayed.
- An example of the display screen is shown in FIGS. 28A and 28B.
- The report screens of FIGS. 28A and 28B are the same as those shown in FIGS. 19A and 19B, except that the inspection display of the inspection list 2801 is different.
- FIGS. 19A and 19B show an example in which the photographed inspection data and the inspection data generated by the image quality enhancement processing are displayed together in the inspection list 1907; in the present embodiment, an example is shown in which mainly the inspection data generated by the image quality enhancement processing is displayed.
- FIG. 28A shows an inspection list 2801 and a selection 2802 of inspection data in the inspection list.
- the inspection list 2801 displays the inspection data generated by the image quality enhancement process, but does not display the original inspection data used for the image quality enhancement process. As a result, the number of inspection data items displayed in the list is reduced, making it easier to find desired data.
- The sub-layer inspection list 2803 and the plurality of inspection data items 2804 used for the image quality enhancement processing are also shown.
- the operator can switch between displaying and hiding the inspection list in the sub-layer. Therefore, normally, by hiding it, only desired data can be displayed as shown in FIG. 28A. Then, by displaying the data as needed, it is possible to individually confirm the data that is the basis of the high-quality image data.
- the display of the examination list shown in this embodiment can be applied to the report display in the first to third embodiments.
- In step S2614, an instruction acquisition unit (not shown) externally acquires an instruction as to whether or not to end the imaging of the tomographic image by the image processing system 100.
- This instruction is input by the operator using the input unit 700.
- When the instruction to end is acquired, the image processing system 100 ends the process.
- pressing the shooting tab 1902 returns the processing to step S302 and continues shooting. As described above, the processing of the image processing system 100 is performed.
- In the present embodiment, the operator can decide whether or not to continue shooting while checking the high quality OCTA image. Since the image can be judged each time imaging is completed, repeated imaging can be ended as soon as an image of the required quality is obtained. Further, since the superimposing process is executed every time photographing is performed, the amount of calculation required for each superimposing process is reduced. Therefore, the waiting time required for displaying an image can be reduced.
- Modification 1 In the present embodiment, an example in which the same shooting range is shot with the same scan pattern for the data shot repeatedly N times has been described, but the present invention is not limited to this.
- For example, the data obtained by photographing a range of 3 mm × 3 mm at 300 × 300 (main scanning × sub scanning) and the data obtained by photographing a range of 3 mm × 3 mm at 600 × 600 may be aligned.
- the size in the depth direction at this time is common to both data, and is 1000, for example.
- the above-described alignment process is performed after performing the data conversion process for aligning the physical size per voxel.
- Before the processing, the 300 × 300 data may be enlarged to 600 × 600 by interpolation, or the 600 × 600 data may be reduced to 300 × 300 by interpolation. Also, when aligning data of a 3 mm × 3 mm area photographed at 300 × 300 with data of a 6 mm × 6 mm area photographed at 600 × 600, the physical size per voxel is the same, so the alignment is performed with the sizes as they are. As a result, data captured with different imaging ranges and different scan densities can be added and averaged.
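Aligning the physical size per voxel amounts to resampling the X-Y grid before alignment. The following rough sketch uses separable linear interpolation; the patent does not specify the interpolation kernel, and the function name is an assumption.

```python
import numpy as np

def resample_xy(volume, new_x, new_y):
    # volume: (X, Y, Z) scan; returns a (new_x, new_y, Z) scan whose
    # X-Y grid matches another scan's voxel pitch (e.g. 300x300 -> 600x600).
    x, y, z = volume.shape
    xi = np.linspace(0, x - 1, new_x)
    yi = np.linspace(0, y - 1, new_y)
    out = np.empty((new_x, new_y, z))
    for k in range(z):
        # Separable linear interpolation: along X first, then along Y.
        tmp = np.array([np.interp(xi, np.arange(x), volume[:, j, k])
                        for j in range(y)]).T          # (new_x, y)
        out[:, :, k] = np.array([np.interp(yi, np.arange(y), tmp[i, :])
                                 for i in range(new_x)])
    return out
```

The depth axis is left untouched, matching the text's note that the Z size (e.g. 1000) is common to both data sets.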
- Modification 2 In the present embodiment, an example in which the selection is performed using the image evaluation value, the alignment parameter evaluation value, and the artifact area evaluation value in the reference image selection has been described, but the present invention is not limited to this.
- the reference image may be selected using the evaluation value of layer detection.
- the evaluation value of layer detection is calculated when the detection unit 333 detects layers.
- For example, the reliability of the detection accuracy may be set for each A scan based on the brightness values of the tomographic image at the time of detection. When the luminance value of the tomographic image is low, blinking or the like may have occurred and the retina may not have been detected correctly, so the reliability of detection is defined as low. Alternatively, the reliability may be defined based not only on the brightness value but also on the position of the boundary line. For example, when the position of the boundary line touches the upper or lower end in the Z direction, the layer may not have been detected correctly, so the reliability of detection is low.
- the layer detection area above the threshold is evaluated using the layer detection reliability as described above.
- the evaluation value of the layer detection area can be evaluated by the same method as the evaluation value of the artifact area of Expression 5.
- Alternatively, the non-artifact region of T(x, y) may be replaced with the region whose layer detection reliability is at or above the threshold.
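The per-A-scan reliability described in this modification could be sketched as follows; the threshold value and all names are hypothetical assumptions, since the patent only defines the two conditions (low brightness, boundary touching the volume edge) qualitatively.

```python
import numpy as np

def layer_reliability(a_scan_brightness, boundary_z, z_max,
                      min_brightness=0.2):
    # Per-A-scan detection reliability: 0 when the tomogram is dark
    # (possible blink) or the detected boundary touches the top or
    # bottom edge of the volume, 1 otherwise.
    rel = np.ones_like(a_scan_brightness, dtype=float)
    rel[a_scan_brightness < min_brightness] = 0.0
    rel[(boundary_z <= 0) | (boundary_z >= z_max - 1)] = 0.0
    return rel
```

The resulting map can then be thresholded and evaluated in the same way as the artifact area of Expression 5, as the surrounding text describes.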
- In the present embodiment, the first reference data in the Z direction alignment within the data is the center of the data, but the present invention is not limited to this.
- For example, the alignment may be started from a position near the center of the image where the layer detection reliability of the boundary line L1 is high.
- the reliability of layer detection is defined by the brightness of the image and the detected position of the layer boundary in the Z direction, as shown in the second modification. As a result, alignment is started based on a highly reliable location, and it can be expected to reduce alignment errors.
- Modification 4 In the present embodiment, an example has been shown in which both the three-dimensional motion contrast data and the three-dimensional tomographic image are three-dimensionally deformed and arithmetically averaged, but the invention is not limited to this. Only the motion contrast data may be deformed. In that case, whereas the fourth alignment unit 338 performs alignment using a tomographic image in the present embodiment, it instead performs alignment using the motion contrast data. Further, the averaging process by the image synthesizing unit 339 is also performed only on the three-dimensional motion contrast data. When only the motion contrast data needs to be of high image quality, deforming only the motion contrast data reduces the processing load.
- the OCTA image is enlarged and aligned in the xy plane in step S3734, and the movement parameters in the xy plane are converted into movement parameters corresponding to the original size in step S31015.
- the three-dimensional data itself may be enlarged, aligned, and output as it is.
- For example, the size of the three-dimensional data is a numerical value such as 300 × 300 × 1000 (main scanning × sub scanning × depth).
- This may be enlarged to 600 × 600 × 1000, aligned and arithmetically averaged, and the data output with that same size.
- Alternatively, the data may be enlarged to 600 × 600 × 1000, aligned and averaged, and finally returned to a size of 300 × 300 × 1000 for output.
- the output data after the three-dimensional arithmetic mean is expected to have higher image quality.
- the third alignment unit 337 performs the process of returning the data moved in the Z direction to the Z position at the time of input in step S3104, but the present invention is not limited to this.
- the result of the Z alignment performed by the third alignment unit 337 may be output as it is without returning to the Z position at the time of input.
- Modification 7 In the present embodiment, the series of flow from shooting to display is shown, but the present invention is not limited to this.
- high-quality image generation processing may be performed using data that has already been captured. In that case, the processing relating to imaging is skipped, and instead, a plurality of already captured three-dimensional motion contrast data and three-dimensional tomographic images are acquired. Then, high-quality image generation processing is performed.
- Each of the above embodiments implements the present invention as an image processing apparatus.
- the embodiment of the present invention is not limited to the image processing apparatus.
- the CPU of the image processing apparatus controls the entire computer by using computer programs and data stored in RAM and ROM. Further, the execution of software corresponding to each unit of the image processing apparatus is controlled to realize the function of each unit.
- the user interface such as buttons and the layout of the display are not limited to those shown above.
Abstract
This image processing apparatus comprises: a first mode in which a composite image is generated from multiple images of a subject's eye on the basis of a first image alignment process; and a second mode in which a composite image is generated from the multiple images of the subject's eye on the basis of a second image alignment process involving a greater degree of processing than the first image alignment process. With this configuration, the image processing apparatus can achieve both noise reduction and a reduction in the amount of computation required for the alignment process up to the generation of a composite image.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2018205554A JP2020069115A (ja) | 2018-10-31 | 2018-10-31 | 画像処理装置、画像処理方法およびプログラム |
| JP2018-205554 | 2018-10-31 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020090439A1 true WO2020090439A1 (fr) | 2020-05-07 |
Family
ID=70462354
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2019/040461 Ceased WO2020090439A1 (fr) | 2018-10-31 | 2019-10-15 | Appareil de traitement d'images, procédé de traitement d'images, et programme |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP2020069115A (fr) |
| WO (1) | WO2020090439A1 (fr) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP7643899B2 (ja) * | 2021-03-19 | 2025-03-11 | 株式会社トプコン | グレード評価装置、眼科撮影装置、プログラム、記録媒体、およびグレード評価方法 |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2017046975A (ja) * | 2015-09-02 | 2017-03-09 | 株式会社ニデック | 眼科撮影装置及び眼科撮影プログラム |
| JP2018114068A (ja) * | 2017-01-17 | 2018-07-26 | キヤノン株式会社 | 画像処理装置、光干渉断層撮像装置、画像処理方法、及びプログラム |
| JP2018153611A (ja) * | 2017-03-17 | 2018-10-04 | キヤノン株式会社 | 情報処理装置、画像生成方法、及びプログラム |
- 2018-10-31 JP JP2018205554A patent/JP2020069115A/ja active Pending
- 2019-10-15 WO PCT/JP2019/040461 patent/WO2020090439A1/fr not_active Ceased
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2017046975A (ja) * | 2015-09-02 | 2017-03-09 | 株式会社ニデック | 眼科撮影装置及び眼科撮影プログラム |
| JP2018114068A (ja) * | 2017-01-17 | 2018-07-26 | キヤノン株式会社 | 画像処理装置、光干渉断層撮像装置、画像処理方法、及びプログラム |
| JP2018153611A (ja) * | 2017-03-17 | 2018-10-04 | キヤノン株式会社 | 情報処理装置、画像生成方法、及びプログラム |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2020069115A (ja) | 2020-05-07 |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| JP6526145B2 (ja) | 画像処理システム、処理方法及びプログラム | |
| JP7195745B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
| US10803568B2 (en) | Image processing apparatus, alignment method and storage medium | |
| JP2019150485A (ja) | 画像処理システム、画像処理方法及びプログラム | |
| WO2020050308A1 (fr) | Dispositif de traitement d'image, procédé de traitement d'image et programme | |
| US20190073780A1 (en) | Image processing apparatus, alignment method and storage medium | |
| JP7102112B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
| US10789721B2 (en) | Image processing apparatus, alignment method and storage medium | |
| JP2021122559A (ja) | 画像処理装置、画像処理方法及びプログラム | |
| JP2019063446A (ja) | 画像処理装置、画像処理方法及びプログラム | |
| JP7005382B2 (ja) | 情報処理装置、情報処理方法およびプログラム | |
| WO2020090439A1 (fr) | Appareil de traitement d'images, procédé de traitement d'images, et programme | |
| JP7297952B2 (ja) | 情報処理装置、情報処理方法およびプログラム | |
| JP7604160B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
| JP7158860B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
| JP7281872B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
| JP7646321B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
| US12383135B2 (en) | Image processing apparatus, image processing method, and storage medium | |
| JP2013153881A (ja) | 画像処理システム、処理方法及びプログラム | |
| JP7086708B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
| JP2019208845A (ja) | 画像処理装置、画像処理方法及びプログラム | |
| JP2019115827A (ja) | 画像処理システム、処理方法及びプログラム | |
| JP2013153880A (ja) | 画像処理システム、処理方法及びプログラム | |
| JP2019198384A (ja) | 画像処理装置、画像処理方法及びプログラム |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19879389; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19879389; Country of ref document: EP; Kind code of ref document: A1 |