US20190005662A1 - Image registration apparatus, image registration method, and image registration program
Classifications
- G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
- G06T 7/344 - Image registration using feature-based methods involving models
- G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
- G06T 2207/10028 - Range image; depth image; 3D point clouds
- G06T 2207/10068 - Endoscopic image
- G06T 2207/10081 - Computed X-ray tomography [CT]
- G06T 2207/10088 - Magnetic resonance imaging [MRI]
- G06T 2207/20072 - Graph-based image processing
- G06T 2207/30061 - Lung
Description
- The present application claims priority under 35 U.S.C. § 119 to Japanese Patent Application No. 2017-129243, filed on Jun. 30, 2017. Each of the above application(s) is hereby expressly incorporated by reference, in its entirety, into the present application.
- The present invention relates to an image registration apparatus, an image registration method, and a non-transitory computer readable recording medium storing an image registration program that perform registration between a three-dimensional image obtained by imaging a subject and an image different from the three-dimensional image.
- In recent years, according to advances in imaging apparatuses, the resolution of image data obtained through imaging has been enhanced, and detailed analysis of a subject based on the image data has become available. For example, in multi slice CT (Multi Detector-row Computed Tomography), a plurality of tomographic images can be captured at a time with thin slice thicknesses. As the slice thicknesses decrease, the resolution of a three-dimensional image in the body axis direction, in which the plurality of tomographic images are layered, is enhanced, and a more detailed three-dimensional image can be obtained. By displaying and analyzing such a three-dimensional image, it is possible to find a lesion or the like that has not been easily found until now.
- As a display method that uses such a three-dimensional image, there is virtual endoscope display (for example, see WO2014/141968A and JP1999-120327A (JP-H11-120327A)). The virtual endoscope display is a method for setting a visual point position in a lumen and generating and displaying a perspective projection image on the basis of the set visual point position; the displayed projection image is a virtual endoscope image. In the virtual endoscope display, a user successively changes visual point positions, making it possible to provide images as if an endoscope camera were imaging while moving inside the body.
- Particularly, in bronchoscopy, which is an inspection using an endoscope, the branches of the bronchus are extremely complicated, so a technique of navigating the endoscope while referring to the above-described virtual endoscope display as a "map" is used. In this case, since it is troublesome to manually change the virtual endoscope display in accordance with the actual movement of the endoscope, a technique of estimating where in the body the distal end of the endoscope is present to create a virtual endoscope image is used.
- For example, WO2014/141968A discloses, in order to estimate the distal end position of an endoscope, a technique for performing a registration process between a virtual endoscope image and an actual endoscope image obtained through actual imaging using an endoscope. Further, JP2013-150650A discloses a technique for acquiring tubular tissue shape data indicating the shape of a tubular tissue in a subject from three-dimensional image data of the subject, acquiring endoscope path data indicating a path of an endoscope inserted in the subject, performing matching between tree-structure data that is the tubular tissue shape data and the endoscope path data, and displaying a virtual endoscope image obtained on the basis of a matching result together with an actual endoscope image.
- JP2013-192569A discloses a technique for performing structure matching by extracting a graph structure in calculating a correspondence positional relationship between two three-dimensional images relating to the same subject that are comparison targets.
- Here, in order to estimate the distal end position of the endoscope for navigation, it is necessary to generate virtual endoscope images from a plurality of visual points, to perform registration between the plurality of virtual endoscope images and an actual endoscope image, and to select the virtual endoscope image that most closely matches the actual endoscope image. Similarly, as disclosed in JP2013-192569A, when calculating the correspondence relationship between two three-dimensional images, there are cases where virtual endoscope images are generated from the two three-dimensional images and matched against each other.
- In order to enhance the estimation accuracy and stability of the distal end position of an endoscope, and the accuracy of registration between two three-dimensional images, it is necessary to generate a large number of virtual endoscope images. However, the volume rendering process used to generate a virtual endoscope image requires a large amount of computation, so generating virtual endoscope images takes a long time; registration between virtual endoscope images and an actual endoscope image likewise takes a long time to compute. As a result, it is difficult to navigate the distal end position of the endoscope in real time.
- The invention has been made in consideration of the above-mentioned problems, and an object of the invention is to provide a technique capable of performing registration between a three-dimensional image and an image different from the three-dimensional image at high speed, such as registration between a virtual endoscope image and an actual endoscope image.
- According to an aspect of the invention, there is provided an image registration apparatus comprising: an image acquisition unit that acquires a three-dimensional image obtained by imaging a subject and an image different from the three-dimensional image; a graph structure generation unit that generates a graph structure of a tubular structure included in the three-dimensional image; a contour information acquisition unit that acquires contour information of the tubular structure at each point on the graph structure; and a registration unit that performs registration between the three-dimensional image and the different image on the basis of the contour information.
- In the image registration apparatus according to this aspect, the different image may be an actual endoscope image acquired using an endoscope inserted in the tubular structure. Alternatively, the different image may be a three-dimensional image, different from the three-dimensional image, obtained by imaging the subject.
- Here, the "different three-dimensional image" means a three-dimensional image captured at a different imaging time, a three-dimensional image captured by a different modality, or the like. The three-dimensional image captured by a different modality may be a CT image acquired through a CT apparatus, a magnetic resonance imaging (MRI) image acquired through an MRI apparatus, a three-dimensional ultrasound image acquired through a three-dimensional echo ultrasonic apparatus, or the like.
- The image registration apparatus may further comprise: a visual point information acquisition unit that acquires visual point information inside the tubular structure; a projection point specification unit that specifies a projection point from respective points on the graph structure on the basis of the visual point information and the graph structure; and a projection image generation unit that generates a projection image obtained by projecting the contour information at the projection point on a two-dimensional plane, in which the registration unit may perform registration between the three-dimensional image and the different image using the projection image.
- The image registration apparatus may further comprise, in addition to the above units, a hole contour extraction unit that extracts a contour of a hole included in the actual endoscope image, in which the registration unit may perform registration between the three-dimensional image and the different image using the projection image and the extracted contour of the hole.
- The projection point specification unit may specify one point on the graph structure as a starting point, specify points included in a predetermined range as projection candidate points while following the graph structure from the starting point, and specify the projection point from the projection candidate points. The projection point specification unit may also specify the projection point on the basis of shape information of the contour information.
- The projection image generation unit may acquire information on a branch to which the projection point belongs and add the information on the branch to the projection image.
- The image registration apparatus may further comprise a visual point information estimation unit that estimates visual point information of the different image on the basis of a result of the registration process between the projection image and the different image and visual point information of the projection image.
- The tubular structure may be the bronchus.
- According to another aspect of the invention, there is provided an image registration method comprising: acquiring a three-dimensional image obtained by imaging a subject and an image different from the three-dimensional image; generating a graph structure of a tubular structure included in the three-dimensional image; acquiring contour information of the tubular structure at each point on the graph structure; and performing registration between the three-dimensional image and the different image on the basis of the contour information.
- According to still another aspect of the invention, there is provided a non-transitory computer readable recording medium storing a program that causes a computer to execute the image registration method according to the above-mentioned aspect of the invention.
- According to still another aspect of the invention, there is provided an image registration apparatus comprising: a memory that stores a command to be executed by a computer; and a processor configured to execute the stored command, in which the processor executes a process of acquiring a three-dimensional image obtained by imaging a subject and an image different from the three-dimensional image; a process of generating a graph structure of a tubular structure included in the three-dimensional image; a process of acquiring contour information of the tubular structure at each point on the graph structure; and a process of performing registration between the three-dimensional image and the different image on the basis of the contour information.
- According to the invention, a three-dimensional image obtained by imaging a subject and an image different from the three-dimensional image are acquired, and a graph structure of a tubular structure included in the three-dimensional image is generated. Further, contour information of the tubular structure at each point on the graph structure is acquired, and registration between the three-dimensional image and the different image is performed on the basis of the contour information. The contour information can be acquired with a small amount of computation compared with a case where a volume rendering process or the like is performed; accordingly, it is possible to perform registration between the three-dimensional image and the different image at high speed.
- FIG. 1 is a block diagram showing a schematic configuration of an endoscope image diagnosis support system using an embodiment of an image registration apparatus of the invention.
- FIG. 2 is a diagram showing an example of a graph structure.
- FIG. 3 is a diagram showing an example of contour information of the bronchus at each point on the graph structure.
- FIG. 4 is a flowchart illustrating a projection point specification method.
- FIG. 5 is a diagram illustrating the projection point specification method.
- FIG. 6 is a diagram showing an example of contour information at each point on a graph structure.
- FIG. 7 is a diagram showing an example of a projection image.
- FIG. 8 is a diagram showing an example of a projection image generated using contour information of a first diverging bronchial tube.
- FIG. 9 is a diagram showing an example of a projection image generated so that pieces of contour information do not overlap each other.
- FIG. 10 is a diagram showing an example of a projection image generated using contour information at a projection point that is finally specified.
- FIG. 11 is a diagram illustrating a method for estimating visual point information of an actual endoscope image on the basis of a deviation amount between a projection image and an actual endoscope image and visual point information of a projection image.
- FIG. 12 is a flowchart showing processes performed in this embodiment.
- FIG. 13 is a diagram showing an example in which branch information is additionally displayed with respect to a projection image.
- FIG. 14 is a block diagram showing a schematic configuration of an endoscope image diagnosis support system using another embodiment of the image registration apparatus of the invention.
- FIG. 15 is a diagram showing an example of a contour image obtained by extracting hole contours from an actual endoscope image.
- FIG. 16 is a diagram illustrating a method for estimating holes (contour information) included in an actual endoscope image corresponding to contour information included in a projection image.
- Hereinafter, an endoscope image diagnosis support system using an embodiment of an image registration apparatus, an image registration method, and a non-transitory computer readable recording medium storing an image registration program of the invention will be described in detail with reference to the accompanying drawings. FIG. 1 is a block diagram showing a schematic configuration of an endoscope image diagnosis support system of this embodiment.
- An endoscope image diagnosis support system 1 according to this embodiment includes an image registration apparatus 10, a display device 30, an input device 40, a three-dimensional image storage server 50, and an endoscope device 60, as shown in FIG. 1.
- The image registration apparatus 10 is configured by installing the image registration program of this embodiment into a computer. The image registration program may be recorded on a recording medium such as a digital versatile disc (DVD) or a compact disc read only memory (CD-ROM) for distribution, and installed into a computer from the recording medium. Alternatively, the image registration program may be stored in a storage device of a server computer connected to a network, or in a network storage, in a state of being accessible from the outside, and downloaded and installed into a computer used by a doctor as necessary.
- The image registration apparatus 10 includes a central processing unit (CPU), a semiconductor memory, a storage device such as a hard disk or a solid state drive (SSD) in which the above-described image registration program is installed, and the like. Through this hardware, an image acquisition unit 11, a graph structure generation unit 12, a contour information acquisition unit 13, a visual point information acquisition unit 14, a projection point specification unit 15, a projection image generation unit 16, a bronchial three-dimensional image generation unit 17, a registration unit 18, a visual point information estimation unit 19, and a display controller 20 shown in FIG. 1 are configured. As the image registration program installed in the hard disk is executed by the CPU, the respective units operate.
- The image registration apparatus 10 may instead be provided with a plurality of processors or processing circuits that respectively perform an image acquisition process, a graph structure generation process, a contour information acquisition process, a visual point information acquisition process, a projection point specification process, a projection image generation process, a bronchial three-dimensional image generation process, a registration process, a visual point information estimation process, and a display control process.
- The image acquisition unit 11 acquires a three-dimensional image of a subject obtained through imaging in advance, before an operation or an inspection using an endoscope device, for example, and an actual endoscope image that is a two-dimensional endoscope image of the inside of the bronchus captured by the endoscope device 60.
- As the three-dimensional image, an image represented by volume data reconstructed from slice data output from a CT apparatus, an MRI apparatus, or the like, or an image represented by volume data output from a multi slice (MS) CT apparatus or a cone beam CT apparatus may be used. Alternatively, a three-dimensional ultrasound image acquired by a three-dimensional echo ultrasonic apparatus may be used.
- The three-dimensional image is stored in advance, together with identification information of the subject, in the three-dimensional image storage server 50, and the image acquisition unit 11 reads out the three-dimensional image corresponding to the identification information of a subject input through the input device 40 from the three-dimensional image storage server 50.
- The graph structure generation unit 12 receives the three-dimensional image acquired by the image acquisition unit 11, and generates a graph structure of a tubular structure included in the input three-dimensional image. In this embodiment, a graph structure of the bronchus is generated as the graph structure of the tubular structure. Regarding the bronchus included in the three-dimensional image, pixels corresponding to the inside of the bronchus represent an air region and are therefore displayed as a region indicating low CT values (pixel values) on a CT image, whereas the bronchial wall may be considered a cylindrical or tubular structure indicating relatively high CT values. Accordingly, the bronchus is extracted by performing structural analysis of shape based on the distribution of CT values for each pixel.
- The bronchus diverges in multiple stages, and the diameter of a bronchial tube becomes smaller toward its end. In order to detect bronchial tubes of different sizes, Gaussian pyramid images obtained by multi-resolution conversion of the three-dimensional image, that is, a plurality of three-dimensional images having different resolutions, are generated in advance, and a detection algorithm is scanned over each of the generated Gaussian pyramid images to detect tubular structures having different sizes.
- Specifically, a Hessian matrix is calculated for each pixel of the three-dimensional image at each resolution, and whether the pixel corresponds to the inside of a tubular structure is determined from the magnitude relation of the eigenvalues of the Hessian matrix. The Hessian matrix is a matrix whose elements are the second-order partial derivatives of the density values along the respective axes (the x-axis, y-axis, and z-axis of the three-dimensional image), and is the following 3×3 matrix:

$$\nabla^2 I = \begin{pmatrix} \frac{\partial^2 I}{\partial x^2} & \frac{\partial^2 I}{\partial x \partial y} & \frac{\partial^2 I}{\partial x \partial z} \\ \frac{\partial^2 I}{\partial y \partial x} & \frac{\partial^2 I}{\partial y^2} & \frac{\partial^2 I}{\partial y \partial z} \\ \frac{\partial^2 I}{\partial z \partial x} & \frac{\partial^2 I}{\partial z \partial y} & \frac{\partial^2 I}{\partial z^2} \end{pmatrix}$$

- When the eigenvalues of the Hessian matrix at a certain pixel are λ1, λ2, and λ3, the pixel is known to belong to a tubular structure in a case where two of the eigenvalues are large and one is close to 0, for example, when λ3 ≫ λ1, λ2 ≫ λ1, and λ1 ≈ 0 are satisfied. Further, the eigenvector corresponding to the smallest eigenvalue (λ1 ≈ 0) of the Hessian matrix coincides with the main axial direction of the tubular structure.
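The following is a minimal Python sketch of this per-voxel eigenvalue test, not the patent's implementation: the Hessian elements are obtained from Gaussian-derivative filters, and the eigenvalues are sorted by absolute magnitude so that λ1 is the one closest to 0. The volume `vol`, the smoothing scale `sigma`, and the ratio threshold are illustrative assumptions; in the Gaussian pyramid scheme described above, the same test would simply be run on each resolution level.

```python
import numpy as np
from scipy import ndimage

def tube_mask(vol, sigma=1.0, ratio=4.0):
    # Second-order Gaussian derivatives: the six distinct Hessian elements.
    Hxx = ndimage.gaussian_filter(vol, sigma, order=(2, 0, 0))
    Hyy = ndimage.gaussian_filter(vol, sigma, order=(0, 2, 0))
    Hzz = ndimage.gaussian_filter(vol, sigma, order=(0, 0, 2))
    Hxy = ndimage.gaussian_filter(vol, sigma, order=(1, 1, 0))
    Hxz = ndimage.gaussian_filter(vol, sigma, order=(1, 0, 1))
    Hyz = ndimage.gaussian_filter(vol, sigma, order=(0, 1, 1))
    # Assemble a (..., 3, 3) array of per-voxel Hessian matrices.
    H = np.stack([np.stack([Hxx, Hxy, Hxz], axis=-1),
                  np.stack([Hxy, Hyy, Hyz], axis=-1),
                  np.stack([Hxz, Hyz, Hzz], axis=-1)], axis=-2)
    lam = np.linalg.eigvalsh(H)                 # eigenvalues, ascending by value
    lam = np.take_along_axis(lam, np.abs(lam).argsort(axis=-1), axis=-1)
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]  # ascending by |.|
    # Tube test from the text: one eigenvalue near 0, the other two large.
    return (np.abs(l2) > ratio * np.abs(l1)) & (np.abs(l3) > ratio * np.abs(l1))
```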
- The bronchus may thus be represented by the graph structure; however, the tubular structures extracted as described above are not necessarily extracted as one graph structure in which all of the tubular structures are connected to each other, due to the influence of a tumor or the like.
- FIG. 2 is a diagram showing an example of such a graph structure. In FIG. 2, Sp represents a starting point, a diverging point Bp is indicated by a white circle, an end point Ep is indicated by a black circle, and a branch E is indicated by a line. A method for generating the graph structure is not limited to the above-described method, and other known methods may be used.
- The contour information acquisition unit 13 acquires contour information of the bronchus at each point on the graph structure. Specifically, the contour information acquisition unit 13 of this embodiment acquires the contour information of the tubular structure detected when the graph structure of the bronchus is generated as described above. The contour information is acquired at each point on the graph structure, and the interval between the respective points may be set smaller than the interval between the diverging points of the bronchus; for example, it is preferable to set the interval to about 1 mm to 2 mm (see the resampling sketch below).
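A small sketch of how such evenly spaced points could be obtained along one branch of the graph structure, assuming the centerline is available as an N x 3 array of millimeter coordinates; the function name and spacing default are illustrative.

```python
import numpy as np

def resample_branch(points_mm, spacing=1.5):
    """Resample a branch centerline (N x 3, in mm) at a fixed arc-length
    spacing, finer than the distance between diverging points (~1-2 mm)."""
    seg = np.linalg.norm(np.diff(points_mm, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    s_new = np.arange(0.0, s[-1], spacing)           # evenly spaced stations
    return np.column_stack([np.interp(s_new, s, points_mm[:, k])
                            for k in range(3)])
```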
- FIG. 3 is a diagram showing an example of contour information of the bronchus at each point on the graph structure; the graph structure of the bronchus is also shown in FIG. 3.
- The visual point information acquisition unit 14 acquires visual point information inside the bronchus. In this embodiment, the visual point information is set and input by a user using the input device 40, and the visual point information acquisition unit 14 acquires the visual point information input through the input device 40. The visual point information represents three-dimensional coordinates in a predetermined coordinate system of the bronchus. The setting and input of the visual point information may be designated by the user with the input device 40, such as a mouse, on a three-dimensional image of the bronchus displayed on the display device 30, for example.
- In this embodiment, the visual point information is set and input by a user, but the invention is not limited thereto; the visual point information may be automatically set on the basis of a predetermined condition. For example, the visual point information may be set in a base end portion of the bronchus, or at the first diverging point from the base end portion.
- The projection point specification unit 15 specifies one point on the graph structure as a starting point on the basis of the visual point information, and specifies projection points from the respective points on the graph structure while following the graph structure from the starting point. The specification of the projection points by the projection point specification unit 15 will be described with reference to the flowchart shown in FIG. 4 and to FIGS. 5 to 10.
- The projection point specification unit 15 first specifies the point on the graph structure that is closest to the visual point information S set and input by the user as a starting point Ns, as shown in FIG. 5 (step S10). The projection point specification unit 15 then specifies the points included in a predetermined range NR as projection candidate points while following the graph structure from the starting point Ns toward the downstream side of the bronchus (the side opposite to the base end side) (step S12). As the range NR, a range within a predetermined distance from the starting point Ns may be used; alternatively, a range within which the number of diverging points passed when following the graph structure from the starting point Ns is a predetermined number may be used.
- Next, the projection point specification unit 15 specifies some of the projection candidate points included in the predetermined range NR as projection points on the basis of a predetermined projection point condition (step S14). As the projection point condition, for example, a condition that the central point, the initial point, or the final point of each branch of the graph structure included in the range NR is used as a projection point may be employed. Here, the initial point and the final point refer to the first and last points of each branch when following the graph structure toward the downstream side. Alternatively, a condition that, for each branch of the graph structure included in the range NR, an initial point spaced apart from a diverging point by a predetermined distance or longer is specified as a projection point may be used (a sketch of collecting the candidate points within the range NR follows).
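The following sketch shows step S12 under assumed data structures: `adj` maps a node id to its downstream children, and `pos` maps a node id to an (x, y, z) position in millimeters. The distance-based definition of the range NR is the first of the two options named above; all names and the default distance are illustrative assumptions.

```python
from collections import deque
import numpy as np

def candidate_points(adj, pos, start, max_dist=30.0):
    """Breadth-first traversal downstream from `start`, collecting nodes
    whose path length from the start stays within `max_dist` (range NR)."""
    dist = {start: 0.0}
    queue = deque([start])
    while queue:
        n = queue.popleft()
        for c in adj.get(n, ()):
            d = dist[n] + np.linalg.norm(np.subtract(pos[c], pos[n]))
            if c not in dist and d <= max_dist:
                dist[c] = d          # path length from the starting point Ns
                queue.append(c)
    return set(dist) - {start}       # the projection candidate points
```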
- In FIG. 5, an example of the projection points Np specified in step S14 is indicated by broken-line circles.
- FIG. 6 is a diagram showing an example of contour information at each point on the graph structure in the range NR; contour information at some of the points is omitted. The numerical values 0 to 6 shown in FIG. 6 represent branch numbers assigned to the respective branches, and the numerical value shown in the vicinity of each piece of contour information represents the node number assigned to each point on the graph structure.
- Hereinafter, a method for specifying projection points in a case where the above-described starting point Ns is the node 80 shown in FIG. 6 will be described. In a case where, for example, the nodes 98 and 99 are specified as projection points in step S14, the contour information at these two nodes is projected on a two-dimensional plane to generate a projection image, and a projection image as shown in FIG. 7 is obtained. In this embodiment, the two-dimensional plane is a plane orthogonal to the body axis of the subject. In the projection image shown in FIG. 7, however, the two pieces of contour information overlap each other, and thus the projection image is not preferable. This is because the projection points specified in step S14 are excessively close to the diverging point.
- In this case, the projection point specification unit 15 changes the projection points, generates a projection image again using contour information at the changed projection points, and confirms whether the pieces of contour information on the re-generated projection image overlap each other. Specifically, the projection point specification unit 15 changes at least one of the projection points of the nodes 98 and 99 to a projection point farther from the diverging point, and generates a projection image again using contour information at the changed projection point; for example, the projection point of the node 98 is changed to the node 107, and the projection point of the node 99 is changed to the node 110.
- In this way, the projection point specification unit 15 confirms whether a projection image condition that pieces of contour information on the projection image do not overlap each other is satisfied. In a case where the projection image condition is not satisfied (NO in step S16), the projection point specification unit 15 changes the projection points on the basis of a predetermined condition (step S18), generates a projection image again using contour information at the changed projection points, and confirms whether the projection image condition is satisfied; the change of the projection points and the generation of the projection image are repeated until the projection image condition is satisfied (a minimal sketch of this overlap test and refinement loop is given below).
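A sketch of the loop just described, under the assumption that each projected contour can be approximated by a circle on the two-dimensional plane. `contour_2d(n)` (returning the projected centre and radius at node n) and `next_node(n)` (returning the next node downstream on the same branch, or None at the branch end) are hypothetical helpers; the strategy of pushing an overlapping point one node downstream is one plausible "predetermined condition", not the patent's stated rule.

```python
import numpy as np

def overlaps(c1, r1, c2, r2):
    """Two circle-approximated contours overlap when the distance between
    their centres is below the sum of their radii."""
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) < r1 + r2

def refine_projection_points(points, next_node, contour_2d):
    """Move projection points downstream until no projected contours overlap."""
    points = list(points)
    changed = True
    while changed:
        changed = False
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                if overlaps(*contour_2d(points[i]), *contour_2d(points[j])):
                    nxt = next_node(points[j])
                    if nxt is not None:      # push one point farther downstream
                        points[j] = nxt
                        changed = True
    return points
```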
- FIG. 8 is a diagram showing a projection image generated using contour information at the projection points of the node 123 and the node 119, which satisfy the projection image condition.
- The projection point specification unit 15 repeats the change of projection points and the generation of a projection image until the projection image condition that pieces of contour information on the projection image do not overlap each other is satisfied; as a result, it is assumed here that the projection image shown in FIG. 9 is generated.
- Further, with respect to a child branch, the projection point specification unit 15 specifies projection points that satisfy a projection image condition that the contour information of the child branch is included in the contour information at a node of its parental branch. However, the projection image condition relating to the child branch is not limited thereto; contour information that is close to contour information of another branch may also remain as a final projection point without being deleted. For example, the contour information at the node 214 of the branch 6 shown in FIG. 9 may remain since it is close to the contour information at the node 119 of the branch 2.
- On the other hand, even in a case where contour information at a node of a child branch is included in contour information at a node of the parental branch, the contour information may be deleted: in a case where its shape is an ellipse whose ratio of short diameter to long diameter is equal to or smaller than a predetermined threshold value, as in the contour information at the node 289 shown in FIG. 9; in a case where the contour information cannot be projected; or in a case where the ratio of its size to the size of the contour information at a node of the parental branch is equal to or smaller than a threshold value, that is, the contour information at the node of the child branch is extremely small.
- The projection point specification unit 15 repeats the change and the deletion of projection points so that the projection image condition is satisfied, as described above, to specify the final projection points (step S20).
- Whether or not the above-described projection image condition is satisfied may be confirmed using shape information of the contour information at the projection points. For example, the confirmation may be performed using, as the shape information, the sizes (radii, diameters, short diameters, long diameters, or the like) of the pieces of contour information and the distance between their centers.
- In this embodiment, a projection image is first generated using contour information at provisionally specified projection points, and whether the generated projection image satisfies the projection image condition is confirmed in order to specify the final projection points; however, actually generating the projection image is not essential. For example, whether the projection image condition is satisfied may be confirmed on the basis of the positional correlation of the provisionally specified projection points in the three-dimensional coordinate space and the sizes of the contour information at those points.
- Alternatively, the projection points specified on the basis of the projection point condition in step S14 may be used as the final projection points, and a projection image may be generated using contour information at these final projection points.
- FIG. 10 is a diagram showing a projection image generated using contour information at projection points that are finally specified.
- The bronchial three-dimensional image generation unit 17 performs a volume rendering process on the three-dimensional image acquired by the image acquisition unit 11 to generate a bronchial three-dimensional image that represents the form of the bronchus, and outputs the generated bronchial three-dimensional image to the display controller 20.
- The registration unit 18 acquires an actual endoscope image, that is, a two-dimensional endoscope image obtained by actually imaging the inside of the bronchus with the endoscope device 60, and performs a registration process between the actual endoscope image and a projection image. The projection image that is a target of the registration process is generated for each diverging point of the graph structure, for example, and the deviation amount of the projection image having the minimum deviation amount with respect to the actual endoscope image, among the plurality of generated projection images, is acquired as the registration result of the registration process.
- As the registration process, for example, rigid-body registration or non-rigid-body registration may be used (a minimal rigid-body sketch follows).
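The following is a standard least-squares rigid alignment (Kabsch/Procrustes) between matched 2D point sets, for example projected contour centres versus hole centres; the pairing of the points is assumed to be given, and the function is an illustrative stand-in for whatever rigid registration the apparatus actually uses. Its residual can serve as the deviation amount described above.

```python
import numpy as np

def rigid_register_2d(src, dst):
    """Rigid (rotation + translation) alignment of matched 2D point sets.
    Returns R (2x2), t (2,), and the mean residual after alignment."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # keep a proper rotation, no reflection
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = cd - R @ cs
    resid = np.linalg.norm((src @ R.T + t) - dst, axis=1).mean()
    return R, t, resid
```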
- The visual point information estimation unit 19 estimates visual point information (corresponding to the distal end position of the endoscope) of the actual endoscope image using the registration result of the registration unit 18. That is, as shown in FIG. 11, the visual point information estimation unit 19 estimates visual point information b of the actual endoscope image B on the basis of the deviation amount between a projection image A obtained through the registration process and the actual endoscope image B, and the visual point information of the projection image A.
- Regarding the visual point information b of the actual endoscope image B, for example, in a case where only the size of the contour information included in the projection image A and the size of the hole of the bronchus included in the actual endoscope image B differ from each other, the visual point information b can be estimated by moving the visual point information a of the projection image A with respect to the projection surface, on the basis of the scaling rate of the hole of the bronchus relative to the size of the contour information and the distance between the visual point information a and the projection surface. The method for estimating the visual point information b is not limited thereto, and various estimation methods based on geometrical relations may be used.
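One such geometrical relation, sketched under an assumed pinhole model: projected size is inversely proportional to the distance from the projection surface, so if the hole in the actual image appears `scale` times larger than the corresponding contour in the projection image, the actual viewpoint lies at distance d/scale from that surface. The function and its parameters are illustrative assumptions, not the patent's stated formula.

```python
import numpy as np

def estimate_view_point(a, view_dir, d, scale):
    """Move viewpoint a (of projection image A) along the viewing direction:
    from distance d to distance d/scale from the projection surface, giving
    an estimate of viewpoint b of the actual endoscope image B."""
    a = np.asarray(a, float)
    view_dir = np.asarray(view_dir, float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    return a + (d - d / scale) * view_dir   # scale > 1 means b is closer
```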
- The display controller 20 displays the projection image generated by the projection image generation unit 16, the bronchial three-dimensional image generated by the bronchial three-dimensional image generation unit 17, and the actual endoscope image acquired by the endoscope device 60 on the display device 30. The display controller 20 also displays the three-dimensional coordinates of the visual point information b estimated by the visual point information estimation unit 19 on the display device 30; the visual point information b may be displayed on the bronchial three-dimensional image.
- The display device 30 includes a liquid crystal display or the like. The display device 30 may also be configured as a touch panel and commonly used as the input device 40. The input device 40 includes a mouse, a keyboard, or the like, and receives various setting inputs from a user.
- FIG. 12 is a flowchart showing processes performed in this embodiment.
- First, the image acquisition unit 11 acquires a three-dimensional image (step S30). Next, the graph structure generation unit 12 generates a graph structure of a tubular structure included in the acquired three-dimensional image (step S32), and the contour information acquisition unit 13 acquires contour information of the tubular structure at each point on the graph structure (step S34). The projection image generation unit 16 then generates a projection image obtained by projecting contour information at the projection points specified by the process shown in FIG. 4 on a two-dimensional plane (step S36).
- Next, the image acquisition unit 11 acquires an actual endoscope image (step S38), and the registration unit 18 performs registration between the three-dimensional image and the actual endoscope image using the projection image (step S40). Further, the visual point information estimation unit 19 estimates visual point information of the actual endoscope image using the registration result (step S42). The display controller 20 then displays the projection image, the bronchial three-dimensional image, the actual endoscope image, and the estimated visual point information b on the display device 30 (step S44), and the procedure returns to step S38. In this way, registration between the three-dimensional image and the actual endoscope image acquired at each position while the endoscope moves through the bronchus is performed.
- As described above, in this embodiment, a graph structure of the bronchus included in a three-dimensional image is generated, contour information of the bronchus at each point on the graph structure is acquired, and registration between the three-dimensional image and an actual endoscope image is performed on the basis of the contour information. The contour information can be acquired with a small amount of computation compared with a case where a volume rendering process or the like is performed; accordingly, registration between a three-dimensional image and an actual endoscope image can be performed at high speed.
- Further, in this embodiment, visual point information inside the bronchus is acquired, one point on the graph structure is specified as a starting point on the basis of the visual point information, projection points are specified from the respective points on the graph structure while following the graph structure from the starting point, and a projection image obtained by projecting contour information at the projection points on a two-dimensional plane is generated. It is therefore possible to generate a projection image in which clear features remain, so that the registration process with respect to an actual endoscope image can be performed at high speed.
- The projection image generation unit 16 may acquire information on the branch to which each projection point belongs and add the acquired branch information to the projection image, thereby displaying the information on the branch on the projection image. FIG. 13 is a diagram showing an example in which "A" to "D", corresponding to branch information, are additionally displayed on a projection image.
- Further, as in the endoscope image diagnosis support system shown in FIG. 14, a hole contour extraction unit 21 that extracts the contour of a hole from the actual endoscope image may be provided in the image registration apparatus 10. The hole contour extraction unit 21 extracts the contour of a hole from an actual endoscope image to generate a contour image, as shown in FIG. 15. Specifically, the hole contour extraction unit 21 detects, from the actual endoscope image, a region in which pixels having a brightness equal to or smaller than a threshold value are circularly distributed, and extracts the contour of the detected region as the contour of a hole to generate the contour image (a minimal sketch follows).
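A sketch of that extraction using OpenCV: dark regions are thresholded and kept only when roughly circular. The brightness threshold, the circularity bound, and the minimum area are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def extract_hole_contours(frame_bgr, thresh=60, min_circularity=0.6):
    """Return contours of dark, roughly circular regions (bronchial holes)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, dark = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    holes = []
    for c in contours:
        area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
        if perim == 0:
            continue
        circularity = 4.0 * np.pi * area / (perim * perim)  # 1.0 for a circle
        if circularity >= min_circularity and area > 50:
            holes.append(c)
    return holes
```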
- Further, branch information may be added to the holes included in the actual endoscope image output from the endoscope device 60, on the basis of the contour information included in a projection image and the branch information added to that contour information. In a case where registration between a projection image and an actual endoscope image has been performed, as shown in FIG. 16, the contour information included in the projection image A can easily be associated with the holes included in the actual endoscope image B. Therefore, "A" to "D", corresponding to the branch information added to the respective pieces of contour information on the projection image A, may be added to the corresponding holes included in the actual endoscope image B, and the actual endoscope image B may then be displayed on the display device 30. The addition of the branch information to the actual endoscope image B may be performed by the display controller 20, or a dedicated processing unit for this purpose may be provided.
- In the embodiment described above, registration between a three-dimensional image and an actual endoscope image is performed; however, registration between two three-dimensional images may be performed instead. In this case, the image acquisition unit 11 acquires the two three-dimensional images that are the targets of registration, the graph structure generation unit 12 generates a graph structure of the tubular structure included in each of the two three-dimensional images, and the contour information acquisition unit 13 acquires contour information of the bronchus at each point on each graph structure. Further, the visual point information acquisition unit 14 acquires visual point information, the projection point specification unit 15 specifies projection points, and the projection image generation unit 16 generates a projection image using contour information at the final projection points, with respect to each of the two three-dimensional images. The bronchial three-dimensional image generation unit 17 performs a volume rendering process on each of the two three-dimensional images to generate bronchial three-dimensional images indicating the form of the bronchus.
- The registration unit 18 performs registration between the two three-dimensional images. Specifically, the registration unit 18 performs registration between a projection image (referred to as a first projection image) generated from one three-dimensional image (referred to as a first three-dimensional image) of the two three-dimensional images and a projection image (referred to as a second projection image) generated from the other three-dimensional image (referred to as a second three-dimensional image); that is, the registration unit 18 registers the second three-dimensional image with respect to the first three-dimensional image.
- The projection images that are targets of the registration process are generated, for example, at each diverging point of the graph structure, with respect to each of the two three-dimensional images. One projection image for registration is selected from the plurality of first projection images by an operator, for example through an input via the input device 40. Then, between the selected first projection image and the plurality of second projection images, the deviation amount relating to the second projection image having the minimum deviation amount with respect to the selected first projection image, among the plurality of second projection images, is acquired as the registration result of the registration process (see the selection sketch below).
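A tiny sketch of that minimum-deviation selection, assuming some deviation measure between two projection images; `deviation(a, b)` could be, for example, the `resid` returned by the rigid_register_2d sketch above. All names are illustrative.

```python
def best_match(first_proj, second_projs, deviation):
    """Pick the candidate second projection image with the minimum
    deviation from the selected first projection image."""
    return min(second_projs, key=lambda p: deviation(first_proj, p))
```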
- As the registration process, for example, rigid-body registration or non-rigid-body registration may be used.
- In the above description, the second three-dimensional image is registered with respect to the first three-dimensional image; conversely, one second projection image may be selected from the second three-dimensional image, and the first three-dimensional image may be registered with respect to the second three-dimensional image. The visual point information estimation unit 19 estimates visual point information of the second three-dimensional image, corresponding to the visual point information of the selected first projection image, using the registration result of the registration unit 18. The projection image generation unit 16 then re-generates a projection image relating to the second three-dimensional image on the basis of the estimated visual point information, and the display controller 20 displays the selected first projection image and the re-generated projection image relating to the second three-dimensional image on the display device 30.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Endoscopes (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
- The present application claims priority under 35 U.S.C. § 119 to Japanese Patent Application No. 2017-129243, filed on Jun. 30, 2017. Each of the above application(s) is hereby expressly incorporated by reference, in its entirety, into the present application.
- The present invention relates to an image registration apparatus, an image registration method, and a non-transitory computer readable recording medium storing an image registration program that perform registration between a three-dimensional image obtained by imaging a subject and an image different from the three-dimensional image.
- In recent years, according to advances in an imaging apparatus, a resolution of image data obtained through imaging using the imaging apparatus has been enhanced, and thus, detailed analysis of a subject has been available on the basis of the image data. For example, in multi slice CT (Multi Detector-row Computed Tomography), a plurality of tomographic images can be captured at a time, and the tomographic images can be captured with thin slice thicknesses. As the slice thicknesses decrease, a resolution of a three-dimensional image in a body axis direction in which the plurality of tomographic images are layered is enhanced, and thus, it is possible to obtain a more detailed three-dimensional image. By displaying and analyzing such a three-dimensional image, it is possible to find a lesion or the like that has not been easily found until now.
- As a display method that uses a three-dimensional image as described above, there is a virtual endoscope display (for example, see WO2014/141968A and JP1999-120327A (JP-H11-120327A)). The virtual endoscope display is a method for setting a visual point position in a lumen and generating and displaying a perspective projection image on the basis of the set visual point position. The displayed projection image is a virtual endoscope image. In the virtual endoscope display, a user successively changes visual point positions, and accordingly, it is possible to provide an image as if an endoscope camera performs imaging while moving inside the body.
- Particularly, in bronchoscopy that is an inspection using an endoscope, since branches of the bronchus are extremely complicated, a technique for navigating the endoscope while referring to the above-described virtual endoscope display as a “map” is performed. In this case, since it is troublesome to manually sequentially change the virtual endoscope display in accordance with actual movement of the endoscope, a technique for estimating a place in the body where a distal end of the endoscope is present to create a virtual endoscope image is performed.
- For example, WO2014/141968A discloses, in order to estimate a distal end position of an endoscope, a technique for performing a registration process with respect to a virtual endoscope image and an actual endoscope image obtained through actual imaging using an endoscope. Further, in JP2013-150650A discloses a technique for acquiring tubular tissue shape data indicating the shape of a tubular tissue in a subject from three-dimensional image data of the subject, acquiring endoscope path data indicating a path of an endoscope inserted in the subject, performing matching between tree-structure data that is the tubular tissue shape data and the endoscope path data, and displaying a virtual endoscope image obtained on the basis of a matching result and an actual endoscope image. Further, JP2013-192569A discloses a technique for performing structure matching by extracting a graph structure in calculating a correspondence positional relationship between two three-dimensional images relating to the same subject that are comparison targets.
- Here, in order to estimate the distal end position of the endoscope for navigation, it is necessary to generate virtual endoscope images from a plurality of visual points, to perform registration between the plurality of virtual endoscope images and an actual endoscope image, and to select a virtual endoscope image that matches most closely the actual endoscope image. Further, as disclosed in JP2013-192569A, in calculating the correspondence relationship between two three-dimensional images, similarly, there is a case where virtual endoscope images are generated from two three-dimensional images and matching between the virtual endoscopes is performed.
- In order to enhance estimation accuracy and stability of a distal end position of an endoscope and accuracy of registration between two three-dimensional images, it is necessary to generate a large number of virtual endoscope images. However, a volume rendering process that is a process of generating a virtual endoscope image needs a large amount of computation. Thus, it takes long time to generate the virtual endoscope image. Further, in a case where registration between virtual endoscope images and an actual endoscope image is performed, similarly, it takes long time for computation. As a result, it is difficult to navigate the distal end position of the endoscope in real time.
- The invention has been made in consideration of the above-mentioned problems, and an object of the invention is to provide a technique capable of performing registration between a three-dimensional image and an image different from the three-dimensional image at high speed, such as registration between a virtual endoscope image and an actual endoscope image, for example.
- According to an aspect of the invention, there is provided an image registration apparatus comprising: an image acquisition unit that acquires a three-dimensional image obtained by imaging a subject and an image different from the three-dimensional image; a graph structure generation unit that generates a graph structure of a tubular structure included in the three-dimensional image; a contour information acquisition unit that acquires contour information of the tubular structure at each point on the graph structure; and a registration unit that performs registration between the three-dimensional image and the different image on the basis of the contour information.
- In the image registration apparatus according to this aspect of the invention, the different image may be an actual endoscope image acquired using an endoscope inserted in the tubular structure.
- In the image registration apparatus according to this aspect of the invention, the different image may be a three-dimensional image, different from the three-dimensional image, obtained by imaging the subject.
- Here, the “different three-dimensional image” means a three-dimensional image captured at a different imaging time, a three-dimensional image captured by a different modality used in imaging, or the like. The three-dimensional image captured by the different modality used in imaging may be a CT image acquired through a CT apparatus, a magnetic resonance imaging (MRI) image acquired through an MRI apparatus, a three-dimensional ultrasound image acquired through a three-dimensional echo ultrasonic apparatus, or the like.
- The image registration apparatus according to this aspect of the invention may further comprise: a visual point information acquisition unit that acquires visual point information inside the tubular structure; a projection point specification unit that specifies a projection point from respective points on the graph structure on the basis of the visual point information and the graph structure; and a projection image generation unit that generates a projection image obtained by projecting the contour information at the projection point on a two-dimensional plane, in which the registration unit may perform registration between the three-dimensional image and the different image using the projection image.
- The image registration apparatus according to this aspect of the invention may further comprise: a visual point information acquisition unit that acquires visual point information inside the tubular structure; a projection point specification unit that specifies a projection point from respective points on the graph structure on the basis of the visual point information and the graph structure; a projection image generation unit that generates a projection image obtained by projecting the contour information at the projection point on a two-dimensional plane, and a hole contour extraction unit that extracts a contour of a hole included in the actual endoscope image, in which the registration unit may perform registration between the three-dimensional image and the different image using the projection image and the extracted contour of the hole.
- In the image registration apparatus according to this aspect of the invention, the projection point specification unit may specify one point on the graph structure as a starting point, may specify points included in a predetermined range as projection candidate points while following the graph structure from the starting point, and may specify the projection point from the projection candidate points.
- In the image registration apparatus according to this aspect of the invention, the projection point specification unit may specify the projection point on the basis of shape information of the contour information.
- In the image registration apparatus according to this aspect of the invention, the projection image generation unit may acquire information on a branch to which the projection point belongs and add the information on the branch to the projection image.
- The image registration apparatus according to this aspect of the invention may further comprise: a visual point information estimation unit that estimates visual point information of the different image on the basis of a result of the registration process between the projection image and the different image and visual point information of the projection image.
- In the image registration apparatus according to this aspect of the invention, the tubular structure may be the bronchus.
- According to another aspect of the invention, there is provided an image registration method comprising: acquiring a three-dimensional image obtained by imaging a subject and an image different from the three-dimensional image; generating a graph structure of a tubular structure included in the three-dimensional image; acquiring contour information of the tubular structure at each point on the graph structure; and performing registration between the three-dimensional image and the different image on the basis of the contour information.
- According to still another aspect of the invention, there may be provided a non-transitory computer readable recording medium storing a program that causes a computer to execute the image registration method according to the above-mentioned aspect of the invention.
- According to still another aspect of the invention, there is provided an image registration apparatus comprising: a memory that stores a command to be executed by a computer; and a processor configured to executed the stored command, in which the processor executes a process of acquiring a three-dimensional image obtained by imaging a subject and an image different from the three-dimensional image; a process of generating a graph structure of a tubular structure included in the three-dimensional image; a process of acquiring contour information of the tubular structure at each point on the graph structure; and a process of performing registration between the three-dimensional image and the different image on the basis of the contour information.
- According to the invention, a three-dimensional image obtained by imaging a subject and an image different from the three-dimensional image are acquired, and a graph structure of a tubular structure included in the three-dimensional image is generated. Further, contour information of the tubular structure at each point on the graph structure is acquired, and registration between the three-dimensional image and the different image is performed on the basis of the contour information. Here, the contour information may be acquired with a small amount of computation compared with a case where a volume rendering process or the like is performed. Accordingly, it is possible to perform registration between the three-dimensional image and the different image at high speed.
-
FIG. 1 is a block diagram showing a schematic configuration of an endoscope image diagnosis support system using an embodiment of an image registration apparatus of the invention. -
FIG. 2 is a diagram showing an example of a graph structure. -
FIG. 3 is a diagram showing an example of contour information of the bronchus at each point on the graph structure. -
FIG. 4 is a flowchart illustrating a projection point specification method. -
FIG. 5 is a diagram illustrating the projection point specification method. -
FIG. 6 is a diagram showing an example of contour information at each point on a graph structure. -
FIG. 7 is a diagram showing an example of a projection image. -
FIG. 8 is a diagram showing an example of a projection image generated using contour information of a first diverging bronchial tube. -
FIG. 9 is a diagram showing an example of a projection image generated so that pieces of contour information do not overlap each other. -
FIG. 10 is a diagram showing an example of a projection image generated using contour information at a projection point that is finally specified. -
FIG. 11 is a diagram illustrating a method for estimating visual point information of an actual endoscope image on the basis of a deviation amount between a projection image and an actual endoscope image and visual point information of a projection image. -
FIG. 12 is a flowchart showing processes performed in this embodiment. -
FIG. 13 is a diagram showing an example in which branch information is additionally displayed with respect to a projection image. -
FIG. 14 is a block diagram showing a schematic configuration of an endoscope image diagnosis support system using another embodiment of the image registration apparatus of the invention. -
FIG. 15 is a diagram showing an example of a contour image obtained by extracting hole contours from an actual endoscope image. -
FIG. 16 is a diagram illustrating a method for estimating holes (contour information) included in an actual endoscope image corresponding to contour information included in a projection image. - Hereinafter, an endoscope image diagnosis support system using an embodiment of an image registration apparatus, an image registration method, and a non-transitory computer readable recording medium storing an image registration program of the invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a block diagram showing a schematic configuration of an endoscope image diagnosis support system of this embodiment. - An endoscope image
diagnosis support system 1 according to this embodiment includes an image registration apparatus 10, a display device 30, an input device 40, a three-dimensional image storage server 50, and an endoscope device 60, as shown in FIG. 1. - The
image registration apparatus 10 is configured by installing an image registration program of this embodiment into a computer. The image registration program may be recorded on a recording medium such as a digital versatile disc (DVD) or a compact disc read only memory (CD-ROM) for distribution, and may be installed into a computer from the recording medium. Alternatively, the image registration program may be stored in a storage device of a server computer connected to a network or in a network storage in a state of being accessible from the outside, and may be downloaded and installed into a computer to be used by a doctor as necessary. - The
image registration apparatus 10 includes a central processing unit (CPU), a semiconductor memory, a storage device such as a hard disk or a solid state drive (SSD) in which the above-described image registration program is installed, and the like. Through the above-mentioned hardware, an image acquisition unit 11, a graph structure generation unit 12, a contour information acquisition unit 13, a visual point information acquisition unit 14, a projection point specification unit 15, a projection image generation unit 16, a bronchial three-dimensional image generation unit 17, a registration unit 18, a visual point information estimation unit 19, and a display controller 20 as shown in FIG. 1 are configured. Further, as the image registration program installed in the hard disk is executed by the central processing unit, the respective units operate. The image registration apparatus 10 may be provided with a plurality of processors or processing circuits that respectively perform an image acquisition process, a graph structure generation process, a contour information acquisition process, a visual point information acquisition process, a projection point specification process, a projection image generation process, a bronchial three-dimensional image generation process, a registration process, a visual point information estimation process, and a display control process. - The
image acquisition unit 11 acquires a three-dimensional image of a subject obtained through imaging in advance before an operation or an inspection using an endoscope device, for example, and an actual endoscope image that is a two-dimensional endoscope image of the inside of the bronchus obtained by imaging using the endoscope device 60. As the three-dimensional image, for example, an image represented by volume data reconstructed from slice data output from a CT apparatus, an MRI apparatus, or the like, or an image represented by volume data output from a multi slice (MS) CT apparatus or a cone beam CT apparatus may be used. Further, a three-dimensional ultrasound image acquired by a three-dimensional echo ultrasonic apparatus may be used. The three-dimensional image is stored in advance together with identification information of the subject in the three-dimensional image storage server 50, and the image acquisition unit 11 reads out, from the three-dimensional image storage server 50, a three-dimensional image corresponding to identification information of a subject input through the input device 40. - The graph
structure generation unit 12 receives an input of a three-dimensional image acquired by the image acquisition unit 11, and generates a graph structure of a tubular structure included in the input three-dimensional image. In this embodiment, a graph structure of the bronchus is generated as the graph structure of the tubular structure. Hereinafter, an example of a method for generating a graph structure will be described. - In the bronchus included in the three-dimensional image, pixels corresponding to the inside of the bronchus show an air region, and thus are displayed as a region indicating low CT values (pixel values) on a CT image, whereas a bronchial wall may be considered as a cylindrical or tubular structure indicating relatively high CT values. Thus, the bronchus is extracted by performing structure analysis of a shape based on a distribution of the CT values for each pixel.
- The bronchus diverges in multiple stages, and the diameter of a bronchial tube becomes smaller toward its end. In order to enable detection of bronchial tubes of different sizes, Gaussian pyramid images obtained by multi-resolution conversion of the three-dimensional image, that is, a plurality of three-dimensional images having different resolutions, are generated in advance, and a detection algorithm is scanned over each of the generated Gaussian pyramid images to detect tubular structures having different sizes.
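By way of a non-limiting illustration only (not part of the embodiment), the following Python sketch shows one way such a Gaussian pyramid may be built; the number of levels, the smoothing parameter sigma, and the use of SciPy are assumptions of the illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(volume, levels=3, sigma=1.0):
    """Build a multi-resolution pyramid of a 3-D CT volume.

    Each level is smoothed and then downsampled to half resolution,
    so that one tubular-structure detector can respond to bronchial
    tubes of different diameters across the levels.
    """
    pyramid = [np.asarray(volume, dtype=np.float32)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyramid[-1], sigma=sigma)
        pyramid.append(zoom(smoothed, 0.5, order=1))  # trilinear downsampling
    return pyramid
```

Running the same detector on every level of such a pyramid is what allows a single tube model to respond to bronchial tubes of different diameters.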
- First, a Hessian matrix is calculated for each pixel of the three-dimensional image at each resolution, and whether the pixel corresponds to the inside of a tubular structure is determined from the magnitude correlation of the eigenvalues of the Hessian matrix. The Hessian matrix is a matrix whose elements are the second-order partial differential coefficients of the density values along the respective axes (the x-axis, the y-axis, and the z-axis of the three-dimensional image), and is the following 3×3 matrix.
$$
\nabla^2 I = \begin{pmatrix}
\frac{\partial^2 I}{\partial x^2} & \frac{\partial^2 I}{\partial x \, \partial y} & \frac{\partial^2 I}{\partial x \, \partial z} \\
\frac{\partial^2 I}{\partial y \, \partial x} & \frac{\partial^2 I}{\partial y^2} & \frac{\partial^2 I}{\partial y \, \partial z} \\
\frac{\partial^2 I}{\partial z \, \partial x} & \frac{\partial^2 I}{\partial z \, \partial y} & \frac{\partial^2 I}{\partial z^2}
\end{pmatrix}
$$

where I denotes the density value of the three-dimensional image.
- In a case where the eigenvalues of the Hessian matrix at a certain pixel are λ1, λ2, and λ3, and two of the eigenvalues are large while the remaining one is close to 0, for example, when λ3>>λ1, λ2>>λ1, and λ1≅0 are satisfied, it is known that the pixel belongs to a tubular structure. Further, the eigenvector corresponding to the smallest eigenvalue (λ1≅0) of the Hessian matrix coincides with the main axial direction of the tubular structure.
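As a hedged sketch of this eigenvalue analysis (an illustration, not the embodiment's implementation), the code below estimates the Hessian with Gaussian derivative filters and applies the magnitude-correlation test; the ratio threshold is a hypothetical value.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tubular_candidates(volume, sigma=1.0, ratio=0.25):
    """Mark voxels whose Hessian eigenvalues suggest a tubular structure.

    The six independent second-order derivatives are obtained with
    Gaussian derivative filters; eigenvalues are sorted by absolute
    value so that lambda1 is the one expected to be close to zero
    inside a tube (|lambda1| << |lambda2|, |lambda3|).
    """
    deriv = {}
    for i, j in [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        deriv[(i, j)] = gaussian_filter(
            volume.astype(np.float32), sigma, order=order)

    hessian = np.stack([
        np.stack([deriv[(0, 0)], deriv[(0, 1)], deriv[(0, 2)]], axis=-1),
        np.stack([deriv[(0, 1)], deriv[(1, 1)], deriv[(1, 2)]], axis=-1),
        np.stack([deriv[(0, 2)], deriv[(1, 2)], deriv[(2, 2)]], axis=-1),
    ], axis=-2)                                  # (..., 3, 3) symmetric

    eigvals = np.linalg.eigvalsh(hessian)        # per-voxel eigenvalues
    mag = np.sort(np.abs(eigvals), axis=-1)      # |l1| <= |l2| <= |l3|
    return mag[..., 0] <= ratio * mag[..., 1]    # l1 near 0, l2 and l3 large
```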
- The bronchus may be represented by the graph structure, but due to the influence of a tumor or the like, the tubular structures extracted as described above are not necessarily extracted as one graph structure in which they are all connected to each other. Thus, after the extraction of tubular structures from the entire three-dimensional image is terminated, whether the respective extracted tubular structures are within a predetermined distance of each other and whether the angle formed by the direction of a base line connecting certain points on two extracted tubular structures and the main axial direction of each tubular structure is within a predetermined angle are evaluated; whether a plurality of tubular structures are connected to each other is thereby determined, and the connection correlation of the extracted tubular structures is reestablished. Through this reestablishment, the extraction of the graph structure of the bronchus is completed.
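The distance and angle criteria for reestablishing the connection correlation may be sketched as follows; the 5 mm and 30° thresholds are placeholders for the "predetermined" values, which the specification does not fix.

```python
import numpy as np

def should_connect(p1, axis1, p2, axis2, max_dist=5.0, max_angle_deg=30.0):
    """Evaluate the distance and angle criteria between two fragments.

    p1, p2       : representative points (3-vectors) on the two fragments
    axis1, axis2 : unit main-axis directions obtained from the Hessian
    """
    base = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    dist = np.linalg.norm(base)
    if dist == 0.0 or dist > max_dist:
        return False
    base /= dist
    cos_min = np.cos(np.deg2rad(max_angle_deg))
    # The base line must roughly follow the main axis of both fragments.
    return abs(base @ axis1) >= cos_min and abs(base @ axis2) >= cos_min
```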
- Further, by classifying the extracted graph structure into a starting point, end points, diverging points, and branches, and by connecting the starting point, the end points and the diverging points using the branches, it is possible to obtain a graph structure indicating the bronchus. In this embodiment, feature amounts such as the diameter of the bronchus at each position of the graph structure and the length of each branch (a length between diverging points of the bronchus) are also acquired together with the graph structure.
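Assuming the graph is held in a networkx structure (an assumption of this sketch, not of the embodiment), the classification into end points, diverging points, and branches can be expressed compactly:

```python
import networkx as nx

def classify_bronchial_graph(g: nx.Graph, start):
    """Classify nodes of a bronchial centerline graph by their degree.

    Degree-1 nodes other than the starting point are end points,
    nodes of degree 3 or more are diverging points, and each edge
    is a branch; a per-edge 'length' attribute is assumed to hold
    the branch length between diverging points.
    """
    end_points = [n for n in g.nodes if g.degree[n] == 1 and n != start]
    diverging_points = [n for n in g.nodes if g.degree[n] >= 3]
    branch_lengths = {e: g.edges[e].get("length", 1.0) for e in g.edges}
    return end_points, diverging_points, branch_lengths
```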
FIG. 2 is a diagram showing an example of such a graph structure. In FIG. 2, Sp represents a starting point, a diverging point Bp is indicated by a white circle, an end point Ep is indicated by a black circle, and a branch E is indicated by a line. - The method for generating a graph structure is not limited to the above-described method, and other known methods may be used.
- The contour
information acquisition unit 13 acquires contour information of the bronchus at each point on the graph structure. The contour information acquisition unit 13 of this embodiment acquires contour information of a tubular structure detected in a case where the graph structure of the bronchus is generated as described above. The contour information is acquired at each point of the graph structure, and the interval between the respective points may be set smaller than the interval between diverging points of the bronchus; for example, it is preferable to set the interval to about 1 mm to 2 mm. FIG. 3 is a diagram showing an example of contour information of the bronchus at each point on a graph structure. In FIG. 3, the graph structure of the bronchus is also included.
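A minimal sketch of sampling points along a branch at such an interval is shown below; the 1.5 mm step is one value inside the stated 1 mm to 2 mm range, and the (N, 3) polyline representation of a centerline is an assumption of the illustration.

```python
import numpy as np

def resample_centerline(polyline, step_mm=1.5):
    """Resample a branch centerline at a fixed arc-length interval.

    polyline: (N, 3) array of ordered centerline coordinates in mm.
    Contour information would then be acquired at each returned point.
    """
    polyline = np.asarray(polyline, dtype=float)
    seg_len = np.linalg.norm(np.diff(polyline, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])   # cumulative length
    targets = np.arange(0.0, arc[-1], step_mm)
    return np.stack(
        [np.interp(targets, arc, polyline[:, k]) for k in range(3)], axis=1)
```

- The visual point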
information acquisition unit 14 acquires visual point information inside the bronchus. The visual point information is set and input by a user using the input device 40, and the visual point information acquisition unit 14 acquires the visual point information input through the input device 40. The visual point information represents three-dimensional coordinates in a predetermined coordinate system in the bronchus. The setting and input of the visual point information may be designated by the user using the input device 40 such as a mouse on a three-dimensional image of the bronchus displayed on the display device 30, for example.
- The projection
point specification unit 15 specifies one point on the graph structure as a starting point on the basis of the visual point information, and specifies a projection point from the respective points on the graph structure while following the graph structure from the starting point. Hereinafter, the specification of the projection point in the projection point specification unit 15 will be described with reference to a flowchart shown in FIG. 4 and FIGS. 5 to 10. - The projection
point specification unit 15 first specifies, as a starting point Ns, the point on the graph structure that is closest to the visual point information S set and input by the user, as shown in FIG. 5 (step S10). Further, the projection point specification unit 15 specifies points included in a predetermined range NR as projection candidate points while following the graph structure toward the downstream side (the side opposite to the base end side) of the bronchus from the starting point Ns (step S12). As the range NR, a range within a predetermined distance from the starting point Ns may be used. Alternatively, a range in which the number of diverging points to be passed when following the graph structure from the starting point Ns is a predetermined number may be used as the range NR.
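A possible reading of steps S10 and S12 is sketched below with networkx; the 40 mm cutoff stands in for the predetermined range NR, the edge attribute "length" is assumed to store branch lengths in mm, and, for brevity, the sketch does not restrict the traversal to the downstream direction as the embodiment does (a directed graph would enforce that).

```python
import networkx as nx
import numpy as np

def candidate_points(g, positions, viewpoint, max_depth_mm=40.0):
    """Specify the starting point Ns and the candidate points in range NR.

    positions maps each graph node to its 3-D coordinate (numpy array);
    viewpoint is the 3-D coordinate of the visual point information S.
    """
    start = min(g.nodes,
                key=lambda n: np.linalg.norm(positions[n] - viewpoint))
    depths = nx.single_source_dijkstra_path_length(
        g, start, cutoff=max_depth_mm, weight="length")
    return start, [n for n in depths if n != start]
```

- Then, the projection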
point specification unit 15 specifies some of the plurality of projection candidate points included in the predetermined range NR as projection points on the basis of a predetermined projection point condition (step S14). As the predetermined projection point condition, for example, a condition that the central point, the initial point, or the final point of each branch in the graph structure included in the range NR is a projection point may be used. The initial point and the final point refer to the first point and the last point of each branch when following the graph structure toward the downstream side. Further, a condition that, with respect to each branch in the graph structure included in the range NR, an initial point spaced apart from a diverging point by a predetermined distance or longer is specified as a projection point may be used as the projection point condition. In FIG. 5, an example of the projection points Np specified in step S14 is indicated by broken line circles. - Then, the projection
point specification unit 15 confirms whether the projection points specified in step S14 satisfy a predetermined projection image condition (step S16). FIG. 6 is a diagram showing an example of contour information at each point on a graph structure in the range NR. In FIG. 6, for ease of illustration, contour information at some of the points is not illustrated. Further, the numerical values of 0 to 6 shown in FIG. 6 represent branch numbers assigned to the respective branches, and the numerical value shown in the vicinity of each piece of contour information represents a node number assigned to each point on the graph structure. Here, for example, a method for specifying projection points in a case where the above-described starting point Ns is a node 80 shown in FIG. 6 will be described. - First, in a case where the projection points specified in step S14 are a
node 98 and a node 99 shown in FIG. 6, contour information at the two nodes 98 and 99 is projected on a two-dimensional plane to generate a projection image. Then, a projection image as shown in FIG. 7 is obtained. The two-dimensional plane is a plane orthogonal to a body axis of the subject. In the projection image shown in FIG. 7, two pieces of contour information overlap each other, and thus the projection image is not preferable as a projection image. This is because the projection points specified in step S14 are excessively close to a diverging point. - In a case where two pieces of contour information overlap each other as in the projection image shown in
FIG. 7, the projection point specification unit 15 changes the projection points, generates a projection image again using contour information at the changed projection points, and confirms whether the pieces of contour information on the re-generated projection image overlap each other. Specifically, the projection point specification unit 15 changes at least one of the projection points of the nodes 98 and 99 to a projection point distant from the diverging point, and generates a projection image again using contour information at the changed projection point. For example, the projection point specification unit 15 changes the projection point of the node 98 to a projection point of a node 107, and changes the projection point of the node 99 to a projection point of a node 110. - That is, the projection
point specification unit 15 confirms whether a projection image condition that pieces of contour information on a projection image do not overlap each other is satisfied. In a case where the projection image condition is not satisfied (NO in step S16), the projection point specification unit 15 changes the projection points on the basis of a predetermined condition (step S18). Further, the projection point specification unit 15 generates a projection image again using contour information at the changed projection points, confirms whether the projection image condition is satisfied, and repeats the change of the projection points and the generation of the projection image until the projection image condition is satisfied. FIG. 8 is a diagram showing a projection image generated using contour information at projection points of a node 123 and a node 119 that satisfy the projection image condition. - Further, with respect to a
branch 3 and a branch 4 connected to a tip of a branch 1 shown in FIG. 6, and a branch 5 and a branch 6 connected to a branch 2, the projection point specification unit 15 repeats, in a similar way to the above description, the change of projection points and the generation of a projection image until the projection image condition that pieces of contour information on a projection image do not overlap each other is satisfied. As a result, it is assumed that the projection image shown in FIG. 9 is generated. The projection image shown in FIG. 9 includes contour information at a node 289 of the branch 4 and at a node 296 of the branch 3 protruding from the contour information at a node 123 of the branch 1 that is their parental branch, and contour information at a node 214 of the branch 6 protruding from the contour information at a node 119 of the branch 2 that is its parental branch. Such contour information is not viewed on an actual endoscope image obtained by actually imaging the inside of the bronchus from the node 80 that is the starting point, and it is thus preferable that such contour information is deleted. - Accordingly, the projection
point specification unit 15 specifies, with respect to a child branch, projection points that satisfy a projection image condition that the contour information of the child branch is included in the contour information at a node of its parental branch. The projection image condition relating to the child branch is not limited thereto. For example, even though the contour information of a child branch is not included in the contour information at a node of the parental branch, in a case where the distance between the contour information at the node of the parental branch and the contour information of the child branch is within a predetermined threshold value, the contour information may remain as a final projection point without being deleted. Specifically, the contour information at the node 214 of the branch 6 shown in FIG. 9 may remain since it is close to the contour information at the node 119 of the branch 2. Conversely, even though the contour information at a node of a child branch is included in the contour information at a node of the parental branch, the contour information may be deleted in a case where the shape of the contour information is an ellipse of which the ratio of the short diameter to the long diameter is equal to or smaller than a predetermined threshold value, as in the contour information at the node 289 shown in FIG. 9, in a case where the contour information at the node of the child branch cannot be projected, or in a case where the ratio of the magnitude of the contour information at the node of the child branch to the magnitude of the contour information at the node of the parental branch is equal to or smaller than a threshold value and the contour information at the node of the child branch is thus extremely small. - The projection
point specification unit 15 repeats the change and the deletion of the projection points so that the projection image condition is satisfied, as described above, to specify final projection points (step S20). - Whether or not the above-described projection image condition is satisfied may be confirmed by using shape information of contour information at projection points. Specifically, the confirmation may be performed using, as the shape information, diameters of (radii, diameters, short diameters, long diameters, or the like) pieces of contour information and a distance between centers thereof, for example.
- In the above description, a projection image is once generated using contour information at provisionally specified projection points, and whether the generated projection image satisfies a projection image condition to specify final projection points, but it is not essential that the projection image is generated. For example, on the basis of a positional correlation of the provisionally specified projection points on a three-dimensional coordinate space and the magnitudes of the contour information at the provisionally specified projection points, it may be confirmed whether the projection image condition is satisfied.
- Further, in the above description, whether the contour information at the provisionally specified projection points satisfies the projection image condition is confirmed to specify final projection points, but it is not essential that the projection image condition is confirmed. For example, the projection points specified on the basis of the projection point condition in step S14 may be set as final projection points, and a projection image may be generated using contour information at the final projection points.
- Returning to
FIG. 1 , the projectionimage generation unit 16 generates a projection image using the contour information at the final projection points specified by the projectionpoint specification unit 15.FIG. 10 is a diagram showing a projection image generated using contour information at projection points that are finally specified. - The bronchial three-dimensional
image generation unit 17 performs a volume rendering process with respect to the three-dimensional image acquired by the image acquisition unit 11 to generate a bronchial three-dimensional image that represents the form of the bronchus, and outputs the generated bronchial three-dimensional image to the display controller 20. - The
registration unit 18 acquires an actual endoscope image, that is, a two-dimensional endoscope image obtained by actually imaging the inside of the bronchus with the endoscope device 60, and performs a registration process between the actual endoscope image and a projection image. The projection image that is a target of the registration process is generated for each diverging point of the graph structure, for example, and the deviation amount of the projection image having the minimum deviation amount with respect to the actual endoscope image among the plurality of generated projection images is acquired as the registration result of the registration process. As the registration process, for example, rigid-body registration or non-rigid-body registration may be used.
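As one hedged illustration of rigid-body registration between a projection image and an actual endoscope image, the sketch below aligns projected contour points to a binary contour mask by minimizing a distance-transform score; the optimizer, the scoring, and the translation-plus-rotation parameterization are all choices of the illustration, not of the specification.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.optimize import minimize

def rigid_deviation(proj_points, endo_mask):
    """Rigidly register projected contour points to an endoscope mask.

    proj_points : (N, 2) array of contour points in (row, col) order
    endo_mask   : boolean image, True on the extracted hole contours
    Returns the optimal (ty, tx, theta) and the residual deviation.
    """
    dist = distance_transform_edt(~endo_mask)  # distance to nearest contour

    def cost(params):
        ty, tx, theta = params
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        moved = proj_points @ rot.T + np.array([ty, tx])
        idx = np.clip(np.round(moved).astype(int),
                      0, np.array(dist.shape) - 1)
        return dist[idx[:, 0], idx[:, 1]].mean()  # mean contour distance

    result = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
    return result.x, result.fun
```

- The visual point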
information estimation unit 19 estimates visual point information (corresponding to a distal end position of the endoscope) of the actual endoscope image using the registration result of the registration unit 18. That is, as shown in FIG. 11, the visual point information estimation unit 19 estimates visual point information b of the actual endoscope image B on the basis of the deviation amount between a projection image A obtained through the registration process and the actual endoscope image B, and the visual point information of the projection image A. As a method for estimating the visual point information b of the actual endoscope image B, for example, in a case where only the magnitude of the contour information included in the projection image A and the size of a hole of the bronchus included in the actual endoscope image B differ from each other, it is possible to estimate the visual point information b by moving the visual point information a of the projection image A with respect to the projection surface, on the basis of the scaling rate of the hole of the bronchus with respect to the magnitude of the contour information and the distance between the visual point information a and the projection surface. The method for estimating the visual point information b is not limited thereto, and various estimation methods based on geometrical relations may be used.
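For the pure-scaling case described above, the estimation reduces to similar triangles: if the hole appears scale times larger than the projected contour, the viewpoint-to-surface distance shrinks by that factor. A minimal sketch, under that assumption only:

```python
import numpy as np

def estimate_viewpoint(a, plane_point, scale):
    """Estimate the endoscope viewpoint b for the pure-scaling case.

    a           : visual point of the projection image (3-vector)
    plane_point : foot of the view axis on the projection surface
    scale       : hole size in the endoscope image divided by the
                  size of the corresponding projected contour
    Similar triangles give d_b = d_a / scale, i.e. a larger hole
    means the endoscope tip is closer to the projection surface.
    """
    a = np.asarray(a, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    return plane_point - (plane_point - a) / scale
```

- The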
display controller 20 displays a projection image generated by the projection image generation unit 16, a bronchial three-dimensional image generated by the bronchial three-dimensional image generation unit 17, and an actual endoscope image acquired by the endoscope device 60 on the display device 30. Here, the display controller 20 displays the three-dimensional coordinates of the visual point information b estimated by the visual point information estimation unit 19 on the display device 30. Further, the visual point information b may be displayed on the bronchial three-dimensional image displayed on the display device 30. - The
display device 30 includes a liquid crystal display or the like. Further, the display device 30 may be configured as a touch panel and may be commonly used as the input device 40. - The
input device 40 includes a mouse, a keyboard, or the like, and receives various setting inputs from a user. - Next, a process performed in this embodiment will be described.
FIG. 12 is a flowchart showing processes performed in this embodiment. First, the image acquisition unit 11 acquires a three-dimensional image (step S30). Then, the graph structure generation unit 12 generates a graph structure of a tubular structure included in the three-dimensional image acquired by the image acquisition unit 11 (step S32). Further, the contour information acquisition unit 13 acquires contour information of the tubular structure at each point on the graph structure (step S34). Further, the projection image generation unit 16 generates a projection image obtained by projecting contour information at the projection points specified by the process shown in FIG. 4 on a two-dimensional plane (step S36). - Then, the
image acquisition unit 11 acquires an actual endoscope image (step S38), and the registration unit 18 performs registration between the three-dimensional image and the actual endoscope image using the projection image (step S40). Further, the visual point information estimation unit 19 estimates visual point information of the actual endoscope image using the registration result of the registration unit 18 (step S42). Further, the display controller 20 displays the projection image, the bronchial three-dimensional image, the actual endoscope image, and the visual point information b of the actual endoscope image estimated by the visual point information estimation unit 19 on the display device 30 (step S44), and then the procedure returns to step S38. Thus, registration between an actual endoscope image acquired at each position while the endoscope is moving in the bronchus and the three-dimensional image is performed. - As described above, according to this embodiment, a graph structure of the bronchus included in a three-dimensional image is generated, and contour information of the bronchus at each point on the graph structure is acquired. Further, registration between the three-dimensional image and an actual endoscope image is performed on the basis of the contour information. Here, the contour information may be acquired with a small amount of computation compared with a case where a volume rendering process or the like is performed. Accordingly, according to this embodiment, it is possible to perform registration between a three-dimensional image and an actual endoscope image at high speed.
- In the above-described embodiment, the projection
image generation unit 16 may acquire information on a branch to which a projection point belongs and may add the acquired branch information to a projection image, to thereby display the information on the branch on the projection image. FIG. 13 is a diagram showing an example in which "A" to "D" that correspond to branch information are additionally displayed with respect to a projection image. - Further, in the above-described embodiment, in the
registration unit 18, in a case where a registration process is performed, an actual endoscope image output from the endoscope device 60 is used, but the invention is not limited thereto. For example, as shown in FIG. 14, a hole contour extraction unit 21 that extracts contours of holes from the actual endoscope image may be provided in the image registration apparatus 10. The hole contour extraction unit 21 extracts the contours of holes from an actual endoscope image to generate a contour image, as shown in FIG. 15. Specifically, the hole contour extraction unit 21 detects, from the actual endoscope image, regions where pixels having a brightness that is equal to or smaller than a threshold value are circularly distributed, and extracts the contour of each detected region as the contour of a hole to generate the contour image. Thus, it is possible to perform the registration process between a projection image and an actual endoscope image performed by the registration unit 18 at high speed.
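A hedged OpenCV sketch of such hole contour extraction is shown below; the brightness threshold and the circularity and area limits are illustrative values, not values from the specification.

```python
import cv2
import numpy as np

def extract_hole_contours(gray, thresh=60, min_circularity=0.6, min_area=20):
    """Extract dark, roughly circular hole contours from an endoscope frame.

    gray: grayscale uint8 image of the actual endoscope view.
    """
    dark = (gray <= thresh).astype(np.uint8)           # low-brightness mask
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    holes = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if area < min_area or perimeter == 0:
            continue
        circularity = 4.0 * np.pi * area / (perimeter * perimeter)
        if circularity >= min_circularity:             # keep circular regions
            holes.append(c)
    return holes
```

- In addition, in the above-described embodiment, branch information may be added with respect to a hole included in an actual endoscope image output from the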
endoscope device 60, on the basis of contour information included in a projection image and the information on a branch added to the contour information. In this embodiment, since registration between a projection image and an actual endoscope image is performed, it is possible, as shown in FIG. 16, to easily perform association between contour information included in a projection image A and a hole included in an actual endoscope image B. Accordingly, "A" to "D" that correspond to the branch information added to the respective pieces of contour information on the projection image A may be added to the corresponding holes included in the actual endoscope image B, and the actual endoscope image B may then be displayed on the display device 30. The addition of the branch information to the actual endoscope image B may be performed by the display controller 20, or a dedicated processing unit for the addition of the branch information to the actual endoscope image B may be provided. - Further, in the above-described embodiment, registration between a projection image generated from a three-dimensional image and an actual endoscope image is performed, but the invention may be similarly applied to a case where registration between two three-dimensional images is performed. In this case, the
image acquisition unit 11 acquires two three-dimensional images that are the targets of registration. The graph structure generation unit 12 generates a graph structure of a tubular structure included in each of the two three-dimensional images. The contour information acquisition unit 13 acquires contour information of the bronchus at each point on the graph structure, with respect to each of the two three-dimensional images. The visual point information acquisition unit 14 acquires visual point information with respect to each of the two three-dimensional images. The projection point specification unit 15 specifies projection points with respect to each of the two three-dimensional images. The projection image generation unit 16 generates a projection image using contour information at the final projection points specified by the projection point specification unit 15, with respect to each of the two three-dimensional images. The bronchial three-dimensional image generation unit 17 performs a volume rendering process with respect to each of the two three-dimensional images to generate a bronchial three-dimensional image indicating the form of the bronchus. - The
registration unit 18 performs registration between the two three-dimensional images. Specifically, the registration unit 18 performs registration between a projection image (referred to as a first projection image) generated from one three-dimensional image (referred to as a first three-dimensional image) of the two three-dimensional images and a projection image (referred to as a second projection image) generated from the other three-dimensional image (referred to as a second three-dimensional image). That is, the registration unit 18 registers the second three-dimensional image with respect to the first three-dimensional image. Here, a projection image that is a target of the registration process is generated, for example, at each diverging point of the graph structure, with respect to each of the two three-dimensional images. One projection image for registration is selected from the plurality of first projection images by an operator; the selection of the projection image may be performed by an input through the input device 40. Further, between the one selected first projection image and the plurality of second projection images, the deviation amount of the second projection image having the minimum deviation amount with respect to the selected first projection image is acquired as the registration result of the registration process. As the registration process, for example, rigid-body registration or non-rigid-body registration may be used. In the above description, the second three-dimensional image is registered with respect to the first three-dimensional image, but one second projection image may instead be selected, and the first three-dimensional image may be registered with respect to the second three-dimensional image. - The visual point
information estimation unit 19 estimates visual point information of the second three-dimensional image, corresponding to the visual point information of the selected first projection image, using the registration result of the registration unit 18. In this case, the projection image generation unit 16 re-generates a projection image relating to the second three-dimensional image on the basis of the estimated visual point information of the second three-dimensional image. Further, the display controller 20 displays the selected first projection image and the re-generated projection image relating to the second three-dimensional image on the display device 30. - 1: endoscope image diagnosis support system
- 10: image registration apparatus
- 11: image acquisition unit
- 12: graph structure generation unit
- 13: contour information acquisition unit
- 14: visual point information acquisition unit
- 15: projection point specification unit
- 16: projection image generation unit
- 17: bronchial three-dimensional image generation unit
- 18: registration unit
- 19: visual point information estimation unit
- 20: display controller
- 21: hole contour extraction unit
- 30: display device
- 40: input device
- 50: three-dimensional image storage server
- 60: endoscope device
- A: projection image
- B: actual endoscope image
- a, b, S: visual point information
- Bp: diverging point
- E: branch
- Ep: end point
- Np: projection point
- NR: predetermined range
- Ns: starting point
Claims (20)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017-129243 | 2017-06-30 | ||
| JP2017129243A JP6820805B2 (en) | 2017-06-30 | 2017-06-30 | Image alignment device, its operation method and program |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20190005662A1 (en) | 2019-01-03 |
| US10733747B2 (en) | 2020-08-04 |
Family
ID=64734942
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/008,546 Active 2038-11-13 US10733747B2 (en) | 2017-06-30 | 2018-06-14 | Image registration apparatus, image registration method, and image registration program |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US10733747B2 (en) |
| JP (1) | JP6820805B2 (en) |
Family Cites Families (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4053117B2 (en) | 1997-10-17 | 2008-02-27 | 東芝医用システムエンジニアリング株式会社 | Image processing device |
| WO2007129493A1 (en) * | 2006-05-02 | 2007-11-15 | National University Corporation Nagoya University | Medical image observation support device |
| JP5028191B2 (en) * | 2007-09-03 | 2012-09-19 | オリンパスメディカルシステムズ株式会社 | Endoscope device |
| JP2012505695A (en) * | 2008-10-20 | 2012-03-08 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Image-based localization method and system |
| JP5918548B2 (en) | 2012-01-24 | 2016-05-18 | 富士フイルム株式会社 | Endoscopic image diagnosis support apparatus, operation method thereof, and endoscopic image diagnosis support program |
| JP5785120B2 (en) * | 2012-03-15 | 2015-09-24 | 富士フイルム株式会社 | Medical image diagnosis support apparatus and method, and program |
| JP5832938B2 (en) | 2012-03-15 | 2015-12-16 | 富士フイルム株式会社 | Image processing apparatus, method, and program |
| JP5826082B2 (en) * | 2012-03-21 | 2015-12-02 | 富士フイルム株式会社 | Medical image diagnosis support apparatus and method, and program |
| JP5718537B2 (en) | 2013-03-12 | 2015-05-13 | オリンパスメディカルシステムズ株式会社 | Endoscope system |
| CN105608687B (en) * | 2014-10-31 | 2019-01-08 | 东芝医疗系统株式会社 | Medical image processing method and medical image-processing apparatus |
- 2017-06-30 JP JP2017129243A patent/JP6820805B2/en active Active
- 2018-06-14 US US16/008,546 patent/US10733747B2/en active Active
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20250213221A1 (en) * | 2012-03-26 | 2025-07-03 | Teratech Corporation | Tablet ultrasound system |
| US20210374990A1 (en) * | 2020-06-01 | 2021-12-02 | Olympus Corporation | Image processing system, image processing method, and storage medium |
| US11669997B2 (en) * | 2020-06-01 | 2023-06-06 | Evident Corporation | Image processing system, image processing method, and storage medium |
| US20240099701A1 (en) * | 2021-06-14 | 2024-03-28 | Fujifilm Corporation | Ultrasound system and control method of ultrasound system |
| US12471890B2 (en) * | 2021-06-14 | 2025-11-18 | Fujifilm Corporation | Ultrasound system and control method of ultrasound system |
| CN116132814A (en) * | 2022-04-01 | 2023-05-16 | 港珠澳大桥管理局 | Submarine immersed tube splicing structure information acquisition equipment, acquisition method, device and equipment |
| US20240005495A1 (en) * | 2022-06-29 | 2024-01-04 | Fujifilm Corporation | Image processing device, method, and program |
| US20240156437A1 (en) * | 2022-11-16 | 2024-05-16 | Fujifilm Healthcare Corporation | Ultrasound diagnostic apparatus and data processing method |
| US12450100B2 (en) | 2023-10-05 | 2025-10-21 | International Business Machines Corporation | AI model based deployment of an AI model |
Also Published As
| Publication number | Publication date |
|---|---|
| JP6820805B2 (en) | 2021-01-27 |
| US10733747B2 (en) | 2020-08-04 |
| JP2019010382A (en) | 2019-01-24 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10475235B2 (en) | Three-dimensional image processing apparatus, three-dimensional image processing method, and three-dimensional image processing program | |
| US10733747B2 (en) | Image registration apparatus, image registration method, and image registration program | |
| US9035941B2 (en) | Image processing apparatus and image processing method | |
| US9478028B2 (en) | Intelligent landmark selection to improve registration accuracy in multimodal image fusion | |
| US10417517B2 (en) | Medical image correlation apparatus, method and storage medium | |
| US11013495B2 (en) | Method and apparatus for registering medical images | |
| CN107111875B (en) | Feedback for multimodal auto-registration | |
| US20150055846A1 (en) | Image processing method and apparatus and program | |
| US9123096B2 (en) | Information processing apparatus and control method thereof | |
| JP6093347B2 (en) | Medical image processing system and method | |
| US10424067B2 (en) | Image processing apparatus, image processing method and storage medium | |
| US9619938B2 (en) | Virtual endoscopic image display apparatus, method and program | |
| JP6541334B2 (en) | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM | |
| JP5785120B2 (en) | Medical image diagnosis support apparatus and method, and program | |
| US8411914B1 (en) | Systems and methods for spatio-temporal analysis | |
| US10405811B2 (en) | Image processing method and apparatus, and program | |
| US20180263527A1 (en) | Endoscope position specifying device, method, and program | |
| US10970875B2 (en) | Examination support device, examination support method, and examination support program | |
| US11730384B2 (en) | Fluid analysis apparatus, method for operating fluid analysis apparatus, and fluid analysis program | |
| US9754368B2 (en) | Region extraction apparatus, method, and program | |
| US20190374114A1 (en) | Blood flow analysis apparatus, blood flow analysis method, and blood flow analysis program | |
| JP5991731B2 (en) | Information processing apparatus and information processing method | |
| JP6263248B2 (en) | Information processing apparatus, information processing method, and program | |
| JP2018061844A (en) | Information processing apparatus, information processing method, and program | |
| US20180263712A1 (en) | Endoscope position specifying device, method, and program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| AS | Assignment |
Owner name: FUJIFILM CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HIRAKAWA, SHINNOSUKE;REEL/FRAME:046101/0124 Effective date: 20180411 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |