
NL2038335B1 - A method based on CT images - Google Patents


Info

Publication number: NL2038335B1
Application number: NL2038335A
Authority: NL (Netherlands)
Prior art keywords: data, optical, dimensional, projection, angle
Other languages: Dutch (nl)
Other versions: NL2038335A (en)
Inventors: Li Chao, Jiang Yuqiang, Huang Lu, Wang Yu
Original Assignee: Inst Genetics & Developmental Biology Cas
Application filed by Inst Genetics & Developmental Biology Cas
Publication of NL2038335A
Application granted
Publication of NL2038335B1

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10064 Fluorescence image
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

This invention relates to a method of three-dimensional optical reconstruction based on CT images, comprising: obtaining multi-angle CT projection signals and optical data of the object to be reconstructed using a coaxial scanning device; reconstructing CT volumetric data from the projection signals at all angles and extracting surface voxel data from the CT volumetric data; aligning each pixel in the optical data with the corresponding pixels in the CT surface voxel data through registration; mapping the coordinates of optical data pixels to the CT image coordinate system of the CT volumetric data, thereby obtaining a first optical data set with three-dimensional spatial coordinates in the CT image coordinate system for all angles; and stitching together all optical data in this data set to form the reconstructed three-dimensional optical information. The method uses CT scanning to provide accurate three-dimensional spatial information of the object, enabling accurate and comprehensive optical three-dimensional reconstruction of objects.

Description

A method based on CT images
TECHNICAL FIELD
This invention pertains to the field of three-dimensional optical reconstruction, particularly involving a method based on CT images.
BACKGROUND
Three-dimensional reconstruction (3D reconstruction) refers to converting two-dimensional images or projection data into spatial information of three-dimensional objects. The reconstructed models facilitate computer display and further processing, finding broad applications in fields such as medicine, biology, engineering, and computer vision for studying and analyzing the structure and morphology of objects.
With advancements in imaging technology, achieving high-resolution 3D reconstruction has become a significant research focus. Researchers improve algorithms, utilize high-resolution sensors, and enhance data collection methods to achieve more precise and detailed 3D reconstruction results.
In related technologies, three-dimensional reconstruction methods based on depth cameras provide depth information of objects, enabling direct modeling. Key methods include structured light projection and Time-of-Flight (TOF). Structured light projection involves projecting coded images onto objects using light sources, where the coded patterns deform based on the object's surface shape. By capturing the deformed structured light with cameras and analyzing the relationship between the camera and projector positions and the extent of deformation, the depth information of the detected object is obtained. While this method offers high accuracy, it is susceptible to ambient light interference, and the structured light projector requires pre-calibration.
TOF measures the time taken for light pulses to travel to and reflect back from the object's surface to calculate depth. By emitting pulse signals from the system's transmitter, reflecting them off the object, and measuring the round-trip travel time of the signal, depth values are computed from the speed of light. TOF technology may face noise and accuracy limitations in long-distance or low-light conditions, resulting in lower resolution and an inability to achieve high-precision 3D reconstruction or accurately restore the shape and geometric structure of objects.
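For concreteness, the depth computation underlying TOF is the round-trip relation d = c·t/2, where t is the measured round-trip time and c the speed of light. A minimal illustration (the function name and example timing are assumptions for illustration only):

```python
# Round-trip relation behind TOF depth: d = c * t / 2
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_seconds: float) -> float:
    """Depth in metres from a measured round-trip pulse time."""
    return C * round_trip_seconds / 2.0

print(tof_depth(6.67e-9))  # a ~6.67 ns round trip is roughly 1 m of depth
```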
SUMMARY
In order to solve the above problems in the prior art, the present invention provides a three-dimensional optical reconstruction method based on CT images.
This invention relates to the field of three-dimensional optical reconstruction, specifically a method based on CT images.
The method includes the following steps:
S10: Using a coaxial scanning device to obtain CT projection signals and optical data of the object to be reconstructed from multiple angles. Each angle in the multi-angle setup represents the rotational angle of the coaxial scanning device relative to the stationary object. The coaxial scanning device includes a fixed CT imaging device and an optical imaging device with an installation angle relative to the CT imaging device.
S20: Reconstructing the CT projection signals from all angles using a reconstruction algorithm to obtain the CT voxel data of the object and extracting the surface voxel data from the CT voxel data, resulting in the CT surface voxel data of the object.
S30: Aligning each pixel in the optical data with the corresponding pixel in the CT surface voxel data, mapping the coordinates of aligned pixels from the optical data to the CT image coordinate system, thereby obtaining the first optical data set with three- dimensional spatial coordinates in the CT image coordinate system for all angles.
S40: Stitching together all optical data from the first optical data set to form the reconstructed three-dimensional optical information.
Optionally, the optical imaging device can be a hyperspectral imaging device, fluorescence imaging device, or RGB imaging device, producing hyperspectral data, fluorescence optical data, or RGB optical data respectively.
The CT projection signals and optical data from multiple angles include N CT projection signals and N optical data sets, wherein, when the rotating frame of the coaxial scanning device rotates by an angle Δφ relative to the object each time, the number of data points generated when the coaxial scanning device rotates 360 degrees relative to the object is given by N = 360°/Δφ.
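As a quick numeric illustration of this relation (the function name is illustrative):

```python
def num_views(delta_phi_deg: float) -> int:
    """N = 360 / delta_phi acquisitions for one full rotation."""
    return round(360.0 / delta_phi_deg)

print(num_views(1.0))  # 360 views at a 1-degree step
print(num_views(0.5))  # 720 views; finer steps give higher CT resolution
```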
Further optional details include:
In S20, reconstructing the CT voxel data of the object from all angles using a reconstruction algorithm involves preprocessing each CT projection signal and applying a filtered back-projection algorithm to obtain the CT voxel data of the object.
Optionally, applying the filtered backprojection algorithm to preprocessed CT projection signals from all angles involves:
S21: Performing one-dimensional Fourier transform on N preprocessed CT projection signals to obtain the first projection signals in the frequency domain.
S22: Filtering the first projection signals in the frequency domain to obtain filtered second projection signals.
S23: Performing one-dimensional inverse Fourier transform on the filtered second projection signals to restore them to the time domain, obtaining filtered third projection signals in the time domain.
S24: Performing backprojection on each third projection signal to reconstruct the CT voxel data of the object by averaging the projected signals from each angle along their original projection paths and accumulating them to obtain the attenuation coefficients of the object at each point, thereby reconstructing the CT voxel data of the object in three dimensions.
The CT voxel data of the reconstructed object includes three-dimensional spatial coordinates of each voxel in the CT image and the Hounsfield Unit (HU) values corresponding to each voxel, which reflect the object's absorption of X-rays.
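As an illustration of S21-S24, the following is a minimal per-slice Python/NumPy sketch, assuming an idealized parallel-beam sinogram of shape (n_angles, n_detectors) and a Ram-Lak (ramp) filter; stacking reconstructed slices yields the voxel volume. All names are illustrative, not from the patent:

```python
import numpy as np

def fbp_reconstruct(sinogram: np.ndarray, angles_deg: np.ndarray) -> np.ndarray:
    """Reconstruct one 2-D slice from a parallel-beam sinogram
    of shape (n_angles, n_detectors), following S21-S24."""
    n_angles, n_det = sinogram.shape
    # S21: one-dimensional Fourier transform of each projection
    proj_fft = np.fft.fft(sinogram, axis=1)
    # S22: ramp (Ram-Lak) filtering in the frequency domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    # S23: inverse transform back to the time domain
    filtered = np.real(np.fft.ifft(proj_fft * ramp, axis=1))
    # S24: back-project each filtered projection along its original path
    recon = np.zeros((n_det, n_det))
    mid = n_det // 2
    coords = np.arange(n_det) - mid
    X, Y = np.meshgrid(coords, coords)
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        # detector bin hit by the ray through each image pixel at this angle
        t = np.clip(np.round(X * np.cos(ang) + Y * np.sin(ang)) + mid,
                    0, n_det - 1).astype(int)
        recon += proj[t]
    # normalization for the discrete angular sampling
    return recon * np.pi / (2 * n_angles)
```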
Optionally, in step S20 where surface voxel data of the CT voxel data of the object is extracted to obtain the CT surface voxel data of the object, the process also includes:
S25: Optimizing the CT voxel data to enhance its quality.
S26: Extracting surface information from the optimized CT voxel data.
S27: Using a predefined voxel data threshold to classify the surface information of the CT voxel data into data belonging to the object and background.
S28: Using a traversal method based on the voxel data belonging to the object to obtain the voxel data defining the object's boundaries. The HU values of voxels that do not belong to the object's boundary are set to 0, resulting in the surface voxel data of the object, i.e., the CT surface voxel data.
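A hedged sketch of steps S27 and S28, assuming the CT volume is a NumPy array of HU values and using a 6-neighbour boundary test (np.roll wraps at the array edges, which a production implementation would treat explicitly); all names are illustrative:

```python
import numpy as np

def extract_surface(hu: np.ndarray, threshold: float) -> np.ndarray:
    """S27-S28 sketch: threshold a 3-D HU volume into object/background,
    then keep HU values only at boundary voxels (zero elsewhere)."""
    obj = hu > threshold                          # S27: object vs. background
    boundary = np.zeros_like(obj)
    # S28: a voxel lies on the boundary if any 6-neighbour is background
    for axis in range(3):
        for shift in (1, -1):
            boundary |= obj & ~np.roll(obj, shift, axis=axis)
    return np.where(boundary, hu, 0)              # non-boundary HU set to 0
```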
Additionally, optionally in step S30:
When the coaxial scanning device rotates relative to the object, the imaging angle at each imaging position relative to the object is φi, and the installation angle between the CT imaging device and the optical imaging device is θ.
The coordinate systems of each imaging device of the coaxial scanning device are as follows: The coordinate origin is located on the central axis of the rotating frame and is at the same height as the optical axis of the imaging device. The Z-axis of the coordinate system points from the coordinate origin to the center of each imaging device.
The Z-axis of the X-ray imaging points from the coordinate origin to the X-ray center, and the XY plane is perpendicular to the Z-axis.
S31: For the surface voxel data under each imaging angle φi, the XY plane is chosen as the projection plane. Orthogonal projection is performed along the negative Z-axis direction, projecting the three-dimensional data (x, y, z, HU) of surface voxels containing object information onto the corresponding XY plane at each angle, forming the CT two-dimensional projection image (x, y, HU) at that angle. This process is repeated for all imaging angles to obtain the CT two-dimensional projection images.
S32. Perform feature detection on the CT 2D projection image at the imaging angle φi and the optical data at the imaging angle (φi+θ) to obtain the respective significant feature points in the CT 2D projection image and the optical data.
S33. Obtain the feature descriptors of the respective significant feature points and perform matching to acquire pairs of matching feature points that exceed a preset threshold.
S34. Based on the pairs of matching feature points, obtain the spatial coordinate transformation mapping between the CT 2D projection image at the imaging angle φi and the optical data at the imaging angle (φi+θ).
The substep S34 also includes the following substeps:
S341. Based on each pair of matching feature points, divide the coordinates of the feature points from each imaging device by the focal length of that imaging device to obtain the normalized coordinates of the feature points.
S342. Based on the normalized coordinates of the feature points, construct a linear equation. Let p(x,y) and p′(x′,y′) be the normalized coordinates of the feature points, where p(x,y) corresponds to the CT 2D projection image and p′(x′,y′) corresponds to the optical data.
By the linear equation p′ᵀFp = 0, determine the fundamental matrix, where F is the fundamental matrix and ᵀ denotes the transpose.
Using the linear equations constructed from all pairs of feature points, solve for the fundamental matrix.
S343. Based on the fundamental matrix, use the intrinsic parameters of the CT imaging device and the optical imaging device to perform triangulation, mapping the normalized feature point coordinates to 3D points in the world coordinate system.
S344. Using the 3D point coordinates mapped from the normalized feature point coordinates in the world coordinate system, obtain the spatial coordinate transformation mapping, which includes the translation vector and the rotation matrix.
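Steps S341-S343 correspond to the classical eight-point estimation of the fundamental matrix from normalized correspondences. A minimal sketch under that reading (the SVD-based solver and function names are illustrative; the patent does not prescribe a particular solver):

```python
import numpy as np

def normalize(pts_px: np.ndarray, focal: float) -> np.ndarray:
    """S341: divide pixel coordinates (n, 2) by the device focal length."""
    return pts_px / focal

def estimate_fundamental(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """S342: solve q_i^T F p_i = 0 for F from n >= 8 normalized matches.
    p corresponds to the CT 2-D projection, q to the optical data."""
    A = np.zeros((p.shape[0], 9))
    for i, ((x, y), (xp, yp)) in enumerate(zip(p, q)):
        A[i] = [xp * x, xp * y, xp, yp * x, yp * y, yp, x, y, 1]
    _, _, Vt = np.linalg.svd(A)                 # least-squares null vector
    F = Vt[-1].reshape(3, 3)
    U, S, Vt2 = np.linalg.svd(F)                # enforce the rank-2 constraint
    S[2] = 0.0
    return U @ np.diag(S) @ Vt2
```

The resulting F, together with the intrinsic parameters of the two devices, would then feed the triangulation of S343 and the recovery of the translation vector and rotation matrix in S344.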
Optionally, S34 includes:
Transforming the pixel positions in the optical data at each angle to coordinates in the registered CT 2D projection image coordinate system.
Establishing a correspondence between pixels in the optical data and spatial positions in the CT 3D voxel data, mapping the pixel information from optical data at each angle to the spatial positions of the CT 3D surface voxel data, thereby obtaining a first optical data set with three-dimensional spatial coordinates for all angles.
Optionally, S40 includes:
Using the method described in S30 to iterate through all adjacent optical data, registering the adjacent optical data, identifying overlapping regions, and stitching all optical data in the first optical data set based on identified overlapping regions to form information after three-dimensional optical reconstruction.
Optimizing the information after three-dimensional optical reconstruction to obtain complete three-dimensional optical reconstruction information.
Additionally, the invention provides a computing device comprising memory and a processor. The memory stores a computer program, and the processor executes the computer program stored in the memory to perform any step of the first aspect of the method for three-dimensional optical reconstruction based on CT images described above.
Beneficial effects
The three-dimensional optical reconstruction method of the present invention does not require prior calibration, is minimally affected by ambient light and shadows, and enhances the accuracy of three-dimensional optical data reconstruction. CT images offer high spatial resolution, providing detailed object structures and accurate three-dimensional spatial coordinates. This capability allows the method based on CT images to achieve high-precision three-dimensional optical reconstruction, accurately reproducing the shape and geometric structure of objects in optical images. It is particularly suitable for three-dimensional optical reconstruction of objects with pronounced surface or depth variations.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 is a flow chart of a three-dimensional optical reconstruction method based on CT images provided by the present invention;
FIG. 2(a) and FIG. 2(b) are both schematic diagrams of the coaxial scanning device provided by the present invention;
FIG. 3(a) is a schematic diagram of X-ray imaging when the imaging device rotates by an angle relative to the object;
FIG. 3(b) is a schematic diagram of the CT surface voxel data obtained after segmenting the object from the background and extracting the surface from the reconstructed CT three-dimensional voxel data containing the object and the background;
FIG. 3(c) is a schematic diagram of a CT two-dimensional projection image obtained by two-dimensionally projecting the CT surface voxel data;
FIG. 3(d) is a schematic diagram of optical imaging when the imaging device rotates by an angle relative to the object;
FIG. 3(e) is a schematic diagram of the collected optical image data;
FIG. 3(f) is a schematic diagram of registering the CT two-dimensional projection image at an angle with the optical data at an angle.
DETAILED DESCRIPTION OF THE INVENTION
In order to better understand the above technical solution, the exemplary embodiments of the present invention will be described in more detail with reference to the accompanying drawings. Although the exemplary embodiments of the present invention are shown in the accompanying drawings, it should be understood that the present invention can be implemented in various forms and should not be limited by the embodiments described herein. On the contrary, these embodiments are provided to enable a clearer and more thorough understanding of the present invention and to fully convey the scope of the present invention to those skilled in the art.
The three-dimensional optical reconstruction method of the embodiment of the present invention does not pertain to the optical reconstruction of human body images in the medical field. The object to be reconstructed in the embodiment of the present invention can be a large plant or other food crop.
Example 1
As shown in Figures 1 to 3, the present invention provides a method for three- dimensional optical reconstruction based on CT images. The method is executed by any computing device and includes the following steps:
S10: Utilizing a coaxial scanning device to acquire multi-angle CT projection signals and optical data of the object to be reconstructed. Each angle in the multiple angles corresponds to the rotational angle of the coaxial scanning device relative to the stationary object. The coaxial scanning device includes a fixed CT imaging device and an optical imaging device installed at an angular offset relative to the CT imaging device.
In this embodiment, the computing device can be electrically connected to both the CT imaging device and the optical imaging device. In other embodiments, the processing functions of the computing device may be integrated into the CT imaging device or the optical imaging device.
The optical data in this step refers to optical images, as illustrated in this embodiment. For example, if the optical imaging device is a hyperspectral imaging device, the optical data would be hyperspectral data. If it is a fluorescence imaging device, then the optical data would be fluorescence optical data. If the optical imaging device is an RGB imaging device, the optical data could also be RGB optical data.
In this embodiment, the multi-angle CT projection signals and optical data include N CT projection signals and N optical data sets.
In the coaxial scanning device, if the rotation stage rotates relative to the object by an angle Δφ each time, the number of data points generated when the coaxial scanning device completes a 360° rotation relative to the object is N = 360°/Δφ, where N is a natural number greater than or equal to 1. It is preferred that N be a natural number greater than or equal to 10. In this particular implementation, N can be 360, meaning that the rotation stage rotates relative to the object from 0° to 360°, and the data at each angle are sequentially labeled φ0, φ1, ..., φi, ..., φN. For example, with an angle increment of 1°, there would be 360 data points. Different angular resolutions result in different CT imaging resolutions.
S20: Reconstructing the CT projection signals from all angles using a reconstruction algorithm to obtain the CT three-dimensional voxel data of the object to be reconstructed. Surface voxel data of the object's CT three-dimensional voxel data is then extracted to obtain the CT three-dimensional surface voxel data of the object.
Typically, each angle's CT projection signals undergo preprocessing, and filtered back-projection algorithms are applied to reconstruct all preprocessed CT projection signals. This results in the CT three-dimensional voxel data of the object to be reconstructed, from which surface information is extracted to obtain the CT three- dimensional surface voxel data of the object.
S30: Aligning and registering each pixel in the optical data with the corresponding pixel in the CT three-dimensional surface voxel data's CT image coordinate system. This involves mapping the coordinates of aligned and registered pixel points from the optical data to the CT image coordinate system of the CT three-dimensional surface voxel data, thereby obtaining the first optical dataset with three-dimensional spatial coordinates in the CT image coordinate system for all angles.
S40: Stitching all optical data in the first optical dataset to form the information after three-dimensional optical reconstruction.
The method in this embodiment does not require pre-calibration, is less affected by ambient light, and enhances the accuracy of three-dimensional optical data reconstruction. CT images have high spatial resolution, capable of providing fine object structures and accurate three-dimensional spatial coordinates. This enables the CT-image-based method to achieve high-precision three-dimensional optical reconstruction, accurately restoring the shape and geometric structure of objects in optical images, making it especially suitable for objects with pronounced surface or depth variations.
For a better understanding of the processes in Steps S20 and S30, the following detailed explanations are provided for each sub-step:
Regarding Step S20, the process includes the following sub-steps:
S21: Performing a one-dimensional Fourier transform on each of the N pre-processed CT projection signals to obtain N first projection signals in the frequency domain.
S22: Filtering the N first projection signals in the frequency domain to obtain N filtered second projection signals.
S23: Performing one-dimensional inverse Fourier transform on the N filtered second projection signals to restore them to the time domain, obtaining the filtered third projection signals in the time domain.
S24: Performing back-projection on each third projection signal. Back-projection involves distributing the projection signals from each angle along their original paths through the object, averaging them at each point traversed by X-rays. Summing up all the back-projected signals from all angles at each point on the object yields the attenuation coefficients of the rays, reconstructing the CT three-dimensional voxel data of the object.
In other words, accumulating data from all angles results in a single CT three- dimensional voxel data set.
The CT three-dimensional voxel data includes the three-dimensional spatial coordinates of each voxel in the reconstructed object's CT image, as well as the Hounsfield Unit (HU) values at each voxel position. The HU values reflect the object's absorption of X-rays.
In this field, data points in three-dimensional images are called voxels, while those in two-dimensional images are called pixels.
Sub-steps following S20 focus on extracting surface voxel data from the CT three- dimensional voxel data to obtain detailed CT three-dimensional surface voxel data of the object to be reconstructed.
S25: Optimizing the CT three-dimensional voxel data to obtain optimized data.
S26: Extracting surface information from the optimized CT three-dimensional voxel data.
For example, for the aforementioned CT voxel data, an appropriate threshold can be chosen to distinguish object voxels from background voxels. This threshold is used to segment the voxel data into two parts: object voxel data and background voxel data.
S27: Based on a pre-set voxel data threshold, segmenting the CT three-dimensional voxel data into object-associated voxel data and background-associated voxel data.
S28: Based on the voxel data belonging to the object, traverse to obtain the voxel data at the boundary of the object.
Specifically, iterate through all voxel data of the object. For each voxel data, check if its adjacent voxel data belongs to background voxel data. If any adjacent voxel data belongs to background voxel data, then this voxel data is on the boundary of the object.
S29: Set the Hounsfield Unit (HU) values of non-boundary voxel data in the object voxel data to 0, obtaining the surface voxel data of the object, i.e., the CT three-dimensional surface voxel data.
For example, set the HU values of voxel data that are not on the boundary to 0, while maintaining the HU values of boundary voxel data unchanged. This results in three-dimensional voxel data containing only the surface voxel data of the object.
Through the above sub-steps S21 to S29, one method in step S20 for obtaining the CT three-dimensional voxel data of the object to be reconstructed, and the process for obtaining the CT three-dimensional surface voxel data, have been described. Other implementations may use different methods, which are not limited in this embodiment.
In this embodiment, the coaxial scanning device rotates relative to the object, the imaging angle at each imaging position relative to the object is φi, and the installation angle between the CT imaging device and the optical imaging device is θ (the rotation angle φi refers to each imaging position; with N imaging positions, there are N angles φi and N sets of imaging data).
The coordinate system for each imaging device in this embodiment is defined as follows: the origin is located at the center axis of the rotation stage, at the same height as the optical axis of the imaging device. The Z-axis points from the origin to the center of the imaging device. The X-ray imaging Z-axis points from the origin to the center of the X-ray. The XY plane is perpendicular to the Z-axis.
Correspondingly, the process of Step S30 includes the following sub-steps:
S31: For the surface voxel data under each imaging angle, select the XY plane as the projection plane. Perform orthogonal projection along the negative Z-axis direction to project the three-dimensional data (x, y, z, HU) containing surface voxels onto the corresponding XY plane under each angle. Each pixel point in the plane represents the projection position of the three-dimensional surface voxel data on the projection plane in the CT coordinate system, forming the two-dimensional projection image (x, y, HU) of CT under that angle. This is done to obtain all the CT two-dimensional projection images under each imaging angle.
S32. Perform feature detection on the CT 2D projection image at the imaging angle φi and the optical data at the imaging angle (φi+θ) to obtain the respective significant feature points in the CT 2D projection image and the optical data.
S33. Obtain the feature descriptors of the respective significant feature points and perform matching to acquire pairs of matching feature points that exceed a preset threshold.
S34. Based on the pairs of matching feature points, obtain the spatial coordinate transformation mapping between the CT 2D projection image at the imaging angle φi and the optical data at the imaging angle (φi+θ).
Sub-step S34 includes the following:
S341: Normalize the coordinates of each matched feature point pair belonging to each imaging device by dividing them by the focal length of that imaging device.
S342. Construct a linear equation based on the normalized feature point pair coordinates;
Set p(x,y) and p′(x′,y′) as the normalized feature point pair coordinates; p(x,y) corresponds to the CT two-dimensional projection image, and p′(x′,y′) corresponds to the optical data;
Determine the fundamental matrix through the linear equation p′ᵀFp = 0, where F is the fundamental matrix and ᵀ is the transpose;
Collect the linear equations constructed from all feature point pairs and solve for the fundamental matrix;
S343. Based on the fundamental matrix, use the intrinsic parameters of the imaging device to which the CT two-dimensional projection image belongs and the imaging device to which the optical data belongs to perform triangulation, and map the normalized feature point pair coordinates to three-dimensional points in the world coordinate system;
S344. Use the 3D coordinates of the mapped feature points in the world coordinate system to obtain the spatial coordinate transformation mapping, which includes translation vectors and rotation matrices.
S35: Register the CT two-dimensional projection images under each imaging angle and the optical data under each imaging angle based on the spatial coordinate transformation mapping.
By applying the steps from S32 to S35, traverse through all imaging angles to achieve alignment and registration of all CT two-dimensional projection images and optical data under each imaging angle. Transform the pixel positions in the optical data at each angle into coordinates in the registered CT two-dimensional projection image coordinate system.
Establish the correspondence between the pixel positions in the optical data and the spatial positions of the CT three-dimensional voxels, mapping the pixel information in the optical data at each angle to the spatial positions of the CT three-dimensional surface voxels to obtain the first optical data set with three-dimensional spatial coordinates for all angles.
The above process is not only applicable to hyperspectral images but also to fluorescence optical data, RGB optical data, etc. This embodiment does not limit itself to specific types of optical imaging data and can configure optical imaging data of the coaxial scanning device as needed to obtain corresponding optical data.
Embodiment 2
Combining Figure 2(a), Figure 2(b), and Figure 3(a) to Figure 3(f), this embodiment details a three-dimensional optical reconstruction method based on CT images in which the optical data are fluorescence images or other optical images. The method of this embodiment can include the following steps:
201: Initially, use the coaxial scanning device to obtain CT projection signals and fluorescence data of the object to be reconstructed from multiple angles. The object remains stationary during the acquisition of three-dimensional data. At each angle there is one CT projection signal and one set of fluorescence data; the multiple angles can be the angles at which the imaging devices rotate around the object in the coaxial scanning device.
The coaxial scanning device of this embodiment includes a rotating frame, with a CT imaging device (including an X-ray source and an X-ray detector) and a fluorescence imaging device (including a light source and a camera) fixed on the rotating frame.
Figure 2(a) and Figure 2(b) illustrate schematic diagrams of the coaxial scanning device, where XYZ represents the object coordinate system and X′Y′Z′ represents the rotating frame coordinate system. The CT imaging device (comprising an X-ray source and an X-ray detector, with the X-ray source facing the X-ray detector and the object to be reconstructed in between) and the optical imaging device (such as a fluorescence camera; the optical imaging device is not limited to a fluorescence camera and can include other types of optical imaging devices, of which there can be multiple types simultaneously) are mounted on a circular rotating frame with their centers at the same height. The angle between the X-ray source and the fluorescence imaging camera is θ, and the object to be reconstructed is at the center of the circular rotating frame.
During imaging, the object does not move, and the rotating frame rotates to drive the imaging devices on it to rotate around the object (that is, the object coordinate system remains unchanged, and the rotating frame coordinate system rotates around the Z axis).
Each time a certain angle is rotated (such as Δφ = 1°; the smaller the angle, the higher the CT reconstruction accuracy), one imaging pass is completed. Imaging includes:
CT imaging at that angle: primarily refers to the X-ray detector receiving the X-rays attenuated by the object at that angle, termed the CT projection signal at that angle.
Fluorescence imaging at that angle: Optical image data used for reconstruction.
After rotating through 360° and completing the acquisition of CT projection signals and optical image data at all angles, and assuming imaging every 1° of rotation, the rotating frame with its devices obtains 360 CT projection signals and 360 fluorescence data sets over the 360° scan.
202: Data Preprocessing
Preprocess the CT projection signals to reduce noise and enhance image quality.
This preprocessing can be implemented using existing methods such as artifact removal, gamma correction, filtering, etc. This embodiment does not limit the preprocessing methods, which can be chosen based on specific needs.
203: Reconstruction Using a Reconstruction Algorithm
Use a reconstruction algorithm to reconstruct the CT projection signals from all angles to obtain the CT three-dimensional voxel data of the object to be reconstructed.
CT reconstruction essentially involves solving for the distribution of X-ray attenuation coefficients inside the object based on the CT projection signals obtained in Step 201 (i.e., the X-ray attenuation coefficients of different parts of the object; different materials attenuate X-rays differently, and CT imaging detects the internal distribution of materials non-destructively).
In this embodiment, the Filtered Back Projection (FBP) algorithm is used. The reconstruction steps of this FBP algorithm are as follows:
Perform a one-dimensional Fourier transform on the 360 CT projection signals in the time domain to obtain 360 projection signals in the frequency domain.
Apply filtering to the 360 projection signals in the frequency domain to obtain filtered CT projection signals.
Perform a one-dimensional inverse Fourier transform on the 360 filtered CT projection signals to restore them to the time domain, obtaining filtered CT projection signals in the time domain.
Perform back projection on each of the filtered projection signals. Sum up the back-projected signals from 360 angles to compute the attenuation at various parts of the object, reconstructing the three-dimensional voxel data of the object.
Back projection distributes each projection signal back along its original projection path to every point within the object, incorporating both the three-dimensional spatial coordinates and the X-ray attenuation coefficients of each voxel. All the three-dimensional voxels together form the three-dimensional spatial shape of the object, providing accurate three-dimensional spatial coordinates for subsequent three-dimensional optical reconstruction of the object.
204: Post-processing of CT Voxel Data
Perform post-processing on the CT three-dimensional voxel data obtained from CT reconstruction, including denoising and contrast enhancement, to obtain improved quality CT three-dimensional voxel data.
Denoising reduces noise and artifacts in the image data to improve the quality of reconstructed data; contrast enhancement increases the readability and clarity of the reconstructed data.
205: Extraction of Surface Information from CT Voxel Data
Extract surface information from the CT three-dimensional voxel data of the object obtained in Step 204.
Based on predefined voxel data thresholds, classify the CT three-dimensional voxel data into object-related voxel data and background-related voxel data. Using a traversal method, obtain voxel data at the boundaries of the object. Set the Hounsfield Unit (HU) values of non-boundary voxel data in the object-related voxel data to 0, while retaining the original HU values of voxel data at the object boundary, resulting in surface voxel data of the object, i.e., CT three-dimensional surface voxel data.
206: Registration of CT Surface Voxel Data and Fluorescence Data
Typically, registration refers to the process of aligning two or more sets of image data acquired at different times or from different imaging devices. It involves finding a spatial transformation that maps points from one image onto another image, ensuring that corresponding points in space between the two images are accurately matched.
Since both the CT imaging device and the fluorescence imaging device are mounted on the rotation stage, their angular separation within the same 3D coordinate system (the rotation stage's coordinate system) is denoted by θ. Thus, when the rotation stage rotates relative to the object by an angle φi, the CT imaging perspective (as shown in Figure 2(a)) corresponds to the same part of the object as the perspective at angle φi+θ (as shown in Figure 2(b)).
The steps for registering the reconstructed CT 3D surface voxel data obtained in Step 205 with the fluorescence data are described as follows. The coordinate systems of the coaxial scanning device's imaging devices are defined so that the coordinate origin is located on the center axis of the rotation stage, at the same height as the optical axis of each imaging device; the Z-axis extends from the origin towards the center of each imaging device, and the XY plane is perpendicular to the Z-axis. For each imaging angle φi, project the surface voxel data onto the XY plane: choose the XY plane as the projection plane; perform orthogonal projection along the negative Z-axis direction; project the 3D data (x, y, z, HU) containing surface voxels onto the XY plane corresponding to angle φi. Each pixel within this plane represents the projected position of the 3D surface voxel data on the projection plane, forming the 2D projection image of CT at that angle (x, y, HU). This process generates 2D projection images of CT for all imaging angles φi.
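A hedged sketch of this orthogonal projection, assuming the surface voxel volume is a NumPy array indexed (x, y, z) with zeros everywhere off the surface, and taking the first non-zero HU encountered while marching along the negative Z direction as the visible surface pixel; names are illustrative:

```python
import numpy as np

def project_surface(surface_hu: np.ndarray) -> np.ndarray:
    """Orthogonal projection of (x, y, z, HU) surface voxels along -Z onto
    the XY plane, yielding a 2-D (x, y, HU) image. Axis 2 is assumed to be Z;
    the first non-zero voxel met when marching from +Z is the visible surface."""
    nx, ny, nz = surface_hu.shape
    image = np.zeros((nx, ny))
    for z in range(nz - 1, -1, -1):     # march along the negative Z direction
        unset = image == 0              # pixels not yet hit by a surface voxel
        image[unset] = surface_hu[:, :, z][unset]
    return image
```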
For the CT two-dimensional projection image at the angle φi and the corresponding optical image at the angle φi+θ, feature detection is performed manually or automatically to obtain the significant feature points in the CT two-dimensional projection image and the optical image.
For example, feature detection can be performed manually or automatically to find significant feature points in the image (which can be edges, intersections, contours, shapes, structures, etc.). Manual detection involves marking feature points of interest on the image and is suitable for situations where specific features need to be selected with high accuracy. Automatic feature detection utilizes computer vision libraries such as OpenCV and can effectively and quickly locate feature points by calling appropriate feature detection algorithms such as Harris corner detection, SIFT, SURF, FAST, ORB, etc.
Calculate the feature descriptors (vectors that represent the image region around each feature point) of the feature points found in the CT two-dimensional projection image at angle φi and the corresponding optical image at angle φi+θ, and obtain the feature descriptors of the feature points of the two images; match the feature points of the two images (e.g., using nearest-neighbor matching to match each feature point in one image with the closest feature point in the other image). After the matching is completed, the quality of the matching is determined by quantifying the similarity between the descriptors of the feature points of the two images. The similarity metric can be the Euclidean distance between the descriptor vectors: the smaller the distance, the more similar the descriptors, which indicates that the feature points are better matched.
From this, the corresponding salient features between the two-dimensional CT projection image and the corresponding fluorescence image are found. By comparing the distance measures between feature descriptors, the best-matching descriptor pairs are selected, thereby achieving feature point matching between the images.
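As one concrete realization of the automatic path described above, a hedged OpenCV sketch using ORB with brute-force nearest-neighbour matching; the feature count and distance threshold are illustrative assumptions, and any of the detectors listed earlier (SIFT, SURF, FAST, etc.) could be substituted:

```python
import cv2
import numpy as np

def match_features(ct_proj: np.ndarray, optical: np.ndarray):
    """ORB feature detection and brute-force nearest-neighbour matching
    between a CT 2-D projection and the corresponding optical image.
    Both inputs must be 8-bit grayscale images."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(ct_proj, None)
    kp2, des2 = orb.detectAndCompute(optical, None)
    # Hamming distance suits ORB's binary descriptors; smaller = more similar
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    good = [m for m in matches if m.distance < 40]   # preset quality threshold
    pts_ct = np.float32([kp1[m.queryIdx].pt for m in good])
    pts_opt = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts_ct, pts_opt
```

The returned point arrays could then be normalized by the focal lengths and fed to the fundamental-matrix estimation described next.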
Calculate the spatial coordinate transformation relationship between the two images using the matched feature point pairs. The specific steps are as follows: (1) For each feature point pair, normalize the coordinates of the feature points by dividing them by the focal length of the imaging device. (2) For the normalized coordinates of each feature point pair, construct a linear equation. Assuming that p(x, y) and p′(x′, y′) are the normalized coordinates of any pair of matching points in the CT two-dimensional projection image and the optical image, the fundamental matrix is defined by the linear equation p′ᵀFp = 0, where F is the fundamental matrix. Combine the CT 2D projection images at all angles φi and the corresponding optical images at angles φi+θ, and solve the linear equations constructed from all feature point pairs to obtain the fundamental matrix that describes the geometric relationship between the two imaging devices (CT imaging detector and optical imaging camera). (3) Perform triangulation using the intrinsic parameters of the CT imaging detector and the optical imaging camera to map the feature point pairs to 3D points in the world coordinate system. (4) Utilize the 3D coordinates of the feature point pairs in the world coordinate system to calculate the spatial coordinate transformation relationship between the CT 2D projection images and the optical images. This transformation includes the translation vector and rotation matrix.
That is, the spatial coordinate transformation relationship (such as translation, rotation, scaling, etc.) between the CT 2D projection image at angle φi and the corresponding optical image at angle φi+θ is calculated from the matched feature point pairs, and the process is repeated for all φi: for all imaging angles, perform the operations described in steps (2) to (4). Finally, align and register the two images using the calculated spatial coordinate transformation relationships. This ensures pixel-by-pixel alignment and registration between the CT 2D projection images and the optical images at each angle.
207: Coordinate Mapping
Firstly, map the coordinates of optical image pixels into the coordinate system of the CT image. Convert the pixel positions in the optical data at each angle to coordinates in the registered CT 2D projection image coordinate system. Establish correspondence between pixels in the optical data and spatial positions in the CT voxel data, mapping the pixel information from the optical data at each angle to spatial positions on the CT 3D voxel surface, obtaining optical data sets with three-dimensional spatial coordinates for all angles.
208: Because there is sufficient overlap between optical images at adjacent angles, use the registration method described in step 206 to register the optical data with three-dimensional spatial coordinates from adjacent angles. This process stitches and reconstructs complete three-dimensional optical images.
209: Further optimize the generated three-dimensional optical images.
Smooth and enhance the stitching areas using Gaussian filtering to achieve a more realistic three-dimensional optical reconstruction. Adjust filter parameters based on image characteristics and requirements to achieve the best results.
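A minimal sketch of such seam smoothing, assuming a NumPy array and a boolean mask marking the stitching areas; sigma is the adjustable filter parameter mentioned above, and all names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_seams(volume: np.ndarray, seam_mask: np.ndarray,
                 sigma: float = 1.5) -> np.ndarray:
    """Blend stitching seams: Gaussian-filter the data and keep the filtered
    values only where seam_mask is True, leaving the rest untouched."""
    return np.where(seam_mask, gaussian_filter(volume, sigma=sigma), volume)
```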
In practical applications, this method leverages the high resolution and multi- layered information provided by CT scans to generate accurate and comprehensive three-dimensional models. This approach serves as a powerful tool and resource for visualization, analysis, and applications.
This embodiment benefits from high resolution, where CT images offer high spatial resolution capturing fine details and structures of objects accurately. By using CT images as input data, real three-dimensional spatial coordinate information of objects is provided, facilitating precise three-dimensional optical reconstruction.
Additionally, this invention provides a computing device comprising memory and a processor. The memory stores a computer program, and the processor executes the computer program stored in the memory to perform the steps of any of the methods for three-dimensional optical reconstruction based on CT images as described in any of the embodiments above.
In the description of this specification, the description of the terms "one embodiment", "some embodiments", "embodiment", "example", "specific example" or "some examples" etc. means that the specific features, structures, materials or characteristics described in conjunction with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, the schematic representation of the above terms does not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine and integrate the different embodiments or examples described in this specification and the features of the different embodiments or examples, unless they contradict each other.
Although the embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and cannot be understood as limitations of the present invention. Those skilled in the art may change, modify, replace, and vary the above embodiments within the scope of the present invention.

Claims (10)

Conclusies l. Werkwijze voor driedimensionale optische reconstructie op basis van CT- beelden, omvattend: S10: het verkrijgen van CT-projectiesignalen uit meerdere hoeken en optische gegevens van een te reconstrueren object met behulp van een coaxiale scanapparaat; waarbij elke hoek van de meerdere hoeken de rotatiehoek van het coaxiale scanapparaat ten opzichte van het stationaire te reconstrueren object representeert, waarbij het coaxiale scanapparaat een vast geïnstalleerd CT-beeldvormingsapparaat en een optisch beeldvormingsapparaat met een installatiehoek ten opzichte van het CT- beeldvormingsapparaat omvat; S20: het reconstrueren van CT-volumegegevens van het te reconstrueren object uit de CT-projectiesignalen bij alle hoeken met behulp van een reconstructie-algoritme, en het extraheren van oppervlaktevoxelgegevens uit de CT-volumegegevens van het object, en het verkrijgen van CT-driedimensionale oppervlaktevoxelgegevens van het te reconstrueren object; S30: het uitlijnen en registreren van elk pixel in de optische gegevens met de corresponderende pixels in de CT-driedimensionale oppervlaktevoxelgegevens, en het toewijzen van de coördinaten van pixels in de uitgelijnde optische gegevens aan het CT- beeldcoördinatensysteem van de CT-volumegegevens, het verkrijgen van een eerste optisch gegevensset met driedimensionale ruimtelijke coördinaten in het CT- beeldcoördinatensysteem voor alle hoeken; S40: het samenvoegen van alle optische gegevens in de eerste optische gegevensset om de gereconstrueerde driedimensionale optische informatie te vormen.Conclusions l. A method for three-dimensional optical reconstruction based on CT images, comprising: S10: obtaining CT projection signals from multiple angles and optical data of an object to be reconstructed using a coaxial scanning apparatus; each angle of the multiple angles representing the rotation angle of the coaxial scanning apparatus with respect to the stationary object to be reconstructed, the coaxial scanning apparatus comprising a fixedly installed CT imaging apparatus and an optical imaging apparatus having an installation angle with respect to the CT imaging apparatus; S20: reconstructing CT volume data of the object to be reconstructed from the CT projection signals at all angles using a reconstruction algorithm, and extracting surface voxel data from the CT volume data of the object, and obtaining CT three-dimensional surface voxel data of the object to be reconstructed; S30: aligning and registering each pixel in the optical data with the corresponding pixels in the CT three-dimensional surface voxel data, and assigning the coordinates of pixels in the aligned optical data to the CT image coordinate system of the CT volume data, obtaining a first optical data set with three-dimensional spatial coordinates in the CT image coordinate system for all angles; S40: merging all the optical data in the first optical data set to form the reconstructed three-dimensional optical information. 2. 
Werkwijze volgens conclusie 1, waarbij het optische beeldvormingsapparaat een hyperspectraal beeldvormingsapparaat is, en de optische gegevens hyperspectrale gegevens zijn; waarbij het optische beeldvormingsapparaat een fluorescentiebeeldvormingsapparaat is, en de optische gegevens fluorescentie optische gegevens zijn; waarbij het optische beeldvormingsapparaat een RGB- beeldvormingsapparaat is, en de optische gegevens RGB optische gegevens zijn; waarbij de CT-projectiesignalen uit meerdere hoeken en optische gegevens N CT- projectiesignalen en N optische gegevens omvatten; waarbij, wanneer het rotatieframe van het coaxiale scanapparaat elke keer met een hoek Ao ten opzichte van het object draait, het aantal gegevenspunten dat wordt gegenereerd wanneer het coaxiale scanapparaat 360 graden ten opzichte van het object draait wordt gegeven door: N=360°/Ag.2. The method of claim 1, wherein the optical imaging apparatus is a hyperspectral imaging apparatus, and the optical data is hyperspectral data; wherein the optical imaging apparatus is a fluorescence imaging apparatus, and the optical data is fluorescence optical data; wherein the optical imaging apparatus is an RGB imaging apparatus, and the optical data is RGB optical data; wherein the multi-angle CT projection signals and optical data include N CT projection signals and N optical data; wherein, when the rotation frame of the coaxial scanning apparatus rotates by an angle Δo with respect to the object each time, the number of data points generated when the coaxial scanning apparatus rotates 360 degrees with respect to the object is given by: N=360°/Δg. 3. Werkwijze volgens conclusie 1, waarbij in stap S20, het reconstrueren van CT- volumegegevens van het te reconstrueren object uit CT-projectiesignalen bij alle hoeken met behulp van een reconstructie-algoritme omvat: het voorbewerken van CT-projectiesignalen bij elke hoek, het gebruiken van een gefilterde-terugprojectie-algoritme om voorbewerkte CT- projectiesignalen bij alle hoeken te reconstrueren, en het verkrijgen van CT-volumegegevens van het te reconstrueren object.3. The method of claim 1, wherein in step S20, reconstructing CT volume data of the object to be reconstructed from CT projection signals at all angles using a reconstruction algorithm comprises: preprocessing CT projection signals at each angle, using a filtered backprojection algorithm to reconstruct preprocessed CT projection signals at all angles, and obtaining CT volume data of the object to be reconstructed. 4. 
Werkwijze volgens conclusie 3, waarbij het gefilterde-terugprojectie-algoritme dat dwordt gebruikt om voorbewerkte CT-projectiesignalen bij alle hoeken te reconstrueren en CT-volumegegevens van het te reconstrueren object te verkrijgen, omvat: S21: het uitvoeren van een één-dimensionale Fourier-transformatie op N voorbewerkte CT-projectiesignalen om N eerste projectiesignalen in het frequentiedomein te verkrijgen; S22: het filteren van de N eerste projectiesignalen in het frequentiedomein om gefilterde N tweede projectiesignalen te verkrijgen; S23: het uitvoeren van een één-dimensionale inverse Fourier-transformatie op de N gefilterde tweede projectiesignalen om deze te herstellen naar het tijddomein, waardoor gefilterde N derde projectiesignalen in het tijddomein worden verkregen; S24: het uitvoeren van terugprojectie op elk derde projectiesignaal, waarbij terugprojectie de projectiesignalen van elke hoek langs hun oorspronkelijke projectiepaden distribueert, deze gemiddeld over elk punt door het object, en de teruggeprojecteerde signalen van hetzelfde punt uit alle hoeken accumuleert om de verzwakkingscoëfficiënten van stralen bij elk punt van het object te verkrijgen, waarbij de CT-volumegegevens van het object driedimensionaal worden gereconstrueerd; waarbij de CT-volumegegevens het volgende omvatten: driedimensionale ruimtelijke coördinaten van elk voxel in het CT-beeld van het gereconstrueerde object, en4. The method of claim 3, wherein the filtered backprojection algorithm used to reconstruct preprocessed CT projection signals at all angles and obtain CT volume data of the object to be reconstructed comprises: S21: performing a one-dimensional Fourier transform on N preprocessed CT projection signals to obtain N first projection signals in the frequency domain; S22: filtering the N first projection signals in the frequency domain to obtain filtered N second projection signals; S23: performing a one-dimensional inverse Fourier transform on the N filtered second projection signals to restore them to the time domain, thereby obtaining filtered N third projection signals in the time domain; S24: performing backprojection on every third projection signal, wherein backprojection distributes the projection signals from each angle along their original projection paths, averages them over each point through the object, and accumulates the backprojected signals from the same point from all angles to obtain the attenuation coefficients of rays at each point of the object, thereby three-dimensionally reconstructing the CT volume data of the object; wherein the CT volume data includes: three-dimensional spatial coordinates of each voxel in the CT image of the reconstructed object, and Hounsfield-eenheid (HU) -waarden van elke voxelpositie, die de absorptie van röntgenstralen door het object representeren.Hounsfield unit (HU) values of each voxel position, which represent the absorption of X-rays by the object. 5. 
Werkwijze volgens conclusie 4, waarbij in stap S20, het extraheren van oppervlaktevoxelgegevens van de CT-volumegegevens van het te reconstrueren object en het verkrijgen van CT-driedimensionale oppervlaktevoxelgegevens van het object omvat: S25: het optimaliseren van de CT-volumegegevens om geoptimaliseerde CT- volumegegevens te verkrijgen; S26: het extraheren van oppervlakte-informatie uit de geoptimaliseerde CT- volumegegevens; S27: op basis van een vooraf gedefinieerde voxelgegevensdrempel, het classificeren van de oppervlakte-informatie van de CT-volumegegevens in voxelgegevens die tot het object behoren en voxelgegevens die tot de achtergrond behoren; S28: het gebruik van een doorloopwerkwijze gebaseerd op voxelgegevens die tot het object behoren, om voxelgegevens van de grenzen van het object te verkrijgen; het instellen van HU-waarden van voxelgegevens die zich niet aan de grens van het object bevinden op O in de voxelgegevens die tot het object behoren, om oppervlaktevoxelgegevens, d.w.z. CT-driedimensionale oppervlaktevoxelgegevens, van het object te verkrijgen5. The method of claim 4, wherein in step S20, extracting surface voxel data from the CT volume data of the object to be reconstructed and obtaining CT three-dimensional surface voxel data of the object comprises: S25: optimizing the CT volume data to obtain optimized CT volume data; S26: extracting surface information from the optimized CT volume data; S27: based on a predefined voxel data threshold, classifying the surface information of the CT volume data into voxel data belonging to the object and voxel data belonging to the background; S28: using a traversal method based on voxel data belonging to the object to obtain voxel data of the boundaries of the object; setting HU values of voxel data not at the boundary of the object to O in the voxel data belonging to the object, to obtain surface voxel data, i.e. CT three-dimensional surface voxel data, of the object 6. Werkwijze volgens conclusie 5, waarbij stap S30 omvat: wanneer het coaxiale scanapparaat draait ten opzichte van het object, met de beeldhoek bij elke beeldpositionering ten opzichte van het object gi, en met de installatichoek tussen het CT-beeldvormingsapparaat en het optische beeldvormingsapparaat 6; dan zijn de coördinatensystemen van elk beeldvormingsapparaat van het coaxiale scanapparaat als volgt: de coördinaat-oorsprong is gelegen op de centrale as van het draaiende frame en bevindt zich op dezelfde hoogte als de optische as van het beeldvormingsapparaat; de Z-as van het coördinatensysteem wijst van de coördinaat oorsprong naar het midden van elk beeldvormingsapparaat; de Z-as van de röntgenbeeldvorming wijst van de coördinaat oorsprong naar het röntgencentrum, en het XY-vlak staat loodrecht op de Z-as;6. 
6. The method according to claim 5, wherein step S30 comprises:
when the coaxial scanning apparatus rotates with respect to the object, denoting the image angle of each image acquisition with respect to the object as φi, and the installation angle between the CT imaging apparatus and the optical imaging apparatus as θ, the coordinate system of each imaging apparatus of the coaxial scanning apparatus is defined as follows: the coordinate origin lies on the central axis of the rotating frame, at the same height as the optical axis of the imaging apparatus; the Z axis of the coordinate system points from the coordinate origin to the center of each imaging apparatus; the Z axis of the X-ray imaging points from the coordinate origin to the X-ray center, and the XY plane is perpendicular to the Z axis;
S31: for the surface voxel data at each image angle φi, choosing the XY plane as the projection plane, and performing orthogonal projection along the negative Z-axis direction, whereby the three-dimensional surface voxel data (x, y, z, HU) containing the object information are projected onto the corresponding XY plane at that angle, thereby forming the CT two-dimensional projection image (x, y, HU) at that angle; and repeating this process for all image angles to obtain the CT two-dimensional projection images;
S32: performing feature detection on the CT two-dimensional projection image at the image angle φi and on the optical data at the image angle (φi+θ) to obtain the respective significant feature points in the CT two-dimensional projection image and in the optical data;
S33: obtaining the feature descriptors of the respective significant feature points and performing matching to obtain the pairs of matched feature points that exceed a preset threshold;
S34: obtaining, on the basis of the pairs of matched feature points, the spatial coordinate transformation mapping between the CT two-dimensional projection image at the image angle φi and the optical data at the image angle (φi+θ).
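A sketch of S31–S33 under two stated assumptions: the surface voxels are given as (x, y, z, HU) rows on an integer grid, and ORB is used as the feature detector/descriptor with single-channel inputs — the claim does not name a specific detector, and OpenCV's `cv2.ORB_create` is only one possible choice:

```python
import cv2
import numpy as np

def project_surface_to_xy(surface_voxels):
    """S31: orthogonally project (x, y, z, HU) surface voxels along -Z."""
    xs = surface_voxels[:, 0].astype(int)
    ys = surface_voxels[:, 1].astype(int)
    img = np.zeros((ys.max() + 1, xs.max() + 1), np.float32)
    depth = np.full(img.shape, -np.inf)
    for xi, yi, zi, hui in surface_voxels:
        r, c = int(yi), int(xi)
        if zi > depth[r, c]:               # keep the voxel facing the viewer
            depth[r, c] = zi
            img[r, c] = hui
    return img

def match_ct_to_optical(ct_proj, optical_img, max_dist=40):
    """S32-S33: ORB feature detection and brute-force descriptor matching."""
    to8 = lambda a: cv2.normalize(a, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    orb = cv2.ORB_create()
    kp_ct, des_ct = orb.detectAndCompute(to8(ct_proj), None)
    kp_opt, des_opt = orb.detectAndCompute(to8(optical_img), None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # Keep only matches whose descriptor distance beats a preset threshold.
    matches = [m for m in matcher.match(des_ct, des_opt) if m.distance < max_dist]
    return kp_ct, kp_opt, matches
```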
7. The method according to claim 6, wherein step S34 comprises:
S341: dividing, for each pair of matched feature points, the coordinates of the feature points of each imaging apparatus by the focal length of that imaging apparatus to obtain the normalized coordinates of the feature points;
S342: establishing, on the basis of the normalized coordinates of the feature points, the linear equation p'^T F p = 0, wherein p(x, y) and p'(x', y') are the normalized coordinates of the feature points, p(x, y) corresponding to the CT two-dimensional projection image and p'(x', y') corresponding to the optical data, F is the fundamental matrix, and T denotes the transpose; and solving for the fundamental matrix using the linear equations constructed from all pairs of feature points;
S343: performing triangulation on the basis of the fundamental matrix, using the intrinsic parameters of the CT imaging apparatus and the optical imaging apparatus, whereby the normalized feature point coordinates are converted into 3D points in the world coordinate system;
S344: using the 3D point coordinates converted from the normalized feature point coordinates in the world coordinate system to obtain the spatial coordinate transformation mapping, which comprises the translation vector and the rotation matrix.
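For S342 the epipolar constraint can be solved linearly with the classical eight-point method, and the rotation matrix and translation vector of S344 can be recovered by one standard SVD decomposition; this is a sketch, not necessarily the patent's exact procedure. With normalized coordinates the solved matrix behaves as an essential matrix, which yields four pose candidates, and the triangulation of S343 with the intrinsic parameters (omitted here) is what disambiguates them:

```python
import numpy as np

def solve_fundamental(p, q):
    """S342: solve p'^T F p = 0 linearly from >= 8 normalized matches.

    p, q: (M, 2) normalized coordinates (CT projection and optical image).
    """
    x, y = p[:, 0], p[:, 1]
    u, v = q[:, 0], q[:, 1]
    # Each match contributes one row of the homogeneous system A f = 0.
    A = np.column_stack([u*x, u*y, u, v*x, v*y, v, x, y, np.ones(len(p))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint of a fundamental matrix.
    U, S, Vt = np.linalg.svd(F)
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

def decompose_to_pose(F):
    """S344 sketch: one standard rotation/translation decomposition."""
    U, _, Vt = np.linalg.svd(F)
    if np.linalg.det(U) < 0: U = -U
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    R = U @ W @ Vt                 # rotation matrix (one of four candidates)
    t = U[:, 2]                    # translation direction (up to scale)
    return R, t
```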
8. The method according to claim 6, wherein step S34 comprises: converting the pixel positions in the optical data at each angle into coordinates in the coordinate system of the registered CT two-dimensional projection image; establishing a correspondence between pixels in the optical data and spatial positions in the CT three-dimensional surface voxel data, whereby the pixel information in the optical data at each angle is assigned to spatial positions in the CT three-dimensional surface voxel data; and obtaining a first optical data set with three-dimensional spatial coordinates at all angles.

9. The method according to claim 8, wherein step S40 comprises: using the method of S30 to traverse all adjacent optical data, registering the adjacent optical data, identifying the overlapping regions, and, on the basis of the identified overlapping regions, merging all optical data in the first optical data set, thereby forming the information after three-dimensional optical reconstruction; and optimizing the information after three-dimensional optical reconstruction to obtain the complete information of the three-dimensional optical reconstruction.

10. A computing device comprising a memory and a processor, wherein the memory stores a computer program and the processor executes the computer program stored in the memory to carry out the steps of the method for three-dimensional optical reconstruction based on CT images according to any one of claims 1 to 9.
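The pixel-to-voxel assignment of claim 8 and the overlap merging of claim 9 might look as below; `proj_to_optical` stands in for the registered coordinate transformation obtained in step S34 and is a hypothetical callable, and simple averaging of overlapping voxels is an assumption, since the claims do not specify the merging rule:

```python
import numpy as np

def colorize_surface(surface_xyz, proj_to_optical, optical_img):
    """Claim 8 sketch: attach optical pixel values to surface voxel positions.

    surface_xyz: (M, 3) voxel coordinates visible at this image angle.
    proj_to_optical: maps a projected (x, y) to an optical pixel (row, col).
    """
    colored = []
    h, w = optical_img.shape[:2]
    for x, y, z in surface_xyz:
        r, c = proj_to_optical(x, y)
        if 0 <= r < h and 0 <= c < w:      # keep voxels that land in the image
            colored.append((x, y, z, optical_img[int(r), int(c)]))
    return colored

def merge_views(view_a, view_b):
    """Claim 9 sketch: merge adjacent views, averaging overlapping voxels."""
    merged = {}
    for x, y, z, val in view_a + view_b:
        key = (x, y, z)
        if key in merged:
            merged[key] = (merged[key] + val) / 2.0   # overlapping region
        else:
            merged[key] = val
    return [(x, y, z, v) for (x, y, z), v in merged.items()]
```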
NL2038335A 2023-10-09 2024-07-25 A method based on CT images NL2038335B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311304245.9A CN117649484B (en) 2023-10-09 2023-10-09 Three-dimensional optical reconstruction method based on CT image

Publications (2)

Publication Number Publication Date
NL2038335A NL2038335A (en) 2024-09-20
NL2038335B1 true NL2038335B1 (en) 2025-05-27

Family

ID=90043972

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2038335A NL2038335B1 (en) 2023-10-09 2024-07-25 A method based on CT images

Country Status (2)

Country Link
CN (1) CN117649484B (en)
NL (1) NL2038335B1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7198404B2 (en) * 2003-04-03 2007-04-03 Siemens Medical Solutions Usa, Inc. Real-time acquisition of co-registered X-ray and optical images
ATE500778T1 (en) * 2005-12-22 2011-03-15 Visen Medical Inc COMBINED X-RAY AND OPTICAL TOMOGRAPHY IMAGING SYSTEM
KR101849705B1 (en) * 2015-11-13 2018-05-30 한국전기연구원 Method and system for generating 3D image using spectral x-ray and optical image
WO2017117517A1 (en) * 2015-12-30 2017-07-06 The Johns Hopkins University System and method for medical imaging
CN107784684B (en) * 2016-08-24 2021-05-25 深圳先进技术研究院 A method and system for three-dimensional reconstruction of cone beam CT
CN109872353B (en) * 2019-01-04 2023-05-12 西北大学 Registration Method of White Light Data and CT Data Based on Improved Iterative Closest Point Algorithm
CN114842154B (en) * 2022-07-04 2022-11-15 江苏集萃苏科思科技有限公司 Method and system for reconstructing three-dimensional image based on two-dimensional X-ray image

Also Published As

Publication number Publication date
NL2038335A (en) 2024-09-20
CN117649484B (en) 2025-10-17
CN117649484A (en) 2024-03-05

Similar Documents

Publication Publication Date Title
Mac Aodha et al. Patch based synthesis for single depth image super-resolution
Kaur et al. A comprehensive review of denoising techniques for abdominal CT images
US20150254857A1 (en) Image processing system with registration mechanism and method of operation thereof
CN111602177B (en) Method and device for generating 3D reconstructions of objects
El-Shafai et al. Traditional and deep-learning-based denoising methods for medical images
CN120318603B (en) A Smart Image Signal Processing Method and System Based on Multimodal Fusion
WO2023082306A1 (en) Image processing method and apparatus, and electronic device and computer-readable storage medium
Maas et al. Nerf for 3d reconstruction from x-ray angiography: Possibilities and limitations
Hu et al. Predicting high-fidelity human body models from impaired point clouds
NL2038335B1 (en) A method based on CT images
Tabkha et al. Semantic enrichment of point cloud by automatic extraction and enhancement of 360° panoramas
Petrovska et al. Geometric accuracy analysis between neural radiance fields (NeRFs) and terrestrial laser scanning (TLS)
Lai et al. Computer Vision–ACCV 2016: 13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part III
Elss et al. Motion estimation in coronary CT angiography images using convolutional neural networks
US20230394718A1 (en) Segmentation of computed tomography voxel data using machine learning
CN118570326A (en) Human tissue imaging method, system, medium and computer equipment
CN113052929B (en) Linear scanning CL reconstruction method based on projection view weighting
KR20240152935A (en) Method and system for cross-referencing of two-dimensional (2D) ultrasound scans of tissue volumes
Yang et al. Differential camera tracking through linearizing the local appearance manifold
Nikolakakis et al. GaSpCT: Gaussian splatting for novel brain CBCT projection view synthesis
Velten et al. Tomographical object detection using a reduced number of projection angles
Sobani et al. 3D model reconstruction from multi-views of 2D images using radon transform
Kim et al. A high quality depth map upsampling method robust to misalignment of depth and color boundaries
Rahal et al. Object oriented structure from motion: Can a scribble help?
Sreekumar et al. Low Cost 3D Scanner Using Iterative Closest Point Algorithm