
WO2016160862A1 - Method of overlap-dependent image stitching for images captured using an in vivo capsule camera - Google Patents

Method of overlap-dependent image stitching for images captured using an in vivo capsule camera

Info

Publication number
WO2016160862A1
WO2016160862A1 (PCT/US2016/024811)
Authority
WO
WIPO (PCT)
Prior art keywords
stage
image
images
stitched
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2016/024811
Other languages
English (en)
Inventor
Chenyu Wu
Yi Xu
Kang-Huai Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Capsovision Inc
Original Assignee
Capsovision Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/675,744 external-priority patent/US9324172B2/en
Application filed by Capsovision Inc filed Critical Capsovision Inc
Priority to CN201680020175.6A priority Critical patent/CN107529944B/zh
Publication of WO2016160862A1 publication Critical patent/WO2016160862A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00004Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/041Capsule endoscopes for imaging
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/35Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38Registration of image sequences
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30092Stomach; Gastric

Definitions

  • TITLE METHOD OF OVERLAP-DEPENDENT IMAGE STITCHING FOR IMAGES CAPTURED USING A CAPSULE CAMERA
  • The present invention relates to reducing the time and effort of examining images captured by an in vivo capsule camera.
  • The large number of captured images is intelligently stitched into larger images to reduce the image count.
  • A capsule endoscope is an in vivo imaging device that addresses many of the problems of traditional endoscopes.
  • A camera is housed in a swallowable capsule, along with a radio transmitter for transmitting data to a base-station receiver or transceiver.
  • A data recorder outside the body may also be used to receive and record the transmitted data.
  • The data primarily comprises images recorded by the digital camera.
  • The capsule may also include a radio receiver for receiving instructions or other data from a base-station transmitter. Instead of radio-frequency transmission, lower-frequency electromagnetic signals may be used. Power may be supplied inductively from an external inductor to an internal inductor within the capsule, or from a battery within the capsule.
  • Alternatively, the captured images are stored on-board instead of being transmitted to an external device.
  • The capsule with on-board storage is retrieved after its excretion.
  • The capsule with on-board storage offers the patient comfort and freedom, without the need to wear a data recorder or stay within proximity of a wireless data receiver.
  • The images and data, after being acquired and processed, are usually displayed on a display device for a diagnostician or medical professional to examine. However, each image provides only a limited view of a small section of the GI tract. It is desirable to form (stitch) a single composite picture with a larger field of view from multiple capsule images. A large picture can take advantage of a high-resolution, large-screen display device to allow a user to visualize more information at the same time.
  • An image stitching process may involve removing redundant overlapped areas between images so that a larger area of the inner GI tract surface can be viewed at the same time in a single composite picture.
  • A large picture can provide a complete view, or a significant portion, of the inner GI tract surface. It should then be easier and faster for a diagnostician or medical professional to spot an area of interest, such as a polyp.
  • A captured image sequence may contain, for example, 30,000 frames, which can take a user more than one hour to review. An image stitching process can thus reduce the frame count and accelerate the review procedure.
  • Tissues in the GI tract often deform. Also, the capsule movement inside the GI tract is not steady. The camera may rotate and hesitate inside the human GI tract. In addition, while the GI tract is supposedly cleaned well before administering the capsule, various objects such as food residues and bubbles may still appear in the images. Therefore, the images captured by the capsule camera deviate from the idealized image models used in various image composition or image stitching processes. It is desirable to develop methods that take into consideration the fact that the captured images are non-ideal, and that improve the processing or algorithm convergence speed. For example, if a method can reliably stitch certain types of images, it reduces the total number of images to be processed. Furthermore, if another method can be developed to reliably stitch another type of images, the total number of images to be processed is further reduced.
  • A method of processing images captured using an in vivo capsule camera is disclosed.
  • The images captured by an in vivo capsule camera are usually large in quantity. Examination by a medical professional may take extensive time, which increases the healthcare cost.
  • Accordingly, the present invention first stitches images that can be reliably stitched.
  • Images having large overlap exceeding a threshold are stitched into larger images so that the number of images is reduced.
  • The larger images imply that larger areas of the corresponding scenes (e.g., the gastrointestinal tract of a human body) can be viewed at the same time.
  • If the degree of picture overlap between the current image and one of its neighboring N1 images is larger than a first threshold, the two images are stitched, where N1 is a positive integer. If the degree of picture overlap between the current image and none of its neighboring N1 images is larger than the first threshold, the current image is designated as a non-stitched image. In addition, if any image exists between two stitched images and is not included in the stitched image, that image is also designated as a non-stitched image.
  • The large-overlap stitching can be performed iteratively by treating the stitched images and non-stitched images as to-be-processed images in the next round. The iteration of large-overlap stitching can be performed until no images can be stitched or a stop criterion is reached.
  • The large-overlap stitching can be followed by small-overlap stitching to further stitch the images generated from the large-overlap stitching.
  • In small-overlap stitching, two images will be stitched only if the degree of picture overlap between the current image and one of the neighboring N2 images is below a second threshold, where N2 is a positive integer.
  • The small-overlap stitching process is similar to the large-overlap stitching.
  • The small-overlap stitching process can be applied iteratively until no more images can be stitched or a certain stop criterion is reached.
  • The system may further apply third-stage image stitching to the stitching results from the small-overlap stitching.
  • A system may also use only the small-overlap stitching.
  • A stitched image may be treated as a new image replacing the current image and the matched image. Further stitching can be applied to the new image, for large-overlap or small-overlap stitching with one of the neighboring N1 or N2 images of the matched image during the same iteration of image stitching. Alternatively, no further stitching is applied to the stitched images in the same iteration of image stitching.
  • The iterative stitching process can be terminated if a stop criterion is asserted. For example, if a stitched image reaches a maximum size, no more stitching will be applied to this stitched image.
  • The maximum size may correspond to a maximum picture width or a maximum picture height. In another example, after a maximum number of images are stitched together into a stitched image, no more stitching will be applied to this stitched image.
  • The indices of stitched images and the image model parameters associated with image stitching can be stored along with the original images, without the need to store the stitched images themselves.
  • Determining the images that can be stitched and determining the model parameters for stitching are computationally intensive.
  • In contrast, the process of generating the stitched images for viewing, based on the indices of stitched images and the derived model parameters, can be performed efficiently. Therefore, with the indices of stitched images and the model parameters pre-computed, a low-cost device such as a personal computer or laptop can be used by an end-user to display the stitched images for viewing and examination.
  • The degree of overlap can be determined based on a global transformation of two images. Furthermore, the global transformation can be estimated by exhaustive search for intensity-based image matching between the two images. After two images are stitched, a local transformation can be applied to the overlapped image areas.
  • The local transformation may include free-form deformation cubic B-splines. The image model parameters required for generating stitched images can be optimized using a gradient-based process.

BRIEF DESCRIPTION OF THE DRAWINGS
  • Fig. 1 illustrates an exemplary image sequence stitching consisting of three processing stages, for large-overlap images, small-overlap images, and the output images from the second stage, according to an embodiment of the present invention.
  • Fig. 2 illustrates an exemplary scenario of images captured by an in vivo camera, where a set of images has a large percentage of overlap.
  • Fig. 3 illustrates an exemplary scenario of images captured by an in vivo camera, where a set of images has a small percentage of overlap.
  • Fig. 4 illustrates an exemplary scenario of images captured by an in vivo camera, where the best-matched image may be at some distance away, instead of an adjacent image, due to camera oscillation.
  • Fig. 5 illustrates an exemplary flowchart for an image sequence stitching system according to an embodiment of the present invention, where images with large overlap are detected and stitched.
  • Two images may also be registered directly in the pixel domain.
  • Pixel-based registration is also called direct matching; it compares the similarity of two image areas based on the image intensity.
  • Several similarity measurements can be used for evaluating the quality of pixel-based registration, such as the sum of squared distance (SSD), normalized cross-correlation (NCC), and mutual information (MI).
  • The mutual information of images A and B is defined as:
  • $I(A,B) = \sum_{a,b} p(a,b)\,\log\frac{p(a,b)}{p(a)\,p(b)}$   (1)
  • The mutual information measures the distance between the joint distribution of the images' intensity values, p(a,b), and the product of the marginal distributions, p(a)p(b), which would be the joint distribution if the images were independent.
  • The MI is thus a measure of dependence between two images.
  • The underlying assumption for MI is that the dependence between the intensity values of the images is maximal when they are correctly aligned. Mis-registration results in a decrease in the mutual information. Therefore, larger mutual information implies more reliable registration.
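As a concrete illustration, the MI of Equation (1) can be estimated from a joint intensity histogram. The sketch below is not taken from the disclosure; the histogram-based estimate and the bin count are illustrative assumptions.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate I(A, B) of Equation (1) from a joint intensity histogram.

    The bin count is an illustrative choice, not a value from the disclosure."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                 # joint distribution p(a, b)
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal p(a)
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal p(b)
    nz = p_ab > 0                              # skip zero-probability bins (log 0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a * p_b)[nz])))
```

Consistent with the alignment assumption above, a perfectly aligned pair (here, an image with itself) yields a higher MI than an unrelated pair.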
  • Image registration based on features extracted from images is another popular approach.
  • Feature-based registration first determines a set of feature points in each image and then compares the corresponding feature descriptors.
  • An affine camera model, including scaling, rotation, etc., is estimated based on the correspondences.
  • Alternatively, a non-rigid camera model including local deformation can be used.
  • The number of feature points is usually much smaller than the number of pixels of the corresponding image. Therefore, the computational load for feature-based image registration is substantially less than that for pixel-based image matching. However, pair-wise matching is still time-consuming.
  • Fig. 1 illustrates an example of three-stage image stitching according to an embodiment of the present invention.
  • In the first stage, redundant images are removed by stitching images with large overlap.
  • In the second stage, the field of view is enlarged by stitching images with small overlap.
  • In the third stage, the rest of the images, i.e., the output images from the second-stage processing, are stitched.
  • While Fig. 1 illustrates an example of an embodiment of the present invention, a person skilled in the art may practice the present invention by rearranging the steps in Fig. 1 without departing from the spirit of the present invention.
  • For example, the second stage can be skipped.
  • Alternatively, the first stage may be skipped.
  • A set of images may have a large percentage of overlap, as shown in Fig. 2, where (N+1) images with indices t through t+N, covering an area of the GI tract, are processed. Due to camera oscillation, the indices of the images may not increase sequentially from the top image (i.e., index t) to the bottom image (i.e., index t+N). For example, image t+i may be further toward the bottom than image t+i+1.
  • A global transformation can be estimated by exhaustively searching for intensity-based image matching under the assumption of a global rigid transformation. Once the global transformation is estimated, the overlap between images can be computed. If the degree of overlap satisfies the criteria of the first category, a local transformation such as free-form deformation cubic B-splines can be applied to the overlap area. Image model parameters will be required for image stitching. A gradient-based method can be used for optimization of the image model parameters. Finally, the deformation field is extrapolated into the rest of the image.
  • Alternatively, the global transformation can be estimated by averaging the local transformations of individual pixels, assuming the two images are fully overlapped. This global transformation estimation is only valid for images with a large overlap area.
  • Large-overlap image stitching can be started by checking every two neighboring images. If the pair of images falls into the first category, they are stitched. The stitched image is then treated as a to-be-stitched image for the next round of large-overlap stitching and is placed in a to-be-stitched list. After all images are processed for large-overlap stitching, a new series of images in the to-be-stitched list is obtained for further stitching.
  • This to-be-stitched list includes images that do not meet the large-overlap criterion and stitched images from the previous round of stitching.
  • The large-overlap stitching is performed repeatedly until no neighboring images have large overlap; in other words, until the to-be-stitched list becomes empty.
  • The neighboring image having the largest overlap may not necessarily be an adjacent image, i.e., the immediately neighboring image.
  • For example, the capsule may be at position X at time t.
  • The camera may move forward during the next frame period (t+1) and move backward in the following frame period (t+2). Therefore, image t+2 may have the largest overlap with image t instead of image t+1. Accordingly, if the search window is larger, it is more likely to find a neighboring image with a large overlap.
  • The current image and the neighboring image with large overlap can be stitched.
  • The large-overlap stitching process may continue for the stitched image by searching the next n neighboring images. If a large overlap is found between the stitched image and one of the neighboring images, the stitched image is further stitched. If no largely overlapped neighboring image is found, the large-overlap stitching moves to the next image.
  • The next image may correspond to the image after the current image, if the current image cannot be stitched with any of the neighboring n images. If a stitched image cannot be further stitched, the next image to be processed corresponds to the image after the last image stitched.
  • For example, image 0 is designated as a non-stitched image.
  • Images 1 and 2 are stitched to form a stitched image (1, 2).
  • The stitched image (1, 2) is then compared with the next 5 neighboring images (i.e., images 3, 4, 5, 6 and 7) to identify any large overlap. If image 5 is found to have a large overlap with stitched image (1, 2), the stitched image (1, 2) is stitched with image 5 to form stitched image (1, 2, 5). If no more images can be stitched with image (1, 2, 5), the large-overlap stitching moves to the next image.
  • Image 3 becomes the first non-stitched image. Therefore, the next image to be processed is image 3. If images 3 and 4 still cannot be stitched with stitched image (1, 2, 5), the stitching process will not be applied to images 3 and 4 in this round.
  • The large-overlap stitching process will output stitched image (1, 2, 5), etc., and non-stitched images 0, 3, 4, etc.
  • The stitched images and non-stitched images are subject to a next round of large-overlap stitching.
  • The iteration may be continued until no more images can be stitched. This process may result in a stitched image that grows very large.
  • Therefore, a stop criterion may be set to stop further stitching if the number of images in the stitched image exceeds a threshold.
  • In another embodiment, when two largely overlapped images are stitched (e.g., images 0 and 2), the process outputs the stitched image as a new image (i.e., (0, 2)) and moves to the next to-be-processed image, i.e., image 1.
  • Image 4 then becomes the next image to be processed. If no large overlap can be found for image 4, image 4 is output as a non-stitched image. The process continues until all images are processed, and outputs stitched images (0, 2), (1, 5), etc., and non-stitched images 3, 4, etc. The stitched images and non-stitched images from the process are subject to the next round of large-overlap stitching. The stitching process is repeated until no more images can be stitched.
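The round-based procedure described above can be sketched as follows. This is a simplified pairwise variant under stated assumptions: a stitched image is represented only by the tuple of its member frame indices, `overlap` is a hypothetical callback standing in for the registration-based overlap estimate, and the actual warping and blending are omitted. Composites grow across rounds rather than within one round.

```python
def large_overlap_round(images, overlap, threshold=0.8, n=5):
    """One round: stitch each image with the first of its next n neighbors
    whose estimated overlap exceeds the threshold. Images skipped over
    remain in the output as non-stitched for the next round."""
    out, i = [], 0
    while i < len(images):
        merged = False
        for j in range(i + 1, min(i + 1 + n, len(images))):
            if overlap(images[i], images[j]) > threshold:
                out.append(images[i] + images[j])   # merged index tuple
                out.extend(images[i + 1:j])         # skipped, non-stitched
                i = j + 1
                merged = True
                break
        if not merged:
            out.append(images[i])                   # non-stitched this round
            i += 1
    return out

def stitch_large_overlap(frames, overlap, threshold=0.8, n=5, max_rounds=10):
    """Repeat rounds until no pair stitches or the round limit is reached."""
    images = [(f,) for f in frames]
    for _ in range(max_rounds):
        nxt = large_overlap_round(images, overlap, threshold, n)
        if len(nxt) == len(images):   # nothing stitched: stop iterating
            break
        images = nxt
    return images
```

The round limit plays the role of the stop criterion mentioned above; real systems would also bound composite size.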
  • The time stamp (or index) for the stitched image can be computed as a summary of two or more time stamps (or indices).
  • The summary of two or more time stamps can be the average, the median, or any individual time stamp, such as the first one, the middle one or the last one.
  • Alternatively, all individual time stamps can be displayed together in addition to the stitched image.
  • A stitched image is usually larger than the original image. Therefore, the composite image (i.e., the stitched image) becomes larger and larger as more images are stitched.
  • The display device is limited to a certain size. Therefore, the composite image has to be fitted into the display device; otherwise the composite image needs to be panned in order to display all parts of the image. Accordingly, the size of the stitched image is checked, and stitching is stopped for a composite image that exceeds the screen size.
  • For example, the stitched image width can be checked to determine whether it is close to, equal to, or larger than the width of the display window. If so, no further stitching will be performed on this stitched image.
  • Similarly, the stitched image height can be checked to determine whether it is close to, equal to, or larger than the height of the display window. If so, no further stitching will be performed on this stitched image.
  • The image stitching can also simply be limited to a number of images to be stitched. For example, no more than 10 images can be stitched; if the stitched image already contains 10 images, this composite image will not be further stitched.
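The stop criteria above (display-window width and height, plus a member-count limit) can be combined into a single check. The default limits below are illustrative values, not values from the disclosure.

```python
def should_stop_stitching(width, height, n_members,
                          max_w=1920, max_h=1080, max_images=10):
    """Return True when a composite image exceeds the display window
    or the member-count limit, so no further stitching is applied to it.
    The default display dimensions and image limit are illustrative."""
    return width >= max_w or height >= max_h or n_members >= max_images
```

A stitching loop would call this before each merge and route the composite to the output list once it returns True.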
  • The stitched images may be viewed on a display device.
  • The stitching process results in stitched images larger than the original size.
  • Each final stitched image may correspond to a different number of original images and have different degrees of overlap. Therefore, the size of the final stitched images varies from image to image.
  • If two stitched images with a large size difference are viewed consecutively, one image may cover a small area on the screen while the other may nearly fill the screen. This would result in a very distracting viewing experience.
  • In the first stage of stitching, only images with large overlap are stitched. As a result, the size of the stitched images does not vary too much, which avoids the potential distracting-viewing problem.
  • After the first stage, the overlap of a current image with neighboring images will be less than a threshold overlap, such as 80%.
  • Fig. 3 illustrates an example of stitched results after the stage-1 stitching, where the picture overlap is always less than 80%.
  • The overlap of some neighboring images may be substantially small (e.g., 15% or less, as shown between images t and t+1).
  • The second stage stitches images with small overlap (i.e., overlap below a low threshold) to generate a composite image with a bigger field of view.
  • The stitching for images with small overlap is performed using a similar procedure as that for the large-overlap stitching.
  • Small-overlap stitching can be started by stitching two neighboring images having small overlap (e.g., 15%). After all images are processed for small-overlap stitching, a new series of images in the to-be-stitched list is obtained for further stitching.
  • This to-be-stitched list includes images that do not meet the small-overlap criterion and stitched images from the previous round of stitching.
  • The small-overlap stitching is performed repeatedly until no neighboring images have small overlap; in other words, until the to-be-stitched list becomes empty.
  • A global transformation can be estimated by exhaustive search for intensity-based image matching under the assumption of a global rigid transformation. Once the global transformation is estimated, the overlap between images can be computed. A local transformation, such as free-form deformation cubic B-splines, can be applied to the overlapped area. The model parameters can be optimized using a gradient-based method. Finally, the deformation field is extrapolated into the rest of the image.
  • In another embodiment, instead of searching for the overlap exhaustively in the image, a percentage for the expected small overlap, such as 15%, can be pre-defined. The intensity-based image matching is then applied to fixed areas from the two adjacent images. For example, 20% of the image from the bottom of the first image and 20% from the top of the second image can be selected.
  • If the matching satisfies certain criteria, such as the NCC matching score being larger than a threshold, the two images do not belong to the small-overlap category and will not be stitched.
  • A high NCC matching score means the two image areas are very similar and the two images overlap in that area.
  • A fixed image area for matching can be selected.
  • A rigid transformation model can be applied by searching in both the x and y directions within this fixed area in order to find the best overlap area.
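A sketch of the fixed-area NCC check described above: compare a fixed band from the bottom of the first image with a band from the top of the second, and treat a high NCC score as evidence that the pair does not belong to the small-overlap category. The band fraction and NCC threshold are illustrative assumptions; the x/y search of the rigid model is omitted here.

```python
import numpy as np

def small_overlap_candidate(img_top, img_bottom, band=0.20, ncc_thresh=0.9):
    """Return True if the pair qualifies as a small-overlap candidate.

    Compares the bottom `band` fraction of img_top with the top band of
    img_bottom using normalized cross-correlation (NCC). A high score
    means the bands already overlap strongly, so the pair is NOT a
    small-overlap candidate. Thresholds are illustrative."""
    k = max(1, int(round(band * img_top.shape[0])))
    a = img_top[-k:].astype(float).ravel()      # bottom band, first image
    b = img_bottom[:k].astype(float).ravel()    # top band, second image
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    ncc = float(a @ b / denom) if denom > 0 else 0.0
    return ncc < ncc_thresh
```

In a full implementation, the band comparison would be repeated over small x/y shifts (the rigid search mentioned above) and the maximum NCC taken.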
  • As in the first stage, the size of the composite image (i.e., the stitched image) becomes larger and larger as more images are stitched.
  • The size-checking process can therefore also be applied to the small-overlap procedure. If the stitched size exceeds a certain size, small-overlap stitching is not performed to form the stitched image.
  • The stitched image width or height can be checked to determine whether it is close to, equal to, or larger than the width or height of the display window. If so, no further stitching will be performed on this stitched image.
  • The number of images stitched can also be checked, and no further stitching is applied to a stitched image if the number of images in it exceeds a limit.
  • The searching for a small-overlap neighboring image can also be extended to include n neighboring images, as for the large-overlap stitching. After two images are stitched, the process can search the next n neighboring images of the stitched image for any small-overlap neighboring image, similar to the case of large-overlap stitching. Alternatively, the stitched images can be output as new images for further stitching in the next stage, similar to the case of large-overlap stitching.
  • The stitching of images with small overlap in the second stage of the sequence stitching in Fig. 1 can be performed directly on the original captured images. Since the first-stage stitching involves computing local deformation based on optimization, it is computationally intensive. Therefore, an embodiment identifies images with small overlap directly from the original captured images.
  • As in the first stage, the time stamp for the stitched image can be computed as a summary of two or more time stamps.
  • The summary of two or more time stamps can be the average, the median, or any individual time stamp, for example the first one, the middle one or the last one. Alternatively, all individual time stamps can be displayed together in addition to the stitched image.
  • The indices of images to be stitched and the registration results are computed offline, so that the original images can be warped and blended very fast in real time. There is no need to store both the original and stitched videos. Instead, the original images, the indices of images to be stitched, and the registration parameters are stored, to conserve storage space. In this way, a user can be given the option to view either the original images or the stitched images, without the need for intensive computations at the point of use and without a large increase in storage requirements.
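The storage scheme above, keeping the original frames and storing only the stitch indices plus registration parameters, might be sketched as a small JSON manifest that the viewer replays at display time. The manifest layout (member frame indices plus one transform per member) is an assumption for illustration, not the format of the disclosure.

```python
import json

def save_stitch_manifest(path, stitched_groups):
    """Persist only the member indices and registration parameters for each
    stitched group; the original frames stay untouched on disk.
    The per-member 3x3 transform layout is an illustrative assumption."""
    manifest = [{"members": g["members"], "transforms": g["transforms"]}
                for g in stitched_groups]
    with open(path, "w") as f:
        json.dump(manifest, f)

def load_stitch_manifest(path):
    """Reload the manifest so a viewer can warp and blend frames on demand."""
    with open(path) as f:
        return json.load(f)
```

At view time, the display application reads the manifest, fetches the listed original frames, and applies the stored transforms, so no stitched video needs to be stored.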
  • In the first stage, images with large overlap are stitched first.
  • A second-stage stitch is applied to the final output from the first stage.
  • In the second stage, images with small overlap are stitched.
  • The final output from the second-stage stitch is processed by the third-stage stitch.
  • The images to be stitched in the third stage have moderate overlap among them. Therefore, there is a higher risk of losing information during the stitching process.
  • Accordingly, various object detectors, such as a bleeding detector, a tumor detector, etc., are applied to identify suspicious clinical features before the stitching process is applied to these images. After object detection, non-linear matching similar to that in the first two stages of image stitching is applied. The suspicious clinical features are then "inpainted" back into the stitched images if they are blended out during stitching. Accordingly, the risk of losing important information is reduced.
  • Fig. 5 illustrates an exemplary flowchart of a system for image stitching incorporating an embodiment of the present invention.
  • A plurality of images captured by the camera is received, as shown in step 510.
  • The images may be retrieved from memory or received from a processor.
  • A first-stage stitched image is generated by stitching a current to-be-processed image with a previous to-be-processed image or a previously stitched image, if the degree of picture overlap between the current to-be-processed image and the previous to-be-processed image or the previously stitched image is larger than a first threshold, as shown in step 520.
  • The current to-be-processed image is within the N1 neighboring to-be-processed images of the previous to-be-processed image or the previously stitched image, and N1 is a positive integer.
  • A first-stage non-stitched image is generated if the degree of picture overlap between the current to-be-processed image and none of the N1 neighboring to-be-processed images is larger than the first threshold, as shown in step 530.
  • First information associated with said one or more first-stage stitched images is provided in step 540, if said one or more first-stage stitched images exist, and said one or more first-stage non-stitched images are provided if said one or more first-stage non-stitched images exist.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Optics & Photonics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Signal Processing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)

Abstract

A method of processing images captured by an in vivo capsule camera is disclosed. Images having large overlap exceeding a threshold are stitched into larger images. If the current image has large overlap with none of its neighboring images, the current image is designated as a non-stitched image. Any image that lies between two stitched images and is not included in a stitched image is also designated as a non-stitched image. The large-overlap stitching can be performed on the images iteratively by treating the stitched images and the non-stitched images as the to-be-processed images of the next iteration. Second-stage stitching can be applied to stitch images with mild overlap. The mild-overlap image stitching can also be applied iteratively. Third-stage stitching can further be applied to stitch the output images from the second-stage processing.
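The iterative, multi-stage scheme described in the abstract can be sketched as repeated passes with progressively lower overlap thresholds, where each pass's stitched and non-stitched outputs become the next pass's to-be-processed images. This is an illustrative sketch only: `overlap` and `stitch` are placeholders for the real registration and blending, the threshold schedule is invented for the example, and the third stage's object detection and inpainting are omitted.

```python
def stitch_pass(images, overlap, stitch, threshold):
    # One greedy pass: merge consecutive images whose mutual overlap
    # exceeds `threshold`; every image is re-emitted, merged or not.
    out, composite = [], None
    for img in images:
        if composite is not None and overlap(composite, img) > threshold:
            composite = stitch(composite, img)
        else:
            if composite is not None:
                out.append(composite)
            composite = img
    if composite is not None:
        out.append(composite)
    return out

def multi_stage(images, overlap, stitch, thresholds=(0.8, 0.5, 0.3)):
    # The first stage demands large overlap; later stages re-stitch the
    # previous stage's outputs at lower (mild/moderate) thresholds.
    current = list(images)
    for t in thresholds:
        current = stitch_pass(current, overlap, stitch, t)
    return current

# Toy model for demonstration: images as 1-D intervals.
def interval_overlap(a, b):
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    return inter / min(a[1] - a[0], b[1] - b[0])

def interval_stitch(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))
```

The design point this illustrates is that groups with the largest overlap form first, so later, riskier merges operate on fewer, larger composites.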
PCT/US2016/024811 2013-05-29 2016-03-30 Overlap-dependent image stitching method for images captured using an in vivo capsule camera Ceased WO2016160862A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201680020175.6A CN107529944B (zh) 2013-05-29 2016-03-30 Overlap-dependent image stitching method for images captured using a capsule camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/675,744 2015-04-01
US14/675,744 US9324172B2 (en) 2013-05-29 2015-04-01 Method of overlap-dependent image stitching for images captured using a capsule camera

Publications (1)

Publication Number Publication Date
WO2016160862A1 true WO2016160862A1 (fr) 2016-10-06

Family

ID=55745836

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/024811 Ceased WO2016160862A1 (fr) 2016-03-30 Overlap-dependent image stitching method for images captured using an in vivo capsule camera

Country Status (1)

Country Link
WO (1) WO2016160862A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458757A (zh) * 2019-07-15 2019-11-15 中国计量大学 Threshold-adaptive feature point matching image stitching method
CN111091494A (zh) * 2018-10-24 2020-05-01 纬创资通股份有限公司 Image stitching processing method and system therefor
CN114445274A (zh) * 2020-11-06 2022-05-06 中煤航测遥感集团有限公司 Image stitching method and apparatus, electronic device, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090041314A1 * 2007-08-02 2009-02-12 Tom Vercauteren Robust mosaicing method, notably with correction of motion distortions and tissue deformations for in vivo fibered microscopy
WO2014193670A2 * 2013-05-29 2014-12-04 Capso Vision, Inc. Reconstruction of images from a multi-camera capsule for in vivo imaging

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SZELISKI: "Image Alignment and Stitching: A Tutorial", MICROSOFT RESEARCH TECHNICAL REPORT MSR-TR-2004-92, 10 December 2006 (2006-12-10)

Similar Documents

Publication Publication Date Title
US9324172B2 (en) Method of overlap-dependent image stitching for images captured using a capsule camera
EP3246871B1 (fr) Image composition
Lurie et al. 3D reconstruction of cystoscopy videos for comprehensive bladder records
US10716457B2 (en) Method and system for calculating resected tissue volume from 2D/2.5D intraoperative image data
EP3148399B1 (fr) Reconstruction of images from an in vivo multi-camera capsule with confidence matching
US20160295126A1 (en) Image Stitching with Local Deformation for in vivo Capsule Images
KR102202398B1 (ko) Image processing apparatus and image processing method thereof
Iakovidis et al. Efficient homography-based video visualization for wireless capsule endoscopy
CN111904379A (zh) Scanning method and apparatus for multi-modality medical device
WO2016160862A1 (fr) Overlap-dependent image stitching method for images captured using an in vivo capsule camera
Bergen et al. A graph-based approach for local and global panorama imaging in cystoscopy
US20190287229A1 (en) Method and Apparatus for Image Stitching of Images Captured Using a Capsule Camera
CN114782566B (zh) CT data reconstruction method and apparatus, electronic device, and computer-readable storage medium
Spyrou et al. Panoramic visual summaries for efficient reading of capsule endoscopy videos
Horovistiz et al. Computer vision-based solutions to overcome the limitations of wireless capsule endoscopy
US11120547B2 (en) Reconstruction of images from an in vivo multi-camera capsule with two-stage confidence matching
US20200013143A1 (en) Method of Image Processing and Display for Images Captured by a Capsule Camera
Kim et al. Performance Improvement for Two‐Lens Panoramic Endoscopic System during Minimally Invasive Surgery
Seets et al. OFDVDnet: A Sensor Fusion Approach for Video Denoising in Fluorescence Guided Surgery
Hu et al. Homographic patch feature transform: a robustness registration for gastroscopic surgery
Yan et al. Gastrointestinal image stitching based on improved unsupervised algorithm
Ali Total variational optical flow for robust and accurate bladder image mosaicing
Kim et al. A Stable Video Stitching Technique for Minimally Invasive Surgery
CN120031783A (zh) Method, computer program product and scanner device for assessing the usability of 4D tomographic image data
Bontala Image Mosaicing of Neonatal Retinal Images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16715964

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16715964

Country of ref document: EP

Kind code of ref document: A1