WO2018143565A1 - Image processing method (Procédé de traitement d'images) - Google Patents
Image processing method
- Publication number
- WO2018143565A1 (PCT/KR2017/015389)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- size information
- image
- data
- transform data
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
Definitions
- the present invention relates to an image processing method, and more particularly, to an image processing method for extracting a common element or a non-common element from two images.
- In a conventional subtraction method, a motion vector is calculated by selecting key points in the reference image and the target image (e.g., grid-point setting, feature-point detection, region-of-interest designation) and identifying correspondences between the key points of the two images (e.g., block matching, feature-point matching, region-of-interest tracking); the reference image is then warped to the target image (e.g., free-form deformation (FFD), thin-plate spline (TPS), affine transformation, pixel shift), and the two images are subtracted from each other to generate a result image in which the difference between the reference image and the target image is extracted.
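As a rough illustration of the conventional pipeline above, the sketch below uses the simplest warping model the text mentions (a global pixel shift): search a small window of integer shifts, keep the one that best aligns the reference to the target, then subtract. The function and image names are illustrative, not from the patent.

```python
import numpy as np

def shift_and_subtract(reference, target, search=3):
    """Align `reference` to `target` with the best integer pixel shift,
    then subtract to expose the non-common content (hypothetical sketch)."""
    best_shift, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            warped = np.roll(reference, (dy, dx), axis=(0, 1))
            err = np.mean((warped - target) ** 2)  # alignment cost
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    warped = np.roll(reference, best_shift, axis=(0, 1))
    return target - warped, best_shift  # difference image = non-common content

# toy example: target is the reference shifted by (2, 1) plus one new pixel
ref = np.zeros((32, 32)); ref[8:12, 8:12] = 1.0
tgt = np.roll(ref, (2, 1), axis=(0, 1)); tgt[20, 20] = 1.0
diff, shift = shift_and_subtract(ref, tgt)
```

Any residual motion the global shift cannot model (occlusion, non-rigid deformation) survives into `diff` as noise, which is exactly the weakness the patent's method addresses.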
- Occlusions often cause inconsistencies between the two images; moreover, in two-dimensional projection images such as X-rays, a plurality of objects are superimposed on a single pixel, so the problem of obscured correspondences often arises when the superimposed objects move in different directions.
- An object of the present invention is to provide an image processing method capable of effectively selecting common or non-common elements from two images even in the presence of noise and dynamic noise.
- According to an aspect of the present invention, a method of processing an image includes: obtaining a plurality of first transform data having different scale spaces and orientations based on a first image; obtaining a plurality of second transform data having different scale spaces and orientations based on a second image; comparing size information of the transform data having the same scale space and orientation among the plurality of first transform data and the plurality of second transform data; extracting any one of the compared size information according to an extraction rule; obtaining a plurality of third transform data based on the extracted size information; and restoring the plurality of third transform data to the spatial domain.
- the acquiring of the plurality of first transform data may include applying a steerable filter, whose scale space and orientation are adjustable, to the first image to obtain the plurality of first transform data.
- the plurality of second transform data may be obtained by applying the steerable filter to the second image.
- the plurality of first transform data and the plurality of second transform data may be represented by a complex steerable pyramid.
- Each of the first transform data and the second transform data may include a magnitude image in which the magnitude information is represented as an image and a phase image in which phase information is represented as an image on the complex steerable pyramid.
- the size image of the first converted data and the size image of the second converted data may be compared for each pixel.
- the extraction rule may be to extract size information of a smaller value among pixel size information of the first transform data and pixel size information of the second transform data.
- the extraction rule may extract the size information of the first transform data when the difference between the pixel size information of the first transform data and the pixel size information of the second transform data is equal to or less than a threshold, and extract the smaller of the pixel size information of the first transform data and the pixel size information of the second transform data when the difference exceeds the threshold.
- the extraction rule may be to extract size information of a larger value among pixel size information of the first transform data and pixel size information of the second transform data.
- the extraction rule may extract the size information of the second transform data when the difference between the pixel size information of the first transform data and the pixel size information of the second transform data is equal to or less than a threshold, and extract the larger of the pixel size information of the first transform data and the pixel size information of the second transform data when the difference exceeds the threshold.
- the plurality of third transform data may be obtained by extracting the phase information corresponding to the extracted magnitude information.
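A minimal sketch of these extraction rules, assuming the transform data are held as complex numpy arrays so that choosing a coefficient by its magnitude carries the corresponding phase along automatically. The function name and the `min_thresh` rule-label are illustrative, not taken from the patent:

```python
import numpy as np

def extract(R1, R2, rule="min", threshold=None):
    """Per-pixel extraction from two complex transform bands of the same
    scale space and orientation. Selecting the whole complex coefficient
    by its magnitude keeps the matching phase information automatically."""
    A1, A2 = np.abs(R1), np.abs(R2)
    if rule == "min":                  # keep the smaller-magnitude coefficient
        pick_first = A1 <= A2
    elif rule == "max":                # keep the larger-magnitude coefficient
        pick_first = A1 >= A2
    elif rule == "min_thresh":         # thresholded variant: favour the first
        pick_first = (np.abs(A1 - A2) <= threshold) | (A1 <= A2)
    else:
        raise ValueError(rule)
    return np.where(pick_first, R1, R2)

R1 = np.array([1 + 0j, 3j, 2 + 2j])    # toy first transform band
R2 = np.array([2 + 0j, 1j, 2 - 2j])    # toy second transform band
R3 = extract(R1, R2, rule="min")       # third transform data (min rule)
```

With the `min` rule, `R3` takes `1+0j`, `1j`, and `2+2j`: at each pixel the coefficient with the smaller magnitude wins, and its phase travels with it.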
- the plurality of first transform data, the plurality of second transform data, and the plurality of third transform data may be configured by a combination of M different scale spaces and N different directionalities.
- FIG. 1 is a block diagram of an image processing apparatus according to an embodiment of the present invention.
- FIG. 2 is a flowchart illustrating an image processing method according to an embodiment of the present invention.
- FIG. 3 is a view schematically illustrating an example of a first image.
- FIG. 4 is a view schematically illustrating an example of a second image.
- FIG. 5 is a diagram illustrating a steerable filter in the frequency domain.
- FIG. 6 is an actual image used as the first image.
- FIG. 7 is a diagram illustrating a complex steerable pyramid obtained by applying the steerable filter of FIG. 5 to the image of FIG. 6.
- FIG. 9 is a diagram schematically illustrating a common element extracted through an image processing method according to an embodiment of the present invention.
- FIG. 10 is a schematic representation of a non-common element extracted through an image processing method according to an embodiment of the present invention.
- FIGS. 11 and 12 are actual images used as the first image and the second image.
- FIG. 13 is an image representing a common element extracted by applying an image processing method according to an exemplary embodiment of the present invention with respect to the images of FIGS. 11 and 12.
- FIG. 14 is an image representing non-common elements extracted by applying an image processing method according to an exemplary embodiment to the images of FIGS. 11 and 12.
- FIG. 15 is an image representing non-common elements extracted by applying an existing image processing method to the images of FIGS. 11 and 12.
- FIG. 1 is a block diagram of an image processing apparatus according to an embodiment of the present invention.
- FIG. 2 is a flowchart illustrating an image processing method according to an embodiment of the present invention.
- an image processing apparatus according to an embodiment of the present invention may include an image receiver 10, an image converter 20, a comparator 30, an extractor 40, a reconstructor 50, a subtraction unit 60, and a display unit 70.
- the image processing method includes acquiring a first image and a second image, obtaining a plurality of first transform data (S12), acquiring a plurality of second transform data (S13), comparing the size information (S14), extracting the size information and the phase information (S15), obtaining a plurality of third transform data (S16), and restoring to the spatial domain (S17).
- the functions of the image receiving unit 10, the image converting unit 20, the comparing unit 30, the extracting unit 40, the reconstructing unit 50, the subtracting unit 60, and the display unit 70 constituting the image processing apparatus according to an exemplary embodiment of the present invention will be described below together with the image processing method according to an embodiment of the present invention.
- the image receiver 10 receives the first image and the second image.
- the first image and the second image are images including common elements present in both images, and may be images of the same region captured with a time difference, or images captured so that some regions overlap.
- At least one of the first image and the second image may include a non-common element that does not exist in the other image.
- the first image may be, for example, an X-ray or CT image of a body part of a patient,
- and the second image may be an X-ray or CT image taken after injecting a contrast agent into the same patient.
- FIG. 3 is a diagram schematically illustrating an example of a first image
- FIG. 4 is a diagram schematically illustrating an example of a second image.
- the first image is an image of the spine 1 and the internal organs 2 and 3.
- Internal organs 2, 3 may be liver 2, stomach 3, and the like.
- the second image is an image obtained by capturing the same region as the first image, and is photographed after the injection of a blood vessel contrast agent to further capture the blood vessel 4.
- the positions of the liver 2′ and the stomach 3′ may differ from the positions 2 and 3 in the first image owing to movement or breathing of the patient.
- the image processing apparatus may receive, via the image receiving unit 10, the first and second images captured by separate photographing equipment, or may itself be configured to include the photographing equipment.
- the display unit 70 is a component for visually displaying image information. According to an embodiment, the display unit 70 may receive image information of the first image and the second image from the image receiver 10 and visually display the first image and the second image.
- the image converter 20 applies a steerable filter to each of the first image and the second image to obtain a plurality of first transform data and a plurality of second transform data.
- the image converting unit 20 receives the first image and the second image from the image receiving unit 10 and, in step S12 of obtaining the plurality of first transform data, applies the steerable filter to the first image.
- a plurality of second transform data may be obtained by applying a steerable filter to the second image.
- Steerable filters have adjustable characteristics for scale space and directionality.
- when the steerable filter is applied to an image, the image is transformed into transform data having a spatial frequency band and an orientation band.
- FIG. 5 is a diagram illustrating a steerable filter in the frequency domain.
- FIG. 5 illustrates steerable filters having three scale spaces and four orientations as an example of a steerable filter.
- the steerable filters F11 to F34, composed of combinations of three different scale spaces and four different orientations, may be represented on the two-dimensional frequency domain, as shown in FIG. 5.
- the steerable filters of F11, F21, and F31 are filters having the same directionality, and the steerable filters of F11, F12, F13, and F14 are filters having the same scale space.
- the steerable filters of F12, F22, and F32 have the same directivity, and the steerable filters of F21, F22, F23, and F24 are filters having the same scale space.
- the steerable filters of F13, F23, and F33 have the same directionality
- the steerable filters of F31, F32, F33, and F34 are filters having the same scale space.
- the steerable filters of F14, F24, and F34 are filters having the same directivity.
- steerable filters having three scale spaces and four orientations are illustrated here, but the number of scale spaces and the number of orientations can be adjusted.
- the low-frequency region Z_L and the high-frequency region Z_H, which do not include the main information, may be excluded, and the region Z_S symmetric to the steerable filters F11 to F34 may also be excluded.
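For experimentation, a filter bank like that of FIG. 5 could be approximated by simple radial-bandpass × angular-window filters on the 2-D frequency plane, keeping only one half-plane in the spirit of excluding the symmetric region Z_S. This is a hedged stand-in, not the patent's exact steerable filters:

```python
import numpy as np

def filter_bank(shape, n_scales=3, n_orients=4):
    """Simplified stand-in for steerable filters F11..F34: log-Gaussian
    radial bandpass times a squared-cosine angular window, per (scale,
    orientation) pair, defined on the FFT frequency grid."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r = np.hypot(fx, fy) + 1e-12              # radial frequency (avoid log 0)
    ang = np.arctan2(fy, fx)                  # orientation of each frequency
    bank = {}
    for s in range(n_scales):
        r0 = 0.25 / (2 ** s)                  # centre frequency per scale
        radial = np.exp(-(np.log2(r / r0)) ** 2)
        for o in range(n_orients):
            theta0 = np.pi * o / n_orients
            d = np.angle(np.exp(1j * (ang - theta0)))  # wrapped angle diff
            angular = np.cos(np.clip(d, -np.pi / 2, np.pi / 2)) ** 2
            angular[np.abs(d) > np.pi / 2] = 0.0       # keep one half-plane
            bank[(s, o)] = radial * angular
    return bank

bank = filter_bank((32, 32))   # 3 scales x 4 orientations = 12 filters
```

Each entry of `bank` plays the role of one ψ_w,θ in the description that follows; a production implementation would instead use a proper complex steerable pyramid construction.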
- the first transform data (or second transform data), which is the result of applying the steerable filter to the first image (or the second image), may be expressed by the following equation:
- R_w,θ(x, y) = (ψ_w,θ ∗ I)(x, y)
- where R_w,θ denotes the first transform data (or second transform data) as a complex value, I denotes the first image (or the second image), ψ_w,θ denotes the steerable filter, and (x, y) denotes the pixel coordinates of the image; w corresponds to the scale space and θ corresponds to the orientation.
- the size information A_w,θ and the phase information φ_w,θ can be calculated as
- A_w,θ(x, y) = |R_w,θ(x, y)|, φ_w,θ(x, y) = arg R_w,θ(x, y)
- so that the first transform data and the second transform data include size information A_w,θ and phase information φ_w,θ at each pixel coordinate (x, y).
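These quantities can be computed directly with a frequency-domain multiplication, which is equivalent to the spatial convolution above. In the sketch below `psi_hat` is any frequency-domain filter, and the all-pass filter used in the example is purely illustrative:

```python
import numpy as np

def apply_filter(image, psi_hat):
    """Compute R_{w,theta} = psi_{w,theta} * I via pointwise multiplication
    in the frequency domain, then split the complex result into size
    (magnitude) information A and phase information phi."""
    R = np.fft.ifft2(np.fft.fft2(image) * psi_hat)  # complex transform data
    A = np.abs(R)        # A_{w,theta}(x, y): size information
    phi = np.angle(R)    # phi_{w,theta}(x, y): phase information
    return R, A, phi

img = np.random.default_rng(0).standard_normal((16, 16))
psi = np.ones((16, 16))          # trivial all-pass filter, for illustration
R, A, phi = apply_filter(img, psi)
```

With the all-pass filter, `R` simply reproduces the input image, so `A` equals the pixel-wise absolute value of `img`; with a band-pass ψ_w,θ the same two lines yield one size image and one phase image of the pyramid.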
- FIG. 6 is an actual image used as a first image
- FIG. 7 is a diagram illustrating a complex steerable pyramid obtained by applying the steerable filter of FIG. 5 to the image of FIG. 6.
- the plurality of first transformed data obtained in step S12 may be represented by a complex steerable pyramid.
- the complex steerable pyramid shown in FIG. 7 is the result of applying the steerable filters F11 to F34 shown in FIG. 5, and thus has three scale spaces and four orientations.
- the plurality of first transform data on the complex steerable pyramid is represented by a set of 3 × 4 images.
- each first transformed data is identified by a dotted line.
- each first transform data has size information A_w,θ and phase information φ_w,θ at each pixel coordinate (x, y), as shown in FIG. 7.
- Each first transform data includes a size image (A11 to A34) in which the per-pixel size information A_w,θ is represented as an image and a phase image (P11 to P34) in which the per-pixel phase information φ_w,θ is represented as an image. That is, one size image and one phase image are configured as an image set to constitute each first transform data.
- the number of pixels of the size images A11 to A34 may be different according to the scale space.
- the number of pixels of the phase images P11 to P34 may also be different depending on the scale space.
- the finer the scale space, the larger the number of pixels the size images A11 to A34 and the phase images P11 to P34 may have; as the number of pixels increases, more texture information of the first image is retained.
- the plurality of second transform data is also represented by a complex steerable pyramid similar to that shown in FIG. 7. Since the complex steerable pyramid has already been described above, its detailed description is omitted.
- the display unit 70 may receive the plurality of first transform data and the plurality of second transform data (the size images A11 to A34 and the phase images P11 to P34) from the image converter 20 and visually display them as a complex steerable pyramid.
- In step S14 of comparing the size information, the comparator 30 compares the plurality of first transform data and the plurality of second transform data obtained in steps S12 and S13.
- a plurality of first transform data C1 and a plurality of second transform data C2 represented by the complex steerable pyramid are obtained through steps S12 and S13.
- the comparator 30 compares, for each pixel, the size images of transform data having the same scale space and orientation among the plurality of first transform data C1 and the plurality of second transform data C2.
- For example, the size image A11 of specific transform data among the plurality of first transform data C1 is compared with the size image A11 of the transform data having the same scale space and orientation among the plurality of second transform data C2.
- the comparison unit compares the size information A_w,θ of each pixel of the size image A11 of the first transform data C1 one-to-one with the size information A_w,θ of each pixel of the size image A11 of the second transform data C2.
- the comparison unit 30 performs this operation for each of the first transform data and the second transform data, thereby comparing all the size images of the plurality of first transform data and the plurality of second transform data.
- In the extracting step S15, the extractor 40 extracts any one of the pixel size information A_w,θ compared by the comparator 30 in step S14 according to a predetermined extraction rule.
- the extracted pixel size information A_w,θ is extracted together with the corresponding phase information φ_w,θ.
- For example, if the size information A_w,θ of a pixel of the size image A11 of the first transform data C1 is extracted as a result of the comparison, the phase information φ_w,θ of the corresponding pixel of the phase image P11 of the first transform data C1 is extracted along with it.
- the extraction rule may vary depending on the embodiment.
- For example, any one of the following two extraction rules may be applied.
- One extraction rule extracts the size information A_w,θ having the smaller value between the per-pixel size information A_w,θ of the first transform data C1 and the per-pixel size information A_w,θ of the second transform data C2.
- The other extraction rule compares the difference between the per-pixel size information A_w,θ of the first transform data C1 and that of the second transform data C2 with a preset threshold: if the difference is equal to or less than (or less than) the threshold, the per-pixel size information A_w,θ of the first transform data C1, obtained from the first image in which the non-common element 4 is not present, is extracted; if the difference exceeds (or is equal to or greater than) the threshold, the size information A_w,θ having the smaller value is extracted. Depending on the embodiment, the per-pixel size information A_w,θ of the second transform data C2 may instead be extracted when the difference is equal to or less than (or less than) the threshold.
- Alternatively, any one of the following two extraction rules may be applied.
- One extraction rule extracts the size information A_w,θ having the larger value between the per-pixel size information A_w,θ of the first transform data C1 and the per-pixel size information A_w,θ of the second transform data C2.
- The other extraction rule compares the difference between the per-pixel size information A_w,θ of the first transform data C1 and that of the second transform data C2 with a preset threshold: if the difference is equal to or less than (or less than) the threshold, the per-pixel size information A_w,θ of the second transform data C2, obtained from the second image in which the non-common element 4 is present, is extracted; if the difference exceeds (or is equal to or greater than) the threshold, the size information A_w,θ having the larger value is extracted. Depending on the embodiment, the per-pixel size information A_w,θ of the first transform data C1 may instead be extracted when the difference is equal to or less than (or less than) the threshold.
- the extractor 40 performs step S15 on all pixels of the plurality of first transform data C1 and the plurality of second transform data C2, so that, as shown in FIG. 8, a plurality of third transform data C3 are obtained.
- that is, the extractor 40 compares all the pixels of the plurality of first transform data C1 and the plurality of second transform data C2 one-to-one and extracts the size information A_w,θ and the phase information φ_w,θ of one pixel at each comparison; as shown in FIG. 8, a plurality of third transform data C3, represented by a complex steerable pyramid similar to the plurality of first transform data C1 or the plurality of second transform data C2, are thereby obtained.
- In step S17 of restoring to the spatial domain, the restoring unit 50 restores the plurality of third transform data C3 to the spatial domain.
- the spatial domain means the same domain as the image shown in FIG. 3, 4, or 6.
- the reconstructor 50 may reconstruct the plurality of third transform data C3 into the spatial domain through an inverse Fourier transform after converting the plurality of third transform data C3 into the frequency domain.
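Under the additional assumption that the filters tile the frequency plane (the squared magnitudes of the ψ's sum to one), this inverse-Fourier-transform reconstruction can be sketched as follows; the binary masks below are a toy stand-in for steerable filters, and the patent's own reconstruction may differ in detail:

```python
import numpy as np

def reconstruct(bands, filters_hat):
    """Restore a spatial-domain image from per-band complex transform data:
    re-filter each band with the conjugate filter in the frequency domain,
    sum the spectra, and apply an inverse FFT (tight-frame assumption)."""
    spectrum = np.zeros(filters_hat[0].shape, dtype=complex)
    for R, psi_hat in zip(bands, filters_hat):
        spectrum += np.fft.fft2(R) * np.conj(psi_hat)
    return np.real(np.fft.ifft2(spectrum))

rng = np.random.default_rng(1)
img = rng.standard_normal((16, 16))
mask = np.zeros((16, 16)); mask[:4, :4] = 1.0   # crude low-frequency mask
filters = [mask, 1.0 - mask]                    # binary masks: |psi|^2 sums to 1
bands = [np.fft.ifft2(np.fft.fft2(img) * f) for f in filters]
recon = reconstruct(bands, filters)             # recovers img exactly here
```

Because the two binary masks partition the spectrum, the analysis/synthesis round trip is exact in this toy case, which is the property the tight-frame assumption buys.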
- the display unit 70 may receive image information of the restored image from the restorer 50 and visually display the restored image.
- FIG. 9 is a view schematically illustrating a common element extracted through an image processing method according to an embodiment of the present invention
- FIG. 10 is a view schematically illustrating a non-common element extracted through an image processing method according to an embodiment of the present invention.
- the image reconstructed into the spatial domain in step S17 is illustrated in FIG. 9.
- the common elements 1, 2 (2′), and 3 (3′) of the first image (see FIG. 3) and the second image (see FIG. 4) are represented in the reconstructed image.
- As shown in FIG. 10, the non-common element 4 of the first image (see FIG. 3) and the second image (see FIG. 4) can also be represented.
- Because the complex steerable pyramid distinguishes scale spaces by spatial frequency band and orientations by orientation band using the steerable filter, and discriminates the common elements 1, 2 (2′), 3 (3′) from the non-common element 4 on the basis of the size information A_w,θ, the method is robust to noise; it is also robust to dynamic noise arising when common elements of the two images are not present at the same location owing to their non-rigid, deformable nature.
- a subtraction step for extracting the non-common element 4 can be added.
- the subtraction unit 60 may obtain an image from which the non-common element 4 is extracted by subtracting the image reconstructed in step S17 (see FIG. 9) from the second image (see FIG. 4).
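The subtraction step itself is a simple pixel-wise difference. The sketch below clips negative residues for display and uses made-up example values; it is not the patent's implementation:

```python
import numpy as np

def subtract_non_common(second_image, common_image):
    """Remove the reconstructed common-element image from the second image;
    what remains is the non-common element (e.g. the contrast-filled vessel).
    Negative residues are clipped to zero for display."""
    return np.clip(second_image - common_image, 0.0, None)

second = np.array([[0.2, 0.9], [0.4, 0.3]])   # hypothetical second image
common = np.array([[0.2, 0.1], [0.4, 0.3]])   # reconstructed common elements
non_common = subtract_non_common(second, common)
```

In this toy case only one pixel (0.9 − 0.1 = 0.8) survives, standing in for the vessel that exists only in the second image.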
- the display unit 70 may receive image information of the subtracted image from the subtractor 60 to visually display the subtracted image.
- FIGS. 11 and 12 are actual images used as the first image and the second image
- FIG. 13 is an image representing a common element extracted by applying an image processing method according to an embodiment of the present invention to the images of FIGS. 11 and 12.
- FIG. 14 is an image representing a non-common element extracted by applying an image processing method according to an exemplary embodiment of the present invention with respect to the image of FIGS. 11 and 12.
- Referring to FIG. 12, it can be seen that a blood vessel image (a non-common element) not shown in FIG. 11 is observed.
- Referring to FIG. 13, it can be seen that only the elements common to FIGS. 11 and 12 are represented in the image reconstructed as a result of step S17 of the image processing method according to the exemplary embodiment of the present invention.
- Referring to FIG. 14, it can be seen that the blood vessel image (the non-common element) existing only in FIG. 12 is clearly expressed in the result of applying the subtraction step of the image processing method according to the exemplary embodiment of the present invention.
- FIG. 15 is an image representing non-common elements extracted by applying an existing image processing method to the images of FIGS. 11 and 12. As shown in FIG. 15, as a result of applying the conventional image processing method, it may be confirmed that a large amount of noise is expressed compared to the image shown in FIG. 14.
- the image processing apparatus and the image processing method according to an embodiment of the present invention can extract common elements and non-common elements from two images, and can select the common elements even under dynamic noise in which the common elements are not present at the same position in the two images owing to their non-rigid characteristics.
- In summary, an image processing method includes obtaining a plurality of first transform data having different scale spaces and orientations based on a first image; obtaining a plurality of second transform data having different scale spaces and orientations based on a second image; comparing size information of the transform data having the same scale space and orientation among the plurality of first transform data and the plurality of second transform data; extracting any one of the compared size information according to an extraction rule; obtaining a plurality of third transform data based on the extracted size information; and restoring the plurality of third transform data to a spatial domain.
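Putting the claimed steps together, a toy end-to-end sketch might look like the following, with binary frequency masks standing in for the steerable filters and the smaller-magnitude extraction rule selecting coefficients. It illustrates the flow of the method, not the patent's implementation:

```python
import numpy as np

def common_elements(img1, img2, filters_hat):
    """(1) transform both images into per-band complex data, (2) compare the
    per-pixel magnitudes of matching bands, (3) keep the smaller-magnitude
    coefficient with its phase, (4) invert back to the spatial domain.
    Assumes the binary filters tile the frequency plane."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    spectrum = np.zeros_like(F1)
    for psi in filters_hat:
        R1 = np.fft.ifft2(F1 * psi)                      # first transform data
        R2 = np.fft.ifft2(F2 * psi)                      # second transform data
        R3 = np.where(np.abs(R1) <= np.abs(R2), R1, R2)  # min extraction rule
        spectrum += np.fft.fft2(R3) * np.conj(psi)       # accumulate bands
    return np.real(np.fft.ifft2(spectrum))

img = np.random.default_rng(2).standard_normal((16, 16))
mask = np.zeros((16, 16)); mask[:4, :4] = 1.0
filters = [mask, 1.0 - mask]        # binary masks: |psi|^2 sums to 1
out = common_elements(img, img.copy(), filters)
```

When the two inputs are identical, every band comparison is a tie and the output reproduces the input, confirming that the pipeline is a faithful analysis/extraction/synthesis loop in this simplified setting.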
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
An image processing method according to an embodiment of the present invention comprises the steps of: obtaining a plurality of first transform data having different scale spaces and different orientations on the basis of a first image; obtaining a plurality of second transform data having different scale spaces and different orientations on the basis of a second image; comparing size information of the transform data having the same scale space and the same orientation among the plurality of first transform data and the plurality of second transform data; extracting any one of the compared size information according to an extraction rule; obtaining a plurality of third transform data on the basis of the extracted size information; and restoring the plurality of third transform data to a spatial domain.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020170013709A KR101870355B1 (ko) | 2017-01-31 | 2017-01-31 | 영상 처리 방법 |
| KR10-2017-0013709 | 2017-01-31 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018143565A1 true WO2018143565A1 (fr) | 2018-08-09 |
Family
ID=62768300
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2017/015389 Ceased WO2018143565A1 (fr) | 2017-01-31 | 2017-12-22 | Procédé de traitement d'images |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR101870355B1 (fr) |
| WO (1) | WO2018143565A1 (fr) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102151779B1 (ko) * | 2019-03-25 | 2020-09-03 | 엘에스일렉트릭(주) | 데이터 변환 장치 |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20150021351A (ko) * | 2013-08-20 | 2015-03-02 | 삼성테크윈 주식회사 | 영상 정합 장치 및 이를 이용한 영상 정합 방법 |
| KR20160071781A (ko) * | 2014-12-12 | 2016-06-22 | 삼성전자주식회사 | 이미지에서 객체를 검출하는 객체 검출 방법 및 이미지 처리 장치 |
| WO2016108847A1 (fr) * | 2014-12-30 | 2016-07-07 | Nokia Technologies Oy | Procédés et appareil de traitement d'images à informations de mouvement |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5253307A (en) * | 1991-07-30 | 1993-10-12 | Xerox Corporation | Image analysis to obtain typeface information |
| JP3606430B2 (ja) * | 1998-04-14 | 2005-01-05 | 松下電器産業株式会社 | 画像整合性判定装置 |
| KR100452097B1 (ko) * | 2002-05-02 | 2004-10-12 | 주식회사 윈포넷 | 영상데이터의 변화값 추출을 이용한 영상데이터 저장방법 |
| JP4449576B2 (ja) * | 2004-05-28 | 2010-04-14 | パナソニック電工株式会社 | 画像処理方法および画像処理装置 |
| GB0807411D0 (en) * | 2008-04-23 | 2008-05-28 | Mitsubishi Electric Inf Tech | Scale robust feature-based indentfiers for image identification |
| KR101181086B1 (ko) * | 2009-11-03 | 2012-09-07 | 중앙대학교 산학협력단 | 보간 검출 장치, 보간 영역 검출 장치 및 그 방법 |
| CN104077773A (zh) * | 2014-06-23 | 2014-10-01 | 北京京东方视讯科技有限公司 | 图像边缘检测方法、图像目标识别方法及装置 |
-
2017
- 2017-01-31 KR KR1020170013709A patent/KR101870355B1/ko active Active
- 2017-12-22 WO PCT/KR2017/015389 patent/WO2018143565A1/fr not_active Ceased
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20150021351A (ko) * | 2013-08-20 | 2015-03-02 | 삼성테크윈 주식회사 | 영상 정합 장치 및 이를 이용한 영상 정합 방법 |
| KR20160071781A (ko) * | 2014-12-12 | 2016-06-22 | 삼성전자주식회사 | 이미지에서 객체를 검출하는 객체 검출 방법 및 이미지 처리 장치 |
| WO2016108847A1 (fr) * | 2014-12-30 | 2016-07-07 | Nokia Technologies Oy | Procédés et appareil de traitement d'images à informations de mouvement |
Non-Patent Citations (2)
| Title |
|---|
| KIM HYUN WOO ET AL.: "Multi images Preprocess Method for License Plate...", IEIE CONFERENCE, vol. 28, no. 2, November 2005 (2005-11-01), pages 477 - 480 * |
| SAPNA SHARMA: "Different Parameters Of Image Fusion Using Streerable Pyramid Transformation", INTERNATIONAL JOURNAL OF ADVANCED RESEARCH IN ELECTRICA 1, ELECTRONICS AND INSTRUMENTATION ENGINEERING, vol. 5, no. 5, May 2016 (2016-05-01), pages 4218 - 4223, XP055530919, ISSN: 2278-8875 * |
Also Published As
| Publication number | Publication date |
|---|---|
| KR101870355B1 (ko) | 2018-06-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2013151270A1 (fr) | Appareil et procédé de reconstruction d'image tridimensionnelle à haute densité | |
| WO2018128355A1 (fr) | Robot et dispositif électronique servant à effectuer un étalonnage œil-main | |
| WO2020130309A1 (fr) | Dispositif de masquage d'image et procédé de masquage d'image | |
| WO2018135906A1 (fr) | Caméra et procédé de traitement d'image d'une caméra | |
| WO2019139234A1 (fr) | Appareil et procédé pour supprimer la distorsion d'un objectif ultra-grand-angulaire et d'images omnidirectionnelles | |
| CN106709894B (zh) | 一种图像实时拼接方法及系统 | |
| KR20150021351A (ko) | 영상 정합 장치 및 이를 이용한 영상 정합 방법 | |
| WO2019132590A1 (fr) | Procédé et dispositif de transformation d'image | |
| WO2017115905A1 (fr) | Système et procédé de reconnaissance de pose de corps humain | |
| WO2021141253A1 (fr) | Système et procédé d'identification de la position d'une capsule endoscopique sur la base d'informations de position concernant la capsule endoscopique | |
| WO2018139847A1 (fr) | Procédé d'identification personnelle par comparaison faciale | |
| WO2018143565A1 (fr) | Procédé de traitement d'images | |
| WO2024106630A1 (fr) | Système et procédé de production de contenu vidéo basé sur l'intelligence artificielle | |
| WO2020111311A1 (fr) | Système et procédé d'amélioration de qualité d'image d'objet d'intérêt | |
| WO2012002601A1 (fr) | Procédé et appareil permettant de reconnaître une personne à l'aide d'informations d'image 3d | |
| WO2015026002A1 (fr) | Appareil d'appariement d'images et procédé d'appariement d'images au moyen de cet appareil | |
| WO2016104842A1 (fr) | Système de reconnaissance d'objet et procédé de prise en compte de distorsion de caméra | |
| WO2019098421A1 (fr) | Dispositif de reconstruction d'objet au moyen d'informations de mouvement et procédé de reconstruction d'objet l'utilisant | |
| WO2011043498A1 (fr) | Appareil intelligent de surveillance d'images | |
| WO2018038300A1 (fr) | Dispositif, procédé et programme informatique de fourniture d'image | |
| WO2011078430A1 (fr) | Procédé de recherche séquentielle pour reconnaître une pluralité de marqueurs à base de points de caractéristique et procédé de mise d'oeuvre de réalité augmentée utilisant ce procédé | |
| WO2018169110A1 (fr) | Appareil de réalité augmentée sans marqueur et procédé d'expression d'objet tridimensionnel | |
| WO2024071888A1 (fr) | Dispositif et procédé de surveillance | |
| JP2003317033A (ja) | 画像処理におけるアフィン変換係数算出方法および画像処理装置 | |
| KR102450466B1 (ko) | 영상 내의 카메라 움직임 제거 시스템 및 방법 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17895302 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 17895302 Country of ref document: EP Kind code of ref document: A1 |