
WO2023205984A1 - ROI association and labeling method - Google Patents

ROI association and labeling method

Info

Publication number
WO2023205984A1
Authority
WO
WIPO (PCT)
Prior art keywords
point set
contour point
surface contour
reference surface
roi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2022/088879
Other languages
English (en)
Chinese (zh)
Inventor
蓝培钦
龚强
蔡博凡
李恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
KLARITY MEDICAL AND EQUIPMENT (GZ) CO Ltd
Original Assignee
KLARITY MEDICAL AND EQUIPMENT (GZ) CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by KLARITY MEDICAL AND EQUIPMENT (GZ) CO Ltd filed Critical KLARITY MEDICAL AND EQUIPMENT (GZ) CO Ltd
Priority to PCT/CN2022/088879 priority Critical patent/WO2023205984A1/fr
Priority to CN202280007558.5A priority patent/CN116802686A/zh
Publication of WO2023205984A1 publication Critical patent/WO2023205984A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods

Definitions

  • The present application relates to the field of imaging technology, and more specifically to an ROI region association and labeling method.
  • This application provides an ROI region association and labeling method that can realize associated identification and automatic labeling of an ROI region on a real-time surface contour point set.
  • An ROI region association and labeling method, including:
  • The reference surface contour point set is a surface contour point set obtained by scanning the detection object in a standard placement.
  • The transformed contour point set including the second ROI region is inversely transformed to generate a target contour point set, and the target contour point set includes a third ROI region obtained by inversely transforming the second ROI region.
  • Determining the first ROI region of the reference surface contour point set, and migrating and transforming the first ROI region to the transformed contour point set to generate a second ROI region, includes:
  • The second ROI region is determined based on the feature points having the same coordinate positions as the feature points in the first ROI region.
  • Determining the first ROI region of the reference surface contour point set includes:
  • A visual volume space transformation is performed between the graphics area and the reference surface contour point set, and a first ROI region corresponding to the graphics area is generated on the reference surface contour point set according to the transformation result.
  • Pose correlation calculation is performed on the reference surface contour point set and the real-time surface contour point set to obtain an association transformation matrix, including:
  • The first pose correlation calculation is performed on the reference surface contour point set and the real-time surface contour point set to generate an association transformation matrix, including:
  • A visual volume space transformation is performed between the graphics area and the reference surface contour point set and, based on the transformation result, a first ROI region corresponding to the graphics area is generated on the reference surface contour point set, which includes:
  • The area where the target feature points are located on the reference surface contour point set is determined as the first ROI region.
  • Before transforming the real-time surface contour point set according to the association transformation matrix to generate the transformed contour point set, the method further includes:
  • prompt information is displayed to prompt the detection object to adjust its position.
  • the method further includes:
  • a unique correspondence relationship between each feature point constituting the transformed contour point set and each feature point constituting the reference surface contour point set is determined, and the same index number is set for each set of corresponding feature points.
  • The ROI region association and labeling method obtains the real-time surface contour point set of the currently scanned detection object and the reference surface contour point set of the detection object.
  • The reference surface contour point set is a surface contour point set obtained by scanning the detection object in a standard placement. Pose correlation calculation is performed on the reference surface contour point set and the real-time surface contour point set to obtain an association transformation matrix, and the real-time surface contour point set is transformed according to the association transformation matrix to generate a transformed contour point set.
  • The first ROI region of the reference surface contour point set is determined, and the first ROI region is migrated and transformed to the transformed contour point set to generate a second ROI region, where the first ROI region is an ROI region marked on the reference surface contour point set.
  • The transformed contour point set including the second ROI region is inversely transformed according to the association transformation matrix to generate a target contour point set; the target contour point set includes a third ROI region obtained by inversely transforming the second ROI region.
  • This application performs pose correlation calculation on the reference surface contour point set and the real-time surface contour point set of the detection object, migrates and transforms the first ROI region marked on the reference surface contour point set to the transformed contour point set obtained from that calculation to obtain a transformed contour point set containing the second ROI region, and then inversely transforms the transformed contour point set to obtain a target contour point set containing the third ROI region. Because the forward transformation uses the association transformation matrix and the inverse transformation uses its inverse, the target contour point set is exactly the same as the real-time surface contour point set except for the labeled third ROI region.
  • The ROI region can thus be automatically identified and marked on the corresponding area of the real-time surface contour point set obtained by scanning, based on the reference surface contour point set that includes the first ROI region, so that the ROI region on the real-time surface contour point set can be tracked, monitored, and treated during radiotherapy.
  • Figure 1 is a flow chart of an ROI region association and labeling method disclosed in this application.
  • Figure 2 is a schematic diagram of a reference surface contour point set and a target contour point set disclosed in the embodiment of the present application;
  • FIG. 3 is a schematic diagram of an ROI region disclosed in the embodiment of the present application.
  • Figure 1 is a flow chart of an ROI region association and labeling method disclosed in an embodiment of the present application.
  • the method may include:
  • Step S1: Obtain the real-time surface contour point set of the currently scanned detection object.
  • The real-time surface contour point set consists of several feature points. After the detection object enters the scanning instrument, it can be scanned by various depth scanning instruments, and the real-time surface contour point set can be constructed with modeling software such as 3dsMAX or Maya. This application does not limit the specific scanning equipment: any equipment that can scan the detection object and generate a corresponding real-time surface contour point set can be used.
  • Step S2: Obtain the reference surface contour point set of the detection object.
  • The reference surface contour point set is a surface contour point set obtained by scanning the detection object in a standard placement, and it is likewise composed of several feature points. In this placement, the posture and position of the detection object meet the detection and scanning requirements.
  • For example, the reference surface contour point set may be a surface contour point set obtained by scanning a human body in a standard positioning posture.
  • the reference surface contour point set of the detection object can be obtained and stored.
  • The identity information of the detection object, such as ID information or a code, can be used directly to retrieve the corresponding reference surface contour point set of the detection object and to perform the subsequent ROI region determination.
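  • A minimal sketch of such retrieval, assuming each reference surface contour point set is stored as an N×3 NumPy array keyed by the detection object's ID (the store and function names are illustrative, not from the original disclosure):

```python
import numpy as np

# Hypothetical in-memory store of reference surface contour point sets, keyed by object ID.
reference_store = {}

def save_reference(object_id, point_set):
    """Store the reference surface contour point set of a detection object."""
    reference_store[object_id] = np.asarray(point_set, dtype=float)

def load_reference(object_id):
    """Retrieve the stored reference surface contour point set by the object's ID."""
    if object_id not in reference_store:
        raise KeyError(f"No reference surface contour point set stored for ID {object_id!r}")
    return reference_store[object_id]
```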
  • Step S3: Perform pose correlation calculation on the reference surface contour point set and the real-time surface contour point set to obtain an association transformation matrix.
  • A global pose correlation calculation is performed on the reference surface contour point set and the real-time surface contour point set. The pose correlation calculation may, for example, combine a first pose correlation calculation and a second pose correlation calculation; after the pose correlation calculation is completed, the association transformation matrix is generated.
  • The association transformation matrix can be used to transform the real-time surface contour point set.
  • Step S4: Transform the real-time surface contour point set according to the association transformation matrix to generate a transformed contour point set.
  • Matrix operations are performed on the real-time surface contour point set based on the association transformation matrix obtained from the pose correlation calculation.
  • The transformed contour point set is obtained after transformations such as rotation, displacement, and stretching.
  • The transformed contour point set is essentially consistent with the reference surface contour point set in position, placement, and other aspects; the transformed contour point set obtained by the transformation can be considered to substantially coincide with the reference surface contour point set.
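  • As a minimal sketch of this transformation, assuming the association transformation matrix is a 4×4 homogeneous matrix and the real-time surface contour point set is an N×3 NumPy array (the names and the example matrix are illustrative only):

```python
import numpy as np

def transform_point_set(points, transform):
    """Apply a 4x4 homogeneous association transformation matrix to an (N, 3) point set."""
    points = np.asarray(points, dtype=float)
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    transformed = homogeneous @ transform.T                           # row-vector convention
    return transformed[:, :3]

# Example: a rotation of 10 degrees about Z combined with a 5 mm shift along X.
theta = np.deg2rad(10.0)
T = np.array([[np.cos(theta), -np.sin(theta), 0.0, 5.0],
              [np.sin(theta),  np.cos(theta), 0.0, 0.0],
              [0.0,            0.0,           1.0, 0.0],
              [0.0,            0.0,           0.0, 1.0]])
realtime_points = np.random.rand(1000, 3)            # placeholder real-time surface contour point set
transformed_points = transform_point_set(realtime_points, T)
```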
  • Step S5: Determine the first ROI region of the reference surface contour point set, migrate and transform the first ROI region to the transformed contour point set, and generate a second ROI region.
  • The first ROI region is an ROI region marked on the reference surface contour point set.
  • The first ROI region may be an ROI region marked in advance on the reference surface contour point set, or it may be an ROI region outlined and marked immediately at the current moment.
  • The first ROI region can be migrated and transformed to the transformed contour point set to generate a second ROI region.
  • The second ROI region may be generated from the first ROI region by migration and transformation, for example by mapping the first ROI region onto the transformed contour point set to determine the second ROI region of the transformed contour point set; the position and size of the second ROI region on the transformed contour point set are the same as the position and size of the first ROI region on the reference surface contour point set.
  • Step S6: Perform an inverse transformation on the transformed contour point set including the second ROI region according to the association transformation matrix to generate a target contour point set.
  • the target contour point set includes a third ROI region obtained by inversely transforming the second ROI region.
  • The inverse matrix of the association transformation matrix is calculated, and the transformed contour point set containing the second ROI region is inversely transformed through this inverse matrix; that is, the real-time surface contour point set that was rotated, displaced, and stretched in step S4 is restored to its shape before rotation, displacement, and stretching, and a target contour point set containing a third ROI region obtained by inversely transforming the second ROI region is generated.
  • the target contour point set including the corresponding third ROI area can be obtained according to this method.
  • the position and size of the second ROI region on the transformed contour point set are the same as the position and size of the first ROI region on the reference surface contour point set.
  • The area that the third ROI region, obtained through the second ROI region, represents on the target contour point set is the area represented by the first ROI region on the reference surface contour point set, namely the diseased area that requires radiotherapy.
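  • A sketch of the inverse transformation of step S6, continuing the assumptions above and representing the second ROI region as a set of point indices on the transformed contour point set; because the inverse transform only re-expresses the coordinates, the same indices delimit the third ROI region on the target contour point set:

```python
import numpy as np

def inverse_transform_with_roi(transformed_points, second_roi_indices, transform):
    """Invert the association transformation and carry the ROI labels back.

    transformed_points: (N, 3) transformed contour point set containing the second ROI region.
    second_roi_indices: indices of the points belonging to the second ROI region.
    transform: the 4x4 association transformation matrix used in the forward step.
    """
    transformed_points = np.asarray(transformed_points, dtype=float)
    inv = np.linalg.inv(transform)
    homogeneous = np.hstack([transformed_points, np.ones((transformed_points.shape[0], 1))])
    target_points = (homogeneous @ inv.T)[:, :3]        # target contour point set
    third_roi_indices = np.asarray(second_roi_indices)  # labels survive the inverse mapping unchanged
    return target_points, third_roi_indices
```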
  • It can be seen that the ROI region association and labeling method provided above obtains the real-time surface contour point set of the currently scanned detection object and the reference surface contour point set of the detection object, performs pose correlation calculation on the two point sets to obtain an association transformation matrix, transforms the real-time surface contour point set according to that matrix to generate a transformed contour point set, migrates and transforms the first ROI region marked on the reference surface contour point set to the transformed contour point set to generate a second ROI region, and inversely transforms the transformed contour point set containing the second ROI region to generate a target contour point set containing a third ROI region.
  • Because the forward transformation uses the association transformation matrix and the inverse transformation uses its inverse, the target contour point set is identical to the real-time surface contour point set except for the labeled third ROI region. The ROI region can therefore be automatically identified and marked on the corresponding area of the real-time surface contour point set, based on the reference surface contour point set that includes the first ROI region, so that the ROI region on the real-time surface contour point set can be tracked, monitored, and treated during radiotherapy.
  • After step S4 of transforming the real-time surface contour point set according to the association transformation matrix to generate the transformed contour point set, the method may further include:
  • Step S7: Determine the unique correspondence between each feature point constituting the transformed contour point set and each feature point constituting the reference surface contour point set, and set the same index number for each set of corresponding feature points.
  • Each feature point constituting the transformed contour point set is determined; for each feature point constituting the transformed contour point set there is a unique corresponding feature point constituting the reference surface contour point set.
  • Each set of corresponding feature points can be assigned the same index number, with no duplicate index numbers, so that the corresponding feature points of a group can be found directly by their index number.
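  • One possible way to realize step S7, assuming the transformed contour point set nearly coincides with the reference surface contour point set, is a nearest-neighbour assignment followed by shared index numbers (a sketch, not necessarily the procedure used in the application):

```python
import numpy as np

def assign_shared_indices(transformed_points, reference_points):
    """Give each corresponding pair of feature points the same index number.

    Returns an array index_of where index_of[i] is the index number shared by
    transformed point i and its unique counterpart in the reference point set.
    """
    diffs = transformed_points[:, None, :] - reference_points[None, :, :]
    distances = np.linalg.norm(diffs, axis=2)          # brute-force nearest neighbour
    index_of = np.argmin(distances, axis=1)
    # The correspondence is expected to be one-to-one, i.e. no duplicate index numbers.
    if len(np.unique(index_of)) != len(index_of):
        raise ValueError("Correspondence is not unique; the point sets may not coincide closely enough")
    return index_of
```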
  • Since the human body may be displaced, and in order to avoid harm caused by invalid irradiation due to such displacement, whether the detection object needs to adjust its position can be further determined based on the association transformation matrix.
  • Before step S4 of transforming the real-time surface contour point set according to the association transformation matrix to generate the transformed contour point set, the method may further include:
  • Step S8: Check whether the pose correlation calculation result is within the allowed difference range.
  • If it is, step S4, the process of transforming the real-time surface contour point set according to the association transformation matrix to generate a transformed contour point set, is performed.
  • Otherwise, prompt information is displayed to prompt the detection object to adjust its position.
  • If the pose correlation calculation result is within the allowed difference range, the subsequent ROI region determination and labeling work continues. If the position change is large and the pose correlation calculation result is not within the allowed difference range, a prompt message is displayed to prompt the detection object to adjust its position, and at the same time the radiation beam is controlled so that it is not delivered, until the position of the detection object monitored the next time returns to within the set threshold range; that is, once the pose correlation calculation result is within the allowed difference range, the linked control resumes the determination and labeling of the ROI region as well as the radiation treatment of the ROI region.
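  • A sketch of the check in step S8, assuming the allowed difference range is expressed as thresholds on the translation magnitude and rotation angle implied by the association transformation matrix (the threshold values are illustrative):

```python
import numpy as np

def pose_within_tolerance(transform, max_translation_mm=5.0, max_rotation_deg=3.0):
    """Return True if the association transformation stays within the allowed difference range."""
    translation = transform[:3, 3]
    rotation = transform[:3, :3]
    # Rotation angle recovered from the trace of the rotation matrix.
    angle = np.degrees(np.arccos(np.clip((np.trace(rotation) - 1.0) / 2.0, -1.0, 1.0)))
    return np.linalg.norm(translation) <= max_translation_mm and angle <= max_rotation_deg

# If pose_within_tolerance(T) is True, proceed with step S4; otherwise display the prompt
# information and hold the beam until the next monitored position is back within range.
```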
  • The first ROI region on the reference surface contour point set may be a pre-marked ROI region or an ROI region marked at the current moment. For these two different cases, two different implementations are provided.
  • The process of step S5, determining the first ROI region of the reference surface contour point set, migrating and transforming the first ROI region to the transformed contour point set, and generating a second ROI region, is introduced below and may include:
  • In the first case, the first ROI region has already been marked on the reference surface contour point set.
  • Step S51: Obtain the coordinate positions of the feature points in the first ROI region of the reference surface contour point set, where the first ROI region is the ROI region marked on the reference surface contour point set.
  • the reference surface contour point set contains the marked first ROI region. At this time, the coordinate positions of each feature point in the first ROI region on the reference surface contour point set are obtained.
  • Step S52: Determine the feature points in the transformed contour point set whose coordinate positions are the same as the coordinate positions of the feature points in the first ROI region.
  • In the transformed contour point set, which substantially coincides with the reference surface contour point set, the feature points whose coordinate positions are the same as those of the feature points in the first ROI region are determined. Alternatively, this can be done by mapping: each feature point in the first ROI region is mapped onto the transformed contour point set to find the feature points with the same coordinate positions as the feature points in the first ROI region.
  • Step S53: Determine a second ROI region based on the feature points whose coordinate positions are the same as those of the feature points in the first ROI region.
  • The area enclosed by these feature points is the second ROI region.
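  • A sketch of steps S51 to S53 under the simplifying assumption that corresponding feature points have (numerically almost) identical coordinates after the transformation, so the match can be made with a small tolerance:

```python
import numpy as np

def migrate_roi_by_coordinates(reference_points, first_roi_indices, transformed_points, tol=1e-3):
    """Return the indices of the transformed contour point set that form the second ROI region."""
    roi_coords = reference_points[first_roi_indices]          # step S51: coordinates of the first ROI
    second_roi = []
    for coord in roi_coords:                                  # step S52: find points with matching coordinates
        d = np.linalg.norm(transformed_points - coord, axis=1)
        j = int(np.argmin(d))
        if d[j] <= tol:
            second_roi.append(j)
    return np.unique(second_roi)                              # step S53: these points enclose the second ROI
```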
  • In the second case, the first ROI region is marked on the reference surface contour point set in real time.
  • The process of step S5 of determining the first ROI region of the reference surface contour point set is introduced for this case and may include:
  • Step S54: In response to a human operation, obtain the outlined graphics area and the point set parameters of the reference surface contour point set.
  • the first ROI region can be marked on the reference surface contour point set at the current moment.
  • the user can determine the ROI area that needs to be generated by rotating the reference surface contour point set and drawing with the mouse.
  • Step S55: Determine the visual volume space transformation matrix corresponding to the point set parameters.
  • the point set parameters can be used to obtain a visual volume space transformation matrix that matches the model.
  • the visual volume space transformation matrix is used for visual volume space transformation to achieve switching between three-dimensional space and two-dimensional plane.
  • The visual volume space transformation process includes, in sequence, a model matrix transformation, a view matrix transformation, and a projection matrix transformation:
  • Model matrix transformation: calculates the position of each point of the model in world space, in order to place the model in world space.
  • View matrix transformation: transforms world-space coordinates into the camera (view) coordinate system.
  • Projection matrix transformation: a perspective projection transformation that converts the three-dimensional object into a two-dimensional image that can be displayed on the screen.
  • Step S56: According to the visual volume space transformation matrix, perform a visual volume space transformation between the graphics area and the reference surface contour point set, and generate, based on the transformation result, a first ROI region corresponding to the graphics area on the reference surface contour point set.
  • Through the visual volume space transformation matrix, the visual volume space transformation between the graphics area and the reference surface contour point set can be realized. That is, the feature points of the reference surface contour point set corresponding to the feature points of the graphics area can be found in order to determine, on the reference surface contour point set, the first ROI region corresponding to the graphics area. Alternatively, the feature points in the reference surface contour point set can be converted, the converted feature points that fall within the graphics area can be located, and the area those feature points form on the reference surface contour point set is determined as the first ROI region corresponding to the graphics area.
  • Two optional ways of determining the first ROI region according to the visual volume space transformation matrix are provided; that is, there are two different implementations of step S56, the process of performing a visual volume space transformation between the graphics area and the reference surface contour point set according to the visual volume space transformation matrix and generating a first ROI region corresponding to the graphics area on the reference surface contour point set based on the transformation result.
  • The two methods are introduced below and may include:
  • a visual volume space transformation matrix matching the point set parameters can be determined.
  • The 2D coordinates corresponding to each feature point of the reference surface contour point set can be determined according to the visual volume space transformation matrix.
  • Alternatively, a visual volume space transformation matrix matching the point set parameters can be determined, and the inverse matrix of the visual volume space transformation matrix can be obtained by inverting it.
  • The 3D coordinates corresponding to the 2D coordinates of each feature point in the graphics area can then be determined based on the inverse matrix of the visual volume space transformation matrix.
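  • A sketch of the two ways, using a simple perspective view-volume (model-view-projection) matrix built with NumPy; the camera parameters stand in for the point set parameters mentioned above and are illustrative only:

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """Build a standard perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([[f / aspect, 0.0, 0.0, 0.0],
                     [0.0, f, 0.0, 0.0],
                     [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
                     [0.0, 0.0, -1.0, 0.0]])

def project_to_2d(points, mvp):
    """Way 1: map 3D feature points of the reference point set to 2D normalized screen coordinates."""
    h = np.hstack([points, np.ones((points.shape[0], 1))]) @ mvp.T
    ndc = h[:, :3] / h[:, 3:4]                # perspective divide
    return ndc[:, :2]                         # 2D coordinates in the range [-1, 1]

def unproject_to_3d(points_2d, depth, mvp):
    """Way 2: map 2D coordinates of the graphics area back to 3D via the inverse matrix."""
    inv = np.linalg.inv(mvp)
    ndc = np.hstack([points_2d,
                     np.full((points_2d.shape[0], 1), depth),
                     np.ones((points_2d.shape[0], 1))])
    h = ndc @ inv.T
    return h[:, :3] / h[:, 3:4]

mvp = perspective(45.0, 16 / 9, 0.1, 100.0)   # model and view matrices taken as identity for brevity
uv = project_to_2d(np.array([[0.0, 0.0, -10.0]]), mvp)
```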
  • In order to speed up the pose correlation calculation as much as possible and improve its efficiency without affecting its accuracy, this embodiment provides a pose correlation calculation method that combines a first pose correlation calculation and a second pose correlation calculation.
  • Step S3, performing pose correlation calculation on the reference surface contour point set and the real-time surface contour point set to obtain the association transformation matrix, may include:
  • Step S31: Perform a first pose correlation calculation on the reference surface contour point set and the real-time surface contour point set to generate an association transformation matrix.
  • Step S32: On the basis of the last pose correlation calculation, perform a second pose correlation calculation on the reference surface contour point set and the real-time surface contour point set, and update the association transformation matrix.
  • Step S33: If it is detected that the current pose correlation calculation result is not within the allowed difference range, return to the step of performing, on the basis of the last pose correlation calculation, a second pose correlation calculation on the reference surface contour point set and the real-time surface contour point set, until the current pose correlation calculation result is within the allowed difference range.
  • The first pose correlation calculation is performed on the reference surface contour point set and the real-time surface contour point set, and an association transformation matrix is generated.
  • On the basis of that calculation, a second pose correlation calculation is performed on the reference surface contour point set and the real-time surface contour point set, and the association transformation matrix is updated.
  • If the current pose correlation calculation result is not yet within the allowed difference range, the second pose correlation calculation is performed again on the reference surface contour point set and the real-time surface contour point set, and the association transformation matrix is updated again, until the current pose correlation calculation result is within the allowed difference range.
  • The process of step S31, performing the first pose correlation calculation on the reference surface contour point set and the real-time surface contour point set to generate an association transformation matrix, is introduced below and may include:
  • The real-time surface contour point set and the reference surface contour point set are first filtered and denoised respectively, to avoid the impact of interfering factors on the pose correlation calculation.
  • The feature point geometric description set parameterizes the spatial differences between a query point and its neighborhood points, and forms a multi-dimensional histogram to describe the geometric attributes within the k-neighborhood of the point.
  • the steps to generate a geometric description set of feature points include:
  • p_i and p_j are two three-dimensional coordinate points, whose normal vectors are n_i and n_j respectively.
  • w_j is the weight of the point pair (p_i, p_j), measured by the spatial distance between p_i and p_j.
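  • The description resembles a point-feature-histogram style descriptor. A sketch of one pair feature, assuming unit normals and the angular parameterization commonly used for such descriptors (the exact quantities used in the application are not reproduced here):

```python
import numpy as np

def pair_feature(p_i, n_i, p_j, n_j):
    """Angular and distance features of the point pair (p_i, p_j) with unit normals n_i, n_j."""
    d_vec = p_j - p_i
    d = np.linalg.norm(d_vec)                 # the distance can also supply the weight w_j
    u = n_i
    v = np.cross(u, d_vec / d)
    w = np.cross(u, v)
    alpha = np.dot(v, n_j)
    phi = np.dot(u, d_vec / d)
    theta = np.arctan2(np.dot(w, n_j), np.dot(u, n_j))
    return alpha, phi, theta, d

# Binning (alpha, phi, theta) over all pairs in the k-neighbourhood of a query point gives a
# multi-dimensional histogram describing its local geometry; w_j = 1/d is one common
# distance-based weighting (an assumption, not necessarily the application's formula).
```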
  • the second pose correlation calculation method used in step S32 of this application may include the following steps:
  • p is the reference surface contour point set
  • p′ is the real-time surface contour point set
  • R is the rotation matrix
  • t is the translation matrix
  • R and t together form the updated association transformation matrix.
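  • A sketch of one iteration of such a second pose correlation calculation in the style of point-to-point ICP, estimating R and t by SVD over matched pairs; the correspondence search and convergence test are simplified:

```python
import numpy as np

def icp_step(p_ref, p_live):
    """One refinement step: estimate R, t minimizing ||p_ref - (R @ p_live + t)|| over matched pairs."""
    # 1. Match each live point to its nearest reference point (brute force, for clarity only).
    d = np.linalg.norm(p_live[:, None, :] - p_ref[None, :, :], axis=2)
    matched_ref = p_ref[np.argmin(d, axis=1)]

    # 2. Estimate the rigid transform from the matched pairs via SVD (Kabsch algorithm).
    mu_live, mu_ref = p_live.mean(axis=0), matched_ref.mean(axis=0)
    H = (p_live - mu_live).T @ (matched_ref - mu_ref)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against an accidental reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_ref - R @ mu_live

    # 3. Fold R and t into an updated 4x4 association transformation matrix.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Repeating icp_step on the re-transformed live point set until the residual is within the
# allowed difference range corresponds to steps S32 and S33 above.
```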

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses an ROI association and labeling method. The method includes: obtaining a real-time surface contour point set of a currently scanned detection subject; obtaining a reference surface contour point set of the detection subject, the reference surface contour point set being a surface contour point set obtained by scanning the detection subject in a standard position and posture; performing a pose association calculation on the reference surface contour point set and the real-time surface contour point set to obtain an association transformation matrix; transforming the real-time surface contour point set according to the association transformation matrix to generate a transformed contour point set; determining a first ROI of the reference surface contour point set, and transferring and transforming the first ROI onto the transformed contour point set to generate a second ROI; and performing, according to the association transformation matrix, an inverse transformation on the transformed contour point set comprising the second ROI to generate a target contour point set. By means of the present application, automatic labeling of the ROI on the real-time surface contour point set can be achieved according to the reference surface contour point set and the first ROI.
PCT/CN2022/088879 2022-04-25 2022-04-25 Association de roi et procédé d'étiquetage Ceased WO2023205984A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/088879 WO2023205984A1 (fr) 2022-04-25 2022-04-25 Association de roi et procédé d'étiquetage
CN202280007558.5A CN116802686A (zh) 2022-04-25 2022-04-25 一种roi区域关联及标注方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/088879 WO2023205984A1 (fr) 2022-04-25 2022-04-25 Association de roi et procédé d'étiquetage

Publications (1)

Publication Number Publication Date
WO2023205984A1 true WO2023205984A1 (fr) 2023-11-02

Family

ID=88034888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/088879 Ceased WO2023205984A1 (fr) 2022-04-25 2022-04-25 Association de roi et procédé d'étiquetage

Country Status (2)

Country Link
CN (1) CN116802686A (fr)
WO (1) WO2023205984A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887525A (zh) * 2010-07-09 2010-11-17 北京师范大学 基于分级的正反互逆的三维稠密点集快速配准方法
WO2012069965A1 (fr) * 2010-11-23 2012-05-31 Koninklijke Philips Electronics N.V. Corrections interactives de cartes de déformation
CN110276790A (zh) * 2019-06-28 2019-09-24 易思维(杭州)科技有限公司 基于形状约束的点云配准方法

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12159484B2 (en) * 2020-03-30 2024-12-03 Nec Corporation Photographing system, photographing method, and non-transitory computer-readable medium storing photographing program
CN113743403A (zh) * 2021-09-03 2021-12-03 上海深至信息科技有限公司 一种基于关键帧的病灶自动标注方法和系统

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887525A (zh) * 2010-07-09 2010-11-17 北京师范大学 基于分级的正反互逆的三维稠密点集快速配准方法
WO2012069965A1 (fr) * 2010-11-23 2012-05-31 Koninklijke Philips Electronics N.V. Corrections interactives de cartes de déformation
CN110276790A (zh) * 2019-06-28 2019-09-24 易思维(杭州)科技有限公司 基于形状约束的点云配准方法

Also Published As

Publication number Publication date
CN116802686A (zh) 2023-09-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22938843

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22938843

Country of ref document: EP

Kind code of ref document: A1