
CN118657817B - Image processing method, system, medium and product - Google Patents

Image processing method, system, medium and product

Info

Publication number
CN118657817B
Authority
CN
China
Prior art keywords
image
navigation
tooth
result
key frame
Prior art date
Legal status
Active
Application number
CN202411132467.1A
Other languages
Chinese (zh)
Other versions
CN118657817A (en)
Inventor
盛鸿
陈云
陶艳
崔晗欢
章征贵
Current Assignee
Suzhou Dikaier Medical Technology Co ltd
Original Assignee
Suzhou Dikaier Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Dikaier Medical Technology Co ltd
Priority to CN202411132467.1A
Publication of CN118657817A
Application granted
Publication of CN118657817B
Legal status: Active

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C - DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C 8/00 - Means to be fixed to the jaw-bone for consolidating natural teeth or for fixing dental prostheses thereon; Dental implants; Implanting tools
    • A61C 8/0089 - Implanting tools or instruments
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 - Determining position or orientation of objects or cameras using feature-based methods involving models
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 - Tracking techniques
    • A61B 2034/2065 - Tracking using image or pattern recognition
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2068 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30036 - Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Artificial Intelligence (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Dentistry (AREA)
  • Epidemiology (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Robotics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses an image processing method, system, medium and product. The method comprises: acquiring oral scan data and a navigation CT image, and determining a first registration result between the oral scan data and the navigation CT image; if the current navigation scene image meets the image unobstructed condition, taking the current navigation scene image as the current key frame image and determining a navigation coarse registration result between the oral scan data and the current key frame image; determining, according to the navigation coarse registration result, the projection data of the oral scan data on the plane of the current key frame image to obtain a point cloud projection image; and determining a first matching result between the point cloud projection image and the current key frame image, and transforming the navigation CT image into the coordinate system of the navigation system according to the first registration result and the first matching result. The technical solution provided by the embodiments of the invention can accurately map the navigation CT image into the coordinate system of the navigation system without the participation of any marker.

Description

Image processing method, system, medium and product

Technical Field

The present invention relates to the technical field of medical image processing, and in particular to an image processing method, system, medium and product.

Background Art

Optical navigation implant methods are widely used because of their high intraoperative flexibility. However, existing optical navigation implant methods usually require markers (such as a designated reference plate) to be bonded to the patient's teeth to facilitate image registration and surgical navigation. As long as the marker is not jostled by a surgical collision, high navigation accuracy can be maintained; if the marker shifts because of a collision during surgery, registration usually has to be repeated based on the marker's new position. Optical navigation implant methods therefore require the user to avoid disturbing the marker throughout the operation.

In summary, the navigation accuracy of existing optical navigation implant methods depends on how stable the marker remains, and this dependence reduces the flexibility of navigation.

Summary of the Invention

The present invention provides an image processing method to solve the problem that the navigation accuracy of existing optical navigation implant methods depends on how firmly the marker is fixed.

According to one aspect of the present invention, an image processing method is provided, comprising:

acquiring oral scan data and a navigation CT (Computed Tomography) image, and determining a first registration result between the oral scan data and the navigation CT image, wherein both the oral scan data and the navigation CT image include the teeth of a target object;

if the current navigation scene image meets the image unobstructed condition, taking the current navigation scene image as the current key frame image, and determining a navigation coarse registration result between the oral scan data and the current key frame image;

determining, according to the navigation coarse registration result, the projection data of the oral scan data on the plane of the current key frame image to obtain a point cloud projection image;

determining a first matching result between the point cloud projection image and the current key frame image, and transforming the navigation CT image into the coordinate system of the navigation system according to the first registration result and the first matching result.

According to another aspect of the present invention, an image processing apparatus is provided, comprising:

a first registration module, configured to acquire oral scan data and a navigation CT image and determine a first registration result between the oral scan data and the navigation CT image, wherein both the oral scan data and the navigation CT image include the teeth of a target object;

an unobstructed module, configured to, if the current navigation scene image meets the image unobstructed condition, take the current navigation scene image as the current key frame image and determine a navigation coarse registration result between the oral scan data and the current key frame image;

a projection module, configured to determine, according to the navigation coarse registration result, the projection data of the oral scan data on the plane of the current key frame image to obtain a point cloud projection image;

a second registration module, configured to determine a first matching result between the point cloud projection image and the current key frame image and, according to the first registration result and the first matching result, determine the pose of the navigation CT image in the navigation system.

According to another aspect of the present invention, a navigation system is provided, the navigation system comprising:

a surgical instrument;

a camera device, configured to acquire intraoperative navigation scene images of the target object, the navigation scene images including the surgical instrument;

a processor, configured to execute the image processing method described in any embodiment;

a display device, which displays the navigation CT image in the coordinate system of the navigation system together with the navigation scene image.

According to another aspect of the present invention, a computer-readable storage medium is provided. The computer-readable storage medium stores computer instructions which, when executed by a processor, implement the image processing method described in any embodiment of the present invention.

According to another aspect of the present invention, a computer program product is provided. The computer program product comprises a computer program which, when executed by a processor, implements the image processing method described in any embodiment.

In the technical solutions of the embodiments of the present invention, the current key frame image is a two-dimensional image of the target object's teeth, while the oral scan data is a three-dimensional image of those teeth. A navigation coarse registration result between the oral scan data and the current key frame image is determined first; a projection viewpoint is then derived from this coarse registration result, and the oral scan data is projected onto the plane of the current key frame image from that viewpoint to obtain a point cloud projection image. Because the point cloud projection image and the current key frame image correspond to the same viewpoint and are both two-dimensional images of the same object, the first matching result between them is highly accurate. Since the first matching result is a fine registration between the current key frame image and part of the oral scan data, a fine registration between the current key frame image and the entire oral scan data can be derived from it; using the oral scan data as an intermediary, a fine registration between the current key frame image and the navigation CT image is then obtained, and, based on this fine registration together with the first registration result between the navigation CT image and the oral scan data, the navigation CT image is mapped into the coordinate system of the navigation system. Because the first registration result, the first matching result and the fine registration between the current key frame image and the entire oral scan data are all highly accurate, the final fine registration between the current key frame image and the navigation CT image is also highly accurate. The navigation CT image can therefore be mapped accurately into the coordinate system of the navigation system, achieving fine registration between the navigation CT image and the navigation image without any reference marker.

It should be understood that the content described in this section is not intended to identify key or essential features of the embodiments of the present invention, nor to limit the scope of the present invention. Other features of the present invention will become easy to understand from the following description.

Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a flow chart of an image processing method provided according to an embodiment of the present invention;

FIG. 2 is a flow chart of another image processing method provided according to an embodiment of the present invention;

FIG. 3A is a schematic structural diagram of an image processing apparatus provided according to an embodiment of the present invention;

FIG. 3B is another schematic structural diagram of an image processing apparatus provided according to an embodiment of the present invention;

FIG. 3C is another schematic structural diagram of an image processing apparatus provided according to an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of a surgical navigation system provided according to an embodiment of the present invention;

FIG. 5 is another schematic structural diagram of a surgical navigation system provided according to an embodiment of the present invention.

Detailed Description

To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the scope of protection of the present invention.

It should be noted that the terms "first", "second" and the like in the specification, claims and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in an order other than that illustrated or described herein. In addition, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that comprises a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product or device.

FIG. 1 is a flow chart of an image processing method provided by an embodiment of the present invention. This embodiment is applicable to performing surgical navigation without introducing a reference plate. The method can be executed by an image processing apparatus, which can be implemented in hardware and/or software and can be configured in the processor of a navigation system. As shown in FIG. 1, the method includes:

S110. Acquire oral scan data and a navigation CT image, and determine a first registration result between the oral scan data and the navigation CT image, wherein both the oral scan data and the navigation CT image include the teeth of the target object.

The navigation CT image is a three-dimensional image acquired by a method such as CBCT (cone beam CT).

After the navigation CT image is acquired, a target position is determined on it, either manually or automatically. The target position is a surgical position, such as a dental implant site.

The first registration result is a fine registration result. In one embodiment, the first registration result is determined by the following steps:

Step a1. Perform tooth segmentation on the oral scan data to obtain a first tooth segmentation result, and perform tooth segmentation on the navigation CT image to obtain a second tooth segmentation result.

Both the first tooth segmentation result and the second tooth segmentation result include the tooth number and three-dimensional morphology of each tooth. In one embodiment, artificial intelligence techniques are used to perform tooth segmentation on the oral scan data and the navigation CT image to obtain the three-dimensional morphology and tooth number of each tooth, where the tooth number identifies each individual tooth. This embodiment can quickly and accurately complete the tooth segmentation of the oral scan data and the navigation CT image.
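The patent does not specify a particular segmentation network, so the following is only an illustrative sketch: it assumes a hypothetical pre-trained per-point classifier `tooth_seg_net` (not part of the patent) that assigns a tooth number to every oral-scan vertex, and simply groups the vertices by predicted label.

```python
import numpy as np

def segment_oral_scan(points, tooth_seg_net):
    """Group oral-scan vertices by predicted tooth number.

    points: (N, 3) array of oral-scan vertices.
    tooth_seg_net: hypothetical callable returning one tooth number per
    point (0 = gingiva / non-tooth).
    Returns {tooth_number: (M, 3) array of that tooth's points}.
    """
    labels = np.asarray(tooth_seg_net(points))    # (N,) predicted tooth numbers
    teeth = {}
    for tooth_id in np.unique(labels):
        if tooth_id == 0:                         # skip non-tooth points
            continue
        teeth[int(tooth_id)] = points[labels == tooth_id]
    return teeth
```

The per-tooth point sets returned here are what the later steps reduce to features such as centroids.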

Step a2. Determine the first tooth position feature of each tooth in the first tooth segmentation result and the second tooth position feature of each tooth in the second tooth segmentation result, respectively.

The first tooth position features and the second tooth position features include the same feature items, such as the centroid, the occlusal surface center, and the gum line vertices.

Step a3. Determine a first coarse registration result between the oral scan data and the navigation CT image based on the first tooth position feature of each tooth in the first tooth segmentation result and the second tooth position feature of each tooth in the second tooth segmentation result.
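One common way to realize such a feature-based coarse registration, shown here only as an assumed sketch since the patent does not name the solver, is to pair the per-tooth features (e.g. centroids) by tooth number and estimate a rigid transform with the SVD-based Kabsch method:

```python
import numpy as np

def rigid_transform_from_pairs(src, dst):
    """Estimate R, t such that R @ src[i] + t approximates dst[i] (Kabsch/SVD).

    src, dst: (N, 3) arrays of paired tooth features, e.g. oral-scan tooth
    centroids and the CT tooth centroids with the same tooth numbers.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # proper rotation, det(R) = +1
    t = dst_c - R @ src_c
    return R, t
```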

Step a4. Adjust the first coarse registration result based on the three-dimensional morphology of each tooth in the first tooth segmentation result and the three-dimensional morphology of each tooth in the second tooth segmentation result, to obtain the first registration result between the oral scan data and the navigation CT image.

In one embodiment, the first coarse registration result is adjusted using the iterative closest point (ICP) algorithm together with the three-dimensional morphology of each tooth in the first tooth segmentation result and in the second tooth segmentation result, yielding a fine registration result between the oral scan data and the navigation CT image; this fine registration result is taken as the first registration result. Determining the first registration result marks the end of the preoperative preparation process.
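A minimal point-to-point ICP refinement of that coarse transform could look like the sketch below (illustrative only; it reuses `rigid_transform_from_pairs` from the previous sketch, and a production system would typically prefer a robust or point-to-plane variant):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(src, dst, R, t, iterations=30):
    """Refine an initial rigid transform (R, t) mapping the oral-scan tooth
    points `src` onto the CT tooth points `dst` with point-to-point ICP.
    Requires rigid_transform_from_pairs() from the previous sketch."""
    tree = cKDTree(dst)
    for _ in range(iterations):
        moved = src @ R.T + t                      # apply current estimate
        _, idx = tree.query(moved)                 # nearest CT point per scan point
        R_step, t_step = rigid_transform_from_pairs(moved, dst[idx])
        R, t = R_step @ R, R_step @ t + t_step     # compose incremental update
        if np.allclose(R_step, np.eye(3), atol=1e-7):
            break                                  # converged
    return R, t
```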

S120. If the current navigation scene image meets the image unobstructed condition, take the current navigation scene image as the current key frame image, and determine a navigation coarse registration result between the oral scan data and the current key frame image.

After the preoperative preparation process is completed and intraoperative navigation begins, the current navigation scene image is captured by the camera device. The current navigation scene image is a two-dimensional image that includes the teeth of the target object.

To guarantee the surgical navigation result, a key frame image needs to be acquired when the operation starts. A key frame image is a navigation scene image that meets the image unobstructed condition, specifically a navigation scene image in which the ratio of the occluded tooth area to the total tooth area is less than or equal to a set ratio threshold. Therefore, at the start of intraoperative navigation, a key frame image is acquired first, and the navigation coarse registration result between this key frame image and the oral scan data is then determined.
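The key-frame test can be implemented, for instance, as a simple pixel-ratio check on segmentation masks. This is only an assumed sketch: `tooth_mask` (expected tooth region) and `occluder_mask` (instrument or finger region) would come from a 2D segmentation step not detailed in the patent, and the 10% default is an arbitrary example of the "set ratio threshold".

```python
import numpy as np

def is_key_frame(tooth_mask, occluder_mask, ratio_threshold=0.1):
    """Return True if the navigation scene image qualifies as a key frame.

    tooth_mask: boolean image of the expected tooth region.
    occluder_mask: boolean image of pixels covered by instruments, fingers, etc.
    ratio_threshold: maximum allowed occluded / total tooth-area ratio.
    """
    tooth_pixels = int(tooth_mask.sum())
    if tooth_pixels == 0:
        return False                               # no teeth visible at all
    occluded = int(np.logical_and(tooth_mask, occluder_mask).sum())
    return occluded / tooth_pixels <= ratio_threshold
```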

In one embodiment, the navigation coarse registration result between the oral scan data and the current key frame image is determined by the following steps:

Step b1. Determine each tooth region in the current key frame image, and the third tooth position feature of each tooth region in the current key frame image.

Tooth segmentation is performed on the current key frame image to obtain the tooth regions, and the third tooth position feature of each tooth region is determined. The third tooth position features include the same feature items as the first tooth position features, for example the centroid, the occlusal surface center, and the gum line vertices.

Step b2. Determine the navigation coarse registration result between the oral scan data and the current key frame image according to the first tooth position features and the third tooth position features.

Since the oral scan data is three-dimensional and the current key frame image is two-dimensional, the first tooth position features determined from the oral scan data and the third tooth position features determined from the current key frame image inevitably differ. Based on the first and third tooth position features alone, only a coarse registration between the oral scan data and the current key frame image can be performed, yielding the navigation coarse registration result.

In one embodiment, the PnP (Perspective-n-Point) algorithm is used to determine the navigation coarse registration result between the current key frame image and the oral scan data. PnP solves the 3D-to-2D correspondence problem: given n 3D points and their 2D image positions, it estimates the pose of the camera. It can be solved by, for example, the direct linear transform (DLT), Efficient PnP (EPnP), or bundle adjustment (BA).
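For illustration only, this 2D-3D coarse registration can be solved with OpenCV's solvePnP, given per-tooth 3D features from the oral scan and the matching 2D features detected in the key frame; the feature matching itself and the camera intrinsics are assumed inputs here:

```python
import numpy as np
import cv2

def coarse_registration_pnp(tooth_pts_3d, tooth_pts_2d, camera_matrix):
    """Estimate the camera pose of the key frame relative to the oral scan.

    tooth_pts_3d: (N, 3) tooth features from the oral scan (e.g. centroids).
    tooth_pts_2d: (N, 2) matching features detected in the key frame image.
    camera_matrix: 3x3 intrinsic matrix of the navigation camera.
    Returns (R, t) mapping oral-scan coordinates to camera coordinates.
    """
    dist_coeffs = np.zeros(5)                      # assume an undistorted image
    # EPnP (one of the cited options) needs at least 4 point pairs.
    ok, rvec, tvec = cv2.solvePnP(
        tooth_pts_3d.astype(np.float64),
        tooth_pts_2d.astype(np.float64),
        camera_matrix.astype(np.float64),
        dist_coeffs,
        flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP solution failed")
    R, _ = cv2.Rodrigues(rvec)                     # rotation vector -> 3x3 matrix
    return R, tvec.reshape(3)
```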

S130. Determine, according to the navigation coarse registration result, the projection data of the oral scan data on the plane of the current key frame image to obtain a point cloud projection image.

Although the accuracy of the coarse registration result does not meet pixel-level registration requirements, it is sufficient to determine a target projection viewpoint. Projecting the oral scan data from this target projection viewpoint maps it onto the plane of the current key frame image, yielding the point cloud projection image. The point cloud projection image in this embodiment is therefore a two-dimensional image under the same viewpoint as the current key frame image.
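Assuming a pinhole camera model and the coarse pose (R, t) from the previous step, the projection can be sketched as follows (the intrinsics are again an assumed input):

```python
import numpy as np

def project_point_cloud(points, R, t, camera_matrix, image_shape):
    """Render oral-scan points into the key-frame image plane.

    points: (N, 3) oral-scan points; (R, t): coarse pose (scan -> camera);
    camera_matrix: 3x3 intrinsics; image_shape: (height, width).
    Returns a binary point cloud projection image.
    """
    cam = points @ R.T + t                         # oral-scan -> camera coordinates
    cam = cam[cam[:, 2] > 0]                       # keep points in front of the camera
    uvw = cam @ camera_matrix.T                    # apply intrinsics
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = image_shape
    keep = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    proj = np.zeros((h, w), dtype=np.uint8)
    proj[uv[keep, 1], uv[keep, 0]] = 255           # mark projected pixels
    return proj
```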

S140. Determine a first matching result between the point cloud projection image and the current key frame image, and transform the navigation CT image into the coordinate system of the navigation system according to the first registration result and the first matching result.

In one embodiment, each tooth region in the point cloud projection image and the fourth tooth position feature of each tooth region in the point cloud projection image are determined; the first matching result between the point cloud projection image and the current key frame image is then determined from the fourth tooth position features and the third tooth position features. Since the point cloud projection image and the current key frame image are two-dimensional images under the same viewpoint, a sub-pixel mapping between the point cloud projection image and the current key frame image can be established from the fourth tooth position features of the tooth regions in the point cloud projection image and the third tooth position features of the current key frame image, giving the first matching result.
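As one assumed realization of this 2D-2D matching (the patent only speaks of a sub-pixel mapping), the per-tooth 2D features can be paired by tooth number and a small corrective transform estimated, here with OpenCV's robust affine fit:

```python
import numpy as np
import cv2

def match_projection_to_key_frame(feat_proj, feat_key):
    """Estimate a 2x3 affine matrix mapping point-cloud-projection coordinates
    to key-frame coordinates from per-tooth 2D features.

    feat_proj, feat_key: {tooth_number: (x, y)} features extracted from the
    point cloud projection image and the current key frame image.
    """
    common = sorted(set(feat_proj) & set(feat_key))   # teeth visible in both
    if len(common) < 3:
        raise ValueError("need at least 3 matched teeth")
    src = np.float32([feat_proj[k] for k in common])
    dst = np.float32([feat_key[k] for k in common])
    M, inliers = cv2.estimateAffine2D(src, dst)       # RANSAC by default
    return M
```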

After the first matching result is determined, a second registration result between the oral scan data and the current key frame image is determined according to the first matching result; the pose of the navigation CT image in the navigation system is then determined according to the first registration result and the second registration result.

Specifically, the mapping between the oral scan data and the current key frame image is updated according to the first matching result, and the PnP algorithm is then used to optimize this mapping to obtain the second registration result. Since the first registration result is the registration result between the oral scan data and the navigation CT image, the registration result between the current key frame image and the navigation CT image can be obtained using the oral scan data as an intermediary; the pose of the navigation CT image in the navigation system is then determined from that registration result, that is, the navigation CT image is mapped into the coordinate system of the navigation system.
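Concretely, writing T_cam_from_scan for the refined pose from the second registration result and T_scan_from_ct for the first (CT-to-oral-scan) registration, the CT volume is carried into the navigation (camera) coordinate system by composing homogeneous transforms; the names and directions below are assumptions made for this sketch:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def ct_to_navigation(T_cam_from_scan, T_scan_from_ct):
    """Compose the second registration result (oral scan -> camera) with the
    first registration result (CT -> oral scan), giving the pose of the
    navigation CT image in the navigation coordinate system."""
    return T_cam_from_scan @ T_scan_from_ct

# Usage: a CT-space point (e.g. the planned implant position) is mapped by
# point_nav = ct_to_navigation(T_cam_from_scan, T_scan_from_ct) @ np.append(point_ct, 1.0)
```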

In one embodiment, the surgical instrument in the current navigation scene image and its pose are determined, together with the target coordinates, in the navigation system, of the target position predetermined in the navigation CT image; a motion path of the surgical instrument is determined from the target coordinates and the instrument pose; and guidance information for guiding the surgical instrument to move to the target coordinates is generated from the motion path.

Specifically, since dental implant surgery requires surgical instruments, the current navigation scene image includes the surgical instrument. The positioning result of the surgical instrument in the current navigation scene image is determined, giving the pose of the surgical instrument, and the target coordinates, in the navigation system, of the target position predetermined in the navigation CT image are determined. Since the target coordinates correspond to the surgical area, the surgical instrument needs to be moved to these coordinates. Once the target coordinates and the instrument pose are known, the motion path of the surgical instrument is determined from them, and the user is then guided along this path to move the surgical instrument to the target coordinates.
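As a simple illustration of the guidance step (the patent does not prescribe a particular path planner), a straight-line path together with the remaining distance and angular deviation could be produced as follows; all quantities are expressed in the navigation coordinate system:

```python
import numpy as np

def guidance_info(tip_position, tool_axis, target_position, planned_axis):
    """Compute simple guidance cues for moving the instrument to the target.

    tip_position, tool_axis: current instrument tip and drilling axis.
    target_position, planned_axis: planned entry point and implant axis
    mapped from the navigation CT image into navigation coordinates.
    Returns remaining distance, angular deviation, and the direction to move.
    """
    offset = np.asarray(target_position, float) - np.asarray(tip_position, float)
    distance = float(np.linalg.norm(offset))
    cos_a = np.dot(tool_axis, planned_axis) / (
        np.linalg.norm(tool_axis) * np.linalg.norm(planned_axis))
    angle = float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
    return {"distance_mm": distance,
            "angle_deg": angle,
            "direction": offset / max(distance, 1e-9)}   # unit vector to target
```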

It should be noted that the positioning result of the surgical instrument in the current navigation scene image can be determined using existing techniques, which are not described in detail in this embodiment.

In the technical solutions of the embodiments of the present invention, the current key frame image is a two-dimensional image of the target object's teeth, while the oral scan data is a three-dimensional image of those teeth. A navigation coarse registration result between the oral scan data and the current key frame image is determined first; a projection viewpoint is then derived from this coarse registration result, and the oral scan data is projected onto the plane of the current key frame image from that viewpoint to obtain a point cloud projection image. Because the point cloud projection image and the current key frame image correspond to the same viewpoint and are both two-dimensional images of the same object, the first matching result between them is highly accurate. Since the first matching result is a fine registration between the current key frame image and part of the oral scan data, a fine registration between the current key frame image and the entire oral scan data can be derived from it; using the oral scan data as an intermediary, a fine registration between the current key frame image and the navigation CT image is then obtained, and, based on this fine registration together with the first registration result between the navigation CT image and the oral scan data, the navigation CT image is mapped into the coordinate system of the navigation system. Because the first registration result, the first matching result and the fine registration between the current key frame image and the entire oral scan data are all highly accurate, the final fine registration between the current key frame image and the navigation CT image is also highly accurate. The navigation CT image can therefore be mapped accurately into the coordinate system of the navigation system, achieving fine registration between the navigation CT image and the navigation image without any reference marker.

FIG. 2 is a flow chart of another image processing method provided by an embodiment of the present invention. On the basis of the foregoing embodiment, this embodiment adds processing steps for a current navigation scene image that does not meet the image unobstructed condition. As shown in FIG. 2, the method includes:

S210. Acquire oral scan data and a navigation CT image, determine a first registration result between the oral scan data and the navigation CT image, and determine whether the current navigation scene image meets the image unobstructed condition.

S220. If the current navigation scene image meets the image unobstructed condition, take the current navigation scene image as the current key frame image, and determine a navigation coarse registration result between the oral scan data and the current key frame image.

S230. Determine, according to the navigation coarse registration result, the projection data of the oral scan data on the plane of the current key frame image to obtain a point cloud projection image.

S240. Determine a first matching result between the point cloud projection image and the current key frame image, and transform the navigation CT image into the coordinate system of the navigation system according to the first registration result and the first matching result.

S250. If the current navigation scene image does not meet the image unobstructed condition, determine a second matching result between the current navigation scene image and the current key frame image.

In one embodiment, if the ratio of the occluded tooth area to the total tooth area in the current navigation scene image during navigation is greater than the set ratio threshold, the current navigation scene image is judged not to meet the image unobstructed condition, and a sub-pixel image processing method is used to determine the second matching result between the current navigation scene image and the current key frame image.

The current key frame image is the most recent key frame image, the current navigation scene image is the most recently acquired navigation image, and the navigation images are refreshed at a high rate during navigation, so the unoccluded tooth regions in the current navigation scene image closely resemble the corresponding tooth regions in the current key frame image. A sub-pixel image registration algorithm is therefore used to match the current navigation scene image with the current key frame image, yielding the second matching result.
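The patent does not fix a particular sub-pixel registration algorithm; as one assumed realization, OpenCV's phase correlation returns a sub-pixel translation between the current navigation scene image and the key frame (restricted here to pure translation for simplicity):

```python
import numpy as np
import cv2

def subpixel_shift(current_frame, key_frame):
    """Estimate the sub-pixel translation between the current navigation scene
    image and the current key frame image (single-channel grayscale inputs).
    Returns ((dx, dy), response); a higher response indicates a more reliable match."""
    cur = np.float32(current_frame)
    key = np.float32(key_frame)
    window = cv2.createHanningWindow(cur.shape[::-1], cv2.CV_32F)  # reduce edge effects
    (dx, dy), response = cv2.phaseCorrelate(cur, key, window)
    return (dx, dy), response
```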

S260. Determine a third registration result between the current navigation scene image and the oral scan data according to the second registration result and the second matching result.

The second registration result and the second matching result are, respectively, the registration result between the oral scan data and the current key frame image and the registration result between the current navigation scene image and the current key frame image. Once both are determined, the current key frame image can serve as a bridge to determine a fine registration result between the oral scan data and the current navigation scene image, and this fine registration result is taken as the third registration result.

S270. Determine the pose of the navigation CT image in the navigation system according to the first registration result and the third registration result.

Since the first registration result is the fine registration result between the oral scan data and the navigation CT image, the oral scan data can serve as a bridge: combining the first registration result with the third registration result gives a fine registration result between the current navigation scene image and the navigation CT image, from which the pose of the navigation CT image in the navigation system is determined.

In the technical solution provided by the embodiments of the present invention, the unoccluded tooth portion of a current navigation scene image that is partially occluded is finely registered, at sub-pixel level, with the corresponding portion of the current key frame image to obtain the second matching result between the two. The current key frame then serves as a bridge to establish the fine registration result between the current navigation scene image and the oral scan data, that is, the third registration result; the oral scan data in turn serves as a bridge to establish the fine registration result between the current navigation scene image and the navigation CT image, so that the pose of the navigation CT image in the navigation system can be accurately determined. Accurate registration between an occluded current navigation scene image and the navigation CT image is thus achieved without the aid of any marker, giving the user greater flexibility during the operation.

FIG. 3A is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present invention. As shown in FIG. 3A, the apparatus includes:

a first registration module 31, configured to acquire oral scan data and a navigation CT image and determine a first registration result between the oral scan data and the navigation CT image, wherein both the oral scan data and the navigation CT image include the teeth of a target object;

an unobstructed module 32, configured to, if the current navigation scene image meets the image unobstructed condition, take the current navigation scene image as the current key frame image and determine a navigation coarse registration result between the oral scan data and the current key frame image;

a projection module 33, configured to determine, according to the navigation coarse registration result, the projection data of the oral scan data on the plane of the current key frame image to obtain a point cloud projection image;

a second registration module 34, configured to determine a first matching result between the point cloud projection image and the current key frame image and, according to the first registration result and the first matching result, transform the navigation CT image into the coordinate system of the navigation system.

In one embodiment, the first registration module 31 is specifically configured to:

perform tooth segmentation on the oral scan data to obtain a first tooth segmentation result, and perform tooth segmentation on the navigation CT image to obtain a second tooth segmentation result;

determine the first tooth position feature of each tooth in the first tooth segmentation result and the second tooth position feature of each tooth in the second tooth segmentation result, respectively;

determine a first coarse registration result between the oral scan data and the navigation CT image based on the first tooth position feature of each tooth in the first tooth segmentation result and the second tooth position feature of each tooth in the second tooth segmentation result;

determine the first registration result between the oral scan data and the navigation CT image based on the first coarse registration result, the three-dimensional morphology of each tooth in the first tooth segmentation result, and the three-dimensional morphology of each tooth in the second tooth segmentation result.

In one embodiment, the unobstructed module 32 is specifically configured to:

determine each tooth region in the current key frame image, and the third tooth position feature of each tooth region in the current key frame image;

determine the navigation coarse registration result between the oral scan data and the current key frame image according to the first tooth position features and the third tooth position features.

In one embodiment, the second registration module 34 is specifically configured to:

determine each tooth region in the point cloud projection image, and the fourth tooth position feature of each tooth region in the point cloud projection image;

determine the first matching result between the point cloud projection image and the current key frame image according to the fourth tooth position features and the third tooth position features.

In one embodiment, the second registration module 34 is further configured to:

determine a second registration result between the oral scan data and the current key frame image according to the first matching result;

determine the pose of the navigation CT image in the navigation system according to the first registration result and the second registration result.

In one embodiment, as shown in FIG. 3B, the apparatus further includes an occlusion module 35, configured to:

if the current navigation scene image during navigation does not meet the image unobstructed condition, determine a second matching result between the current navigation scene image and the current key frame image;

determine a third registration result between the current navigation scene image and the oral scan data according to the second registration result and the second matching result;

determine the pose of the navigation CT image in the navigation system according to the first registration result and the third registration result.

In one embodiment, the occlusion module 35 is specifically configured to:

if the ratio of the occluded tooth area to the total tooth area in the current navigation scene image during navigation is greater than the set ratio threshold, judge that the current navigation scene image does not meet the image unobstructed condition, and use a sub-pixel image processing method to determine the second matching result between the current navigation scene image and the current key frame image.

In one embodiment, as shown in FIG. 3C, the apparatus further includes a guidance module 36, configured to:

determine the surgical instrument in the current navigation scene image and its pose, together with the target coordinates, in the navigation system, of the target position predetermined in the navigation CT image;

determine a motion path of the surgical instrument according to the target coordinates and the instrument pose;

generate guidance information for guiding the movement of the surgical instrument according to the motion path.

In the technical solutions of the embodiments of the present invention, the current key frame image is a two-dimensional image of the target object's teeth, while the oral scan data is a three-dimensional image of those teeth. A navigation coarse registration result between the oral scan data and the current key frame image is determined first; a projection viewpoint is then derived from this coarse registration result, and the oral scan data is projected onto the plane of the current key frame image from that viewpoint to obtain a point cloud projection image. Because the point cloud projection image and the current key frame image correspond to the same viewpoint and are both two-dimensional images of the same object, the first matching result between them is highly accurate. Since the first matching result is a fine registration between the current key frame image and part of the oral scan data, a fine registration between the current key frame image and the entire oral scan data can be derived from it; using the oral scan data as an intermediary, a fine registration between the current key frame image and the navigation CT image is then obtained, and, based on this fine registration together with the first registration result between the navigation CT image and the oral scan data, the navigation CT image is mapped into the coordinate system of the navigation system. Because the first registration result, the first matching result and the fine registration between the current key frame image and the entire oral scan data are all highly accurate, the final fine registration between the current key frame image and the navigation CT image is also highly accurate. The navigation CT image can therefore be mapped accurately into the coordinate system of the navigation system, achieving fine registration between the navigation CT image and the navigation image without any reference marker.

The image processing apparatus provided by the embodiments of the present invention can execute the image processing method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.

图4示出了可以用来实施本发明的实施例的导航系统10的结构示意图。如图4所示,导航系统10包括手术器械101;摄像装置102,用于获取目标对象术中的导航场景图像,所述导航场景图像包括所述手术器械;处理器11,用于执行任意实施例所述的图像处理方法;显示装置12,展示所述导航系统所在坐标下的导航CT图像与所述导航场景图像。Fig. 4 shows a schematic diagram of the structure of a navigation system 10 that can be used to implement an embodiment of the present invention. As shown in Fig. 4, the navigation system 10 includes a surgical instrument 101; a camera 102, which is used to obtain a navigation scene image of a target object during surgery, and the navigation scene image includes the surgical instrument; a processor 11, which is used to execute the image processing method described in any embodiment; and a display device 12, which displays the navigation CT image and the navigation scene image at the coordinates of the navigation system.

在一个实施例中,如图5所示,导航系统包括至少一个处理器11,以及与至少一个处理器11通信连接的存储器,如只读存储器(ROM)13、随机访问存储器(RAM)14等,其中,存储器存储有可被至少一个处理器执行的计算机程序,处理器11可以根据存储在只读存储器(ROM)13中的计算机程序或者从存储单元18加载到随机访问存储器(RAM)14中的计算机程序,来执行各种适当的动作和处理。在RAM 14中,还可存储电子设备10操作所需的各种程序和数据。处理器11、ROM 13以及RAM 14通过总线15彼此相连。输入/输出(I/O)接口16也连接至总线15。In one embodiment, as shown in FIG5 , the navigation system includes at least one processor 11, and a memory connected to the at least one processor 11 in communication, such as a read-only memory (ROM) 13, a random access memory (RAM) 14, etc., wherein the memory stores a computer program that can be executed by at least one processor, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the read-only memory (ROM) 13 or the computer program loaded from the storage unit 18 to the random access memory (RAM) 14. In the RAM 14, various programs and data required for the operation of the electronic device 10 can also be stored. The processor 11, the ROM 13, and the RAM 14 are connected to each other through a bus 15. An input/output (I/O) interface 16 is also connected to the bus 15.

导航系统10中的多个部件连接至I/O接口16,包括:输入装置17,例如键盘、鼠标等;输出装置12,例如各种类型的显示器、扬声器等;存储单元18,例如磁盘、光盘等;以及通信单元19,例如网卡、调制解调器、无线通信收发机等。通信单元19允许电子设备10通过诸如因特网的计算机网络和/或各种电信网络与其他设备交换信息/数据。A number of components in the navigation system 10 are connected to the I/O interface 16, including: an input device 17, such as a keyboard, a mouse, etc.; an output device 12, such as various types of displays, speakers, etc.; a storage unit 18, such as a disk, an optical disk, etc.; and a communication unit 19, such as a network card, a modem, a wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.

处理器11可以是各种具有处理和计算能力的通用和/或专用处理组件。处理器11的一些示例包括但不限于中央处理单元(CPU)、图形处理单元(GPU)、各种专用的人工智能(AI)计算芯片、各种运行机器学习模型算法的处理器、数字信号处理器(DSP)、以及任何适当的处理器、控制器、微控制器等。处理器11执行上文所描述的各个方法和处理,例如图像处理方法。The processor 11 may be a variety of general and/or special processing components with processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various special artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as an image processing method.

In some embodiments, the image processing method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 13 and/or the communication unit 19. When the computer program is loaded into the RAM 14 and executed by the processor 11, one or more steps of the image processing method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the image processing method in any other suitable manner (for example, by means of firmware).

Various implementations of the systems and techniques described above herein may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor capable of receiving data and instructions from a storage system, at least one input device, and at least one output device, and of transmitting data and instructions to the storage system, the at least one input device, and the at least one output device.

A computer program for implementing the method of the present invention may be written in any combination of one or more programming languages. Such a computer program may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the computer program, when executed by the processor, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The computer program may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.

In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable signal medium. More specific examples of the machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide interaction with a user, the systems and techniques described herein may be implemented on an electronic device having: a display device (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the electronic device. Other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form (including acoustic input, voice input, or tactile input).

The systems and techniques described herein may be implemented in a computing system that includes a back-end component (for example, as a data server), or a computing system that includes a middleware component (for example, an application server), or a computing system that includes a front-end component (for example, a user computer having a graphical user interface or a web browser through which the user can interact with an implementation of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by any form or medium of digital data communication (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), a blockchain network, and the Internet.

The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the drawbacks of difficult management and weak business scalability found in traditional physical host and VPS (Virtual Private Server) services.

An embodiment of the present invention further provides a computer program product, including a computer program, which, when executed by a processor, implements the image processing method provided in any embodiment of the present application.

In implementing the computer program product, the computer program code for performing the operations of the present invention may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).

It should be understood that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps described in the present invention may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solution of the present invention can be achieved; no limitation is imposed herein.

The above specific implementations do not constitute a limitation on the protection scope of the present invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (8)

1. An image processing method, comprising:
acquiring oral scan data and a navigation CT image, and determining a first registration result between the oral scan data and the navigation CT image, wherein both the oral scan data and the navigation CT image include teeth of a target object;
if a current navigation scene image meets an image-unoccluded condition, taking the current navigation scene image as a current key frame image, and determining a navigation rough registration result between the oral scan data and the current key frame image;
determining, according to the navigation rough registration result, projection data of the oral scan data on the plane in which the current key frame image lies, to obtain a point cloud projection image;
determining a first matching result between the point cloud projection image and the current key frame image, and transforming the navigation CT image into the coordinate system of a navigation system according to the first registration result and the first matching result;
wherein determining the first registration result between the oral scan data and the navigation CT image comprises:
performing tooth position segmentation on the oral scan data to obtain a first tooth position segmentation result, and performing tooth position segmentation on the navigation CT image to obtain a second tooth position segmentation result;
respectively determining a first tooth position feature of each tooth in the first tooth position segmentation result and a second tooth position feature of each tooth in the second tooth position segmentation result;
determining a first rough registration result between the oral scan data and the navigation CT image based on the first tooth position feature of each tooth in the first tooth position segmentation result and the second tooth position feature of each tooth in the second tooth position segmentation result;
determining the first registration result between the oral scan data and the navigation CT image based on the first rough registration result, the three-dimensional morphology of each tooth in the first tooth position segmentation result, and the three-dimensional morphology of each tooth in the second tooth position segmentation result;
wherein determining the navigation rough registration result between the oral scan data and the current key frame image comprises:
determining each tooth region of the current key frame image, and a third tooth position feature of each tooth region in the current key frame image;
determining the navigation rough registration result between the oral scan data and the current key frame image according to the first tooth position feature and the third tooth position feature;
wherein determining the first matching result between the point cloud projection image and the current key frame image comprises:
determining each tooth region in the point cloud projection image, and a fourth tooth position feature of each tooth region in the point cloud projection image;
determining the first matching result between the point cloud projection image and the current key frame image according to the fourth tooth position feature and the third tooth position feature.

2. The image processing method according to claim 1, wherein transforming the navigation CT image into the coordinate system of the navigation system according to the first registration result and the first matching result comprises:
determining a second registration result between the oral scan data and the current key frame image according to the first matching result;
determining the pose of the navigation CT image in the navigation system according to the first registration result and the second registration result.

3. The image processing method according to claim 2, further comprising, after determining the first registration result between the oral scan data and the navigation CT image:
if the current navigation scene image during navigation does not meet the image-unoccluded condition, determining a second matching result between the current navigation scene image and the current key frame image;
determining a third registration result between the current navigation scene image and the oral scan data according to the second registration result and the second matching result;
determining the pose of the navigation CT image in the navigation system according to the first registration result and the third registration result.

4. The image processing method according to claim 3, wherein, if the current navigation scene image during navigation does not meet the image-unoccluded condition, determining the second matching result between the current navigation scene image and the current key frame image comprises:
if the proportion of occluded tooth regions among all tooth regions in the current navigation scene image during navigation is greater than a set proportion threshold, determining that the current navigation scene image does not meet the image-unoccluded condition, and determining a fourth matching result between the current navigation scene image and the current key frame image using a sub-pixel image matching method.

5. The image processing method according to claim 1, further comprising, after transforming the navigation CT image into the coordinate system of the navigation system:
determining a surgical instrument in the current navigation scene image and the pose of the surgical instrument, as well as target coordinates, in the navigation system, of a target position predetermined in the navigation CT image;
determining a motion path of the surgical instrument according to the target coordinates and the pose;
generating, according to the motion path, guidance information for guiding the movement of the surgical instrument.

6. A navigation system, comprising:
a surgical instrument;
a camera, configured to acquire a navigation scene image of a target object during surgery, the navigation scene image including the surgical instrument;
a processor, configured to execute the image processing method according to any one of claims 1-5; and
a display device, configured to display the navigation CT image and the navigation scene image in the coordinate system of the navigation system.

7. A computer-readable storage medium storing computer instructions, wherein the computer instructions, when executed by a processor, implement the image processing method according to any one of claims 1-5.

8. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1-5.
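
The rough registration of claim 1 is driven by per-tooth position features extracted from the two tooth position segmentation results. The sketch below shows one plausible way such a feature-based rigid alignment could be computed, assuming the segmentations have already been reduced to labeled 3D point sets, taking each tooth's centroid as its position feature, and using a Kabsch (SVD) fit for the rigid transform. These choices, and the function name, are illustrative assumptions rather than the implementation prescribed by the patent; the fine registration over each tooth's three-dimensional morphology (for example an ICP-style refinement) is omitted.

```python
import numpy as np

def rough_register_by_tooth_centroids(scan_teeth: dict, ct_teeth: dict) -> np.ndarray:
    """Rigid 4x4 transform mapping oral-scan tooth points onto CT tooth points.

    scan_teeth / ct_teeth: {tooth_label: (N_i, 3) float array}, e.g. keyed by FDI number.
    The per-tooth centroid plays the role of the 'tooth position feature' here.
    """
    labels = sorted(set(scan_teeth) & set(ct_teeth))   # teeth segmented in both datasets
    if len(labels) < 3:
        raise ValueError("need at least three common teeth for a stable rigid fit")
    src = np.array([np.asarray(scan_teeth[k]).mean(axis=0) for k in labels])  # scan centroids
    dst = np.array([np.asarray(ct_teeth[k]).mean(axis=0) for k in labels])    # CT centroids
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)          # Kabsch: SVD of the cross-covariance
    d = np.sign(np.linalg.det(vt.T @ u.T))             # guard against an improper (reflected) rotation
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    trans = dst.mean(axis=0) - rot @ src.mean(axis=0)
    transform = np.eye(4)
    transform[:3, :3] = rot
    transform[:3, 3] = trans
    return transform
```

In use, the returned 4x4 matrix would serve as the initial pose that a subsequent fine registration step refines against the full per-tooth surfaces.
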
CN202411132467.1A 2024-08-19 2024-08-19 Image processing method, system, medium and product Active CN118657817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411132467.1A CN118657817B (en) 2024-08-19 2024-08-19 Image processing method, system, medium and product

Publications (2)

Publication Number Publication Date
CN118657817A (en) 2024-09-17
CN118657817B (en) 2024-11-05

Family

ID=92700850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411132467.1A Active CN118657817B (en) 2024-08-19 2024-08-19 Image processing method, system, medium and product

Country Status (1)

Country Link
CN (1) CN118657817B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563461A (en) * 2023-05-08 2023-08-08 上海博恩登特科技有限公司 System for quick simulation tooth alignment based on CBCT and mouth scanning data
CN117045349A (en) * 2023-08-11 2023-11-14 苏州迪凯尔医疗科技有限公司 Stereoscopic vision planting navigation system and navigation method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019148154A1 (en) * 2018-01-29 2019-08-01 Lang Philipp K Augmented reality guidance for orthopedic and other surgical procedures
US11357576B2 (en) * 2018-07-05 2022-06-14 Dentsply Sirona Inc. Method and system for augmented reality guided surgery
WO2020131880A1 (en) * 2018-12-17 2020-06-25 The Brigham And Women's Hospital, Inc. System and methods for a trackerless navigation system
CN112686899B (en) * 2021-03-22 2021-06-18 深圳科亚医疗科技有限公司 Medical image analysis method and device, computer equipment and storage medium
CN116071409A (en) * 2023-02-22 2023-05-05 中国医学科学院生物医学工程研究所 Navigation image registration method, device, equipment and storage medium
CN117765042A (en) * 2023-12-26 2024-03-26 苏州迪凯尔医疗科技有限公司 Registration method and device for oral tomographic image, computer equipment and storage medium

Also Published As

Publication number Publication date
CN118657817A (en) 2024-09-17

Similar Documents

Publication Publication Date Title
CN105447908B (en) Dental arch model generation method based on oral cavity scan data and CBCT data
JP7202737B2 (en) Tracking method and apparatus for dental implant navigation surgery
US11963845B2 (en) Registration method for visual navigation in dental implant surgery and electronic device
JP2016511661A (en) Intraoral scanning device in which an illumination frame is incorporated into an image frame
CN112043359B (en) Breast puncture method, device, equipment and storage medium
JP2025502852A (en) Scan data processing method, device, equipment and medium
WO2024183760A1 (en) Scanning data splicing method and apparatus, and device and medium
WO2024244323A1 (en) Vascular image processing system and apparatus, and storage medium
CN114494374A (en) Method for determining fusion error of three-dimensional model and two-dimensional image and electronic equipment
CN118657817B (en) Image processing method, system, medium and product
CN117765042A (en) Registration method and device for oral tomographic image, computer equipment and storage medium
KR20190007693A (en) Navigation apparatus and method for fracture correction
JP2019150358A (en) Image processing device, image processing program and image processing method
JP2024144633A (en) IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING SYSTEM, AND PROGRAM
WO2025060980A1 (en) Three-dimensional data acquisition method and apparatus, and device and storage medium
CN118806389A (en) Visible puncture endoscope puncture method, device, computer equipment and storage medium
CN118615020A (en) A method, system and storage medium for calibrating the relative position of a navigation plate and teeth in dental surgery navigation
WO2024087910A1 (en) Orthodontic treatment monitoring method and apparatus, device, and storage medium
CN119131310B (en) Image processing method, device, equipment, medium and product
CN117522933A (en) Preoperative and intraoperative registration method and device based on nerve radiation field
CN112288689B (en) Three-dimensional reconstruction method and system for operation area in microsurgery imaging process
CN115115547A (en) Attitude adjustment method, device, electronic device, and computer-readable storage medium
CN116363030A (en) Medical image processing method, medical image processing device, electronic equipment and storage medium
CN114565646A (en) Image registration method, device, electronic device and readable storage medium
JP7387280B2 (en) Image processing device, image processing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant