
CN117994234A - Image processing method and image processing apparatus - Google Patents


Info

Publication number
CN117994234A
Authority
CN
China
Prior art keywords
target
image
angle
point
osteotomy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410174033.1A
Other languages
Chinese (zh)
Inventor
马信龙
韩佳奇
王树新
原续波
吕奕欧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute Of Medical Robot And Intelligent System Tianjin University
Original Assignee
Institute Of Medical Robot And Intelligent System Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Medical Robot and Intelligent System, Tianjin University
Priority to CN202410174033.1A
Publication of CN117994234A
Legal status: Pending


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/50 Apparatus or devices for radiation diagnosis specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B 6/505 Apparatus or devices for radiation diagnosis specially adapted for diagnosis of bone
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/101 Computer-aided simulation of surgical operations
    • A61B 2034/105 Modelling of the patient, e.g. for ligaments or bones
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/107 Visualisation of planned trajectories or target regions

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Veterinary Medicine (AREA)
  • General Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Theoretical Computer Science (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Robotics (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image processing method and an image processing apparatus. The method includes: acquiring an initial radiological image of a target human body, wherein the initial radiological image includes a plurality of human body force line key points of the lower limbs of the target human body; determining a plurality of target key points and a target force line according to the plurality of human body force line key points in the initial radiological image; determining an osteotomy angle according to the plurality of target key points and the target force line; clipping the initial radiological image based on the plurality of target key points to obtain a plurality of intermediate radiological images; and performing angle adjustment on the plurality of intermediate radiological images based on the osteotomy angle and the plurality of target key points to obtain a target radiological image, wherein the target radiological image represents a radiographic image of the lower limb of the target human body after correction.

Description

Image processing method and image processing apparatus
Technical Field
The present disclosure relates to the field of radiological image processing technology, and more particularly, to an image processing method, an image processing apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Osteotomy is one of the common procedures in clinical orthopedics. For knee osteoarthritis patients with varus deformity, high tibial osteotomy is a safe and effective mainstream knee-preserving procedure. By cutting the proximal tibia and correcting the force line, it reduces the pressure in the medial compartment and relieves pain, while preserving bone to the maximum extent and delaying the progression of the disease.
In the traditional planning process for high tibial osteotomy, a doctor manually marks the force line key points on the radiological image of the patient, measures each force line angle, and manually plans the osteotomy point and the osteotomy angle. This approach relies entirely on manual work and is inefficient.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide an image processing method, an image processing apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
An aspect of an embodiment of the present disclosure provides an image processing method, including:
acquiring an initial radiological image of a target human body, wherein the initial radiological image includes a plurality of human body force line key points of the lower limbs of the target human body;
determining a plurality of target key points and a target force line according to the plurality of human body force line key points in the initial radiological image;
determining an osteotomy angle according to the plurality of target key points and the target force line;
clipping the initial radiological image based on the plurality of target key points to obtain a plurality of intermediate radiological images;
and performing angle adjustment on the plurality of intermediate radiological images based on the osteotomy angle and the plurality of target key points to obtain a target radiological image, wherein the target radiological image represents the radiographic image of the lower limb after correction of the target human body.
According to an embodiment of the present disclosure, the human body force line key points include: the left femoral head center point, the right femoral head center point, the lateral and medial points of the distal femoral tangent, the lateral and medial points of the proximal tibial tangent, the lateral and medial points of the distal tibial tangent, and the medial and lateral points of the tibial plateau.
According to an embodiment of the present disclosure, determining a plurality of target key points and a target force line according to the plurality of human body force line key points in the initial radiological image includes:
determining a force line angle set and a Fujisawa point according to the plurality of human body force line key points, wherein the force line angle set includes a plurality of angles characterizing different bones or joints of the target human body;
processing the initial radiological image by using a heatmap-offset deep learning model to obtain an osteotomy point and a hinge point, wherein the target key points include the osteotomy point and the hinge point;
and determining the target force line according to the plurality of human body force line key points and the Fujisawa point.
According to an embodiment of the present disclosure, determining the target force line according to the plurality of human body force line key points and the Fujisawa point includes:
determining a femoral head center point among the plurality of human body force line key points;
and connecting the femoral head center point and the Fujisawa point to obtain the target force line.
According to an embodiment of the present disclosure, the force line angle set includes the hip-knee-ankle angle (HKAA) of the lower limb, the medial proximal tibial angle (MPTA), the lateral distal femoral angle (LDFA), the joint line convergence angle (JLCA), the lateral distal tibial angle (LDTA), the lateral patellofemoral angle (LPFA), and the mechanical axis deviation (MAD).
According to an embodiment of the present disclosure, determining an osteotomy angle according to the plurality of target key points and the target force line includes:
determining a rotation radius according to a hinge point and a reference point, wherein the target key points include the hinge point, and the reference point represents the distal tibia center point determined from the lateral and medial points of the distal tibial tangent;
and determining the osteotomy angle according to the hinge point, the rotation radius, and the target force line.
According to an embodiment of the present disclosure, determining the osteotomy angle according to the hinge point, the rotation radius, and the target force line includes:
cutting the radiological image according to the hinge point and the osteotomy point to obtain two sub-images;
rotating one of the sub-images about the hinge point as the center of rotation, relative to the other sub-image, based on the rotation radius;
and determining the rotation angle at which the reference point coincides with the target force line as the osteotomy angle.
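The rotation step above has a closed-form geometric solution: the reference point travels on a circle of the rotation radius centered at the hinge point, so the sought rotation angle takes that point to an intersection of this circle with the target force line. A minimal sketch under that reading (2-D pixel coordinates and the function name are assumptions, not the patent's implementation):

```python
import math

def osteotomy_angle(hinge, reference, line_a, line_b):
    """Rotation (radians) about `hinge` that brings `reference` onto the
    line through `line_a` and `line_b`.  Returns the smaller-magnitude
    solution, or None when the circle of radius |reference - hinge|
    does not reach the line."""
    hx, hy = hinge
    r = math.hypot(reference[0] - hx, reference[1] - hy)
    ax, ay = line_a
    ux, uy = line_b[0] - ax, line_b[1] - ay
    n = math.hypot(ux, uy)
    ux, uy = ux / n, uy / n                       # unit direction of the line
    t = (hx - ax) * ux + (hy - ay) * uy           # project hinge onto the line
    fx, fy = ax + t * ux, ay + t * uy             # foot of the perpendicular
    d = math.hypot(hx - fx, hy - fy)              # hinge-to-line distance
    if d > r:
        return None                               # force line out of reach
    half = math.sqrt(r * r - d * d)
    phi_ref = math.atan2(reference[1] - hy, reference[0] - hx)
    best = None
    for s in (half, -half):                       # the two circle/line intersections
        px, py = fx + s * ux, fy + s * uy
        theta = math.atan2(py - hy, px - hx) - phi_ref
        theta = math.atan2(math.sin(theta), math.cos(theta))  # wrap to (-pi, pi]
        if best is None or abs(theta) < abs(best):
            best = theta
    return best
```

For a medial closed-wedge correction the smaller-magnitude solution is normally the clinically meaningful one; a real planner would also constrain the sign of the rotation.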
According to an embodiment of the present disclosure, the image processing method further includes:
in response to a rotation operation from an input device, adjusting the rotation angle in the target radiological image to obtain a new target radiological image;
and displaying the target radiological image or the new target radiological image.
According to an embodiment of the present disclosure, the image processing method further includes:
during angle adjustment, calculating the force line angle set, the osteotomy angle, and the osteotomy length in real time according to the current target radiological image and displaying them, wherein the osteotomy length is determined according to the osteotomy point and the hinge point.
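The osteotomy length described above reduces to a point-to-point distance that can be recomputed on every interactive adjustment; a minimal sketch (the 2-D coordinate convention is an assumption):

```python
import math

def osteotomy_length(osteotomy_point, hinge_point):
    """Osteotomy length as the straight-line distance between the osteotomy
    point and the hinge point, recomputed whenever either point changes."""
    return math.hypot(osteotomy_point[0] - hinge_point[0],
                      osteotomy_point[1] - hinge_point[1])
```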
Another aspect of an embodiment of the present disclosure provides an image processing apparatus, including:
an acquisition module, configured to acquire an initial radiological image of a target human body, wherein the initial radiological image includes a plurality of human body force line key points of the lower limb of the target human body;
a first determining module, configured to determine a plurality of target key points and a target force line according to the plurality of human body force line key points in the initial radiological image;
a second determining module, configured to determine an osteotomy angle according to the plurality of target key points and the target force line;
a clipping module, configured to clip the initial radiological image based on the plurality of target key points to obtain a plurality of intermediate radiological images;
and an obtaining module, configured to perform angle adjustment on the plurality of intermediate radiological images based on the osteotomy angle and the plurality of target key points to obtain a target radiological image, wherein the target radiological image represents the radiographic image of the lower limb after correction of the target human body.
According to an embodiment of the present disclosure, the image processing apparatus further includes:
an adjusting module, configured to adjust the rotation angle in the target radiological image in response to a rotation operation from an input device, to obtain a new target radiological image;
and a display module, configured to display the target radiological image or the new target radiological image;
the display module is further configured to calculate and display the force line angle set, the osteotomy angle, and the osteotomy length in real time according to the current target radiological image, wherein the osteotomy length is determined according to the osteotomy point and the hinge point.
Another aspect of an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described above.
Another aspect of an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, implement the method described above.
Another aspect of an embodiment of the present disclosure provides a computer program product including computer-executable instructions that, when executed, implement the method described above.
According to the embodiments of the present disclosure, a plurality of target key points and a target force line are determined according to the plurality of human body force line key points in the initial radiological image, and an osteotomy angle is determined according to the target key points and the target force line, so that the initial radiological image can be clipped based on the plurality of target key points and the plurality of intermediate radiological images obtained by clipping can be angle-adjusted based on the osteotomy angle and the plurality of target key points to obtain the target radiological image. Because the clipped images are angle-adjusted after the osteotomy angle is determined from the target key points and the target force line, a doctor can learn more appropriate surgical parameters in a timely manner, which effectively improves the success rate of the patient's operation.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments thereof with reference to the accompanying drawings in which:
FIG. 1 schematically illustrates an exemplary system architecture to which an image processing method may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of an image processing method according to an embodiment of the disclosure;
FIG. 3 schematically illustrates the angle LDFA according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates osteotomy angles according to an embodiment of the disclosure;
FIG. 5 schematically illustrates osteotomy angles according to an embodiment of the disclosure;
FIG. 6 schematically illustrates osteotomy angles according to an embodiment of the disclosure;
Fig. 7 schematically illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure; and
Fig. 8 schematically illustrates a block diagram of an electronic device adapted to implement the above-described method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a convention should generally be interpreted in the sense commonly understood by one skilled in the art (e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Embodiments of the present disclosure provide an image processing method and an image processing apparatus. The method includes: acquiring an initial radiological image of a target human body, wherein the initial radiological image includes a plurality of human body force line key points of the lower limbs of the target human body; determining a plurality of target key points and a target force line according to the plurality of human body force line key points in the initial radiological image; determining an osteotomy angle according to the plurality of target key points and the target force line; clipping the initial radiological image based on the plurality of target key points to obtain a plurality of intermediate radiological images; and performing angle adjustment on the plurality of intermediate radiological images based on the osteotomy angle and the plurality of target key points to obtain a target radiological image, wherein the target radiological image represents a radiographic image of the lower limb of the target human body after correction.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which an image processing method may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired and/or wireless communication links, and the like.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as medical applications, web browser applications, search applications, instant messaging tools, email clients, and/or social platform software (by way of example only).
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that, the image processing method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the image processing apparatus provided by the embodiments of the present disclosure may be generally provided in the server 105. The image processing method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the image processing apparatus provided by the embodiments of the present disclosure may also be provided in a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Or the image processing method provided by the embodiment of the present disclosure may be performed by the terminal apparatus 101, 102, or 103, or may be performed by another terminal apparatus other than the terminal apparatus 101, 102, or 103. Accordingly, the image processing apparatus provided by the embodiments of the present disclosure may also be provided in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically shows a flowchart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the image processing method includes operations S201 to S205.
In operation S201, an initial radiological image of a target human body is acquired, wherein the initial radiological image includes a plurality of human body force line key points of the lower limbs of the target human body.
In operation S202, a plurality of target key points and a target force line are determined according to the plurality of human body force line key points in the initial radiological image.
In operation S203, an osteotomy angle is determined according to the plurality of target key points and the target force line.
In operation S204, the initial radiological image is clipped based on the plurality of target key points to obtain a plurality of intermediate radiological images.
In operation S205, the plurality of intermediate radiological images are angle-adjusted based on the osteotomy angle and the plurality of target key points to obtain a target radiological image, wherein the target radiological image characterizes the radiographic image of the lower limb after correction of the target human body.
According to embodiments of the present disclosure, the initial radiological image may be any one of an X-ray image, a B-mode ultrasound image, and a magnetic resonance image. The plurality of human body force line key points are displayed in the initial radiological image in the form of marker points, and may be marked manually or by a neural network; the present disclosure does not specifically limit the source of the human body force line key points.
In accordance with embodiments of the present disclosure, before the image processing method of the present disclosure is started, the initial radiological image may be reviewed by a physician or an electronic device to determine a suitable surgery type, where the surgery type may be a medial open-wedge high tibial osteotomy, a lateral open-wedge high tibial osteotomy, a medial closed-wedge high tibial osteotomy, a lateral closed-wedge high tibial osteotomy, a medial open-wedge distal femoral osteotomy, a lateral open-wedge distal femoral osteotomy, a medial closed-wedge distal femoral osteotomy, a lateral closed-wedge distal femoral osteotomy, or the like.
It should be noted that any of the above surgery types may be planned using the method of the present disclosure; the difference is that the values of parameters such as the target key points, the target force line, and the osteotomy angle differ between surgery types.
According to embodiments of the present disclosure, the present disclosure takes the medial closed-wedge high tibial osteotomy as an example, in which the human body force line key points used are the left femoral head center point F0 and the right femoral head center point F, the lateral point FK1 and medial point FK2 of the distal femoral tangent, the lateral point TK1 and medial point TK2 of the proximal tibial tangent, the lateral point A1 and medial point A2 of the distal tibial tangent, and the medial point T1 and lateral point T2 of the tibial plateau.
According to an embodiment of the present disclosure, a plurality of target key points and the target force line are determined based on the positions of the above human body force line key points in the initial radiological image. For example, the osteotomy point and the hinge point are determined, and the Fujisawa point is determined from the tibial plateau medial point T1 and the tibial plateau lateral point T2, so that the target force line F0-Fujisawa is constructed from the Fujisawa point and the femoral head center point F0. The osteotomy angle is then determined according to the plurality of target key points and the target force line.
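This construction can be sketched directly from the T1, T2, and F0 coordinates. Note the 62.5% plateau-width ratio below is the value commonly associated with the Fujisawa point in the orthopedic literature, not a figure stated in this patent, so treat it as an assumption:

```python
def fujisawa_point(t1, t2, ratio=0.625):
    """Point located `ratio` of the way across the tibial plateau from the
    medial point T1 toward the lateral point T2.  The 62.5% default is the
    commonly cited Fujisawa target; the patent does not state the fraction."""
    return (t1[0] + ratio * (t2[0] - t1[0]),
            t1[1] + ratio * (t2[1] - t1[1]))

def target_force_line(f0, fujisawa):
    """Target force line F0-Fujisawa, represented by its two defining points."""
    return (f0, fujisawa)
```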
According to an embodiment of the disclosure, the initial radiological image is clipped based on the target key points to obtain a plurality of intermediate radiological images. For example, a cut is made from the hinge point to the osteotomy point, yielding two intermediate radiological images; one intermediate radiological image is then rotated about the hinge point as the center, with the rotation angle equal to the osteotomy angle; the two rotated intermediate radiological images are then merged into the target radiological image.
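The rotation about the hinge point amounts to a 2x3 affine transform on image or keypoint coordinates; a minimal sketch (the matrix layout and (x, y) convention are assumptions; a production pipeline would typically hand such a matrix to an image-warping routine, e.g. OpenCV's warpAffine):

```python
import math

def rotation_about(center, theta):
    """2x3 affine matrix (row-major) rotating points about `center` by
    `theta` radians: x' = R @ x + (center - R @ center)."""
    c, s = math.cos(theta), math.sin(theta)
    cx, cy = center
    return [[c, -s, cx - c * cx + s * cy],
            [s,  c, cy - s * cx - c * cy]]

def apply_affine(m, point):
    """Apply a 2x3 affine matrix to a single (x, y) point."""
    x, y = point
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

Applying this matrix only to the sub-image on one side of the hinge-osteotomy cut, with `theta` set to the osteotomy angle, reproduces the merge step described above.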
According to embodiments of the present disclosure, after the target radiological image is obtained, it may be provided to a physician so that the physician may plan the procedure based on it. For example, before a patient undergoes a high tibial osteotomy, the physician may plan how to perform the osteotomy on the target human body based on the intermediate parameters (i.e., the target radiological image of the present disclosure) to maximize the accuracy of the correction. It should be emphasized that the target radiological image provided by the present disclosure, as well as the osteotomy angle and target key points, provide only surgical references for operating on the target human body; the specific surgical parameters must be selected by the practitioner based on the radiological image and the parameters provided by the present disclosure.
According to the embodiments of the present disclosure, a plurality of target key points and a target force line are determined according to the plurality of human body force line key points in the initial radiological image, and an osteotomy angle is determined according to the target key points and the target force line, so that the initial radiological image can be clipped based on the plurality of target key points and the plurality of intermediate radiological images obtained by clipping can be angle-adjusted based on the osteotomy angle and the plurality of target key points to obtain the target radiological image. Because the clipped images are angle-adjusted after the osteotomy angle is determined from the target key points and the target force line, a doctor can learn more appropriate surgical parameters in a timely manner, which effectively improves the success rate of the patient's operation.
According to an embodiment of the present disclosure, determining a plurality of target keypoints and target force lines from a plurality of human force line keypoints in an initial radiological image comprises:
determining a force line angle set and Fujisawa points according to a plurality of human force line key points, wherein the force line angle set comprises a plurality of angles representing different bones or joints of a target human body;
processing the initial radiological image by using a thermodynamic diagram offset deep learning model to obtain an osteotomy point and a hinge point, wherein the target key points comprise the osteotomy point and the hinge point;
And determining a target force line according to the plurality of human force line key points and the Fujisawa points.
According to an embodiment of the present disclosure, a set of force line angles is identified from the acquired body force line key point coordinates, the angles including the lower limb hip-knee-ankle angle (HKAA), medial proximal tibial angle (MPTA), lateral distal femoral angle (LDFA), joint line convergence angle (JLCA), lateral distal tibial angle (LDTA), lateral proximal femoral angle (LPFA), and mechanical axis deviation (MAD).
According to an embodiment of the present disclosure, the distal femur center point FK0 is defined as the midpoint of FK1 and FK2, the proximal tibia center point TK0 as the midpoint of TK1 and TK2, and the distal tibia center point A0 as the midpoint of A1 and A2.
Fig. 3 schematically illustrates a schematic diagram of angles LDFA according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the angle calculation is exemplified by angle LDFA, defined as the lateral angle between the femoral mechanical axis (from the femoral head center F0 to the distal femur center FK0) and the distal femoral tangent. Angle LDFA is therefore the angle between the line segment from FK0 to FK1 and the line segment from FK0 to F0, as shown in fig. 3.
According to an embodiment of the present disclosure, the method of calculating LDFA the angle is as follows:
The line segment from FK0 to FK1 is denoted FK0-FK1, and the line segment from FK0 to F0 is denoted FK0-F0:
LDFA = acos(DotProduct(Norm(FK0-FK1), Norm(FK0-F0))) * 180/π
where acos is the arccosine function, DotProduct is the dot product, and Norm denotes normalization to a unit vector.
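The same three-point computation works for every angle in the force line angle set. A small numpy sketch follows; the FK0/FK1/F0 coordinates are hypothetical and only illustrate the formula.

```python
import numpy as np

def angle_deg(vertex, p1, p2):
    """Angle at `vertex` between rays vertex->p1 and vertex->p2, in degrees."""
    vertex = np.asarray(vertex, float)
    v1 = np.asarray(p1, float) - vertex
    v2 = np.asarray(p2, float) - vertex
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # clip guards against rounding slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# LDFA = angle at FK0 between FK0-FK1 and FK0-F0 (illustrative coordinates)
FK0, FK1, F0 = [0.0, 0.0], [5.0, 0.0], [1.0, 8.0]
ldfa = angle_deg(FK0, FK1, F0)
```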
According to embodiments of the present disclosure, the Fujisawa point is located at 62.5% of the width of the tibial plateau, measured from the medial side.
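Under the convention that the fraction is measured from the medial edge of the plateau toward the lateral edge, the Fujisawa point can be obtained by linear interpolation between the two tibial plateau key points; the point names below are illustrative, not from the disclosure.

```python
import numpy as np

def fujisawa_point(tp_medial, tp_lateral, frac=0.625):
    """Interpolate at 62.5% of the tibial-plateau line, measured from
    the medial plateau point toward the lateral one (assumed convention)."""
    tp_medial = np.asarray(tp_medial, float)
    tp_lateral = np.asarray(tp_lateral, float)
    return tp_medial + frac * (tp_lateral - tp_medial)
```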
According to an embodiment of the disclosure, the initial radiological image is processed using a thermodynamic diagram offset deep learning model to obtain osteotomy points and hinge points, wherein the thermodynamic diagram offset deep learning model may be trained by:
acquiring a key point training set, wherein the key point training set comprises a plurality of training radiographic images, each annotated with a real osteotomy point or a real hinge point (hereinafter, the real osteotomy point is taken as an example);
iteratively training the initial thermodynamic diagram offset module by utilizing a plurality of training radiographic images and a plurality of real osteotomy points to obtain a trained target thermodynamic diagram offset module;
And constructing a thermodynamic diagram offset deep learning model according to the target thermodynamic diagram offset module and the fusion scoring module.
According to an embodiment of the present disclosure, iteratively training an initial thermodynamic diagram offset module using a plurality of training radiographic images and a plurality of real osteotomy points, resulting in a trained target thermodynamic diagram offset module, comprising:
For each training radiographic image, processing the training radiographic image by using the thermodynamic diagram offset model to obtain a predicted thermodynamic diagram and a predicted offset vector map, wherein the predicted thermodynamic diagram comprises a plurality of first areas marked with first values, each first value representing whether the corresponding first area is a predicted key point, and the predicted offset vector map comprises a plurality of second areas marked with second values, each second value representing the degree of offset between the corresponding second area and the real osteotomy point;
generating thermodynamic diagram loss values according to first numerical values of the first areas;
Generating an offset loss value according to the second values of the plurality of second areas;
generating a thermodynamic diagram offset loss value according to the thermodynamic diagram loss value and the offset loss value;
and iteratively adjusting network parameters of the thermodynamic diagram offset model according to the thermodynamic diagram offset loss value to obtain a target thermodynamic diagram offset module.
According to an embodiment of the present disclosure, in the ground-truth thermodynamic diagram (the supervision target for the predicted thermodynamic diagram), the value within a given radius of the true osteotomy point coordinate is 1 and the value elsewhere is 0; this is the first numerical value.
According to embodiments of the present disclosure, the backbone network of the thermodynamic diagram offset model may be built by embedding a Convolutional Block Attention Module (CBAM) after the last convolutional layer of a ResNet network, with two convolutional output heads corresponding to the two outputs. The output of the predicted thermodynamic diagram uses a sigmoid function, and the corresponding loss function is the sum of the logistic losses over the key points, denoted L_h. The loss function of the predicted offset vector map is a robust loss that is computed only over the difference between the true offset and the predicted offset at each location within the radius R_disk of the real osteotomy point, as shown in equation (1):
L_o(θ) = Σ_k Σ_{x_i: ‖x_i − l_k‖ ≤ R_disk} H(‖F̂_k(x_i) − F_k(x_i)‖) (1)
where k indexes the k-th real osteotomy point, H(·) is the robust loss, l_k is the k-th key point position, F_k(x_i) = l_k − x_i is the true offset at position (pixel) x_i, and F̂_k(x_i) is the predicted offset value at x_i in the predicted offset vector map, i.e., the second numerical value; the closer a position is to the real osteotomy point, the smaller the modulus of its offset vector.
According to an embodiment of the present disclosure, the thermodynamic diagram offset loss value is calculated as shown in equation (2):
L(θ)=λhLh(θ)+λoLo(θ) (2)
where λ_h and λ_o are predetermined constants (for example, 0.3 and 0.7, respectively), L_h is the thermodynamic diagram loss value, and L_o is the offset loss value.
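A numpy sketch of the two loss terms and the weighted sum of equation (2); it assumes a Huber function as the robust loss H and the example weights 0.3/0.7, and is only an illustration of the formulas above, not the disclosure's implementation.

```python
import numpy as np

def heatmap_loss(pred, target):
    """Sum of per-pixel logistic (binary cross-entropy) losses, L_h."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.sum(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def offset_loss(pred_offsets, keypoint, coords, r_disk=8.0, delta=1.0):
    """Robust (Huber) loss on offset vectors, computed only inside the
    disk of radius r_disk around the true key point, L_o."""
    true_off = keypoint - coords                    # regression target l_k - x_i
    dist = np.linalg.norm(true_off, axis=-1)
    mask = dist <= r_disk                           # restrict to the disk
    err = np.linalg.norm(pred_offsets - true_off, axis=-1)[mask]
    huber = np.where(err <= delta, 0.5 * err ** 2, delta * (err - 0.5 * delta))
    return huber.sum()

def total_loss(lh, lo, lam_h=0.3, lam_o=0.7):
    """Equation (2): L(θ) = λ_h * L_h(θ) + λ_o * L_o(θ)."""
    return lam_h * lh + lam_o * lo
```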
According to an embodiment of the present disclosure, the fusion scoring module calculates a fused score F_ij for each pixel P_ij using formula (3), and determines the pixel with the largest score as the osteotomy point or hinge point of the present disclosure:
F_ij = O_ij * H_ij (3)
where P_ij denotes the pixel at position (i, j), O_ij denotes the offset score of P_ij, and H_ij denotes the thermodynamic diagram score of P_ij.
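Because the fusion in formula (3) is element-wise, it reduces to a product followed by an argmax. A minimal sketch, with illustrative array names:

```python
import numpy as np

def fuse_and_select(offset_score, heatmap_score):
    """Formula (3): F_ij = O_ij * H_ij; return (row, col) of the pixel
    with the largest fused score, taken as the predicted key point."""
    fused = offset_score * heatmap_score
    return np.unravel_index(np.argmax(fused), fused.shape)
```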
According to an embodiment of the present disclosure, determining a target force line from a plurality of human force line keypoints and Fujisawa points comprises:
determining a femoral head center point F0 among the plurality of human body force line key points;
connecting the femoral head center point F0 with the Fujisawa point to obtain the target force line F0-Fujisawa.
Fig. 4 schematically illustrates a schematic view of the included angle of an osteotomy angle, in accordance with an embodiment of the present disclosure.
According to an embodiment of the present disclosure, determining an osteotomy angle from a plurality of target keypoints and target force lines, comprises:
determining a rotation radius according to a hinge point and a reference point, wherein the target key point comprises the hinge point, and the reference point represents a tibia distal center point A0 determined according to a tibia distal tangent outer side point A1 and a tibia distal tangent inner side point A2;
and determining the osteotomy angle according to the hinge point, the rotation radius and the target force line.
According to the embodiment of the disclosure, the hinge point is connected to A0, and the hinge point is connected to the osteotomy point; the image of the affected-side tibia portion is segmented along the line connecting the hinge point and the osteotomy point. With the hinge point as the rotation center and the distance from the hinge point to A0 as the rotation radius, the segmented image is rotated until A0 coincides with the target force line. The angle rotated at that moment is determined as the opening angle of the osteotomy, i.e., the osteotomy angle, such as the black included angle in fig. 4.
According to an embodiment of the present disclosure, determining an osteotomy angle from a hinge point, a radius of rotation, and a target force line comprises:
cutting the initial radiological image according to the hinge point and the osteotomy point to obtain two sub-images;
rotating one of the sub-images about the hinge point as the center of a circle, based on the rotation radius;
in the case where the reference point coincides with the target force line, determining the rotated angle as the osteotomy angle.
According to an embodiment of the present disclosure, as shown in fig. 4, one sub-image (the sub-image at the lower right of the drawing) is rotated about the hinge point based on the rotation radius, and in the case where the reference point A0 coincides with the target force line, the rotated angle is determined as the osteotomy angle.
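Geometrically, the angle can be obtained without actually rotating pixels: intersect the circle centered at the hinge point with radius |hinge − A0| with the target force line, and measure the angle swept from A0 to the intersection. The sketch below assumes the smaller of the two candidate rotations is the intended opening angle; all point names are illustrative.

```python
import numpy as np

def osteotomy_angle(hinge, a0, f0, fujisawa):
    """Angle (degrees) to rotate A0 about the hinge so that it lands on
    the target force line F0-Fujisawa."""
    hinge, a0 = np.asarray(hinge, float), np.asarray(a0, float)
    p = np.asarray(f0, float)
    d = np.asarray(fujisawa, float) - p
    d = d / np.linalg.norm(d)                      # unit direction of force line
    r = np.linalg.norm(a0 - hinge)                 # rotation radius
    t0 = np.dot(hinge - p, d)                      # foot of perpendicular on line
    h2 = r ** 2 - np.linalg.norm(p + t0 * d - hinge) ** 2
    if h2 < 0:
        raise ValueError("force line does not intersect the rotation circle")
    candidates = [p + (t0 + s * np.sqrt(h2)) * d for s in (-1.0, 1.0)]

    def swept(q):
        v1, v2 = (a0 - hinge) / r, (q - hinge) / r
        return np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))

    return min(swept(q) for q in candidates)       # smaller rotation assumed
```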
Fig. 5 schematically illustrates an included angle schematic of an osteotomy angle, according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the image processing method further includes:
in response to a rotation operation of an input device, adjusting the rotation angle in the target radiological image to obtain a new target radiological image;
displaying the target radiological image or the new target radiological image, as shown in fig. 5.
According to the embodiment of the disclosure, since the rotation angle predicted by the image processing method of the disclosure may not fit the physical condition of the patient, the physician may adjust the rotation angle through an input device such as a mouse or a keyboard; as the rotation angle is adjusted, the target radiological image changes accordingly. During this process, a display may present the target radiological image in real time so that the physician can confirm whether the current rotation angle is suitable.
Fig. 6 schematically illustrates a schematic view of the included angle of an osteotomy angle, in accordance with an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the image processing method further includes:
in the case of angle adjustment, calculating the force line angle set, the osteotomy angle, and the osteotomy length in real time according to the current target radiological image, and displaying them, wherein the osteotomy length is determined according to the osteotomy point and the hinge point.
According to the embodiment of the disclosure, during real-time display, the set of force line angles, the osteotomy angle, and the osteotomy length at the current time can be shown; for example, the current target radiological image can be displayed on the left side, with the corresponding HKAA, MPTA, LDFA, JLCA, LDTA, and LPFA angles, the MAD, the osteotomy angle, the osteotomy length, and the like displayed on the right side, as shown in fig. 6.
According to the embodiment of the disclosure, the image processing method of the disclosure can also mark each human body force line key point on the display interface, and the coordinate position of each key point can likewise be adjusted through the input device.
Fig. 7 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the image processing apparatus 700 includes an acquisition module 710, a first determination module 720, a second determination module 730, a cropping module 740, and an obtaining module 750.
An acquisition module 710 for acquiring an initial radiological image of a target human body, wherein the initial radiological image includes a plurality of human body force line keypoints of a lower limb of the target human body;
A first determining module 720, configured to determine a plurality of target keypoints and target force lines according to a plurality of human force line keypoints in the initial radiological image;
a second determining module 730, configured to determine an osteotomy angle according to the plurality of target keypoints and the target force lines;
The cropping module 740 is configured to crop the initial radiological image based on the plurality of target keypoints to obtain a plurality of intermediate radiological images;
The obtaining module 750 is configured to perform angle adjustment on the plurality of intermediate radiological images based on the osteotomy angle and the plurality of target keypoints to obtain a target radiological image, where the target radiological image characterizes a lower limb radiographic image after correction of the target human body.
According to the embodiment of the disclosure, the target radiological image is obtained by determining a plurality of target key points and a target force line according to a plurality of human body force line key points in the initial radiological image, and determining an osteotomy angle according to the target key points and the target force line, so that the initial radiological image can be cropped based on the plurality of target key points, and the plurality of intermediate radiological images obtained by cropping can be angle-adjusted based on the osteotomy angle and the plurality of target key points. Because the cropped images are angle-adjusted only after the osteotomy angle has been determined from the target key points and the target force line, a physician can learn suitable operation parameters in a timely manner, which effectively improves the success rate of the patient's operation.
According to an embodiment of the present disclosure, the image processing apparatus 700 further includes:
The adjusting module is used for responding to a rotation operation of the input device and adjusting the rotation angle in the target radiological image to obtain a new target radiological image;
The display module is used for displaying the target radiological image or the new target radiological image;
The display module is also used for calculating the force line angle set, the osteotomy angle, and the osteotomy length in real time according to the current target radiological image and displaying them, wherein the osteotomy length is determined according to the osteotomy point and the hinge point.
Any number of the modules according to embodiments of the present disclosure, or at least some of the functionality of any number of them, may be implemented in one module. Any one or more of the modules according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, or an Application Specific Integrated Circuit (ASIC), or in hardware or firmware in any other reasonable manner of integrating or packaging circuitry, or in any one of, or a suitable combination of, the three implementations of software, hardware, and firmware. Alternatively, one or more of the modules according to embodiments of the present disclosure may be at least partially implemented as computer program modules that, when executed, perform the corresponding functions.
For example, any of the acquisition module 710, the first determination module 720, the second determination module 730, the cropping module 740, and the obtaining module 750 may be combined into one module for implementation, or any of the modules may be split into multiple modules. Alternatively, at least some of the functionality of one or more of these modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the acquisition module 710, the first determination module 720, the second determination module 730, the cropping module 740, and the obtaining module 750 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, or an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable way of integrating or packaging circuitry, or in any one of, or a suitable combination of, the three implementations of software, hardware, and firmware. Alternatively, at least one of the acquisition module 710, the first determination module 720, the second determination module 730, the cropping module 740, and the obtaining module 750 may be at least partially implemented as computer program modules which, when executed, perform the corresponding functions.
It should be noted that the image processing apparatus portion in the embodiment of the present disclosure corresponds to the image processing method portion in the embodiment of the present disclosure; for details of the image processing apparatus portion, reference may be made to the description of the image processing method portion, which is not repeated herein.
Fig. 8 schematically illustrates a block diagram of an electronic device adapted to implement the above-described method according to an embodiment of the present disclosure. The electronic device shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 8, an electronic device 800 according to an embodiment of the present disclosure includes a processor 801 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The processor 801 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)). The processor 801 may also include on-board memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the disclosure.
In the RAM 803, various programs and data required for the operation of the electronic device 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. The processor 801 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 802 and/or the RAM 803. Note that the program may be stored in one or more memories other than the ROM 802 and the RAM 803. The processor 801 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 800 may also include an input/output (I/O) interface 805, which is also connected to the bus 804. The electronic device 800 may also include one or more of the following components connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, etc.; an output portion 807 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, etc.; a storage section 808 including a hard disk or the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 810 as needed so that a computer program read out therefrom is installed into the storage section 808 as needed.
According to embodiments of the present disclosure, the method flow according to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809, and/or installed from the removable media 811. The above-described functions defined in the system of the embodiments of the present disclosure are performed when the computer program is executed by the processor 801. The systems, devices, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM) or flash memory, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 802 and/or RAM 803 and/or one or more memories other than ROM 802 and RAM 803 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program comprising program code for performing the methods provided by the embodiments of the present disclosure, the program code for causing an electronic device to implement the image processing methods provided by the embodiments of the present disclosure when the computer program product is run on the electronic device.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 801. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed over a network medium in the form of a signal, and downloaded and installed via the communication portion 809, and/or installed from the removable medium 811. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
According to embodiments of the present disclosure, the program code of the computer programs provided by the embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, the "C" language, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.

Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be combined in various ways, even if such combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be variously combined without departing from the spirit and teachings of the present disclosure. All such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. These examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (10)

1. An image processing method, comprising:
acquiring an initial radiological image of a target human body, wherein the initial radiological image comprises a plurality of human body force line key points of a lower limb of the target human body;
determining a plurality of target key points and a target force line according to the plurality of human body force line key points in the initial radiological image;
determining an osteotomy angle according to the plurality of target key points and the target force line;
cropping the initial radiological image based on the plurality of target key points to obtain a plurality of intermediate radiological images; and
performing angle adjustment on the plurality of intermediate radiological images based on the osteotomy angle and the plurality of target key points to obtain a target radiological image, wherein the target radiological image represents a lower limb radiographic image of the target human body after correction.
2. The method of claim 1, wherein the human body force line keypoints comprise: the femoral head center point on the left side, the femoral head center point on the right side, the femoral distal tangent outer side point, the femoral distal tangent inner side point, the tibial proximal tangent outer side point, the tibial proximal tangent inner side point, the tibial distal tangent outer side point, the tibial distal tangent inner side point, the tibial plateau inner side point and the tibial plateau outer side point.
3. The method of claim 1 or 2, wherein determining a plurality of target keypoints and target force lines from a plurality of the human force line keypoints in the initial radiological image comprises:
determining a force line angle set and a Fujisawa point according to the plurality of human body force line key points, wherein the force line angle set comprises a plurality of angles representing different bones or joints of the target human body;
processing the initial radiological image by using a thermodynamic diagram offset deep learning model to obtain an osteotomy point and a hinge point, wherein the target key points comprise the osteotomy point and the hinge point; and
determining the target force line according to the plurality of human body force line key points and the Fujisawa point.
4. The method of claim 3, wherein determining the target force line from a plurality of the human force line keypoints and the Fujisawa point comprises:
determining a femoral head center point among the plurality of human body force line key points; and
connecting the femoral head center point with the Fujisawa point to obtain the target force line.
5. The method of claim 3, wherein the set of force line angles comprises a lower limb hip-knee-ankle angle, a medial proximal tibial angle, a lateral distal femoral angle, a joint line convergence angle, a lateral distal tibial angle, a lateral proximal femoral angle, and a mechanical axis deviation.
6. The method of claim 1, wherein determining an osteotomy angle from a plurality of the target keypoints and the target force lines comprises:
determining a rotation radius according to a hinge point and a reference point, wherein the target key points comprise the hinge point, and the reference point represents a tibia distal center point determined according to a tibia distal tangent outer side point and a tibia distal tangent inner side point; and
determining the osteotomy angle according to the hinge point, the rotation radius, and the target force line.
7. The method of claim 6, wherein determining the osteotomy angle according to the hinge point, the rotation radius, and the target force line comprises:
cutting the initial radiological image according to the hinge point and the osteotomy point to obtain two sub-images;
rotating one of the sub-images about the hinge point based on the rotation radius; and
in the case where the reference point coincides with the target force line, determining the rotated angle as the osteotomy angle.
8. The method of claim 1, further comprising:
in response to a rotation operation of an input device, adjusting the rotation angle in the target radiographic image to obtain a new target radiographic image; and
displaying the target radiographic image or the new target radiographic image;
wherein the image processing method further comprises:
in the case of angle adjustment, calculating a force line angle set, the osteotomy angle, and an osteotomy length in real time according to the current target radiographic image and displaying them, wherein the osteotomy length is determined according to an osteotomy point and a hinge point.
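Claim 8 states that the osteotomy length is determined from the osteotomy point and the hinge point; the straightforward reading is the Euclidean distance between those two image points, which is cheap enough to recompute on every interactive rotation. A minimal sketch (coordinates are illustrative pixel values, not from the patent):

```python
import math

def osteotomy_length(osteotomy_point, hinge_point):
    """Osteotomy length as the Euclidean distance between the
    osteotomy point and the hinge point (2D image coordinates)."""
    (x0, y0), (x1, y1) = osteotomy_point, hinge_point
    return math.hypot(x1 - x0, y1 - y0)

# Illustrative pixel coordinates (a 60-80-100 right triangle):
length_px = osteotomy_length((100.0, 250.0), (160.0, 330.0))  # -> 100.0
```

A pixel-space length would still need the image's pixel spacing to convert to millimeters for display.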
9. An image processing apparatus, comprising:
an acquisition module configured to acquire an initial radiographic image of a target human body, wherein the initial radiographic image comprises a plurality of human body force line key points of a lower limb of the target human body;
a first determining module configured to determine a plurality of target key points and a target force line according to the plurality of human body force line key points in the initial radiographic image;
a second determining module configured to determine an osteotomy angle according to the plurality of target key points and the target force line;
a clipping module configured to clip the initial radiographic image based on the plurality of target key points to obtain a plurality of intermediate radiographic images; and
an obtaining module configured to perform angle adjustment on the plurality of intermediate radiographic images based on the osteotomy angle and the plurality of target key points to obtain a target radiographic image, wherein the target radiographic image represents a lower limb radiographic image of the target human body after correction.
10. The apparatus of claim 9, further comprising:
an adjusting module configured to adjust, in response to a rotation operation of an input device, the rotation angle in the target radiographic image to obtain a new target radiographic image; and
a display module configured to display the target radiographic image or the new target radiographic image;
wherein the display module is further configured to calculate, in real time, a force line angle set, the osteotomy angle, and an osteotomy length according to the current target radiographic image and to display them, wherein the osteotomy length is determined according to an osteotomy point and a hinge point.
CN202410174033.1A 2024-02-07 2024-02-07 Image processing method and image processing apparatus Pending CN117994234A (en)

Priority Applications (1)

CN202410174033.1A — Priority/Filing Date: 2024-02-07 — Image processing method and image processing apparatus


Publications (1)

CN117994234A — Publication Date: 2024-05-07

Family ID: 90898725


Country Status (1): CN — CN117994234A


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination