WO2025074367A1 - Systems and methods for verifying one or more segmentation models - Google Patents
Systems and methods for verifying one or more segmentation models
- Publication number
- WO2025074367A1 (PCT/IL2024/050976)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- measurement
- model
- image data
- segmentation
- numerical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30008—Bone
Definitions
- the present disclosure is generally directed to segmentation models, and relates more particularly to verifying one or more segmentation models.
- Imaging may be used by a medical provider for diagnostic, therapeutic, and/or surgical planning purposes.
- anatomical element(s) in the images may be segmented to enable information and/or measurements about the anatomical element(s) to be obtained.
- the information and/or measurements may be useful in the aforementioned diagnostic, therapeutic, and/or surgical planning purposes.
- Example aspects of the present disclosure include:
- a method for verifying one or more segmentation models according to at least one embodiment of the present disclosure comprises:
- receiving image data depicting an anatomical element; inputting the image data into the one or more segmentation models, the one or more segmentation models configured to segment the image data and output segmented image data; receiving at least one numerical measurement of the anatomical element; receiving at least one model measurement of the anatomical element, wherein the at least one model measurement is measured by a segmentation measurement model using the segmented image data; comparing the at least one numerical measurement with the at least one model measurement; determining an error based on the difference between the at least one numerical measurement and the at least one model measurement; and verifying the one or more segmentation models as a result of determining that the error is less than a predetermined error threshold.
- the at least one numerical measurement comprises at least a cortical bone thickness measurement.
- the one or more trajectories are at least one of predefined trajectories or random trajectories.
- the one or more users comprises at least three users.
- any of the aspects herein, wherein the at least one numerical measurement and the at least one model measurement are measured for one or more testing groups of in-scope, pediatric, and/or metals.
- any of the aspects herein further comprising: inputting the image data into a segmentation model, the segmentation model configured to segment the image data and output a segmented image; and inputting the segmented image data, the at least one numerical measurement, and one or more trajectories into the segmentation measurement model, wherein the segmentation measurement model measures the at least one numerical measurement in the segmented image.
- a system for verifying one or more segmentation models comprises an imaging device; a processor; and a memory storing data for processing by the processor, the data, when processed, causes the processor to: receive image data depicting an anatomical element from the imaging device; input the image data into the one or more segmentation models, the one or more segmentation models configured to segment the image data and output segmented image data; receive at least one numerical measurement of the anatomical element; receive at least one model measurement of the anatomical element, wherein the at least one model measurement is measured by a segmentation measurement model; compare the at least one numerical measurement with the at least one model measurement; determine an error based on the difference between the at least one numerical measurement and the at least one model measurement; and verify the one or more segmentation models as a result of determining that the error is less than a predetermined error threshold.
- the imaging device comprises at least one of a CT imaging device or an O-arm imaging device.
- the at least one model measurement is received from the segmentation measurement model configured to receive segmented image data and the at least one numerical measurement and output the at least one model measurement.
- the at least one numerical measurement comprises at least a cortical bone thickness measurement.
- the memory stores further data for processing by the processor that, when processed, causes the processor to: average the at least one numerical measurement prior to the comparing step.
- a method for verifying one or more segmentation models comprises receiving image data depicting an anatomical element from a database; inputting the image data into the one or more segmentation models, the one or more segmentation models configured to segment the image data and output segmented image data; receiving at least one numerical measurement of the anatomical element; inputting the segmented image data, the at least one numerical measurement, and one or more trajectories into a segmentation measurement model, the segmentation measurement model configured to measure the at least one numerical measurement from the segmented image data; receiving at least one model measurement of the anatomical element from the segmentation measurement model; averaging the at least one numerical measurement; comparing the averaged numerical measurement with the at least one model measurement; determining an error based on the difference between the averaged numerical measurement and the at least one model measurement; and verifying the one or more segmentation models as a result of determining that the error is less than a predetermined error threshold.
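- the claimed flow may be summarized in the following minimal sketch. The sketch is illustrative only: the segment() and measure() callables stand in for the one or more segmentation models and the segmentation measurement model, and the threshold value is supplied by the caller; none of these names are APIs from the present disclosure.

```python
# Hypothetical sketch of the claimed verification flow; segment() and
# measure() are placeholders for the segmentation model and segmentation
# measurement model, not implementations from the disclosure.
import numpy as np

def verify_segmentation_model(image, segment, measure, annotator_mm,
                              trajectories, threshold_mm):
    segmented = segment(image)                                   # segment the image data
    model_mm = np.array([measure(segmented, t) for t in trajectories])
    numerical_mm = np.asarray(annotator_mm, dtype=float).mean(axis=0)  # average annotators
    error = np.abs(numerical_mm - model_mm)                      # per-trajectory error
    return bool(np.all(error < threshold_mm))                    # verified if under threshold
```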
- each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
- each one of A, B, and C in the above expressions refers to an element, such as X, Y, and Z, or a class of elements, such as X1-Xn, Y1-Ym, and Z1-Zo.
- the phrase is intended to refer to a single element selected from X, Y, and Z, a combination of elements selected from the same class (e.g., X1 and X2) as well as a combination of elements selected from two or more classes (e.g., Y1 and Zo).
- FIG. 1 is a block diagram of a system according to at least one embodiment of the present disclosure
- Fig. 2A is a flowchart according to at least one embodiment of the present disclosure
- Fig. 2B is a flowchart according to at least one embodiment of the present disclosure
- Fig. 3 is a flowchart according to at least one embodiment of the present disclosure
- Fig. 4 is a flowchart according to at least one embodiment of the present disclosure
- Fig. 5A is an example image of an anatomical element with marked trajectories according to at least one embodiment of the present disclosure
- Fig. 5B is an example image of an anatomical element with marked trajectories and a location of measurements according to at least one embodiment of the present disclosure
- Fig. 5C is an example image of an anatomical element with marked trajectories according to at least one embodiment of the present disclosure
- Fig. 5D is an example image of an anatomical element with marked trajectories and measurements according to at least one embodiment of the present disclosure
- Fig. 6 is a chart illustrating a differences mean distribution of numerical measurements taken according to at least one embodiment of the present disclosure
- Fig. 7 is an example evaluation of numerical measurements and model measurements according to at least one embodiment of the present disclosure.
- Fig. 8A is an example image of a numerical measurement and a model measurement according to at least one embodiment of the present disclosure.
- Fig. 8B is another example image of a numerical measurement and a model measurement according to at least one embodiment of the present disclosure.
- the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Alternatively or additionally, functions may be implemented using machine learning models, neural networks, artificial neural networks, or combinations thereof (alone or in combination with instructions).
- Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
- processors such as one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessors), graphics processing units (e.g., Nvidia GeForce RTX 2000-series processors, Nvidia GeForce RTX 3000-series processors, AMD Radeon RX 5000-series processors, AMD Radeon RX 6000-series processors, or any other graphics processing units), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry
- Segmentation of cortical and cancellous vertebral bone layers in images obtained from, for example, CT imaging devices and/or O-arm imaging devices has a variety of applications: bone mineral density evaluation, force feedback prediction for robotic bone removal, and/or providing data for surgery auto-planning.
- Testing of segmentation algorithms conventionally uses manual segmentation of ground truth by human annotators. The annotation effort for manual cortical bone segmentation is time-consuming and can be ambiguous in different vertebral areas.
- a novel testing method for testing segmentation models takes numerical measurement(s) of cortical bone thickness along pre-defined trajectories and compares them to the cortical bone thickness measured by the algorithm along the same trajectories.
- a plurality of cases was annotated by 3 different annotators to calculate the average thickness and to perform a variance study.
- the annotations were conducted by loading CT and O-arm scans from a testing data set, in each one of the 3D views (Axial, Sagittal and Coronal).
- the testing set was composed of three testing groups - In-scope, Pediatric, and Metals - which were intended to represent different patient populations.
- the testing method was verified based on the results, where the 90th percentile of the algorithm thickness error was approximately 1.4 mm while the 90th percentile of the measurement differences mean was approximately 0.76 mm. These results indicate that the ground truth annotation is a valid testing method.
- the average error of the algorithm was -0.11±0.02 mm.
- the average annotation time per vertebra was approximately one hour as compared to previous segmentation annotation projects where the average annotation time per vertebra was about seven hours.
- the novel testing method may result in a reduction in time, resources, and/or costs.
- Embodiments of the present disclosure provide technical solutions to one or more of the problems of (1) reducing resources, time, and/or costs related to segmentation, (2) providing an efficient testing method for segmentation models, and/or (3) increasing an accuracy of segmentation models, in particular, segmentation models of cortical bone.
- a block diagram of a system 100 according to at least one embodiment of the present disclosure is shown.
- the system 100 may be used to generate, use, validate, and/or verify one or more segmentation algorithms and/or carry out one or more other aspects of one or more of the methods disclosed herein.
- the system 100 comprises a computing device 102, one or more imaging devices 112, a database 130, and/or a cloud or other network 134.
- Systems according to other embodiments of the present disclosure may comprise more or fewer components than the system 100.
- the system 100 may not include the imaging device 112, one or more components of the computing device 102, the database 130, and/or the cloud 134.
- the computing device 102 comprises a processor 104, a memory 106, a communication interface 108, and a user interface 110.
- Computing devices according to other embodiments of the present disclosure may comprise more or fewer components than the computing device 102.
- the processor 104 of the computing device 102 may be any processor described herein or any similar processor.
- the processor 104 may be configured to execute instructions stored in the memory 106, which instructions may cause the processor 104 to carry out one or more computing steps utilizing or based on data received from the imaging device 112, the database 130, and/or the cloud 134.
- the memory 106 may be or comprise RAM, DRAM, SDRAM, other solid-state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions.
- the memory 106 may store information or data useful for completing, for example, any step of the methods 200, 210, 300, and/or 400 described herein, or of any other methods.
- the memory 106 may store, for example, instructions and/or machine learning models.
- the memory 106 may store content (e.g., instructions and/or machine learning models) that, when executed by the processor 104, enable image processing 120, a segmentation model 122, a segmentation measurement model 124, and/or a verification model 126.
- the image processing 120 enables the processor 104 to process image data of an image (received from, for example, the imaging device 112 or any imaging device) for the purpose of, for example, identifying information about one or more anatomical element(s) depicted in the image data.
- the information may comprise, for example, identification of hard tissue and/or soft tissues, a boundary between hard tissue and soft tissue, a boundary of hard tissue and/or soft tissue, identification of cortical bone, identification of cancellous bone, etc.
- the image processing 120 may also use the segmentation model 122, as described below. In some embodiments, the image processing 120 may process the image data for use in the segmentation model 122.
- the segmentation model 122 enables the processor 104 to segment the image data so as to identify anatomical elements in the image. More generally, segmentation performed by the segmentation model 122 is the process of partitioning a digital image into multiple segments (sets of pixels, also known as image objects). The goal of segmentation is to simplify and/or change the representation of a complex image into something that is more meaningful and/or easier to analyze.
- the segmentation model 122 may enable the processor 104 to identify a boundary of an anatomical element by using, for example, feature recognition. For example, the segmentation model 122 may enable the processor 104 to identify a vertebra in image data. In other instances, the segmentation model 122 may enable the processor 104 to identify a boundary of an anatomical element by determining a difference in or contrast between colors or grayscales of image pixels.
- the segmentation measurement model 124 enables the processor 104 to, for example, measure a thickness of an anatomical element depicted in the segmented image data (or to measure any other measurement of the anatomical element).
- the segmentation measurement model 124 receives the segmented image data and at least one numerical measurement as input.
- the anatomical element is bone and the segmentation measurement model 124 measures a cortical bone thickness of a vertebra (or multiple vertebrae) depicted in the image data.
- the segmentation measurement model 124 may, in some instances, output one or more model measurements of an anatomical element for verifying an accuracy of or validating the segmentation model 122, as will be described in Figs. 2A-2B and Figs. 4-8B.
- the model measurements may be one or more measurements of a cortical bone thickness taken along one or more trajectories. In other embodiments, the model measurements may be any measurements of any anatomical element. In still other embodiments, the model measurements may be any measurements and may not be taken along a predefined trajectory.
- the verification model 126 may enable the processor 104 to verify an accuracy or validate the segmentation model 122.
- the verification model 126 may receive numerical measurements (as will be described in Figs. 2B and 4 below) and model measurements of an anatomical element from the segmentation measurement model 124.
- the verification model 126 may then verify an accuracy of or validate the segmentation model 122 based on the numerical measurements and the model measurements. More specifically, a difference between the numerical measurements and the model measurements is determined and the difference may be compared to one or more predetermined thresholds.
- the segmentation model 122 may be verified if the difference is below the one or more predetermined thresholds.
- Such content may, in some embodiments, be organized into one or more applications, modules, packages, layers, or engines.
- the memory 106 may store other types of content or data (e.g., machine learning models, artificial neural networks, deep neural networks, etc.) that can be processed by the processor 104 to carry out the various method and features described herein.
- while various contents of memory 106 may be described as instructions, it should be appreciated that functionality described herein can be achieved through use of instructions, algorithms, and/or machine learning models.
- the data, algorithms, and/or instructions may cause the processor 104 to manipulate data stored in the memory 106 and/or received from or via the imaging device 112, the database 130, and/or the cloud 134.
- the computing device 102 may also comprise a communication interface 108.
- the communication interface 108 may be used for receiving image data or other information from an external source (such as the imaging device 112, the database 130, the cloud 134, and/or any other system or component not part of the system 100), and/or for transmitting instructions, images, or other information to an external system or device (e.g., another computing device 102, the imaging device 112, the database 130, the cloud 134, and/or any other system or component not part of the system 100).
- the communication interface 108 may comprise one or more wired interfaces (e.g., a USB port, an Ethernet port, a Firewire port) and/or one or more wireless transceivers or interfaces (configured, for example, to transmit and/or receive information via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, and so forth).
- the communication interface 108 may be useful for enabling the device 102 to communicate with one or more other processors 104 or computing devices 102, whether to reduce the time needed to accomplish a computing-intensive task or for any other reason.
- the computing device 102 may also comprise one or more user interfaces 110.
- the user interface 110 may be or comprise a keyboard, mouse, trackball, monitor, television, screen, touchscreen, and/or any other device for receiving information from a user and/or for providing information to a user.
- the user interface 110 may be used, for example, to receive a user selection or other user input regarding any step of any method described herein. Notwithstanding the foregoing, any required input for any step of any method described herein may be generated automatically by the system 100 (e.g., by the processor 104 or another component of the system 100) or received by the system 100 from a source external to the system 100.
- the user interface 110 may be useful to allow a surgeon or other user to modify instructions to be executed by the processor 104 according to one or more embodiments of the present disclosure, and/or to modify or adjust a setting of other information displayed on the user interface 110 or corresponding thereto.
- the computing device 102 may utilize a user interface 110 that is housed separately from one or more remaining components of the computing device 102.
- the user interface 110 may be located proximate one or more other components of the computing device 102, while in other embodiments, the user interface 110 may be located remotely from one or more other components of the computing device 102.
- the imaging device 112 may be operable to image anatomical feature(s) (e.g., a bone, veins, tissue, etc.) and/or other aspects of patient anatomy to yield image data (e.g., image data depicting or corresponding to a bone, veins, tissue, etc.).
- image data refers to the data generated or captured by an imaging device 112, including in a machine-readable form, a graphical/visual form, and in any other form.
- the image data may comprise data corresponding to an anatomical feature of a patient, or to a portion thereof.
- the image data may be historical image data taken from one or more patient categories such as, for example, in-scope, pediatric, and/or metals.
- the image data may be taken for other patient categories, for any other category, and at any time (e.g., historical, preoperative, intraoperative, postoperative, etc.).
- the anatomical feature(s) depicted in the image data may be, for example, bone such as one or more vertebrae.
- vertebrae, and in particular the cortical bone of a vertebra, may be difficult to segment as the cortical bone may be or appear thin in certain orientations.
- a first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time, and a second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time after the first time.
- the imaging device 112 may be capable of taking a 2D image or a 3D image to yield the image data.
- the imaging device 112 may be or comprise, for example, an ultrasound scanner (which may comprise, for example, a physically separate transducer and receiver, or a single ultrasound transceiver), an O-arm, a C-arm, a G-arm, or any other device utilizing X-ray-based imaging (e.g., a fluoroscope, a CT scanner, or other X-ray machine), a magnetic resonance imaging (MRI) scanner, an optical coherence tomography (OCT) scanner, an endoscope, a microscope, an optical camera, a thermographic camera (e.g., an infrared camera), a radar system (which may comprise, for example, a transmitter, a receiver, a processor, and one or more antennae), or any other imaging device 112 suitable for obtaining images of an anatomical feature of a patient.
- the imaging device 112 may be contained entirely within a single housing, or may comprise a transmitter/emitter and a receiver/detector that are in separate housings or are otherwise physically separated.
- the imaging device 112 may comprise more than one imaging device 112.
- a first imaging device may provide first image data and/or a first image
- a second imaging device may provide second image data and/or a second image.
- the same imaging device may be used to provide both the first image data and the second image data, and/or any other image data described herein.
- the imaging device 112 may be operable to generate a stream of image data.
- the imaging device 112 may be configured to operate with an open shutter, or with a shutter that continuously alternates between open and shut so as to capture successive images.
- image data may be considered to be continuous and/or provided as an image data stream if the image data represents two or more frames per second.
- the database 130 may store the image data and/or sets of image data.
- the database 130 may additionally or alternatively store, for example, the segmentation model(s) 122, the verification model 126, and/or any other useful information.
- the database 130 may be configured to provide any such information to the computing device 102 or to any other device of the system 100 or external to the system 100, whether directly or via the cloud 134.
- the database 130 may be or comprise part of a hospital image storage system, such as a picture archiving and communication system (PACS), a health information system (HIS), and/or another system for collecting, storing, managing, and/or transmitting electronic medical records including image data.
- the cloud 134 may be or represent the Internet or any other wide area network.
- the computing device 102 may be connected to the cloud 134 via the communication interface 108, using a wired connection, a wireless connection, or both.
- the computing device 102 may communicate with the database 130 and/or an external device (e.g., a computing device) via the cloud 134.
- the system 100 or similar systems may be used, for example, to carry out one or more aspects of any of the methods 200, 210, 300, and/or 400 described herein.
- the system 100 or similar systems may also be used for other purposes.
- Turning to Fig. 2A, an example of a model architecture 200 that supports methods and systems (e.g., Artificial Intelligence (AI)-based methods and/or systems) for segmenting image data using one or more segmentation models is shown.
- Image data 202 (example image data 202 also shown in Figs. 5A-5B) of an anatomical element (e.g., one or more vertebrae) may be used by a processor such as the processor 104 as input for the segmentation model 122.
- the segmentation model 122 may output segmented image data 206 and/or model measurement(s) 208.
- the image data 202 may be received from an imaging device such as the imaging device 112, any other imaging device, or any component of a system such as the system 100.
- the image data 202 may be historical image data 202 taken from one or more patient categories such as, for example, in-scope, pediatric, and/or metals.
- the image data 202 may be taken for other patient categories, for any other category, and at any time (e.g., historical, preoperative, intraoperative, postoperative, etc.).
- the anatomical feature(s) depicted in the image data 202 may be, for example, bone such as one or more vertebrae, though it will be appreciated that in other embodiments the anatomical feature(s) may be any anatomical feature (e.g., soft tissue, hard tissue, etc.).
- the segmentation model 122 may identify the anatomical element, and/or a boundary between one or more features of the anatomical element (e.g., hard tissue, soft tissue, cortical bone, cancellous bone, etc.).
- the segmentation model may achieve the identification by, for example, determining a difference in or contrast between colors or grayscales of image pixels. For example, a boundary between hard tissue and soft tissue may be identified as a contrast between lighter pixels and darker pixels.
- Segmenting the one or more anatomical elements from the image data 202 when the image data comprises a three-dimensional representation of the patient anatomy may alternatively or additionally comprise identifying a boundary of one or more anatomical elements and forming a separate three-dimensional representation of the anatomical elements.
- identifying the boundary may comprise identifying adjacent sets of pixels having a large enough contrast to represent a border of an anatomical element or a feature of the anatomical element depicted therein.
- feature recognition may be used to identify a border of an anatomical element or a feature of the anatomical element. For example, a cortical bridge of a vertebra may be identified using feature recognition.
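- as an illustration of boundary identification by pixel contrast, the following sketch marks candidate borders where a Sobel gradient magnitude exceeds a threshold; the use of SciPy and the threshold value are illustrative assumptions, not the method of the present disclosure.

```python
# Illustrative sketch: mark candidate anatomical boundaries where local
# pixel contrast (Sobel gradient magnitude) exceeds a threshold. The
# threshold value is an assumption chosen for illustration.
import numpy as np
from scipy import ndimage

def boundary_mask(image, contrast_threshold=50.0):
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=0)          # contrast along rows
    gy = ndimage.sobel(img, axis=1)          # contrast along columns
    magnitude = np.hypot(gx, gy)             # combined local contrast
    return magnitude > contrast_threshold    # True along likely borders
```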
- the segmentation model 122 may be trained using historical image data. In other embodiments, the segmentation model 122 may be trained using the image data 202. In such embodiments, the segmentation model 122 may be trained prior to inputting the image data 202 into the segmentation model 122 or may be trained in parallel with inputting the image data 202 into the segmentation model 122.
- the segmentation model 122 may output segmented image data 206, which may be used by the segmentation measurement model 124.
- the segmentation measurement model 124 enables the processor 104 to, for example, measure a thickness of an anatomical element depicted in the segmented image data 206.
- the segmentation measurement model 124 receives the segmented image data 206 and at least one numerical measurement 212 as input.
- the segmentation measurement model 124 may also receive one or more trajectories in some instances.
- the anatomical element is bone and the segmentation measurement model 124 measures a cortical bone thickness of a vertebra (or multiple vertebrae) depicted in the segmented image data 206.
- the segmentation measurement model 124 may, in some instances, output one or more model measurements 208 of an anatomical element for verifying an accuracy of or validating the segmentation model 122, as will be described in Figs. 2B and 4-8B, below.
- the model measurements 208 may be one or more measurements of a cortical bone thickness taken along one or more trajectories.
- the model measurements 208 may be any measurements of any anatomical element.
- the model measurements 208 may be any measurements and may not be taken along a predefined trajectory.
- Turning to Fig. 2B, an example of a model architecture 210 that supports methods and systems for validating one or more segmentation models 122 is shown.
- Numerical measurements 212 and model measurements 208 of an anatomical element may be used by a processor such as the processor 104 as input for the verification model 126.
- the verification model 126 may output one or more validation(s) 218 that verify or validate an accuracy of the segmentation model(s) 122.
- the numerical measurements 212 (also depicted in Fig. 5D) may be generated from one or more annotators manually measuring an anatomical feature depicted in the image data 202.
- the numerical measurements 212 may be measured by one or more people.
- the measurement may be a thickness of a cortical bone of a vertebra.
- the measurement may be any measurement of any anatomical element or feature.
- the numerical measurements 212 are obtained from three annotators. In other embodiments, the numerical measurements 212 are obtained from one annotator, two annotators, or more than two annotators.
- the numerical measurements 212 may be taken along at least one trajectory 504 (shown in Figs. 5A-5C).
- the at least one trajectory 504 may be, for example, orthogonal to a surface of the anatomical element.
- the at least one trajectory 504 is three trajectories, which may lie, for example, along the axial view, the sagittal view, and/or the coronal view.
- the at least one trajectory 504 is one trajectory, two trajectories, or more than two trajectories. In some embodiments, the at least one trajectory 504 is predefined, though it will be appreciated that in other embodiments, the at least one trajectory 504 may be randomized.
- the at least one trajectory 504 may be generated using the following method: (1) A vertebra segmentation mask is computed using, for example, an AI algorithm in image data that is a CT scan or an O-arm scan; (2) The vertebra segmentation mask is converted to a floating point data type and a spatial low pass filter is applied to the vertebra segmentation mask; (3) A filter such as a Sobel filter is applied to the result of (2), which generates a map of gradients; (4) A vertebra segmentation mask outer shell is generated by applying morphological erosion and skeletonization operators; (5) A plurality of random positions are sampled from the vertebra segmentation mask outer shell (per vertebra) that was generated in (4); (6) For each random position of the plurality of random positions, a normal vector (e.g., orthogonal to the vertebra surface) is generated by sampling the corresponding gradient map position generated in (3); (7) Vectors which are almost parallel to one of the annotation planes are utilized (e.g.,
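- the following is a rough sketch of steps (2)-(6) above, assuming the vertebra segmentation mask from step (1) is available as a boolean 3D array; the filter choices (Gaussian low pass, np.gradient in place of a Sobel filter) and the parameter values are illustrative stand-ins rather than the disclosure's exact operators.

```python
# Rough sketch of trajectory generation steps (2)-(6), assuming
# `vertebra_mask` is a boolean 3D array from step (1).
import numpy as np
from scipy import ndimage
from skimage.morphology import binary_erosion, skeletonize

def sample_trajectories(vertebra_mask, n_samples=50, seed=0):
    rng = np.random.default_rng(seed)
    vertebra_mask = vertebra_mask.astype(bool)
    smooth = ndimage.gaussian_filter(vertebra_mask.astype(float), sigma=1.0)  # (2) low-pass
    grad = np.stack(np.gradient(smooth))                     # (3) gradient map (Sobel-like)
    shell = vertebra_mask & ~binary_erosion(vertebra_mask)   # (4) outer shell...
    shell = skeletonize(shell)                               # ...skeletonized
    positions = np.argwhere(shell)                           # candidate surface voxels
    picks = positions[rng.choice(len(positions),
                                 size=min(n_samples, len(positions)),
                                 replace=False)]             # (5) random positions
    normals = np.array([grad[(slice(None),) + tuple(p)] for p in picks])  # (6) normals
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9
    return picks, normals
```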
- measurements may not be taken along a predefined trajectory.
- the model measurements 208 are received from, for example, the segmentation measurement model 124 described above.
- the model measurements 208 may be taken at the same predefined or randomized trajectories 504 as the numerical measurements 212. It will be appreciated that in some embodiments, the model measurements 208 and the numerical measurements 212 may not be taken along a predefined trajectory.
- the verification model 126 may verify an accuracy or validate the segmentation model 122 based on the numerical measurements 212 and the model measurements 208. More specifically, a difference between the numerical measurements 212 and the model measurements 208 is determined and the difference may be compared to one or more predetermined thresholds. The segmentation model 122 may be verified if the difference is below the one or more predetermined thresholds. It will be appreciated that the verification model 126 may use any other form of verification or validation.
- Fig. 3 depicts a method 300 that may be used, for example, for generating a model such as the segmentation model 122 and/or the image processing 120.
- the method 300 comprises generating a model (step 304).
- the model may be the segmentation model 122 and/or the image processing 120.
- a processor such as the processor 104 may generate the segmentation model 122 and/or the image processing 120.
- the segmentation model 122 and the image processing 120 may be generated to facilitate and enable, for example, identification of one or more anatomical elements and/or anatomical features (e.g., cortical bone) depicted in image data and verify an accuracy of or validate the segmentation model 122.
- the method 300 also comprises training the model (step 308).
- the segmentation model 122 and/or the image processing 120 are trained using historical image data from a number of patients.
- the historical data may be obtained from any patient.
- the historical data may be obtained from patients with similar statistics.
- the method 300 also comprises storing the model (step 312).
- the segmentation model 122 and/or the image processing 120 may be stored in memory such as the memory 106 and/or a database such as the database 130 for later use.
- the segmentation model 122 and/or the image processing 120 is stored in the memory when the segmentation model 122 and/or the image processing 120 are sufficiently trained.
- the segmentation model 122 and/or the image processing 120 may be sufficiently trained when the segmentation model 122 and/or the image processing 120 produces an output that meets a predetermined threshold, which may be determined by, for example, a user, or may be automatically determined by a processor such as the processor 104.
- the present disclosure encompasses embodiments of the method 300 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
- FIG. 4 depicts a method 400 that may be used, for example, for verifying one or more segmentation models;
- Fig. 5A depicts an image of an anatomical element with the one or more trajectories 504 marked;
- Fig. 5B depicts an image of an anatomical element with the marked trajectories 504 and a location of numerical measurements 212;
- Fig. 5C is an example image of an anatomical element with marked trajectories 504 according to at least one embodiment of the present disclosure;
- Fig. 5D is an example image of an anatomical element with marked trajectories 504 and numerical measurements 212 according to at least one embodiment of the present disclosure;
- Fig. 6 is a chart illustrating a differences mean distribution of numerical measurements taken
- Fig. 7 is an example evaluation of numerical measurements and model measurements
- Fig. 8A is an example image of a numerical measurement and a model measurement taken along the same trajectory
- Fig. 8B is another example image of a numerical measurement and a model measurement taken along the same trajectory.
- the method 400 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor.
- the at least one processor may be the same as or similar to the processor(s) 104 of the computing device 102 described above.
- a processor other than any processor described herein may also be used to execute the method 400.
- the at least one processor may perform the method 400 by executing elements stored in a memory such as the memory 106.
- the elements stored in memory and executed by the processor may cause the processor to execute one or more steps of the method 400.
- One or more portions of a method 400 may be performed by the processor executing any of the contents of memory, such as an image processing 120, a segmentation model 122, a segmentation measurement model 124, and/or a verification model 126.
- the method 400 comprises receiving image data depicting an anatomical element (step 404).
- the image data may be the same as or similar to the image data 202 (shown in Figs. 2A and 5A-5D and 8A-8B).
- the image data may be received via a user interface such as the user interface 110 and/or a communication interface such as the communication interface 108 of a computing device such as the computing device 102, and may be stored in a memory such as the memory 106 of the computing device.
- the image data may also be received from an external database or image repository (e.g., a hospital image storage system, such as a picture archiving and communication system (PACS), a health information system (HIS), and/or another system for collecting, storing, managing, and/or transmitting electronic medical records including image data), and/or via the Internet or another network.
- the image data may be received or obtained from an imaging device such as the imaging device 112, which may be any imaging device such as an MRI scanner, a CT scanner, an O-arm scanner, any other X-ray based imaging device, or an ultrasound imaging device.
- the image data may also be generated by and/or uploaded to any other component of a system such as the system 100.
- the image data may be indirectly received via any other component of the system or a node of a network to which the system is connected.
- the image data may be a 2D image or a 3D image or a set of 2D and/or 3D images.
- the image data may depict a patient’s anatomy or portion thereof.
- the image data may depict multiple anatomical elements associated with the patient anatomy, including incidental anatomical elements (e.g., ribs or other anatomical objects on which a surgery or surgical procedure will not be performed) in addition to target anatomical elements (e.g., vertebrae or other anatomical objects on which a surgery or surgical procedure is to be performed).
- the image data may comprise various features corresponding to the patient’s anatomy and/or anatomical elements (and/or portions thereof), including gradients corresponding to boundaries and/or contours of the various depicted anatomical elements, varying levels of intensity corresponding to varying surface textures of the various depicted anatomical elements, combinations thereof, and/or the like.
- the image data may depict any portion or part of patient anatomy and may include, but is in no way limited to, one or more vertebrae, ribs, lungs, soft tissues (e.g., skin, tendons, muscle fiber, etc.), and/or the like.
- the image data may be processed using image processing such as the image processing 120 to identify anatomical elements and/or prepare the image data for segmentation by a segmentation model such as the segmentation model 122, as will be described below.
- the method 400 also comprises receiving at least one numerical measurement of the anatomical element (step 408).
- the at least one numerical measurement may be the same as or similar to the numerical measurement(s) 212 (shown in Figs. 2B and 5D) and may be received as input via the user interface and may be stored in, for example, the memory, a database such as the database 130 and/or a cloud such as the cloud 134.
- the numerical measurements may be obtained from one or more annotators manually measuring an anatomical feature of an anatomical element such as anatomical element 502 (shown in Figs. 5A-5D) depicted in the image data (which may be segmented or unsegmented).
- the measurement may be any measurement of any anatomical element or feature.
- the numerical measurements are obtained from three annotators.
- the numerical measurements are obtained from one annotator, two annotators, or more than two annotators.
- the numerical measurements may be taken along at least one predetermined trajectory such as trajectories 504 (shown in Figs. 5A-5C).
- the at least one trajectory is three trajectories and may be in, for example, the axial view, the sagittal view, and/or the coronal view.
- the at least one trajectory is one trajectory, two trajectories, or more than two trajectories.
- the at least one trajectory is predefined, though it will be appreciated that in other embodiments, the at least one trajectory may be randomized.
- the numerical measurements may correspond to measurements of, for example, a thickness of cortical bone of a vertebra. It will be appreciated that in other embodiments the numerical measurements may correspond to any measurement and/or any anatomical element or feature.
- the at least one numerical measurement may be used as ground truth data.
- the at least one numerical measurement may be used in training or validating the segmentation model(s). This is illustrated in a measurement differences mean chart 600 shown in Fig. 6, in which a 90th percentile 602 of the measurement difference between three annotators was 0.76mm, indicating that the numerical measurement(s) from different annotators are substantially the same. Further, the 90th percentile 602 of the measurement difference of the annotators was lower than a 90th percentile of the model thickness error (not shown), which means that the numerical measurement(s) were more accurate than the model measurements.
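- the variance study described above may be sketched as follows, assuming the measurements are arranged as an (annotators x trajectories) array; treating the measurement differences mean as the mean absolute pairwise difference between annotators is an interpretive assumption, not a formula from the disclosure.

```python
# Sketch of the annotator variance study, assuming measurements_mm has shape
# (n_annotators, n_trajectories).
import numpy as np
from itertools import combinations

def annotator_difference_stats(measurements_mm):
    m = np.asarray(measurements_mm, dtype=float)
    pair_diffs = [np.abs(m[i] - m[j]) for i, j in combinations(range(len(m)), 2)]
    diffs_mean = np.mean(pair_diffs, axis=0)        # per-trajectory differences mean
    return float(diffs_mean.mean()), float(np.percentile(diffs_mean, 90))

# Example with three annotators and four trajectories:
mean_mm, p90_mm = annotator_difference_stats(
    [[1.2, 0.9, 1.4, 1.1], [1.3, 1.0, 1.3, 1.0], [1.1, 0.95, 1.5, 1.2]])
```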
- one set of numerical measurements from one annotator may be used to validate the segmentation model.
- step 408 may occur after the step 410, described below.
- the method 400 also comprises inputting the image data into a segmentation model (step 410).
- the segmentation model may be the same as or similar to the segmentation model 122.
- the image data may be received from, for example, the step 404.
- the segmentation model may be used by the processor to identify the anatomical element, and/or a boundary between one or more features of the anatomical element (e.g., hard tissue, soft tissue, cortical bone, cancellous bone, etc.).
- the segmentation model may achieve the identification by, for example, determining a difference in or contrast between colors or grayscales of image pixels. For example, a boundary between hard tissue and soft tissue may be identified as a contrast between lighter pixels and darker pixels.
- Segmenting the one or more anatomical elements from the image data when the image data comprises a three-dimensional representation of the patient anatomy may alternatively or additionally comprise identifying a boundary of one or more anatomical elements and forming a separate three-dimensional representation of the anatomical elements.
- identifying the boundary may comprise identifying adjacent sets of pixels having a large enough contrast to represent a border of an anatomical element or a feature of the anatomical element depicted therein.
- feature recognition may be used to identify a border of an anatomical element or a feature of the anatomical element. For example, a cortical bridge of a vertebra may be identified using feature recognition.
- the segmentation model outputs one or more segmented images (whether 2D image(s) or 3D image(s)).
- the method 400 also comprises inputting the segmented image into a segmentation measurement model (step 411).
- the numerical measurements taken in the step 408 may also be inputted into the segmentation measurement model.
- the segmentation measurement model may be the same as or similar to the segmentation measurement model 124. In embodiments where the numerical measurements were taken along predetermined trajectories, the predetermined trajectories may also be inputted into the segmentation measurement model.
- the trajectories may be the same as the trajectories on which at least one numerical measurements are taken along in the step 408. It will be appreciated that in some embodiments, the model measurements and the numerical measurements may not be taken along a predefined trajectory.
- the segmentation measurement model may automatically obtain model measurements corresponding to the numerical measurement(s). In other words, the segmentation measurement model may obtain the same type of measurements at the same trajectories (in embodiments where the measurements are taken along one or more trajectories) as the numerical measurements.
- the method 400 also comprises receiving at least one model measurement of the anatomical element (step 412).
- the at least one model measurement may be received in, for example, the step 411 from the segmentation measurement model.
- the method 400 also comprises averaging the at least one numerical measurement (step 416).
- the at least one numerical measurement may be averaged by, for example, the processor or any other processor.
- the at least one numerical measurement may be combined by averaging start coordinates and averaging end coordinates of the at least one numerical measurement.
- the coordinates of the at least one numerical measurement may be checked such that the corresponding start points and end points are close to each other and may be flipped if needed.
- the averaged numerical measurement(s) may be used in comparison with the model measurements and/or the verification of the segmentation model as described below.
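- the combination step may be sketched as follows, assuming each annotator's measurement is a (start, end) coordinate pair; the distance-based flip test is an assumption consistent with the "flipped if needed" check described above.

```python
# Sketch of averaging annotator measurements by their start/end coordinates,
# assuming arrays of shape (n_annotators, 3). The flip test (compare each
# annotator's start to the first annotator's endpoints) is an assumption.
import numpy as np

def average_measurement(starts, ends):
    starts = np.asarray(starts, dtype=float).copy()
    ends = np.asarray(ends, dtype=float).copy()
    for i in range(1, len(starts)):
        # If annotator i's start is closer to the reference end point,
        # the measurement was annotated in reverse: flip it.
        if (np.linalg.norm(starts[i] - starts[0])
                > np.linalg.norm(starts[i] - ends[0])):
            starts[i], ends[i] = ends[i].copy(), starts[i].copy()
    return starts.mean(axis=0), ends.mean(axis=0)    # averaged start and end
```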
- the method 400 may not include the step 416.
- one set of numerical measurements from one annotator may be used in the comparison and/or verification.
- each set of numerical measurements from different annotators may be used and each compared to the model measurements and/or each used in the verification.
- the method 400 also comprises comparing the at least one numerical measurement with the at least one model measurement (step 420).
- the at least one numerical measurement may be compared to the at least one model measurement.
- an average of the sets of numerical measurements may be compared with the at least one model measurement.
- one set of numerical measurements taken from one annotator may be compared with the at least one model measurement.
- each set of multiple sets of numerical measurements may each be compared to the at least one model measurement.
- Figs. 8A and 8B show at least one numerical measurement 800 of a cortical bone 804 thickness and at least one model measurement 802 overlaid on the at least one numerical measurement 800.
- the method 400 also comprises determining an error based on a difference between the at least one numerical measurement and the at least one model measurement (step 424).
- the difference between the at least one numerical measurement (whether as an average or as a single set of numerical measurements) and the at least one model measurement may be determined by the processor. In some embodiments, the difference is determined by subtracting the at least one numerical measurement from the at least one model measurement, or vice versa.
- the error may be determined by creating a vector v1 700 from a first measurement point m1 702 and finding the point where a cortical layer 704 ends at point a1 706, as shown in Fig. 7. Then, the same is performed in the opposite direction, where a vector v2 708 is created from a second measurement point m2 710 and the point where the cortical layer 704 ends is found at point a2 712. The sampled values are then median filtered to reduce isolated voxels and noise such that the measurement accuracy is approximately 0.1 voxels. The model measurement(s) are then measured as the Euclidean distance between the a1 706 and a2 712 points, which indicates a cortical layer thickness along an annotated trajectory. The error between the model measurement(s) (a1 706 and a2 712) and the numerical measurements (m1 702 and m2 710) is calculated using, for example, the following equation:
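- because the equation itself is not reproduced in the available text, the following sketch shows one plausible implementation of the described procedure: sample the cortical mask along each vector, median filter the samples, locate the layer end points, and take the Euclidean distance; the step size, ray length, and the signed-difference error definition are assumptions.

```python
# Sketch of the described thickness measurement, assuming `cortical_mask` is
# a boolean 3D volume and points/vectors are in voxel coordinates.
import numpy as np
from scipy import ndimage

def layer_end(cortical_mask, start, direction, step=0.1, max_len=20.0):
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    n = int(max_len / step)
    pts = np.asarray(start, dtype=float) + step * np.arange(n)[:, None] * d
    vals = ndimage.map_coordinates(cortical_mask.astype(float), pts.T, order=0)
    vals = ndimage.median_filter(vals, size=5)      # suppress isolated voxels/noise
    outside = np.flatnonzero(vals < 0.5)
    return pts[outside[0]] if outside.size else pts[-1]   # a1 or a2

def thickness_error(cortical_mask, m1, v1, m2, v2):
    a1 = layer_end(cortical_mask, m1, v1)           # march along v1 from m1
    a2 = layer_end(cortical_mask, m2, v2)           # march along v2 from m2
    model_mm = np.linalg.norm(a1 - a2)              # Euclidean cortical thickness
    numerical_mm = np.linalg.norm(np.asarray(m1, float) - np.asarray(m2, float))
    return model_mm - numerical_mm                  # signed thickness error (assumed form)
```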
- the method 400 also comprises verifying the one or more segmentation models (step 428).
- the one or more segmentation models may be verified by the processor using a verification model such as the verification model 126 to verify an accuracy or validate the segmentation model.
- the verification model may receive the numerical measurements and the model measurements and verify an accuracy of or validate the segmentation model based on the numerical measurements and the model measurements. More specifically, the difference or error as measured in the step 424 between the numerical measurements and the model measurements is compared to one or more predetermined thresholds.
- the predetermined threshold may correlate to a maximum allowable difference for each expected difference.
- the predetermined threshold may be determined automatically using artificial intelligence and training data (e.g., historical cases) in some embodiments.
- the predetermined threshold may be or comprise, or be based on, annotator input or any other user input received via the user interface.
- the predetermined threshold may be determined automatically using artificial intelligence, and may thereafter be reviewed and approved (or modified) by an annotator or other user.
- a user may be alerted (e.g., a notification may be generated) for each expected predetermined threshold that is met or exceeded.
- the segmentation model may be verified (and a notification may be generated accordingly) if the difference is below the one or more predetermined thresholds. It will be appreciated that the verification model may use any other form of verification or validation.
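- the verification decision may be sketched as follows; notifying via print() and using a single scalar threshold are simplifying assumptions standing in for the one or more predetermined thresholds and the generated notifications described above.

```python
# Minimal sketch of the verification decision: flag every trajectory whose
# error meets or exceeds the predetermined threshold, then verify only if
# none did.
import numpy as np

def verify_and_notify(errors_mm, threshold_mm):
    errors_mm = np.asarray(errors_mm, dtype=float)
    exceeded = np.flatnonzero(np.abs(errors_mm) >= threshold_mm)
    for i in exceeded:                    # alert for each threshold violation
        print(f"alert: trajectory {i} error {errors_mm[i]:.2f} mm >= {threshold_mm} mm")
    verified = exceeded.size == 0
    print("segmentation model verified" if verified else "verification failed")
    return verified

verify_and_notify([0.3, -0.1, 0.5], threshold_mm=1.4)   # -> verified
```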
- the present disclosure encompasses embodiments of the method 400 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
- Verification of and/or verifying the segmentation model as described above beneficially enables testing and verification of multiple segmentation models. As more segmentation models are developed and/or further refined, verification is beneficial in determining whether a segmentation model is accurate and/or useful for the purpose of, for example, measuring cortical bone thickness.
- the cortical bone thickness measurement is beneficial for bone mineral density evaluation, force feedback prediction for robotic bone removal, and/or data for surgery auto-planning.
- the present disclosure encompasses methods with fewer than all of the steps identified in Figs. 2A-2B, 3, and 4 (and the corresponding description of the methods 200, 210, 300, and 400), as well as methods that include additional steps beyond those identified in Figs. 2A-2B, 3, and 4.
- the present disclosure also encompasses methods that comprise one or more steps from one method described herein, and one or more steps from another method described herein. Any correlation described herein may be or comprise a registration or any other correlation.
- Example 1 A method for verifying one or more segmentation models comprising: receiving image data (202) depicting an anatomical element; inputting the image data into the one or more segmentation models (122), the one or more segmentation models configured to segment the image data and output segmented image data (206); receiving at least one numerical measurement (212) of the anatomical element; receiving at least one model measurement (208) of the anatomical element, wherein the at least one model measurement is measured by a segmentation measurement model (124) using the segmented image data; comparing the at least one numerical measurement with the at least one model measurement; determining an error based on the difference between the at least one numerical measurement and the at least one model measurement; and verifying the one or more segmentation models as a result of determining that the error is less than a predetermined error threshold.
- Example 2 The method of Example 1, wherein the at least one numerical measurement comprises at least a cortical bone thickness measurement.
- Example 3 The method of Examples 1 or 2, wherein the at least one numerical measurement and the at least one model measurement are each taken along one or more trajectories.
- Example 4 The method of any of Examples 1-3, wherein the one or more trajectories are at least one of predefined trajectories or random trajectories.
- Example 5 The method of any of Examples 1-4, wherein the at least one numerical measurement is measured by one or more users.
- Example 6 The method of Example 5, wherein the one or more users comprises at least three users.
- Example 8 The method of any of Examples 1-7, wherein the at least one numerical measurement and the at least one model measurement are measured for one or more testing groups of in-scope, pediatric, and/or metals.
- Example 9 The method of any of Examples 1-8, wherein the image data is received from at least one of a CT imaging device or an O-arm imaging device.
- Example 10 The method of any of Examples 1-9, further comprising: averaging the at least one numerical measurement prior to the comparing step.
- Example 12 A system for verifying one or more segmentation models comprising: an imaging device (112); a processor (104); and a memory (106) storing data for processing by the processor, the data, when processed, causes the processor to: receive image data (202) depicting an anatomical element from the imaging device; input the image data into the one or more segmentation models (122), the one or more segmentation models configured to segment the image data and output segmented image data (206); receive at least one numerical measurement (212) of the anatomical element; receive at least one model measurement (208) of the anatomical element, wherein the at least one model measurement is measured by a segmentation measurement model (124); compare the at least one numerical measurement with the at least one model measurement; determine an error based on the difference between the at least one numerical measurement and the at least one model measurement; and verify the one or more segmentation models as a result of determining that the error is less than a predetermined error threshold.
- Example 13 The system of Example 12, wherein the imaging device comprises at least one of a CT imaging device or an O-arm imaging device.
- Example 14 The system of Examples 12 or 13, wherein the at least one model measurement is received from the segmentation measurement model configured to receive segmented image data and the at least one numerical measurement and output the at least one model measurement.
- Example 15 The system of any of Examples 12-14, wherein the at least one numerical measurement comprises at least a cortical bone thickness measurement.
- Example 16 The system of any of Examples 12-15, wherein the at least one numerical measurement and the at least one model measurement are each taken along one or more trajectories.
- Example 17 The system of any of Examples 12-16, wherein the at least one numerical measurement is measured by one or more users.
- Example 20 A method for verifying one or more segmentation models comprising: receiving image data (202) depicting an anatomical element from a database (130); inputting the image data into the one or more segmentation models (122), the one or more segmentation models configured to segment the image data and output segmented image data (206); receiving at least one numerical measurement (212) of the anatomical element; inputting the segmented image data, the at least one numerical measurement, and one or more trajectories (504) into a segmentation measurement model (124), the segmentation measurement model configured to measure the at least one model measurement from the segmented image data; receiving at least one model measurement (208) of the anatomical element from the segmentation measurement model; averaging the at least one numerical measurement; comparing the averaged numerical measurement with the at least one model measurement; determining an error based on the difference between the averaged numerical measurement and the at least one model measurement; and verifying the one or more segmentation models as a result of determining that the error is less than a predetermined error threshold.
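- a short numerical sketch of the Example 20 comparison, with all values hypothetical: the user-supplied numerical measurements (e.g., from three users, cf. Example 6) are averaged before being compared with the model measurement:

```python
import statistics

user_measurements_mm = [1.9, 2.1, 2.0]   # hypothetical per-user readings
model_measurement_mm = 2.05              # hypothetical model output
error_threshold_mm = 0.2                 # hypothetical predetermined error threshold

averaged = statistics.mean(user_measurements_mm)   # 2.0
error = abs(averaged - model_measurement_mm)       # 0.05
verified = error < error_threshold_mm              # True: model verified
print(f"error = {error:.2f} mm -> verified: {verified}")
```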
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
Systems and methods for verifying one or more segmentation models are disclosed. Image data depicting an anatomical element may be received, and the image data may be input into the one or more segmentation models. The one or more segmentation models may be configured to segment the image data and output segmented image data. At least one numerical measurement and at least one model measurement of the anatomical element may be received. The at least one numerical measurement may be compared with the at least one model measurement, and an error based on the difference between the at least one numerical measurement and the at least one model measurement may be determined. The one or more segmentation models may be verified as a result of determining that the error is less than a predetermined error threshold.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363542175P | 2023-10-03 | 2023-10-03 | |
| US63/542,175 | 2023-10-03 | 2023-10-03 | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025074367A1 (fr) | 2025-04-10 |
Family
ID=93377959
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IL2024/050976 (WO2025074367A1, pending) | Systems and methods for verifying one or more segmentation models | 2023-10-03 | 2024-10-02 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025074367A1 (fr) |
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120224758A1 (en) * | 2009-10-07 | 2012-09-06 | Cambridge Enterprise Limited | Image data processing systems |
Non-Patent Citations (1)
| Title |
|---|
| LOUBELE ET AL: "Assessment of bone segmentation quality of cone-beam CT versus multislice spiral CT: a pilot study", ORAL SURGERY, ORAL MEDICINE, ORAL PATHOLOGY, ORAL RADIOLOGY AND ENDODONTICS, MOSBY-YEAR BOOK, ST. LOUIS, MO, US, vol. 102, no. 2, 1 August 2006 (2006-08-01), pages 225 - 234, XP005568638, ISSN: 1079-2104 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20220245400A1 | | Autonomous segmentation of three-dimensional nervous system structures from medical images |
| EP3525171B1 | | Method and system for the 3D reconstruction of an X-ray tomography volume and segmentation mask from a few X-ray radiographs |
| CN102525534B | | Medical image processing apparatus and medical image processing method |
| US9317661B2 | | Automatic implant detection from image artifacts |
| US11426119B2 | | Assessment of spinal column integrity |
| US10335105B2 | | Method and system for synthesizing virtual high dose or high kV computed tomography images from low dose or low kV computed tomography images |
| US20060216681A1 | | Method and system for characterization of knee joint morphology |
| US10628963B2 | | Automatic detection of an artifact in patient image |
| US20250127572A1 | | Methods and systems for planning a surgical procedure |
| KR20160061248A | | Medical image processing apparatus and medical image processing method therefor |
| CN107752979B | | Automatic generation method of artificial projection, medium, and projection image determination device |
| US11837352B2 | | Body representations |
| EP3754598B1 | | Method and system for selecting a region of interest in an image |
| WO2014114327A1 | | Method and apparatus for calculating the contact position of an ultrasound probe on the head |
| KR20160057024A | | Markerless three-dimensional object tracking apparatus and method |
| JP2023036805A | | Method for imaging a part of a human body, computer, computer-readable storage medium, computer program, and medical system |
| KR102136107B1 | | Apparatus and method for registration of bone-attenuated X-ray images |
| WO2025074367A1 | | Systems and methods for verifying one or more segmentation models |
| KR102247545B1 | | Method and apparatus for providing surgical location information |
| EP4592949A1 | | Automated determination of bone mineral density from a medical image |
| WO2025229497A1 | | Systems and methods for generating one or more reconstructions |
| WO2025108547A1 | | Method for determining a reference image |
| WO2025004041A1 | | Systems and methods for calibrating an imaging device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24801355; Country of ref document: EP; Kind code of ref document: A1 |