
WO2020198560A1 - Systems and methods for non-destructive de-identification of facial data in medical images - Google Patents


Info

Publication number
WO2020198560A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
image data
data
subject
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2020/025147
Other languages
French (fr)
Inventor
Christopher G. SCHWARZ
Jeffrey L. GUNTER
Clifford R. JACK Jr.
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mayo Foundation for Medical Education and Research
Mayo Clinic in Florida
Original Assignee
Mayo Foundation for Medical Education and Research
Mayo Clinic in Florida
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mayo Foundation for Medical Education and Research and Mayo Clinic in Florida
Publication of WO2020198560A1
Legal status: Ceased


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20128 Atlas-based segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Definitions

  • HIPAA Health Insurance Portability and Accountability Act
  • Some of the earliest approaches to remove faces in medical images simply used existing algorithms for skull stripping (i.e., removing everything except for the brain). These methods unnecessarily prevent measurements using non-brain regions, such as intracranial volume, cerebrospinal fluid volume, arterial biomarkers, and so on. Face removal algorithms were thus proposed as an alternative. These use registration with a standard atlas to locate the face region and remove it by setting the image intensities to some standard value (e.g., zero). However, these face removal methods create processing failures, bias, and noise in automated image processing pipelines.
  • Face blurring algorithms, including the approach used for some data releases from the Human Connectome Project, were therefore proposed as an alternative to face removal. These methods similarly locate the face via registration with an atlas, but they blur the face’s contour to prevent recognition rather than remove it.
  • the ability of this class of methods to effectively de-identify an individual has recently been called into question by a manuscript showing that the popular blurring method used by the Human Connectome Project can be defeated by un-blurring the blurred face contour with deep-learning algorithms.
  • even though face blurring was designed specifically to modify the data less strongly than skull stripping or face wiping methods, the process still affects morphometric measurements of brain structures using popular analysis software.
  • the present disclosure addresses the aforementioned drawbacks by providing a method for generating a de-identified medical image.
  • the method includes accessing medical image data using a computer system.
  • the medical image data includes at least some voxels corresponding to a face of a subject.
  • Template face data are also accessed using the computer system.
  • the template face data include voxels corresponding to a template face that is different from the subject’s face.
  • the medical image data and the template face data are co-registered, and de-identified medical image data are generated by replacing the voxels corresponding to the face of the subject in the medical image data with the voxels corresponding to the template face in the template face data.
  • the de-identified medical image data includes at least one de-identified medical image of the subject.
  • the template face data corresponding to face voxels may be transformed using a linear transformation
  • the template face data corresponding to non-face voxels may be transformed using a nonlinear transformation
  • a de-identification mask may be smoothed in the template face data such that images transformed using registration parameters have a smooth spatial transition between linearly transformed regions and non-linearly transformed regions.
  • the medical image data further include at least some voxels corresponding to an ear of the subject
  • the template face data further include voxels corresponding to an ear associated with the template face
  • generating de-identified medical image data further includes replacing the voxels corresponding to the ear of the subject in the medical image data with the voxels corresponding to the ear of the template face in the template face data.
  • the template face data are representative of a population group to which the subject belongs.
  • the population group may be an age-matched population group in which the plurality of subjects have a similar age as the subject from which the medical image data were acquired.
  • the population group may be a race-matched population group in which the plurality of subjects have a same race as the subject from which the medical image data were acquired.
  • the population group may be a sex-matched population group in which the plurality of subjects have a same sex as the subject from which the medical image data were acquired.
  • the method includes segmenting the medical image data to locate voxels corresponding to a reference tissue and normalizing image intensity values in the template face data using image intensity values of the voxels corresponding to the reference tissue.
  • the reference tissue may be white matter.
  • the medical image data are magnetic resonance image data including one or more magnetic resonance images acquired from the subject.
  • the magnetic resonance images may include T1-weighted images, T2-weighted images, or magnetic resonance images acquired with other contrast weightings or mechanisms known in the art.
  • the medical image data may be computed tomography (CT) image data comprising one or more CT images acquired from the subject.
  • CT computed tomography
  • PET positron emission tomography
  • the method includes storing the de-identified medical image data for later use, whereby the de-identified medical image data depict de-identified facial information so as to impede facial recognition of the subject.
  • the method includes displaying a de-identified medical image of the subject to a user by accessing the de-identified medical image data with the computer system, selecting a de-identified medical image from the de-identified medical image data, and generating a display of the de-identified medical image.
  • the template face data may be average facial data comprising voxels corresponding to an average of a plurality of faces from a corresponding plurality of subjects.
  • a computer readable medium includes instructions stored on the computer readable medium for accessing medical image data using a computer system, where the medical image data include at least some voxels corresponding to a face of a subject.
  • the instructions also provide for accessing template face data using the computer system, where the template face data include voxels corresponding to a template face that is different from the subject’s face.
  • the instructions also provide for co-registering the medical image data and the template face data and generating de-identified medical image data by replacing the voxels corresponding to the face of the subject in the medical image data with the voxels corresponding to the template face in the template face data, where the de-identified medical image data include at least one de-identified medical image of the subject.
  • FIG. 1 is a flowchart setting forth the steps of an example method for generating de-identified medical image data, in which voxels corresponding to a subject’s face are replaced with voxels from template face data that represent a template face that is different from the subject’s face.
  • FIG. 2 is a flowchart setting forth the steps of an example method for generating template face data that represent an average face of a plurality of subjects in a population group.
  • FIG. 3 is a block diagram of an example facial de-identification system for de-identifying facial information in medical images, such as magnetic resonance images.
  • FIG. 4 is a block diagram of example hardware components that can implement the facial de-identification system of FIG. 3.
  • FIG. 5 is a block diagram of an example magnetic resonance imaging (“MRI”) system that can implement the methods described in the present disclosure.
  • MRI magnetic resonance imaging
  • FIG. 6A is an example of an x-ray computed tomography (“CT”) system that can implement the methods described in the present disclosure.
  • FIG. 6B is a block diagram of the example CT system of FIG. 6A.
  • the systems and methods described in the present disclosure replace face voxels with image data from a template face that is different from the subject’s face.
  • the template face may be a population-average face that is generated by averaging the faces of a plurality of subjects from a population group. This approach fully removes participant facial features, but minimizes impacts on downstream biomarker measurements by generating an output image that resembles a complete craniofacial image with statistical image texture properties similar to the original.
  • the systems and methods described in the present disclosure address a significant unmet need by providing medical image de-identification that protects research study participants from identification via facial recognition. Unlike existing techniques, the systems and methods described in the present disclosure provide for de-identification that minimizes harmful effects on the ability to measure quantitative medical information from the de-identified images, thereby allowing for increased subject or patient privacy protections while maximizing the medical, diagnostic, and scientific value of the de-identified images.
  • the systems and methods described in the present disclosure identify face regions in images via registration with a standard atlas that represents template face data, which may in some instances represent average facial data in a population group.
  • the face voxels in the input medical images are then replaced with those of the template face (i.e., a digital "face transplant” is performed) from the template face data, rather than removing or blurring the face voxels.
  • the method includes accessing medical image data with a computer system, as indicated at step 102. Accessing the medical image data can include retrieving previously acquired medical image data from a memory or other data storage device or medium. In some other instances, accessing the medical image data can include acquiring medical images with a medical imaging system and communicating those images to the computer system.
  • the medical image data may include one or more medical images, which may be 2D images or 3D images, and which contain at least some voxels corresponding to the face of a subject. Additionally or alternatively, the medical image data may include voxels corresponding to the subject’s ears, or other anatomy associated with the head, neck, or both.
  • the medical image data may include magnetic resonance images (e.g., T1-weighted images, T2-weighted images, or magnetic resonance images acquired with other contrast weightings or mechanisms known in the art). Additionally or alternatively, the medical image data can include images obtained with other medical imaging modalities, such as x-ray computed tomography ("CT”), positron emission tomography (“PET”), and so on.
  • the method also includes accessing template face data with the computer system, as indicated at step 104.
  • Accessing the template face data can include, for instance, retrieving previously generated template face data from a memory or other data storage device or medium.
  • the medical image data contained in the template face data are preferably, but not necessarily, acquired using the same imaging modality as the medical image data accessed in step 102.
  • the template face data include medical image data acquired from a plurality of different subjects, which have been averaged together.
  • the template face data include voxels corresponding to the faces of the plurality of subjects having been co-registered and averaged together.
  • the template face data represent a population average face.
  • the template face data can include voxels corresponding to a face from a different subject.
  • the facial features in the input medical image data will be replaced with the facial features of the individual whose face is depicted in the template face data.
  • the template face data may include data representative of a population that matches one or more demographic or other characteristics of the subject depicted in the medical image data to be de-identified.
  • the template face data may be representative of a plurality of age-matched subjects, race-matched subjects, sex-matched subjects, or combinations thereof.
  • the medical image data are segmented to identify regions containing a reference tissue to be used for image intensity normalization.
  • the medical image data are segmented to locate voxels containing white matter.
  • the medical image data can be segmented using a segmentation algorithm such as SPM, FreeSurfer, FSL, or other suitable image segmentation algorithms known in the art.
  • the segmented medical image data can be stored for later use in an image intensity normalization step. Additionally or alternatively, methods for intensity normalization other than reference tissue normalization may also be applied, such as global intensity normalization.
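As a concrete illustration of this segmentation step, the sketch below uses the ANTsPy package (an assumption; the disclosure names SPM, FreeSurfer, and FSL as equally suitable) to produce a rough white-matter mask from a T1-weighted image. The three-class k-means initialization and the choice of the brightest class as white matter are simplifying assumptions for T1-weighted contrast, and the filename is hypothetical.

```python
import ants

# Load the image to be de-identified (hypothetical filename).
img = ants.image_read("subject_t1.nii.gz")

# Rough head/brain mask to restrict the segmentation.
mask = ants.get_mask(img)

# Three-class k-means tissue segmentation (roughly CSF / gray / white on T1).
seg = ants.atropos(a=img, x=mask, i="kmeans[3]", m="[0.2,1x1x1]", c="[3,0]")

# Assumption: on T1-weighted images the brightest class (label 3) is white matter.
wm_mask = seg["segmentation"].numpy() == 3
```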
  • the medical image data and the template face data are then coregistered, as indicated at step 106.
  • a nonlinear transformation between the medical image data and the template face data can be performed in order to transform the template face data to match the medical image data and to identify the face voxels, ear voxels, or both.
  • an ANTs (Advanced Normalization Tools) symmetric nonlinear registration can be used.
  • Other suitable methods may include DARTEL, FNIRT, DRAMMS, and so on.
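A minimal sketch of this co-registration step using ANTsPy with the symmetric normalization ("SyN") transform mentioned above (the package choice and filenames are assumptions, not the disclosure's required implementation):

```python
import ants

fixed = ants.image_read("subject_t1.nii.gz")      # image to be de-identified
moving = ants.image_read("template_face.nii.gz")  # template face data

# Symmetric nonlinear (SyN) registration of the template onto the subject.
reg = ants.registration(fixed=fixed, moving=moving, type_of_transform="SyN")

warped_template = reg["warpedmovout"]  # template resampled into subject space
fwd_transforms = reg["fwdtransforms"]  # affine + nonlinear warp files
```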
  • the computed registration parameters in the face and ear regions can be modified by setting the nonlinear deformation to zero at the masked locations. Modifying the parameters in this way allows for only an affine (e.g., linear) registration to be applied in the masked regions (i.e., face voxels), while using a nonlinear registration throughout the rest of the image (i.e., non-face voxels).
  • This approach leaves the geometric contour of the face voxels in the template face data unaltered, while spatially transforming the rest of the voxels in the template face data to match the input image. This process provides spatial continuity between the edges of the original image data and the transferred template face data.
  • the de-identification mask can be smoothed such that images transformed using the registration parameters have a smooth spatial transition between the linearly-transformed and nonlinearly-transformed regions.
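One way to realize this masked modification of the registration parameters is sketched below, under the assumption that the full nonlinear displacement field and an affine-only displacement field have been exported as NumPy arrays of shape (X, Y, Z, 3). The nonlinear deformation is suppressed inside the smoothed de-identification mask, leaving only the affine component there, with a gradual transition at the mask boundary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_displacements(nonlinear_disp, affine_disp, deid_mask, sigma_vox=3.0):
    """Use the affine-only displacement inside the (smoothed) de-identification
    mask and the full nonlinear displacement elsewhere.

    nonlinear_disp, affine_disp: (X, Y, Z, 3) displacement fields in voxels.
    deid_mask: (X, Y, Z) binary mask of face/ear voxels.
    """
    # Smooth the mask so the transition between regions is gradual.
    w = gaussian_filter(deid_mask.astype(np.float32), sigma=sigma_vox)
    w = np.clip(w, 0.0, 1.0)[..., np.newaxis]  # broadcast over the vector axis
    return w * affine_disp + (1.0 - w) * nonlinear_disp
```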
  • the modified spatial registration (e.g., warp) parameters can then be applied to the template face data, and the result resampled into the space of the original input medical image data.
  • the edges of the transformed face/ear regions will align with those of the target medical image data, but within those regions the contour is that of the original template face data (i.e., a template face rather than the face of the subject depicted in the target medical image data).
  • the intensity of the warped template face data can be multiplied or otherwise transformed to match the mean intensity of the reference tissue voxels in the medical image data.
  • this intensity normalization may be performed using a reference tissue, or using other suitable normalization techniques such as global or local intensity normalization.
  • the reference tissue can be brain white matter, and the locations of the white matter voxels can be identified by segmenting the medical image data.
  • the voxel intensity scales in the template face data and the medical image data can be made to roughly match.
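A sketch of this reference-tissue normalization as a simple global rescaling by the ratio of mean white-matter intensities (the function name is hypothetical; inputs are assumed to be co-registered NumPy arrays on the same grid):

```python
import numpy as np

def match_reference_intensity(template_arr, subject_arr, wm_mask):
    """Rescale the warped template so its mean white-matter intensity
    matches the subject's; inputs are co-registered NumPy arrays."""
    scale = subject_arr[wm_mask].mean() / template_arr[wm_mask].mean()
    return template_arr * scale
```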
  • image intensities may be normalized at step 108, and the face voxels in the medical image data are replaced with data from the corresponding, co-registered voxels in the template face data, as indicated at step 110.
  • the medical image data are de-identified by replacing all voxels within a de-identification mask with those voxels in the co-registered template face data, thereby replacing or "transplanting” the template face onto the images of the subject depicted in the medical image data.
  • the ears can be replaced to prevent ear recognition.
  • regions behind the head may also be removed (e.g., set to zero), because these sometimes contain face voxels due to aliasing during reconstruction.
  • the de-identification mask can be smoothed before replacing voxels such that the transition between the original and de-identified regions of the image is smooth. This smoothing generates a more realistic image by avoiding creating unrealistic edges at the mask boundaries.
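Combining the smoothed mask with the intensity-matched, warped template gives the replacement step itself. The sketch below blends rather than hard-swaps at the boundary, which is one way to realize the smooth transition described above (a minimal sketch, not the disclosure's required implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def transplant_face(subject_arr, template_arr, deid_mask, sigma_vox=2.0):
    """Replace voxels inside the de-identification mask with the warped,
    intensity-matched template, with a smooth transition at the boundary."""
    w = np.clip(gaussian_filter(deid_mask.astype(np.float32), sigma=sigma_vox), 0.0, 1.0)
    return w * template_arr + (1.0 - w) * subject_arr
```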
  • the medical image data to be de-identified contain artifacts, such as those from motion, magnetic susceptibility, field inhomogeneity, or aliasing
  • the bias field can be modeled when segmenting the medical image data, and this artifact can be applied to the transformed template face data prior to replacing the face voxels (e.g., as part of the intensity normalization step).
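For the bias-field case, one hedged sketch (assuming a recent ANTsPy whose N4 wrapper accepts the return_bias_field option; filenames are hypothetical) estimates the subject's bias field with N4 and imprints it on the warped template before transplantation, so the transplanted region shares the subject's low-frequency shading:

```python
import ants

subject = ants.image_read("subject_t1.nii.gz")               # hypothetical filename
warped_template = ants.image_read("warped_template.nii.gz")  # output of the registration step

# Estimate the subject's multiplicative bias (shading) field with N4.
bias = ants.n4_bias_field_correction(subject, return_bias_field=True)

# Imprint the subject's shading on the warped template before
# the face voxels are replaced.
biased_template = warped_template * bias
```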
  • a registration cost function and volume of detected face tissue can be leveraged to automatically detect failures and apply alternate registration settings when they occur.
  • the medical image data to be de-identified may lack sufficient space to transfer the template face, such as when the imaged field-of-view lacks the full face of the subject, or when large portions of the face were affected by aliasing (e.g., part of the face appears behind the head).
  • the target medical image data field-of-view can be padded.
  • the medical image data can be padded by approximately 15 mm.
  • the head can be relocated within the medical image data as needed to allow room for the transplanted face without enlarging the images.
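A sketch of the padding step (the approximately 15 mm figure comes from the disclosure; the choice of anterior axis is an assumption that depends on image orientation):

```python
import numpy as np

def pad_anterior(volume, voxel_size_mm, pad_mm=15.0, anterior_axis=1):
    """Pad the field-of-view in front of the face so the transplanted
    template face has room, without relocating the head."""
    pad_vox = int(np.ceil(pad_mm / voxel_size_mm))
    pad_spec = [(0, 0)] * volume.ndim
    pad_spec[anterior_axis] = (0, pad_vox)  # assumption: +axis 1 points anterior
    return np.pad(volume, pad_spec, mode="constant", constant_values=0)
```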
  • Some medical image data may also omit portions of the subject’s face, such as the bottoms of chins or mouths.
  • the template face data can be added in these regions, but other data (i.e., non-face voxels) in these image slices will remain missing.
  • the template face data may also include template neck data, such that template neck data can also be transferred from the template face data to complete the field-of-view, and to thereby provide greater consistency across images.
  • the transferred face can be cropped to match the original medical image data (reducing alteration of the input image).
  • the de-identified image data are then displayed to a user, or stored for later use, as indicated at step 112.
  • displaying the de-identified image data may include displaying de-identified images on, or in conjunction with, a graphical user interface ("GUI”).
  • GUI graphical user interface
  • the de-identified medical image data can also be stored for later use, such as further processing or later display.
  • Referring now to FIG. 2, a flowchart is illustrated setting forth the steps of an example method for generating template face data as average facial data from medical image data. As noted above, in other instances the template data need not be representative of a population average.
  • the method includes accessing medical image data from a memory or other suitable data storage device or medium, as indicated at step 202.
  • These medical image data may represent medical images acquired from a plurality of different subjects.
  • the medical images are acquired using the same medical imaging modality as the images to be de-identified.
  • the medical image data can represent a diverse population group.
  • the medical image data can represent a population group that is matched based on one or more demographic characteristics or other characteristics or features.
  • the medical image data can represent an age-matched population group, a race-matched population group, a sex-matched population group, or combinations thereof.
  • Age-matched population groups can be stratified by age ranges, such as decades, years, and the like.
  • the medical images from the subjects are coregistered, as indicated at step 204.
  • the images can be coregistered using an unbiased, group-wise approach with high-dimensional symmetric normalization.
  • the image intensities in the medical images can be normalized, as indicated at step 206, prior to averaging.
  • the coregistered medical images are then averaged to generate an average image, as indicated at step 208.
  • This averaging may be, for instance, a voxel-wise averaging.
  • One or more de-identification masks are then determined in the average image, as indicated at step 210.
  • one de-identification mask may correspond to face voxels in the average image.
  • Another de-identification mask may correspond to ear voxels in the average image.
  • the de-identification masks may be generated by tracing (e.g., manually, semi-automatically) the mask or masks.
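A simplified sketch of steps 204 through 208 of this template-construction method: register each subject to a common reference, normalize intensities, and average voxel-wise. For brevity this registers to the first image, which biases the template toward that subject; the unbiased group-wise approach described above would instead iterate the template itself (e.g., ANTsPy's ants.build_template offers such an iterative construction). The mask tracing of step 210 is manual or semi-automatic and is omitted here; filenames are hypothetical.

```python
import ants
import numpy as np

paths = ["subj01_t1.nii.gz", "subj02_t1.nii.gz", "subj03_t1.nii.gz"]  # hypothetical
images = [ants.image_read(p) for p in paths]

reference = images[0]
warped = []
for img in images:
    reg = ants.registration(fixed=reference, moving=img, type_of_transform="SyN")
    w_arr = reg["warpedmovout"].numpy()
    warped.append(w_arr / w_arr.mean())  # crude global intensity normalization

# Voxel-wise average of the co-registered, normalized images (step 208).
template_arr = np.mean(warped, axis=0)
template = reference.new_image_like(template_arr)
```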
  • a computing device 350 can receive one or more types of data (e.g., medical image data, template face data) from image source 302, which may be a medical image source, such as a magnetic resonance image source.
  • computing device 350 can execute at least a portion of a facial de-identification system 304 to de-identify facial regions in medical image data received from the image source 302.
  • the computing device 350 can communicate information about data received from the image source 302 to a server 352 over a communication network 354, which can execute at least a portion of the facial de-identification system 304 to de-identify medical image data received from the image source 302.
  • the server 352 can return information to the computing device 350 (and/or any other suitable computing device) indicative of an output of the facial de-identification system 304.
  • computing device 350 and/or server 352 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on.
  • the computing device 350 and/or server 352 can also reconstruct images from the data.
  • image source 302 can be any suitable source of image data (e.g., measurement data, images reconstructed from measurement data), such as a magnetic resonance imaging ("MRI”) system or other suitable medical imaging system, another computing device (e.g., a server storing medical image data), and so on.
  • image source 302 can be local to computing device 350.
  • image source 302 can be incorporated with computing device 350 (e.g., computing device 350 can be configured as part of a device for capturing, scanning, and/or storing images).
  • image source 302 can be connected to computing device 350 by a cable, a direct wireless link, and so on.
  • image source 302 can be located locally and/or remotely from computing device 350, and can communicate data to computing device 350 (and/or server 352) via a communication network (e.g., communication network 354).
  • communication network 354 can be any suitable communication network or combination of communication networks.
  • communication network 354 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), a wired network, and so on.
  • communication network 354 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks.
  • Communications links shown in FIG. 3 can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, and so on.
  • computing device 350 can include a processor 402, a display 404, one or more inputs 406, one or more communication systems 408, and/or memory 410.
  • processor 402 can be any suitable hardware processor or combination of processors, such as a central processing unit (“CPU”), a graphics processing unit (“GPU”), and so on.
  • display 404 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on.
  • inputs 406 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
  • communications systems 408 can include any suitable hardware, firmware, and/or software for communicating information over communication network 354 and/or any other suitable communication networks.
  • communications systems 408 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 408 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 410 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 402 to present content using display 404, to communicate with server 352 via communications system(s) 408, and so on.
  • Memory 410 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 410 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 410 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 350.
  • processor 402 can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server 352, transmit information to server 352, and so on.
  • server 352 can include a processor 412, a display 414, one or more inputs 416, one or more communications systems 418, and/or memory 420.
  • processor 412 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on.
  • display 414 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on.
  • inputs 416 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
  • communications systems 418 can include any suitable hardware, firmware, and/or software for communicating information over communication network 354 and/or any other suitable communication networks.
  • communications systems 418 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 418 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 420 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 412 to present content using display 414, to communicate with one or more computing devices 350, and so on.
  • Memory 420 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 420 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 420 can have encoded thereon a server program for controlling operation of server 352.
  • processor 412 can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 350, receive information and/or content from one or more computing devices 350, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on.
  • image source 302 can include a processor 422, one or more image acquisition systems 424, one or more communications systems 426, and/or memory 428.
  • processor 422 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on.
  • the one or more image acquisition systems 424 are generally configured to acquire data, images, or both, and can include an MRI system or other suitable medical imaging system. Additionally or alternatively, in some embodiments, one or more image acquisition systems 424 can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of an MRI system or other suitable medical imaging system. In some embodiments, one or more portions of the one or more image acquisition systems 424 can be removable and/or replaceable.
  • image source 302 can include any suitable inputs and/or outputs.
  • image source 302 can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on.
  • image source 302 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on.
  • communications systems 426 can include any suitable hardware, firmware, and/or software for communicating information to computing device 350 (and, in some embodiments, over communication network 354 and/or any other suitable communication networks).
  • communications systems 426 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 426 can include hardware, firmware and/or software that can be used to establish a wired connection using any suitable port and/or communication standard (e.g., VGA, DVI video, USB, RS-232, etc.), Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 428 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 422 to control the one or more image acquisition systems 424, and/or receive data from the one or more image acquisition systems 424; to generate images from data; present content (e.g., images, a user interface) using a display; communicate with one or more computing devices 350; and so on.
  • Memory 428 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 428 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 428 can have encoded thereon, or otherwise stored therein, a program for controlling operation of image source 302.
  • processor 422 can execute at least a portion of the program to generate images, transmit information and/or content (e.g., data, images) to one or more computing devices 350, receive information and/or content from one or more computing devices 350, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on.
  • any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein.
  • computer readable media can be transitory or non-transitory.
  • non-transitory computer readable media can include media such as magnetic media (e.g., hard disks, floppy disks), optical media (e.g., compact discs, digital video discs, Blu-ray discs), semiconductor media (e.g., random access memory (“RAM”), flash memory, electrically programmable read only memory (“EPROM”), electrically erasable programmable read only memory (“EEPROM”)), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media.
  • RAM random access memory
  • EPROM electrically programmable read only memory
  • EEPROM electrically erasable programmable read only memory
  • transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
  • the MRI system 500 includes an operator workstation 502 that may include a display 504, one or more input devices 506 (e.g., a keyboard, a mouse), and a processor 508.
  • the processor 508 may include a commercially available programmable machine running a commercially available operating system.
  • the operator workstation 502 provides an operator interface that facilitates entering scan parameters into the MRI system 500.
  • the operator workstation 502 may be coupled to different servers, including, for example, a pulse sequence server 510, a data acquisition server 512, a data processing server 514, and a data store server 516.
  • the operator workstation 502 and the servers 510, 512, 514, and 516 may be connected via a communication system 540, which may include wired or wireless network connections.
  • the pulse sequence server 510 functions in response to instructions provided by the operator workstation 502 to operate a gradient system 518 and a radiofrequency ("RF”) system 520.
  • Gradient waveforms for performing a prescribed scan are produced and applied to the gradient system 518, which then excites gradient coils in an assembly 522 to produce the magnetic field gradients Gx, Gy, and Gz that are used for spatially encoding magnetic resonance signals.
  • the gradient coil assembly 522 forms part of a magnet assembly 524 that includes a polarizing magnet 526 and a whole-body RF coil 528.
  • RF waveforms are applied by the RF system 520 to the RF coil 528, or a separate local coil, to perform the prescribed magnetic resonance pulse sequence.
  • Responsive magnetic resonance signals detected by the RF coil 528, or a separate local coil, are received by the RF system 520.
  • the responsive magnetic resonance signals may be amplified, demodulated, filtered, and digitized under direction of commands produced by the pulse sequence server 510.
  • the RF system 520 includes an RF transmitter for producing a wide variety of RF pulses used in MRI pulse sequences.
  • the RF transmitter is responsive to the prescribed scan and direction from the pulse sequence server 510 to produce RF pulses of the desired frequency, phase, and pulse amplitude waveform.
  • the generated RF pulses may be applied to the whole-body RF coil 528 or to one or more local coils or coil arrays.
  • the RF system 520 also includes one or more RF receiver channels.
  • An RF receiver channel includes an RF preamplifier that amplifies the magnetic resonance signal received by the coil 528 to which it is connected, and a detector that detects and digitizes the I and Q quadrature components of the received magnetic resonance signal. The magnitude of the received magnetic resonance signal may, therefore, be determined at a sampled point by the square root of the sum of the squares of the I and Q components:

    M = √(I² + Q²)
  • The phase of the received magnetic resonance signal may also be determined according to the following relationship:

    φ = tan⁻¹(Q / I)
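A minimal NumPy illustration of these two quadrature relationships (the sample values are arbitrary examples):

```python
import numpy as np

# Example digitized quadrature components of a received signal.
i_comp = np.array([0.8, 0.2, -0.5])  # I component
q_comp = np.array([0.6, -0.9, 0.4])  # Q component

magnitude = np.hypot(i_comp, q_comp)  # M = sqrt(I^2 + Q^2)
phase = np.arctan2(q_comp, i_comp)    # phi = tan^-1(Q/I), quadrant-correct
```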
  • the pulse sequence server 510 may receive patient data from a physiological acquisition controller 530.
  • the physiological acquisition controller 530 may receive signals from a number of different sensors connected to the patient, including electrocardiograph (“ECG”) signals from electrodes, or respiratory signals from a respiratory bellows or other respiratory monitoring devices. These signals may be used by the pulse sequence server 510 to synchronize, or "gate,” the performance of the scan with the subject’s heart beat or respiration.
  • ECG electrocardiograph
  • the pulse sequence server 510 may also connect to a scan room interface circuit 532 that receives signals from various sensors associated with the condition of the patient and the magnet system. Through the scan room interface circuit 532, a patient positioning system 534 can receive commands to move the patient to desired positions during the scan.
  • the digitized magnetic resonance signal samples produced by the RF system 520 are received by the data acquisition server 512.
  • the data acquisition server 512 operates in response to instructions downloaded from the operator workstation 502 to receive the real-time magnetic resonance data and provide buffer storage, so that data is not lost by data overrun. In some scans, the data acquisition server 512 passes the acquired magnetic resonance data to the data processing server 514. In scans that require information derived from acquired magnetic resonance data to control the further performance of the scan, the data acquisition server 512 may be programmed to produce such information and convey it to the pulse sequence server 510. For example, during pre-scans, magnetic resonance data may be acquired and used to calibrate the pulse sequence performed by the pulse sequence server 510.
  • navigator signals may be acquired and used to adjust the operating parameters of the RF system 520 or the gradient system 518, or to control the view order in which k-space is sampled.
  • the data acquisition server 512 may also process magnetic resonance signals used to detect the arrival of a contrast agent in a magnetic resonance angiography (“MRA”) scan.
  • MRA magnetic resonance angiography
  • the data acquisition server 512 may acquire magnetic resonance data and process it in real-time to produce information that is used to control the scan.
  • the data processing server 514 receives magnetic resonance data from the data acquisition server 512 and processes the magnetic resonance data in accordance with instructions provided by the operator workstation 502. Such processing may include, for example, reconstructing two-dimensional or three- dimensional images by performing a Fourier transformation of raw k-space data, performing other image reconstruction algorithms (e.g., iterative or backprojection reconstruction algorithms), applying filters to raw k-space data or to reconstructed images, generating functional magnetic resonance images, or calculating motion or flow images.
  • Images reconstructed by the data processing server 514 are conveyed back to the operator workstation 502 for storage.
  • Real-time images may be stored in a database memory cache, from which they may be output to the operator display 504 or a display 536.
  • Batch mode images or selected real time images may be stored in a host database on disc storage 538.
  • the data processing server 514 may notify the data store server 516 on the operator workstation 502.
  • the operator workstation 502 may be used by an operator to archive the images, produce films, or send the images via a network to other facilities.
  • the MRI system 500 may also include one or more networked workstations 542.
  • a networked workstation 542 may include a display 544, one or more input devices 546 (e.g., a keyboard, a mouse), and a processor 548.
  • the networked workstation 542 may be located within the same facility as the operator workstation 502, or in a different facility, such as a different healthcare institution or clinic.
  • the networked workstation 542 may gain remote access to the data processing server 514 or data store server 516 via the communication system 540. Accordingly, multiple networked workstations 542 may have access to the data processing server 514 and the data store server 516. In this manner, magnetic resonance data, reconstructed images, or other data may be exchanged between the data processing server 514 or the data store server 516 and the networked workstations 542, such that the data or images may be remotely processed by a networked workstation 542.
  • the CT system includes a gantry 602, to which at least one x-ray source 604 is coupled.
  • the x-ray source 604 projects an x-ray beam 606, which may be a fan-beam or cone-beam of x-rays, towards a detector array 608 on the opposite side of the gantry 602.
  • the detector array 608 includes a number of x-ray detector elements 610.
  • the x-ray detector elements 610 sense the projected x-rays 606 that pass through a subject 612, such as a medical patient or an object undergoing examination, that is positioned in the CT system 600.
  • Each x-ray detector element 610 produces an electrical signal that may represent the intensity of an impinging x-ray beam and, hence, the attenuation of the beam as it passes through the subject 612.
  • each x-ray detector 610 is capable of counting the number of x-ray photons that impinge upon the detector 610.
  • the gantry 602 and the components mounted thereon rotate about a center of rotation 614 located within the CT system 600.
  • the CT system 600 also includes an operator workstation 616, which typically includes a display 618; one or more input devices 620, such as a keyboard and mouse; and a computer processor 622.
  • the computer processor 622 may include a commercially available programmable machine running a commercially available operating system.
  • the operator workstation 616 provides the operator interface that enables scanning control parameters to be entered into the CT system 600.
  • the operator workstation 616 is in communication with a data store server 624 and an image reconstruction system 626.
  • the operator workstation 616, data store server 624, and image reconstruction system 626 may be connected via a communication system 628, which may include any suitable network connection, whether wired, wireless, or a combination of both.
  • the communication system 628 may include both proprietary or dedicated networks, as well as open networks, such as the internet.
  • the operator workstation 616 is also in communication with a control system 630 that controls operation of the CT system 600.
  • the control system 630 generally includes an x-ray controller 632, a table controller 634, a gantry controller 636, and a data acquisition system ("DAS”) 638.
  • the x-ray controller 632 provides power and timing signals to the x-ray source 604 and the gantry controller 636 controls the rotational speed and position of the gantry 602.
  • the table controller 634 controls a table 640 to position the subject 612 in the gantry 602 of the CT system 600.
  • the DAS 638 samples data from the detector elements 610 and converts the data to digital signals for subsequent processing. For instance, digitized x-ray data is communicated from the DAS 638 to the data store server 624.
  • the image reconstruction system 626 then retrieves the x-ray data from the data store server 624 and reconstructs an image therefrom.
  • the image reconstruction system 626 may include a commercially available computer processor, or may be a highly parallel computer architecture, such as a system that includes multiple-core processors and massively parallel, high-density computing devices.
  • image reconstruction can also be performed on the processor 622 in the operator workstation 616. Reconstructed images can then be communicated back to the data store server 624 for storage or to the operator workstation 616 to be displayed to the operator or clinician.
  • the CT system 600 may also include one or more networked workstations 642.
  • a networked workstation 642 may include a display 644; one or more input devices 646, such as a keyboard and mouse; and a processor 648.
  • the networked workstation 642 may be located within the same facility as the operator workstation 616, or in a different facility, such as a different healthcare institution or clinic.
  • the networked workstation 642, whether within the same facility or in a different facility as the operator workstation 616, may gain remote access to the data store server 624 and/or the image reconstruction system 626 via the communication system 628.
  • multiple networked workstations 642 may have access to the data store server 624 and/or image reconstruction system 626.
  • x-ray data, reconstructed images, or other data may be exchanged between the data store server 624, the image reconstruction system 626, and the networked workstations 642, such that the data or images may be remotely processed by a networked workstation 642.
  • This data may be exchanged in any suitable format, such as in accordance with the transmission control protocol (“TCP”), the internet protocol (“IP”), or other known or suitable protocols.
  • TCP transmission control protocol
  • IP internet protocol

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

Facial information in medical images is de-identified using an approach that reduces undesired effects of de-identification on image statistics, while still protecting participant privacy. Rather than remove or blur face voxels, face voxels are replaced with image data from a template face, which in some instances may be a population-average face. Participant facial features are fully removed, but impacts on downstream biomarker measurements are minimized by generating an output image that resembles a complete craniofacial image with statistical image texture properties similar to the original.

Description

SYSTEMS AND METHODS FOR NON-DESTRUCTIVE DE-IDENTIFICATION OF FACIAL DATA IN MEDICAL IMAGES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 62/824,652 filed on March 27, 2019 and entitled "Systems and Methods for Non-Destructive Anonymization of Facial Data in Medical Images,” which is incorporated herein by reference as if set forth in its entirety for all purposes.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH
[0002] This invention was made with government support under AG011378, AG041851, AG006786, AG016574, and AG034676 awarded by the National Institutes of Health. The government has certain rights in the invention.
BACKGROUND
[0003] Large data breaches occur so frequently that they have become routine in the daily news cycle and are increasingly recognized as a ubiquitous threat to society. Government entities and private corporations maintain large databases of individuals’ information that are frequently leaked or stolen. Breaches can expose financial data, private communications, and tracking of individuals’ locations, interests, and purchases.
[0004] Breaches of medical data have arguably been less frequent than breaches of financial or social media information, but individuals’ medical data are among the most intensely personal, and breaches are damaging when they occur, both to the individuals and to the entities who failed to protect the data. Through the Health Insurance Portability and Accountability Act ("HIPAA”) and similar legislation, entities responsible for medical data can be held criminally responsible for data breaches if it is determined that they did not provide adequate protections. Institutions and corporations have both an ethical and legal responsibility to protect health information they collect and store.
[0005] While individuals’ privacy is routinely compromised by data breaches, the continuing advancement and adoption of face recognition software technology holds an unlimited potential to undermine, if not destroy, individuals’ expectation of privacy in public places. Concerns about loss of individual freedom and privacy, and risks of false identifications and bias in law enforcement, have fueled demand for global legislation to control the risks of face recognition, but the technology has advanced faster than legislation can adapt.
[0006] Face recognition of medical images poses significant privacy risks that have not been highly reported. Through continuing advances in medical imaging technology, medical scans from magnetic resonance imaging ("MRI”), computed tomography ("CT”), and other imaging modalities, often contain highly detailed imagery of individuals’ facial features. These scans can yield nearly-photorealistic 3D reconstructions of faces.
[0007] At the same time, there is an increasing demand for public sharing of all publicly-funded research data, including medical images. This demand is primarily driven by the desire to maximize scientific returns on the time and funding spent for its acquisition. Common practice for de-identifying medical images prior to public data sharing is to de-identify the accompanying textual meta-information, but the image itself is typically left intact.
[0008] Although researchers widely believe that modifying textual meta-information is sufficient to de-identify participant images, recent research has shown that this is insufficient because freely available software can be applied to publicly shared research medical images to create detailed 3D renders of participant faces and automatically match them with (previously-identified) photographs. These tasks can be performed quickly and automatically on a large scale, facilitating detection of target individuals among many medical images, or comparison of each medical image with a large database of photographs. With the proliferation of social media, suitable photographs of private citizens (already associated with their identity) are both widely and publicly available to would-be attackers. An inquiring individual would need merely to generate a facial reconstruction from a research medical image and upload the image to their personal social media accounts to see if the system automatically tags it as anyone they know.
[0009] Medical research studies frequently include data on age, sex, and approximate location (e.g., study site) of otherwise de-identified participants. These details can be leveraged to help identify the study participants, particularly because age, sex, and home region are also frequently available via social media and can be leveraged to reduce the pool of candidate face matches. As imaging and face recognition technology improve, these threats only become stronger.
[0010] Research studies collect a multitude of protected health information about each participant that could be extremely damaging to those individuals if their identity were discovered. Identified individuals could be linked to medical diagnoses, genetic risks, biomarkers, psychometric testing, and so on, which could damage their careers and reputations. Knowledge of data such as diagnoses and risk factors could also cause emotional pain and suffering to participants, even without disclosure to others.
[0011] Downloaders of publicly shared research data are frequently required to accept a data use agreement (DUA) that, among other things, forbids attempts to identify participants. These agreements can legally protect study administrators, but they rely only on the honesty and promises of the individuals who sign them, rather than offering any direct protection of participants’ privacy. If the popular press discovered that even one participant had been publicly identified and consequently harmed by their research participation, this news could significantly and permanently erode public trust and participation in medical research, regardless of which entity receives legal blame. Study participants believe that research studies have an ethical and legal obligation to protect their identities to the fullest possible extent, and they would likely consider these institutions to be at least partially at fault if their identities were compromised.
[0012] Standard approaches to medical image de-identification remove selected textual information from the image header, but they make no attempt to de-identify the image itself. This approach is insufficient because it does nothing to protect against face recognition. Some methods for facial de-identification of medical images do exist, but they are not used by most large imaging studies with publicly-released data. These existing approaches fall into two broad classes: face removal and face blurring.
[0013] Some of the earliest approaches to remove faces in medical images simply used existing algorithms for skull stripping (i.e., removing everything except for the brain). These methods unnecessarily prevent measurements using non-brain regions, such as intracranial volume, cerebrospinal fluid volume, arterial biomarkers, and so on. Face removal algorithms were thus proposed as an alternative. These use registration with a standard atlas to locate the face region and remove it by setting the image intensities to some standard value (e.g., zero). However, these face removal methods create processing failures, bias, and noise in automated image processing pipelines.
[0014] Face blurring algorithms, including the approach used for some data releases from the human connectome project, were therefore proposed as an alternative to face removal. These methods similarly locate the face via registration with an atlas, but they blur the face’s contour to prevent recognition rather than remove it. The ability of this class of methods to effectively de-identify an individual has recently been called into question by a manuscript showing that popular blurring method used by the Human Connectome Project can be defeated by un blurring the blurred face contour with deep-learning algorithms. Furthermore, even though face blurring was designed specifically to modify the data less strongly than skull stripping or face wiping methods, the process still affects morphometric measurements of brain structures using popular analysis software.
[0015] It is counterintuitive that software to measure structures inside the brain would be affected by removing or blurring structures outside the brain, but these approaches typically apply models designed for complete craniofacial medical images. When parts of the image are removed or blurred, this can bias affine and/or nonlinear registrations by creating non-anatomic edges in the image, and because registration cost functions and deformation models were not designed to model missing or blurred image data. Additionally, most popular methods for brain tissue class segmentation use a Bayesian maximum-a-posteriori ("MAP”) approach that classifies each location by comparing its statistics with those of all modelled image classes. For all MAP models, modifying the statistical model of one class (e.g., non brain) therefore affects the posterior probabilities (i.e., relative likelihood) of tissue being assigned to all possible classes (e.g., brain).
[0016] There remains a need, then, for a way in which medical images can be de-identified in a secure manner, and without introducing errors that affect downstream image processing. SUMMARY OF THE DISCLOSURE
[0017] The present disclosure addresses the aforementioned drawbacks by providing a method for generating a de-identified medical image. The method includes accessing medical image data using a computer system. The medical image data includes at least some voxels corresponding to a face of a subject. Template face data are also accessed using the computer system. The template face data include voxels corresponding to a template face that is different from the subject’s face. The medical image data and the template face data are co-registered, and de-identified medical image data are generated by replacing the voxels corresponding to the face of the subject in the medical image data with the voxels corresponding to the template face in the template face data. The de-identified medical image data includes at least one de-identified medical image of the subject.
[0018] In some configurations, when co-registering the medical image data and the template face data, the template face data corresponding to face voxels may be transformed using a linear transformation, and the template face data corresponding to non-face voxels may be transformed using a nonlinear transformation. A de-identification mask may be smoothed in the template face data such that images transformed using registration parameters have a smooth spatial transition between linearly transformed regions and non-linearly transformed regions.
[0019] In some configurations, the medical image data further include at least some voxels corresponding to an ear of the subject, and the template face data further include voxels corresponding to an ear associated with the template face; and generating de-identified medical image data further includes replacing the voxels corresponding to the ear of the subject in the medical image data with the voxels corresponding to the ear of the template face in the template face data.
[0020] In some configurations, the template face data are representative of a population group to which the subject belongs. The population group may be an age-matched population group in which the plurality of subjects have a similar age as the subject from which the medical image data were acquired. The population group may be a race-matched population group in which the plurality of subjects have a same race as the subject from which the medical image data were acquired. The population group may be a sex-matched population group in which the plurality of subjects have a same sex as the subject from which the medical image data were acquired.
[0021] In some configurations, the method includes segmenting the medical image data to locate voxels corresponding to a reference tissue and normalizing image intensity values in the template face data using image intensity values of the voxels corresponding to the reference tissue. In some configurations, the reference tissue may be white matter.
[0022] In some configurations, generating de-identified medical image data may include removing voxels in the medical image data located in regions behind a head of the subject. Removing the voxels in the medical image data located in regions behind the head of the subject may include assigning zero values or randomly distributed values to those voxels.
[0023] In some configurations, the medical image data are magnetic resonance image data including one or more magnetic resonance images acquired from the subject. The magnetic resonance images may include T1 -weighted images, T2 -weighted images, or magnetic resonance images acquired with other contrast weightings or mechanisms known in the art. The medical image data may be computed tomography (CT) image data comprising one or more CT images acquired from the subject. The medical image data may be positron emission tomography (PET) image data comprising one or more PET images acquired from the subject.
[0024] In some configurations, the method includes storing the de-identified medical image data for later use, whereby the de-identified medical image data depict de-identified facial information so as to impede facial recognition of the subject. In some configurations, the method includes displaying a de-identified medical image of the subject to a user by accessing the de-identified medical image data with the computer system, selecting a de-identified medical image from the de- identified medical image data, and generating a display of the de-identified medical image.
[0025] In some configurations, the template face data may be average facial data comprising voxels corresponding to an average of a plurality of faces from a corresponding plurality of subjects.
[0026] In one configuration, a computer readable medium is provided that includes instructions stored on the computer readable medium for accessing medical image data using a computer system, where the medical image data include at least some voxels corresponding to a face of a subject. The instructions also provide for accessing template face data using the computer system, where the template face data include voxels corresponding to a template face that is different from the subject’s face. The instructions also provide for co-registering the medical image data and the template face data and generating de-identified medical image data by replacing the voxels corresponding to the face of the subject in the medical image data with the voxels corresponding to the template face in the template face data, where the de-identified medical image data include at least one de-identified medical image of the subject.
[0027] The foregoing and other aspects and advantages of the present disclosure will appear from the following description. In the description, reference is made to the accompanying drawings that form a part hereof, and in which there is shown by way of illustration a preferred embodiment. This embodiment does not necessarily represent the full scope of the invention, however, and reference is therefore made to the claims and herein for interpreting the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] FIG. 1 is a flowchart setting forth the steps of an example method for generating de-identified medical image data, in which voxels corresponding to a subject’s face are replaced with voxels from template face data that represent a template face that is different from the subject’s face.
[0029] FIG. 2 is a flowchart setting forth the steps of an example method for generating template face data that represent an average face of a plurality of subjects in a population group.
[0030] FIG. 3 is a block diagram of an example facial de-identification system for de-identifying facial information in medical images, such as magnetic resonance images.
[0031] FIG. 4 is a block diagram of example hardware components that can implement the facial de-identification system of FIG. 3.
[0032] FIG. 5 is a block diagram of an example magnetic resonance imaging ("MRI”) system that can implement the methods described in the present disclosure.
[0033] FIG. 6A is an example of an x-ray computed tomography ("CT”) system that can implement the methods described in the present disclosure.
[0034] FIG. 6B is a block diagram of the example CT system of FIG. 6A.
DETAILED DESCRIPTION
[0035] Described here are systems and methods for de-identifying facial information in medical images using an approach that reduces undesired effects of de-identification on image statistics while still protecting participant privacy. In general, rather than remove or blur face voxels, the systems and methods described in the present disclosure replace face voxels with image data from a template face that is different from the subject’s face. As one non-limiting example, the template face may be a population-average face that is generated by averaging the faces of a plurality of subjects from a population group. This approach fully removes participant facial features, but minimizes impacts on downstream biomarker measurements by generating an output image that resembles a complete craniofacial image with statistical image texture properties similar to the original.
[0036] The systems and methods described in the present disclosure addresses a significant unmet need by providing medical image de-identification that protects research study participants from identification via facial recognition. Unlike existing techniques, the systems and methods described in the present disclosure provide for de-identification that minimizes harmful effects on the ability to measure quantitative medical information from the de-identified images, thereby allowing for increased subject or patient privacy protections while maximizing the medical, diagnostic, and scientific value of the de-identified images.
[0037] In some configurations, the systems and methods described in the present disclosure identify face regions in images via registration with a standard atlas that represents template face data, which may in some instances represent average facial data in a population group. The face voxels in the input medical images are then replaced with those of the template face (i.e., a digital "face transplant” is performed) from the template face data, rather than removing or blurring the face voxels.
[0038] Referring now to FIG. 1, a flowchart is illustrated as setting forth the steps of an example method for generating a de-identified medical image, in which face regions in the medical image are de-identified. [0039] The method includes accessing medical image data with a computer system, as indicated at step 102. Accessing the medical image data can include retrieving previously acquired medical image data from a memory or other data storage device or medium. In some other instances, accessing the medical image data can include acquiring medical images with a medical imaging system and communicating those images to the computer system.
[0040] The medical image data may include one or more medical images, which may be 2D images or 3D images, and which contain at least some voxels corresponding to the face of a subject. Additionally or alternatively, the medical image data may include voxels corresponding to the subject’s ears, or other anatomy associated with the head, neck, or both. The medical image data may include magnetic resonance images (e.g., Tl-weighted images, T2-weighted images, or magnetic resonance images acquired with other contrast weightings or mechanisms known in the art). Additionally or alternatively, the medical image data can include images obtained with other medical imaging modalities, such as x-ray computed tomography ("CT”), positron emission tomography ("PET”), and so on.
[0041] The method also includes accessing template face data with the computer system, as indicated at step 104. Accessing the template face data can include, for instance, retrieving previously generated template face data from a memory or other data storage device or medium. The medical image data contained in the template face data are preferably but may not necessarily be acquired using the same imaging modality as the medical image data accessed in step 102.
[0042] In some embodiments, the template face data include medical image data acquired from a plurality of different subjects, which have been averaged together. In some embodiments, the template face data include voxels corresponding to the faces of the plurality of subjects having been co-registered and averaged together. Thus, in these instances, the template face data represent a population average face.
[0043] In some other embodiments, the template face data can include voxels corresponding to a face from a different subject. In these instances, the facial features in the input medical image data will be replaced with the facial features of the individual whose face is depicted in the template face data.
[0044] The template face data may include data representative of a population that matches one or more demographic or other characteristics of the subject depicted in the medical image data to be de -identified. For instance, the template face data may be representative of a plurality of age-matched subjects, race-matched subjects, sex-matched subjects, or combinations thereof.
[0045] Optionally, the medical image data are segmented to identify regions containing a reference tissue to be used for image intensity normalization. As one non-limiting example, the medical image data are segmented to locate voxels containing white matter. The medical image data can be segmented using a segmentation algorithm such as SPM, FreeSurfer, FSL, or other suitable image segmentation algorithms known in the art. As mentioned, the segmented medical image data can be stored for later use in an image intensity normalization step. Additionally or alternatively, methods for intensity normalization other than reference tissue normalization may also be applied, such as global intensity normalization.
[0046] The medical image data and the template face data are then coregistered, as indicated at step 106. As one non-limiting example, a nonlinear transformation between the medical image data and the template face data can be performed in order to transform the template face data to match the medical image data and to identify the face voxels, ear voxels, or both. As one example, an ANTs (Advanced Normalization Tools) symmetric nonlinear registration can be used. Other suitable methods may include DARTEL, FNIRT, DRAMMS, and so on.
[0047] The computed registration parameters in the face and ear regions can be modified by setting the masked locations to zero nonlinear deformation. Modifying the parameters in this way allows for only an affine (e.g., linear) registration to be applied in the masked regions (i.e., face voxels), while using a nonlinear registration throughout the rest of the image (i.e., non-face voxels). This approach leaves the geometric contour of the face voxels in the template face data unaltered, while spatially transforming the rest of the voxels in the template face data to match the input image. This process provides spatially continuity between the edges of the original image data and the transferred template face data.
[0048] When modifying these registration parameters, the de-identification mask can be smoothed such that images transformed using the registration parameters have a smooth spatial transition between the linearly-transformed and nonlinearly-transformed regions.
[0049] The modified spatial registration (e.g., warp) parameters can then be applied to the template face data (and the result resampled into the space of the original input medical image data. At this point, the edges of the transformed face/ear regions will align with those of the target medical image data, but within those regions the contour is that of the original template face data (i.e., a template face rather than the face of the subject depicted in the target medical image data).
[0050] Optionally, the intensity of the warped template face data can be multiplied or otherwise transformed to match the mean intensity of the reference tissue voxels in the medical image data. As noted above, this intensity normalization may be performed using a reference tissue, or using other suitable normalization techniques such as global or local intensity normalization. For example, the reference tissue can be brain white matter, and the locations of the white matter voxels can be identified by segmenting the medical image data. Using this image intensity normalization, the voxel intensity scales in the template face data and the medical image data can be made to roughly match.
[0051] Referring still to FIG. 1, after the medical image data and template face data are co-registered, image intensities may be normalized at step 108 and the face voxels in the medical image data are replaced with data from the corresponding, co registered voxels in the template face data, as indicated at step 110. For instance, the medical image data are de-identified by replacing all voxels within a de identification mask with those voxels in the co-registered template face data, thereby replacing or "transplanting” the template face onto the images of the subject depicted in the medical image data. Additionally or alternatively, the ears can be replaced to prevent ear recognition. In applicable imaging modalities, such as MRI, regions behind the head may also be removed (e.g., set to zero), because these sometimes contain face voxels due to aliasing during reconstruction.
[0052] In some instances, as described above, the de-identification mask can be smoothed before replacing voxels such that the transition between the original and de-identified regions of the image is smooth. This smoothing generates a more realistic image by avoiding creating unrealistic edges at the mask boundaries.
[0053] When the medical image data to be de-identified contain artifacts, such as those from motion, magnetic susceptibility, field inhomogeneity, or aliasing, it may be advantageous to process the medical image data to compensate for these artifacts. As an example, to address magnetic field inhomogeneities, the bias field can be modeled from segmenting the medical image data and applying this artifact to the transformed template face data prior to replacing the face voxels (e.g., as part of the intensity normalization step). For registration failures and other issues, a registration cost function and volume of detected face tissue can be leveraged to automatically detect failures and apply alternate registration settings when they occur.
[0054] Sometimes the medical image data to be de-identified lacks sufficient space to transfer the template face, such as when the imaged field-of-view lacks a full face of the subject, or when large portions of the face were affected by aliasing (e.g., part of the face appears behind the head). To address these scenarios, the target medical image data field-of-view can be padded. As one non-limiting example, the medical image data can be padded by approximately 15 mm. Alternatively, instead of padding the field-of-view, the head can be relocated within the medical image data as needed to allow room for the transplanted face without enlarging the images.
[0055] Some medical image data may also omit portions of the subject’s face, such as the bottoms of chins or mouths. In these instances, the template face data can be added in these regions, but other data (i.e., non-face voxels) in these image slices will remain missing. In some instances, the template face data may also include template neck data, such that template neck data can also be transferred from the template face data to complete the field-of-view, and to thereby provide greater consistency across images. In some other instances, the transferred face can be cropped to match the original medical image data (reduced altering of the input image).
[0056] The de-identified image data are then displayed to a user, or stored for later use, as indicated at step 112. For instance, displaying the de-identified image data may include displaying de-identified images on, or in conjunction with, a graphical user interface ("GUI”). As described, the de-identified medical image data can also be stored for later use, such as further processing or later display.
[0057] Referring now to FIG. 2, a flowchart is illustrated as setting forth the steps of an example method for generating template face data as average facial data from medical image data. As noted above, in other instances the template data need not be representative of a population average.
[0058] The method includes accessing medical image data from a memory or other suitable data storage device or medium, as indicated at step 202. These medical image data may represent medical images acquired from a plurality of different subjects. Preferably, the medical images are acquired using the same medical imaging modality as the images to be de-identified.
[0059] In some instances, the medical image data can represent a diverse population group. In some other instances, the medical image data can represent a population group that is matched based on one or more demographic characteristics or other characteristics or features. For example, the medical image data can represent an age-matched population group, a race-matched population group, a sex-matched population group, or combinations thereof. Age-matched population groups can be stratified by age ranges, such as decades, years, and the like.
[0060] The medical images from the subjects are coregistered, as indicated at step 204. For instance, the images can be coregistered using an unbiased, group- wise approach with high-dimensional symmetric normalization. The image intensities in the medical images can be normalized, as indicated at step 206, prior averaging.
[0061] The coregistered medical images are then averaged to generate an average image, as indicated at step 208. This averaging may be, for instance, a voxel- wise averaging. One or more de-identification masks are then determined in the average image, as indicated at step 210. For instance, one de-identification mask may correspond to face voxels in the average image. Another de-identification mask may correspond to ear voxels in the average image. The de-identification masks may be generated by tracing (e.g., manually, semi-automatically) the mask or masks.
[0062] Referring now to FIG. 3, an example of a system 300 for de -identifying facial regions in medical image data in accordance with some embodiments of the systems and methods described in the present disclosure is shown. As shown in FIG. 3, a computing device 350 can receive one or more types of data (e.g., medical image data, template face data) from image source 302, which may be a medical image source, such as a magnetic resonance image source. In some embodiments, computing device 350 can execute at least a portion of a facial de-identification system 304 to de-identify facial regions in medical image data received from the image source 302.
[0063] Additionally or alternatively, in some embodiments, the computing device 350 can communicate information about data received from the image source 302 to a server 352 over a communication network 354, which can execute at least a portion of the facial de-identification system 304 to de-identify medical image data received from the image source 302. In such embodiments, the server 352 can return information to the computing device 350 (and/or any other suitable computing device) indicative of an output of the facial de-identification system 304.
[0064] In some embodiments, computing device 350 and/or server 352 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on. The computing device 350 and/or server 352 can also reconstruct images from the data.
[0065] In some embodiments, image source 302 can be any suitable source of image data (e.g., measurement data, images reconstructed from measurement data), such as a magnetic resonance imaging ("MRI”) system or other suitable medical imaging system, another computing device (e.g., a server storing medical image data), and so on. In some embodiments, image source 302 can be local to computing device 350. For example, image source 302 can be incorporated with computing device 350 (e.g., computing device 350 can be configured as part of a device for capturing, scanning, and/or storing images). As another example, image source 302 can be connected to computing device 350 by a cable, a direct wireless link, and so on. Additionally or alternatively, in some embodiments, image source 302 can be located locally and/or remotely from computing device 350, and can communicate data to computing device 350 (and/or server 352) via a communication network (e.g., communication network 354).
[0066] In some embodiments, communication network 354 can be any suitable communication network or combination of communication networks. For example, communication network 354 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), a wired network, and so on. In some embodiments, communication network 354 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks. Communications links shown in FIG. 3 can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, and so on.
[0067] Referring now to FIG. 4, an example of hardware 400 that can be used to implement image source 302, computing device 350, and server 354 in accordance with some embodiments of the systems and methods described in the present disclosure is shown. As shown in FIG. 4, in some embodiments, computing device 350 can include a processor 402, a display 404, one or more inputs 406, one or more communication systems 408, and/or memory 410. In some embodiments, processor 402 can be any suitable hardware processor or combination of processors, such as a central processing unit ("CPU”), a graphics processing unit ("GPU”), and so on. In some embodiments, display 404 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs 406 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
[0068] In some embodiments, communications systems 408 can include any suitable hardware, firmware, and/or software for communicating information over communication network 354 and/or any other suitable communication networks. For example, communications systems 408 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 408 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[0069] In some embodiments, memory 410 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 402 to present content using display 404, to communicate with server 352 via communications system(s) 408, and so on. Memory 410 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 410 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 410 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 350. In such embodiments, processor 402 can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server 352, transmit information to server 352, and so on.
[0070] In some embodiments, server 352 can include a processor 412, a display 414, one or more inputs 416, one or more communications systems 418, and/or memory 420. In some embodiments, processor 412 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, display 414 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs 416 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
[0071] In some embodiments, communications systems 418 can include any suitable hardware, firmware, and/or software for communicating information over communication network 354 and/or any other suitable communication networks. For example, communications systems 418 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 418 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[0072] In some embodiments, memory 420 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 412 to present content using display 414, to communicate with one or more computing devices 350, and so on. Memory 420 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 420 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 420 can have encoded thereon a server program for controlling operation of server 352. In such embodiments, processor 412 can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 350, receive information and/or content from one or more computing devices 350, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on.
[0073] In some embodiments, image source 302 can include a processor 422, one or more image acquisition systems 424, one or more communications systems 426, and/or memory 428. In some embodiments, processor 422 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, the one or more image acquisition systems 424 are generally configured to acquire data, images, or both, and can include an MRI system or other suitable medical imaging system. Additionally or alternatively, in some embodiments, one or more image acquisition systems 424 can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of an MRI system or other suitable medical imaging system. In some embodiments, one or more portions of the one or more image acquisition systems 424 can be removable and/or replaceable.
[0074] Note that, although not shown, image source 302 can include any suitable inputs and/or outputs. For example, image source 302 can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on. As another example, image source 302 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on.
[0075] In some embodiments, communications systems 426 can include any suitable hardware, firmware, and/or software for communicating information to computing device 350 (and, in some embodiments, over communication network 354 and/or any other suitable communication networks). For example, communications systems 426 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 426 can include hardware, firmware and/or software that can be used to establish a wired connection using any suitable port and/or communication standard (e.g., VGA, DVI video, USB, RS-232, etc.), Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[0076] In some embodiments, memory 428 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 422 to control the one or more image acquisition systems 424, and/or receive data from the one or more image acquisition systems 424; to images from data; present content (e.g., images, a user interface) using a display; communicate with one or more computing devices 350; and so on. Memory 428 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 428 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 428 can have encoded thereon, or otherwise stored therein, a program for controlling operation of image source 302. In such embodiments, processor 422 can execute at least a portion of the program to generate images, transmit information and/or content (e.g., data, images) to one or more computing devices 350, receive information and/or content from one or more computing devices 350, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on.
[0077] In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (e.g., hard disks, floppy disks), optical media (e.g., compact discs, digital video discs, Blu-ray discs), semiconductor media (e.g., random access memory ("RAM”), flash memory, electrically programmable read only memory ("EPROM”), electrically erasable programmable read only memory ("EEPROM”)), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media. [0078] Referring particularly now to FIG. 5, an example of an MRI system 500, which can implement some example embodiments of the methods described in the present disclosure, is illustrated. The MRI system 500 includes an operator workstation 502 that may include a display 504, one or more input devices 506 (e.g., a keyboard, a mouse), and a processor 508. The processor 508 may include a commercially available programmable machine running a commercially available operating system. The operator workstation 502 provides an operator interface that facilitates entering scan parameters into the MRI system 500. The operator workstation 502 may be coupled to different servers, including, for example, a pulse sequence server 510, a data acquisition server 512, a data processing server 514, and a data store server 516. The operator workstation 502 and the servers 510, 512, 514, and 516 may be connected via a communication system 540, which may include wired or wireless network connections.
[0079] The pulse sequence server 510 functions in response to instructions provided by the operator workstation 502 to operate a gradient system 518 and a radiofrequency ("RF”) system 520. Gradient waveforms for performing a prescribed scan are produced and applied to the gradient system 518, which then excites gradient coils in an assembly 522 to produce the magnetic field gradients Gx, Gy, and Gz that are used for spatially encoding magnetic resonance signals. The gradient coil assembly 522 forms part of a magnet assembly 524 that includes a polarizing magnet 526 and a whole-body RF coil 528.
[0080] RF waveforms are applied by the RF system 520 to the RF coil 528, or a separate local coil to perform the prescribed magnetic resonance pulse sequence. Responsive magnetic resonance signals detected by the RF coil 528, or a separate local coil, are received by the RF system 520. The responsive magnetic resonance signals may be amplified, demodulated, filtered, and digitized under direction of commands produced by the pulse sequence server 510. The RF system 520 includes an RF transmitter for producing a wide variety of RF pulses used in MRI pulse sequences. The RF transmitter is responsive to the prescribed scan and direction from the pulse sequence server 510 to produce RF pulses of the desired frequency, phase, and pulse amplitude waveform. The generated RF pulses may be applied to the whole-body RF coil 528 or to one or more local coils or coil arrays. [0081] The RF system 520 also includes one or more RF receiver channels. An RF receiver channel includes an RF preamplifier that amplifies the magnetic resonance signal received by the coil 528 to which it is connected, and a detector that detects and digitizes the I and Q quadrature components of the received magnetic resonance signal. The magnitude of the received magnetic resonance signal may, therefore, be determined at a sampled point by the square root of the sum of the squares of the I and Q components:
Figure imgf000021_0001
[0082] and the phase of the received magnetic resonance signal may also be determined according to the following relationship:
Figure imgf000021_0002
[0083] The pulse sequence server 510 may receive patient data from a physiological acquisition controller 530. By way of example, the physiological acquisition controller 530 may receive signals from a number of different sensors connected to the patient, including electrocardiograph ("ECG”) signals from electrodes, or respiratory signals from a respiratory bellows or other respiratory monitoring devices. These signals may be used by the pulse sequence server 510 to synchronize, or "gate,” the performance of the scan with the subject’s heart beat or respiration.
[0084] The pulse sequence server 510 may also connect to a scan room interface circuit 532 that receives signals from various sensors associated with the condition of the patient and the magnet system. Through the scan room interface circuit 532, a patient positioning system 534 can receive commands to move the patient to desired positions during the scan.
[0085] The digitized magnetic resonance signal samples produced by the RF system 520 are received by the data acquisition server 512. The data acquisition server 512 operates in response to instructions downloaded from the operator workstation 502 to receive the real-time magnetic resonance data and provide buffer storage, so that data is not lost by data overrun. In some scans, the data acquisition server 512 passes the acquired magnetic resonance data to the data processor server 514. In scans that require information derived from acquired magnetic resonance data to control the further performance of the scan, the data acquisition server 512 may be programmed to produce such information and convey it to the pulse sequence server 510. For example, during pre-scans, magnetic resonance data may be acquired and used to calibrate the pulse sequence performed by the pulse sequence server 510. As another example, navigator signals may be acquired and used to adjust the operating parameters of the RF system 520 or the gradient system 518, or to control the view order in which k-space is sampled. In still another example, the data acquisition server 512 may also process magnetic resonance signals used to detect the arrival of a contrast agent in a magnetic resonance angiography ("MRA”) scan. For example, the data acquisition server 512 may acquire magnetic resonance data and processes it in real-time to produce information that is used to control the scan.
[0086] The data processing server 514 receives magnetic resonance data from the data acquisition server 512 and processes the magnetic resonance data in accordance with instructions provided by the operator workstation 502. Such processing may include, for example, reconstructing two-dimensional or three- dimensional images by performing a Fourier transformation of raw k-space data, performing other image reconstruction algorithms (e.g., iterative or backprojection reconstruction algorithms), applying filters to raw k-space data or to reconstructed images, generating functional magnetic resonance images, or calculating motion or flow images.
[0087] Images reconstructed by the data processing server 514 are conveyed back to the operator workstation 502 for storage. Real-time images may be stored in a data base memory cache, from which they may be output to operator display 502 or a display 536. Batch mode images or selected real time images may be stored in a host database on disc storage 538. When such images have been reconstructed and transferred to storage, the data processing server 514 may notify the data store server 516 on the operator workstation 502. The operator workstation 502 may be used by an operator to archive the images, produce films, or send the images via a network to other facilities.
[0088] The MRI system 500 may also include one or more networked workstations 542. For example, a networked workstation 542 may include a display 544, one or more input devices 546 (e.g., a keyboard, a mouse), and a processor 548. The networked workstation 542 may be located within the same facility as the operator workstation 502, or in a different facility, such as a different healthcare institution or clinic.
[0089] The networked workstation 542 may gain remote access to the data processing server 514 or data store server 516 via the communication system 540. Accordingly, multiple networked workstations 542 may have access to the data processing server 514 and the data store server 516. In this manner, magnetic resonance data, reconstructed images, or other data may be exchanged between the data processing server 514 or the data store server 516 and the networked workstations 542, such that the data or images may be remotely processed by a networked workstation 542.
[0090] Referring particularly now to FIGS. 6A and 6B, an example of an x-ray computed tomography ("CT”) imaging system 600, which can implement some example embodiments of the methods described in the present disclosure, is illustrated. The CT system includes a gantry 602, to which at least one x-ray source 604 is coupled. The x-ray source 604 projects an x-ray beam 606, which may be a fan-beam or cone-beam of x-rays, towards a detector array 608 on the opposite side of the gantry 602. The detector array 608 includes a number of x-ray detector elements 610. Together, the x-ray detector elements 610 sense the projected x-rays 606 that pass through a subject 612, such as a medical patient or an object undergoing examination, that is positioned in the CT system 600. Each x-ray detector element 610 produces an electrical signal that may represent the intensity of an impinging x-ray beam and, hence, the attenuation of the beam as it passes through the subject 612. In some configurations, each x-ray detector 610 is capable of counting the number of x-ray photons that impinge upon the detector 610. During a scan to acquire x-ray projection data, the gantry 602 and the components mounted thereon rotate about a center of rotation 614 located within the CT system 600.
[0091] The CT system 600 also includes an operator workstation 616, which typically includes a display 618; one or more input devices 620, such as a keyboard and mouse; and a computer processor 622. The computer processor 622 may include a commercially available programmable machine running a commercially available operating system. The operator workstation 616 provides the operator interface that enables scanning control parameters to be entered into the CT system 600. In general, the operator workstation 616 is in communication with a data store server 624 and an image reconstruction system 626. By way of example, the operator workstation 616, data store sever 624, and image reconstruction system 626 may be connected via a communication system 628, which may include any suitable network connection, whether wired, wireless, or a combination of both. As an example, the communication system 628 may include both proprietary or dedicated networks, as well as open networks, such as the internet.
[0092] The operator workstation 616 is also in communication with a control system 630 that controls operation of the CT system 600. The control system 630 generally includes an x-ray controller 632, a table controller 634, a gantry controller 636, and a data acquisition system 638. The x-ray controller 632 provides power and timing signals to the x-ray source 604 and the gantry controller 636 controls the rotational speed and position of the gantry 602. The table controller 634 controls a table 640 to position the subject 612 in the gantry 602 of the CT system 600.
[0093] The DAS 638 samples data from the detector elements 610 and converts the data to digital signals for subsequent processing. For instance, digitized x-ray data is communicated from the DAS 638 to the data store server 624. The image reconstruction system 626 then retrieves the x-ray data from the data store server 624 and reconstructs an image therefrom. The image reconstruction system 626 may include a commercially available computer processor, or may be a highly parallel computer architecture, such as a system that includes multiple-core processors and massively parallel, high-density computing devices. Optionally, image reconstruction can also be performed on the processor 622 in the operator workstation 616. Reconstructed images can then be communicated back to the data store server 624 for storage or to the operator workstation 616 to be displayed to the operator or clinician.
[0094] The CT system 600 may also include one or more networked workstations 642. By way of example, a networked workstation 642 may include a display 644; one or more input devices 646, such as a keyboard and mouse; and a processor 648. The networked workstation 642 may be located within the same facility as the operator workstation 616, or in a different facility, such as a different healthcare institution or clinic. [0095] The networked workstation 642, whether within the same facility or in a different facility as the operator workstation 616, may gain remote access to the data store server 624 and/or the image reconstruction system 626 via the communication system 628. Accordingly, multiple networked workstations 642 may have access to the data store server 624 and/or image reconstruction system 626. In this manner, x-ray data, reconstructed images, or other data may be exchanged between the data store server 624, the image reconstruction system 626, and the networked workstations 642, such that the data or images may be remotely processed by a networked workstation 642. This data may be exchanged in any suitable format, such as in accordance with the transmission control protocol ("TCP”), the internet protocol ("IP”), or other known or suitable protocols.
[0096] The present disclosure has described one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the invention.

Claims

1. A method for generating a de-identified medical image, the step of the method comprising:
(a) accessing medical image data using a computer system, wherein the
medical image data comprise at least some voxels corresponding to a face of a subject;
(b) accessing template face data using the computer system, wherein the template face data comprise voxels corresponding to a template face that is different from the subject’s face;
(c) co-registering the medical image data and the template face data;
(d) generating de-identified medical image data by replacing the voxels
corresponding to the face of the subject in the medical image data with the voxels corresponding to the template face in the template face data, wherein the de-identified medical image data comprise at least one de- identified medical image of the subject.
2. The method as recited in claim 1, wherein step (c) includes transforming the template face data corresponding to face voxels using a linear transformation and the template face data corresponding to non-face voxels using a nonlinear
transformation.
3. The method as recited in claim 2, wherein step (c) includes smoothing a de-identification mask in the template face data such that images transformed using registration parameters have a smooth spatial transition between linearly transformed regions and non-linearly transformed regions.
4. The method as recited in claim 1, wherein the medical image data further comprise at least some voxels corresponding to an ear of the subject, the template face data further comprise voxels corresponding to an ear associated with the template face, and step (d) further includes replacing the voxels corresponding to the ear of the subject in the medical image data with the voxels corresponding to the ear of the template face in the template face data.
5. The method as recited in claim 1, wherein the template face data are representative of a population group to which the subject belongs.
6. The method as recited in claim 5, wherein the population group is an age- matched population group in which the plurality of subjects have a similar age as the subject from which the medical image data were acquired.
7. The method as recited in claim 5, wherein the population group is a race- matched population group in which the plurality of subjects have a same race as the subject from which the medical image data were acquired.
8. The method as recited in claim 5, wherein the population group is a sex- matched population group in which the plurality of subjects have a same sex as the subject from which the medical image data were acquired.
9. The method as recited in claim 1, further comprising segmenting the medical image data to locate voxels corresponding to a reference tissue and normalizing image intensity values in the template face data using image intensity values of the voxels corresponding to the reference tissue.
10. The method as recited in claim 9, wherein the reference tissue is white matter.
11. The method as recited in claim 1, wherein step (d) further includes removing voxels in the medical image data located in regions behind a head of the subject.
12. The method as recited in claim 11, wherein removing the voxels in the medical image data located in regions behind the head of the subject includes assigning zero values to those voxels.
13. The method as recited in claim 1, wherein the medical image data are magnetic resonance image data comprising one or more magnetic resonance images acquired from the subject.
14. The method as recited in claim 13, wherein the one or more magnetic resonance images comprise Tl-weighted images.
15. The method as recited in claim 1, wherein the medical image data are computed tomography (CT) image data comprising one or more CT images acquired from the subject.
16. The method as recited in claim 1, wherein the medical image data are positron emission tomography (PET) image data comprising one or more PET images acquired from the subject.
17. The method as recited in claim 1, further comprising storing the de- identified medical image data for later use, whereby the de-identified medical image data depict de-identified facial information so as to impede facial recognition of the subject.
18. The method as recited in claim 1, further comprising displaying a de- identified medical image of the subject to a user by accessing the de-identified medical image data with the computer system, selecting a de-identified medical image from the de-identified medical image data, and generating a display of the de-identified medical image.
19. The method as recited in claim 1, wherein the template face data are average facial data comprising voxels corresponding to an average of a plurality of faces from a corresponding plurality of subjects.
20. A non-transitory computer-readable medium storing instructions that, when executed by a computer, cause the computer to perform a method comprising:
(a) accessing medical image data using a computer system, wherein the medical image data comprise at least some voxels corresponding to a face of a subject;
(b) accessing template face data using the computer system, wherein the template face data comprise voxels corresponding to a template face that is different from the subject’s face;
(c) co-registering the medical image data and the template face data;
(d) generating de-identified medical image data by replacing the voxels
corresponding to the face of the subject in the medical image data with the voxels corresponding to the template face in the template face data, wherein the de-identified medical image data comprise at least one de- identified medical image of the subject.
PCT/US2020/025147 2019-03-27 2020-03-27 Systems and methods for non-destructive de-identification of facial data in medical images Ceased WO2020198560A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962824652P 2019-03-27 2019-03-27
US62/824,652 2019-03-27

Publications (1)

Publication Number Publication Date
WO2020198560A1 true WO2020198560A1 (en) 2020-10-01

Family

ID=70465262

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/025147 Ceased WO2020198560A1 (en) 2019-03-27 2020-03-27 Systems and methods for non-destructive de-identification of facial data in medical images

Country Status (1)

Country Link
WO (1) WO2020198560A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4202848A1 (en) 2021-12-24 2023-06-28 Koninklijke Philips N.V. Model based repair of three dimentional tomographc medical imaging data
EP4510140A1 (en) * 2023-07-31 2025-02-19 Coreline Soft Co., Ltd Apparatus and method for anonymizing medical images

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090228299A1 (en) * 2005-11-09 2009-09-10 The Regents Of The University Of California Methods and apparatus for context-sensitive telemedicine

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AMANDA BISCHOFF-GRETHE ET AL: "A technique for the deidentification of structural brain MR images", HUMAN BRAIN MAPPING, vol. 28, no. 9, September 2007 (2007-09-01), pages 892 - 903, XP055439676, ISSN: 1065-9471, DOI: 10.1002/hbm.20312 *
DU RUOYU ET AL: "Identity Concealment of Brain Images by Masking", vol. 7, no. 4, July 2013 (2013-07-01), pages 1511 - 1517, XP009520518, ISSN: 1935-0090, Retrieved from the Internet <URL:http://www.naturalspublishing.com/Article.asp?ArtcID=3106> [retrieved on 20200519], DOI: 10.12785/AMIS/070434 *
RUOYU DU ET AL: "Constructing De-identified Brain Model using Deformable Registration", 2013, XP055696693, Retrieved from the Internet <URL:http://kiise.or.kr/e_journal/2013/4/cpl/pdf/06.pdf> [retrieved on 20200519] *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4202848A1 (en) 2021-12-24 2023-06-28 Koninklijke Philips N.V. Model based repair of three dimentional tomographc medical imaging data
WO2023117678A1 (en) 2021-12-24 2023-06-29 Koninklijke Philips N.V. Model based repair of three dimentional tomographc medical imaging data
EP4510140A1 (en) * 2023-07-31 2025-02-19 Coreline Soft Co., Ltd Apparatus and method for anonymizing medical images

Similar Documents

Publication Publication Date Title
US11696701B2 (en) Systems and methods for estimating histological features from medical images using a trained model
KR102645120B1 (en) System and method for integrating tomographic image reconstruction and radiomics using neural networks
JP5893623B2 (en) Anomaly detection method and system in data set
JP7324195B2 (en) Optimizing Positron Emission Tomography System Design Using Deep Imaging
EP3282379B1 (en) Method and apparatus for archiving anonymized volumetric data from medical image visualization software
JP5424902B2 (en) Automatic diagnosis and automatic alignment supplemented using PET / MR flow estimation
Heinrich et al. Automatic human identification based on dental X-ray radiographs using computer vision
US10796464B2 (en) Selective image reconstruction
US20160292855A1 (en) Medical imaging device rendering predictive prostate cancer visualizations using quantitative multiparametric mri models
Goubran et al. Image registration of ex-vivo MRI to sparsely sectioned histology of hippocampal and neocortical temporal lobe specimens
KR20140088434A (en) Mri multi-parametric images aquisition supporting apparatus and method based on patient characteristics
Guan et al. Big data analytics on lung cancer diagnosis framework with deep learning
CN110598696B (en) Medical image scanning positioning method, medical image scanning method and computer equipment
KR102202398B1 (en) Image processing apparatus and image processing method thereof
CN111904379B (en) Scanning method and device for multimodal medical equipment
WO2019169393A1 (en) Improved multi-shot echo planar imaging through machine learning
JP2016508769A (en) Medical image processing
Houck et al. A comparison of automated and manual co-registration for magnetoencephalography
CN114332132A (en) Image segmentation method and device and computer equipment
JP2023509318A (en) Anatomical Encryption of Patient Images for Artificial Intelligence
Anchling et al. Automated orientation and registration of cone-beam computed tomography scans
WO2020198560A1 (en) Systems and methods for non-destructive de-identification of facial data in medical images
KR101885562B1 (en) Method for mapping region of interest in first medical image onto second medical image and apparatus using the same
US8805122B1 (en) System, method, and computer-readable medium for interpolating spatially transformed volumetric medical image data
Zhou et al. Clinical validation of an AI-based motion correction reconstruction algorithm in cerebral CT

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20721887
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20721887
    Country of ref document: EP
    Kind code of ref document: A1