
WO2025117351A1 - System and method for volumetric sensing for medical applications - Google Patents


Info

Publication number
WO2025117351A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
feature
physical parameter
human body
modal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/057016
Other languages
French (fr)
Inventor
Paulo E. XAVIER DA SILVEIRA
Anton Aleksandrovich TOKAR
Richard James GIBBS III
Jeremy DUBLON
Crystal MITCHELL
Ravi Vibhakar SHAH
Jonathan Dana EDWARDS
Isobel Jane MULLIGAN
Kazi Miftahul HOQUE
Dmitrii Aleksandrovich GLADYSHEV
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xrpro LLC
Original Assignee
Xrpro LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xrpro LLC
Publication of WO2025117351A1


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/10ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to drugs or medications, e.g. for ensuring correct administration to patients
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • FIG. 1 is an example block diagram of a system for generating multi-modal volumetric data of a feature or anatomical data of a patient according to some implementations.
  • FIG. 6 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • FIG. 7 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • FIG. 8 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • FIG. 9 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • FIG. 10 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • FIG. 11 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • FIG. 12 is an example process flow diagram associated with generating multimodal volumetric data according to some implementations.
  • FIG. 13 is an example process flow diagram associated with generating a three-dimensional model of a feature of a body according to some implementations.
  • FIG. 14 is an example pictorial diagram of a multi-modal volumetric three-dimensional model of a feature of a patient according to some implementations.
  • systems and architecture for generating multi-modal volumetric three-dimensional models (e.g., meshes, reconstruction, maps, and the like) of features (e.g., organs, limbs, epidermis, regions, characteristics, and the like) of a human body for use in medical diagnostics, monitoring (e.g., continuous, periodic, and the like), therapy selection and evaluation, and the like.
  • the systems and architectures discussed herein may be applicable to the field of three-dimensional (3D) sensing of physical parameters in volumetric objects, and more specifically to sensing biological (unidimensional or multi-dimensional) signals associated with features of the human body for medical applications (e.g., diagnostics, monitoring, treatment, and the like).
  • the system discussed herein may be configured to assist with generation of depth maps for the 3D reconstruction of features of the human body (e.g., feet, hands, head, limbs, pelvis, organs, torso, and/or the like) while concurrently (or substantially simultaneously) acquiring and determining additional sensor modality values, such as temperature, microbial loads, dielectric properties, blood flow, electric field potential, magnetic field strength, tissue oxygenation, and the like at various depths (e.g., surface temperatures, internal temperatures at one or more predetermined depth or region, and the like) with respect to the feature or volumetric 3D model.
  • the additional sensor modality values may be determined and/or estimated at specific points, regions, multiple points, along the surface or within the 3D volumetric model and corresponding feature of the human body.
  • the system is configured to generate and solve systems of partial differential equations (PDE) based at least in part on the volumetric 3D data and model to determine values or metrics of biological parameters within points that are not directly measured or between points that are directly measured, potentially with improved numeric accuracy.
  • the additional values or metrics may assist with locating and diagnosing conditions, such as, for example, diabetic ulcers and pre-ulcerative lesions while reducing the cost of diagnosis and subsequent monitoring of the condition (such as via at home or remote future scanning of the affected feature).
  • the system discussed herein may increase the availability of medical equipment, the frequency of data collection and monitoring, and overall patient compliance with treatments when compared to conventional in-office diagnostic specialists and equipment.
  • various numeric methods may be utilized to reconstruct the whole volume of the feature from one or more 3D scans and to estimate volumetric values, such as temperature, microbial load, dielectric properties, electric field potential, magnetic field strength, tissue oxygenation, and the like within portions and/or the entire volume of the feature.
  • the resulting surface or 3D volumetric model is used to diagnose medical conditions (e.g., cancerous, ischemic, infected, necrotic or ulcerous regions, and/or the like) within the scanned feature (e.g., body part).
  • the system may determine temperatures over the 3D volume of the model by applying Poisson’s equation followed by the use of PDEs and Gauss’ theorem to extrapolate known temperature points to unknown points within the enclosed volume of the feature, as discussed herein.
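  • as a rough illustration of the approach above (a minimal sketch, not the patent’s actual solver), the following assumes a voxelized volume in which measured surface temperatures are held fixed as boundary conditions while a finite-difference relaxation of Laplace’s equation propagates them into the interior; the grid size, boundary values, and iteration count are illustrative assumptions:

```python
# Minimal sketch: extrapolate measured surface temperatures into an enclosed
# volume by relaxing Laplace's equation (steady-state heat flow) with finite
# differences. Grid size, boundary values, and iteration count are assumptions.
import numpy as np

def relax_interior(temps, boundary_mask, iters=500):
    """Jacobi iteration: each interior voxel tends toward the mean of its six
    neighbors; measured boundary voxels are re-imposed every iteration."""
    t = temps.copy()
    for _ in range(iters):
        avg = (np.roll(t, 1, 0) + np.roll(t, -1, 0) +
               np.roll(t, 1, 1) + np.roll(t, -1, 1) +
               np.roll(t, 1, 2) + np.roll(t, -1, 2)) / 6.0
        t = np.where(boundary_mask, temps, avg)
    return t

grid = np.full((32, 32, 32), 37.0)       # initial guess near core temperature
mask = np.zeros(grid.shape, dtype=bool)  # True where a surface reading exists
mask[0, :, :] = mask[-1, :, :] = True
mask[:, 0, :] = mask[:, -1, :] = True
mask[:, :, 0] = mask[:, :, -1] = True
grid[0, 10:16, 10:16] = 39.5             # a locally elevated surface reading
interior = relax_interior(grid, mask)    # estimated temperatures through the volume
```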
  • estimating temperature within a feature of the body may assist with determination of core temperature (an important vital parameter) as well as for diagnosis of tissue infection and necrosis.
  • in the case of tissue infection, an increased metabolism induced by the body’s immune system results in an increase in local temperature over the affected region of the feature of the body, which may be detected using the 3D volumetric model having additional integrated temperature data over and through the volume.
  • in the case of necrosis, the decreased metabolism caused by tissue death results in a decrease in local temperature of the affected region of the feature of the body.
  • localized temperature detection may assist in early diagnosis and detection of both diabetically induced ulcerous infections (such as in the feet of diabetic patients) and chronic venous insufficiency (e.g., medical conditions that affect millions of patients worldwide, resulting in patient discomfort, pain and suffering, and limited mobility, which further deteriorates patient prognosis, sometimes resulting in loss of limb or, if untreated, death).
  • the localized temperature detection over and through the volume of the 3D volumetric model may assist in detection of deep-tissue pressure injuries (DTPIs) that may occur in the hospital, particularly when a patient is bedridden or otherwise suffering from restricted mobility.
  • pressure injuries acquired in the hospital may result in harm, including chronic wounds, and in some cases deaths.
  • the system discussed herein, may be used as a periodic or continuous monitor to generate up to full body 3D volumetric models integrating other multi-modal sensor capabilities (such as temperature data, infrared data, ultraviolet data, radio wave data, motion data, and the like). In this manner, DTPIs may be detected early and treated prior to complications ensuing.
  • the system may incorporate cloud-based resources and processing (e.g., such as for aggregation, filtering, data integration into one or more volumetric models, model generation, application of one or more machine learning models, and the like).
  • the 3D volumetric models may be processed and/or generated via cloud based services, systems, and/or processing resources.
  • the cloud-based services may include, among other elements, one or more servers in communication with one or more user equipment or devices over one or more networks.
  • the multi-modal 3D volumetric data may be processed locally on a user equipment or partially in the cloud and partially on the local user equipment.
  • the multi-modal 3D volumetric data may include numeric representations of anatomy of patients or users, such as three-dimensional scans of features of the body, portions of skin, and the like as discussed herein.
  • the multi-modal 3D volumetric data may also include different types of data, such as thermal data, red-green-blue data, depth data, infrared data, magnetic resonance imaging (MRI) data, light detection and ranging (LIDAR) data, and the like.
  • the multi-modal 3D volumetric data may also include additional data related to the image data, such as sensor data including one or more of temperature, oxygenation, bacterial load, electrical potential, dielectric impedance, electrocardiogram (EKG), photoplethysmography (PPG), perfusion, heart rate, heart rate variability (HRV), and the like.
  • metadata may be associated with the multi-modal 3D volumetric data.
  • the metadata may include patient information (e.g., identifiers, demographic information, name, age, gender, weight, body part dimensions, such as extracted from the multi-modal 3D volumetric data by the capture device, birth date, medical history, family data, and the like) and 3D scan or scanning device information (e.g., device identifier, sensor type, serial number, firmware or software version, scan date, time, and/or the like).
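  • purely for illustration, metadata of this kind might be serialized as a simple record such as the following; every field name and value below is a hypothetical assumption, not a schema defined by this disclosure:

```python
# Hypothetical metadata record accompanying a multi-modal 3D scan; the field
# names and values are illustrative assumptions, not the patent's schema.
scan_metadata = {
    "patient": {
        "identifier": "P-000123",
        "age": 58,
        "gender": "F",
        "weight_kg": 72.5,
        "medical_history": ["type 2 diabetes"],
    },
    "device": {
        "device_identifier": "UE-106-0042",
        "sensor_type": "RGB-D + thermal",
        "firmware_version": "1.4.2",
        "scan_timestamp": "2024-11-26T14:32:00Z",
    },
}
```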
  • cloud-based processing may consist of multiple servers, available full-time and on demand, making their services substantially ubiquitous, constantly available, and easily accessible (e.g., additional servers may be quickly activated in times of peak demand).
  • servers may consist of multiple computers in large service centers, using different operating systems, benefiting from lower-cost centralized utilities and services, and from expandable infrastructure, such as multiple parallel central processing units (CPUs), graphic processing units (GPUs), arithmetic logic units (ALUs), tensor processing units (TPUs) and quantum processing units (QPUs), to name a few.
  • cloud-servers may also be referred to as backend-servers or simply backend.
  • the cloud-based system discussed herein may include data pre-processing, use of one or more machine learning models that are trained on multi-modal 3D volumetric data associated with anatomy of individuals having various conditions, symptoms, states of health, age, genders, cultural backgrounds, and the like.
  • the one or more machine learning models may be trained to segment the multi-modal 3D volumetric data, classify the multi-modal 3D volumetric data, perform feature detection (such as identifying body parts, landmarks, dimensions, and the like) from the multi-modal 3D volumetric data.
  • the one or more machine learning models may also be trained to assist in diagnosing conditions and symptoms with respect to various features and/or body parts and individuals having a wide variety of features, body types, lifestyles (e.g., diet, exercise, work conditions, and the like), cultures, demographics (e.g., age, gender, and the like), such as those classified and identified by one or more other machine learning models (including, but not limited to, feet in various stages of load bearing and having various types and conditions of arches), determining health status, recommending patient specific treatments or therapies and the like.
  • the cloud-based processing may be used to determine a full system of PDEs, benefitting health professionals as well as patients by improving accuracy while also benefitting from the increased computational power and memory available to cloud based systems.
  • a health care professional may upload multi-modal 3D volumetric data via the user equipment or scanning device to the cloud-based system and receive in response indications of features or conditions that may require further evaluation, user specific anthropometric measurements to assist with diagnostics or evaluations, flagged or identified potential conditions, symptoms as well as suggested treatments or therapies including those related to ulcers, necrosis, infection, and the like.
  • the cloud-based system may provide instructions to perform additional scans and/or capture additional multi-modal 3D volumetric data or other types of sensor data associated with a specific user to enhance any recommendations or features identified.
  • the cloud-based system may also return one or more additional inquiries for the healthcare professional and/or the patient, such as questions related to an accident, a particular feature, history of a feature, feature, or change in state detected, and the like to further assist the healthcare professionals in diagnostics and evaluation of the patient.
  • the machine learning models may be generated using various machine learning techniques.
  • the models may be generated using one or more neural network(s).
  • a neural network may be a biologically inspired algorithm or technique which passes input data (e.g., image and sensor data captured by the user equipment or devices) through a series of connected layers to produce an output or learned inference.
  • Each layer in a neural network can also comprise another neural network or can comprise any number of layers (whether convolutional or not).
  • a neural network can utilize machine learning, which can refer to a broad class of such techniques in which an output is generated based on learned parameters.
  • one or more neural network(s) may generate any number of learned inferences or heads from the captured sensor and/or image data.
  • the neural network may be a trained network architecture that is end-to-end.
  • the machine learning models may include segmenting and/or classifying extracted deep convolutional features of the sensor and/or image data into semantic data.
  • the models may be trained using appropriate truth outputs in the form of semantic per-pixel classifications (e.g., vehicle identifier, container identifier, driver identifier, and the like).
  • machine learning algorithms can include, but are not limited to, regression algorithms (e.g., ordinary least squares regression (OLSR), linear regression, logistic regression, stepwise regression, multivariate adaptive regression splines (MARS), locally estimated scatterplot smoothing (LOESS)), instance-based algorithms (e.g., ridge regression, least absolute shrinkage and selection operator (LASSO), elastic net, least-angle regression (LARS)), decision tree algorithms (e.g., classification and regression tree (CART), iterative dichotomiser 3 (ID3), Chi-squared automatic interaction detection (CHAID), decision stump, conditional decision trees), Bayesian algorithms (e.g., naive Bayes, Gaussian naive Bayes, multinomial naive Bayes, average one-dependence estimators (AODE), Bayesian belief network (BNN), Bayesian networks), clustering algorithms, and the like.
  • architectures include neural networks such as ResNet50, ResNet101, VGG, DenseNet, PointNet, and the like.
  • the system may also apply Gaussian blurs, Bayes Functions, color analyzing or processing techniques and/or a combination thereof.
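  • as a toy illustration of the classical algorithm families listed above (and not the patent’s actual model), the sketch below fits a logistic regression over hypothetical per-region feature vectors; the features, labels, and values are invented for demonstration:

```python
# Toy sketch: logistic regression (one of the algorithm families listed above)
# classifying scanned regions as normal vs. at-risk from invented features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-region features: [mean temperature (C), normalized UV microbial signal]
X = np.array([[36.9, 0.05], [37.1, 0.10], [39.4, 0.80], [39.9, 0.90]])
y = np.array([0, 0, 1, 1])  # 0 = normal region, 1 = at-risk region (assumed labels)

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[39.0, 0.70]]))  # estimated probabilities for a new region
```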
  • FIG. 1 is an example block diagram of a system 100 for generating multi-modal volumetric data, such as 3D volumetric model 102 of a feature 104 or anatomical data of a patient according to some implementations.
  • the system 100 may include user equipment or scanning device 106 configured to capture sensor data of the feature 104 (e.g., the foot of the patient) and to generate the 3D volumetric model 102 representing the feature 104.
  • the 3D volumetric model 102 is illustrated including integrated temperature data, however, it should be understood that the 3D volumetric model 102 may include other integrated data, such as microbial load, and/or multiple types of integrated data that may be viewed together (such as via a combined overlay) and/or individually (such as via a selectable option).
  • the user equipment 106 may include one or more emitters 108 for outputting signals (such as particular types of light, waves, radiation, and the like) while the healthcare professional 110 is scanning the feature 104 of the patient.
  • the 3D volumetric model 102 may also integrate microbial load data that may be generated via an ultraviolet (UV) emitter 108 and sensor that integrates the UV data into the model via one or more machine learning model trained on UV data and corresponding microbial types, loads, species, and known related conditions.
  • the user equipment 106 may also include both image capture devices 112 and/or sensors 114.
  • the user equipment 106 may capture image data (such as red-green-blue data of the feature 104) together with depth data, temperature data, and the like captured by the sensors 114.
  • the image capture devices 112 and the sensors 114 may be combined, such as via the same device or package.
  • the image capture devices 112 and the sensors 114 may include one or more of devices configured to capture color data, infrared data, LIDAR data, radar data, impedance data, electric field data, photoplethysmography data, tissue perfusion data, tissue oxygenation data, radio wave data, radiation data, audio data, stereoscopic data, magnetic data, contact data, depth data, temperature data, ultraviolet data, motion data, a combination thereof, and/or the like.
  • the user equipment 106 may also include one or more user interfaces 116.
  • the user interfaces 116 may include input interfaces (e.g., mouse or keyboard) or output interfaces (e.g., a display).
  • the user interfaces 116 may include a virtual environment display or a traditional two-dimensional display, such as a liquid crystal display or a light emitting diode display.
  • the user interfaces 116 may also include one or more input components for receiving feedback from the user.
  • the input components may include tactile input components, audio input components, gesture or motion inputs (such as IMU inputs), or other natural language processing components.
  • the user interfaces 116 may be a combined input and output, such as a touch enabled display for viewing and interacting with the 3D volumetric model 102, performing a scan, and/or the like.
  • the user equipment 106 also includes processors 118, one or more computer- readable media 120, and/or communication interfaces 122, as discussed in more detail below.
  • each of the processors 118 may itself comprise one or more processors or processing cores.
  • the computer-readable media 120 may include volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth).
  • the computer-readable media 120 may include fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth).
  • the computer-readable media 120 may be configured in a variety of other ways as further described below. Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media 120 and configured to execute on the processors 118.
  • the communication interface(s) 122 can facilitate communication with other proximate sensor systems and/or other facility systems.
  • the communications interfaces(s) 122 may enable Wi-Fi-based communication such as via frequencies defined by the IEEE 802.11 standards, short range wireless frequencies such as Bluetooth, Bluetooth Low Energy (BLE), cellular communication (e.g., 2G, 3G, 4G, 4G LTE, 5G, etc.), satellite communication, dedicated short-range communications (DSRC), or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s).
  • the 3D volumetric model 102 showing integrated temperature data includes a hot region 124.
  • the hot region 124 may be an indicator of an infection, a pre-ulcer, or an otherwise at-risk area that may need to be investigated by a healthcare professional, such as professional 110.
  • the user equipment 106 may generate an alert to the patient, a healthcare professional 110, or the like to notify them that the region 124 should be reviewed.
  • the alert may include a list of potential conditions that the healthcare professional 110 may consider during their review or investigation of the region 124.
  • FIG. 2 is an example block diagram of an architecture for the system 200 for generating multi-modal volumetric data 202 at least in part via a cloud-based anatomical data processing system 204 and in part via a user equipment or scanning device 206 according to some implementations.
  • the user equipment 206, such as a sensor device and a processing device, may be used to capture and/or otherwise generate image data 208 and/or sensor data 210 of a feature 212 (e.g., in the current example, a foot) of a human body.
  • the user equipment 206 proximate to the feature 212 may capture the image data 208 and the sensor data 210 and provide the data 208 and 210 to the anatomical data processing system 204 in the cloud.
  • the anatomical data processing system 204 may generate a 3D mesh associated with the image data 208 and representative of the feature 212 of the patient.
  • the anatomical data processing system 204 may also determine one or more physical and/or biological parameters 218 associated with the feature 212 of the patient based at least in part on the sensor data 210.
  • the anatomical data processing system 204 may then provide the volumetric data 202 back to the user equipment 206 such that a healthcare professional 214 may review the data 202 to assist with diagnostics and treatment of the patient.
  • the anatomical data processing system 204 may also provide the volumetric data 202 to one or more third-party systems 216.
  • the anatomical data processing system 204 may provide the volumetric data 202 to one or more additional health care providers assisting the patient with treatment, therapy, and/or unrelated conditions, one or more guardians or custodians, and the like.
  • the anatomical data processing system 204 may also determine or diagnose conditions based at least in part on the volumetric data 202 including the multi-modal 3D volumetric model representative of the feature 212 and corresponding physical and/or biological parameters 218.
  • the anatomical data processing system 204 may input the volumetric data 202 and/or the physical and/or biological parameters 218, such as via the multi-modal 3D volumetric model representative of the feature 212 into one or more machine learning models trained on 3D data of various features of various individuals in various different conditions, states, and the like.
  • the training data may be generated from different individuals having a wide variety of features (such as those classified and identified by one or more other machine learning models), changes in state of the features, determining health status, undergoing different patient specific treatments or therapies and the like.
  • the one or more machine learning models may output potential conditions and/or diagnostics data 220 for review by a healthcare professional 214 together with the volumetric data 202 and integrated physical and/or biological parameters 218.
  • the system 204 may utilize multiple sets of machine learning models, such as a neural network and as discussed herein.
  • the system 200 may be utilized to monitor the feature 212 of the patient over time, such as on a continuous basis, semi-continuous basis, periodic basis (at various frequencies), and/or the like.
  • the anatomical data processing system 204 may generate volumetric data 202 and/or physical and/or biological parameter data 218 for each scan or at each interval, such that the anatomical data processing system 204 generates a multi-modal volumetric model of the feature 212 representative of the feature 212 at each interval or period of time.
  • the anatomical data processing system 204 may compare the multi-modal volumetric models of the feature 212 over time to generate change data 224 to assist with detection, monitoring, and treatment of various conditions.
  • the anatomical data processing system 204 may output the potential conditions and/or diagnostics data 220, the volumetric data 202, and/or the change data 224 to the healthcare professional 214 via the user equipment 206 and/or to a third-party system 216, as discussed herein.
  • the health care professional 214 may upload the image data 208 and the sensor data 210 via a user equipment 206 to the cloud-based anatomical data processing system 204 and receive in response indications of conditions that may require further evaluation, user specific anthropometric measurements to assist with diagnostics or evaluations, flagged or identified potential conditions, symptoms as well as suggested treatments or therapies.
  • the anatomical data processing system 204 may provide notifications 222 with instructions to perform additional scans and/or capture additional image data 208 and/or sensor data 210 (e.g., additional types of sensor data) associated with a specific feature 212 of the patient to enhance any recommendations or features identified.
  • the anatomical data processing system 204 may also return one or more additional inquiries for the healthcare professional 214 and/or the patient, such as questions related to an accident, a particular body part, history of a feature detected, history of a change in state of a feature detected, and the like to further assist the healthcare professionals in diagnostics and evaluation of the patient.
  • cloud-based processing may consist of multiple servers, available full-time and on demand, making their services substantially ubiquitous, constantly available, on demand and easily accessible (e.g., additional servers may be quickly activated in times of peak demand).
  • servers may consist of multiple computers in large service centers, using different operating systems, benefiting from lower-cost centralized utilities and services, and from expandable infrastructure, such as multiple parallel central processing units (CPUs), graphic processing units (GPUs), arithmetic logic units (ALUs), tensor processing units (TPUs) and quantum processing units (QPUs), to name a few.
  • cloud-servers may also be referred to as backend-servers or simply backend.
  • the data, notifications, measurements, instructions, and the like may be transmitted between various systems using networks, generally indicated by 226-228.
  • the networks 226-228 may be any type of network that facilitates communication between one or more systems and may include one or more cellular networks, ZigBee, Bluetooth, BLE, radio, WiFi networks, short-range or near-field networks, infrared signals, local area networks, wide area networks, the internet, and so forth.
  • each network 226-228 is shown as a separate network but it should be understood that two or more of the networks may be combined or the same.
  • FIG. 3 is an example block diagram of an architecture for user equipment 300 associated with generating multi-modal volumetric data according to some implementations.
  • the user equipment 300 may include one or more communication interface(s) 304 (also referred to as communication devices and/or modems), one or more sensor system(s) 306, and one or more emitter(s) 308.
  • the user equipment 300 can include one or more communication interface(s) 304 that enable communication between the user equipment 300 and one or more other local or remote computing device(s) or remote services, such as a cloud-based system of FIG. 2.
  • the communication interface(s) 304 can facilitate communication with other proximate sensor systems, a central control system, or other facility systems.
  • the communications interfaces(s) 304 may enable Wi-Fi-based communication such as via frequencies defined by the IEEE 802.11 standards, short range wireless frequencies such as Bluetooth, Bluetooth low energy, ZigBee, cellular communication (e.g., 2G, 3G, 4G, 4G LTE, 5G, etc.), satellite communication, dedicated short-range communications (DSRC), or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s).
  • the one or more sensor system(s) 306 may be configured to capture the image data 328, the sensor data 330, and the like associated with a feature of a body of a patient.
  • the sensor system(s) 306 may include thermal sensors, time-of-flight sensors, location sensors, LIDAR sensors, radar sensors, sonar sensors, infrared sensors, cameras (e.g., RGB, IR, intensity, depth, and the like), magnetic sensors, microphone sensors, environmental sensors (e.g., temperature sensors, humidity sensors, electric field probes, radio frequency (RF) antennas, impedance sensors, magnetic field sensors, photoplethysmography sensors, tissue oxygenation, tissue perfusion, millimeter-wave radars, light sensors, pressure sensors, and the like), motion sensors, radiation sensors, and the like.
  • the sensor system(s) 306 may include multiple instances of each type of sensor.
  • camera sensors may include multiple cameras disposed at various locations.
  • the user equipment 300 may also include one or more emitter(s) 308 for emitting light, radiation (e.g., x-rays, gamma rays, and the like), electric fields, and/or sound.
  • light may be output by the emitters 308 in the wavelength range of 200 nm to 300 nm to cause microbes and microbial components, such as organelles and anabolites, to fluoresce.
  • the biological parameters of the skin may fluoresce at different wavelengths and/or intensities when comparing healthy and diseased skin, such as cancerous skin.
  • the emitters 308 in this example may output illumination, fields, lasers, patterns, such as an array of light, audio, radiation, and the like.
  • the user equipment 300 may also include one or more user interfaces 302, such as input (e.g., mouse or keyboard) or output devices (e.g., a display).
  • the user interfaces 302 may include a virtual environment display or a traditional two-dimensional display, such as a liquid crystal display or a light emitting diode display.
  • the user interfaces 302 may also include one or more input components for receiving feedback from the user.
  • the input components may include tactile input components, audio input components, or other natural language processing components.
  • the user interfaces 302 may be a combined touch enabled display.
  • the user equipment 300 may include one or more processors 310 and one or more computer-readable media 312. Each of the processors 310 may itself comprise one or more processors or processing cores.
  • the computer-readable media 312 is illustrated as including memory/storage.
  • the computer-readable media 312 may include volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth).
  • the computer-readable media 312 may include fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth).
  • the computer-readable media 312 may be configured in a variety of other ways as further described below.
  • the computer-readable media 312 may store instructions including data capture instructions 314, model or mesh generation instructions 316, data integration instructions 318, diagnostics instructions 320, change determination instruction 322, reporting instructions 324, as well as other instructions 326, such as an operating system.
  • the computer-readable media 312 may also be configured to store data, such as the image data 328, the sensor data 330, volumetric models and data 332, parameter data 334, change data 336, condition data 338, patient data 340, machine learned models 342, and the like.
  • the data capture instructions 314 may be configured to assist the healthcare professional or other users (such as the patient) in capturing the image data 328 and/or the sensor data 330 of the feature.
  • the data capture instructions 314 may cause a partial mesh to appear on the user interface 302 with arrows or highlighted areas that require additional scanning, thereby visually showing the user a current state of the model and/or improving the resulting volumetric model.
  • the model or mesh generation instructions 316 may be configured to receive the image data 328 and/or the sensor data 330 of a feature of a body and to generate a 3D model or mesh of the feature in response.
  • the 3D model may be a volumetric model of a feature.
  • the data integration instructions 318 may be configured to integrate one or more types of sensor data 330 into the 3D model or mesh, such as to generate the multi-modal 3D volumetric model of the feature.
  • the data integration instructions 318 may estimate volumetric values, such as temperature, within the entire volume of the body part, and may utilize interpolation techniques to achieve concurrent estimates while accumulating additional observations or sensor data over time.
  • the change determination instruction 322 may be configured to detect changes in the volumetric models and data 332 generated over time. For example, the change determination instruction 322 may detect changes in amplitude, magnitude, size, concentrations, and the like of detected elevated temperatures, biological loads, dielectric properties, and the like between a first and second multi-modal volumetric model as discussed herein.
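  • a minimal sketch of such change detection (assuming the two volumetric models have already been registered to a common grid upstream) might subtract successive parameter fields and flag voxels whose change exceeds a minimum delta; the delta value and the synthetic data are assumptions:

```python
# Sketch: per-voxel change between two aligned scans of the same feature, with
# a mask of voxels whose parameter (e.g., temperature in C) changed by at least
# min_delta. Registration of the two models is assumed to happen upstream.
import numpy as np

def change_map(scan_a, scan_b, min_delta=0.5):
    delta = scan_b - scan_a
    return delta, np.abs(delta) >= min_delta

week_1 = np.random.normal(37.0, 0.2, size=(32, 32, 32))
week_2 = week_1.copy()
week_2[10:14, 10:14, 10:14] += 1.2     # synthetic growing hot region
delta, changed = change_map(week_1, week_2)
print(changed.sum(), "voxels changed by >= 0.5 C")
```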
  • the reporting instructions 324 may be configured to send or transmit the diagnostic outputs, the metrics, the 3D models, and the like.
  • the reporting instruction 324 may send the diagnostic outputs, the metrics, the 3D models, and the like to insurance providers, other medical health care professional systems, patient portals or systems, and the like.
  • FIG. 4 is an example block diagram of an architecture for a cloud-based anatomical data processing system 400 associated with generating multi-modal volumetric data according to some implementations.
  • the cloud-based anatomical data processing system 400 can include one or more communication interface(s) 402 that enable communication between the cloud-based anatomical data processing system 400 and one or more other local or remote computing device(s) or remote services, such as the user equipment.
  • the communication interface(s) 402 can facilitate communication with other proximate sensor systems and/or other facility systems.
  • the communications interfaces(s) 402 may enable Wi-Fi-based communication such as via frequencies defined by the IEEE 802.11 standards, short range wireless frequencies such as Bluetooth, BLE, cellular communication (e.g., 2G, 3G, 4G, 4G LTE, 5G, etc.), satellite communication, dedicated short-range communications (DSRC), or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s).
  • Wi-Fi-based communication such as via frequencies defined by the IEEE 802.11 standards, short range wireless frequencies such as Bluetooth, BLE, cellular communication (e.g., 2G, 3G, 4G, 4G LTE, 5G, etc.), satellite communication, dedicated short-range communications (DSRC), or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s).
  • the cloud-based anatomical data processing system 400 may include one or more processors 404 and one or more computer-readable media 406. Each of the processors 404 may itself comprise one or more processors or processing cores.
  • the computer-readable media 406 is illustrated as including memory/storage.
  • the computer-readable media 406 may include volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth).
  • the computer-readable media 406 may include fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth).
  • the computer-readable media 406 may be configured in a variety of other ways as further described below.
  • Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media 406 and configured to execute on the processors 404.
  • the computer-readable media 406 stores instructions including data capture instructions 408, model or mesh generation instructions 410, data integration instructions 412, diagnostics instructions 414, change determination instruction 416, reporting instructions 418, as well as other instructions 420, such as an operating system.
  • the computer-readable media 406 may also be configured to store data, such as the image data 422, the sensor data 424, volumetric models and data 426, parameter data 428, change data 430, condition data 432, patient data 434, machine learned models 436, and the like.
  • the data capture instructions 408 may be configured to assist the healthcare professional or other users (such as the patient) in capturing the image data 422 and/or the sensor data 424 of the feature.
  • the data capture instruction 408 may send notifications to the user equipment to assist with collection of additional data 422 and 424 for integration into the volumetric model of the feature.
  • the model or mesh generation instructions 410 may be configured to receive the image data 422 and/or the sensor data 424 of a feature of a body and to generate a 3D model or mesh of the feature in response.
  • the 3D model may be a volumetric model of a feature.
  • the data integration instructions 412 may be configured to integrate one or more types of sensor data 424 into the 3D model or mesh, such as to generate the multi-modal 3D volumetric model of the feature.
  • the data integration instructions 412 may estimate volumetric values, such as temperature, within the entire volume of the body part, and may utilize interpolation techniques to achieve concurrent estimates while accumulating additional observations or sensor data over time.
  • the diagnostics instructions 414 may be configured to detect possible concerns, issues, or conditions with the feature based at least in part on the volumetric models and data 426, such as detection of elevated temperature, biological load, dielectric properties, and the like at various regions within the volumetric models and data 426.
  • the change determination instruction 416 may be configured to detect changes in the volumetric models and data 426 generated over time. For example, the change determination instruction 416 may detect changes in amplitude, magnitude, size, concentrations, and the like of detected elevated temperatures, biological loads, dielectric properties, tissue perfusion, tissue oxygenation, and the like between a first and second multi-modal volumetric model as discussed herein.
  • the reporting instructions 418 may be configured to send or transmit the diagnostic outputs, the metrics, the 3D models, and the like.
  • the reporting instruction 418 may send the diagnostic outputs, the metrics, the 3D models, and the like to insurance providers, other medical health care professional systems, patient portals or systems, and the like.
  • FIGS. 5-13 are flow diagrams illustrating example processes associated with the generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model, as discussed herein.
  • the processes are illustrated as a collection of blocks in a logical flow diagram, which represent a sequence of operations, some or all of which can be implemented in hardware, software, or a combination thereof.
  • the blocks represent computer-executable instructions stored on one or more computer-readable media that, when executed by one or more processor(s), perform the recited operations.
  • computer-executable instructions include routines, programs, objects, components, encryption, deciphering, compressing, recording, data structures and the like that perform particular functions or implement particular abstract data types.
  • FIG. 5 is an example process 500 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • the system discussed herein may include a local device in physical proximity to a patient and/or a cloud-based service or system configured to process data generated by the local device.
  • the system may generate 3D multi-modal volumetric data or models of a feature of a human body for use in diagnostics and patient monitoring.
  • the system may receive image data and/or sensor data of a feature of a body.
  • the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature.
  • the system may generate, based at least in part on the image data, a 3D mesh associated with the feature of the body and, at 506, the system may integrate, based at least in part on the sensor data, physical parameter data associated with the feature of the body into the 3D mesh.
  • the image data and sensor data may include multiple observations (such as to reduce noise) by accumulating the image and sensor data over time.
  • the system may perform time-averaging or a weighted time averaging of the image data and/or sensor data (such as in which the weights are proportional to the estimated quality of the signal).
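  • such weighted time-averaging might look like the following minimal sketch, in which each new observation of a voxel grid is blended in with a weight proportional to an estimated signal quality; the quality estimate, grid shape, and values are assumptions:

```python
# Minimal sketch of weighted time-averaging across repeated scans; equivalent
# to sum(w_i * x_i) / sum(w_i) maintained incrementally per voxel.
import numpy as np

def weighted_update(mean, weight_sum, observation, quality_weight):
    weight_sum = weight_sum + quality_weight
    mean = mean + (quality_weight / weight_sum) * (observation - mean)
    return mean, weight_sum

grid = np.zeros((64, 64, 64))     # running weighted mean (e.g., temperature)
weights = np.zeros_like(grid)     # accumulated weight per voxel
scan = np.random.normal(37.0, 0.2, size=grid.shape)  # one synthetic observation
grid, weights = weighted_update(grid, weights, scan, quality_weight=1.0)
```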
  • the system may apply one or more signed distance functions (SDF) or truncated signed distance functions (TSDF) that may be extended beyond the image data to other physical parameters represented by the sensor data as a physical TSDF (PTSDF).
  • the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques.
  • some 3D reconstruction techniques may include triangular and rectangular marching cubes.
  • the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
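  • one way to realize the TSDF-to-mesh conversion described above is via the marching cubes implementation in scikit-image, as sketched below; the spherical TSDF here is synthetic and stands in for a fused scan:

```python
# Extract a triangle mesh from a (synthetic) truncated signed distance volume
# at its zero level set using marching cubes.
import numpy as np
from skimage import measure

x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
tsdf = np.sqrt(x**2 + y**2 + z**2) - 0.6   # signed distance to a sphere
tsdf = np.clip(tsdf, -0.1, 0.1)            # truncation, the "T" in TSDF
verts, faces, normals, values = measure.marching_cubes(tsdf, level=0.0)
print(verts.shape, faces.shape)            # mesh vertices and triangle indices
```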
  • the system may also solve for systems of partial differential equations (PDEs) including a large number of parameters.
  • the physical parameters may be stored as textures, in which case the physical parameters may be applied to the volume of the 3D mesh using techniques used to apply textures to 3D meshes such as, for example, 3D UV mapping and texturing.
  • generating the 3D mesh, the PTSDF, or multi-modal volumetric model may include mesh cleaning, such as removing vertices or faces that overlap or that are too close to each other, closing holes or making the 3D mesh watertight (e.g., via techniques such as Poisson surface reconstruction), and/or reducing a number of faces in the 3D mesh so that the resulting number of faces meets a given requirement for average mesh density (such as to improve performance), as sketched below.
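  • the mesh-cleaning steps above could, for example, be realized with an off-the-shelf library such as Open3D, as in the hedged sketch below; the input file name and the target face budget are assumptions:

```python
# Sketch of mesh cleaning with Open3D: deduplicate vertices, drop degenerate
# faces, repair non-manifold edges, and decimate to a target face budget.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("foot_scan.ply")   # hypothetical input mesh
mesh.remove_duplicated_vertices()
mesh.remove_degenerate_triangles()
mesh.remove_non_manifold_edges()
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
o3d.io.write_triangle_mesh("foot_scan_clean.ply", mesh)
```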
  • the system may determine, based at least in part on the 3D mesh having integrated the sensor data and, thus, the physical parameters, a potential condition associated with the feature of the body and, at 510, the system may determine, based at least in part on the 3D mesh, a region associated with the potential condition. For example, the system may compare various regions of the 3D mesh or multi-modal volumetric model to one or more thresholds over corresponding types or modes of the sensor data. For instance, the system may compare the temperature data to one or more thresholds in order to detect potential ulcers, wounds, infections, sores, or the like. In some cases, the system may determine the region by detecting an edge or transition boundary between sensor data (e.g., temperature) at or above the one or more thresholds and sensor data below the one or more thresholds, as in the sketch below.
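  • the threshold test described above might reduce to something like the following sketch over per-vertex temperatures integrated into the mesh; the cutoff value and the synthetic data are assumptions, not clinical values:

```python
# Sketch: flag mesh vertices whose integrated temperature meets or exceeds a
# cutoff; the flagged set forms the candidate region for review.
import numpy as np

rng = np.random.default_rng(0)
vertex_temps = rng.normal(37.0, 0.3, size=5000)  # per-vertex temperatures (C)
vertex_temps[1200:1300] += 2.5                   # synthetic elevated patch

HOT_THRESHOLD_C = 38.5                           # assumed cutoff, not clinical
hot_vertices = np.flatnonzero(vertex_temps >= HOT_THRESHOLD_C)
print(f"{hot_vertices.size} vertices in candidate hot region")
```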
  • the system may send an alert including the 3D mesh or multi-modal volumetric model, the region identified on the 3D mesh or multi-modal volumetric model, and/or an indication of the potential condition.
  • the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
  • FIG. 6 is another example process 600 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • the system discussed herein may include a local device in physical proximity to a patient and/or a cloud-based service or system configured to process data generated by the local device.
  • the system may generate 3D multi-modal volumetric data or models of a feature of a human body for use in diagnostics and patient monitoring.
  • the system may receive image data and/or sensor data of a feature of a body.
  • the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature.
  • the system may generate, based at least in part on the image data, a 3D mesh associated with the feature of the body.
  • the system may apply one or more SDF or TSDF that may be extended beyond the image data to other physical parameters represented by the sensor data as a PTSDF, as discussed above.
  • the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques.
  • some 3D reconstruction techniques may include triangular and rectangular marching cubes.
  • the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
  • the system may determine, based at least in part on the sensor data and/or the 3D mesh, physical parameters associated with the feature of the body. In some cases, the system may integrate or overlay the physical parameter data onto the 3D mesh to generate a multi-modal volumetric model of the feature of the body, as discussed herein.
  • the system may determine, based at least in part on the physical parameter and/or the three-dimensional mesh, gradients and/or Laplacians associated with regions representing differentials in the physical parameter data associated with the feature of the body and, at 610, the system may determine, based at least in part on the gradients and/or Laplacians, local maxima and/or local minima.
  • the gradients and Laplacians may be determined using finite differences, which allows the identification of local maxima and/or minima at 610 by identifying points within the 3D volume or mesh of the feature at which the gradients approach zero and the Laplacian is negative (for a local maximum) or positive (for a local minimum).
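A minimal sketch of this finite-difference test, assuming a voxelized parameter volume; `grad_eps`, the tolerance for "gradients approach zero," is an assumed parameter:

```python
import numpy as np
from scipy import ndimage

def find_extrema(param_volume: np.ndarray, grad_eps: float = 1e-3):
    # Central-difference gradients along each axis of the volume.
    g0, g1, g2 = np.gradient(param_volume)
    grad_mag = np.sqrt(g0**2 + g1**2 + g2**2)
    # Discrete Laplacian via finite differences.
    lap = ndimage.laplace(param_volume)
    flat = grad_mag < grad_eps
    maxima = np.argwhere(flat & (lap < 0))  # gradient ~ 0, Laplacian < 0
    minima = np.argwhere(flat & (lap > 0))  # gradient ~ 0, Laplacian > 0
    return maxima, minima
```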
  • the system may determine, based at least in part on the three-dimensional mesh and the local maxima or local minima, a region associated with the physical parameter. For example, the system may compare the local maxima and/or local minima to one or more thresholds to determine a region associated with an abnormal value of the physical parameter. For instance, when the physical parameter is temperature, the system may determine a hot region or a cold region based at least in part on one or more thresholds associated with normal human body temperature.
  • the system may determine that the region is infected with one or more types of bacteria when the microbial load is at or above the one or more microbial load thresholds (e.g., saturation thresholds, brightness thresholds, size thresholds, and the like).
  • the system may also determine a type of microbe associated with the region based on additional sensor data, such as fluorescence data (e.g., fluorescence color) compared with known microbe response to UV light.
  • the physical parameter is tissue perfusion and/or tissue oxygenation, both of which may be detected by near infrared (NIR) spectroscopy.
  • Local minima may represent necrotic tissue while local maxima may represent infected tissue.
  • Tissue perfusion and oxygenation can further be applied to monitor the progression of the healing of wounds, and the acceptance and progressive healing of skin grafts.
  • the system may send an alert including the region identified on the three-dimensional mesh and an indication of the potential condition.
  • the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
  • FIG. 7 is another example process 700 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • the system may generate the 3D mesh or multi-modal volumetric model to detect potential conditions of the feature of the human body being scanned, including conditions that are not visible (e.g., below the surface).
  • process 700 illustrates a method for detecting potential ulcers and/or pre-ulcerative lesions prior to the ulcer forming on the exposed skin.
  • the system may receive image data and/or sensor data of a feature of a body.
  • the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature.
  • the system may generate, based at least in part on the image data, a 3D mesh associated with the feature of the body.
  • the system may apply one or more SDF or TSDF that may be extended beyond the image data to other physical parameters represented by the sensor data as a PTSDF, as discussed above.
  • the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques.
  • some 3D reconstruction techniques may include triangular and rectangular marching cubes.
  • the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
  • the system may determine, based at least in part on the sensor data and/or the 3D mesh, physical parameters associated with the feature of the body. In some cases, the system may integrate or overlay the physical parameter data onto the 3D mesh to generate a multi-modal volumetric model of the feature of the body, as discussed herein.
  • the system may determine, based at least in part on the physical parameter and/or the three-dimensional mesh, gradients and/or Laplacians associated with regions representing differentials in the physical parameter data associated with the feature of the body and, at 710, the system may determine local maxima based at least in part on the gradients and/or Laplacians.
  • the gradients and Laplacians may be determined using finite differences, which allows the identification of local maxima at 710 by identifying points within the 3D volume or mesh of the feature at which the gradients approach zero and the Laplacian is negative (indicating a local maximum).
  • the system may determine, based at least in part on the three-dimensional mesh and the local maxima, a region associated with the physical parameter (e.g., temperature). For example, the system may compare the local maxima to one or more thresholds to determine a region associated with an abnormal value of the physical parameter (e.g., temperature). For instance, when the physical parameter is temperature, the system may determine a hot region based at least in part on one or more thresholds associated with normal human body temperature.
  • the system may identify the region as a potential ulcer or pre-ulcerative lesion. For example, the system may compare all local maxima identified to an initial threshold (such as approximately 37 degrees Celsius or 98.6 degrees Fahrenheit). When a local maximum exceeds the one or more thresholds, the system may identify the region as a potential ulcerous or pre-ulcerative lesion location. In some cases, regions having a local maximum above a second threshold may be marked and available for visualization on the 3D mesh by the user or health care professional.
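The two-stage screening described above might be sketched as follows; the 38.0 degrees Celsius second (marking) threshold is a placeholder assumption, as the disclosure only gives the approximate 37 degrees Celsius initial threshold:

```python
INITIAL_TEMP_C = 37.0   # initial screening threshold (~98.6 F)
MARKING_TEMP_C = 38.0   # assumed second threshold for visualization

def classify_hot_spots(local_maxima_temps_c):
    """Return indices of potential lesion sites and of sites to mark
    for visualization on the 3D mesh."""
    flagged, marked = [], []
    for i, temp in enumerate(local_maxima_temps_c):
        if temp >= INITIAL_TEMP_C:
            flagged.append(i)   # potential ulcerous/pre-ulcerative site
        if temp >= MARKING_TEMP_C:
            marked.append(i)    # mark for display on the 3D mesh
    return flagged, marked
```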
  • the system may send an alert including the region identified on the three-dimensional mesh and an indication of the potential ulcer or pre-ulcerative lesion.
  • the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
  • FIG. 8 is another example process 800 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • the system may generate the 3D mesh or multi-modal volumetric model to detect potential conditions of the feature of the human body being scanned, including conditions that are not visible (e.g., below the surface).
  • process 800 illustrates a method for detecting potential necrosis.
  • the system may receive image data and/or sensor data of a feature of a body.
  • the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature.
  • the system may determine, based at least in part on the physical parameter and/or the three-dimensional mesh, gradients and/or Laplacians associated with regions representing differentials in the physical parameter data associated with the feature of the body and, at 810, the system may determine local minima based at least in part on the gradients and/or Laplacians.
  • the gradients and Laplacians may be determined using finite differences, which allows the identification of local minima at 810 by identifying points within the 3D volume or mesh of the feature at which the gradients approach zero and the Laplacian is positive (indicating a local minimum).
  • the system may determine, based at least in part on the three-dimensional mesh and the local minima, a region of concern associated with the physical parameter (e.g., temperature). For example, the system may compare the local minima to one or more thresholds to determine a region associated with an abnormal value of the physical parameter (e.g., temperature). For instance, when the physical parameter is temperature, the system may determine a cold region based at least in part on one or more thresholds associated with normal human body temperature.
  • the system may identify the region as a potential necrosis. For example, the system may compare all local minima identified to an initial threshold (such as approximately 33, 34, or 36 degrees Celsius). When a local minimum falls below the one or more thresholds, the system may identify the region as a potential necrosis location. In some cases, regions having a local minimum below a second threshold may be marked and available for visualization on the 3D mesh by the user or health care professional.
  • the system may send an alert including the region identified on the three-dimensional mesh and an indication of the potential necrosis.
  • the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
  • FIG. 9 is another example process 900 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • the system may generate the 3D mesh or multimodal volumetric model to detect potential conditions, including conditions that are related to, responsive to, and/or caused by bacteria and other microbes.
  • the system may emit ultraviolet light onto a feature of a body and, at 904, the system may capture image data and/or sensor data (e.g., UV data) of the feature of the body while the UV light is emitted.
  • the UV light may be emitted onto the feature being scanned to cause any microbial activity to fluoresce in a manner that may be detected by one or more UV sensors.
  • the system may generate, based at least in part on the image data, a three-dimensional mesh associated with the feature of the body. For example, the system may apply one or more SDF or TSDF that may be extended beyond the image data to other physical parameters represented by the sensor data as a PTSDF, as discussed above.
  • the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques.
  • some 3D reconstruction techniques may include triangular and rectangular marching cubes.
  • the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
  • the system may determine, based at least in part on the sensor data, a microbial load associated with a region of the feature of the body. For example, based on the sensor data, such as a fluorescence color, intensity, saturation, size of a region, and the like, the system may determine a type or species of microbe associated with one or more regions and an estimated or approximate quantity or load of each species of microbe. The system may also determine a bounding region associated with each instance of the microbial loads, such as via the local maxima and gradient techniques discussed herein.
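One hedged way to bound fluorescence regions and estimate a per-region load is connected-component labeling over thresholded intensity and saturation channels; the threshold constants below are illustrative assumptions, not calibrated values:

```python
import numpy as np
from scipy import ndimage

def microbial_regions(intensity: np.ndarray, saturation: np.ndarray,
                      i_thresh: float = 0.4, s_thresh: float = 0.5):
    # Pixels that fluoresce strongly and saturate above threshold are
    # treated as candidate microbial activity.
    mask = (intensity > i_thresh) & (saturation > s_thresh)
    labels, n = ndimage.label(mask)
    # Total fluorescence intensity per region as a load proxy.
    loads = ndimage.sum(intensity, labels, index=range(1, n + 1))
    boxes = ndimage.find_objects(labels)  # bounding region per instance
    return labels, loads, boxes
```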
  • the system may determine, based at least in part on the microbial load, a potential condition associated with the feature of the body. For example, using the species or type of microbe and the load data, the system may determine specific conditions, such as types of infections (e.g., bacterial infection caused by multiple classes of bacteria, fungal infection, cellulitis, abscesses, warts, human papillomavirus (HPV), and the like).
  • the system may be theranostic. That is, the same UV light used to cause fluorescence of tissue may also be used to kill pathogens, thus having a therapeutic application in addition to a diagnostic one.
  • the system may send an alert including the region identified on the three-dimensional mesh and an indication of the potential microbial condition.
  • the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
  • FIG. 10 is another example process 1000 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • the system may generate the 3D mesh or multimodal volumetric model to detect potential conditions, including conditions that are related to, responsive to, and/or caused by low oxygenation levels.
  • the system may emit infrared (IR) light onto a feature of a body and, at 1004, the system may capture image data and/or sensor data (e.g., IR data) of the feature of the body while the IR light is emitted.
  • the IR light may be absorbed by oxygenated red blood cells, thus indicating a higher concentration of oxygen and, thus, perfusion, at the tissue location.
  • the system may generate, based at least in part on the image data, a three-dimensional mesh associated with the feature of the body. For example, the system may apply one or more SDF or TSDF that may be extended beyond the image data to other physical parameters represented by the sensor data as a PTSDF, as discussed above.
  • the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques.
  • some 3D reconstruction techniques may include triangular and rectangular marching cubes.
  • the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
  • the system may determine, based at least in part on the sensor data, an oxygenation level associated with a region of the feature of the body (including but not limited to tissue regions and arterial oxygenation levels). For example, based on the sensor data, such as blood flow, color, and the like, the system may determine a level of oxygenation associated with one or more regions. The system may also determine a bounding region associated with each instance of the different oxygenation levels, such as via the local maxima, local minima, and/or gradient techniques discussed herein.
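As one hedged example, a relative tissue-oxygenation index can be estimated from attenuation at two NIR wavelengths with a modified Beer-Lambert model; the wavelengths and extinction coefficients below are placeholders, not calibrated values from the disclosure:

```python
import numpy as np

# Assumed (placeholder) extinction coefficients at two NIR wavelengths.
E_HBO2 = {760: 0.60, 850: 1.10}   # oxyhemoglobin
E_HB   = {760: 1.40, 850: 0.80}   # deoxyhemoglobin

def oxygenation_index(att_760: np.ndarray, att_850: np.ndarray):
    """Per-pixel ratio of oxy- to total hemoglobin, up to a common
    path-length factor, from two attenuation maps."""
    a = np.array([[E_HBO2[760], E_HB[760]],
                  [E_HBO2[850], E_HB[850]]])
    inv = np.linalg.inv(a)  # solve the 2x2 Beer-Lambert system
    hbo2 = inv[0, 0] * att_760 + inv[0, 1] * att_850
    hb = inv[1, 0] * att_760 + inv[1, 1] * att_850
    return hbo2 / np.clip(hbo2 + hb, 1e-9, None)
```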
  • the system may determine, based at least in part on the oxygenation level, a potential condition associated with the feature of the body. For example, using the level of oxygenation, the system may determine specific conditions, such as skin breakdowns, pressure ulcers, cyanosis, chronic venous insufficiency, and the like. In some cases, the level of oxygenation may also be utilized to determine the progress of a therapy or treatment of a known condition, such as healing of wounds. In this case, the level of oxygenation may be determined over time and the amount and rate of change may be utilized to determine progress of treatment and/or healing of the wound.
  • at 1012, the system may send an alert including the region identified on the three-dimensional mesh and an indication of the potential condition. For example, the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
  • FIG. 11 is another example process 1100 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
  • the system discussed herein may be configured to monitor a condition of a patient or user over time. For example, if the user is suffering from a wound, bed sore, or the like, the system may be configured to monitor changes in size, depth, blood flow, temperature (e.g., infection), and the like in a medical facility, at home, and/or the like.
  • the system may receive image data and/or sensor data of a feature of a body having a condition and undergoing therapy for the condition.
  • the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, tissue perfusion, tissue oxygenation, pressure, electric field, infrared, ultraviolet, and the like) of the feature.
  • prior 3D multi-modal volumetric models or 3D meshes of the feature including the wound may exist and be stored with respect to a non-transitory computer readable media accessible to the system.
  • the system may generate, based at least in part on the image data, a 3D mesh associated with the feature of the body including the condition.
  • the system may apply one or more SDF or TSDF that may be extended beyond the image data to other physical parameters represented by the sensor data as a PTSDF, as discussed above.
  • the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques.
  • some 3D reconstruction techniques may include triangular and rectangular marching cubes.
  • the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
  • the system may determine, based at least in part on the sensor data, a physical parameter(s) (such as a microbial load or the like) associated with the condition of the feature of the body.
  • the system may integrate or overlay physical parameter(s) (such as a microbial load or the like) data onto the 3D mesh to generate a multi-modal volumetric model of the feature of the body, as discussed herein.
  • the system may determine, based at least in part on the physical parameter(s) (such as a microbial load or the like) and a second three-dimensional mesh of the feature of the body captured at a prior period of time, a change in the physical parameter associated with the feature of the body.
  • the system may be configured to periodically, continuously, or at various other intervals generate the 3D mesh and/or multi-modal volumetric model, such that a record of the feature and/or condition is generated over a period of time.
  • the system may compare the microbial load (and/or other physical parameters) of a current 3D mesh or model to the prior generated models to determine if there is a change, such as an increase or decrease in the microbial load, a size of the microbial activity, or the like.
  • the system may determine, based at least in part on the change in the region and/or the physical parameter, if the treatment is improving the condition.
  • the change may be a change in the size of the region associated with the condition, a change in the total metric or physical parameter (e.g., temperature or the like), or the like.
  • the system may determine if the condition is improving based at least in part on the change. For example, if the size of the region associated with a microbial load is shrinking or the total microbial load is decreasing, the treatment may be improving the condition. Otherwise, if the size of the region is stable or growing, or the total microbial load is stable or increasing, the treatment is failing to improve the condition. If the condition is improving, the process 1100 may proceed to 1114 and send a recommendation to continue the therapy to a health professional, insurance company, patient, guardian, and/or the like.
  • otherwise, the process 1100 may proceed to 1116 and send a recommendation to alter the therapy to a health professional, insurance company, patient, guardian, and/or the like.
  • the system may recommend alternative therapy or treatment.
  • the system may advise the health professional to review the data and select an alternative treatment or therapy.
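A hypothetical helper for this continue-versus-alter decision could compare successive scans as follows; the function name, inputs, and the 5% tolerance are assumptions for illustration:

```python
def therapy_recommendation(prev_area_mm2: float, prev_load: float,
                           curr_area_mm2: float, curr_load: float,
                           tolerance: float = 0.05) -> str:
    """Compare the current scan's region size and total microbial load
    against the prior scan and emit a coarse recommendation."""
    area_shrinking = curr_area_mm2 < prev_area_mm2 * (1 - tolerance)
    load_falling = curr_load < prev_load * (1 - tolerance)
    if area_shrinking or load_falling:
        return "continue therapy"  # condition appears to be improving
    return "review data and consider alternative therapy"
```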
  • FIG. 12 is an example process 1200 flow diagram associated with generating multi-modal volumetric data according to some implementations.
  • a system may be configured to generate a multi-modal volumetric model of a feature of a body.
  • the system may overlay one or more physical parameters onto the 3D mesh of the feature of the body in order to generate the multi-modal volumetric model that may be viewed by a healthcare professional, patient, or other user.
  • the system may receive image data, depth data, and/or sensor data of a feature of a body being modeled.
  • the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature.
  • the image data, the depth data and the sensor data may be captured by a single device concurrently.
  • the image data and the depth data 1204 may be provided to a 3D mesh generation system and the sensor data 1206 may, concurrently, be provided to a system for generating a parameter map of a physical parameter being exhibited by the feature of the body.
  • the system may generate a 3D model, mesh, or depth map of the feature of the body and output local surfaces 1208 of the 3D model, mesh, or depth map.
  • the system may generate the 3D model or depth map by applying a time integrating loop (such as a KinectFusion technique). For example, the system may determine a new pose for an existing 3D model or depth map based at least in part on image data and the depth data 1204. The system may extract expected surfaces from the new pose.
  • the system may apply one or more tracking algorithms or techniques (such as an iterative closest point technique) to refine the pose estimates.
  • the system may, based at least in part on the refined pose estimates, project the depth map onto a signed distance function (SDF).
  • the system may also perform weighted average updates of the SDF on select voxels to output the 3D mesh, model, and depth map as well as one or more local surfaces 1208 associated with the 3D mesh and depth map.
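For reference, the weighted-average SDF update used by KinectFusion-style pipelines can be sketched as below; this is a simplification under the assumption of per-voxel observation weights, and the `max_weight` cap is a common but assumed choice:

```python
import numpy as np

def fuse_observation(tsdf: np.ndarray, weights: np.ndarray,
                     observed: np.ndarray, obs_weight: np.ndarray,
                     max_weight: float = 64.0):
    """Fuse one projected depth observation into the voxel grid as a
    running weighted mean of signed distances."""
    valid = obs_weight > 0
    w_new = weights[valid] + obs_weight[valid]
    tsdf[valid] = (tsdf[valid] * weights[valid]
                   + observed[valid] * obs_weight[valid]) / w_new
    # Capping the weight keeps the model responsive to change.
    weights[valid] = np.minimum(w_new, max_weight)
    return tsdf, weights
```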
  • the system may determine local surface normals 1212 based at least in part on the local surfaces 1208.
  • the system may utilize a point-to-surface correspondence technique to determine the local surfaces 1208.
  • the system may utilize techniques such as principal component analysis, least-squares plane fitting, nearest neighbors, or the like.
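The principal component analysis option reduces to an eigen-decomposition of the local covariance: the normal of a surface patch is the direction of least variance among its neighboring points. A minimal sketch:

```python
import numpy as np

def pca_normal(neighborhood: np.ndarray) -> np.ndarray:
    """Estimate a surface normal from a (k, 3) array of nearby points
    as the eigenvector with the smallest eigenvalue."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    return eigvecs[:, 0]                    # least-variance direction
```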
  • the system may generate a physical parameter map 1216 of the feature of the body, based at least in part on the sensor data 1206.
  • the physical parameter may be temperature, microbial load, dielectric properties, blood flow, electric field potential, magnetic field strength, tissue perfusion, tissue oxygenation, and the like.
  • the system may perform angular and distance correction on the physical parameter map 1216.
  • the angular and distance correction may be applied for each pixel in the physical parameter map 1216 based at least in part on the surface normals 1212 generated at 1210.
  • the angular and distance correction may be based at least in part on a-priori knowledge of a response of the sensor device used to capture the sensor data 1206 and/or the image and depth data 1204.
  • the a-priori knowledge may be determined based on the sensor response to different parameter readings at different distances from the feature of the body and/or at different angles with respect to the feature of the body.
  • the system may correctly align the values of the physical parameter map to the local surfaces 1208 using known characteristics of the sensor device.
  • the angular and distance correction may be performed for each pixel of the physical parameter map 1216.
  • the angular and distance correction may be performed with respect to regions or segments of pixels as a single concurrent operation (e.g., the same correction is applied to each pixel of the region or segment).
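As a stand-in for the a-priori sensor response model, a cosine-and-inverse-square correction is sketched below; a real system would substitute its calibrated angular and distance response, so treat the functional form as an assumption:

```python
import numpy as np

def correct_reading(value: float, surface_normal: np.ndarray,
                    view_dir: np.ndarray, distance_m: float,
                    ref_distance_m: float = 0.5) -> float:
    """Normalize a sensor reading for viewing angle and distance."""
    # Angle between the surface normal and the direction to the sensor.
    cos_theta = np.clip(np.dot(surface_normal, -view_dir), 1e-3, 1.0)
    # Assumed model: response falls off with cos(angle) and with the
    # square of distance relative to a calibration distance.
    return value / cos_theta * (distance_m / ref_distance_m) ** 2
```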
  • the system may apply mean value properties of a harmonic function to the physical parameter map and, at 1222, the system may apply spatial averaging to the physical parameter.
  • the system may apply the mean value properties of the harmonic function to estimate a value of a given point in the physical parameter map as an average determined by pixels in a circle or neighborhood proximate to the given point.
  • the system may apply mean value properties of the harmonic function and spatial averaging to complete or fill empty points or pixels in the physical parameter map, replace outliers, or the like.
  • the system may utilize spatial averaged estimates to remove outliers, apply one or more Laplacian filters, or the like.
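A minimal realization of that mean-value fill averages the valid neighbors within a small window around each empty pixel; the window radius is an assumed parameter:

```python
import numpy as np
from scipy import ndimage

def fill_missing(param_map: np.ndarray, valid: np.ndarray,
                 radius: int = 2) -> np.ndarray:
    """Estimate empty pixels as the mean of valid neighbors, per the
    mean value property of harmonic functions."""
    size = 2 * radius + 1
    # Windowed mean of valid values divided by the valid fraction
    # yields the mean over valid neighbors only.
    sums = ndimage.uniform_filter(np.where(valid, param_map, 0.0), size)
    frac = ndimage.uniform_filter(valid.astype(float), size)
    filled = param_map.copy()
    fillable = (~valid) & (frac > 0)
    filled[fillable] = sums[fillable] / frac[fillable]
    return filled
```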
  • the system may update local value estimates of one or more pixels of the physical parameter map and, at 1226, the system may apply the mean value property and temporally integrate the physical parameter map. For example, the system may apply the mean value property theorem.
  • the system may determine thermodynamic cases associated with the physical parameter map. For example, the system may apply a Poisson technique or algorithm to solve for the thermodynamic cases of the physical parameter map based at least in part on values in an isosurface.
  • values different from zero may be used to define the isosurface. For instance, in an example in which the physical property is temperature, a positive value may be applied for a heating body (e.g., a feature or region that is increasing in temperature) while a negative value may be applied for a cooling body (e.g., the feature or region that is decreasing in temperature).
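To make the source/sink sign convention concrete, a basic Jacobi iteration for the resulting Poisson problem might look like the following; a unit voxel spacing and zero boundary values are assumed, and the disclosure's actual solver is not specified at this level of detail:

```python
import numpy as np

def jacobi_poisson(rhs: np.ndarray, iters: int = 200) -> np.ndarray:
    """Solve laplacian(u) = rhs on a 3D grid, where rhs is positive
    inside heating regions and negative inside cooling regions."""
    u = np.zeros_like(rhs)
    for _ in range(iters):
        u[1:-1, 1:-1, 1:-1] = (
            u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1]
            + u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1]
            + u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2]
            - rhs[1:-1, 1:-1, 1:-1]) / 6.0
    return u
```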
  • the system may apply temporal filters to the physical parameter map following angular and distance correction at 1218 and, at 1232, the system may store the localized parameter values and boundary conditions as localized values 1234.
  • the system may apply one or more Kalman filters to improve physical parameter value estimates within the physical parameter map.
  • the system may utilize localized physical parameter derivatives that correspond to estimates of one or more von Neumann boundary conditions.
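A scalar per-pixel Kalman update is one hedged way to realize the temporal filtering described above; the process-noise value is an assumption:

```python
def kalman_update(estimate: float, variance: float,
                  measurement: float, meas_variance: float,
                  process_variance: float = 1e-4):
    """One predict/update cycle refining a per-pixel parameter value."""
    variance = variance + process_variance        # predict step
    gain = variance / (variance + meas_variance)  # Kalman gain
    estimate = estimate + gain * (measurement - estimate)
    variance = (1.0 - gain) * variance
    return estimate, variance
```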
  • the system may determine global spatiotemporal signed distance functions for the physical parameter. In some cases, the system may determine the global spatiotemporal signed distance functions to improve localized physical parameter values at one or more pixels or points within the physical parameter map. In the current example, the system may utilize the localized values 1234 and the thermodynamic cases generated at 1228 to determine the global spatiotemporal signed distance functions.
  • the system may determine physical parameter sources and sinks with respect to the physical parameter map of the feature and, at 1240, the system may determine and visualize physical parameter flow (e.g., movement or changes in the physical parameters over the map, such as from one pixel to the next).
  • the system may overlay the physical parameter sources and sinks as high and low (hot and cold) regions on the 3D mesh.
  • the system may also overlay the physical flow (such as changes in temperature or parameter values) via one or more arrows showing increases and/or decreases between regions of the 3D mesh. In this manner, the system may generate a multi-modal volumetric model or mesh having an overlaid physical parameter map of the feature of the body.
  • FIG. 13 is an example process 1300 flow diagram associated with generating a three-dimensional model of a feature of a body according to some implementations.
  • a system may be configured to generate a 3D multi-modal volumetric model of a feature of a body.
  • the system may overlay one or more physical parameters onto the 3D mesh of the feature of the body in order to generate the multi-modal volumetric model that may be viewed by a healthcare professional, patient, or other user.
  • the system may generate the 3D mesh used to overlay the physical parameter values generated by various sensor systems and modalities, such as indicated by 1216 of FIG. 12.
  • the system may receive image data, depth data, and/or sensor data of a feature of a body being modeled.
  • the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature.
  • the image data, the depth data and the sensor data may be captured by a single device concurrently.
  • the system may determine a depth map based at least in part on the image data, the depth data, and/or the sensor data. For example, the system may determine the depth map of the feature using various 3D reconstruction techniques and the depth data and/or image data of the feature.
  • the system may determine a pose for the feature, such as a new pose or a change in pose as the feature moves or as additional depth data and/or image data is captured/generated at 1302. For example, the system may utilize an existing depth map to estimate a new pose of the feature.
  • the system may extract expected surfaces based at least in part on the new pose or the pose determined for the feature and, at 1310, the system may apply one or more tracking techniques to the physical parameter map based at least in part on local surface normals 1212 generated as discussed above with respect to process 1200 of FIG. 12.
  • the system may update the depth map.
  • the system may project the depth map onto a signed distance function (SDF) and perform one or more weighted average updates with respect to the SDF.
  • the system may also apply one or more weights to various voxels, surfaces, and/or pixels of the SDF.
  • the system may then output the SDF or the depth map as a 3D mesh of the feature for use by other processes, such as the process 1200 of FIG. 12 for overlaying or integrating of physical parameter data on the 3D mesh.
  • FIG. 14 is an example pictorial diagram 1400 of a multi-modal volumetric three-dimensional model 1402 of a feature of a patient according to some implementations.
  • the feature is a foot of a human and the multimodal volumetric three-dimensional model 1402 is showing a physical parameter, such as temperature in the current example, with respect to the 3D model 1402.
  • the model 1402 includes a high region (or heat source for temperature), generally indicated by 1404.
  • the system shows multiple regions proximate or about the hot spot 1404 having increasing temperature ranges that may be utilized to determine a size or region associated with the hot spot 1404 and may be used for further diagnostics of the foot.
  • a method comprising: capturing, via user equipment, sensor data of a feature of a human body; generating, based at least in part on the sensor data, a three-dimensional mesh of the feature of the human body; generating, based at least in part on the sensor data, a first physical parameter map associated with the feature of the human body and a first physical parameter; generating, based at least in part on the first physical parameter map and the three-dimensional mesh of the feature of the human body, a multi-modal volumetric model of the feature of the human body; and outputting the multi-modal volumetric model.
  • the sensor data includes image data, depth data, and data associated with at least one additional sensor modality; generating the three-dimensional mesh of the feature of the human body is based at least in part on the image data and the depth data; and generating the first physical parameter map associated with the feature of the human body is based at least in part on the data associated with at least one additional sensor modality.
  • D The method of C, wherein the data associated with at least one additional sensor modality includes at least one of: temperature data, microbial load data, dielectric property data, electric field potential data, magnetic field strength data, tissue perfusion data, and/or tissue oxygenation data.
  • the method of C further comprising: determining, based at least in part on the multi-modal volumetric model, a potential condition associated with the feature of the human body; and determining, based at least in part on the multi-modal volumetric model, a region of the feature of the body associated with the potential condition; and wherein outputting the multi-modal volumetric model includes outputting an indication of the region or the potential condition.
  • determining the potential condition further comprises determining that a quantity of the first physical parameter meets or exceeds a threshold.
  • determining the potential condition further comprises determining that a size, ratio, or percentage of the region associated with the potential condition meets or exceeds a threshold.
  • the method further comprises: capturing, via user equipment, second sensor data of the feature of the human body, the second sensor data captured at a second time subsequent to a time at which the first sensor data was captured and the second sensor data including second image data, second depth data, and second data associated with the at least one additional sensor modality; generating, based at least in part on the second image data and the second depth data, a second three-dimensional mesh of the feature of the human body; generating, based at least in part on the second data associated with the at least one additional sensor modality, a second physical parameter map associated with the feature of the human body and the first physical parameter; generating, based at least in part on the second physical parameter map and the second three-dimensional mesh of the feature of the human body, a second multi-modal volumetric model of the feature of the human body, the second multi-modal volumetric model including the region; and determining a difference in a size of the region or a quantity of the first physical parameter between the multi-modal volumetric model and the second multi-modal volumetric model.
  • J The method of I, further comprising determining a severity of the potential condition is decreasing based at least in part on a reduction in the size of the region or a reduction in the quantity of the first physical parameter.
  • K The method of I, further comprising determining a severity of the potential condition is increasing based at least in part on an increase in the size of the region or an increase in the quantity of the first physical parameter.
  • L The method of A, further comprising generating, based at least in part on the sensor data, a second physical parameter map associated with the feature of the human body and a second physical parameter; and wherein generating the multi-modal volumetric model of the feature of the human body is based at least in part on the second physical parameter map.
  • a system comprising a first sensor device to capture image data; a second sensor device to capture depth data; a third sensor device to capture data associated with a first physical parameter; one or more processors; and one or more non-transitory computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: generating, based at least in part on the image data and the depth data, a three-dimensional mesh of a feature of a human body; generating, based at least in part on the data associated with the first physical parameter, a first physical parameter map associated with the feature of the human body and the first physical parameter; generating, based at least in part on the first physical parameter map and the three-dimensional mesh of the feature of the human body, a multi-modal volumetric model of the feature of the human body; and outputting the multi-modal volumetric model.
  • N The system of M, wherein the data associated with a physical parameter includes at least one of temperature data, microbial load data, dielectric property data, electric field potential data, magnetic field strength data, tissue perfusion data, and/or tissue oxygenation data.
  • the operations further comprise determining, based at least in part on the data associated with the first physical parameter, a gradient associated with the first physical parameter and the feature of the human body; determining, based at least in part on the gradient, a local minima associated with the first physical parameter and the feature of the human body; determining, based at least in part on the local minima and the three-dimensional mesh, a region of the feature of the human body associated with the local minima; responsive to determining that one or more thresholds have been met or exceeded with respect to the local minima or the region, identifying the region as a potential necrosis; and sending to a remote device an alert associated with the potential necrosis.
  • the operations further comprise determining, based at least in part on the data associated with the first physical parameter, a gradient associated with the first physical parameter and the feature of the human body; determining, based at least in part on the gradient, a local maxima associated with the first physical parameter and the feature of the human body; determining, based at least in part on the local maxima and the three-dimensional mesh, a region of the feature of the human body associated with the local maxima; responsive to determining that one or more thresholds have been met or exceeded with respect to the local maxima or the region, identifying the region as a potential pre-ulcerative lesion or deep tissue pressure injury; and sending to a remote device an alert associated with the potential pre-ulcerative lesion or deep tissue pressure injury.
  • R One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising receiving, from user equipment, image data, depth data, and sensor data of a feature of an individual; generating, based at least in part on the image data and the depth data, a three-dimensional mesh of the feature of the individual; generating, based at least in part on the sensor data and the three-dimensional mesh, a multi-modal volumetric model including a representation of a first physical parameter of the feature of the individual; and outputting the multi-modal volumetric model.
  • T The one or more non-transitory computer-readable media of R, wherein the multi-modal volumetric model is associated with a point in time and the one or more non-transitory computer-readable media stores a plurality of multi-modal volumetric models associated with the feature of the individual, each multi-modal volumetric model captured at a different point in time, and the plurality of multi-modal volumetric models forms a record of the feature of the individual.


Abstract

A system for generating a multi-modal volumetric model of a feature of a body. For example, the system may determine a 3D mesh of the feature and integrate sensor data, such as temperature, microbial loads, dielectric properties, blood flow, electric field potential, magnetic field strength, tissue oxygenation, and the like into the 3D mesh to generate a multi-modal volumetric model that may be utilized for diagnostics.

Description

SYSTEM AND METHOD FOR VOLUMETRIC SENSING FOR MEDICAL APPLICATIONS
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application claims priority to U.S. Provisional Application No. 63/603,877 filed on November 29, 2023 and entitled “System and Method for Volumetric Sensing for Medical Applications,” which is incorporated herein by reference in its entirety.
BACKGROUND
[0002] Today, in conventional medical applications, the quality of care of patients may be influenced by the patient’s socio-economic status or by geographic limitations. In many cases, advanced care and treatments often require a level of care and detailed regular attention by medical experts that is often outside the financial or geographic reach of the patient. Accordingly, medical experts and clinicians require a remote system for scanning and diagnosing medical conditions as well as to monitor the progression of treatment and healing of their patients on a more regular basis.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical components or features.
[0004] FIG. 1 is an example block diagram of a system for generating multi-modal volumetric data of a feature or anatomical data of a patient according to some implementations.
[0005] FIG. 2 is an example block diagram of an architecture for the system for generating multi-modal volumetric data via at least in part a cloud-based anatomical data processing system according to some implementations.
[0006] FIG. 3 is an example block diagram of an architecture for user equipment associated with generating multi-modal volumetric data according to some implementations.
[0007] FIG. 4 is an example block diagram of an architecture for a cloud-based anatomical data processing system associated with generating multi-modal volumetric data according to some implementations.
[0008] FIG. 5 is an example process flow diagram associated with generating multimodal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
[0009] FIG. 6 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
[0010] FIG. 7 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
[0011] FIG. 8 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
[0012] FIG. 9 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
[0013] FIG. 10 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
[0014] FIG. 11 is another example process flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations.
[0015] FIG. 12 is an example process flow diagram associated with generating multi-modal volumetric data according to some implementations.
[0016] FIG. 13 is an example process flow diagram associated with generating a three-dimensional model of a feature of a body according to some implementations.
[0017] FIG. 14 is an example pictorial diagram of a multi-modal volumetric three-dimensional model of a feature of a patient according to some implementations.
DETAILED DESCRIPTION
[0018] Discussed herein are systems and architecture for generating multi-modal volumetric three-dimensional models (e.g., meshes, reconstruction, maps, and the like) of features (e.g., organs, limbs, epidermis, regions, characteristics, and the like) of a human body for use in medical diagnostics, monitoring (e.g., continuous, periodic, and the like), therapy selection and evaluation, and the like. In some cases, the systems and architectures discussed herein may be applicable to the field of three-dimensional (3D) sensing of physical parameters in volumetric objects and, more specifically, to the field of sensing biological (unidimensional or multi-dimensional) signals of features of the human body for medical applications (e.g., diagnostics, monitoring, treatment, and the like).
[0019] In some cases, the system discussed herein may be configured to assist with generation of depth maps for the 3D reconstruction of features of the human body (e.g., feet, hands, head, limbs, pelvis, organs, torso, and/or the like) while concurrently (or substantially simultaneously) acquiring and determining additional sensor modality values, such as temperature, microbial loads, dielectric properties, blood flow, electric field potential, magnetic field strength, tissue oxygenation, and the like at various depths (e.g., surface temperatures, internal temperatures at one or more predetermined depth or region, and the like) with respect to the feature or volumetric 3D model. In some cases, the additional sensor modality values may be determined and/or estimated at specific points, regions, multiple points, along the surface or within the 3D volumetric model and corresponding feature of the human body.
[0020] In some implementations, the system, discussed herein, is configured to generate and solve systems of partial differential equations (PDE) based at least in part on the volumetric 3D data and model to determine values or metrics of biological parameters at points that are not directly measured or between points that are directly measured, potentially with improved numeric accuracy. In some cases, the additional values or metrics may assist with locating and diagnosing conditions, such as, for example, diabetic ulcers and pre-ulcerative lesions, while reducing the cost of diagnosis and subsequent monitoring of the condition (such as via at-home or remote future scanning of the affected feature). In this manner, the system discussed herein increases the availability of medical equipment and the frequency of data collection and monitoring, and improves overall patient compliance with treatments when compared to conventional in-office diagnostic specialists and equipment.
[0021] In some examples, various numeric methods (e.g., Poisson's equation, PDEs, Gauss' theorem, and the like) may be utilized to reconstruct the whole volume of the feature from one or more 3D scans and to estimate volumetric values, such as temperature, microbial load, dielectric properties, electric field potential, magnetic field strength, tissue oxygenation, and the like within portions and/or the entire volume of the feature. In some cases, the resulting surface or 3D volumetric model is used to diagnose medical conditions (e.g., cancerous, ischemic, infected, necrotic or ulcerous regions, and/or the like) within the scanned feature (e.g., body part).
[0022] As one specific example, the system may determine temperatures over the 3D volume of the model by applying Poisson's equation followed by the use of PDEs and Gauss' theorem to extrapolate known temperature points to unknown points within the enclosed volume of the feature, as discussed herein. In this example, estimating temperature within a feature of the body may assist with determination of core temperature (an important vital parameter) as well as with diagnosis of tissue infection and necrosis. For instance, in the case of a tissue infection, an increased metabolism induced by the body's immune system results in an increase in local temperature over the affected region of the feature of the body, which may be detected using the 3D volumetric model having additional integrated temperature data over and through the volume. In the case of necrosis, the decreased metabolism caused by tissue death results in a decrease in local temperature of the affected region of the feature of the body.
[0023] As another example, localized temperature detection may assist in early diagnosis and detection of both diabetically induced ulcerous infections (such as in the feet of diabetic patients) and chronic venous insufficiency (medical conditions that affect millions of patients worldwide, resulting in patient discomfort, pain and suffering, and limited mobility, which further deteriorate patient prognosis, sometimes resulting in loss of limb or, if untreated, death). As another example, the localized temperature detection over and through the volume of the 3D volumetric model may assist in detection of deep-tissue pressure injuries (DTPIs) that may occur in the hospital, particularly when a patient is bedridden or otherwise suffering from restricted mobility. In some cases, pressure injuries acquired in the hospital may result in harm, including chronic wounds, and in some cases deaths. In this example, the system, discussed herein, may be used as a periodic or continuous monitor to generate up to full-body 3D volumetric models integrating other multi-modal sensor capabilities (such as temperature data, infrared data, ultraviolet data, radio wave data, motion data, and the like). In this manner, DTPIs may be detected early and treated prior to complications ensuing.
[0024] In some cases, the system may incorporate cloud-based resources and processing (e.g., such as for aggregation, filtering, data integration into one or more volumetric models, model generation, application of one or more machine learning models, and the like). For example, the 3D volumetric models may be processed and/or generated via cloud-based services, systems, and/or processing resources. In some instances, the cloud-based services may include, among other elements, one or more servers in communication with one or more user equipment or devices over one or more networks. In other cases, the multi-modal 3D volumetric data may be processed locally on a user equipment or partially in the cloud and partially on the local user equipment.
[0025] In some implementations, the multi-modal 3D volumetric data may include numeric representations of anatomy of patients or users, such as three-dimensional scans of features of the body, portions of skin, and the like as discussed herein. The multi-modal 3D volumetric data may also include different types of data, such as thermal data, red-green-blue data, depth data, infrared data, magnetic resonance imaging (MRI) data, light detection and ranging (LIDAR) data, and the like. In some cases, the multi-modal 3D volumetric data may also include additional data related to the image data, such as sensor data including one or more of temperature, oxygenation, bacterial load, electrical potential, dielectric impedance, electrocardiogram (EKG), photoplethysmographic (PPG), perfusion, heart rate, heart rate variance (HRV), and the like. In some cases, meta-data may be associated with the multi-modal 3D volumetric data. For instance, the meta-data may include patient information (e.g., identifiers, demographic information, name, age, gender, weight, body part dimensions, such as extracted from the multi-modal 3D volumetric data by the capture device, birth date, medical history, family data, and the like) and 3D scan or scanning device information (e.g., device identifier, sensor type, serial number, firmware or software version, scan date, time, and/or the like).
[0026] In some cases, as discussed herein, cloud-based processing may consist of multiple servers, available full-time and on demand, making their services substantially ubiquitous, constantly available, on demand, and easily accessible (e.g., additional servers may be quickly activated in times of peak demand). Also, servers may consist of multiple computers in large service centers, using different operating systems, benefiting from lower-cost centralized utilities and services, and from expandable infrastructure, such as multiple parallel central processing units (CPUs), graphic processing units (GPUs), arithmetic logic units (ALUs), tensor processing units (TPUs) and quantum processing units (QPUs), to name a few. In general, cloud-servers may also be referred to as backend-servers or simply backend.
[0027] In some implementations, the cloud-based system discussed herein may include data pre-processing, use of one or more machine learning models that are trained on multi-modal 3D volumetric data associated with anatomy of individuals having various conditions, symptoms, states of health, age, genders, cultural backgrounds, and the like. For example, the one or more machine learning models may be trained to segment the multi-modal 3D volumetric data, classify the multi-modal 3D volumetric data, perform feature detection (such as identifying body parts, landmarks, dimensions, and the like) from the multi-modal 3D volumetric data. The one or more machine learning models may also be trained to assist in diagnosing conditions and symptoms with respect to various features and/or body parts and individuals having a wide variety of features, body types, lifestyles (e.g., diet, exercise, work conditions, and the like), cultures, demographics (e.g., age, gender, and the like), such as those classified and identified by one or more other machine learning models (including, but not limited to, feet in various stages of load bearing and having various types and conditions of arches), determining health status, recommending patient specific treatments or therapies and the like. In some implementations, the cloud-based processing may be used to determine a full system of PDEs, benefitting health professionals as well as patients by improving accuracy while also benefitting from the increased computational power and memory available to cloud based systems.
[0028] In this manner, a health care professional may upload multi-modal 3D volumetric data via the user equipment or scanning device to the cloud-based system and receive in response indications of features or conditions that may require further evaluation, user-specific anthropometric measurements to assist with diagnostics or evaluations, flagged or identified potential conditions and symptoms, as well as suggested treatments or therapies, including those related to ulcers, necrosis, infection, and the like. In some cases, the cloud-based system may provide instructions to perform additional scans and/or capture additional multi-modal 3D volumetric data or other types of sensor data associated with a specific user to enhance any recommendations or features identified. In one specific example, the cloud-based system may also return one or more additional inquiries for the healthcare professional and/or the patient, such as questions related to an accident, a particular feature, the history of a feature or change in state detected, and the like, to further assist the healthcare professionals in diagnostics and evaluation of the patient.
[0029] As described herein, the machine learning models may be generated using various machine learning techniques. For example, the models may be generated using one or more neural network(s). A neural network may be a biologically inspired algorithm or technique which passes input data (e.g., image and sensor data captured by the user equipment or devices) through a series of connected layers to produce an output or learned inference. Each layer in a neural network can also comprise another neural network or can comprise any number of layers (whether convolutional or not). As can be understood in the context of this disclosure, a neural network can utilize machine learning, which can refer to a broad class of such techniques in which an output is generated based on learned parameters.
[0030] As an illustrative example, one or more neural network(s) may generate any number of learned inferences or heads from the captured sensor and/or image data. In some cases, the neural network may be a trained network architecture that is end-to-end. In one example, the machine learning models may include segmenting and/or classifying extracted deep convolutional features of the sensor and/or image data into semantic data. In some cases, the model may be trained using appropriate ground truth outputs in the form of semantic per-pixel classifications (e.g., body part identifiers, feature identifiers, condition identifiers, and the like).
[0031] Although discussed in the context of neural networks, any type of machine learning can be used consistent with this disclosure. For example, machine learning algorithms can include, but are not limited to, regression algorithms (e.g., ordinary least squares regression (OLSR), linear regression, logistic regression, stepwise regression, multivariate adaptive regression splines (MARS), locally estimated scatterplot smoothing (LOESS)), regularization algorithms (e.g., ridge regression, least absolute shrinkage and selection operator (LASSO), elastic net, least-angle regression (LARS)), decision tree algorithms (e.g., classification and regression tree (CART), iterative dichotomiser 3 (ID3), Chi-squared automatic interaction detection (CHAID), decision stump, conditional decision trees), Bayesian algorithms (e.g., naive Bayes, Gaussian naive Bayes, multinomial naive Bayes, average one-dependence estimators (AODE), Bayesian belief network (BBN), Bayesian networks), clustering algorithms (e.g., k-means, k-medians, expectation maximization (EM), hierarchical clustering), artificial neural network algorithms (e.g., perceptron, back-propagation, Hopfield network, radial basis function network (RBFN)), deep learning algorithms (e.g., deep Boltzmann machine (DBM), deep belief networks (DBN), convolutional neural network (CNN), stacked auto-encoders), dimensionality reduction algorithms (e.g., principal component analysis (PCA), principal component regression (PCR), partial least squares regression (PLSR), Sammon mapping, multidimensional scaling (MDS), projection pursuit, linear discriminant analysis (LDA), mixture discriminant analysis (MDA), quadratic discriminant analysis (QDA), flexible discriminant analysis (FDA)), ensemble algorithms (e.g., boosting, bootstrapped aggregation (bagging), AdaBoost, stacked generalization (blending), gradient boosting machines (GBM), gradient boosted regression trees (GBRT), random forest), support vector machines (SVM), supervised learning, unsupervised learning, semi-supervised learning, etc. Additional examples of architectures include neural networks such as ResNet50, ResNet101, VGG, DenseNet, PointNet, and the like. In some cases, the system may also apply Gaussian blurs, Bayes functions, color analyzing or processing techniques, and/or a combination thereof.
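By way of a brief illustration, the following is a minimal sketch, assuming the scikit-learn and NumPy packages, of how one of the listed ensemble methods (a random forest) might be trained to flag regions of a multi-modal volumetric model; the per-region features and labels are hypothetical placeholders rather than a prescribed design.

```python
# Illustrative sketch only: a random forest flagging regions of a
# multi-modal model. Feature names and labels are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-region features extracted from a multi-modal model:
# [mean temperature (C), temperature gradient, microbial load, oxygenation]
X = rng.normal(loc=[36.5, 0.1, 0.2, 0.95],
               scale=[0.8, 0.05, 0.1, 0.03], size=(500, 4))
y = (X[:, 0] > 37.0).astype(int)  # toy label: elevated-temperature regions

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[37.4, 0.12, 0.25, 0.93]]))  # flags this region
```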
[0032] In some cases, the 3D volumetric model, as well as any potential conditions, therapies, or changes in state detected by the system, may be output to the patient, a healthcare professional, and/or a third party (such as a secondary healthcare professional treating the patient for another related or unrelated condition or the like). [0033] FIG. 1 is an example block diagram of a system 100 for generating multi-modal volumetric data, such as a 3D volumetric model 102 of a feature 104 or anatomical data of a patient according to some implementations. In the current example, the system 100 may include user equipment or scanning device 106 configured to capture sensor data of the feature 104 (e.g., the foot of the patient) and to generate the 3D volumetric model 102 representing the feature 104. In the current example, the 3D volumetric model 102 is illustrated including integrated temperature data; however, it should be understood that the 3D volumetric model 102 may include other integrated data, such as microbial load, and/or multiple types of integrated data that may be viewed together (such as via a combined overlay) and/or individually (such as via a selectable option).
[0034] In the current example, the user equipment 106 may include one or more emitters 108 for outputting signals (such as particular types of light, waves, radiation, and the like) while the healthcare professional 110 is scanning the feature 104 of the patient. For example, the 3D volumetric model 102 may also integrate microbial load data generated via an ultraviolet (UV) emitter 108 and sensor, with the UV data integrated into the model via one or more machine learning models trained on UV data and corresponding microbial types, loads, species, and known related conditions.
[0035] The user equipment 106 may also include image capture devices 112 and/or sensors 114. For example, the user equipment 106 may capture image data (such as red-green-blue data of the feature 104) together with depth data, temperature data, and the like captured by the sensors 114. In some cases, the image capture devices 112 and the sensors 114 may be combined, such as via the same device or package. In some cases, the image capture devices 112 and the sensors 114 may include one or more devices configured to capture color data, infrared data, LIDAR data, radar data, impedance data, electric field data, photoplethysmography data, tissue perfusion data, tissue oxygenation data, radio wave data, radiation data, audio data, stereoscopic data, magnetic data, contact data, depth data, temperature data, ultraviolet data, motion data, a combination thereof, and/or the like.
[0036] The user equipment 106 may also include one or more user interfaces 116. The user interfaces 116 may include input interfaces (e.g., mouse or keyboard) or output interfaces (e.g., a display). In some cases, the user interfaces 116 may include a virtual environment display or a traditional two-dimensional display, such as a liquid crystal display or a light emitting diode display. The user interfaces 116 may also include one or more input components for receiving feedback from the user. In some cases, the input components may include tactile input components, audio input components, gesture or motion inputs (such as IMU inputs), or other natural language processing components. In one specific example, the user interfaces 116 may be a combined input and output, such as a touch enabled display for viewing and interacting with the 3D volumetric model 102, performing a scan, and/or the like.
[0037] The user equipment 106 also includes processors 118, one or more computer- readable media 120, and/or communication interfaces 122, as discussed in more detail below. For example, each of the processors 118 may itself comprise one or more processors or processing cores. The computer-readable media 120 may include volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth). The computer-readable media 120 may include fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth). The computer-readable media 120 may be configured in a variety of other ways as further described below. Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media 120 and configured to execute on the processors 118.
[0038] The communication interface(s) 122 can facilitate communication with other proximate sensor systems and/or other facility systems. The communication interface(s) 122 may enable Wi-Fi-based communication, such as via frequencies defined by the IEEE 802.11 standards, short-range wireless frequencies such as Bluetooth or Bluetooth Low Energy (BLE), cellular communication (e.g., 2G, 3G, 4G, 4G LTE, 5G, etc.), satellite communication, dedicated short-range communications (DSRC), or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s).
[0039] In the current example, the 3D volumetric model 102 showing integrated temperature data includes a hot region 124. The hot region 124 may be an indicator of an infection, a pre-ulcer, or an otherwise at-risk area that may need to be investigated by a healthcare professional, such as professional 110. In some cases, the user equipment 106 may generate an alert to the patient, a healthcare professional 110, or the like to notify them that the region 124 should be reviewed. In some cases, the alert may include a list of potential conditions that the healthcare professional 110 may consider during their review or investigation of the region 124.
[0040] FIG. 2 is an example block diagram of an architecture for the system 200 for generating multi-modal volumetric data 202 at least in part via a cloud-based anatomical data processing system 204 and in part via a user equipment or scanning device 206 according to some implementations. In the current example, the user equipment 206, such as a sensor device and a processing device, may be used to capture and/or otherwise generate image data 208 and/or sensor data 210 of a feature 212 (e.g., in the current example, a foot) of a human body.
[0041] In the current example, the user equipment 206 proximate to the feature 212 may capture the image data 208 and the sensor data 210 and provide the data 208 and 210 to the anatomical data processing system 204 in the cloud. The anatomical data processing system 204 may generate a 3D mesh associated with the image data 208 and representative of the feature 212 of the patient. The anatomical data processing system 204 may also determine one or more physical and/or biological parameters 218 associated with the feature 212 of the patient based at least in part on the sensor data 210. In some cases, the anatomical data processing system 204 may then overlay or integrate the sensor data 210 and/or the physical and/or biological parameters 218 and/or associated data into the 3D mesh to generate the volumetric data 202 (e.g., a multi-modal 3D volumetric model representative of the feature 212).
[0042] The anatomical data processing system 204 may then provide the volumetric data 202 back to the user equipment 206 such that a healthcare professional 214 may review the data 202 to assist with diagnostics and treatment of the patient. The anatomical data processing system 204 may also provide the volumetric data 202 to one or more third-party systems 216. For example, the anatomical data processing system 204 may provide the volumetric data 202 to one or more additional health care providers assisting the patient with treatment, therapy, and/or unrelated conditions, one or more guardians or custodians, and the like.
[0043] In some cases, the anatomical data processing system 204 may also determine or diagnose conditions based at least in part on the volumetric data 202, including the multi-modal 3D volumetric model representative of the feature 212 and corresponding physical and/or biological parameters 218. For example, the anatomical data processing system 204 may input the volumetric data 202 and/or the physical and/or biological parameters 218, such as via the multi-modal 3D volumetric model representative of the feature 212, into one or more machine learning models trained on 3D data of various features of various individuals in various different conditions, states, and the like. In some cases, the training data may be generated from different individuals having a wide variety of features (such as those classified and identified by one or more other machine learning models), changes in state of the features, health statuses, and different patient-specific treatments or therapies, and the like. In these examples, the one or more machine learning models may output potential conditions and/or diagnostics data 220 for review by a healthcare professional 214 together with the volumetric data 202 and integrated physical and/or biological parameters 218. In some cases, the system 204 may utilize multiple sets of machine learning models, such as a neural network, as discussed herein. [0044] In some cases, the system 200 may be utilized to monitor the feature 212 of the patient over time, such as on a continuous basis, semi-continuous basis, periodic basis (at various frequencies), and/or the like. In these cases, the anatomical data processing system 204 may generate volumetric data 202 and/or physical and/or biological parameter data 218 for each scan, such that the anatomical data processing system 204 generates a multi-modal volumetric model representative of the feature 212 at each interval or period of time. In this example, the anatomical data processing system 204 may compare the multi-modal volumetric models of the feature 212 over time to generate change data 224 to assist with detection, monitoring, and treatment of various conditions.
[0045] In these cases, the anatomical data processing system 204 may output the potential conditions and/or diagnostics data 220, the volumetric data 202, and/or the change data 224 to the healthcare professional 214 via the user equipment 206 and/or to a third-party system 216, as discussed herein. In this manner, the healthcare professional 214 may upload the image data 208 and the sensor data 210 via a user equipment 206 to the cloud-based anatomical data processing system 204 and receive in response indications of conditions that may require further evaluation, user-specific anthropometric measurements to assist with diagnostics or evaluations, flagged or identified potential conditions and symptoms, as well as suggested treatments or therapies. In some cases, the anatomical data processing system 204 may provide notifications 222 with instructions to perform additional scans and/or capture additional image data 208 and/or sensor data 210 (e.g., additional types of sensor data) associated with a specific feature 212 of the patient to enhance any recommendations or features identified. In one specific example, the anatomical data processing system 204 may also return one or more additional inquiries for the healthcare professional 214 and/or the patient, such as questions related to an accident, a particular body part, the history of a feature detected, the history of a change in state of a feature detected, and the like, to further assist the healthcare professionals in diagnostics and evaluation of the patient.
[0046] In some cases, as discussed herein, cloud-based processing may consist of multiple servers, available full-time and on demand, making their services substantially ubiquitous and easily accessible (e.g., additional servers may be quickly activated in times of peak demand). Such servers may consist of multiple computers in large service centers, using different operating systems and benefiting from lower-cost centralized utilities and services and from expandable infrastructure, such as multiple parallel central processing units (CPUs), graphics processing units (GPUs), arithmetic logic units (ALUs), tensor processing units (TPUs), and quantum processing units (QPUs), to name a few. In general, cloud servers may also be referred to as backend servers or simply the backend.
[0047] In the current example, the data, notifications, measurements, instructions, and the like may be transmitted between various systems using networks, generally indicated by 226-228. The networks 226-228 may be any type of network that facilitates communication between one or more systems and may include one or more cellular networks, ZigBee, Bluetooth, BLE, radio, Wi-Fi networks, short-range or near-field networks, infrared signals, local area networks, wide area networks, the internet, and so forth. In the current example, each network 226-228 is shown as a separate network, but it should be understood that two or more of the networks may be combined or be the same.
[0048] FIG. 3 is an example block diagram of an architecture for user equipment 300 associated with generating multi-modal volumetric data according to some implementations. The user equipment 300 may include one or more communication interface(s) 304 (also referred to as communication devices and/or modems), one or more sensor system(s) 306, and one or more emitter(s) 308.
[0049] The user equipment 300 can include one or more communication interface(s) 304 that enable communication between the user equipment 300 and one or more other local or remote computing device(s) or remote services, such as the cloud-based system of FIG. 2. For instance, the communication interface(s) 304 can facilitate communication with other proximate sensor systems, a central control system, or other facility systems. The communication interface(s) 304 may enable Wi-Fi-based communication, such as via frequencies defined by the IEEE 802.11 standards, short-range wireless frequencies such as Bluetooth, Bluetooth Low Energy (BLE), or ZigBee, cellular communication (e.g., 2G, 3G, 4G, 4G LTE, 5G, etc.), satellite communication, dedicated short-range communications (DSRC), or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s).
[0050] The one or more sensor system(s) 306 may be configured to capture the image data 328, the sensor data 330, and the like associated with a feature of a body of a patient. In at least some examples, the sensor system(s) 306 may include thermal sensors, time-of-flight sensors, location sensors, LIDAR sensors, radar sensors, sonar sensors, infrared sensors, cameras (e.g., RGB, IR, intensity, depth, and the like), magnetic sensors, microphone sensors, environmental sensors (e.g., temperature sensors, humidity sensors, electric field probes, radio frequency (RF) antennas, impedance sensors, magnetic field sensors, photoplethysmography sensors, tissue oxygenation sensors, tissue perfusion sensors, millimeter-wave radars, light sensors, pressure sensors, and the like), motion sensors, radiation sensors, and the like. In some examples, the sensor system(s) 306 may include multiple instances of each type of sensor. For instance, camera sensors may include multiple cameras disposed at various locations. [0051] The user equipment 300 may also include one or more emitter(s) 308 for emitting light, radiation (e.g., x-rays, gamma rays, and the like), electric fields, and/or sound. For example, light may be output by the emitters 308 in the wavelength range from 200 nm to 300 nm to cause microbes, as well as associated structures and compounds such as organelles and metabolites, to fluoresce. As another example, the biological parameters of the skin may fluoresce at different wavelengths and/or intensities when comparing healthy skin and diseased skin, such as cancerous skin. By way of example and not limitation, the emitters 308 in this example may output illumination, fields, lasers, patterns (such as an array of light), audio, radiation, and the like.
[0052] The user equipment 300 may also include one or more user interfaces 302, such as input (e.g., mouse or keyboard) or output devices (e.g., a display). The user interfaces 302 may include a virtual environment display or a traditional two-dimensional display, such as a liquid crystal display or a light emitting diode display. The user interfaces 302 may also include one or more input components for receiving feedback from the user. In some cases, the input components may include tactile input components, audio input components, or other natural language processing components. In one specific example, the user interfaces 302 may be a combined touch enabled display.
[0053] The user equipment 300 may include one or more processors 310 and one or more computer-readable media 312. Each of the processors 310 may itself comprise one or more processors or processing cores. The computer-readable media 312 is illustrated as including memory/storage. The computer-readable media 312 may include volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth). The computer-readable media 312 may include fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth). The computer-readable media 312 may be configured in a variety of other ways as further described below.
[0054] Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media 312 and configured to execute on the processors 310. For example, as illustrated, the computer-readable media 312 may store instructions including data capture instructions 314, model or mesh generation instructions 316, data integration instructions 318, diagnostics instructions 320, change determination instructions 322, reporting instructions 324, as well as other instructions 326, such as an operating system. The computer-readable media 312 may also be configured to store data, such as the image data 328, the sensor data 330, volumetric models and data 332, parameter data 334, change data 336, condition data 338, patient data 340, machine learned models 342, and the like.
[0055] The data capture instructions 314 may be configured to assist the healthcare professional or other users (such as the patient) in capturing the image data 328 and/or the sensor data 330 of the feature. For example, the data capture instructions 314 may cause a partial mesh to appear on the user interface 302 with arrows or highlighted areas that require additional scanning, thereby visually showing the user the current state of the model and/or improving the resulting volumetric model.
[0056] The model or mesh generation instructions 316 may be configured to receive the image data 328 and/or the sensor data 330 of a feature of a body and to generate a 3D model or mesh of the feature in response. In some cases, as discussed herein, the 3D model may be a volumetric model of a feature.
[0057] The data integration instructions 318 may be configured to integrate one or more types of sensor data 330 into the 3D model or mesh, such as to generate the multi-modal 3D volumetric model of the feature. In some cases, the data integration instructions 318 may estimate volumetric values, such as temperature, within the entire volume of the body part, utilizing interpolation techniques to achieve concurrent estimates while accumulating additional observations or sensor data over time.
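By way of illustration, the following is a minimal sketch, assuming NumPy and SciPy, of interpolating sparse surface temperature samples into a dense volumetric estimate, as the data integration instructions 318 might do while further observations accumulate; the grid resolution and nearest-neighbor fallback are illustrative choices.

```python
# Sketch: interpolate sparse surface samples into a dense volume estimate.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(200, 3))        # sample locations (x, y, z)
temps = 36.5 + rng.normal(0, 0.3, size=200)   # temperatures at those points

# Dense grid of query points spanning the scanned volume.
grid = np.mgrid[0:1:20j, 0:1:20j, 0:1:20j].reshape(3, -1).T
volume = griddata(pts, temps, grid, method="linear")  # NaN outside hull
volume = np.where(np.isnan(volume),
                  griddata(pts, temps, grid, method="nearest"), volume)
print(volume.reshape(20, 20, 20).shape)
```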
[0058] The diagnostics instructions 320 may be configured to detect possible concerns, issues, or conditions with the feature based at least in part on the volumetric models and data 332, such as detection of elevated temperature, biological load, dielectric properties, and the like at various regions within the volumetric models and data 332.
[0059] The change determination instructions 322 may be configured to detect changes in the volumetric models and data 332 generated over time. For example, the change determination instructions 322 may detect changes in amplitude, magnitude, size, concentrations, and the like of detected elevated temperatures, biological loads, dielectric properties, and the like between a first and a second multi-modal volumetric model, as discussed herein.
[0060] The reporting instructions 324 may be configured to send or transmit the diagnostic outputs, the metrics, the 3D models, and the like. For example, the reporting instructions 324 may send the diagnostic outputs, the metrics, the 3D models, and the like to insurance providers, other healthcare professional systems, patient portals or systems, and the like.
[0061] FIG. 4 is an example block diagram of an architecture for a cloud-based anatomical data processing system 400 associated with generating multi-modal volumetric data according to some implementations. The cloud-based anatomical data processing system 400 can include one or more communication interface(s) 402 that enable communication between the cloud-based anatomical data processing system 400 and one or more other local or remote computing device(s) or remote services, such as the user equipment. For instance, the communication interface(s) 402 can facilitate communication with other proximate sensor systems and/or other facility systems. The communication interface(s) 402 may enable Wi-Fi-based communication, such as via frequencies defined by the IEEE 802.11 standards, short-range wireless frequencies such as Bluetooth or BLE, cellular communication (e.g., 2G, 3G, 4G, 4G LTE, 5G, etc.), satellite communication, dedicated short-range communications (DSRC), or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s).
[0062] The cloud-based anatomical data processing system 400 may include one or more processors 404 and one or more computer-readable media 406. Each of the processors 404 may itself comprise one or more processors or processing cores. The computer-readable media 406 is illustrated as including memory/storage. The computer-readable media 406 may include volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth). The computer-readable media 406 may include fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth). The computer-readable media 406 may be configured in a variety of other ways as further described below. [0063] Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media 406 and configured to execute on the processors 404. For example, as illustrated, the computer-readable media 406 stores instructions including data capture instructions 408, model or mesh generation instructions 410, data integration instructions 412, diagnostics instructions 414, change determination instructions 416, reporting instructions 418, as well as other instructions 420, such as an operating system. The computer-readable media 406 may also be configured to store data, such as the image data 422, the sensor data 424, volumetric models and data 426, parameter data 428, change data 430, condition data 432, patient data 434, machine learned models 436, and the like.
[0064] The data capture instructions 408 may be configured to assist the healthcare professional or other users (such as the patient) in capturing the image data 422 and/or the sensor data 424 of the feature. For example, the data capture instructions 408 may send notifications to the user equipment to assist with collection of additional data 422 and 424 for integration into the volumetric model of the feature.
[0065] The model or mesh generation instructions 410 may be configured to receive the image data 422 and/or the sensor data 424 of a feature of a body and to generate a 3D model or mesh of the feature in response. In some cases, as discussed herein, the 3D model may be a volumetric model of a feature.
[0066] The data integration instructions 412 may be configured to integrate one or more types of sensor data 424 into the 3D model or mesh, such as to generate the multi-modal 3D volumetric model of the feature. In some cases, the data integration instructions 412 may estimate volumetric values, such as temperature, within the entire volume of the body part, utilizing interpolation techniques to achieve concurrent estimates while accumulating additional observations or sensor data over time.
[0067] The diagnostics instructions 414 may be configured to detect possible concerns, issues, or conditions with the feature based at least in part on the volumetric models and data 426, such as detection of elevated temperature, biological load, dielectric properties, and the like at various regions within the volumetric models and data 426.
[0068] The change determination instructions 416 may be configured to detect changes in the volumetric models and data 426 generated over time. For example, the change determination instructions 416 may detect changes in amplitude, magnitude, size, concentrations, and the like of detected elevated temperatures, biological loads, dielectric properties, tissue perfusion, tissue oxygenation, and the like between a first and a second multi-modal volumetric model, as discussed herein.
[0069] The reporting instructions 418 may be configured to send or transmit the diagnostic outputs, the metrics, the 3D models, and the like. For example, the reporting instructions 418 may send the diagnostic outputs, the metrics, the 3D models, and the like to insurance providers, other healthcare professional systems, patient portals or systems, and the like.
[0070] FIGS. 5-13 are flow diagrams illustrating example processes associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model, as discussed herein. The processes are illustrated as a collection of blocks in a logical flow diagram, which represent a sequence of operations, some or all of which can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable media that, when executed by one or more processor(s), perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, encryption, deciphering, compressing, recording, data structures, and the like that perform particular functions or implement particular abstract data types.
[0071] The order in which the operations are described should not be construed as a limitation. Any number of the described blocks can be combined in any order and/or in parallel to implement the processes, or alternative processes, and not all of the blocks need be executed. For discussion purposes, the processes herein are described with reference to the frameworks, architectures and environments described in the examples herein, although the processes may be implemented in a wide variety of other frameworks, architectures or environments.
[0072] FIG. 5 is an example process 500 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations. As discussed above, the system discussed herein may include a local device in physical proximity to a patient and/or a cloud-based service or system configured to process data generated by the local device. In some cases, the system may generate 3D multi-modal volumetric data or models of a feature of a human body for use in diagnostics and patient monitoring. [0073] At 502, the system may receive image data and/or sensor data of a feature of a body. For example, the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature.
[0074] At 504, the system may generate, based at least in part on the image data, a 3D mesh associated with the feature of the body and, at 506, the system may integrate, based at least in part on the sensor data, physical parameter data associated with the feature of the body into the 3D mesh. For example, in some cases the image data and sensor data may include multiple observations (such as to reduce noise) accumulated over time. Accordingly, in some cases, the system may perform time-averaging or weighted time-averaging of the image data and/or sensor data (such as where the weights are proportional to the estimated quality of the signal). For example, the system may apply one or more signed distance functions (SDFs) or truncated signed distance functions (TSDFs) that may be extended beyond the image data to other physical parameters represented by the sensor data as a physical TSDF (PTSDF).
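The following is a simplified sketch of the weighted time-averaging described above, assuming NumPy: each new observation of a physical parameter (temperature, in this toy example) is fused into a voxel grid with a per-observation quality weight, analogous to how TSDF fusion accumulates depth observations. The grid size, function names, and weighting scheme are illustrative assumptions.

```python
# Sketch: running weighted time-average of a physical parameter per voxel.
import numpy as np

GRID = (64, 64, 64)
value_sum = np.zeros(GRID)   # weighted sum of parameter observations
weight_sum = np.zeros(GRID)  # accumulated observation weights

def integrate(obs_values, obs_weights, mask):
    """Fuse one frame of observations into the volume; mask selects the
    voxels actually observed by this frame."""
    value_sum[mask] += obs_weights[mask] * obs_values[mask]
    weight_sum[mask] += obs_weights[mask]

def current_estimate():
    """Weighted time-average per voxel (NaN where never observed)."""
    return np.where(weight_sum > 0,
                    value_sum / np.maximum(weight_sum, 1e-12), np.nan)

# Toy usage: two noisy temperature observations of the same sub-volume,
# the second weighted more heavily as a higher-quality signal.
rng = np.random.default_rng(0)
mask = np.zeros(GRID, dtype=bool)
mask[20:40, 20:40, 20:40] = True
for quality in (1.0, 2.0):
    integrate(36.5 + rng.normal(0, 0.3, GRID), np.full(GRID, quality), mask)
print(np.nanmean(current_estimate()))
```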
[0075] In some cases, the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques. For example, some 3D reconstruction techniques may include triangular and rectangular marching cubes. In some cases, the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
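As a concrete sketch of the marching cubes conversion, assuming the scikit-image package, a fused signed-distance volume may be converted to a triangle mesh by extracting its zero level set; the toy spherical SDF below stands in for a fused scan volume.

```python
# Sketch: extract a triangle mesh from an SDF volume with marching cubes.
import numpy as np
from skimage import measure

# Toy SDF of a sphere: negative inside the surface, positive outside.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5

# The zero level set of the SDF is the reconstructed surface.
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)
```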
[0076] In some cases, such as when the sensor data represents one or more biological parameters, the system may also solve systems of partial differential equations (PDEs) including a large number of parameters. In other cases, such as when the physical sensor data is captured by an imaging device (e.g., a CMOS camera, a CCD camera, a bolometer, a SPAD array, and the like), such as data representing fluorescing microbes, the physical parameters may be stored as textures, in which case the physical parameters may be applied to the volume of the 3D mesh using techniques used to apply textures to 3D meshes such as, for example, 3D UV mapping and texturing. [0077] In some examples, generating the 3D mesh, the PTSDF, or the multi-modal volumetric model may include mesh cleaning, such as removing vertices or faces that overlap or that are too close to each other, closing holes or making the 3D mesh watertight (e.g., via techniques such as Poisson surface reconstruction), and/or reducing the number of faces in the 3D mesh so that the resulting number of faces meets a given requirement for average mesh density (such as to improve performance).
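A minimal sketch of the mesh-cleaning step follows, assuming the trimesh package; the exact cleanup calls vary by library and version, and the icosphere below is only a stand-in for a reconstructed scan mesh.

```python
# Sketch: basic mesh cleanup toward a watertight, compact mesh.
import trimesh

mesh = trimesh.creation.icosphere(subdivisions=3)  # stand-in for a scan mesh

mesh.merge_vertices()            # collapse vertices that (nearly) coincide
trimesh.repair.fill_holes(mesh)  # close small holes toward watertightness
print(mesh.is_watertight, len(mesh.faces))
# Face-count reduction (decimation) to a target average mesh density would
# follow, e.g. via quadric decimation where an implementation is available.
```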
[0078] At 508, the system may determine, based at least in part on the 3D mesh having integrated the sensor data and, thus, the physical parameters, a potential condition associated with the feature of the body and, at 510, the system may determine, based at least in part on the 3D mesh, a region associated with the potential condition. For example, the system may compare various regions of the 3D mesh or multi-modal volumetric model to one or more thresholds over corresponding types or modes of the sensor data. For instance, the system may compare the temperature data to one or more thresholds in order to detect potential ulcers, wounds, infections, sores, or the like. In some cases, the system may determine the region by detecting an edge or transition boundary between sensor data of the given mode (e.g., temperature) at or above the one or more thresholds and sensor data below the one or more thresholds.
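The following sketch, assuming NumPy and SciPy, illustrates steps 508 and 510 on a synthetic temperature volume: thresholding one mode of the fused data, labeling connected candidate regions, and locating the transition boundary of the flagged region; the threshold values are illustrative.

```python
# Sketch: threshold a parameter mode and find the region boundary.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
temp = 36.5 + rng.normal(0, 0.2, size=(64, 64, 64))
temp[20:30, 20:30, 20:30] += 1.5               # synthetic hot region

hot = temp >= 37.5                             # illustrative threshold
labels, n = ndimage.label(hot)                 # connected candidate regions
boundary = hot & ~ndimage.binary_erosion(hot)  # edge/transition voxels
print(n, boundary.sum())
```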
[0079] At 512, the system may send an alert including the 3D mesh or multi-modal volumetric model, the region identified on the 3D mesh or multi-modal volumetric model, and/or an indication of the potential condition. For example, the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
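For illustration only, an alert such as that sent at 512 might be serialized as a structured payload; the field names below are hypothetical placeholders and do not reflect a defined schema.

```python
# Sketch: a hypothetical alert payload for a flagged region.
import json

alert = {
    "patient_id": "anonymized-1234",                 # placeholder identifier
    "model_ref": "scan-001/volumetric-model",        # placeholder reference
    "region": {"centroid_voxel": [25, 25, 25], "extent_voxels": 1000},
    "potential_condition": "elevated temperature / possible pre-ulcer",
    "recipients": ["physician", "patient"],
}
print(json.dumps(alert, indent=2))
```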
[0080] FIG. 6 is another example process 600 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations. As discussed above, the system discussed herein may include a local device in physical proximity to a patient and/or a cloud-based service or system configured to process data generated by the local device. In some cases, the system may generate 3D multi-modal volumetric data or models of a feature of a human body for use in diagnostics and patient monitoring.
[0081] At 602, the system may receive image data and/or sensor data of a feature of a body. For example, the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature.
[0082] At 604, the system may generate, based at least in part on the image data, a 3D mesh associated with the feature of the body. For example, the system may apply one or more SDF or TSDF that may be extended beyond the image data to other physical parameters represented by the sensor data as a PTSDF, as discussed above. In some cases, the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques. For example, some 3D reconstruction techniques may include triangular and rectangular marching cubes. In some cases, the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
[0083] At 606, the system may determine, based at least in part on the sensor data and/or the 3D mesh, physical parameters associated with the feature of the body. In some cases, the system may integrate or overlay the physical parameter data onto the 3D mesh to generate a multi-modal volumetric model of the feature of the body, as discussed herein.
[0084] At 608, the system may determine, based at least in part on the physical parameter and/or the three-dimensional mesh, gradients and/or Laplacians associated with regions representing differentials in the physical parameter data associated with the feature of the body and, at 610, the system may determine, based at least in part on the gradients and/or Laplacians, local maxima and/or local minima. For example, the gradients (e.g., of temperature, microbial loads, electrical fields, dielectric properties, and the like) and Laplacians may be determined using finite differences, which allows the identification of local maxima and/or minima at 610 by identifying points within the 3D volume or mesh of the feature at which the gradients approach zero and the Laplacian is negative for local maxima or positive for local minima.
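The following sketch, assuming NumPy and SciPy, illustrates steps 608 and 610: computing finite-difference gradients and a Laplacian of a volumetric parameter field, then selecting candidate extrema where the gradient magnitude approaches zero and the Laplacian sign indicates a local maximum (negative) or minimum (positive). The smoothing step and tolerance are illustrative choices.

```python
# Sketch: finite-difference gradients/Laplacian and candidate extrema.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
field = 36.5 + rng.normal(0, 0.05, size=(64, 64, 64))
field = ndimage.gaussian_filter(field, sigma=2)  # smooth before differencing

gx, gy, gz = np.gradient(field)                  # central finite differences
grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
lap = ndimage.laplace(field)                     # finite-difference Laplacian

eps = np.percentile(grad_mag, 5)                 # "approaches zero" tolerance
maxima = (grad_mag < eps) & (lap < 0)
minima = (grad_mag < eps) & (lap > 0)
print(maxima.sum(), minima.sum())
```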
[0085] At 612, the system may determine, based at least in part on the three-dimensional mesh and the local maxima or local minima, a region associated with the physical parameter. For example, the system may compare the local maxima and/or local minima to one or more thresholds to determine a region associated with an abnormal value of the physical parameter. For example, when the physical parameter is temperature, the system may determine a hot region or a cold region based at least in part on one or more thresholds associated with normal human body temperature.
[0086] At 614, responsive to determining that one or more thresholds have been met or exceeded with respect to the local maxima or local minima, identifying the region as a potential condition. For example, if the physical parameter is microbial load, the system may determine that the region is infected with one or more types of bacteria when the microbial load is at or above the one or more microbial load thresholds (e.g., saturation thresholds, brightness thresholds, size thresholds, and the like). In some cases, the system may also determine a type of microbe associated with the region based on additional sensor data, such as fluorescence data (e.g., fluorescence color) compared with known microbe response to UV light. In another example, the physical parameter is tissue perfusion and/or tissue oxygenation, both of which may be detected by near infrared (NIR) spectroscopy. Local minima may represent necrotic tissue while local maxima may represent infected tissue. Tissue perfusion and oxygenation can further be applied to monitor the progression of the healing of wounds, and the acceptance and progressive healing of skin grafts.
[0087] At 616, the system may send an alert including the region identified on the three- dimensional mesh and an indication of the potential condition. For example, the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
[0088] FIG. 7 is another example process 700 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations. In some cases, the system may generate the 3D mesh or multi-modal volumetric model to detect potential conditions, including conditions that are not visible (e.g., below the surface), of the feature of the human body being scanned. For instance, process 700 discusses a method for detecting potential ulcers and/or pre-ulcerative lesions prior to the ulcer's formation on the exposed skin.
[0089] At 702, the system may receive image data and/or sensor data of a feature of a body. For example, the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature. [0090] At 704, the system may generate, based at least in part on the image data, a 3D mesh associated with the feature of the body. For example, the system may apply one or more SDF or TSDF that may be extended beyond the image data to other physical parameters represented by the sensor data as a PTSDF, as discussed above. In some cases, the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques. For example, some 3D reconstruction techniques may include triangular and rectangular marching cubes. In some cases, the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
[0091] At 706, the system may determine, based at least in part on the sensor data and/or the 3D mesh, physical parameters associated with the feature of the body. In some cases, the system may integrate or overlay the physical parameter data onto the 3D mesh to generate a multi-modal volumetric model of the feature of the body, as discussed herein.
[0092] At 708, the system may determine, based at least in part on the physical parameter and/or the three-dimensional mesh, gradients and/or Laplacians associated with regions representing differentials in the physical parameter data associated with the feature of the body and, at 710, the system may determine local maxima based at least in part on the gradients and/or Laplacians. For example, the gradients (e.g., of temperature) and Laplacians may be determined using finite differences, which allows the identification of local maxima at 710 by identifying points within the 3D volume or mesh of the feature at which the gradients approach zero and the Laplacian is negative (indicating a local maximum).
[0093] At 712, the system may determine, based at least in part on the three-dimensional mesh and the local maxima, a region associated with the physical parameter (e.g., temperature). For example, the system may compare the local maxima to one or more thresholds to determine a region associated with an abnormal value of the physical parameter (e.g., temperature). For example, when the physical parameter is temperature, the system may determine a hot region based at least in part on one or more thresholds associated with normal human body temperature.
[0094] At 714, responsive to determining that one or more thresholds have been met or exceeded with respect to the local maxima, the system may identify the region as a potential ulcer or pre-ulcerative lesion. For example, the system may compare all local maxima identified to an initial threshold (such as approximately 37 degrees Celsius or 98.6 degrees Fahrenheit). When a local maximum exceeds the one or more thresholds, the system may identify the region as a potential ulcerous or pre-ulcerative lesion location. In some cases, regions having a local maximum above a second threshold may be marked and available for visualization on the 3D mesh by the user or healthcare professional. [0095] At 716, the system may send an alert including the region identified on the three-dimensional mesh and an indication of the potential ulcer or pre-ulcerative lesion. For example, the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
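As a minimal sketch of the comparison at 714 (assuming NumPy), candidate local maxima may be screened against the thresholds described above; the first threshold repeats the example in the text, while the second is a hypothetical marking threshold for visualization, not clinical guidance.

```python
# Sketch: screen candidate local maxima against temperature thresholds.
import numpy as np

FIRST_THRESHOLD_C = 37.0    # initial screen (example value from the text)
SECOND_THRESHOLD_C = 37.8   # hypothetical marking threshold for display

def classify_maxima(maxima_temps_c):
    temps = np.asarray(maxima_temps_c)
    candidates = temps >= FIRST_THRESHOLD_C   # potential lesion locations
    marked = temps >= SECOND_THRESHOLD_C      # marked for visualization
    return candidates, marked

cand, marked = classify_maxima([36.8, 37.2, 38.1])
print(cand, marked)  # -> [False True True] [False False True]
```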
[0096] FIG. 8 is another example process 800 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations. In some cases, the system may generate the 3D mesh or multi-modal volumetric model to detect potential conditions, including conditions that are not visible (e.g., below the surface), of the feature of the human body being scanned. For instance, process 800 discusses a method for detecting potential necrosis.
[0097] At 802, the system may receive image data and/or sensor data of a feature of a body. For example, the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature.
[0098] At 804, the system may generate, based at least in part on the image data, a 3D mesh associated with the feature of the body. For example, the system may apply one or more SDF or TSDF that may be extended beyond the image data to other physical parameters represented by the sensor data as a PTSDF, as discussed above. In some cases, the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques. For example, some 3D reconstruction techniques may include triangular and rectangular marching cubes. In some cases, the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
[0099] At 806, the system may determine, based at least in part on the sensor data and/or the 3D mesh, physical parameters associated with the feature of the body. In some cases, the system may integrate or overlay the physical parameter data onto the 3D mesh to generate a multi-modal volumetric model of the feature of the body, as discussed herein.
[00100] At 808, the system may determine, based at least in part on the physical parameter and/or the three-dimensional mesh, gradients and/or Laplacians associated with regions representing differentials in the physical parameter data associated with the feature of the body and, at 810, the system may determine local minima based at least in part on the gradients and/or Laplacians. For example, the gradients (e.g., of temperature) and Laplacians may be determined using finite differences, which allows the identification of local minima at 810 by identifying points within the 3D volume or mesh of the feature at which the gradients approach zero and the Laplacian is positive (indicating a local minimum).
[00101] At 812, the system may determine, based at least in part on the three-dimensional mesh and the local minima, a region of concern associated with the physical parameter (e.g., temperature). For example, the system may compare the local minima to one or more thresholds to determine a region associated with an abnormal value of the physical parameter (e.g., temperature). For example, when the physical parameter is temperature, the system may determine a cold region based at least in part on one or more thresholds associated with normal human body temperature.
[00102] At 814, responsive to determining that one or more thresholds have been met with respect to the local minima, the system may identify the region as potential necrosis. For example, the system may compare all local minima identified to an initial threshold (such as approximately 33, 34, or 36 degrees Celsius). When a local minimum falls below the one or more thresholds, the system may identify the region as a potential necrosis location. In some cases, regions having a local minimum below a second threshold may be marked and available for visualization on the 3D mesh by the user or healthcare professional.
[00103] At 816, the system may send an alert including the region identified on the three-dimensional mesh and an indication of the potential necrosis. For example, the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
[00104] FIG. 9 is another example process 900 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations. In some cases, the system may generate the 3D mesh or multi-modal volumetric model to detect potential conditions, including conditions that are related to, responsive to, and/or caused by bacteria and other microbes.
[00105] At 902, the system may emit ultraviolet light onto a feature of a body and, at 904, the system may capture image data and/or sensor data (e.g., UV data) of the feature of the body while the UV light is emitted. For example, the UV light may be emitted onto the feature being scanned to cause any microbial activity to fluoresce in a manner that may be detected by one or more UV sensors.
[00106] At 906, the system may generate, based at least in part on the image data, a three-dimensional mesh associated with the feature of the body. For example, the system may apply one or more SDF or TSDF that may be extended beyond the image data to other physical parameters represented by the sensor data as a PTSDF, as discussed above. In some cases, the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques. For example, some 3D reconstruction techniques may include triangular and rectangular marching cubes. In some cases, the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
[00107] At 908, the system may determine, based at least in part on the sensor data, a microbial load associated with a region of the feature of the body. For example, based on the sensor data, such as fluorescence color, intensity, saturation, size of a region, and the like, the system may determine a type or species of microbe associated with one or more regions and an estimated or approximate quantity or load of each species of microbe. The system may also determine a bounding region associated with each instance of the microbial loads, such as via the local maxima and gradient techniques discussed herein.
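An illustrative sketch of the estimation at 908 follows, assuming NumPy; the fluorescence-color-to-species table and the load formula are hypothetical stand-ins, as real mappings depend on the fluorophores, excitation wavelength, and imaging hardware involved.

```python
# Sketch: estimate microbial species and load from UV-induced fluorescence.
import numpy as np

SPECIES_BY_HUE = {          # hypothetical fluorescence-color bands
    "red":  "porphyrin-producing bacteria",
    "cyan": "Pseudomonas-like",
}

def microbial_load(fluorescence_intensity, hue, area_mm2,
                   intensity_threshold=0.2):
    """Approximate load as above-threshold fluorescence integrated over
    the bounded region (arbitrary load units)."""
    above = np.clip(fluorescence_intensity - intensity_threshold, 0, None)
    load = float(above.mean() * area_mm2)
    return SPECIES_BY_HUE.get(hue, "unknown"), load

species, load = microbial_load(np.array([0.1, 0.4, 0.6]), "red", 120.0)
print(species, round(load, 1))
```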
[00108] At 910, the system may determine, based at least in part on the microbial load, a potential condition associated with the feature of the body. For example, using the species or type of microbe and the load data, the system may determine specific conditions, such as types of infections (e.g., bacterial infection caused by multiple classes of bacteria, fungal infection, cellulitis, abscesses, warts, human papillomavirus (HPV), and the like). In addition, the system may be theranostic. That is, the same UV light used to cause fluorescence of tissue may also be used to kill pathogens, thus having a therapeutic application in addition to a diagnostic one.
[00109] At 912, the system may send an alert including the region identified on the three-dimensional mesh and an indication of the potential microbial condition. For example, the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
[00110] FIG. 10 is another example process flow 1000 diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations. In some cases, the system may generate the 3D mesh or multi-modal volumetric model to detect potential conditions, including conditions that are related to, responsive to, and/or caused by low oxygenation levels.
[00111] At 1002, the system may emit infrared (IR) light onto a feature of a body and, at 1004, the system may capture image data and/or sensor data (e.g., IR data) of the feature of the body while the IR light is emitted. For example, the IR light may be absorbed by oxygenated red blood cells, indicating a higher concentration of oxygen, and thus perfusion, at the tissue location.
[00112] At 1006, the system may generate, based at least in part on the image data, a three-dimensional mesh associated with the feature of the body. For example, the system may apply one or more SDF or TSDF that may be extended beyond the image data to other physical parameters represented by the sensor data as a PTSDF, as discussed above. In some cases, the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques. For example, some 3D reconstruction techniques may include triangular and rectangular marching cubes. In some cases, the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
[00113] At 1008, the system may determine, based at least in part on the sensor data, an oxygenation level associated with a region of the feature of the body (including but not limited to tissue regions and arterial oxygenation levels). For example, based on the sensor data, such as blood flow, color, and the like, the system may determine a level of oxygenation associated with one or more regions. The system may also determine a bounding region associated with each instance of the different oxygenation levels, such as via the local maxima, local minima, and/or gradient techniques discussed herein.
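By way of illustration, and assuming the common empirical "ratio of ratios" approach used in pulse oximetry, an oxygenation estimate might be derived from red and IR photoplethysmographic signals as sketched below; the calibration constants are illustrative, not device-specific.

```python
# Sketch: ratio-of-ratios oxygenation estimate from red/IR PPG signals.
import numpy as np

def spo2_estimate(red_signal, ir_signal):
    """Estimate oxygen saturation from the ratio of pulsatile (AC) to
    static (DC) components of red and IR signals."""
    red_ac, red_dc = np.ptp(red_signal), np.mean(red_signal)
    ir_ac, ir_dc = np.ptp(ir_signal), np.mean(ir_signal)
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    return 110.0 - 25.0 * r   # illustrative empirical linear calibration

t = np.linspace(0, 10, 500)
red = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)  # synthetic pulse waveforms
ir = 1.0 + 0.03 * np.sin(2 * np.pi * 1.2 * t)
print(round(spo2_estimate(red, ir), 1))
```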
[00114] At 1010, the system may determine, based at least in part on the oxygenation level, a potential condition associated with the feature of the body. For example, using the level of oxygenation, the system may determine specific conditions, such as skin breakdowns, pressure ulcers, cyanosis, chronic venous insufficiency, and the like. In some cases, the level of oxygenation may also be utilized to determine the progress of a therapy or treatment of a known condition, such as healing of wounds. In this case, the level of oxygenation may be determined over time and the amount and rate of change may be utilized to determine progress of treatment and/or healing of the wound.

[00115] At 1012, the system may send an alert including the region identified on the three-dimensional mesh and an indication of the potential condition. For example, the system may send the alert to a physician, the patient, a guardian or agent, family member, and/or the like.
[00116] FIG. 11 is another example process 1100 flow diagram associated with generating multi-modal volumetric data and detecting potential conditions associated with the feature represented by the multi-modal volumetric data or model according to some implementations. In some cases, the system discussed herein may be configured to monitor a condition of a patient or user over time. For example, if the user is suffering from a wound, bed sore, or the like, the system may be configured to monitor changes in size, depth, blood flow, temperature (e.g., infection), and the like in a medical facility, at home, and/or the like.
[00117] At 1102, the system may receive image data and/or sensor data of a feature of a body having a condition and undergoing therapy for the condition. For example, the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, tissue perfusion, tissue oxygenation, pressure, electric field, infrared, ultraviolet, and the like) of the feature. In some cases, prior 3D multi-modal volumetric models or 3D meshes of the feature including the wound may exist and be stored on non-transitory computer-readable media accessible to the system.
[00118] At 1104, the system may generate, based at least in part on the image data, a 3D mesh associated with the feature of the body including the condition. For example, the system may apply one or more SDFs or TSDFs that may be extended beyond the image data to other physical parameters represented by the sensor data as a PTSDF, as discussed above. In some cases, the PTSDF may be converted to a 3D mesh using similar techniques to those used to convert a TSDF into a mesh via 3D reconstruction techniques. For example, some 3D reconstruction techniques may include triangular and rectangular marching cubes. In some cases, the 3D mesh conversion may be performed in steps or parts so that the 3D reconstructed mesh and the physical parameter mesh are computed concurrently or substantially simultaneously and then combined.
[00119] At 1106, the system may determine, based at least in part on the sensor data, a physical parameter(s) (such as a microbial load or the like) associated with the condition of the feature of the body. In some cases, the system may integrate or overlay physical parameter(s) (such as a microbial load or the like) data onto the 3D mesh to generate a multi-modal volumetric model of the feature of the body, as discussed herein.

[00120] At 1108, the system may determine, based at least in part on the physical parameter(s) (such as a microbial load or the like) and a second three-dimensional mesh of the feature of the body captured at a prior period of time, a change in the physical parameter associated with the feature of the body. For example, the system may be configured to periodically, continuously, or at various other intervals generate the 3D mesh and/or multi-modal volumetric model, such that a record of the feature and/or condition is generated over a period of time. In this case, the system may compare the microbial load (and/or other physical parameters) of a current 3D mesh or model to the prior generated models to determine if there is a change, such as an increase or decrease in the microbial load, a change in the size of the region of microbial activity, or the like.
[00121] At 1110, the system may determine, based at least in part on the change in the region and/or the physical parameter, if the treatment is improving the condition. For example, the change may be a change in the size of the region associated with the condition, a change in the total metric or physical parameter (e.g., temperature or the like), or the like.
[00122] At 1112, the system may determine if the condition is improving based at least in part on the change. For example, if the size of the region associated with a microbial load is shrinking or the total microbial load is decreasing, the treatment may be improving the condition. Otherwise, if the size of the region is stable or growing or the total microbial load is stable or increasing, the treatment is failing to improve the condition. If the condition is improving, the process 1100 may proceed to 1114 and send a recommendation to continue the therapy to a health professional, insurance company, patient, guardian, and/or the like. However, if the condition is not improving (e.g., the condition is declining), the process 1100 may proceed to 1116 and send a recommendation to alter the therapy to a health professional, insurance company, patient, guardian, and/or the like. In some cases, the system may recommend an alternative therapy or treatment. In other cases, the system may advise the health professional to review the data and select an alternative treatment or therapy.
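A minimal decision sketch for this branch, assuming scalar summaries of microbial load and affected-region area have already been extracted from the two models; the `assess_treatment` name and the 5% stability tolerance are illustrative choices, not the disclosed logic.

```python
def assess_treatment(prev_load, curr_load, prev_area, curr_area, tol=0.05):
    """Return a coarse recommendation by comparing microbial load and
    affected area between two scans; changes within the tolerance are
    treated as stable (i.e., not improving)."""
    load_change = (curr_load - prev_load) / max(prev_load, 1e-9)
    area_change = (curr_area - prev_area) / max(prev_area, 1e-9)
    if load_change < -tol or area_change < -tol:
        return "improving: recommend continuing therapy"   # step 1114
    return "not improving: recommend altering therapy"     # step 1116
```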
[00123] FIG. 12 is an example process 1200 flow diagram associated with generating multi-modal volumetric data according to some implementations. As discussed above, a system may be configured to generate a multi-modal volumetric model of a feature of a body. In some cases, the system may overlay one or more physical parameters onto the 3D mesh of the feature of the body in order to generate the multi-modal volumetric model that may be viewed by a healthcare professional, patient, or other user.
[00124] At 1202, the system may receive image data, depth data, and/or sensor data of a feature of a body being modeled. For example, the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature. In some cases, the image data, the depth data and the sensor data may be captured by a single device concurrently.
In the current example, the image data and the depth data 1204 may be provided to a 3D mesh generation system and the sensor data 1206 may, concurrently, be provided to a system for generating a parameter map of a physical parameter being exhibited by the feature of the body. At 1216, the system may generate a 3D model, mesh, or depth map of the feature of the body and output local surfaces 1208 of the 3D model, mesh, or depth map. In some examples, the system may generate the 3D model or depth map by applying a time integrating loop (such as a KinectFusion technique). For example, the system may determine a new pose for an existing 3D model or depth map based at least in part on the image data and the depth data 1204. The system may extract expected surfaces from the new pose. The system may apply one or more tracking algorithms or techniques (such as an iterative closest point technique) to refine the pose estimates. In some cases, the system may, based at least in part on the refined pose estimates, project the depth map onto a signed distance function (SDF). The system may also perform weighted average updates of the SDF and on select voxels to output the 3D mesh, model, and depth map as well as one or more local surfaces 1208 associated with the 3D mesh and depth map.
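The weighted-average SDF update mentioned above can be sketched as follows. This is a simplified, assumption-laden rendering of a KinectFusion-style integration step, not the system's actual fusion pipeline: voxel indices are treated as world coordinates, the camera model is a bare pinhole with intrinsics `K`, `pose` is a camera-from-world transform, and `trunc` is an arbitrary truncation distance.

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, K, pose, trunc=0.02):
    """Fuse one depth frame into a TSDF volume: project every voxel into
    the camera, compute a truncated signed distance to the observed
    surface, and blend it in with a per-voxel running weighted average.
    `tsdf` and `weights` (assumed C-contiguous) are updated in place."""
    zz, yy, xx = np.meshgrid(*map(np.arange, tsdf.shape), indexing="ij")
    pts = np.stack([xx, yy, zz, np.ones_like(xx)], -1).reshape(-1, 4).astype(float)
    cam = (pose @ pts.T).T[:, :3]                  # voxels in camera frame
    z = np.clip(cam[:, 2], 1e-9, None)             # avoid divide-by-zero
    uv = (K @ cam.T).T
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    ok = (cam[:, 2] > 0) & (u >= 0) & (u < depth.shape[1]) \
         & (v >= 0) & (v < depth.shape[0])
    sdf = np.full(len(pts), -np.inf)
    sdf[ok] = depth[v[ok], u[ok]] - cam[ok, 2]     # signed distance to surface
    upd = sdf > -trunc                             # skip voxels far behind surface
    t, w = tsdf.ravel(), weights.ravel()
    new = np.clip(sdf[upd] / trunc, -1.0, 1.0)
    t[upd] = (w[upd] * t[upd] + new) / (w[upd] + 1.0)  # weighted average update
    w[upd] += 1.0
    return tsdf, weights
```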
[00125] At 1210, the system may determine local surface normals 1212 based at least in part on the local surfaces 1208. For example, the system may utilize a point-to-surface correspondence technique to determine the local surface normals 1212. In other examples, the system may utilize techniques such as principal component analysis, least-squares plane fitting, nearest neighbors, or the like.
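Where principal component analysis is used, a common recipe is to take the eigenvector of the local covariance with the smallest eigenvalue as the surface normal. A minimal sketch under that assumption follows; the `k` neighborhood size and the use of a SciPy KD-tree are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=12):
    """Estimate a unit normal per surface point via PCA over its k
    nearest neighbors: the covariance eigenvector with the smallest
    eigenvalue is (up to sign) the local surface normal."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        centered = points[nbrs] - points[nbrs].mean(axis=0)
        _, vecs = np.linalg.eigh(centered.T @ centered)  # ascending eigenvalues
        normals[i] = vecs[:, 0]                          # smallest-eigenvalue axis
    return normals
```

Note that the sign of each normal is ambiguous; a real pipeline would orient normals consistently, for example toward the sensor viewpoint.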
[00126] At 1214, the system may generate a physical parameter map 1216 of the feature of the body, based at least in part on the sensor data 1206. For example, the physical parameter may be temperature, microbial load, dielectric properties, blood flow, electric field potential, magnetic field strength, tissue perfusion, tissue oxygenation, and the like.
[00127] At 1218, the system may perform angular and distance correction on the physical parameter map 1216. For example, the angular and distance correction may be applied for each pixel in the physical parameter map 1216 based at least in part on the surface normals 1212 generated at 1210. For instance, the angular and distance correction may be based at least in part on a-priori knowledge of a response of the sensor device used to capture the sensor data 1206 and/or the image and depth data 1204. In some cases, the a-priori knowledge may be determined based on the sensor response to different parameter readings at different distances from the feature of the body and/or at different angles with respect to the feature of the body. In this manner, the system may correctly align the values of the physical parameter map to the local surfaces 1208 using known characteristics of the sensor device. In some cases, the angular and distance correction may be performed for each pixel of the physical parameter map 1216. In other examples, the angular and distance correction may be performed with respect to regions or segments of pixels as a single concurrent operation (e.g., the same correction is applied to each pixel of the region or segment).
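One way to picture the correction: if the sensor response is assumed to fall off with the cosine of the viewing angle and the square of the distance (stand-ins for the a-priori calibrated response the passage describes), each pixel can be rescaled as below. The `d_ref` reference distance and the Lambertian-like model are assumptions for illustration only.

```python
import numpy as np

def correct_reading(value, normal, view_dir, distance, d_ref=0.3):
    """Correct one pixel's reading for viewing angle and distance,
    assuming unit vectors, a cosine angular response, and inverse-square
    falloff calibrated at reference distance d_ref (meters)."""
    cos_theta = np.clip(-np.dot(normal, view_dir), 1e-3, 1.0)
    return value * (distance / d_ref) ** 2 / cos_theta
```

A calibrated system would replace the analytic cosine and inverse-square terms with lookup tables measured at different distances and angles, as described above.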
[00128] At 1220, the system may apply mean value properties of a harmonic function to the physical parameter map and, at 1222, the system may apply spatial averaging to the physical parameter. For example, the system may apply the mean value properties of the harmonic function to estimate a value of a given point in the physical parameter map as an average determined by pixels in a circle or neighborhood proximate to the given point. In some cases, the system may apply mean value properties of the harmonic function and spatial averaging to complete or fill empty points or pixels in the physical parameter map, replace outliers, or the like. For instance, the system may utilize spatially averaged estimates to remove outliers, apply one or more Laplacian filters, or the like.

[00129] At 1224, the system may update local value estimates of one or more pixels of the physical parameter map and, at 1226, the system may apply the mean value property and temporally integrate the physical parameter map. For example, the system may apply the mean value property theorem.
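A minimal numerical rendering of this mean-value/fill idea is Jacobi iteration on the discrete Laplace equation, with known pixels held fixed as boundary values; the iteration count below is an arbitrary illustrative choice, and the wrap-around border behavior of `np.roll` is accepted for brevity.

```python
import numpy as np

def harmonic_fill(param_map, known_mask, iters=200):
    """Fill unknown pixels by repeatedly replacing each with the mean of
    its 4-neighborhood (discrete mean-value property of a harmonic
    function); pixels where known_mask is True stay fixed."""
    filled = np.where(known_mask, param_map, np.nanmean(param_map))
    for _ in range(iters):
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0)
                      + np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(known_mask, param_map, avg)  # re-pin known values
    return filled
```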
[00130] At 1228, the system may determine thermodynamic cases associated with the physical parameter map. For example, the system may apply a Poisson technique or algorithm to solve for the thermodynamic cases of the physical parameter map based at least in part on values in an isosurface. In some cases, such as a steady state, the isosurface may be defined by a Laplacian = 0. In other cases, such as in a thermodynamic state, values different from zero may be used to define the isosurface. For instance, in an example in which the physical property is temperature, a positive value may be applied for a heating body (e.g., a feature or region that is increasing in temperature) while a negative value may be applied for a cooling body (e.g., the feature or region that is decreasing in temperature).
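The sign test can be sketched with a discrete Laplacian; the relative tolerance and the wrap-around border handling are simplifications, and the sign convention follows the passage (via the heat equation, the rate of temperature change is proportional to the Laplacian).

```python
import numpy as np

def classify_thermo(param_map, rel_tol=0.05):
    """Classify pixels by the discrete Laplacian: ~0 => steady state,
    positive => heating region, negative => cooling region."""
    lap = (np.roll(param_map, 1, 0) + np.roll(param_map, -1, 0)
           + np.roll(param_map, 1, 1) + np.roll(param_map, -1, 1)
           - 4.0 * param_map)
    tol = rel_tol * np.abs(lap).max()          # illustrative dead band
    return np.where(np.abs(lap) < tol, 0, np.sign(lap)).astype(int)
```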
[00131] At 1230, the system may apply temporal filters to the physical parameter map following angular and distance correction at 1218 and, at 1232, the system may store the localized parameter values and boundary conditions as localized values 1234. For example, the system may apply one or more Kalman filters to improve physical parameter value estimates within the physical parameter map. In some cases, the system may utilize localized physical parameter derivatives that correspond to estimates of one or more von Neumann boundary conditions.
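For a single pixel, the temporal Kalman step reduces to a scalar predict/correct pair. The process and measurement noise values below are placeholders for calibrated sensor statistics, and a full filter would also carry the boundary-condition derivatives the passage mentions.

```python
def kalman_update(est, var, measurement, meas_var=0.25, process_var=0.01):
    """One scalar Kalman step with identity dynamics: inflate the
    variance by the process noise (predict), then pull the estimate
    toward the new measurement by the Kalman gain (correct)."""
    var = var + process_var              # predict
    gain = var / (var + meas_var)        # Kalman gain in [0, 1]
    est = est + gain * (measurement - est)
    var = (1.0 - gain) * var
    return est, var
```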
[00132] At 1236, the system may determine global spatiotemporal signed distance functions for the physical parameter. In some cases, the system may determine the global spatiotemporal signed distance functions to improve localized physical parameter values at one or more pixels or points within the physical parameter map. In the current example, the system may utilize the localized values 1234 and the thermodynamic cases generated at 1228 to determine the global spatiotemporal signed distance functions.
[00133] At 1238, the system may determine physical parameter sources and sinks with respect to the physical parameter map of the feature and, at 1240, the system may determine and visualize physical parameter flow (e.g., movement or changes in the physical parameters over the map, such as from one pixel to the next). In some cases, the system may overlay the physical parameter sources and sinks as high and low (hot and cold) regions on the 3D mesh. The system may also overlay the physical flow (such as changes in temperature or parameter values) via one or more arrows showing increases and/or decreases between regions of the 3D mesh. In this manner, the system may generate a multi-modal volumetric model or mesh having an overlaid physical parameter map of the feature of the body.
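A simple way to surface the sources, sinks, and flow field for such an overlay is sketched below; the window size and the use of SciPy rank filters are illustrative, not the disclosed method.

```python
import numpy as np
from scipy import ndimage

def sources_and_sinks(param_map, size=5):
    """Locate local maxima (sources/hot spots) and local minima
    (sinks/cold spots) of a parameter map, plus the per-pixel gradient
    that can be rendered as flow arrows on the mesh."""
    maxf = ndimage.maximum_filter(param_map, size=size)
    minf = ndimage.minimum_filter(param_map, size=size)
    sources = np.argwhere(param_map == maxf)   # pixels dominating their window
    sinks = np.argwhere(param_map == minf)
    gy, gx = np.gradient(param_map)            # flow direction per pixel
    return sources, sinks, (gx, gy)
```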
[00134] FIG. 13 is an example process 1300 flow diagram associated with generating a three-dimensional model of a feature of a body according to some implementations. As discussed above, a system may be configured to generate a 3D multi-modal volumetric model of a feature of a body. In some cases, the system may overlay one or more physical parameters onto the 3D mesh of the feature of the body in order to generate the multi-modal volumetric model that may be viewed by a healthcare professional, patient, or other user. In the current example, the system may generate the 3D mesh used to overlay the physical parameter values generated by various sensor systems and modalities, such as indicated by 1216 of FIG. 12.
[00135] At 1302, the system may receive image data, depth data, and/or sensor data of a feature of a body being modeled. For example, the system may capture, generate, and/or receive the image data (such as 3D image data, RGB image data, and the like) of a feature (such as a limb, appendage, torso, region, organ, or the like of a patient) together with sensor data having various types or modes (e.g., radiation levels, texture, temperature, pressure, electric field, infrared, ultraviolet, and the like) of the feature. In some cases, the image data, the depth data and the sensor data may be captured by a single device concurrently.
[00136] At 1304, the system may determine a depth map based at least in part on the image data, the depth data, and/or the sensor data. For example, the system may determine the depth map of the feature using various 3D reconstruction techniques and the depth data and/or image data of the feature.
[00137] At 1306, the system may determine a pose for the feature, such as a new pose or a change in pose as the feature moves or as additional depth data and/or image data is captured/generated at 1302. For example, the system may utilize an existing depth map to estimate a new pose of the feature.

[00138] At 1308, the system may extract expected surfaces based at least in part on the new pose or the pose determined for the feature and, at 1310, the system may apply one or more tracking techniques to the physical parameter map based at least in part on the local surface normals 1212 generated as discussed above with respect to process 1200 of FIG. 12.
[00139] At 1312, the system may update the depth map. For example, the system may project the depth map onto a signed distance function (SDF) and perform one or more weighted average updates with respect to the SDF. The system may also apply one or more weights to various voxels, surfaces, and/or pixels of the SDF. In some cases, the system may then output the SDF or the depth map as a 3D mesh of the feature for use by other processes, such as the process 1200 of FIG. 12 for overlaying or integrating of physical parameter data on the 3D mesh.
[00140] FIG. 14 is an example pictorial diagram 1400 of a multi-modal volumetric three-dimensional model 1402 of a feature of a patient according to some implementations. In the current example, the feature is a foot of a human and the multimodal volumetric three-dimensional model 1402 is showing a physical parameter, such as temperature in the current example, with respect to the 3D model 1402. For instance, in the current example, the model 1402 includes a high region (or heat source for temperature), generally indicated by 1404. In this case, the system shows multiple regions proximate or about the hot spot 1404 having increasing temperature ranges that may be utilized to determine a size or region associated with the hot spot 1404 and may be used for further diagnostics of the foot.
[00141] Although the discussion above sets forth example implementations of the described techniques, other architectures may be used to implement the described functionality and are intended to be within the scope of this disclosure. Furthermore, although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.
EXAMPLE CLAUSES
[00142] A. A method comprising: capturing, via user equipment, sensor data of a feature of a human body; generating, based at least in part on the sensor data, a three-dimensional mesh of the feature of the human body; generating, based at least in part on the sensor data, a first physical parameter map associated with the feature of the human body and a first physical parameter; generating, based at least in part on the first physical parameter map and the three-dimensional mesh of the feature of the human body, a multi-modal volumetric model of the feature of the human body; and outputting the multi-modal volumetric model.
[00143] B. The method of A, wherein generating the multi-modal volumetric model of the feature of the human body includes aligning values of the first physical parameter map to local surfaces of the three-dimensional mesh based at least in part on known characteristics of the user equipment.
[00144] C. The method of A, wherein: the sensor data includes image data, depth data, and data associated with at least one additional sensor modality; generating the three-dimensional mesh of the feature of the human body is based at least in part on the image data and the depth data; and generating the first physical parameter map associated with the feature of the human body is based at least in part on the data associated with the at least one additional sensor modality.
[00145] D. The method of C, wherein the data associated with the at least one additional sensor modality includes at least one of: temperature data, microbial load data, dielectric property data, electric field potential data, magnetic field strength data, tissue perfusion data, and/or tissue oxygenation data.
[00146] E. The method of C, further comprising: determining, based at least in part on the multi-modal volumetric model, a potential condition associated with the feature of the human body; and determining, based at least in part on the multi-modal volumetric model, a region of the feature of the body associated with the potential condition; and wherein outputting the multi-modal volumetric model includes outputting an indication of the region or the potential condition.
[00147] F. The method of E, wherein determining the potential condition further comprises determining that a quantity of the first physical parameter meets or exceeds a threshold.
[00148] G. The method of E, wherein determining the potential condition further comprises determining that a size, ratio, or percentage of the region associated with the potential condition meets or exceeds a threshold.
[00149] H. The method of E, further comprising determining, based at least in part on the region and the potential condition, a recommended potential therapy and wherein outputting the multi-modal volumetric model includes outputting the recommended potential therapy.
[00150] I. The method of E, wherein the sensor data is first sensor data, the multi-modal volumetric model is a first multi-modal volumetric model, and the method further comprises: capturing, via user equipment, second sensor data of the feature of the human body, the second sensor data captured at a second time subsequent to a time at which the first sensor data was captured and the second sensor data including second image data, second depth data, and second data associated with the at least one additional sensor modality; generating, based at least in part on the second image data and the second depth data, a second three-dimensional mesh of the feature of the human body; generating, based at least in part on the second data associated with the at least one additional sensor modality, a second physical parameter map associated with the feature of the human body and the first physical parameter; generating, based at least in part on the second physical parameter map and the second three-dimensional mesh of the feature of the human body, a second multi-modal volumetric model of the feature of the human body, the second multi-modal volumetric model including the region; determining a difference in the potential condition between the first multi-modal volumetric model and the second multi-modal volumetric model; and outputting the difference.
[00151] J. The method of I, further comprising determining a severity of the potential condition is decreasing based at least in part on a reduction in the size of the region or a reduction in the quantity of the first physical parameter.
[00152] K. The method of I, further comprising determining a severity of the potential condition is increasing based at least in part on an increase in the size of the region or an increase in the quantity of the first physical parameter.
[00153] L. The method of A, further comprising generating, based at least in part on the sensor data, a second physical parameter map associated with the feature of the human body and a second physical parameter; and wherein generating the multi-modal volumetric model of the feature of the human body is based at least in part on the second physical parameter map.
[00154] M. A system comprising a first sensor device to capture image data; a second sensor device to capture depth data; a third sensor device to capture data associated with a first physical parameter; one or more processors; and one or more non-transitory computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising generating, based at least in part on the image data and the depth data, a three-dimensional mesh of a feature of a human body; generating, based at least in part on the data associated with the first physical parameter, a first physical parameter map associated with the feature of the human body and the first physical parameter; generating, based at least in part on the first physical parameter map and the three-dimensional mesh of the feature of the human body, a multi-modal volumetric model of the feature of the human body; and outputting the multi-modal volumetric model.
[00155] N. The system of M, wherein the data associated with the first physical parameter includes at least one of temperature data, microbial load data, dielectric property data, electric field potential data, magnetic field strength data, tissue perfusion data, and/or tissue oxygenation data.
[00156] O. The system of M, further comprising a fourth sensor to capture data associated with a second physical parameter, and wherein the operations further comprise generating, based at least in part on the data associated with the second physical parameter, a second physical parameter map associated with the feature of the human body and the second physical parameter; and wherein generating the multi-modal volumetric model of the feature of the human body is based at least in part on the second physical parameter map.
[00157] P. The system of M, wherein the first physical parameter is temperature and the operations further comprise determining, based at least in part on the data associated with the first physical parameter, a gradient associated with the first physical parameter and the feature of the human body; determining, based at least in part on the gradient, a local minima associated with the first physical parameter and the feature of the human body; determining, based at least in part on the local minima and the three-dimensional mesh, a region of the feature of the human body associated with the local minima; responsive to determining that one or more thresholds have been met or exceeded with respect to the local minima or the region, identifying the region as a potential necrosis; and sending to a remote device an alert associated with the potential necrosis.
[00158] Q. The system of M, wherein the first physical parameter is temperature and the operations further comprise determining, based at least in part on the data associated with the first physical parameter, a gradient associated with the first physical parameter and the feature of the human body; determining, based at least in part on the gradient, a local maxima associated with the first physical parameter and the feature of the human body; determining, based at least in part on the local maxima and the three-dimensional mesh, a region of the feature of the human body associated with the local maxima; responsive to determining that one or more thresholds have been met or exceeded with respect to the local maxima or the region, identifying the region as a potential pre-ulcerative lesion or deep tissue pressure injury; and sending to a remote device an alert associated with the potential pre-ulcerative lesion or deep tissue pressure injury.
[00159] R. One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising receiving, from user equipment, image data, depth data, and sensor data of a feature of an individual; generating, based at least in part on the image data and the depth data, a three-dimensional mesh of the feature of the individual; generating, based at least in part on the sensor data and the three-dimensional mesh, a multi-modal volumetric model including a representation of a first physical parameter of the feature of the individual; and outputting the multi-modal volumetric model.
[00160] S. The one or more non-transitory computer-readable media of R, wherein the multi-modal volumetric model includes a representation of a second physical parameter of the feature of the individual.
[00161] T. The one or more non-transitory computer-readable media of R, wherein the multi-modal volumetric model is associated with a point in time and the one or more non-transitory computer-readable media stores a plurality of multi-modal volumetric models associated with the feature of the individual, each multi-modal volumetric model captured at a different point in time, and the plurality of multi-modal volumetric models forms a record of the feature of the individual.
[00162] U. The one or more non-transitory computer-readable media of R, wherein the image data, depth data, and sensor data of the feature of the individual is received on a continuous or periodic basis and the operations further comprise determining a change associated with the first physical parameter and the feature of the individual with respect to a prior multi-modal volumetric model and sending an alert indicating the change.
[00163] While the example clauses described above are described with respect to one particular implementation, it should be understood that, in the context of this document, the content of the example clauses can also be implemented via a method, device, system, a computer-readable medium, and/or another implementation. Additionally, any of the examples A-U may be implemented alone or in combination with any other one or more of the examples A-U.
CONCLUSION
[00164] While one or more examples of the techniques described herein have been described, various alterations, additions, permutations and equivalents thereof are included within the scope of the techniques described herein. As can be understood, the components discussed herein are described as divided for illustrative purposes. However, the operations performed by the various components can be combined or performed in any other component. It should also be understood that components or steps discussed with respect to one example or implementation may be used in conjunction with components or steps of other examples.
[00165] In the description of examples, reference is made to the accompanying drawings that form a part hereof, which show by way of illustration specific examples of the claimed subject matter. It is to be understood that other examples can be used and that changes or alterations, such as structural changes, can be made. Such examples, changes or alterations are not necessarily departures from the scope with respect to the intended claimed subject matter. While the steps herein may be presented in a certain order, in some cases the ordering may be changed so that certain inputs are provided at different times or in a different order without changing the function of the systems and methods described. The disclosed procedures could also be executed in different orders. Additionally, various computations described herein need not be performed in the order disclosed, and other examples using alternative orderings of the computations could be readily implemented. In addition to being reordered, the computations could also be decomposed into sub-computations with the same results.

Claims

WHAT IS CLAIMED IS:
1. A method comprising: capturing, via user equipment, sensor data of a feature of a human body; generating, based at least in part on the sensor data, a three-dimensional mesh of the feature of the human body; generating, based at least in part on the sensor data, a first physical parameter map associated with the feature of the human body and a first physical parameter; generating, based at least in part on the first physical parameter map and the three-dimensional mesh of the feature of the human body, a multi-modal volumetric model of the feature of the human body; and outputting the multi-modal volumetric model.
2. The method of claim 1, wherein generating the multi-modal volumetric model of the feature of the human body includes aligning values of the first physical parameter map to local surfaces of the three-dimensional mesh based at least in part on known characteristics of the user equipment.
3. The method of claim 1, wherein: the sensor data includes image data, depth data, and data associated with at least one additional sensor modality; generating the three-dimensional mesh of the feature of the human body is based at least in part on the image data and the depth data; and generating the first physical parameter map associated with the feature of the human body is based at least in part on the data associated with the at least one additional sensor modality.
4. The method of claim 3, further comprising: determining, based at least in part on the multi-modal volumetric model, a potential condition associated with the feature of the human body; and determining, based at least in part on the multi-modal volumetric model, a region of the feature of the body associated with the potential condition; and wherein outputting the multi-modal volumetric model includes outputting an indication of the region or the potential condition.
5. The method of claim 4, wherein determining the potential condition further comprises determining that a quantity of the first physical parameter meets or exceeds a threshold.
6. The method of claim 4, wherein determining the potential condition further comprises determining that a size, ratio, or percentage of the region associated with the potential condition meets or exceeds a threshold.
7. The method of claim 4, further comprising determining, based at least in part on the region and the potential condition, a recommended potential therapy and wherein outputting the multi-modal volumetric model includes outputting the recommended potential therapy.
8. The method of claim 4, wherein the sensor data is first sensor data, the multi-modal volumetric model is a first multi-modal volumetric model, and the method further comprises: capturing, via user equipment, second sensor data of the feature of the human body, the second sensor data captured at a second time subsequent to a time at which the first sensor data was captured and the second sensor data including second image data, second depth data, and second data associated with the at least one additional sensor modality; generating, based at least in part on the second image data and the second depth data, a second three-dimensional mesh of the feature of the human body; generating, based at least in part on the second data associated with the at least one additional sensor modality, a second physical parameter map associated with the feature of the human body and the first physical parameter; generating, based at least in part on the second physical parameter map and the second three-dimensional mesh of the feature of the human body, a second multi-modal volumetric model of the feature of the human body, the second multi-modal volumetric model including the region; determining a difference in the potential condition between the first multi-modal volumetric model and the second multi-modal volumetric model; and outputting the difference.
9. The method of claim 8, further comprising determining a severity of the potential condition is decreasing based at least in part on a reduction in the size of the region or a reduction in the quantity of the first physical parameter.
10. The method of claim 8, further comprising determining a severity of the potential condition is increasing based at least in part on an increase in the size of the region or an increase in the quantity of the first physical parameter.
11. The method of claim 1, further comprising: generating, based at least in part on the sensor data, a second physical parameter map associated with the feature of the human body and a second physical parameter; and wherein generating the multi-modal volumetric model of the feature of the human body is based at least in part on the second physical parameter map.
12. A system comprising: a first sensor device to capture image data; a second sensor device to capture depth data; a third sensor device to capture data associated with a first physical parameter; one or more processors; and one or more non-transitory computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: generating, based at least in part on the image data and the depth data, a three-dimensional mesh of a feature of a human body; generating, based at least in part on the data associated with the first physical parameter, a first physical parameter map associated with the feature of the human body and the first physical parameter; generating, based at least in part on the first physical parameter map and the three-dimensional mesh of the feature of the human body, a multi-modal volumetric model of the feature of the human body; and outputting the multi-modal volumetric model.
13. The system of claim 12, wherein the data associated with the first physical parameter includes at least one of: temperature data, microbial load data, dielectric property data, electric field potential data, magnetic field strength data, tissue perfusion data, or tissue oxygenation data.
14. The system of claim 12, further comprising a fourth sensor to capture data associated with a second physical parameter and wherein the operations further comprise: generating, based at least in part on the data associated with the second physical parameter, a second physical parameter map associated with the feature of the human body and the second physical parameter; and wherein generating the multi-modal volumetric model of the feature of the human body is based at least in part on the second physical parameter map.
15. The system of claim 12, wherein the first physical parameter is temperature and the operations further comprise: determining, based at least in part on the data associated with the first physical parameter, a gradient associated with the first physical parameter and the feature of the human body; determining, based at least in part on the gradient, a local minima associated with the first physical parameter and the feature of the human body; determining, based at least in part on the local minima and the three-dimensional mesh, a region of the feature of the human body associated with the local minima; responsive to determining that one or more thresholds have been met or exceeded with respect to the local minima or the region, identifying the region as a potential necrosis; and sending to a remote device an alert associated with the potential necrosis.
16. The system of claim 12, wherein the first physical parameter is temperature and the operations further comprise: determining, based at least in part on the data associated with the first physical parameter, a gradient associated with the first physical parameter and the feature of the human body; determining, based at least in part on the gradient, a local maxima associated with the first physical parameter and the feature of the human body; determining, based at least in part on the local maxima and the three-dimensional mesh, a region of the feature of the human body associated with the local maxima; responsive to determining that one or more thresholds have been met or exceeded with respect to the local maxima or the region, identifying the region as a potential pre-ulcerative lesion or deep tissue pressure injury; and sending to a remote device an alert associated with the potential pre-ulcerative lesion or deep tissue pressure injury.
17. One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: receiving, from user equipment, image data, depth data, and sensor data of a feature of an individual; generating, based at least in part on the image data and the depth data, a three-dimensional mesh of the feature of the individual; generating, based at least in part on the sensor data and the three-dimensional mesh, a multi-modal volumetric model including a representation of a first physical parameter of the feature of the individual; and outputting the multi-modal volumetric model.
18. The one or more non-transitory computer-readable media of claim 17, wherein the multi-modal volumetric model includes a representation of a second physical parameter of the feature of the individual.
19. The one or more non-transitory computer-readable media of claim 17, wherein the multi-modal volumetric model is associated with a point in time and the one or more non-transitory computer-readable media stores a plurality of multi-modal volumetric models associated with the feature of the individual, each multi-modal volumetric model captured at a different point in time, and the plurality of multi-modal volumetric models forms a record of the feature of the individual.
20. The one or more non-transitory computer-readable media of claim 17, wherein the image data, depth data, and sensor data of the feature of the individual is received on a continuous or periodic basis and the operations further comprise determining a change associated with the first physical parameter and the feature of the individual with respect to a prior multi-modal volumetric model and sending an alert indicating the change.