
WO2024076683A1 - Image processing for medical condition diagnosis - Google Patents


Info

Publication number
WO2024076683A1
Authority
WO
WIPO (PCT)
Prior art keywords
computing device
data
mobile computing
prediction
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2023/034553
Other languages
French (fr)
Inventor
Anton LEBIDEV
Konstantin SEMIANOV
Artem SEMYANOV
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neatsy Inc
Original Assignee
Neatsy Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neatsy Inc
Publication of WO2024076683A1


Classifications

    • G16H 40/67 - ICT specially adapted for the management or operation of medical equipment or devices, for remote operation
    • G06T 7/0012 - Image analysis; inspection of images; biomedical image inspection
    • G06V 10/82 - Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V 40/10 - Recognition of human or animal bodies (e.g., vehicle occupants or pedestrians) or body parts (e.g., hands) in image or video data
    • G16H 10/20 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data, for electronic clinical trials or questionnaires
    • G16H 30/20 - ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40 - ICT specially adapted for processing medical images, e.g. editing
    • G16H 40/63 - ICT specially adapted for the management or operation of medical equipment or devices, for local operation
    • G16H 50/20 - ICT specially adapted for medical diagnosis, e.g. computer-aided diagnosis based on medical expert systems
    • G16H 50/70 - ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
    • G06N 20/20 - Machine learning; ensemble learning
    • G06N 3/0475 - Neural network architectures; generative networks
    • G06N 3/09 - Neural network learning methods; supervised learning
    • G06N 5/01 - Dynamic search techniques; heuristics; dynamic trees; branch-and-bound
    • G06T 2200/28 - Indexing scheme involving image processing hardware
    • G06T 2207/10024 - Image acquisition modality: color image
    • G06T 2207/10028 - Image acquisition modality: range image, depth image, 3D point clouds
    • G06T 2207/20081 - Special algorithmic details: training, learning
    • G06T 2207/20084 - Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30196 - Subject of image: human being, person

Definitions

  • the collecting of the data from the user by the mobile computing device includes capturing, by a red-green-blue (RGB) camera or a red-green-blue depth (RGBD) camera of the mobile computing device, at least one of RGB images or RGB videos.
  • the collecting, by the mobile computing device, the data from the user includes capturing, by a camera of the mobile computing device, at least one of images or videos.
  • the preprocessing, by the feature extraction circuit, the collected data to extract the feature includes preprocessing, by a preprocessing algorithm of the feature extraction circuit, the at least one of the images or the videos by using pre-trained neural networks.
  • the preprocessing, by the feature extraction circuit, the collected data to extract the feature includes preprocessing, by a preprocessing algorithm of the feature extraction circuit, the at least one of the images or the videos by using computer vision algorithms or non-trainable algorithms based on the computer vision algorithms.
  • the method further includes collecting, by the mobile computing device, data input by the user via a questionnaire.
  • In some embodiments, the method further includes transforming, by the feature extraction circuit, the collected data into point cloud data and/or neural network embeddings.
  • the method further includes training a prediction algorithm of the prediction circuit using synthetic data, data collected from a plurality of users, or a combination of both.
  • the method further includes training a first prediction algorithm of the prediction circuit using synthetic data.
  • the method further includes training a second prediction algorithm of the prediction circuit using data collected from a plurality of users.
  • the method further includes aggregating, by an aggregation algorithm module, a first output of the first prediction algorithm and a second output of the second prediction algorithm to predict the probability of the medical condition of the user.
  • the method further includes training a prediction algorithm of the prediction circuit using features extracted by the feature extraction module.
  • FIG. 1 is an example of a system diagram of a medical condition detecting system having a mobile computing device, according to some embodiments of the present application.
  • FIG. 2 is an example of a system diagram of a medical condition detecting system having a mobile computing device and an external computing system, according to some embodiments of the present application.
  • FIG. 3 is another example of a system diagram of a medical condition detecting system having a mobile computing device and an external computing system, according to some embodiments of the present application.
  • FIG. 4 is yet another example of a system diagram of a medical condition detecting system having a mobile computing device and an external computing system, according to some embodiments of the present application.
  • FIG. 5 is an example block diagram of a data collection module in a medical condition detection system, according to some embodiments of the present application.
  • FIG. 6 is an example block diagram of a feature processing module in a medical condition detection system, according to some embodiments of the present application.
  • FIG. 7 is an example block diagram of a prediction module in a medical condition detection system, according to some embodiments of the present application.
  • FIG. 8 is a flowchart diagram of a process for detecting a medical condition, according to some embodiments of the present application.
  • FIGs. 9a-9c are flowchart diagrams of a process for detecting one or more medical conditions, according to some embodiments of the present application.
  • FIGs. 10a-10b are examples of point clouds generated during diagnosis, according to some embodiments of the present application.
  • the proposed approach is an advanced system and method that allow users to calculate a probability of the presence of certain health conditions by utilizing a mobile computing device (e.g., a smartphone, tablet, or laptop).
  • the medical conditions that can be detected using the disclosed systems and methods include ones having symptoms that may be observed with a modern mobile device without the use of specialized sensors and/or equipment.
  • Non-limiting examples of such medical conditions include flat feet, foot over/under-pronation, hallux valgus, nerd neck (e.g., forward head posture), and scoliosis.
  • because visual analysis is a major component of the diagnosis of such conditions, they are suitable for diagnosis using a mobile computing device.
  • a preliminary step in performing the assessment is conducted through visual analysis. Only after performing the preliminary step, does the medical professional decide whether additional testing (e.g., X-rays) should be performed.
  • module refers to a component of an apparatus, which may be implemented as hardware (e.g., chips, circuits, processors, etc.), software (e.g., applications, API calls, function library, embedded code, etc.), or a combination of hardware and software.
  • FIGs. 1, 2, 3, and 4 illustrate various examples of an architecture of systems 100, 200, 300, 400 for determining a probability of a medical condition for a user.
  • the systems 100, 200, 300, 400 include a mobile computing device 110, and optionally include an external computing system 210 (external as to the mobile computing device 110).
  • the mobile computing device 110 may be, but is not limited to, a smartphone, a tablet, a laptop, a virtual reality (VR) or augmented reality (AR) headset, or any other suitable mobile communication device.
  • the external computing device 210 may be a remote system or server that is communicatively coupled to the mobile computing device 110.
  • For example, as illustrated in FIG. 1, the system 100 includes the mobile computing device 110.
  • the mobile computing device 110 includes a data collection module 112 (e.g., data collection circuit), a feature processing module 114 (e.g., feature extraction module, feature extraction circuit), and a prediction module 116 (e.g., prediction circuit).
  • the mobile computing device 110 receives one or more types of collected data 120 using one or more sensors of the mobile computing device 110, e.g., Data Source 1, Data Source 2, and Data Source 3.
  • the one or more types of collected data 120 are input and stored by the data collection module 112.
  • the data collection module 112 outputs the collected data to the feature processing module 114.
  • the feature processing module 114 preprocesses the data obtained by the data collection module 112 to extract one or more features from the preprocessed data via one or more preprocessing algorithms of the feature processing module 114. After extracting the one or more features, the feature processing module 114 outputs the extracted features to the prediction module 116.
  • the prediction module 116 receives the extracted features from the feature processing module 114 and calculates a probability (e.g., prediction) of the presence of a health condition via one or more prediction algorithms of the prediction module 116.
  • An output 122 indicative of a probability of the health condition is output by the mobile computing device 110.
  • the system 200 includes the mobile computing device 110 and the external computing device 210.
  • Each of the mobile computing device 110 and the external computing device 210 include communication components 118 and 218, respectively.
  • the communication components 118 and 218 communicatively couple the mobile computing device 110 and the external computing device 210. Accordingly, the mobile computing device 110 and the external computing device 210 may send and/or receive data from the other device via the communication components 118 and 218.
  • the mobile computing device 110 includes the data collection module 112 which may be structurally or operationally the same as, or similar to, the data collection module described in FIG 1.
  • the system 200 differs from the system 100 in that a feature processing module 214 and a prediction module 216 are disposed in the external computing device 210.
  • the collected data from the mobile computing device 110 is transmitted from the mobile computing device 110 to the external computing device 210 via the communication components 118 and 218.
  • the feature processing module 214 outputs the extracted features to the prediction module 216 within the external computing device 210.
  • the prediction module 216 then outputs a probability of the presence of a health condition of a user to the mobile computing device 110 via the communication components 118 and 218.
  • the mobile computing device 110 may then output 122 the probability of the health condition to the user of the mobile computing device 110.
  • the system 300 includes a mobile computing device 110 and an external computing system 210.
  • the mobile computing device 110 includes a data collection module 112, a feature processing module 114, and a communication component 118.
  • the external computing system 210 includes a prediction module 216 and a communication component 218.
  • the feature processing module 114 is disposed in the mobile computing device 110, not in the external computing system 210 as in the system 200.
  • the feature processing module 114 provides the extracted features to the external computing device, via the communication components 118 and 218, which in turn, outputs the extracted features to the prediction module 216.
  • the system 400 includes a mobile computing device 110 and an external computing system 210.
  • the mobile computing device 110 includes a data collection module 112, a prediction module 116, and a communication component 118.
  • the external computing system 210 includes a feature processing module 214 and a communication component 218.
  • the feature processing module 214 is disposed in the external computing system 210, not in the mobile computing device 110 as in the system 300.
  • the feature processing module 214 provides the extracted features to the mobile computing device 110, via the communication components 118 and 218, which in turn, outputs the extracted features to the prediction module 116 disposed in the mobile computing device 110.
  • the mobile computing device 110 and the external computing system 210 can each include communication components 118 and 218, respectively, that facilitate communication for each of the mobile computing device 110 and the external computing system 210 shown in FIGs. 1, 2, 3, and 4, for example, to communicate with each other over a communication network.
  • Some examples of communication networks include, but are not limited to, the Internet, an intranet, a wide area network (WAN), a local area network (LAN), a wireless network, Bluetooth, Wi-Fi, and other similar mobile communication networks.
  • the connections of the network and the communication protocols are well known to those of skill in the art.
  • the communication components typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal.
  • examples of communication media include wired media, such as a wired network or a direct-wired connection, and wireless media, such as acoustic, radio frequency (RF), and infrared media.
  • the mobile computing device 110 may not include a communication component 118 for communicating with an external server (e.g., outside of the conventional usage of the mobile computing device 110).
  • the mobile computing device 110 further includes a data collection module 112 (e.g., data collection circuit).
  • the mobile computing device 110 further includes a feature extraction module 114 (e.g., feature extraction circuit) and prediction module 116 (e.g., prediction circuit).
  • the feature extraction module and/or the prediction module are not included in a mobile computing device 110, and instead, are included in the external computing system 210.
  • the feature extraction module 214 and the prediction module 216 included in the external computing system 210 may be identical or similar to the feature extraction module 114 and the prediction module 116 in the mobile computing device 110.
  • the user performs self-diagnosis or participates in a telecommunication visit (e.g., a telemedicine or telehealth session).
  • a medical professional operates the mobile computing device 110 for a patient.
  • the data collection module 512 may be similar or identical to the data collection module 112 in the systems 100, 200, 300, and 400. In some embodiments, only one type of data is collected from the user (e.g., a patient). In some embodiments, various types of data are collected from the user. The types of data collected include, but are not limited to, red-green-blue (RGB) images that capture anteroposterior, lateral, medial, and coronal views of a human foot, RGB images of a human's back and neck, etc.
  • One or more sensors 514 equipped in the mobile computing device 110 can be utilized by the data collection module 512 to collect the RGB images.
  • one of the one or more sensors 514 may be a native camera of the mobile computing device 110 (e.g., a standard camera included with the mobile computing device 110).
  • the mobile computing device 110 may be configured to prompt the user for additional information, for example, via a questionnaire, to assess the health or medical condition of the user.
  • the one or more sensors 514 of the mobile computing device 110 may also capture RGB depth (RGBD) photographs, audio recordings (via a microphone of the mobile computing device 110), video recordings (via the native camera of the mobile computing device 110), and the like.
  • in some embodiments, the one or more sensors 514 include motion sensors, which may be native to the mobile computing device 110, to improve accuracy for detecting certain medical conditions such as limping.
  • the data collection module 512 may be configured to provide instructions and feedback to the user.
  • the instructions and feedback may be provided in any manner that is well-known in the art, including one or more of visual cues (e.g., via graphical user interfaces (GUIs), prompts), auditory cues (e.g., via a speaker of the mobile computing device 110), or tactile forms.
  • the instructions and feedback may be preprogrammed or provided in real-time by a medical professional in a telemedicine environment.
  • the data collection module 512 performs the collecting of data from the sensors 514 and stores the collected data.
  • the collected data may be stored internally in one or more storages 516 of the data collection module 512.
  • the collected data is transmitted by the mobile computing device 110 to the external computing device 210 via the communication components 118, 218, respectively.
  • the data collection module 512 transmits the output data 522 to the feature processing module 614 directly or via the communication components 118 and 218.
  • the feature processing module 614 is configured to preprocess the output data 522 obtained by the data collection module 512.
  • the data obtained by the data collection module 512 may be organized by various data types 620.
  • different preprocessing algorithms 622 may be utilized with respect to a data type 620. For example, if the data type is RGB images of a user's foot, a particular preprocessing algorithm directed to RGB images of a user's foot may be utilized to identify features of the user's foot.
  • in some embodiments, the feature processing module 614 is executed on the mobile computing device 110.
  • because the preprocessing algorithm 622 is performed internally in the mobile computing device 110, additional systems (e.g., an external server) are not required, which improves the speed of the diagnosis. Localized processing also makes health information less vulnerable to cyberattacks, since the collected data need not be transmitted to or stored by an auxiliary component.
  • the preprocessing algorithm 622 is performed by the external computing system 210.
  • processing overhead on the mobile computing device 110 is reduced.
  • the external computing system 210 may comprise superior computing capabilities in comparison to the mobile computing device 110, thereby improving the speed of processing and/or the capacity to process the collected data.
  • the preprocessing algorithm 622 may be omitted. In one embodiment, the preprocessing algorithm 622 is equivalent to a simple pass-through algorithm that forwards the raw data received from the data collection module 512. In some embodiments, the feature processing module 614 transforms the input data into a latent representation 624. Examples of transformations include, but are not limited to, transforming an RGB photo to a neural network embedding, transforming a video to a set of key RGB photos with an optional transformation to neural network embeddings, transforming RGBD photos to neural network embeddings, transforming sound to a spectrogram (via a short-time Fourier transform or wavelet transformation, for example), and transforming an RGBD video to a point cloud. The spectrogram transformation is sketched below.
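As a minimal illustration of one such transformation, the sketch below converts an audio signal into a log-magnitude spectrogram with a short-time Fourier transform. The function name and STFT parameters are illustrative choices, not values specified by the disclosure.

```python
# Minimal sketch: mono audio signal -> log-magnitude spectrogram via STFT.
import numpy as np
from scipy.signal import stft

def audio_to_spectrogram(samples: np.ndarray, sample_rate: int) -> np.ndarray:
    """Return a log-magnitude spectrogram of shape (freq_bins, frames)."""
    _, _, zxx = stft(samples, fs=sample_rate, nperseg=512, noverlap=256)
    return np.log1p(np.abs(zxx))  # log scaling compresses the dynamic range

# Example: one second of a 440 Hz tone sampled at 16 kHz.
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
spectrogram = audio_to_spectrogram(np.sin(2 * np.pi * 440.0 * t), 16000)
print(spectrogram.shape)  # (frequency bins, time frames)
```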
  • the feature processing module may include a pipeline of algorithms.
  • the data collection module 512 provides multiple types of data 620.
  • the feature processing module 614 separately processes each data type and outputs the latent representation 624 for each of the data types.
  • the prediction module 716 is responsible for predicting a probability of a medical condition based on the features 720 provided from the feature processing module 614.
  • the features include neural network embeddings, spectrograms, point clouds, hand-crafted features calculated on raw data, etc.
  • a feature 720 refers to extracted or transformed data obtained from the collected data after some processing by a preprocessing algorithm or a combination of different preprocessing algorithms. In some embodiments, the feature processing module 614 does not make any computations.
  • the feature processing module 614 supplies the raw input as the output to the prediction module 716, which takes the raw collected data as an input.
  • the prediction module 716 may be further configured to output the features 720 to the user of the mobile computing device 110 in real-time.
  • the data collection module 512 collects various types of data 620 and the feature processing module 614 outputs one or more latent representations 624.
  • the prediction module 716 may be a multimodal system. In a multimodal system, for each data source a separate prediction algorithm 722 may be utilized to calculate a probability prediction. The outputs of the prediction algorithm 722 are then aggregated by an aggregation algorithm 724.
  • Examples of the aggregation algorithm 724 include, but are not limited to, bootstrapping processing or boosting processing.
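For concreteness, the sketch below shows one aggregation step in miniature: a fixed-weight average standing in for the bootstrapping or boosting processing named above. The modality names, probabilities, and weights are hypothetical.

```python
# Minimal sketch: combine per-modality condition probabilities into one
# estimate with a fixed-weight average (an illustrative stand-in for the
# aggregation algorithm 724).
def aggregate(probabilities: dict[str, float], weights: dict[str, float]) -> float:
    total = sum(weights[name] for name in probabilities)
    return sum(probabilities[name] * weights[name] for name in probabilities) / total

per_source = {"rgb_images": 0.72, "point_cloud": 0.81, "questionnaire": 0.55}
weights = {"rgb_images": 1.0, "point_cloud": 2.0, "questionnaire": 0.5}  # hypothetical
print(round(aggregate(per_source, weights), 3))  # single aggregated probability
```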
  • the prediction algorithms 722 are one or more of the following: deep learning algorithms, linear regression algorithms, decision trees, ensembles methods, or are non-trainable algorithms based on prior knowledge.
  • the aggregated output 122 by the aggregation algorithm 724 is indicative of a probability of the health condition to the user of the mobile computing device 110.
  • the feature processing module 614 and/or the prediction module 716 include machine learning algorithms. Such algorithms require training in order to attain accuracy. The training of such algorithms requires training data and labels.
  • training data is data collected from real people by using the data collection module 512. Labeling of the collected data may be performed by a medical professional who determines a probability of an ailment for each user based on the input data. The labels should include the classification labels for the predicted conditions or the probability of such conditions. In addition, the labels may include other features related to the condition such as severity, anatomic features, anamnesis and even the subjective confidence in diagnosis from the clinician who performed the labeling.
  • This information can be taken into account in the training of the model by assigning weights for the loss function or by changing the sampling balance for training procedures based on mini-batch training. As an example, the loss weight for data with severe conditions can be made bigger to make the model pay more attention to severe cases, as in the sketch below.
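A minimal sketch of this loss weighting, assuming PyTorch; the logits, labels, and the 3x severity weight are illustrative values, not taken from the disclosure.

```python
import torch

logits = torch.tensor([0.2, -1.3, 2.1])          # raw model outputs for 3 samples
targets = torch.tensor([1.0, 0.0, 1.0])          # condition present / absent
severity_weight = torch.tensor([3.0, 1.0, 1.0])  # severe case weighted 3x (illustrative)

per_sample = torch.nn.functional.binary_cross_entropy_with_logits(
    logits, targets, reduction="none")           # one loss term per sample
loss = (severity_weight * per_sample).mean()     # severity-weighted training loss
print(float(loss))
```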
  • the data and the labels are synthesized by another algorithm.
  • Some examples of algorithms that can be used to synthesize training data include generative neural networks, three-dimensional (3D) modeling, or a Markov process.
  • labeling of the training data can be performed by medical professionals or by other algorithms based on generative parameters.
  • An example of such parameters is Meary's angle on a rigged 3D foot model in a flat feet prediction.
  • the training data consists of a combination of data collected from real people and synthetic data that is generated by machines.
  • the mixing strategy for such data during training may consist of assigning weights for the loss or changing the sampling probability for training procedures based on mini-batch training, as sketched below.
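A minimal sketch of the sampling-probability variant: each mini-batch element is drawn from the real or synthetic pool with a chosen probability. The 70/30 split and pool sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
real_pool = np.arange(1000)        # indices of collected real samples
synthetic_pool = np.arange(5000)   # indices of generated synthetic samples

def sample_batch(batch_size: int = 32, p_real: float = 0.7) -> list[tuple[str, int]]:
    """Draw a mixed mini-batch of (source, index) pairs."""
    batch = []
    for _ in range(batch_size):
        if rng.random() < p_real:
            batch.append(("real", int(rng.choice(real_pool))))
        else:
            batch.append(("synthetic", int(rng.choice(synthetic_pool))))
    return batch

print(sample_batch()[:4])  # e.g., [('real', 850), ('synthetic', 1332), ...]
```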
  • the feature processing module 614 and/or the prediction module 716 consist of algorithms based on prior knowledge.
  • types of prior knowledge include knowledge pertaining to flat feet and hallux-valgus estimations and applying that knowledge to a 3D foot scan. The following paragraphs describe these particular examples in detail.
  • FIGs. 10a and 10b illustrate examples of point clouds generated by the feature processing module 614, according to some embodiments of the application.
  • the point cloud is a data type utilized in the medical diagnosis processes disclosed herein.
  • a point cloud is a mandatory component in a majority of 3D models.
  • a point cloud can be created by a sampling procedure with a sufficient resolution.
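One hedged illustration of such a sampling procedure: draw points from a triangle mesh, choosing triangles with probability proportional to their area and sampling barycentric coordinates uniformly. The two-triangle toy mesh stands in for a real 3D foot model.

```python
import numpy as np

def sample_point_cloud(vertices: np.ndarray, faces: np.ndarray,
                       n_points: int, seed: int = 0) -> np.ndarray:
    """Area-weighted uniform sampling of a point cloud from a triangle mesh."""
    rng = np.random.default_rng(seed)
    tri = vertices[faces]                                   # (F, 3, 3) corner coords
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    chosen = rng.choice(len(faces), n_points, p=areas / areas.sum())
    u, v = rng.random((2, n_points))
    flip = u + v > 1.0                                      # fold into the triangle
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tri[chosen]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
cloud = sample_point_cloud(verts, np.array([[0, 1, 2], [1, 3, 2]]), 1000)
print(cloud.shape)  # (1000, 3)
```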
  • a 3D model can be created from a series of RGB(D) photos of a foot via any Structure from Motion (SfM) algorithm.
  • in this example, the data collection module 512 is an application that collects the RGB(D) photos of the foot, the feature processing module 614 is the SfM algorithm, and the prediction module 716 is described in detail below.
  • a medical condition is diagnosed based on either an X-ray or by a visual analysis made by a skilled medical professional.
  • the diagnosis is based on a location of the bones within the foot. Accordingly, a location of the bones may be determined by a hand-crafted (e.g., customized) non-trainable algorithm.
  • FIG. 10a illustrates an example of a point cloud generated for diagnosing a hallux-valgus condition.
  • the diagnosis requires finding a joint 1002 connecting a big toe 1006 to the rest of the foot 1004 (also known as the first metatarsophalangeal (MTP) joint).
  • This joint 1002 may be found as an extreme point in the 3D point cloud in the front part of the medial view.
  • An extreme point estimation can be performed by comparing points along the length axis and their corresponding values along the width axis. A point that is farther in the width direction than its local neighborhood is the extremum point by definition.
  • a surrogate hallux-valgus angle can be defined as an angle 1008 in the dorsal view projection between the line connecting the big toe 1006 and joint point 1002 and the line connecting the heel location 1004 and the joint point 1002.
  • the big toe 1006 and the heel point location 1004 can also be found as extrema in an anterior and a posterior view, respectively (see the sketch below).
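A simplified NumPy sketch of these extremum searches, assuming the foot point cloud is an (N, 3) array with axes ordered (length, width, height); the front-region cutoff is a hypothetical heuristic standing in for the local-neighborhood comparison described above.

```python
import numpy as np

def find_mtp_joint(points: np.ndarray) -> np.ndarray:
    """Candidate MTP joint: the point protruding farthest in width within
    the front part of the foot (the front-40% cutoff is illustrative)."""
    length = points[:, 0]
    front = points[length > np.quantile(length, 0.6)]  # toe end at large length
    return front[np.argmax(front[:, 1])]

def find_toe_and_heel(points: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Big toe and heel found as extrema along the length axis."""
    return points[np.argmax(points[:, 0])], points[np.argmin(points[:, 0])]

cloud = np.random.default_rng(0).random((2000, 3)) * [0.26, 0.10, 0.12]  # toy cloud
print(find_mtp_joint(cloud), *find_toe_and_heel(cloud))
```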
  • in some embodiments, the angle 1008 is calculated by the feature processing module 614 using an SfM algorithm. In other embodiments, it is an initial part of the processing in the prediction module 716.
  • the angle 1008 is a highly descriptive feature in certain medical condition prediction as high angle values often indicate a high probability of hallux-valgus deformity.
  • the simplest prediction model can linearly map the surrogate angle to a probability of the presence of a medical condition with fixed coefficients determined by research.
  • the prediction module 716 can take the angle 1008 as the output of the feature processing model 614 and apply the linear model that maps the angle 1008 into a probability of a medical condition, such as hallux-valgus.
  • the prediction module 716 is a pipeline of algorithms that takes a point cloud as an input from the feature processing module 614, calculates the angle 1008, and applies the mapping model to the angle 1008 to arrive at a probability of the medical condition (see the sketch below).
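A minimal sketch of this pipeline fragment, assuming the three key points have already been located. The surrogate here is taken as the deviation from collinearity in the dorsal-view projection, one plausible reading of the angle described above, and the linear coefficients are placeholders, not research-derived values.

```python
import numpy as np

def surrogate_angle(toe: np.ndarray, joint: np.ndarray, heel: np.ndarray) -> float:
    """Deviation (degrees) from collinearity of toe-joint-heel in the
    dorsal-view projection (first two coordinates)."""
    v1, v2 = toe[:2] - joint[:2], heel[:2] - joint[:2]
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return 180.0 - float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def condition_probability(angle_deg: float, a: float = 0.03, b: float = -0.2) -> float:
    """Hypothetical fixed-coefficient linear map from angle to probability."""
    return float(np.clip(a * angle_deg + b, 0.0, 1.0))

toe, joint, heel = (np.array([9.5, 1.8, 0.0]), np.array([7.0, 2.4, 0.0]),
                    np.array([0.0, 1.5, 0.0]))
angle = surrogate_angle(toe, joint, heel)
print(round(angle, 1), round(condition_probability(angle), 2))
```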
  • FIG. 10b illustrates an example of diagnosing a flat feet condition.
  • the point of interest in the point cloud may be the highest point 1050 of a longitudinal arch of the foot 1054, which can be found in the point cloud as the highest point 1050 of the bottom surface 1056 of the foot 1054.
  • the bottom surface 1056 can be calculated by splitting the point cloud into disjoint sets of points defined by a 2D grid over the floor plate. For each set of points, the point with a minimal height coordinate is computed. This set of minimal points is defined as the bottom surface 1056 (see the sketch below).
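A minimal sketch of the bottom-surface computation, assuming the cloud is an (N, 3) array with height as the third coordinate; the grid cell size is a hypothetical tuning parameter.

```python
import numpy as np

def bottom_surface(points: np.ndarray, cell: float = 0.005) -> np.ndarray:
    """Keep the minimal-height point of each occupied cell of a 2D floor grid."""
    keys = np.floor(points[:, :2] / cell).astype(np.int64)
    lowest = {}
    for key, point in zip(map(tuple, keys), points):
        if key not in lowest or point[2] < lowest[key][2]:
            lowest[key] = point
    return np.stack(list(lowest.values()))

cloud = np.random.default_rng(1).random((10000, 3)) * [0.25, 0.10, 0.08]  # toy cloud
surface = bottom_surface(cloud)
arch_point = surface[np.argmax(surface[:, 2])]  # highest point of the sole
print(arch_point)
```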
  • This highest point 1050 will not have any neighboring points that are located lower (relative to the foot) and will be closer to the camera's image plane in a medial view than any neighboring points.
  • the diagnosis can be performed by using an angle 1058 in the dorsal view projection between the line 1052 connecting the arch point 1050 and the joint point and the line 1060 connecting the arch point 1050 and the heel location in the same manner as in the diagnosis of hallux valgus.
  • in another example, each pixel of a captured depth image can be transformed into a point in a point cloud, and these points are filtered by the measured distance to the camera (e.g., one of the one or more sensors 514 is a distance sensor), so that only the points representing the foot and the floor remain.
  • the floor can then be found and filtered out by a random sample consensus (RANSAC) algorithm, as sketched below.
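A self-contained NumPy sketch of RANSAC floor removal (libraries such as Open3D provide an equivalent built-in plane segmentation); the distance threshold and iteration count are illustrative.

```python
import numpy as np

def remove_floor(points: np.ndarray, threshold: float = 0.005,
                 iterations: int = 200, seed: int = 0) -> np.ndarray:
    """Fit a plane to random point triples, keep the plane with the most
    inliers, and drop those inliers (the floor)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        normal /= norm
        distances = np.abs((points - sample[0]) @ normal)
        inliers = distances < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]         # everything except the floor plane
```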
  • the foot point cloud for the medial view is thus determined and can be utilized to estimate the same points for the 3D model as in the previous example, and the same angle can be calculated on those points.
  • Another example is through the use of only an RGB camera.
  • This method requires capturing an image using a light source above the foot, so the arch area is covered by shadow.
  • the foot area can be segmented using various computer vision techniques including pretrained neural networks or simple color segmentation.
  • the arch point can be found as the upper point on the edge line between the lighted foot and the shadows.
  • This edge line can be determined by classic computer vision algorithms such as a Canny algorithm, a Sobel operator, a Laplacian of Gaussian (LoG) algorithm, or any other suitable algorithm.
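A hedged OpenCV sketch of the Canny variant: detect the light/shadow edge line and take its topmost pixel as the arch point. Real use would first segment the foot as described above; the synthetic test image and thresholds are placeholders.

```python
import cv2
import numpy as np

def arch_point_from_shadow(image_bgr: np.ndarray) -> tuple[int, int]:
    """Topmost point of the light/shadow edge line in a medial-view photo."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)   # light/shadow boundary candidates
    ys, xs = np.nonzero(edges)
    if len(ys) == 0:
        raise ValueError("no edges detected")
    top = np.argmin(ys)                # image y axis grows downward
    return int(xs[top]), int(ys[top])

# Synthetic example: bright "lit" region above a dark "shadow" region.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[:60] = 200
print(arch_point_from_shadow(img))     # a point on the synthetic edge line
```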
  • the feature processing module 614 or the prediction module 716 can include the points of interest position estimation.
  • a method 800 for detecting medical conditions is illustrated, according to some embodiments.
  • the method 800 may be executed by any of the systems 100, 200, 300, or 400.
  • the method 800 includes steps 802-806.
  • in step 802, a mobile computing device, such as the mobile computing device 110, collects data from a user.
  • the step 802 may be executed according to any of the manners described above with respect to collecting data from a user using the mobile computing device 110.
  • in step 804, the collected data from step 802 is preprocessed to extract a feature.
  • a feature may be a neural network embedding, a spectrogram, a list of key points, or a point cloud.
  • a feature may also be a region of interest in an image or a measurement, for example, the length of a point cloud, and so on.
  • the feature may be extracted according to any of the manners described above with respect to preprocessing the collected data.
  • the step 804 may be executed by the mobile computing device 110 or by a remote computing device, such as the external computing device 210.
  • in step 806, the extracted feature from step 804 is used to predict a probability of a medical condition.
  • the probability may be predicted according to any of the manners described above with respect to predicting the probability.
  • the step 806 may be executed by the mobile computing device 110 or by the remote computing device such as the external computing device 210.
  • a method 902 is illustrated.
  • in step 904, a mobile computing device, such as the mobile computing device 110, collects data from a user. In step 906, the mobile computing device sends the collected data to a remote computing device, such as the external computing device 210.
  • the remote computing device preprocesses the collected data to extract a feature.
  • the remote computing device sends the extracted feature to the mobile computing device.
  • the mobile computing device predicts a probability of a medical condition of the user based on the extracted feature.
  • the mobile computing device then outputs the probability of the medical condition to the user.
  • a mobile computing device such as the mobile computing device 110 collects data from a user (step 922).
  • the mobile computing device preprocesses the collected data to extract a feature (step 924), then sends the extracted feature to a remote computing device (step 926), such as the external computing device 210.
  • the remote computing device predicts a probability of a medical condition of the user based on the extracted feature (step 928).
  • the external computing device sends the probability of the medical condition to the mobile computing device.
  • the mobile computing device then outputs the probability of the medical condition.
  • a method 940 is illustrated.
  • a mobile computing device such as the mobile computing device 110 collects data from a user.
  • the mobile computing device sends the collected data to a remote computing device, such as the external computing device 210.
  • the remote computing device preprocesses the collected data to extract a feature.
  • the remote computing device predicts a probability of a medical condition of the user based on the extracted feature. The remote computing device then sends the probability of the medical condition to the mobile computing device (step 950), and the mobile computing device outputs the probability of the medical condition (step 952).

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Pathology (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

Embodiments of the present application disclose a method and a related system for detecting medical conditions. In the method, a mobile computing device collects data (visual, sensor, etc.) from a user. A feature extraction circuit preprocesses the collected data to extract a feature. Based on the extracted feature, a prediction circuit determines a probability of the presence of a medical condition in the user. The embodiments provide a cost-effective and convenient approach to diagnosing certain health conditions that are visibly identifiable.

Description

IMAGE PROCESSING FOR MEDICAL CONDITION DIAGNOSIS
RELATED APPLICATIONS
[0001] This application claims priority to US Provisional Application 63/413,575 filed on October 5, 2022, which is incorporated herein in its entirety.
FIELD OF THE TECHNOLOGY
[0002] The present disclosure generally relates to medical diagnosis, and more specifically to systems and methods for detecting medical conditions that have visible symptoms.
BACKGROUND
[0003] As medical sciences continue to advance, the number of identifiable health conditions also continues to grow. Some health conditions are diagnosed by specific tests that require particular and expensive equipment. Some health conditions are diagnosed after a patient completes a questionnaire and a visual analysis is performed by a medical professional. In some cases, a visual analysis must be performed prior to conducting a particular test. An example of such an ailment is flat feet. The presence of, or at least a suspicion of, the condition can be obtained via a visual analysis by the medical professional before conducting a deeper analysis using expensive equipment, such as magnetic resonance imaging (MRI) or an X-ray.
[0004] With advanced technologies, mobile devices are now equipped with various types of sensors. Some examples of such sensors include cameras, microphones, depth cameras, light detection and ranging sensors (lidars), and so on. As medical care becomes cost-prohibitive and, at times, access to medical devices becomes restricted for many people, there is a need for a more cost-effective and convenient approach to diagnosing some visually identifiable health conditions.
SUMMARY
[0005] Embodiments of this disclosure provide for a system and method that allow users to calculate a probability of the presence of certain health conditions by utilizing a mobile computing device (e.g., a smartphone, tablet, or laptop). These mobile computing devices may include, but are not limited to, a red-green-blue (RGB) camera (i.e., a conventional camera found in most mobile computing devices), a depth camera, and/or one or more sensors which may include, for example, a lidar sensor. Embodiments of the systems and the methods disclosed herein utilize the data collected by a mobile computing device to process features and predict a probability of the presence of certain health conditions based on the processed features.
[0006] The mobile computing device is configured to collect data from users and, in some embodiments, perform feature processing via a feature processing module (e.g., feature processing circuit) and/or probability prediction via a prediction module (e.g., prediction circuit). An objective of the feature processing module is to generate features from the collected data for the prediction module to perform a prediction. An objective of the prediction module is to convert the features processed by the feature processing module into a probability of the presence of certain health conditions. In some embodiments, the prediction module and/or the feature processing module utilize machine learning algorithms, non-trainable algorithms based on prior knowledge/training, or a combination of both types of algorithms.
[0007] In some embodiments, when the feature processing module and/or the prediction module utilize machine learning algorithms, the machine learning techniques are developed using training data. The data for training the machine learning algorithms may be obtained from real data collected from users and labels associated with the real data. These labels may include, but are not limited to, features that correlate with a particular health condition. These labels may be determined by a health professional or may be self-diagnosed by the user. Examples of such labels include binary classification labels of condition presence, such as hypo-lordosis presence, hyper-lordosis presence, scoliosis presence, flat feet presence, hallux-valgus presence, cavus foot presence, and varicose veins presence. The labels can also be non-binary, such as the severity of a condition. Examples of features that correlate with diagnosis include the hallux valgus angle, Meary's angle, the first intermetatarsal angle, the lordotic angle, and bone joint coordinates. Other examples of features include the presence of condition symptoms such as pain, deformed veins, traumas, or skin pigment changes, and the activity level of the patient.
[0008] On its own or in combination with using user data, another method for training the machine learning algorithms includes generating synthetic data. For example, synthetic data may be generated by creating a three-dimensional (3D) model of a human foot and then rendering the 3D model. The rendered 3D model is synthetic data and can be used to train the machine learning algorithms in the same way as the data collected from users. In this example, the labels, such as classification labels of condition absence/presence (e.g., hypo-lordosis absence/presence, hyper-lordosis absence/presence, scoliosis absence/presence, flat feet absence/presence, hallux-valgus absence/presence, cavus foot absence/presence), the severity of the conditions, and even the anatomic features related to the condition (such as the hallux valgus angle, Meary's angle, the first intermetatarsal angle, the lordotic angle, and bone joint coordinates) may be generated automatically based on the parameters of this model. In some embodiments, the training data is a combination of collected and synthetic data.

[0009] According to a first aspect, a system for detecting medical conditions is disclosed. The system includes a mobile computing device configured to collect data from a user. The system further includes a feature extraction circuit configured to preprocess the collected data to extract a feature. The system also includes a prediction circuit configured to predict a probability of a medical condition of the user based on the extracted feature.
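By way of illustration only, the sketch below mimics the synthetic-data idea of paragraph [0008]: a generative parameter (here a hallux valgus angle) determines the label automatically, and a simple classifier is trained on the result. The 15-degree cutoff, the noise level, and the classifier choice are hypothetical, not taken from the disclosure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
true_angle = rng.uniform(0.0, 40.0, size=500)           # generative model parameter
label = (true_angle > 15.0).astype(int)                 # label follows automatically
measured = true_angle + rng.normal(0.0, 2.0, size=500)  # feature "measured" from render

model = LogisticRegression().fit(measured.reshape(-1, 1), label)
print(model.predict_proba([[22.0]])[0, 1])              # P(condition present)
```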
[0010] In some embodiments, the system further includes a remote computing device communicatively coupled to the mobile computing device. The feature extraction circuit is disposed in the remote computing device.
[0011] In some embodiments, the system further includes a remote computing device communicatively coupled to the mobile computing device. The prediction circuit is disposed in the remote computing device.
[0012] In some embodiments, the mobile computing device includes a red-green-blue (RGB) camera. The collected data includes at least one of RGB images or RGB videos captured by the RGB camera.
[0013] In some embodiments, the mobile computing device includes a red-green-blue depth (RGBD) camera for collecting data. The collected data includes at least one of RGBD images or RGBD videos captured by the RGBD camera.
[0014] In some embodiments, the collected data includes at least one of images or videos captured by the mobile computing device. The feature extraction circuit includes a preprocessing algorithm module. The preprocessing algorithm module is configured to preprocess at least one of the images or the videos by using pre-trained neural networks.
[0015] In some embodiments, the collected data includes at least one of images or videos captured by the mobile computing device. The feature extraction circuit includes a preprocessing algorithm module. The preprocessing algorithm module is configured to preprocess at least one of the images or the videos by using computer vision algorithms or non-trainable algorithms based on the computer vision algorithms.
[0016] In some embodiments, the mobile computing device is further configured to collect data input by the user via a questionnaire. In one embodiment, the feature extraction circuit is configured to transform the collected data into point cloud data. In one embodiment, the feature extraction circuit is configured to transform the collected data into neural network embeddings.
[0017] In some embodiments, the prediction circuit includes a prediction algorithm module. The prediction algorithm module is trained using at least one from the following: synthetic data or data collected from a plurality of users.
[0018] In some embodiments, the prediction circuit includes a first prediction algorithm module and a second prediction algorithm module. The first prediction algorithm module is trained using synthetic data and the second prediction algorithm module is trained using data collected by a plurality of users. A first output of the first prediction algorithm module and a second output of the second prediction algorithm module are input into an aggregation algorithm module of the prediction circuit. The probability of the medical condition of the user is determined based on an output of the aggregation algorithm module.
[0019] In some embodiments, the prediction circuit includes a prediction algorithm module. The prediction algorithm module is trained using features extracted by the feature extraction module.
[0020] According to a second aspect, a method for detecting medical conditions is disclosed. The method includes collecting, by a mobile computing device, data from a user. The method further includes preprocessing, by a feature extraction circuit, the collected data to extract a feature, and predicting, by a prediction circuit, a probability of a medical condition of the user based on the extracted feature.
[0021] In one embodiment, the method further includes sending, by the mobile computing device, the extracted feature to a remote computing device comprising the prediction circuit. In one embodiment, the method further includes sending, by the mobile computing device, the collected data to a remote computing device comprising the feature extraction circuit.
[0022] In some embodiments, the collecting of the data from the user by the mobile computing device includes capturing, by a red-green-blue (RGB) camera of the mobile computing device, at least one of RGB images or RGB videos, or capturing, by a red-green-blue depth (RGBD) camera of the mobile computing device, at least one of RGBD images or RGBD videos.
[0023] In some embodiments, the collecting, by the mobile computing device, the data from the user includes capturing, by a camera of the mobile computing device, at least one of images or videos. In one embodiment, the preprocessing, by the feature extraction circuit, the collected data to extract the feature includes preprocessing, by a preprocessing algorithm of the feature extraction circuit, the at least one of the images or the videos by using pre-trained neural networks. In one embodiment, the preprocessing, by the feature extraction circuit, the collected data to extract the feature includes preprocessing, by a preprocessing algorithm of the feature extraction circuit, the at least one of the images or the videos by using computer vision algorithms or non-trainable algorithms based on the computer vision algorithms.
[0024] In some embodiments, the method further includes collecting, by the mobile computing device, data input by the user via a questionnaire.

[0025] In some embodiments, the method further includes transforming, by the feature extraction circuit, the collected data into point cloud data and/or neural network embeddings.
[0026] In some embodiments, the method further includes training a prediction algorithm of the prediction circuit using synthetic data, using data collected from a plurality of users, or using both synthetic data and data collected from a plurality of users.
[0027] In some embodiments, the method further includes training a first prediction algorithm of the prediction circuit using synthetic data. The method further includes training a second prediction algorithm of the prediction circuit using data collected from a plurality of users. The method further includes aggregating, by an aggregation algorithm module, a first output of the first prediction algorithm and a second output of the second prediction algorithm to predict the probability of the medical condition of the user.
[0028] In some embodiments, the method further includes training a prediction algorithm of the prediction circuit using features extracted by the feature extraction circuit.
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] These and other features of the present disclosure will become readily apparent upon further review of the following specification and drawings. In the drawings, like reference numerals designate corresponding parts throughout the views. Moreover, components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the present disclosure.

[0030] Aspects of the present disclosure are best understood from the following detailed description when read with the accompanying figures. It is noted that, in accordance with the standard practice in the industry, various features are not drawn to scale. In fact, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion.
[0031] FIG. 1 is an example of a system diagram of a medical condition detecting system having a mobile computing device, according to some embodiments of the present application.
[0032] FIG. 2 is an example of a system diagram of a medical condition detecting system having a mobile computing device and an external computing system, according to some embodiments of the present application.
[0033] FIG. 3 is another example of a system diagram of a medical condition detecting system having a mobile computing device and an external computing system, according to some embodiments of the present application.
[0034] FIG. 4 is yet another example of a system diagram of a medical condition detecting system having a mobile computing device and an external computing system, according to some embodiments of the present application.
[0035] FIG. 5 is an example block diagram of a data collection module in a medical condition detection system, according to some embodiments of the present application.
[0036] FIG. 6 is an example block diagram of a feature processing module in a medical condition detection system, according to some embodiments of the present application.
[0037] FIG. 7 is an example block diagram of a prediction module in a medical condition detection system, according to some embodiments of the present application.

[0038] FIG. 8 is a flowchart diagram of a process for detecting a medical condition, according to some embodiments of the present application.
[0039] FIGs. 9a-9c are flowchart diagrams of a process for detecting one or more medical conditions, according to some embodiments of the present application.
[0040] FIGs. 10a-10b are examples of point clouds generated during diagnosis, according to some embodiments of the present application.
DETAILED DESCRIPTION
[0041] Embodiments of the disclosure are described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the disclosure are shown. The various embodiments of the disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
[0042] The proposed approach is an advanced system and method that allow users to calculate a probability of the presence of certain health conditions by utilizing a mobile computing device (e.g., smartphones, tablets, laptops). The medical conditions that can be detected using the disclosed systems and methods include ones having symptoms that may be observed with a modern mobile device without the use of specialized sensors and/or equipment. Non-limiting examples of such medical conditions include flat feet, foot over/under-pronation, hallux valgus, nerd neck (e.g., forward head posture), and scoliosis. As visual analysis is a major component of the diagnosis of such conditions, they are suitable for diagnosis using a mobile computing device. Although a final diagnosis of such conditions may require the use of specialized equipment (e.g., X-ray scans), a preliminary step in performing the assessment is conducted through visual analysis. Only after performing the preliminary step does the medical professional decide whether additional testing (e.g., X-rays) should be performed.
[0043] In the following disclosure, numerous embodiments are set forth in order to provide a more thorough description of the proposed approach. It will be apparent, however, to one skilled in the art, that the disclosure extends beyond the specific embodiments and may include techniques and/or features that are well-known by those skilled in the art. In some instances, these well-known techniques and/or features have not been described in full detail so as not to obscure the teachings of this disclosure.
[0044] In the present disclosure, the term “module” refers to a component of an apparatus, which may be implemented as hardware (e.g., chips, circuits, processors, etc.), software (e.g., applications, API calls, function library, embedded code, etc.), or a combination of hardware and software.
[0045] FIGs. 1, 2, 3, and 4 illustrate various examples of an architecture of systems 100, 200, 300, 400 for determining a probability of a medical condition for a user. As shown in FIGs. 1, 2, 3, and 4, the systems 100, 200, 300, 400 include a mobile computing device 110, and optionally include an external computing system 210 (external as to the mobile computing device 110). The mobile computing device 110 may be, but is not limited to, a smartphone, a tablet, a laptop, a virtual reality (VR)/augmented reality (AR) headset, or any other suitable mobile communication device. The external computing device 210 may be a remote system or server that is communicatively coupled to the mobile computing device 110.

[0046] For example, as illustrated in FIG. 1, the system 100 includes the mobile computing device 110. The mobile computing device 110 includes a data collection module 112 (e.g., data collection circuit), a feature processing module 114 (e.g., feature extraction module, feature extraction circuit), and a prediction module 116 (e.g., prediction circuit). The mobile computing device 110 receives one or more types of collected data 120 using one or more sensors of the mobile computing device 110, e.g., Data Source 1, Data Source 2, and Data Source 3. The one or more types of collected data 120 are input and stored by the data collection module 112. The data collection module 112 outputs the collected data to the feature processing module 114. The feature processing module 114 preprocesses the data obtained by the data collection module 112 to extract one or more features from the preprocessed data via one or more preprocessing algorithms of the feature processing module 114. After extracting the one or more features, the feature processing module 114 outputs the extracted features to the prediction module 116.
[0047] The prediction module 116 receives the extracted features from the feature processing module 114 and calculates a probability (e.g., prediction) of the presence of a health condition via one or more prediction algorithms of the prediction module 116. An output 122 indicative of a probability of the health condition is output by the mobile computing device 110.
[0048] As illustrated in FIG. 2, the system 200 includes the mobile computing device 110 and the external computing device 210. The mobile computing device 110 and the external computing device 210 include communication components 118 and 218, respectively. The communication components 118 and 218 communicatively couple the mobile computing device 110 and the external computing device 210. Accordingly, the mobile computing device 110 and the external computing device 210 may send and/or receive data from the other device via the communication components 118 and 218.

[0049] In FIG. 2, the mobile computing device 110 includes the data collection module 112, which may be structurally or operationally the same as, or similar to, the data collection module described in FIG. 1. However, the system 200 differs from the system 100 in that a feature processing module 214 and a prediction module 216 are disposed in the external computing device 210. In these embodiments, the collected data is transmitted from the mobile computing device 110 to the external computing device 210 via the communication components 118 and 218. In these embodiments, the feature processing module 214 outputs the extracted features to the prediction module 216 within the external computing device 210. The prediction module 216 then outputs a probability of the presence of a health condition of a user to the mobile computing device 110 via the communication components 118 and 218. The mobile computing device 110 may then output 122 the probability of the health condition to the user of the mobile computing device 110.
[0050] In referring to FIG. 3, the system 300 includes a mobile computing device 110 and an external computing system 210. The mobile computing device 110 includes a data collection module 112, a feature processing module 114, and a communication component 118. The external computing system 210 includes a prediction module 216 and a communication component 218. In the system 300, the feature processing module 114 is disposed in the mobile computing device 110, not in the external computing system 210 as in the system 200. In these embodiments, the feature processing module 114 provides the extracted features to the external computing device, via the communication components 118 and 218, which, in turn, outputs the extracted features to the prediction module 216.
[0051] In FIG. 4, the system 400 includes a mobile computing device 110 and an external computing system 210. The mobile computing device 110 includes a data collection module 112, a prediction module 116, and a communication component 118. The external computing system 210 includes a feature processing module 214 and a communication component 218. In the system 400, the feature processing module 214 is disposed in the external computing system 210, not in the mobile computing device 110 as in the system 300. In these embodiments, the feature processing module 214 provides the extracted features to the mobile computing device 110, via the communication components 118 and 218, which, in turn, outputs the extracted features to the prediction module 116 disposed in the mobile computing device 110.
[0052] The mobile computing device 110 and the external computing system 210 can each include communication components 118 and 218, respectively, that facilitate communication for each of the mobile computing device 110 and the external computing system 210 shown in FIGs. 1, 2, 3, and 4, for example, to communicate with each other over a communication network. Some examples of communication networks include, but are not limited to, the internet, an intranet, a wide area network (WAN), a local area network (LAN), a wireless network, Bluetooth, Wi-Fi, and other similar mobile communication networks. The connections of the network and the communication protocols are well known to those of skill in the art. The communication components typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal. By way of example, not limitation, communication components include wired media such as a wired network or a direct-wired connection, and wireless media such as acoustic, radio frequency (RF), and infrared. In an alternate embodiment, where all processing is performed by the mobile computing device 110 (as illustrated in FIG. 1), the mobile computing device 110 may not include a communication component 118 for communicating with an external server (e.g., outside of the conventional usage of a mobile computing device 110).

[0053] The mobile computing device 110 further includes a data collection module 112 (e.g., data collection circuit). In the system 100, the mobile computing device 110 further includes a feature extraction module 114 (e.g., feature extraction circuit) and a prediction module 116 (e.g., prediction circuit). In some embodiments, the feature extraction module and/or the prediction module are not included in the mobile computing device 110, and instead, are included in the external computing system 210. In these embodiments, the feature extraction module 214 and the prediction module 216 included in the external computing system 210 may be identical or similar to the feature extraction module 114 and the prediction module 116 in the mobile computing device 110. As shown in FIG. 5, the data collection module 512 (e.g., data collection circuit) includes storages and sensors, and is configured to collect data from a user. In some scenarios, the user performs self-diagnosis or participates in a telecommunication visit (e.g., a telemedicine or telehealth session). In some scenarios, a medical professional operates the mobile computing device 110 for a patient. The data collection module 512 may be similar or identical to the data collection module 112 in the systems 100, 200, 300, and 400. In some embodiments, only one type of data is collected from the user (e.g., a patient). In some embodiments, various types of data are collected from the user. The types of collected data include, but are not limited to, red-green-blue (RGB) images that capture anteroposterior, lateral, medial, and coronal views of a human foot, RGB images of a human's back and neck, etc. One or more sensors 514 equipped in the mobile computing device 110 can be utilized by the data collection module 512 to collect the RGB images. In these examples, one of the one or more sensors 514 may be a native camera of the mobile computing device 110 (e.g., a standard camera included with the mobile computing device 110).
In addition to collecting images, the mobile computing device 110 may be configured to prompt the user for additional information, for example, via a questionnaire, to assess the health or medical condition of the user. The one or more sensors 514 of the mobile computing device 110 may also capture RGB depth (RGBD) photographs and record audio (via a microphone of the mobile computing device 110) or video (via the native camera of the mobile computing device 110), and the like. In this way, it is not necessary to include any specialized biometric sensors, such as a heart rate sensor, a weight sensor, a blood sugar sensor, or other sensors, in order for the aforementioned approach to work. This is advantageous for applications such as remote medicine (telemedicine), during self-diagnosis or otherwise, or when access to specialized equipment is limited.
[0054] In some embodiments, the one or more sensors 514 include motion sensors, which may be native to the mobile computing device 110, to improve accuracy for detecting certain medical conditions such as limping. For each type of collected data, the data collection module 512 may be configured to provide instructions and feedback to the user. The instructions and feedback may be provided in any manner that is well known in the art, including one or more of visual cues (e.g., via graphical user interfaces (GUIs) or prompts), auditory cues (e.g., via a speaker of the mobile computing device 110), or tactile forms. The instructions and feedback may be preprogrammed or provided in real time by a medical professional in a telemedicine environment.
[0055] Along with providing instructions and feedback, the data collection module 512 performs the collecting of data from the sensors 514 and stores the collected data. In some embodiments, the collected data may be stored internally in one or more storages 516 of the data collection module 512. In some embodiments, the collected data is transmitted by the mobile computing device 110 to the external computing device 210 via the communication components 118 and 218, respectively. The data collection module 512 transmits the output data 522 to the feature processing module 614 directly or via the communication components 118 and 218.
[0056] As illustrated in FIG. 6, the feature processing module 614 is configured to preprocess the output data 522 obtained by the data collection module 512. The data obtained by the data collection module 512 may be organized by various data types 620. After the collected data is organized by the various data types 620, a different preprocessing algorithm 622 may be utilized for each data type 620, as in the dispatch sketch below. For example, if the data type is RGB images of a user's foot, a particular preprocessing algorithm directed to RGB images of a user's foot may be utilized to identify features of the user's foot. In some embodiments, the feature processing module 614 is implemented in the mobile computing device 110. In these embodiments, as the preprocessing algorithm 622 is performed internally in the mobile computing device 110, additional systems (e.g., an external server) are not required, which improves the speed of the diagnosis. Localized information processing also makes health information less vulnerable to cyberattacks, as transmission to, and storage by, an auxiliary component are not required.
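For illustration only, the following is a minimal Python sketch of such per-data-type dispatch. The `preprocess_all` helper, the registry structure, and any type names used with it are assumptions of this sketch, not elements mandated by the disclosure.

```python
from typing import Any, Callable, Dict

# A preprocessing algorithm 622 maps raw data of one type to a feature.
Preprocessor = Callable[[Any], Any]

def preprocess_all(collected: Dict[str, Any],
                   preprocessors: Dict[str, Preprocessor]) -> Dict[str, Any]:
    """Apply the preprocessing algorithm registered for each data type 620."""
    return {
        data_type: preprocessors[data_type](data)
        for data_type, data in collected.items()
        if data_type in preprocessors  # skip types without a registered handler
    }
```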
[0057] In some embodiments, the preprocessing algorithm 622 is performed by the external computing system 210. In these embodiments, processing overhead on the mobile computing device 110 is reduced. Further, the external computing system 210 may comprise superior computing capabilities in comparison to the mobile computing device 110, thereby improving the speed of processing and/or the capacity to process the collected data.
[0058] In some embodiments, the preprocessing algorithm 622 may be omitted. In one embodiment, the preprocessing algorithm 622 is equivalent to a simple identity algorithm that passes through the raw data received from the data collection module 512. In some embodiments, the feature processing module 614 transforms the input data into a latent representation 624. Examples of transformations include, but are not limited to, transforming an RGB photo to a neural network embedding, transforming a video to a set of key RGB photos with an optional transformation to neural network embeddings, transforming RGBD photos to neural network embeddings, transforming sound to a spectrogram (via a short-time Fourier transform or a wavelet transformation, for example), and transforming an RGBD video to a point cloud; two of these transformations are sketched below. The feature processing module may include a pipeline of algorithms. An example of such a pipeline is a transformation of the input data into neural network embeddings with subsequent dimension reduction. Another example is a point cloud calculation with subsequent points-of-interest estimation and/or statistics calculation. As stated above, in some embodiments, the data collection module 512 provides multiple types of data 620. In these embodiments, the feature processing module 614 separately processes each data type and outputs the latent representation 624 for each of the data types.
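The following is a minimal Python sketch of the sound-to-spectrogram and RGBD-to-point-cloud transformations mentioned above, assuming NumPy/SciPy are available; the function names, the STFT window length, and the pinhole camera intrinsics are illustrative assumptions of this sketch.

```python
import numpy as np
from scipy.signal import stft

def sound_to_spectrogram(samples: np.ndarray, sample_rate: int) -> np.ndarray:
    """Transform a mono audio signal into a magnitude spectrogram via STFT."""
    _, _, zxx = stft(samples, fs=sample_rate, nperseg=512)
    return np.abs(zxx)  # shape: (frequency bins, time frames)

def rgbd_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map (meters) into an N x 3 point cloud."""
    v, u = np.indices(depth.shape)        # pixel row/column grids
    z = depth
    x = (u - cx) * z / fx                 # standard pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]       # drop pixels with no depth reading
```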
[0059] As illustrated in FIG. 7, the prediction module 716 is responsible for predicting a probability of a medical condition based on the features 720 provided by the feature processing module 614. Non-limiting examples of the features include neural network embeddings, spectrograms, point clouds, hand-crafted features calculated on raw data, etc. A feature 720 refers to extracted or transformed data obtained from the collected data after some processing by a preprocessing algorithm or a combination of different preprocessing algorithms. In some embodiments, the feature processing module 614 does not make any computations and instead supplies the raw input as the output to the prediction module 716, which takes the raw collected data as an input. An example of such embodiments is a classification neural network being used as a prediction module and a camera that collects images as a data collection module. The prediction module 716 may be further configured to output the features 720 to the user of the mobile computing device 110 in real time. In some embodiments, the data collection module 512 collects various types of data 620 and the feature processing module 614 outputs one or more latent representations 624. In some embodiments, the prediction module 716 may be a multimodal system. In a multimodal system, a separate prediction algorithm 722 may be utilized for each data source to calculate a probability prediction, and the outputs of the prediction algorithms 722 are then aggregated by an aggregation algorithm 724, as in the sketch below. Examples of the aggregation algorithm 724 include, but are not limited to, bootstrapping or boosting. In these examples, the prediction algorithms 722 are one or more of the following: deep learning algorithms, linear regression algorithms, decision trees, ensemble methods, or non-trainable algorithms based on prior knowledge. The aggregated output 122 produced by the aggregation algorithm 724 is indicative of a probability of the health condition for the user of the mobile computing device 110.
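As a hedged sketch of this multimodal arrangement, the Python below runs one prediction algorithm 722 per data source and combines the per-source probabilities. A simple weighted average stands in for the aggregation algorithm 724; the per-source weights are illustrative assumptions, not a prescribed aggregation scheme.

```python
from typing import Any, Callable, Dict

def aggregate_predictions(
    features_by_source: Dict[str, Any],
    predictors: Dict[str, Callable[[Any], float]],
    weights: Dict[str, float],
) -> float:
    """Combine per-source condition probabilities into one probability."""
    total, norm = 0.0, 0.0
    for source, feature in features_by_source.items():
        p = predictors[source](feature)   # per-source probability in [0, 1]
        total += weights[source] * p
        norm += weights[source]
    return total / norm if norm > 0 else 0.0
```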
[0060] In some embodiments, the feature processing module 614 and/or the prediction module 716 include machine learning algorithms. Such algorithms require training in order to attain accuracy. The training of such algorithms requires training data and labels. In some embodiments, the training data is data collected from real people by using the data collection module 512. Labeling of the collected data may be performed by a medical professional who determines a probability of an ailment for each user based on the input data. The labels should include the classification labels for the predicted conditions or the probability of such conditions. In addition, the labels may include other features related to the condition, such as severity, anatomic features, anamnesis, and even the subjective confidence in the diagnosis from the clinician who performed the labeling. This information can be taken into account in the training of the model by assigning weights for the loss function or changing the sampling balance for training procedures based on mini-batch training. As an example, for data with severe conditions, the weight for the loss can be larger to make the model pay more attention to severe cases, as in the sketch below. In other embodiments, the data and the labels are synthesized by another algorithm. Some examples of algorithms that can be used to synthesize training data include generative neural networks, three-dimensional (3D) modeling, or a Markov process. In such cases, labeling of the training data can be performed by medical professionals or by other algorithms based on generative parameters. An example of such a parameter is a Meary's angle on a rigged 3D foot model in a flat feet prediction. In other embodiments, the training data consists of a combination of data collected from real people and synthetic data that is generated by machines. In this case, the mixing strategy for such data during training may consist in assigning weights for the loss or changing the sampling probability for training procedures based on mini-batch training.
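A minimal PyTorch sketch of the severity-weighted loss described above follows, assuming binary condition labels and a per-sample severity score in [0, 1]; the `1 + severity` weighting rule is an illustrative assumption rather than a prescribed scheme.

```python
import torch
import torch.nn.functional as F

def severity_weighted_bce(logits: torch.Tensor,
                          labels: torch.Tensor,
                          severity: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy in which severe cases contribute more to the loss."""
    weights = 1.0 + severity  # severe cases (severity near 1) weigh up to 2x
    return F.binary_cross_entropy_with_logits(
        logits, labels.float(), weight=weights, reduction="mean")
```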
[0061] In other embodiments, the feature processing module 614 and/or the prediction module 716 consist of algorithms based on prior knowledge. Examples of such prior knowledge include knowledge pertaining to flat feet and hallux-valgus estimations, applied to a 3D foot scan. The following paragraphs describe these particular examples in detail.
[0062] FIGs. 10a and 10b illustrate examples of point clouds generated by the feature processing module 614, according to some embodiments of the application. In some embodiments, the point cloud is a data type utilized in the medical diagnosis processes disclosed herein. A point cloud is a mandatory component in a majority of 3D models. For those 3D models that do not include a point cloud, a point cloud can be created by a sampling procedure with a sufficient resolution. For example, a 3D model can be created from a series of RGB(D) photos of a foot via any Structure from Motion (SfM) algorithm. In this case, the data collection module 512 is an application that collects the RGB(D) photos of the foot, the feature processing module 614 is the SfM algorithm, and the prediction module 716 is described in detail below. For clinical purposes, a medical condition is diagnosed based on either an X-ray or a visual analysis made by a skilled medical professional. For both flat feet and hallux-valgus conditions, the diagnosis is based on the location of the bones within the foot. Accordingly, a location of the bones may be determined by a hand-crafted (e.g., customized) non-trainable algorithm. FIG. 10a illustrates an example of a point cloud generated for diagnosing a hallux-valgus condition. For the hallux-valgus condition (also known as bunions), the diagnosis requires finding a joint 1002 connecting a big toe 1006 to the rest of the foot 1004 (also known as the first metatarsophalangeal (MTP) joint). This joint 1002 may be found as an extreme point in the 3D point cloud in the front part of the medial view. An extreme point estimation can be performed by comparing points along the length axis and their corresponding values along the width axis. A point that is farther in the width direction than its local neighborhood is the extremum point by definition. When the location of the joint is known, a surrogate hallux-valgus angle can be defined as an angle 1008 in the dorsal view projection between the line connecting the big toe 1006 and the joint point 1002 and the line connecting the heel location 1004 and the joint point 1002, as in the sketch below. It should be noted that, in this diagnosis example, the big toe 1006 and heel 1004 locations can also be found as extrema in an anterior and a posterior view, correspondingly. In some embodiments, the angle 1008 is calculated by the feature processing module 614 using an SfM algorithm. In other embodiments, it is an initial part of the processing in the prediction module 716. The angle 1008 is a highly descriptive feature in certain medical condition predictions, as high angle values often indicate a high probability of hallux-valgus deformity. For example, the simplest prediction model can linearly map the surrogate angle to a probability of the presence of a medical condition with fixed coefficients determined by research. In this example, the prediction module 716 can take the angle 1008 as the output of the feature processing module 614 and apply the linear model that maps the angle 1008 into a probability of a medical condition, such as hallux-valgus. In other embodiments, the prediction module 716 is a pipeline of algorithms that takes a point cloud as an input from the feature processing module 614, calculates the angle 1008, and applies the mapping model to the angle 1008 to arrive at a probability of the medical condition.
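A minimal Python sketch of the surrogate angle 1008 and the simple linear mapping described above follows. It assumes the three landmarks (big toe 1006, MTP joint 1002, heel 1004) have already been found as extreme points, and that points are 3D with the height axis last, so dropping the third coordinate yields the dorsal (top-down) projection; the coefficients of the linear mapping are placeholders for values determined by research.

```python
import numpy as np

def surrogate_hallux_valgus_angle(big_toe: np.ndarray,
                                  mtp_joint: np.ndarray,
                                  heel: np.ndarray) -> float:
    """Angle 1008 (degrees) at the MTP joint in the dorsal view projection."""
    v1 = big_toe[:2] - mtp_joint[:2]   # joint -> big toe, projected to floor plane
    v2 = heel[:2] - mtp_joint[:2]      # joint -> heel, projected to floor plane
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def angle_to_probability(angle_deg: float,
                         slope: float, intercept: float) -> float:
    """Linearly map the surrogate angle to a condition probability.

    The slope and intercept stand in for fixed coefficients determined
    by research; no particular values are asserted here.
    """
    return float(np.clip(slope * angle_deg + intercept, 0.0, 1.0))
```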
[0063] FIG. 10b illustrates an example of diagnosing a flat feet condition. In referring to FIG. 10b, the point of interest in the point cloud may be the highest point 1050 of a longitudinal arch of the foot 1054, which can be found in the point cloud as the highest point 1050 of the bottom surface 1056 of the foot 1054. The bottom surface 1056 can be calculated by splitting the point cloud into disjoint sets of points defined by a 2D grid of a floor plate, as in the sketch below. For each set of points, the point with a minimal height coordinate is computed. This set of minimal points is defined as the bottom surface 1056. The highest point 1050 will not have any neighboring points that are located lower (relative to the foot) and will be closer to the camera's image plane in a medial view than any neighboring points. When the highest point 1050 is determined, the diagnosis can be performed by using an angle 1058 in the dorsal view projection between the line 1052 connecting the arch point 1050 and the joint point and the line 1060 connecting the arch point 1050 and the heel location, in the same manner as in the diagnosis of hallux valgus.
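The bottom-surface computation described above can be sketched in Python as follows, assuming points are given as an N x 3 array with x, y on the floor plane and z as height; the grid cell size is an illustrative assumption.

```python
import numpy as np

def bottom_surface(points: np.ndarray, cell_size: float = 0.005) -> np.ndarray:
    """Keep the minimal-height point per cell of a 2D floor grid."""
    cells = np.floor(points[:, :2] / cell_size).astype(np.int64)
    lowest = {}  # (cell_x, cell_y) -> index of the minimal-height point so far
    for i, key in enumerate(map(tuple, cells)):
        j = lowest.get(key)
        if j is None or points[i, 2] < points[j, 2]:
            lowest[key] = i
    return points[sorted(lowest.values())]  # the bottom surface 1056
```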
[0064] Another method for predicting a probability of flat feet, without training an algorithm, is by utilizing an RGBD camera for a medial view. In the feature processing module 614, each pixel can be transformed into a point in a point cloud, where these points are filtered by the measured distance to the camera (e.g., one of the one or more sensors 514 is a distance sensor), so only the points representing the foot and the floor remain. The floor can be found and then filtered out by a random sample consensus (RANSAC) algorithm, as in the sketch below. At this stage, the foot point cloud for the medial view is determined and can be utilized to estimate the same points as for the 3D model in the previous example, with the same angle calculated on those points.
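A hedged sketch of the RANSAC floor-removal step follows, assuming the Open3D library is available; the distance threshold and iteration count are illustrative, not prescribed by the disclosure.

```python
import numpy as np
import open3d as o3d

def remove_floor(points: np.ndarray) -> np.ndarray:
    """Fit a plane with RANSAC and return only the off-plane (foot) points."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    _, inliers = pcd.segment_plane(distance_threshold=0.005,
                                   ransac_n=3,
                                   num_iterations=1000)
    mask = np.ones(len(points), dtype=bool)
    mask[inliers] = False              # drop the floor-plane inliers
    return points[mask]
```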
[0065] Another example is through the use of only an RGB camera. This method requires capturing an image using a light source above the foot, so that the arch area is covered by shadow. The foot area can be segmented using various computer vision techniques, including pretrained neural networks or simple color segmentation. The arch point can be found as the upper point on the edge line between the lighted foot and the shadow. This edge line can be determined by classic computer vision algorithms such as a Canny algorithm, a Sobel operator, a Laplacian of Gaussian (LoG) algorithm, or any other suitable algorithm, as in the sketch below. As for the hallux valgus prediction, either the feature processing module 614 or the prediction module 716 can include the points-of-interest position estimation.
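The following OpenCV sketch illustrates the Canny variant of this step; it assumes the input has already been cropped or segmented to the arch region (otherwise the foot outline would also produce edges), and the Canny thresholds are illustrative.

```python
import cv2
import numpy as np

def arch_point_from_medial_image(image_bgr: np.ndarray) -> tuple:
    """Find the uppermost edge pixel between the lit foot and the arch shadow."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)   # classic Canny edge detector
    ys, xs = np.nonzero(edges)         # rows and columns of edge pixels
    i = int(np.argmin(ys))             # smallest row index = highest point
    return int(xs[i]), int(ys[i])      # (x, y) of the candidate arch point
```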
[0066] In referring now to FIG. 8, a method 800 for detecting medical conditions is illustrated, according to some embodiments. The method 800 may be executed by any of the systems 100, 200, 300, or 400. The method 800 includes steps 802-806. In step 802, a mobile computing device, such as the mobile computing device 110, collects data from a user. The step 802 may be executed according to any of the manners described above with respect to collecting data from a user using the mobile computing device 110.
[0067] In step 804, the collected data from step 802 is preprocessed to extract a feature. A feature may be a neural network embedding, a spectrogram, a list of key points, or a point cloud. A feature may also be a region of interest in an image or a measurement, for example, the length of a point cloud, and so on. The feature may be extracted according to any of the manners described above with respect to preprocessing the collected data. The step 804 may be executed by the mobile computing device 110 or by a remote computing device, such as the external computing device 210.
[0068] In step 806, the extracted feature from step 804 is used to predict a probability of a medical condition. The probability may be predicted according to any of the manners described above with respect to predicting the probability. The step 806 may be executed by the mobile computing device 110 or by the remote computing device such as the external computing device 210.
[0069] In referring to FIGs. 9a, 9b, and 9c, various embodiments for detecting medical conditions (such as the method 800) are disclosed. In FIG. 9a, a method 902 is illustrated. In step 904, a mobile computing device, such as the mobile computing device 110, collects data from a user. In step 906, the mobile computing device sends the collected data to a remote computing device, such as the external computing device 210. In step 908, the remote computing device preprocesses the collected data to extract a feature. In step 910, the remote computing device sends the extracted feature to the mobile computing device. In step 912, the mobile computing device predicts a probability of a medical condition of the user based on the extracted feature. In step 914, the mobile computing device then outputs the probability of the medical condition to the user.
[0070] In FIG. 9b, a method 920 is illustrated. A mobile computing device, such as the mobile computing device 110, collects data from a user (step 922). The mobile computing device preprocesses the collected data to extract a feature (step 924), then sends the extracted feature to a remote computing device (step 926), such as the external computing device 210. The remote computing device predicts a probability of a medical condition of the user based on the extracted feature (step 928). In step 930, the external computing device sends the probability of the medical condition to the mobile computing device. In step 932, the mobile computing device then outputs the probability of the medical condition.
[0071] In FIG. 9c, a method 940 is illustrated. In step 942, a mobile computing device, such as the mobile computing device 110, collects data from a user. In step 944, the mobile computing device sends the collected data to a remote computing device, such as the external computing device 210. In step 946, the remote computing device preprocesses the collected data to extract a feature. In step 948, the remote computing device predicts a probability of a medical condition of the user based on the extracted feature. The remote computing device then sends the probability of the medical condition to the mobile computing device (step 950), which outputs the probability of the medical condition (step 952).
[0072] Although the disclosure is illustrated and described herein with reference to specific embodiments, the disclosure is not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the disclosure.


CLAIMS

What is claimed is:
1. A system for detecting medical conditions, comprising: a mobile computing device configured to collect data from a user; a feature extraction circuit configured to preprocess the collected data to extract a feature; and a prediction circuit configured to predict a probability of a medical condition of the user based on the extracted feature.
2. The system of claim 1, further comprising a remote computing device communicatively coupled to the mobile computing device, wherein the feature extraction circuit is comprised in the remote computing device.
3. The system of claim 2, wherein the prediction circuit is comprised in the remote computing device.
4. The system of claims 1, 2, or 3, wherein: the mobile computing device comprises a red-green-blue (RGB) camera; and the collected data includes at least one of RGB images or RGB videos captured by the RGB camera.
5. The system of claims 1, 2, or 3, wherein: the mobile computing device has a red-green-blue depth (RGBD) camera; and the collected data includes at least one of RGBD images or RGBD videos captured by the RGBD camera.
6. The system of claim 1, wherein: the collected data includes at least one of images or videos captured by the mobile computing device; the feature extraction circuit comprises a preprocessing algorithm module; and the preprocessing algorithm module is configured to preprocess at least one of the images or the videos by using pre-trained neural networks.
7. The system of claim 1, wherein: the collected data includes at least one of images or videos captured by the mobile computing device; the feature extraction circuit comprises a preprocessing algorithm module; and the preprocessing algorithm module is configured to preprocess at least one of the images or the videos by using computer vision algorithms or non-trainable algorithms based on the computer vision algorithms.
8. The system of claim 1, wherein the mobile computing device is further configured to collect data input by the user via a questionnaire.
9. The system as in any of claims 6-8, wherein the feature extraction circuit is configured to transform the collected data into point cloud data.
10. The system as in any of claims 6-8, wherein the feature extraction circuit is configured to transform the collected data into neural network embeddings.
11. The system of claim 1, wherein: the prediction circuit comprises a prediction algorithm module; and the prediction algorithm module is trained using at least one from the following: synthetic data or data collected from a plurality of users.
12. The system of claim 1, wherein: the prediction circuit comprises a first prediction algorithm module and a second prediction algorithm module; the first prediction algorithm module is trained using synthetic data and the second prediction algorithm module is trained using data collected by a plurality of users; a first output of the first prediction algorithm module and a second output of the second prediction algorithm module are input into an aggregation algorithm module of the prediction circuit; and the probability of the medical condition of the user is determined based on an output of the aggregation algorithm module.
13. The system of claim 1, wherein: the prediction circuit comprises a prediction algorithm module; and the prediction algorithm module is trained using features extracted by the feature extraction circuit.
14. A method for detecting medical conditions, comprising: collecting, by a mobile computing device, data from a user; preprocessing, by a feature extraction circuit, the collected data to extract a feature; and predicting, by a prediction circuit, a probability of a medical condition of the user based on the extracted feature.
15. The method of claim 14, further comprising sending, by the mobile computing device, the extracted feature to a remote computing device comprising the prediction circuit.
16. The method of claim 14, further comprising sending, by the mobile computing device, the collected data to a remote computing device comprising the feature extraction circuit.
17. The method of claim 14, wherein the collecting, by the mobile computing device, the data from the user comprises: capturing, by a red-green-blue (RGB) camera of the mobile computing device, at least one of RGB images or RGB videos.
18. The method of claim 14, wherein the collecting, by the mobile computing device, the data from the user comprises: capturing, by a red-green-blue depth (RGBD) camera of the mobile computing device, at least one of RGBD images or RGBD videos.
19. The method of claim 14, wherein the collecting, by the mobile computing device, the data from the user comprises: capturing, by a camera of the mobile computing device, at least one of images or videos; wherein the preprocessing, by the feature extraction circuit, the collected data to extract the feature comprises: preprocessing, by a preprocessing algorithm of the feature extraction circuit, the at least one of the images or the videos by using pre-trained neural networks.
20. The method of claim 14, wherein the collecting, by the mobile computing device, the data from the user comprises: capturing, by a camera of the mobile computing device, at least one of images or videos; wherein the preprocessing, by the feature extraction circuit, the collected data to extract the feature comprises: preprocessing, by a preprocessing algorithm of the feature extraction circuit, the at least one of the images or the videos by using computer vision algorithms or non-trainable algorithms based on the computer vision algorithms.
21. The method of claim 14, further comprising: collecting, by the mobile computing device, input data by the user via a questionnaire.
22. The method as in any of claims 14-21, further comprising: transforming, by the feature extraction circuit, the collected data into point cloud data.

23. The method as in any of claims 14-21, further comprising: transforming, by the feature extraction circuit, the collected data into neural network embeddings.

24. The method as in any of claims 14-21, further comprising: training a prediction algorithm of the prediction circuit using synthetic data.

25. The method as in any of claims 14-21, further comprising: training a prediction algorithm of the prediction circuit using data collected from a plurality of users.

26. The method as in any of claims 14-21, further comprising: training a prediction algorithm of the prediction circuit using synthetic data and data collected by a plurality of users.

27. The method as in any of claims 14-21, further comprising: training a first prediction algorithm of the prediction circuit using synthetic data; training a second prediction algorithm of the prediction circuit using data collected from a plurality of users; and aggregating, by an aggregation algorithm module, a first output of the first prediction algorithm and a second output of the second prediction algorithm to predict the probability of the medical condition of the user.

28. The method as in any of claims 14-21, further comprising: training a prediction algorithm of the prediction circuit using features extracted by the feature extraction circuit.
PCT/US2023/034553 2022-10-05 2023-10-05 Image processing for medical condition diagnosis Ceased WO2024076683A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263413575P 2022-10-05 2022-10-05
US63/413,575 2022-10-05

Publications (1)

Publication Number Publication Date
WO2024076683A1 true WO2024076683A1 (en) 2024-04-11

Family

ID=90608944

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/034553 Ceased WO2024076683A1 (en) 2022-10-05 2023-10-05 Image processing for medical condition diagnosis

Country Status (1)

Country Link
WO (1) WO2024076683A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119170254A (en) * 2024-08-30 2024-12-20 北京开普云信息科技有限公司 Method, device, medium and equipment for training multimodal question answering model of medical images

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160147959A1 (en) * 2014-11-20 2016-05-26 Board Of Regents, The University Of Texas System Systems, apparatuses and methods for predicting medical events and conditions reflected in gait
US20190139641A1 (en) * 2017-11-03 2019-05-09 Siemens Healthcare Gmbh Artificial intelligence for physiological quantification in medical imaging
US20200219272A1 (en) * 2019-01-07 2020-07-09 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for deriving a three-dimensional (3d) textured surface from endoscopic video
US20200303074A1 (en) * 2013-01-20 2020-09-24 Martin Mueller-Wolf Individualized and collaborative health care system, method and computer program


Similar Documents

Publication Publication Date Title
CN111862044B (en) Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium
US11151721B2 (en) System and method for automatic detection, localization, and semantic segmentation of anatomical objects
US10049457B2 (en) Automated cephalometric analysis using machine learning
JP6985371B2 (en) Computer-aided detection using multiple images from different views of interest to improve detection accuracy
US9536316B2 (en) Apparatus and method for lesion segmentation and detection in medical images
CN103260526B (en) There is ultrasonic image-forming system and the method for peak strength measuring ability
JP2022517769A (en) 3D target detection and model training methods, equipment, equipment, storage media and computer programs
US20160210774A1 (en) Breast density estimation
KR20150098119A (en) System and method for removing false positive lesion candidate in medical image
JP2013542046A (en) Ultrasound image processing system and method
JP6996303B2 (en) Medical image generator
US11250564B2 (en) Methods and systems for automatic measurement of strains and strain-ratio calculation for sonoelastography
WO2024076683A1 (en) Image processing for medical condition diagnosis
CN112515705B (en) Method and system for projection profile enabled computer-aided detection
EP4608278B1 (en) Echocardiogram classification with machine learning
JP7233792B2 (en) Diagnostic imaging device, diagnostic imaging method, program, and method for generating training data for machine learning
JP7712189B2 (en) ULTRASONIC IMAGE ANALYSIS DEVICE, ULTRASONIC DIAGNOSIS DEVICE, AND METHOD FOR CONTROLLING ULTRASONIC IMAGE ANALYSIS DEVICE
CN114511556B (en) Gastric mucosal bleeding risk early warning method, device and medical image processing equipment
US11430126B2 (en) Method and image processing apparatus for the segmentation of image data and computer program product
CN113243932A (en) Oral health detection system, related method, device and equipment
JP2019107453A (en) Image processing apparatus and image processing method
Vaish et al. Smartphone based automatic organ validation in ultrasound video
CN114399499A (en) Organ volume determination method, device, equipment and storage medium
KR20250102794A (en) Method for evaluating spinal alignment condition and device for evaluating spinal alignment condition using the same
CN120916701A (en) Device diagnostic system and method for acquiring and analyzing images from an ultrasound probe

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23875514

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE