
WO2023150153A1 - Telemedicine system - Google Patents

Telemedicine system

Info

Publication number
WO2023150153A1
WO2023150153A1 (PCT/US2023/012098)
Authority
WO
WIPO (PCT)
Prior art keywords
patient
telemedicine
web
telemedicine system
sounds
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2023/012098
Other languages
French (fr)
Inventor
Stephen Randall
Mark Gretton
Dan GIESCHEN
Nick GEISCHEN
Pablo RIVAS
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Medaica Inc
Original Assignee
Medaica Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Medaica Inc filed Critical Medaica Inc
Publication of WO2023150153A1


Classifications

    • A61B 7/00: Instruments for auscultation
    • A61B 7/02: Stethoscopes
    • A61B 7/04: Electric stethoscopes
    • A61B 7/003: Detecting lung or respiration noise
    • G16H 30/20: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing
    • G16H 40/63: ICT specially adapted for the local operation of medical equipment or devices
    • G16H 40/67: ICT specially adapted for the remote operation of medical equipment or devices
    • G16H 80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Definitions

  • Telemedicine systems enable remote diagnostics and clinical care for patients, i.e. when the health professional and patient are not physically present with each other.
  • Telehealth is generally thought of as broader in scope and includes non-clinical health care services; in this specification, the terms 'telemedicine' and 'telehealth' are used interchangeably and so 'telemedicine' should be broadly construed to include telehealth and hence include remote healthcare services that are both clinical and non-clinical.
  • However, telemedicine is more than just using Skype®, Zoom®, or Facetime® so that a doctor can look a patient in the eye.
  • For telemedicine to be truly useful, the patient must be able to collect and transmit the variety of data the healthcare professional needs to assess the patient’s health.
  • telemedicine can easily leverage patient-collectable data from simple and affordable devices, such as blood pressure cuffs, heart monitors, pulse oximeters and thermometers, etc.
  • Current solutions fail to provide uniform or easy ways for healthcare professionals to acquire more subjective or useful information from patients without a doctor’s or nurse’s supervision, e.g. listening to a patient’s body sounds (auscultation), taking an EKG, or performing an ultrasound.
  • Some telemedicine systems enable a healthcare professional to listen to auscultation sounds from a medical device, such as a digital stethoscope. These sounds are generally streamed to enable real-time consultation. Live streaming audio can however lead to dropped or delayed data packets; this can result in doctors being unable to accurately detect heart sounds (e.g. murmurs) or other critical sounds.
  • Some telemedicine systems enable the patient or a caregiver to record auscultation sounds in their own time, then send those sounds to the healthcare professional. This type of exam is sometimes referred to as a Store and Forward exam or an Asynchronous exam.
  • In a first aspect, the invention is a telemedicine system including:
  • a medical device that includes a microphone system configured (i) to detect and/or record patient sounds, (ii) to generate audio data from those sounds, and (iii) to send that audio data;
  • a file handling system configured (i) to receive, download and store the audio data from the medical device, and (ii) to make that file available for near-real-time listening to the patient sounds.
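The store-and-forward behaviour of the file handling system described above can be sketched as follows. This is a minimal illustrative sketch, not Medaica's actual implementation; all class, method and variable names are assumptions. Audio chunks accumulate until the upload is finalized, and only a complete file is ever served for playback:

```python
import hashlib

class FileHandlingSystem:
    """Illustrative sketch of the store-and-forward file handler:
    audio data arrives in chunks from the medical device, and a
    recording is only made available for playback once the complete
    file has been received."""

    def __init__(self):
        self._chunks = {}      # recording_id -> list of byte chunks
        self._complete = {}    # recording_id -> fully assembled bytes

    def receive_chunk(self, recording_id: str, chunk: bytes) -> None:
        # Accumulate audio data as it arrives from the device.
        self._chunks.setdefault(recording_id, []).append(chunk)

    def finalize(self, recording_id: str) -> str:
        # Assemble the complete file; from this point on the recording
        # can be played back, replayed, tagged and shared.
        data = b"".join(self._chunks.pop(recording_id, []))
        self._complete[recording_id] = data
        return hashlib.sha256(data).hexdigest()  # integrity fingerprint

    def get_audio(self, recording_id: str) -> bytes:
        # Only fully downloaded recordings are playable: an incomplete
        # upload raises rather than returning partial audio.
        if recording_id not in self._complete:
            raise KeyError("recording not fully downloaded yet")
        return self._complete[recording_id]
```

Gating playback on `finalize` is what distinguishes this path from live streaming: the physician always hears the complete source file, never a stream with dropped packets.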
  • The medical device may be a digital stethoscope, in which case the patient sounds are clinically relevant auscultation sounds, e.g. sounds made by the heart, lungs or other organs.
  • Conventionally, auscultation sounds detected during an audio/video telehealth session would be live streamed in real-time to a physician or other healthcare professional; live streaming provides real-time audio, but can result in dropped or delayed data packets, with the physician then being unable to accurately detect heart sounds (e.g. murmurs), other critical sounds and/or other valuable timing information.
  • Because the audio file is fully downloaded before it is played back, this invention ensures that the audio is at the highest possible quality as soon as possible, which is especially important for clinically relevant auscultation sounds.
  • The file handling system can download the audio data on demand (e.g. a pull service initiated by the recipient), or the audio data can be downloaded automatically by the system (e.g. a push service), for example at the end of a user action such as releasing a “listen” button on the system interface, typically with a very small latency of 1 or 2 seconds.
  • The file can be downloaded in the background so that it is available as soon as the healthcare professional clicks the option to review it.
  • The audio data may also be live-streamed, e.g. to enable the healthcare professional to guide the patient to accurately position the microphone system, but it is then also sent to the file handling system that downloads the audio data; as with any downloading system, the audio data can be played back once the complete audio data file has been downloaded.
  • The live-streamed audio can also be presented to the physician as a lower-resolution and/or preview version of the downloaded data file.
  • The file handling system introduces some minor latency, but ensures that the physician can hear the auscultation sounds as clearly and completely as possible, at a quality better than that of live streaming affected by dropped and delayed data packets: the downloaded file is the source file. The physician can replay the audio file, stop and ‘rewind’ it, tag sections of interest, store it in conjunction with medical notes, and share the file. The downloaded data can be played back/reviewed once it has been downloaded/sent to the physician, which can be after an imperceptible period, depending on the speed of the physician’s internet connection.
  • the audio data file is sent from the medical device to an intermediate device or web server that implements the downloading of the complete audio data file; alternatively, the downloading can be local to the healthcare professional, e.g. at their local PC or smartphone.
  • TCP layer protocol processing and IP layer protocol processing (TCP/IP) are used to send the data file from the medical device to the web server and from the server to the healthcare provider’s device.
  • TCP ensures that the data file is not damaged or lost.
  • The medical device may include (i) a speech microphone configured to detect and/or record patient speech and (ii) a second microphone configured to detect and/or record patient sounds, generate an audio dataset from those sounds, and send the audio dataset to a file handling system for downloading; the speech microphone uses one channel of a stereo channel pair, and the second microphone uses the other channel.
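The one-microphone-per-stereo-channel scheme in this claim can be illustrated with plain interleaved PCM sample lists. The function names and sample layout are illustrative assumptions, not taken from the patent; muting the speech microphone (as described later) is then just zeroing its channel:

```python
def split_stereo(interleaved):
    """Split interleaved stereo PCM samples [L0, R0, L1, R1, ...] into
    the speech channel (left) and the auscultation channel (right),
    matching the one-channel-per-microphone scheme described above.
    (Channel assignment is an assumption for illustration.)"""
    speech = interleaved[0::2]        # left channel: speech microphone
    auscultation = interleaved[1::2]  # right channel: body sounds
    return speech, auscultation

def mute_speech(speech):
    """Muting the speech microphone while the healthcare professional
    listens to auscultation sounds amounts to zeroing its channel."""
    return [0] * len(speech)
```

Because the two microphones occupy separate channels of one stereo pair, they can be captured over a single USB audio interface and processed in parallel downstream.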
  • the telemedicine system is configured to enable the healthcare professional to select whether to listen to real-time voice communication from the patient or to listen to the downloaded patient sounds (e.g. auscultation audio data) sent via the file handling system in near real-time or at any later time.
  • the system can be further configured to enable the speech microphone to be muted automatically or manually when the healthcare professional is listening to the auscultation sounds (live or forwarded).
  • the invention is implemented in a system called the Medaica system, which is described in the following sections.
  • Figure 1 is a simplified cross section of a digital electronic stethoscope.
  • Figure 2 is a simplified top view and cross section of a digital electronic stethoscope.
  • Figure 3 is a simplified diagram of the electrical design of the electronic stethoscope
  • Figure 4 is a diagram of some of the key players interacting with the Medaica system.
  • Figure 5 is a diagram of the Medaica platform.
  • Figure 6 is a system overview of one implementation of the invention.
  • Figure 7 is a diagram illustrating a patient’s journey.
  • Figure 8 is a diagram illustrating a patient’s journey.
  • Figure 9 is a diagram illustrating a doctor’s journey.
  • Figure 10 is a diagram illustrating a user’s interaction with the playback page.
  • Figure 11 shows an example of a patient’s web-app displaying an outline of a torso along with a video feed.
  • Figure 12 shows another example of a patient’s web-app with the graphical interface of a self-exam heart mode.
  • Figure 13 shows a patient’s web app displaying a countdown and recording quality window.
  • Figure 14 shows a patient’s web app displaying the torso outline, which shows when each auscultation position has been recorded successfully.
  • Figure 15 is a flow diagram summarizing the steps of the self-exam procedure.
  • Figure 16 shows a patient’s web app displaying a specific exam procedure overlaid over a live video image of the user.
  • Figure 17 shows a graphical interface of front lungs self-examination including a torso outline of a front torso and the required examination positions.
  • Figure 18 shows a graphical interface of back lungs assisted examination including a torso outline of a back torso and required examination positions.
  • Figure 19 shows a graphical interface of a video-positioning mode.
  • Figure 20 shows a simplified flow diagram illustrating when an exam starts.
  • Figure 21 shows a flow diagram illustrating the different steps according to a self-examination mode, custom examination mode or guided examination mode.
  • Figure 22 shows a diagram illustrating the system key components.
  • Figure 23 shows photographs illustrating several digital stethoscope devices.
  • Figure 24 shows photographs illustrating a number of digital stethoscope devices.
  • Figure 25 shows photographs illustrating a digital stethoscope device including a dummy socket (210).
  • Figure 26 shows top-down, side and bottom up views (respectively, descending) of a digital stethoscope device.
  • Figure 27 shows a screenshot of a healthcare provider interface.
  • Figure 28 shows a screenshot of a healthcare provider interface including an auscultation (magnified) section.
  • Figure 29 shows a patient interface including a temporary graphic of an animated help screen.
  • Figure 30 shows a screenshot of a web page of the healthcare provider user interface.
  • Figure 31 shows a screenshot of another web page of the healthcare provider user interface.
  • Figure 32 shows a screenshot of the healthcare provider interface that enables the healthcare provider to create a store and forward exam.
  • Figure 33 shows a screenshot of the automated message received by the patient from the healthcare provider.
  • Figure 34 shows a screenshot of a secure page provided to the patient with the step-by-step exam procedure based on the auscultation positions selected by the healthcare provider for them.
  • Figure 35 shows an instruction page, as displayed by the patient’s software.
  • Figure 36 shows a screenshot of the patient’s web interface displaying the first auscultation position required.
  • Figure 37 shows a screenshot of the patient’s web interface enabling the patient to review the first auscultation position recording.
  • Figure 38 shows an example of heart and lung body maps, as displayed on screen, in which each auscultation position is shown as a numbered circle.
  • Figure 39 shows an example of the heart body map indicating that the first auscultation position has been successfully recorded.
  • Figure 40 shows a screenshot of the automated message received by the patient when the examination procedure is completed.
  • Figure 41 shows a screenshot of the healthcare provider interface with the patient’s exam status updated on the dashboard.
  • Figure 42 shows an example of a web interface that enables the healthcare provider to view the auscultation file.
  • Figure 43 shows a screenshot of the automated message displaying the live exam join details.
  • Figure 44 shows a message displayed to the patient with a weblink and a secure code to enter the examination web room.
  • Figure 45 shows a window requesting that the patient enter the live exam access code and continue to the ‘Live Exam’ page.
  • Figure 46 shows a diagram with an example, of the live exam patient view on the left and the healthcare provider view on the right.
  • Figure 47 shows a screenshot of a page or menu available on the healthcare provider side, in which the healthcare provider has control over the patient’s stethoscope listen/record function.
  • Systems and methods are provided to enable a healthcare professional to conduct a remote exam from any web-enabled audio and/or video platform, not only simplifying telemedicine consultations that would otherwise require special devices and/or integration of disparate systems but also increasing the value of the telemedicine consultation.
  • the systems and methods produce unique links that are exchanged between patients and healthcare professionals to either review files, such as but not limited to a patient’s auscultation sounds, or for the patient to participate in a virtual exam.
  • the unique link can also be used to control access rights, privacy and enable additional services, such as but not limited to diagnostic analysis, research and verification.
  • The links can additionally contain rules, such as but not limited to third-party access rights, sharing/viewing rules, and financial controls such as but not limited to subscription usage and per-user limits.
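A link that carries such rules might be sketched as follows. The field names, expiry and usage-limit semantics here are illustrative assumptions, not the patent's actual scheme, and the hostname is a placeholder:

```python
import secrets
import time

class AccessLink:
    """Illustrative sketch of a web link that embeds access rules:
    an expiry time and a maximum number of uses, as described above."""

    def __init__(self, resource_id, ttl_seconds=3600, max_uses=5):
        self.token = secrets.token_urlsafe(16)   # unguessable link id
        self.resource_id = resource_id
        self.expires_at = time.time() + ttl_seconds
        self.uses_left = max_uses

    def url(self):
        # Placeholder hostname; a real deployment would use its own domain.
        return f"https://example.invalid/listen/{self.token}"

    def redeem(self, now=None):
        # Enforce the link's rules before granting access.
        now = time.time() if now is None else now
        if now > self.expires_at:
            raise PermissionError("link expired")
        if self.uses_left <= 0:
            raise PermissionError("usage limit reached")
        self.uses_left -= 1
        return self.resource_id
```

Because the token is random and the rules live with the link, the same mechanism can support receipts, third-party sharing rules, and per-user limits by adding fields to the link record.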
  • Telemedicine platforms do not provide uniform or easy support for multiple DMDs. Likewise, many DMDs will not work with any telemedicine system without extensive (and often expensive) technology integration work. This is clearly a problem for both sides of the healthcare value-chain; healthcare professionals would ideally like telemedicine to support the use of most if not all the tools they use in their typical patient exams. If a telemedicine system doesn’t support all their tools, its utility is limited.
  • While DMDs leverage mobile technology and use wireless interfaces such as Bluetooth, primarily designed for consumers, they fail to address usability problems for healthcare professionals, including a) a doctor might not wish to use a private device (their own phone) while examining a patient; that phone might ring with a personal call, and it is not ideal for sharing if they only have one DMD in the clinic, and b) Bluetooth can be difficult to use when there is other radio-enabled equipment or metal objects nearby.
  • The Medaica solutions provide an intermediary web-hub that operates separately from the telemedicine platform and can, in its simplest form, work on any web-enabled system; it can simply be accessed by a doctor and/or patient as a new window alongside their existing chosen telemedicine or video/chat/messaging solution, without requiring further integration.
  • This is further enabled with secure web-enabled links that can grant access rights to connect permitted parties and provide features to securely share, review, authenticate and export files, and to set rules over timing, sharing rights, business models, payments, etc.
  • M1 is a low-cost digital stethoscope that is aimed at telemedicine applications, rather than as a replacement for traditional stethoscopes. As such, it is aimed at the patient rather than the healthcare professional. A more detailed description of M1 now follows.
  • Medaica’s system is designed to be hardware agnostic; however, today there is no plug-and-play device that provides the simple functionality and affordability required. To that end, Medaica has produced a simple electronic stethoscope, the M1.
  • a target retail price is for example under $50.
  • a target material cost (bill of materials) is for example under USD $15.
  • Figure 1 shows a simplified cross section of M1 including examples of dimensions.
  • Figure 2 shows a top view and another cross section of the device including further examples of dimensions.
  • M1 includes a USB microphone mounted in a rigid molded enclosure. The enclosure has the basic shape of a stethoscope. The front face has a traditional stethoscope diaphragm sealed onto an acoustic chamber into which a microphone, such as an electret or piezo microphone, is mounted. In addition to the stethoscope microphone, a second microphone is mounted facing upwards towards the user: it captures the patient’s voice, detects whether background noises are loud enough to affect the stethoscope microphone, and supports noise cancelling.
  • These two microphones are connected respectively to the left and right channels of the USB stereo microphone channel so they can be processed in parallel.
  • A small “I’m alive” LED, a “now recording” LED, and a single user push button are mounted on the rear face.
  • the device is washable, so the LEDs and button are water resistant (IPX4) and fabricated as a simple membrane, like many medical and household cookery products.
  • the various electrical items are connected to a USB audio bridge IC mounted on a small PCB.
  • the device is large enough to be comfortable in the hand and therefore may contain a significant amount of empty space. This could be filled with ballast to improve the weight and feel of the device. Alternatively, the space may be used for more electronics components and a rechargeable lithium cell battery in more sophisticated and/or wireless versions.
  • The design leaves the head of the device easily viewable when held by the patient, such that in a telemedicine consultation the patient can be guided, either by the user interface or by the healthcare professional, optionally using an onscreen target/pointer via the Medaica system, to move the head of the device over specific auscultation target areas.
  • The initial design for M1 is a USB-C wired design. Additionally, the device may also support Bluetooth (BT) connectivity. Adding BT connectivity would enable connectivity to supported device platforms and would add the following components: BT transceiver, ISM band antenna, microcontroller capable of implementing the BT stack and application-level encryption, power management device and battery, plus some more UI elements and potentially an MFi (Made for Apple® iPhone®) chip. With USB 2 connectivity only, M1 is compatible with a number of platforms or devices, such as: Windows laptops and PCs, Apple laptops and PCs, Android tablets and some phones (with a readily available USB 2 to USB-C adapter) and Apple phones with a USB-C to Lightning converter and MFi device.
  • the main housing is formed from a target maximum of two injection molded plastic parts. These parts are molded from high density medical grade plastic and have sufficiently thick wall sections as to be acoustically stable. These plastic parts may be finished or plated to give a comfortable and durable finish.
  • The electronic design is based around a standard USB-to-audio bridge IC (e.g. CMedia CM6317A).
  • The Left and Right channels are used for the voice and auscultation microphones respectively.
  • Figure 3 shows a simplified diagram of the electrical design.

M1 UX philosophy
  • the website and mobile app can be used by users in “Guest” mode without any user login or sign up. This minimizes additional UX steps which could be life-saving if the user has an emergency and wants the fastest route to getting advice.
  • The website and/or mobile app recognizes when the M1 device is plugged in (and will indicate if it is not) and can then guide the user on next steps.
  • A visual indicator, such as the LED glowing white, indicates that the M1 device is correctly powered and that a data connection exists with the computer or mobile device, i.e. the stethoscope is functioning correctly and is ready for use.
  • Users of medaica.com include, but are not limited to:
  • Patients at home, such as consumers who directly connect M1 to PC, Mac, iOS or Android platforms to record heart and/or lung sounds.
  • Figure 4 shows a diagram illustrating the different players interacting with the Medaica system.
  • the Medaica system offers a number of product differentiation features, including but not limited to:
  • A simple device, e.g. no Bluetooth™ to pair, no battery to charge.
  • Figure 5 shows a diagram of the system’s platform.
  • At the patient side (51), a patient (52) connects a Medaica M1 stethoscope to a USB port of the patient’s Web-connected mobile or desktop client (53).
  • the patient enters the Medaica Patient Side (51).
  • The software recognizes the Medaica M1 UDID and enables recording of auscultation sounds.
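The recognition step might be sketched as follows. The UDID format and the device-list shape are placeholder assumptions for illustration, not the real identifier scheme:

```python
def find_m1(devices, m1_udid_prefix="MEDAICA-M1"):
    """Illustrative sketch of device recognition: scan the enumerated
    audio devices for one whose hardware identifier marks it as an M1
    stethoscope. The UDID prefix is a placeholder assumption."""
    for dev in devices:
        if dev.get("udid", "").startswith(m1_udid_prefix):
            return dev
    return None  # the UI can then prompt the user to plug the device in
```

Returning `None` rather than raising lets the web app show its "please plug in your M1" guidance instead of failing.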
  • In Store and Forward mode, the patient records auscultation sounds, guided by the UI, and can then send a unique link to those sounds to the Healthcare Professional (HCP).
  • Medaica Servers include a file handling system, such as a store and forward system, where an auscultation audio data file is downloaded and can be, like any conventional file download service, played back once the complete data file has been downloaded.
  • The file handling system on the Medaica Servers (54) introduces some minor and potentially imperceptible latency, but ensures that the physician can hear the auscultation sounds as clearly and completely as possible, at a quality that is better and more dependable for diagnosis than live streaming quality, which can be affected by dropped and delayed data packets.
  • TCP/IP may be used to send the data file from the medical device to the web server and from the web server to the healthcare provider’s device. This ensures that the data file is not damaged or lost.
  • the auscultation sounds web-link is sent to the HCP side.
  • the HCP visits the Medaica HCP Side.
  • In Live mode, the HCP generates and sends an exam room passcode to the patient. Once the patient enters the passcode, the HCP can direct the patient and initiate recording.
  • The HCP can choose to listen to auscultation sounds filtered or unfiltered, and can share, comment on and/or export sounds, according to permissions.
  • Figure 6 illustrates a further example of the interactions within the Medaica system.
  • a patient (100) is located at a remote location from the health care professional HCP (103).
  • 101 is a web-enabled electronic medical device used for auscultation of body sounds.
  • 102 is a cable connecting the electronic medical device (101) to either a web-enabled computing platform (104) or mobile phone (105).
  • 103 is a healthcare professional, such as but not limited to a doctor (interchangeably referred to as a specialist and/or clinician in this document), at a different location than the patient.
  • 104 is a web-enabled computing platform such as but not limited to a laptop.
  • 105 is a mobile phone (or other such mobile computing platform), connected to the Internet via cellular or other wireless interconnectivity such as WiFi.
  • 106 is a website (in this embodiment, medaica.com) for recording, storing and controlling access to patients’ uploaded files, such as but not limited to auscultation files.
  • This website can be viewed on any web-enabled devices such as the patient’s laptop (104) or mobile phone (105) or the healthcare professional’s laptop (114) or mobile phone (115).
  • Sound file 107 is an example sound file recorded via a patient’s web-enabled electronic medical device. Sound file 107 is processed by a file handling system that downloads the complete file before making that file available for playback.
  • 108 is a web-enabled link controlling access to a patient’s auscultation files.
  • 109 is a web-enabled video or telemedicine site. This web-enabled site can be viewed on any web-enabled devices such as the patient’s laptop (104) or mobile phone (105) or the healthcare professional’s laptop (114) or mobile phone (115).
  • 111 is wireless connectivity for the electronic medical device, such as but not limited to Bluetooth or WiFi.
  • 112 is cellular connectivity to/ from the mobile phone to the cellular network (118).
  • 113 is a cable connecting the headset and mic (110) to either the doctor’s web-enabled computing platform (114) or mobile phone (115).
  • 114 is a web-enabled computing platform such as but not limited to a laptop at the doctor’s location.
  • 115 is a mobile phone connected to the Internet via cellular or other wireless interconnectivity, such as WiFi at the Doctor’s location.
  • 116 is wireless connectivity for the healthcare professional’s headset and mic (110).
  • 117 is the internet.
  • 118 is a cellular network, connected to the internet (117).
  • 119 is a record/play/pause/stop control example for recording and reviewing a sound file (107).
  • The Medaica website (106) displays simple instructions for the user (100) to connect and record auscultation sounds from the M1 device (101).
  • When the M1 device is plugged into the USB port of the web-enabled PC or mobile device (104 or 105), the M1 LED is on constantly; medaica.com recognizes it, displays an icon showing it is plugged in, and guides the user to the next steps. (If the M1 device is already plugged in, then step #1 is not displayed.)
  • the device (101) may be wirelessly connected, using for example Bluetooth, to the web-enabled PC or mobile device, which consequently would provide additional steps in the user journey.
  • a start/ stop record button (119) is provided on the website.
  • The M1 device is recognized by the web-enabled platform’s camera (either directly via its shape, color, etc., or via an identifying mark/code on M1). Once recognized by the system, the system shows the user when M1 is over a position to collect sounds, and either auto-starts recording (optionally first showing a countdown) or highlights a start/stop recording button.
  • The M1 LED flashes red.
  • A timer on the website UX displays a countdown (say 20 secs). (This could be greyed out if the M1 device is not plugged in, to help the user understand that the options will be available after a user action.)
  • The timer displays “Done” at the end of the countdown or when the user presses the M1 Record Button again.
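The countdown behaviour described above can be sketched as a small state machine; the class and method names are illustrative assumptions, not from the patent:

```python
class RecordingTimer:
    """Illustrative sketch of the recording countdown: the timer
    counts down each second and reports 'Done' either when it reaches
    zero or when the user presses the record button again."""

    def __init__(self, seconds=20):
        self.remaining = seconds
        self.done = False

    def tick(self):
        # Called once per second by the UI; returns the label to show.
        if not self.done:
            self.remaining -= 1
            if self.remaining <= 0:
                self.done = True
        return "Done" if self.done else str(self.remaining)

    def press_record_button(self):
        # Pressing the M1 record button again ends the recording early.
        self.done = True
        return "Done"
```

The same `done` flag can drive the LED state (flashing red while recording, steady once finished).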
  • a web-enabled link (108) (which the user can just copy and paste into a telemedicine session, email or text message).
  • The term ‘Telemedicine’ refers to any telemedicine system such as Teladoc™ or American Well™, including consumer video conferencing such as but not limited to Facetime™, Zoom™, etc.
  • a window opens showing additional fields for the user to add (for example):
  • the doctor’s (103) email address (the user is unlikely to have the doctor’s phone number, but this could be an additional field),
  • the user’s name, and the user’s email.
  • The patient’s information is required here so that the doctor knows they have received a link from a specific patient, e.g. John Smith. It is also required when multiple users use the same device, to help Medaica know where to store data and create different user pages.
  • The user may need to add a unique username (if they have not already) and their email (in case the doctor needs to communicate with them). If the user has already added a name or email, then the system will remember that name (via the UDID) and could provide prompts to edit that name/email, add more details, or associate a new file with a new user if the device is used by multiple users, e.g. a family, which the system could confirm when it sees different user names against the same UDID.
  • the user might have a unique secure name that only the doctor or the doctor’s system knows (such as but not limited to a patient record number, enabling the patient to exchange details without the Medaica website having the identity of the patient).
  • the system could enable a blockchain feature that further secures the patient’s details, and would also provide the ability to set further access rights as well as provide audit trails for users to see who and when people have accessed their details.
  • a “health wallet/pass” would enable the patient to be the secure owner of their own health data, providing not only access to it, but also controlling who, where and when they give such access, and enabling fully auditable data if they (or other parties) need proof of info/access.
  • the system will prompt them to add an identifying name.
  • the identifier need not be unique as the actual unique identifier is the UDID + the user name. Only if a user creates a new user with the same name will the system protest.
  • the system can further require the user to confirm if they are the ONLY user of the device, thereby enabling the system to associate new or different users with a device (e.g. family members using the same device) AND a user using more than one device.
  • the SEND window could also have options for a receipt checkbox. Selecting the receipt checkbox enables the user to get a notification that the file has been reviewed (this gives Medaica another chance to get the user’s email address and can also give additional trust to the user that their file has been accessed by the Doctor and/or not accessed by others).
  • the web-enabled link could have features (like some URL shorteners) that limit the number of times it can be used or set an expiry time.
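The limited-use link behaviour described above can be sketched as follows. This is an illustrative sketch only; the class and parameter names are invented and do not describe the actual Medaica implementation.

```python
# Hypothetical sketch: a share link that, like some URL shorteners, expires
# after a time limit or a maximum number of uses. All names are illustrative.
import secrets
import time

class ShareLink:
    def __init__(self, file_id, max_uses=3, ttl_seconds=86400):
        self.token = secrets.token_urlsafe(16)      # unguessable link token
        self.file_id = file_id
        self.max_uses = max_uses
        self.expires_at = time.time() + ttl_seconds
        self.uses = 0

    def resolve(self, now=None):
        """Return the file id if the link is still valid, else None."""
        now = time.time() if now is None else now
        if now > self.expires_at or self.uses >= self.max_uses:
            return None
        self.uses += 1
        return self.file_id

link = ShareLink("auscultation-042", max_uses=2, ttl_seconds=60)
assert link.resolve() == "auscultation-042"   # first use
assert link.resolve() == "auscultation-042"   # second use
assert link.resolve() is None                 # use limit reached
```

A production system would persist such records server-side and resolve the token on each request.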
  • the doctor (103) receives either: a templated email/text from the user via medaica.com containing the web-enabled link to the patient’s sound(s) file which contains the embedded UDID and the patient’s name (or other method of identifying the patient) and email OR a web-enabled link (108) in their telemedicine session, pasted in by the user.
  • the Doctor could also receive a direct email/text from the user with the web-enabled link which behaves the same way as the web-enabled link in the Telemedicine session.
  • the web-enabled link takes the doctor directly to the sound(s) file webpage (106) where he/she can listen to the file.
  • the weblink structure can present the file to the doctor on the current webpage being used.
  • the system can also have an option of generating a web-enabled embed code which, when pasted into the telemedicine system, displays the Medaica “player” with the sound (or other) file(s).
  • telemedicine systems could enable the doctor to review the sounds and/or perform a virtual exam without leaving the telemedicine website.
  • the system might only grant access to the file in a compressed format which would typically be good enough (e.g. CD quality) for most professional use.
  • the uncompressed (RAW) file could be more useful to certain users and applications, for example, for machine learning, AI or other research functions, in which case, that file could be made accessible to authenticated users via their access rights.
  • since the web-enabled link was sent by the user, there is implicit permission from the user for the doctor to access their file, and anyone else reviewing that file does not risk leaking private data, as only the user’s sound file is accessible.
  • a virtual exam is typically initiated by the doctor (rationale: otherwise the doctor would be waiting for the user, which is not only less efficient for doctors, but also for the user), via their telemedicine platform of choice (109) and does not require any additional tools or software within their telemedicine platform to operate.
  • the user (100) has simple instructions from multiple channels: a) medaica.com, b) the M1 device and c) via Telemedicine Platform text/email, if M1 was sent to them.
  • the doctor (103) visits medaica.com (106) and clicks on the 'clinician’s tab' and can either click a secure/temporary pass or enter his/her login/password details.
  • the Exam Room displays two fields: a room code with a 6-figure random number and a blank 'Doctor’s Invite' code field.
  • the Exam Room could display reminder text regarding the patient, e.g. “Ask your Patient to follow these 3 easy steps: 1) Plug in their M1, 2) visit medaica.com, then 3) Enter the 6-figure Exam Room Code under the Exam Room tab. When your Patient does that, they will get a Doctor Invite Code for you.”
  • the patient sees two blank fields, an Exam Room field and a Doctor’s Invite field.
  • the Doctor Invite Code field then displays a 6-figure random number which the patient tells the doctor. Once the doctor types the invite code into his/her screen, the doctor and the patient are in the same Exam Room.
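The two-code pairing flow above can be sketched as follows. All class and method names are illustrative assumptions, not the actual Medaica API; in practice the registry would live server-side and codes would expire after a short window.

```python
# Illustrative sketch of the Exam Room pairing: the doctor opens a room with
# a random 6-figure code; the patient enters it and receives a 6-figure
# Doctor Invite code, which the doctor then types in to join the same room.
import secrets

def six_figure_code():
    return f"{secrets.randbelow(900000) + 100000}"   # always 6 digits

class ExamRoomRegistry:
    def __init__(self):
        self.rooms = {}   # room_code -> {"invite": str or None, "joined": bool}

    def open_room(self):
        code = six_figure_code()
        self.rooms[code] = {"invite": None, "joined": False}
        return code

    def patient_enters(self, room_code):
        """Patient enters the room code; returns a Doctor Invite code."""
        room = self.rooms[room_code]
        room["invite"] = six_figure_code()
        return room["invite"]

    def doctor_enters_invite(self, room_code, invite_code):
        """Doctor types the invite code; True means both are in the same room."""
        room = self.rooms.get(room_code)
        ok = room is not None and room["invite"] == invite_code
        if ok:
            room["joined"] = True
        return ok

registry = ExamRoomRegistry()
room = registry.open_room()
invite = registry.patient_enters(room)
assert registry.doctor_enters_invite(room, invite)           # codes match
assert not registry.doctor_enters_invite(room, "not-a-code") # wrong invite
```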
  • the doctor can now listen live to M1 or nearly live (where the auscultation sounds are not live streamed but instead processed at a file handling system that fully downloads the relevant audio files before enabling them to be played back).
  • the doctor listens through high quality over-the-ear headphones (110) connected via either wireless (112) or wired (113) such that he/she can hear lower frequency sounds and will guide the patient accordingly.
  • the doctor’s headphones (110) can also be a suitable electronic stethoscope, capable of listening to recorded files on a web-enabled device.
  • DMDs including other digital stethoscopes, but also devices that record medically-related audio, image or video or other media types that would typically require interpretation by a healthcare professional, can send their files to the Medaica website. These files (which can be processed by the file handling system to download the files) are then able to be accessed by healthcare professionals using the same weblink (i.e. web-enabled link) methods described.
  • the advantage of doing this for the DMD provider is that they do not need to separately integrate their devices into a telemedicine system and the advantage for the healthcare professional is that they can now use multiple DMDs within their chosen telemedicine system.
  • the recipient of the data avatar need not know that they have specific data about a patient, rather they have pieces taken from perhaps hundreds, thousands or millions of patients, to create the “typical” patient to be reviewed.
  • the system generating such a data avatar can therefore serve the recipient without the recipient needing to browse through more complex database structures.
  • the resulting file could also contain information that it has data from x number of patients in each of the query categories, which could further give a degree of confidence to the recipient. It is further understood that the cost of conducting clinical studies and/or other patient-related studies can be expensive and slow, so such a system could provide a dramatic advantage to the recipient.
  • such a system could not only provide a specific output (the data avatar) but could be configured to require a specific “health query language” as an input to query anonymous bulk user data. This would not only enable the system to provide the appropriate results, but also standardize how multiple users, vendors and models can be uniformly addressed. There is further potential for such a system to prevent exposure of private data (under HIPAA or GDPR or similar) to outside parties and yet provide compliant/secure results.
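A minimal sketch of the data-avatar idea, assuming a simple per-field averaging rule; field names and the aggregation rule are invented for illustration and are not the disclosed query language.

```python
# Illustrative sketch of a "data avatar": an aggregate "typical" patient
# built from many anonymized records, reporting how many patients
# contributed to each query category as a confidence indicator.
def data_avatar(records, fields):
    avatar = {}
    for field in fields:
        values = [r[field] for r in records if field in r]
        avatar[field] = {
            "typical": sum(values) / len(values),
            "n_patients": len(values),   # confidence indicator for the recipient
        }
    return avatar

records = [
    {"age": 60, "resting_hr": 70},
    {"age": 70, "resting_hr": 80},
    {"age": 65},                     # no heart-rate reading for this patient
]
avatar = data_avatar(records, ["age", "resting_hr"])
assert avatar["age"] == {"typical": 65.0, "n_patients": 3}
assert avatar["resting_hr"] == {"typical": 75.0, "n_patients": 2}
```

The recipient receives only the aggregate, never the individual records, consistent with the privacy goal described above.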
  • such a system could also provide reputational data to patients (or other interested parties). For example, if a file is reviewed by a 3rd party for a doctor or patient, the system can know that the reviewer has reviewed x files and achieved an accuracy rate of x% (determined by the number of times other reviewers have agreed or disagreed with the first reviewer, or other such techniques). Whilst such methods are known in social media (for example, a product review can display the reviewer’s record of reviewing products, an Uber driver has a reputational score built from multiple rides etc.), these techniques have not been used or able to be provided in healthcare. By providing a system that is not only agnostic to devices and telemedicine systems, but also can support patients being able to use the system in “guest mode” and providing data avatars, the system is predisposed to being a more trusted interface for all users.
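The reputational-accuracy calculation described above might be sketched as follows; the data layout is an invented assumption.

```python
# Sketch of the reviewer-reputation idea: accuracy derived from how often
# other reviewers agreed with a reviewer's reads of each file.
def reviewer_accuracy(reviews):
    """reviews: list of (agreed_count, disagreed_count) per reviewed file."""
    agreed = sum(a for a, _ in reviews)
    total = sum(a + d for a, d in reviews)
    return {
        "files_reviewed": len(reviews),
        "accuracy": agreed / total if total else None,
    }

history = [(3, 0), (2, 1), (4, 0), (1, 1)]   # four files reviewed
score = reviewer_accuracy(history)
assert score["files_reviewed"] == 4
assert score["accuracy"] == 10 / 12          # 10 agreements out of 12 votes
```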
  • the website and/or application provides a method of helping the patient correctly position the DMD by providing an Augmented Reality (AR) composite video of the patient and the device.
  • the device is recognised either by its unique shape or a code (or other recognised methods) that the camera can identify.
  • the system traces the outline of the patient and, with the identified DMD, can now direct the patient to move the device to a desired position on the patient.
  • the user sees an outline of a human torso in the video feed, in which the user best positions him/herself.
  • the outline also displays an auscultation target icon.
  • the user moves the stethoscope head to be within the auscultation target and can then start recording the auscultation sound.
  • this embodiment can be leveraged by the healthcare professional on the other side of the video feed, by moving the auscultation target to sites that he/she desires to listen to. That target/pointer could also be semi-transparent and/or the same shape as the stethoscope head to make it easier for the patient to position the stethoscope “virtually” under the pointer and over the auscultation site.
  • these sites can be tagged alongside the recordings to aid either store-and-forward diagnosis or archive notes, as each recording will display the target location on the patient’s body where it was captured.
  • the user has the option of a bulk recording then upload function - scenario: nurses or doctors travelling around collecting sample files, then uploading multiple files once they get back online.
  • the system can enable scheduling of auscultation exams, for example, twice a day for 10 days at positions Heart 1 and 2. Such a system can then be used to confirm adherence as well as generate more continuous health data. This could be particularly helpful for applications concerned with Remote Patient Monitoring, as well as Hospital At Home applications and/or preventing/reducing hospital re-admissions.
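Such a schedule and adherence check could be sketched as follows; the protocol layout and function names are assumptions for illustration.

```python
# Minimal sketch of the scheduled-exam idea ("record at Heart 1 and 2,
# twice a day, for 10 days") with a simple adherence calculation.
from datetime import date, timedelta

def build_schedule(start, days, times_per_day, positions):
    """Expand a protocol into the list of expected recordings."""
    return [
        (start + timedelta(days=d), slot, pos)
        for d in range(days)
        for slot in range(times_per_day)
        for pos in positions
    ]

def adherence(schedule, completed):
    """Fraction of scheduled recordings actually completed."""
    done = sum(1 for entry in schedule if entry in completed)
    return done / len(schedule)

schedule = build_schedule(date(2023, 2, 1), days=10, times_per_day=2,
                          positions=["Heart 1", "Heart 2"])
assert len(schedule) == 40   # 10 days x 2 slots x 2 positions

# Patient completed the first day only (4 recordings).
completed = set(build_schedule(date(2023, 2, 1), 1, 2, ["Heart 1", "Heart 2"]))
assert adherence(schedule, completed) == 0.1
```

An adherence figure like this could drive the reminders to the patient and the updates to healthcare providers mentioned above.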
  • the interconnected web-app may guide the user to perform a number of examinations, such as:
  • Self-examinations and assisted examinations can be done at any time, recording body sounds such as heart and/or lung sounds and then sending those results to a healthcare professional.
  • the M1 digital stethoscope can be used during a live telehealth session with a healthcare professional listening to heart and lung sounds live, guiding the user, and being able to record auscultation data together with any notes in their electronic medical records, subject to HIPAA compliant permission.
  • This type of examination is called a live examination.
  • Figure 11 shows an example of a patient’s web-app displaying a mirrored view of an outline of a torso along with a video feed.
  • the body map is mirrored for interfaces for self-examinations, using a mobile or desktop screen and/or camera for assistance. It will be appreciated that some embodiments of this invention do not require a mirrored version of the body map.
  • the outline of the torso may also be displayed together with guidelines to help the patient find a specific position to place the digital medical device.
  • the current position of the digital medical device (1) may be displayed alongside previous auscultation positions for which measurements or patient data has been generated.
  • the next sequence of auscultation positions needed may also be displayed, either from a pre-programmed sequence or from the direct guidance of a healthcare professional.
  • the auscultation sites can be moved by the healthcare professional in real time. Each location can be recorded alongside the audio file as tagged references to further assist in diagnosis and records.
  • Figure 12 shows a further example of a patient’s web-app displaying a self-examination heart mode including a mirrored body map and auscultation (body sound) positions on the chest.
  • the body map is mirrored for interfaces for self-examinations, using a mobile or desktop screen and/or camera for assistance. It will be appreciated that some embodiments of this invention do not require a mirrored version of the body map.
  • the self-examination displays auscultation positions that a user should be able to reach without assistance.
  • the user is also able to select a required assisted examination option.
  • a body map shows the body sound (auscultation) positions as if the user was looking in a mirror. Each auscultation position is shown as a numbered circle with the current position to be recorded highlighted, such as the first position.
  • a graphical representation of the specific examination procedure is displayed. It displays a torso outline including a sequence of required auscultation positions.
  • the torso graphical representation is configured to guide the patient to use the digital stethoscope M1 at the required auscultation positions for a specific duration and frequency.
  • a countdown and recording quality window displays the level of the recording of body sounds in relation to external ambient sounds.
  • the level of sound received by the body microphone and the level of sound received by the ambient microphone are graphically represented.
  • the sound level detected by the microphones is also associated with a specific color.
  • the ambient noise displayed on the right of the countdown (133) is grey and indicates no ambient noise.
  • the ambient noise (134) is displayed in red indicating that it is too loud to achieve a good auscultation recording. If the external sounds are too loud for a good auscultation recording, the recording will stop and a “silence” icon will be displayed.
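The ambient-noise gate described above might be sketched as follows, assuming an illustrative RMS threshold; the threshold value and colour names are assumptions, not the disclosed calibration.

```python
# Sketch of the ambient-noise gate: compare the RMS level of the ambient
# mic against a threshold and decide whether recording may continue.
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def ambient_status(ambient_samples, quiet_rms=0.05):
    """Return ('grey', True) to keep recording, ('red', False) to stop."""
    level = rms(ambient_samples)
    if level <= quiet_rms:
        return "grey", True    # quiet enough: continue the countdown
    return "red", False        # too loud: stop recording, show "silence" icon

quiet = [0.01, -0.02, 0.015, -0.01]
noisy = [0.4, -0.5, 0.45, -0.38]
assert ambient_status(quiet) == ("grey", True)
assert ambient_status(noisy) == ("red", False)
```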
  • the mirrored torso outline shows when each auscultation position is recorded successfully.
  • the body map is mirrored for interfaces for self-examinations, using a mobile or desktop screen and/or camera for assistance. It will be appreciated that some embodiments of this invention do not require a mirrored version of the body map.
  • the previously recorded position turns a different colour, such as green and displays a “tick” (141). The next recording position is then indicated (142).
  • the graphical representation is then configured to indicate when the exam is complete. As an example, all completed auscultation positions are displayed green.
  • the results can then be sent as a file to a healthcare professional by selecting SEND.
  • the user will get notified once the exam has been reviewed. This can be an instant notification when the healthcare professional has opened and closed the file, or it can be an email confirmation sent to the user including any remarks from the healthcare professional.
  • Figure 15 is a flow diagram summarizing the steps of the self-examination procedure for recording phonocardiograms (PCG) from different auscultation positions using a digital stethoscope.
  • Figure 16 shows a graphical representation of the specific examination procedure overlaid over a live video image of the user (151).
  • the live feed of the user may include the body shown as transparent or semi-transparent, with the rest of the image masked, opaque or solid to avoid the background interfering with the live video image of the user.
  • a torso outline is displayed (152) alongside the current auscultation position of the digital stethoscope (153) and specific auscultation positions (154,155) required by the exam procedure.
  • the body map is mirrored for interfaces for self-examinations, using a mobile or desktop screen and/or camera for assistance. It will be appreciated that some embodiments of this invention do not require a mirrored version of the body map.
  • the user positions him/herself inside the torso outline and can then accurately position the M1 over the required auscultation position.
  • the current auscultation position can flash on/off so that when the M1 is in position, covered by the circle, the image is not confusing for the user.
  • Figure 17 shows a graphical interface of front lungs self-examination including a mirrored image of a torso outline of a front torso and required examination positions. For lung sound recording, two full deep, slow breaths should be captured.
  • Figure 18 shows a graphical interface of back lungs assisted-examination including a torso outline of a back torso and required examination positions.
  • Figure 19 shows a graphical interface of a video-positioning mode. Selecting 'Video Positioning' mode first displays a window asking for permission to use the video camera. For privacy, video-positioning mode is only used for guiding recording positions without recording any video. With video mode positioning on, the mirrored live video feed of the user is displayed alongside an outline of the body (181) and the current auscultation position displayed as a flashing circle (182). The auscultation icon might need to alternately flash black/white (or other contrasting colors) to make sure that whatever the user is wearing is not confusing the image. The torso outline may also need to have a black/white stroke to make sure it is visible. When the user positions himself inside the body map and holds M1 at the flashing auscultation position, recording is started when the user pushes either a start button on the digital stethoscope or an icon or symbol on the graphical interface.
  • the countdown/record window is automatically displayed (or pops up), such as when M1 is in position and the user is still and quiet.
  • a symbol on the M1 head is recognized by the image processing software and when the user moves M1 to the correct position, the software prompts the user accordingly and/or auto-starts recording.
  • the camera detects an outline of the user and creates a specific body map. This is done by accessing a library of auscultation positions to fit specific body types, or by re-calculating the positions based on the detected outline and specific exam positions.
  • the user is able to select a body map based on nearest fit.
  • the software automatically selects the nearest fit body map from a library based on the video feed of the user.
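Nearest-fit selection from a body-map library could be sketched as follows; the library entries and torso dimensions are invented for illustration.

```python
# Hypothetical nearest-fit selection of a body map from a small library,
# based on torso dimensions detected in the video feed.
def nearest_body_map(detected, library):
    """Pick the library map whose (width, height) is closest to the detection."""
    def dist(entry):
        w, h = entry["size"]
        return (w - detected[0]) ** 2 + (h - detected[1]) ** 2
    return min(library, key=dist)["name"]

library = [
    {"name": "small",  "size": (30, 45)},
    {"name": "medium", "size": (38, 55)},
    {"name": "large",  "size": (46, 65)},
]
assert nearest_body_map((37, 54), library) == "medium"
assert nearest_body_map((47, 66), library) == "large"
```

The selected map would then supply the pre-computed auscultation positions, or those positions could be re-calculated from the detected outline as described above.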
  • a healthcare professional may send the user a link to a virtual room, such as by email or via a text message or any other messaging application. Clicking the link will take the user directly to the virtual exam room.
  • an onscreen message will be prompted such as “plug in your M1 device”.
  • the virtual exam room displays his/her name. The healthcare professional then guides the user through the auscultation positions, or moves the auscultation positions to where he/she wants to listen. The healthcare professional is able to control when the M1 starts recording each body sound.
  • Figure 21 shows a flow diagram summarising the different steps according to a selfexamination mode, custom examination mode or guided examination mode.
  • Figure 22 shows a diagram summarizing key elements of the system.
  • Figures 23 and 24 show photographs illustrating a number of digital stethoscope devices.
  • the designs are user-friendly, easy to grip and include at least one button.
  • the cable plug can be inserted into a dummy socket (210) in the unit to fold the cable in half when the device is unplugged. This makes the cable much less unwieldy, and easier to stow in a bag.
  • Figure 26 shows top, side and bottom views of another example of a digital stethoscope device.
  • Figures 27 to 29 show further examples of healthcare provider and patient interfaces.
  • Figure 27 shows a screenshot of a healthcare provider interface, which enables the healthcare provider to analyze, record or edit auscultation audio files. The user interface may also include for each patient: medication, medical history, previously recorded auscultation audio files, healthcare provider’s notes or any other information associated with the specific patient.
  • Figure 28 shows a screenshot of the healthcare provider interface including an auscultation (magnified) section (281).
  • Figure 29 shows a patient interface including a temporary graphic of an animated help screen (291).
  • the patient interface includes a graphical body outline with a number of target positions at which the medical device is to be positioned by the patient.
  • High level healthcare programming environment
  • Instructions, devices and notifications can be "chained" together to help patients perform specific healthcare management protocols.
  • the system can guide the patient to take specific tests with a specific frequency and can optionally send reminders to the patient as well as updates to the patient's healthcare provider(s) and/or insurer or other parties with appropriate permissions.
  • a system could guide the patient to use a digital stethoscope to "record heart sounds in Position 3, twice a day, for seven days".
  • Position 3 could be a specific instruction with a diagram or video. That specific instruction, frequency and duration can have notifications such that the user is sent reminders, and the healthcare provider is sent results.
  • a hospital could for example, set up a "Patient Release Protocol” as a one click "applet” (sending the patient a link to the applet so the Doctor will know if/when the patient is following the release procedure and recovering on plan).
  • an “applet” could be different for each healthcare provider, patient and/or condition and could provide methods for the healthcare provider to brand the experience as well as integrate the outputs into their healthcare records.
  • Telemedicine Device including a ‘room’ or 'patient' microphone
  • Adding a 'room' or 'patient' microphone (mic) to a telemedicine device allows the patient to continue to communicate with their healthcare provider.
  • because browser security models only allow a single audio device to be used at any given time, it is, in the prior art, necessary to switch the audio source in the browser. For example, if the patient is on a laptop and using its default mic, they would have to switch the browser audio source to the telemedicine device to perform an exam that required a digital stethoscope microphone. This would cause the user to lose the connection with the built-in mic and their means of verbal communication with their healthcare provider.
  • Adding a second 'room' or 'patient' mic to such a telemedicine device enables the patient and healthcare provider to maintain communications and still capture exam sounds.
  • the audio will be delivered over a stereo channel but the app will separate the audio signal into two separate mono feeds and will process each differently.
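Splitting the interleaved stereo capture into the two mono feeds can be sketched as follows; the channel assignment (left for auscultation, right for the room mic) is an assumption for illustration.

```python
# Sketch of splitting one interleaved stereo capture into two mono feeds:
# one carrying the auscultation mic, the other the room mic, so each can
# be processed differently.
def split_stereo(interleaved):
    """Interleaved [L0, R0, L1, R1, ...] -> (auscultation, room)."""
    auscultation = interleaved[0::2]   # left-channel samples
    room = interleaved[1::2]           # right-channel samples
    return auscultation, room

stereo = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7]
body, room = split_stereo(stereo)
assert body == [0.1, 0.2, 0.3]
assert room == [0.9, 0.8, 0.7]
```

After the split, the auscultation feed can be gain-adjusted and filtered while the room feed is streamed for live conversation, as described below.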
  • application can be delivered as a web app (i.e. thin-client) as well as a native desktop or mobile app.
  • the auscultation sound channel will have a gain control so a strong enough signal will be captured for the body recording.
  • filters such as a low pass filter (or any other processing) may be applied to the sound (typically after the sound has been recorded, maintaining the raw audio file).
  • the auscultation sound channel can be sent via the file handling system as described above, with minor latency, but no loss of quality, and the 'patient' or 'voice' or 'room' channel goes via a streaming path for live conversation where quality is less mission critical. As a result, a clinician can hear auscultation sounds with exactly the same fidelity as if the patient was in their office.
  • the room channel may also have a gain control but will mainly just be passed on to the room and ultimately the healthcare professional's headphones.
  • the healthcare professional and/or patient can have control of muting each channel separately if they want to only hear one or the other mic.
  • the system may also automatically mute the room mic when the healthcare professional is listening to and/or recording the auscultation channel.
  • the room mic can be used to capture audio that can be used to reduce or remove non-heartbeat sounds in the heartbeat audio file using standard noise reduction techniques.
  • This specific feature can additionally be used by the system to determine if the room is too noisy for a patient reading and/or if a patient is speaking while the exam is being recorded. This information can then enable the system to display a message to the patient to be silent and/or warn that there is too much noise to perform the exam.
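A crude sketch of using the room mic as a noise reference is shown below: a scaled copy of the room signal is subtracted from the auscultation signal. Real systems would use spectral subtraction or adaptive filtering; the leakage factor here is an illustrative assumption.

```python
# Minimal sketch of room-mic noise reduction: remove the portion of room
# noise that leaked into the body mic by subtracting a scaled reference.
def cancel_room_noise(body, room, leakage=0.5):
    """Subtract a scaled room-noise reference from the body-mic signal."""
    return [b - leakage * r for b, r in zip(body, room)]

heart = [0.2, -0.2, 0.2, -0.2]     # wanted heart sounds
noise = [0.4, 0.4, -0.4, -0.4]     # room-mic noise reference
body_mic = [h + 0.5 * n for h, n in zip(heart, noise)]  # noise leaks in at 0.5

cleaned = cancel_room_noise(body_mic, noise, leakage=0.5)
assert all(abs(c - h) < 1e-9 for c, h in zip(cleaned, heart))
```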
  • An audio signal may be used to enable the capture, transmission, storage, and display of data from one or more sensors over a regular USB audio channel.
  • This connection can work in any device that allows a microphone to connect and transmit data to a computer, phone, tablet, etc.
  • the captured data is converted to audio using a predefined system that maps character data to audio frequency bands. Each character (number, letter, or symbol of the digital message) is mapped to a specific, unique frequency band (or mix of frequencies, like DTMF encoding, dual tone multi frequency encoding).
  • a special “start” and “end” identifier is given a specific, unique frequency band or mix of frequencies as well (and a checksum could be added to ensure that the system has successfully transmitted the data).
  • a set duration is established for all characters of the message so that each tone lasts the same duration.
  • a sine wave is generated at the specific frequency in the middle of the character's frequency band that matches the current character of the message.
  • Each message starts by sending a “begin” tone at the predefined “begin” frequency for the predefined duration. This is followed by each character’s predefined frequency again at the specified duration.
  • an “end” tone is sent to complete the message.
  • This signal is transmitted over the USB connection as regular audio and re-encoded in the browser as digital data using the same frequency band to character map. This converted data can then be captured, stored, manipulated, displayed to the user, etc. as regular digital data.
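The character-to-frequency scheme can be sketched as follows. The band layout (begin/end markers at 400/450 Hz, characters from 500 Hz in 50 Hz steps) is an invented example, not the actual mapping; a real implementation would render each frequency as a sine wave of the set duration and recover it in the browser with an FFT or Goertzel detector.

```python
# Sketch of the tone-encoding scheme: each character of the message maps to
# a unique frequency band, framed by "begin" and "end" tones of fixed
# duration, similar in spirit to DTMF signalling.
ALPHABET = "0123456789."          # characters supported in this sketch
BEGIN, END = 400.0, 450.0         # framing tones (invented values)
BASE, STEP = 500.0, 50.0          # character bands (invented values)

def char_to_freq(ch):
    return BASE + ALPHABET.index(ch) * STEP

def freq_to_char(freq):
    return ALPHABET[round((freq - BASE) / STEP)]

def encode(message):
    """Message -> sequence of tone frequencies, one fixed-duration tone each."""
    return [BEGIN] + [char_to_freq(c) for c in message] + [END]

def decode(tones):
    """Recover the message, checking the begin/end framing tones."""
    assert tones[0] == BEGIN and tones[-1] == END
    return "".join(freq_to_char(f) for f in tones[1:-1])

tones = encode("120.80")          # e.g. a blood pressure reading
assert decode(tones) == "120.80"
```

Because the tones travel as ordinary audio, any platform that accepts a USB microphone can carry the sensor data, which is the portability advantage described above.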
  • the system adds a camera with OCR software to translate any digital readout (for example a blood pressure display) into audio.
  • such a system can leverage a single mono track in the stereo audio signal of a web video interface and keep a room mic open as well so patients can still talk to their healthcare provider while using and/or transmitting data from the medical device.
  • This allows integration with any platform that either accepts an audio connection or has a display that can be read by an OCR reader and audio converted.
  • Telemedicine is a subset of telehealth that refers solely to the provision of health care services over audio, video and/or messaging platforms via mobile phones and/or computers. Telemedicine involves the use of telecommunications systems and software to provide clinical services to patients without an in-person visit. Telemedicine technology is frequently used for follow-up visits, management of chronic conditions, medication management, specialist consultation and a host of other clinical services that can be provided remotely.
  • the WHO also uses the term “telematics” as “a composite term for both telemedicine and telehealth, or any health-related activities carried out over distance by means of information communication technologies.”
  • the term 'telemedicine' should be broadly construed to encompass telehealth and telematics, and is not limited to professional or consumer systems.
  • the terms 'doctor', 'healthcare professional' and 'clinician' are interchangeable and may also refer to nurses or any other practitioners who might not be doctors.
  • the Medaica 'Auscultation hub' is a website that stores files, such as but not limited to auscultation recordings from users’ devices such as digital stethoscopes. It can include a file handling system as described above, or receive auscultation data that has been processed at a file handling system.
  • the auscultation hub enables easy linking of those recordings to/from health practitioners and telemedicine platforms.
  • the auscultation hub also enables editing of auscultation audio files; for example, a source audio file could be a sound recording lasting 60 seconds or more.
  • the doctor/healthcare professional can review that complete auscultation audio file from within the auscultation hub and edit out or select sections of clinical relevance;
  • the edited sound recording can be shared, for example with experts for an expert opinion, by sending that expert a weblink that, when selected, opens a website (e.g. the Medaica Auscultation hub) and the expert can then play back the edited sound recording.
  • the Medaica 'Virtual exam room' enables a doctor/healthcare professional to send a web-enabled link to patients as an invite with a unique security code for a virtual exam that will take place in the Medaica virtual exam room.
  • a patient clicking on the web-enabled link is taken to a webpage virtual exam room, accessed by entering their unique code.
  • the virtual exam can then display instructions, which could include timing for the exam, instructions to be ready to place the stethoscope where the doctor requires it etc.
  • the system can reject the invite code, and generate a new one with a new email as an additional security measure.
  • the exam session can be recorded and data files sent to 3rd parties to review/diagnose.
  • the doctor/healthcare professional can also edit files and send edited files to other experts, as described in the Auscultation hub section above.
  • the doctor can initiate the record start/stop from the website (i.e. not requiring the patient to initiate from the device).
  • Web-enabled links to/from auscultation files and/or other medical records
  • the Medaica system generates a secure and unique web-enabled link or web link that, when clicked on, takes the recipient to that file.
  • the unique web-enabled link can include metadata such as but not limited to date, time, device ID and user info, and also business model rules if there are any, such as but not limited to access rights, permissions, number of clicks per link permitted, rate per click, billing codes etc.
  • the web link could also have a one-time or multiple use feature which could in turn be linked to the user’s membership rights (as could any of the aforementioned features).
  • Access rights could be leveraged to subsidize the business model e.g. assuming access options include telemedicine platforms, insurers, research etc. and, if research is enabled, the session could be free to patients if they agree to the terms that their data is being used for research and/or is being supported by a charity, e.g. the Gates Foundation.
  • the web link could also offer a drop-down menu to compatible telemedicine systems and/or doctors nearby etc.
  • Referral programs could then support Medaica when Medaica customers link to a specific telemedicine platform.
  • the system can also have an option of generating a web-enabled embed code which, when pasted into the telemedicine system, displays the Medaica “player” with the sound (or other) file(s).
  • telemedicine systems could enable the doctor to review the sounds and/or perform a virtual exam without leaving the telemedicine website.
• Sound files can be watermarked such that if they are downloaded or used off-site, it can be easily determined that they are Medaica files. Such watermarks could be overlaid/added to Medaica files in a unique manner that the system could know how to remove or alter (for example adding new date/user/owner info).
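One simple way to watermark a sound file so that off-site copies can be identified, as described above, is to embed an identifying tag in the least-significant bits of the PCM samples, which is inaudible but recoverable. This is a generic LSB sketch under assumed 16-bit mono PCM; the tag format and any production watermarking scheme are assumptions:

```python
import numpy as np

def embed_watermark(samples: np.ndarray, tag: str) -> np.ndarray:
    """Embed an ASCII tag in the least-significant bits of int16 PCM samples."""
    bits = np.unpackbits(np.frombuffer(tag.encode("ascii"), dtype=np.uint8))
    if len(bits) > len(samples):
        raise ValueError("audio too short for watermark")
    out = samples.copy()
    # Clear each target LSB, then set it to the corresponding tag bit.
    out[: len(bits)] = (out[: len(bits)] & ~1) | bits.astype(np.int16)
    return out

def read_watermark(samples: np.ndarray, tag_len: int) -> str:
    """Recover a tag of known byte length from the sample LSBs."""
    bits = (samples[: tag_len * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode("ascii")
```

Each sample changes by at most one quantisation step, so the watermark is inaudible in practice while surviving lossless copying.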
  • 3rd parties such as analytical labs and/ or researchers, can be granted access to files, either by system admins, or by doctors or other authorized users to diagnose files and/ or enable a second opinion and/ or conduct research for local government or other medical research, subject to their access rights.
  • 3rd parties could also provide a crowd sourced human verification diagnostic solution (like CAPTCHA) whereby x people claiming a sound is a certain condition, increases the confidence that that sound is indeed that condition. This could be further enhanced, to give doctors confidence that the diagnosis has been conducted by peers, for example by providing auditable references (e.g. clicking on who reviewed the sample — how many samples he/ she has been credited with correctly reviewing etc.).
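The crowd-sourced verification idea above can be sketched as a weighted vote, where each reviewer's label is weighted by their audited track record of correct reviews; the reviewer IDs and accuracy figures below are purely illustrative:

```python
from collections import defaultdict

def aggregate_reviews(reviews, reviewer_accuracy):
    """
    Combine independent reviewer labels into a per-condition confidence score.
    reviews: list of (reviewer_id, condition_label) pairs
    reviewer_accuracy: dict reviewer_id -> historical fraction of correct reviews
    Returns (best_label, confidence), where confidence is the winning label's
    share of total reviewer weight.
    """
    weights = defaultdict(float)
    for reviewer, label in reviews:
        # Unknown reviewers get a neutral 0.5 weight.
        weights[label] += reviewer_accuracy.get(reviewer, 0.5)
    total = sum(weights.values())
    best = max(weights, key=weights.get)
    return best, weights[best] / total
```

More reviewers agreeing, and reviewers with stronger audited records, both raise the confidence that the sound is indeed that condition.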
  • Telemedicine platform providers could use Medaica devices for customer acquisition — i.e. they send users a Medaica device for free or for a discount if they sign up. They would do this because with Medaica, users will be getting a more useful telemedicine session, and the platform providers will be getting higher revenue and (until Medaica is ubiquitous), a more competitive solution.
  • Medaica could sell direct to end-users (patients) with a coupon for a discount for their first telemedicine session with Company X.
  • Medaica can charge a per click or per seat fee — per click could be based on types of clicks e.g. a doctor listens to a file is standard rate, but if she/he forwards the file for diagnosis, that could be a different rate (higher or lower).
  • Medaica could have a third party subsidize each recording and/ or click in return for the data/ research potential.
Bluetooth stethoscope

• Most medical devices have proprietary systems and, in the case of digital stethoscopes, cannot easily interface with telemedicine systems. This is even more challenging with Bluetooth devices because they can compete with or confuse systems and devices: most assume Bluetooth is for communication with the user, not a device, and can rarely handle communicating with both (in a telemedicine session, a Bluetooth stethoscope will typically take over the audio channel, making it impossible for the patient to talk or hear the doctor).
  • a user interface such as an application running on a mobile device or a web-app, enables the healthcare provider to access his account related parameters, such as account settings, patients records and/or exams.
  • the interface is also configured to enable the healthcare provider to create new patient entries and start or configure patient examination procedures.
  • the healthcare provider may either create a “Store and Forward Exam” or start a live exam (with lossless near real time audio).
  • a ‘Store and Forward Exam’ is an exam a patient can conduct in their own time for specific auscultation positions the healthcare provider requires.
  • the healthcare provider may first confirm that the patient has read and understood the instructions before asking them to perform a store and forward exam.
• the healthcare provider may also confirm that the patient has been successfully guided to use the M1 stethoscope in a live exam before asking them to use it in a store and forward exam.
• a store and forward exam may be requested by the healthcare provider or may be self-initiated by the patient and then sent to the healthcare provider.
  • Figure 32 shows a screenshot of the healthcare provider interface that enables the healthcare provider to create a store and forward exam.
• the steps taken by the healthcare provider to create the store and forward exam may be as follows:
  • Figures 33-37 show the corresponding patient’s web interface, following the creation of the ‘Store and Forward Exam’ by the healthcare provider.
  • the patient receives an automated message from the healthcare provider with the exam date and a link to the exam and auscultation positions that have been previously selected by the healthcare provider.
  • the exam request may be sent via email, or text message.
  • the patient is then able to enter a secure page, as shown in Figure 34, with the step-by- step exam procedure based on the auscultation positions selected by the healthcare provider for them.
  • the patient’s software displays an instruction page and checks that the patient’s digital stethoscope is connected. If the digital stethoscope is not connected, the system displays an error message and will not progress until the digital stethoscope has been correctly detected.
  • the patient sees the first auscultation recording position displayed, as shown in Figure 36.
  • the patient is guided through each auscultation position via an on-screen body map.
  • the recording of auscultation sounds may be started via either the on-screen button or via a button located on the digital stethoscope. After a position has been recorded, the patient is able to review the auscultation file to make sure that no room noise or frictional noise was recorded. In the case that room noise or frictional noise are present on the recording, the patient can re-record at the same position, as shown in Figure 37. Once the auscultation position has been correctly recorded, the patient can move to the next position and continue until all required positions have been completed. The system may automatically save each recording once the patient selects ‘next step’.
• the web interface includes a room sound indicator that illuminates green when the room is quiet or red if it's too loud or there is too much frictional movement. If the room indicator is red, the patient is advised to move to a quieter location and/or make sure they are not moving the stethoscope around when recording.
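A minimal sketch of the room sound indicator: compute the RMS level of a short window of microphone samples and show green below a quietness threshold, red above it. The threshold value is an assumption for illustration, not a calibrated figure:

```python
import numpy as np

QUIET_RMS = 0.02  # illustrative threshold for samples normalised to [-1, 1]

def room_indicator(window: np.ndarray) -> str:
    """Return 'green' if the ambient level is acceptable, otherwise 'red'."""
    rms = float(np.sqrt(np.mean(window.astype(np.float64) ** 2)))
    return "green" if rms < QUIET_RMS else "red"
```

A frictional-movement check could similarly compare energy in the low-frequency band typical of handling noise against the rest of the spectrum.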
  • Figure 37 shows an example of heart and lung body maps, as displayed on screen, in which each auscultation position is shown as a numbered circle.
• When the patient completes their examination, the system notifies the healthcare provider that the recordings are ready to be accessed and reviewed.
• On the healthcare provider’s interface, the healthcare provider is then able to see the patient’s exam status updated on the dashboard, marked as an ‘unreviewed exam’ as shown in Figure 40.
  • Figure 41 shows an example of a web interface that enables the healthcare provider to view the auscultation file.
  • the healthcare provider is able to (a) play or pause the recording of the audio at a specific auscultation position, (b) apply filtering techniques and (c) include notes.
• the phonocardiogram can be expanded to view in finer detail. Filtering methods, such as bell, diaphragm and/or extended filters can be switched on to help the healthcare provider focus on specific frequencies. Sounds may also be boosted by sliding a volume slider. A notes field enables any notes to be added for the exam. The healthcare provider is then able to request a live exam or an alternative exam, for example if the healthcare provider is not satisfied that the patient has followed the instructions completely or if any recordings sound too noisy or too quiet to provide a diagnosis.
  • the patient will also receive a notification from the healthcare provider once they have reviewed the examination files.
  • the healthcare provider may select ‘Start Live Exam’ on the healthcare provider interface, as shown in Figure 32.
  • Figure 43 shows a screenshot of the automated message displaying the live exam join details.
  • the healthcare provider may then select ‘Send Join Info’ to send an email or a message to the patient with a weblink and a secure code to enter the examination web room, as shown in Figure 44.
  • the healthcare provider may ask the patient to manually select ‘Live Exam’ and type in their unique Code.
  • Figure 46 shows a diagram illustrating the live exam patient view on the left and the healthcare provider view on the right.
  • the healthcare provider is therefore able to see and hear the patient and guide the patient on the correct placement of the stethoscope.
  • Mirroring settings are available, in which the healthcare provider can look at a mirrored image of themselves and a facing image of the patient (i.e. as if the patient was facing the healthcare provider).
  • Figure 47 shows a screenshot of a page or menu available on the healthcare provider side, in which the healthcare provider has control over the patient’s stethoscope listen/record function. Pressing the button enables the healthcare provider to hear the patient’s auscultation sounds streamed live and starts background recording of the auscultation sounds. In order to achieve a high quality and clear recording of heart and/ or lung sounds, the healthcare provider may remind the patient:
  • the system is configured to enable the healthcare provider to listen to the patient’s auscultation sounds live (streamed)
  • the system is also configured, via the file handling system, to simultaneously record the auscultation file locally on the patient’s computer.
  • the recorded audio on the patient’s computer is automatically sent to the healthcare provider’s computer as a store and forward .wav file, enabling the healthcare provider to hear lossless (CD quality) audio.
• the file can be downloaded in the background to be available as soon as the healthcare professional clicks on the option to review the file.
  • the 'live exam with lossless near real time audio’ approach therefore provides access to a near real time store and forward exam within a live streamed exam and achieves a very different user experience for both the patient and the healthcare provider as compared to the standard standalone live streamed exam.
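The 'live exam with lossless near real time audio' behaviour described above — stream each chunk for live preview while simultaneously appending it to a local lossless WAV that is uploaded when the exam ends — might be sketched as follows; the sample rate and the `stream_send`/`upload` callables are placeholders for the real transport layer:

```python
import io
import wave

def run_live_exam(chunks, stream_send, upload):
    """
    Feed each captured PCM chunk to the live (lossy, best-effort) stream
    immediately, while appending the same chunk to a local lossless WAV;
    when the exam ends, hand the complete WAV to the store-and-forward
    uploader. chunks: iterable of bytes (16-bit mono PCM at 8 kHz, assumed).
    """
    buf = io.BytesIO()
    wav = wave.open(buf, "wb")
    wav.setnchannels(1)   # mono
    wav.setsampwidth(2)   # 16-bit samples
    wav.setframerate(8000)
    for chunk in chunks:
        stream_send(chunk)        # best-effort live preview
        wav.writeframes(chunk)    # lossless local copy
    wav.close()
    upload(buf.getvalue())        # complete store-and-forward .wav file
```

Dropped packets on the live preview do not affect the uploaded file, which is why the forwarded copy can be reviewed at clinical quality moments later.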
• One implementation of this invention envisages an internet-connected app that is hardware agnostic and can hence be easily deployed across all Android and iOS smartphones, as well as PC and Apple desktop computers; virtually any medical device can be easily and cheaply architected to send patient datasets to the smartphone or computer, e.g. over a standard USB cable or wireless connection; and the internet-connected app can then manage the secure transfer of these patient datasets to a web server.
• Once the datasets are stored on the web-server, they can be shared by generating a web-link to those specific datasets and sharing that web-link; any physician with a web browser can then review those datasets.
  • One conventional approach when designing telemedicine systems is to provide some sort of proprietary and secure data transfer system directly into the medical device or a host computer; this data transfer system can then securely transfer data to a cloud-based telemedicine system.
  • the architecture is quite simple: medical device connects to telemedicine system.
• the overall architecture is more complex, because we add in an internet-connected app (resident on the medical device or a connected smartphone etc.) and a web-server that the web-app communicates with; that web-server can then in turn connect to the cloud-based telemedicine system.
• the present invention offers the same potential: it enables medical device vendors to focus on what they do best, designing medical devices that work with any telemedicine system, so long as the medical device can include an internet-connected app or send data to a device like a smartphone etc. that can run an internet-connected app, and so long as the telemedicine system has a web browser. Similarly, it enables telemedicine vendors to focus on what they do best, without having to be concerned about the specifics of how medical devices work, or requiring medical devices to include specific proprietary software.
  • this invention can provide a universal backbone connecting in essence any medical device to any telemedicine system.
• the medical device may be a digital stethoscope and the patient sounds can then be auscultation sounds, e.g. sounds made by the heart, lungs or other organs.
  • these auscultation sounds would be live streamed to a physician or other healthcare professional; as noted earlier, live streaming can however result in dropped or delayed packets, with the physician then being unable to accurately detect heart rhythms (e.g. murmurs) or other critical sounds.
  • the audio data is sent to a file handling system for download and not live real-time streaming, although live streaming remains an option for audio where the highest quality is not essential.
  • the audio data is sent from the medical device to an intermediate device or web server that implements the file handling system; the audio data is fully downloaded at the intermediate device or web server; playback can take place once the data has been fully downloaded; the intermediate device or web server in turn can provide the file to the PC or smartphone or other device of the healthcare professional; this local device then downloads the file and enables the healthcare professional to listen to the file, replay it, annotate the file with metadata, store it in a digital patient record, share it etc.
  • the intermediate device or web server can stream the file to the healthcare professional’s device; this streaming will however be at higher quality than direct real-time live streaming from the medical device.
  • the file handling system introduces some minor and potentially imperceptible latency, but ensures that the physician/healthcare professional etc. can hear the auscultation sounds as clearly and completely as possible, at a quality that is better than direct live streaming quality, which can be affected by dropped and delayed packets.
  • the healthcare professional can also receive live- streaming audio, for example to hear the user speaking and to hear audio useful for the accurate positioning of the device (e.g. to hear a heartbeat).
• the timer UX can show an animated “downloading” bar or dots, during which the file is being sent to the healthcare professional’s computer or local device. It typically takes 1-2 seconds, depending on the internet bandwidth, to receive the fully downloaded file at the healthcare professional’s device and for the healthcare professional to be able to start local playback of the fully downloaded file.
  • a telemedicine system including:
  • a medical device that includes a microphone system configured (i) to detect and/ or record patient sounds, and (ii) to generate audio data from those sounds, and (iii) to send that audio data;
  • a file handling system configured (i) to receive, download and store the audio data from the medical device, and (ii) make that file available for near-real-time listening to the patient sounds.
  • the telemedicine system is configured to simultaneously (a) record the file and (b) enable a healthcare professional to listen to the patient sounds in real time.
  • the delay between real time and near-real time is less than 30s.
  • the delay between real time and near-real time is less than 10s.
  • the delay between real time and near-real time is less than 5s.
  • the delay between real time and near-real time is less than 2s.
  • the telemedicine system is configured to enable the file to be recorded in a format suitable for clinical grade analysis, such as a lossless format.
  • the telemedicine system is configured to generate sections or fragments of audio data from the patient sounds and the file handling system is configured to receive, download, and store each section of audio data and to make each section available for near-real-time listening to patient sounds.
  • Each section is configured to represent a pre-defined length of audio data, such as 10 seconds of audio data, or 1 second of audio data.
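Splitting the audio data into pre-defined sections, as described above, can be as simple as slicing the PCM byte stream into fixed-duration chunks that are then uploaded and made available incrementally; the sample rate and section length below are illustrative:

```python
def split_into_sections(pcm: bytes, seconds: float,
                        rate: int = 8000, width: int = 2) -> list:
    """
    Cut a mono PCM byte stream into fixed-length sections for incremental
    upload and near-real-time listening; the final section may be shorter.
    rate: samples per second, width: bytes per sample (assumed values).
    """
    step = int(seconds * rate) * width
    return [pcm[i:i + step] for i in range(0, len(pcm), step)]
```

Shorter sections (e.g. 1 second) reduce the delay before the first section is available to the healthcare professional, at the cost of more upload requests.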
  • the system is configured to enable a healthcare professional to select from a remote location when to start listening to patient sounds in real time.
  • the system is configured to automatically make the file available to the healthcare provider for listening at the end of an action from the healthcare professional, such as releasing a “listen” button or selecting a “review” button.
  • the telemedicine system is configured to store the file locally on a device that is connected to the medical device, such as a mobile device, smartwatch, smartphone, desktop, or laptop.
  • TCP layer protocol processing and IP layer protocol processing (TCP/IP) is used to send the file from the medical device to a web server.
  • TCP/IP is used to send the file from the web server to a healthcare provider’s device.
  • the medical device is a digital stethoscope and the patient sounds are clinically relevant, e.g. auscultation sounds, such as sounds made by the heart, lungs or other organs of a human or indeed any other animal.
  • the file handling system is implemented on a web server that receives audio data directly or indirectly from the medical device and is configured for recording, storing and controlling access to uploaded patient datasets that include the audio data processed by the file handling system.
  • the file handling system is implemented on an intermediary device that receives audio data directly or indirectly from the medical device and sends processed audio data to a web server that is configured for recording, storing and controlling access to uploaded patient datasets that include the audio data processed by the file handling system.
  • the file handling system is implemented as a store and forward system.
  • the medical device includes (i) a speech microphone configured to detect and/or record patient speech and (ii) a second microphone configured to detect and/or record patient sounds (e.g. clinically relevant sounds) and generate audio data (e.g. clinically relevant audio data) from those sounds.
  • the speech microphone is configured to enable real-time voice communication from the patient to the healthcare professional at the same time as the audio data is being provided to the healthcare professional via the file handling system to enable the healthcare professional to listen to the downloaded audio data in near real-time or at a later time.
• the telemedicine system is configured to enable the healthcare professional to select whether to listen to real-time voice communication from the patient or to listen to the downloaded clinically relevant audio data sent via the file handling system, by muting, fully or partly, either the real-time voice communication or the audio data.
  • the speech microphone uses one channel of a stereo channel pair, and the second microphone uses the other channel.
  • the system is configured to use the speech microphone to determine unwanted noise or noise that otherwise affects the quality of the audio data and to generate a warning if the unwanted noise exceeds a threshold.
  • the speech and clinically relevant audio data are each delivered as an audio signal over a stereo channel and a web app separates the audio signal into two separate mono feeds or channels and processes each differently.
  • the clinically relevant audio data channel has a gain control to increase the strength of the signal.
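A sketch of separating the stereo feed into speech and auscultation channels with independent gain, as described above; which channel carries which microphone, and the gain value, are assumptions for illustration:

```python
import numpy as np

def split_and_gain(stereo: np.ndarray, auscultation_gain: float = 4.0):
    """
    Separate an interleaved stereo frame (shape (n, 2), floats in [-1, 1])
    into the speech channel (left, assumed) and the auscultation channel
    (right, assumed), applying gain to the quieter auscultation signal
    and clipping to the valid range.
    """
    speech = stereo[:, 0].copy()
    ausc = np.clip(stereo[:, 1] * auscultation_gain, -1.0, 1.0)
    return speech, ausc
```

Each mono feed can then be filtered, muted or recorded independently, matching the separate speech/auscultation controls described above.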
  • filters are applied to the speech sounds and also the clinically relevant audio data, after these sounds have been recorded, maintaining a raw audio file or files.
  • the healthcare professional and/or patient each have control of muting the speech channel and the audio data channel separately if they want to only hear one or the other channel.
  • the speech microphone is used to capture audio that is used to reduce or remove sounds that are not relevant to the audio data.
• the speech microphone output is used to determine if the room a patient is in is too noisy for a patient reading and/or if a patient is speaking while the exam is being recorded, enabling a message to be shown or played to the patient asking them to be silent and/or warning that there is too much noise to perform the examination.
  • each channel is processed to enable noise reduction/cancellation techniques.
  • the noise reduction/cancellation techniques involve measuring the timing/phasing of noise detected by the speech microphone compared with the same noise detected by the auscultation microphone.
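The timing/phasing-based noise reduction above could, in a simple form, estimate the lag of room noise between the two microphones via cross-correlation and then subtract a delayed copy of the speech-mic signal from the auscultation channel; this is a toy sketch, not a production noise canceller:

```python
import numpy as np

def noise_lag(speech_mic: np.ndarray, ausc_mic: np.ndarray) -> int:
    """
    Estimate, in samples, how much later a room noise appears on the
    auscultation microphone than on the speech microphone, using
    cross-correlation of the two channels.
    """
    corr = np.correlate(ausc_mic, speech_mic, mode="full")
    return int(np.argmax(corr)) - (len(speech_mic) - 1)

def subtract_delayed(ausc_mic, speech_mic, lag, alpha=1.0):
    """Subtract the delayed speech-mic noise estimate from the auscultation channel."""
    est = np.zeros_like(ausc_mic)
    if lag >= 0:
        est[lag:] = speech_mic[: len(speech_mic) - lag]
    else:
        est[:lag] = speech_mic[-lag:]
    return ausc_mic - alpha * est
```

In practice `alpha` would be estimated adaptively, since the two microphones pick up the same noise at different levels as well as different times.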
  • each channel is processed to enable compensating for different timing in receiving auscultation sounds in patients with different body masses.
  • the telemedicine system is configured to use the speech microphone to determine unwanted noise or noise that otherwise affects the quality of the audio data and to generate a warning if the unwanted noise exceeds a threshold.
  • the clinically relevant audio data is processed at the file handling system to improve the quality of the audio from a clinical or diagnostic perspective.
  • the clinically relevant audio data channel has a gain control to increase the strength of the signal.
  • the medical device is a single, unitary device and the speech microphone and the second microphone are integrated into that single, unitary device.
  • the medical device comprises two physically separate or separable units, and the speech microphone and the second microphone are integrated into different separate or separable units.
  • the medical device is configured to upload or send patient datasets to a remote web server, from an internet-connected app running either on the device or on an intermediary device
  • the remote web server posts or makes available webpages that include patient datasets and can be viewed on any web-enabled device, such as the patient’s laptop, or mobile phone or the healthcare professional’s laptop or mobile phone.
  • the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links.
  • the medical device is configured to upload or send patient datasets to a remote web server, directly from an internet-connected app running either on the medical device or on an intermediary device;
  • the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset that has been processed by the file handling system; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the weblink from within a web browser or from within any dedicated telemedicine application that opens web-links.
  • the unique web-link is configured to enable a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links and to initiate a virtual examination of the patient by opening a link to a virtual examination room hosted on the remote web server.
  • the web link is configured to be copied and pasted by the user into a telemedicine session, email message, text message or any other communications system.
  • the web link is configured to be sent automatically to the healthcare professional.
  • the web link is configured to be sent automatically to the healthcare professional only after the user has confirmed it should be sent by interacting with a web page posted by the web server.
• the web link is configured to be used to control access rights and privacy rights.
  • the web link is configured to be used to control additional healthcare services such as diagnostic analysis and verification.
• the web link contains rules permitting third party access rights, sharing/viewing rules and financial controls.
• the web link is an HTML hyperlink.
  • the web link provides access to the patient dataset to an authorized third party only when the authorized third party has been authenticated by the system and/ or patient and/ or healthcare provider.
  • the method enables an authorized third party to start or stop the creation of a patient dataset by at least one of the medical devices.
• the intermediary device is a device that provides the following:
  • the intermediary device sends audio data to the web server that is configured for recording, storing and controlling access to uploaded patient datasets that include the clinically relevant audio data processed by the file handling system.
  • the medical device is connected or sends data to an intermediary device, such as a laptop or PC, and an internet-connected app running on the intermediary device treats the patient speech and the audio data generated by the medical device in a way that satisfies the standard browser security model of allowing for multiple audio sources to be used at any given time
• the medical device is connected or sends data to a portable intermediary device such as a smartphone or smartwatch.
  • an internet-connected app running on the portable intermediary device processes both the patient speech and also the audio data generated by the medical device in a way that satisfies the standard smartphone or smartwatch model of allowing for multiple audio sources to be used at any given time only if they are integrated into a single app.
• the intermediary device is a smartphone or laptop or any other computing device that is configured to connect to the medical device and the remote file handling system.
  • the medical device connects to the intermediary device using a data cable, such as a USB cable.
  • the medical device connects to the intermediary device over short-range wireless, such as Bluetooth.
• the digital stethoscope comprises a first audio sensor that is configured to pick up speech from the patient or sounds from the patient environment and a second audio sensor that is configured to measure or sense clinically relevant body sounds.
  • the medical device is any digital medical device that can generate patient data and send that data, directly or via an intermediary device, to a remote web server.
  • the medical device is one of the following: digital stethoscope, ultrasound, blood pressure monitoring device or any other digital monitoring devices.
• the medical device is a smart device that is configured to monitor vital signs and other patient parameters for anomalies or events, to automatically send an alert to the remote web-server if an anomaly or event is detected, together with a patient dataset that captures the anomaly or event, to generate a unique web-link that is associated with that patient dataset, and to send that unique web-link to a healthcare professional or emergency service.
• the anomaly or event includes an onset of organ failure or malfunction.
• the anomaly or event includes an altered breathing rate or cough.
  • the medical device connects to the intermediary device running the web app over a USB port.
  • Audio filters can include compensation for body mass or other human characteristics that are known to alter auscultation sounds, including male/ female body differences, age etc. It is known that sound travelling through say a heavier patient will have a different frequency response than the same sound travelling through a thinner patient. Likewise, a female patient’s heart and lung sounds might present more quietly, due to the impact of sound travelling through breast tissue.
• Such audio characteristics can be compensated for using equalisation, compression and/or convolution techniques, much like Digital Audio Workstation (DAW) software can, for example, remove room ambience and/or compensate for a recording made in a live room and make it sound as if it was made in a carpeted room.
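The equalisation-based compensation described above might, in its simplest form, boost the frequency band that body tissue attenuates; the corner frequency and boost figure below are illustrative assumptions, not clinical settings:

```python
import numpy as np

def tissue_compensation(signal: np.ndarray, rate: int, boost_db: float = 6.0,
                        corner_hz: float = 150.0) -> np.ndarray:
    """
    Illustrative equalisation stage: boost frequencies above `corner_hz`
    by `boost_db` to offset the high-frequency attenuation that body
    tissue introduces, leaving lower frequencies untouched.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    gain = np.where(freqs >= corner_hz, 10 ** (boost_db / 20.0), 1.0)
    return np.fft.irfft(spectrum * gain, n=len(signal))
```

A real implementation would select the curve per patient profile (body mass, sex, age), as described above, rather than use a single fixed shelf.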
  • a telemedicine system including:
  • a medical device that includes one or more sensors configured (i) to detect and/ or record patient sounds and/ or images, and (ii) to generate data from those sounds and/ or images, and (iii) to send that data;
  • a file handling system configured (i) to download and store the data from the medical device, and (ii) make that file available for non-real-time listening and/or viewing to the patient sounds and/ or images.
  • a telemedicine system enables patient datasets that are generated from multiple medical devices to be sent to a remote web server or servers.
• For example, there could be thousands of low-cost stethoscopes, e.g. M1 devices as described in this document, each being used by a patient at home by being plugged into that patient's smartphone using a simple USB cable connection.
• Each smartphone runs an internet-connected application that records the heart sounds etc. captured by the tethered stethoscope and creates a dataset for each recording. It sends that recording, or patient dataset, to a remote server over the internet. The remote server then associates that recording, or patient dataset, with a unique web-link.
  • the patient's doctor is sent the web-link, or perhaps the server sends the web-link for automatic integration into the electronic records for that patient.
  • the patient's doctor can then simply click on the web-link and then the recording or other patient dataset is then made available - e.g. a media player could open within the doctor's browser or dedicated telemedicine application and when the doctor presses 'play', the sound recording is played back.
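The end-to-end flow described above — upload a recording, associate it with a unique web-link, and let the doctor open that link — can be sketched in-memory as follows; the class name and URL are hypothetical:

```python
import uuid

class WebLinkServer:
    """In-memory sketch of the remote server: stores uploaded patient
    datasets and hands back a unique web-link for each one."""

    def __init__(self, base_url="https://example-server/rec/"):
        self.base_url = base_url
        self.store = {}

    def upload(self, patient_id: str, recording: bytes) -> str:
        """Store a patient dataset and return its unique web-link."""
        link = self.base_url + uuid.uuid4().hex
        self.store[link] = {"patient": patient_id, "data": recording}
        return link

    def open_link(self, link: str):
        """What the doctor's browser receives when the web-link is clicked,
        or None for an unknown link."""
        return self.store.get(link)
```

A production server would additionally enforce authentication and the access-right rules carried by the link before serving the dataset.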
  • a telemedicine system comprising one or more medical devices that are each configured to generate patient datasets, and a remote web server; in which: a medical device is configured to upload or send patient datasets to the remote web server, directly from an internet-connected app running either on the medical device or on an intermediary device; the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links.
  • a telemedicine system comprising one or more medical devices that are each configured to generate patient datasets, and a remote web server connected to at least one of the medical devices; in which: a medical device is configured to upload or send patient datasets to the remote web server, directly from an internet-connected app running either on the device or on at least one intermediary device; the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset; and in which the unique web-link is configured to enable a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links and to initiate a virtual examination of the patient by opening a link to a virtual examination room hosted on the remote web server.
Remote web server
  • the remote web server is configured for recording, storing and controlling access to uploaded patient datasets.
  • the remote web server posts or makes available webpages that include the patient datasets and can be viewed on any web-enabled device, such as the patient’s laptop, or mobile phone or the healthcare professional’s laptop or mobile phone.
  • the web link is configured to be copied and pasted by the user into a telemedicine session, email message, text message or any other communications system.
  • the web link is configured to be sent automatically to the healthcare professional.
  • the web link is configured to be sent automatically to the healthcare professional only after the user has confirmed it should be sent by interacting with a web page posted by the web server.
  • the web link is configured to be used to control access rights and privacy controls.
  • the web link is configured to be used to control additional healthcare services such as diagnostic analysis and verification.
  • the web link contains rules permitting third party access rights, sharing/viewing rules and financial controls.
  • the web link provides access to the patient dataset to an authorized third party only when the authorized third party has been authenticated by the system and/ or patient and/ or healthcare provider.
  • the system and method enables an authorized third party to start or stop the creation of a patient dataset by at least one of the medical devices.
  • the system and method enables an authorized third party to record the patient dataset.
  • the system and method enables an authorized third party to preview the patient dataset in live streaming mode and then, in near real-time, receive the downloaded higher-quality version of the same dataset without risk of data packet loss.
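The link-borne use restrictions described above (time window, access count, authenticated third parties) can be sketched as a small rule-checking layer that the server consults before serving a dataset. The rule fields and their names are illustrative assumptions, not the system's actual schema.

```python
import time

class AccessController:
    """Sketch: per-link rules (expiry, use count, authorized parties)
    checked on every attempt to open the link."""

    def __init__(self):
        self.rules = {}
        self.uses = {}

    def set_rules(self, token, expires_at, max_uses, authorized):
        self.rules[token] = {
            "expires_at": expires_at,       # time period for accessing the link
            "max_uses": max_uses,           # predefined number of accesses
            "authorized": set(authorized),  # authenticated third parties
        }
        self.uses[token] = 0

    def check(self, token, party, now=None):
        now = time.time() if now is None else now
        rule = self.rules.get(token)
        if rule is None or party not in rule["authorized"]:
            return False  # unknown link or unauthenticated party
        if now > rule["expires_at"] or self.uses[token] >= rule["max_uses"]:
            return False  # expired or access count exhausted
        self.uses[token] += 1
        return True

ac = AccessController()
ac.set_rules("tok1", expires_at=1000.0, max_uses=2, authorized=["dr_smith"])
assert ac.check("tok1", "dr_smith", now=500.0)          # authorized access allowed
assert not ac.check("tok1", "someone_else", now=500.0)  # unauthenticated party denied
```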
  • the intermediary device is a smartphone or laptop or any other computing device that is configured to connect to at least one of the medical devices and the remote web server.
  • the medical device connects to the intermediary device using a data cable, such as a USB cable.
  • the medical device connects to the intermediary device over short-range wireless, such as Bluetooth.
  • the medical device is portable.
  • the medical device is any digital medical device that can generate patient data and send that data, directly or via an intermediary device, to a remote web server.
  • the medical device is one of the following: digital stethoscope, ultrasound, blood pressure monitoring device or any other digital monitoring devices.
  • a visual indicator on the digital medical device indicates when sufficient data has been measured to generate a patient dataset.
  • a visual indicator on the digital medical device indicates that an authorized third party is accessing, such as streaming, the patient dataset.
  • the medical device is a smart device that is configured to monitor vital signs and other patient parameters for anomalies or events and to automatically send an alert to the remote web-server if an anomaly or event is detected, together with a patient dataset that captures the anomaly or event, and generate a unique web-link that is associated with that patient dataset and to send that unique web-link to a healthcare professional or emergency service.
  • the anomaly or event includes an onset of organ failure or malfunction.
  • the anomaly or event includes an altered breathing rate or cough.
  • the medical device connects to the intermediary device running the web app over a USB port.
Second microphone: Telemedicine Audio Systems and Methods
  • the doctor can start a video or audio examination of a remote patient, and during that examination can choose to listen to the real-time heart/lung sounds being recorded by the stethoscope the patient is using (using for example the web-link sharing process described above), and can also have an audio conversation with the patient because the stethoscope includes two microphones: one for picking up the heart/lung sounds, and a second microphone for picking up the voice of the patient.
  • the doctor when listening to heart/lung sounds, can mute those sounds fully, and instead listen to the patient talking; the doctor can also partly mute either the heart/lung sounds or the patient's voice; for example, to have the heart/lung sounds as the primary sound and have the patient's voice partly muted and hence at a lower level. Similarly, the doctor may have the patient's voice as the main sound and have the real-time heart/lung sounds muted to a lower level.
  • Using one microphone per channel i.e. one microphone on the left channel and the other on the right channel, allows the design to leverage common amp and/or A-D chip designs. Without this design, a system would need a method of switching from the auscultation/stethoscope microphone to the patient voice microphone, which is challenging to engineer since it requires a system-level change. Further, being able to process the sound signals from both microphones in parallel can be very advantageous for various noise reduction/cancellation and enhancement functions. For example, in a noisy environment (e.g. an ER), noise reduction/cancellation techniques can be applied such as measuring the timing/phasing of noise detected by the voice microphone compared with the same noise detected by the auscultation microphone: this requires simultaneous or parallel processing of the sonic signals from both microphones, and would not be possible if the auscultation/stethoscope microphone could only be sending signals when the patient voice microphone was off, and vice versa.
  • Simultaneous or parallel processing of the sonic signals from both microphones also enables compensating for different timing in receiving auscultation sounds in patients with different body masses: for example, assume the patient voice microphone detects a sound in the room with a given intensity; that same sound will pass through the patient's upper body tissue and be reflected off the ribcage and hard tissue; the auscultation/ stethoscope will detect that reflected signal.
  • the attenuation of the reflected signals increases as body mass increases; hence we are able to approximately infer body mass by measuring the intensity of the reflected signals; we can use that body mass estimation to compensate for the small but different time delay in receiving auscultation sounds in patients with different body masses, and can hence normalise auscultation sounds across patients in a way that compensates for different body mass.
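A minimal sketch of the one-microphone-per-channel processing described above: the stereo pair is split into a voice channel and an auscultation channel, the doctor-side mix can mute or attenuate either channel independently, and, because both channels are available simultaneously, the lag of a shared noise between the two microphones can be estimated by cross-correlation. This is an illustrative sketch in plain Python, not the device's actual signal chain.

```python
def split_stereo(frames):
    """Split interleaved [L, R, L, R, ...] samples into (voice, auscultation)."""
    return frames[0::2], frames[1::2]

def mix(voice, ausc, voice_gain=1.0, ausc_gain=1.0):
    """Doctor-side monitor mix: mute or attenuate either channel independently."""
    return [v * voice_gain + a * ausc_gain for v, a in zip(voice, ausc)]

def estimate_delay(a, b, max_lag):
    """Lag (in samples) by which channel b trails channel a, via cross-correlation.
    Simultaneous access to both channels is what makes this possible."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        lo, hi = max(0, -lag), min(len(a), len(b) - lag)
        score = sum(a[i] * b[i + lag] for i in range(lo, hi))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A room noise reaches the voice microphone first, then (attenuated and
# delayed) the auscultation microphone; the estimated lag and attenuation
# feed the noise-cancellation and body-mass compensation ideas above.
voice = [0, 1, 0, 0, 0, 0]
ausc = [0, 0, 0, 1, 0, 0]
assert estimate_delay(voice, ausc, 3) == 2  # auscultation channel trails by 2 samples
```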
  • a telemedicine system comprising: multiple medical devices that are each configured to generate patient datasets, and a remote web server connected to each medical device; in which: a medical device is configured to upload or send patient datasets to the remote web server, directly from an internet-connected app running either on the device or on an intermediary device; and in which the medical device includes (i) a speech microphone configured to detect and/or record patient speech and (ii) a second microphone in the medical device configured to detect and/or record clinically relevant sounds and generate an audio dataset from those sounds; and in which the internet-connected app is configured to treat that patient speech separately from the audio dataset and is hence configured to enable real-time voice communication from the patient to the healthcare professional at the same time as the audio dataset is being shared with the healthcare professional via the remote web server; and the system is configured to enable the healthcare professional to select whether to listen to real-time voice communication from the patient or to listen to the audio dataset in real-time by muting, fully or partly, either the real-time voice communication or the audio dataset.
  • the system is configured to use the speech microphone to determine unwanted noise or noise that otherwise affects the quality of the audio dataset and to generate a warning if the unwanted noise exceeds a threshold.
  • the internet-connected app treats the patient speech and the audio dataset generated by the medical device in a way that satisfies the standard browser security model of allowing for multiple audio sources to be used at any given time
  • the intermediary device is a smartphone or smartwatch
  • the internet- connected app processes both the patient speech and also the audio dataset generated by the medical device in a way that satisfies the standard smartphone or smartwatch model of allowing for multiple audio sources to be used at any given time only if they are integrated into a single app.
  • the clinically relevant audio dataset channel has a gain control to increase the strength of the signal.
  • filters are applied to the speech sounds and also the clinically relevant sounds, after these sounds have been recorded, maintaining a raw audio file or files.
  • the healthcare professional and/or patient each have control of muting the speech channel and the clinically relevant sound channel separately if they want to only hear one or the other channel.
  • the speech microphone is used to capture audio that is used to reduce or remove sounds that are not relevant to the clinically relevant sound channel and hence the audio dataset.
  • the speech microphone output is used to determine if the room is too noisy for a patient reading and/or if a patient is speaking when the exam is being recorded to enable a message to be shown or given to the patient to be silent and/ or that there is too much noise to perform the examination.
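The speech-microphone noise check described in the last few bullets can be sketched as an RMS gate on the voice channel; the threshold value and the warning wording are illustrative assumptions, since a real device would calibrate these.

```python
import math

NOISE_RMS_THRESHOLD = 0.1  # illustrative threshold; a real device would calibrate this

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples)) if samples else 0.0

def noise_warning(speech_mic_samples, threshold=NOISE_RMS_THRESHOLD):
    """Use the speech microphone to decide whether the room is too noisy
    (or the patient is talking) while an examination is being recorded."""
    if rms(speech_mic_samples) > threshold:
        return "Too much noise to perform the examination; please be silent or find a quieter spot."
    return None

assert noise_warning([0.0] * 100) is None           # silent room: no warning
assert noise_warning([0.5, -0.5] * 50) is not None  # loud room: warn the patient
```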
  • the medical device is a digital stethoscope
  • the medical device is a digital stethoscope and the clinically relevant sounds are auscultation sounds.
  • the audio dataset channel is, for example, an auscultation sound channel.
  • the digital stethoscope connects to the intermediary device using a USB port.
  • the digital stethoscope connects to the intermediary device using short-range wireless.
  • the digital stethoscope includes a single visual output and a single button.
  • digital stethoscope comprises a first audio sensor that is configured to measure or sense body sounds and a second audio sensor that is configured to measure or sense sounds from the patient or the environment around the patient.
  • the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links.
  • Another aspect is a medical device that includes (i) a speech microphone configured to detect and/or record patient speech and (ii) a second microphone configured to detect and/or record clinically relevant sounds and generate an audio dataset from those sounds; in which the speech microphone uses one channel of a stereo channel pair, and the second microphone uses the other channel, and each channel is processed substantially in parallel or simultaneously.
  • the medical device is a digital stethoscope and the clinically relevant sounds are auscultation sounds.
  • each channel is processed substantially in parallel or simultaneously to enable noise reduction/ cancellation techniques.
  • the noise reduction/cancellation techniques involve measuring the timing/phasing of noise detected by the speech microphone compared with the same noise detected by the auscultation microphone.
  • each channel is processed substantially in parallel or simultaneously to enable compensating for different timing in receiving auscultation sounds in patients with different body masses.
  • the system is configured to use the speech microphone to determine unwanted noise or noise that otherwise affects the quality of the audio dataset and to generate a warning if the unwanted noise exceeds a threshold.
  • the clinically relevant audio dataset channel has a gain control to increase the strength of the signal.
  • filters are applied to the speech sounds and also the clinically relevant sounds, after these sounds have been recorded, maintaining a raw audio file or files.
  • a healthcare professional and/or patient each have control of muting the speech channel and the clinically relevant sound channel separately if they want to only hear one or the other channel.
  • the speech microphone is used to capture audio that is used to reduce or remove sounds that are not relevant to the clinically relevant sound channel and hence the audio dataset.
  • the speech microphone output is used to determine if the room is too noisy for a patient reading and/or if a patient is speaking when the exam is being recorded to enable a message to be shown or given to the patient to be silent and/ or that there is too much noise to perform the examination.
  • each channel is processed substantially in parallel or simultaneously to enable noise reduction/ cancellation techniques at the medical device.
  • the medical device is configured to upload or send patient datasets to a remote web server, directly from an internet-connected app running either on the device or on an intermediary device
  • each channel is processed substantially in parallel or simultaneously to enable noise reduction/ cancellation techniques at the intermediary device
  • the intermediary device is a laptop or PC
  • the patient speech and the audio dataset generated by the medical device are treated in a way that satisfies the standard browser security model of allowing for multiple audio sources to be used at any given time
  • the intermediary device is a smartphone or smartwatch
  • the patient speech and also the audio dataset generated by the medical device are treated in a way that satisfies the standard smartphone or smartwatch model of allowing for multiple audio sources to be used at any given time only if they are integrated into a single app.
  • the medical device is a single, unitary device and the speech microphone and the second microphone are integrated into that single, unitary device.
  • the medical device comprises two physically separate or separable units, and the speech microphone and the second microphone are integrated into different separate or separable units.
  • the Medaica system is able to generate advice or instructions on when to perform specific healthcare management protocols, such as when specific bodily sounds or functions should be measured.
  • the patient is assumed to be manually placing the stethoscope at positions on his or her body that the patient hopes are correct.
  • the patient can be guided, by an application running on the smartphone, to position the device at different positions and to then create a recording from each of those positions.
  • the application could provide voice instructions to the patient, such as 'first, place your stethoscope over the heart and press record'.
  • the application could display a graphic indicating on an image of a body where to place the stethoscope. Once that recording has been made, the application could provide another spoken instruction such as 'Now, move the stethoscope down 5cm'; again a graphic could be shown to guide the patient.
  • the guidance could be timed, so that, for example, at two or three pre-set times each day, the patient would be guided through the steps needed to use the stethoscope in the ways dictated by a protocol set by the patient's doctor.
  • a telemedicine system comprising multiple medical devices that are each configured to generate patient datasets, and a remote web server; in which: a medical device is configured to upload or send patient datasets to the remote web server, directly from an internet-connected app running either on the device or on an intermediary device; and in which the remote web server hosts or enables access to an applet that, when run on the internet-connected app, provides instructions or guides to the patient to perform specific healthcare management protocols.
  • the applet guides the patient to take specific tests with a specific frequency.
  • the applet sends reminders to the patient as well as updates to the patient's healthcare provider(s) and/or insurer or other parties with appropriate permissions.
  • the applet guides the patient to use a digital stethoscope in a specific position, for a specific duration and frequency.
  • the applet is a Patient Release Protocol that provides instructions or guides to the patient to perform specific healthcare management protocols relevant to their release from hospital
  • the applet integrates patient datasets generated in response to the applet into the healthcare records of the relevant patient.
  • the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links.
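The timed-guidance applet described above can be sketched as a simple scheduler that surfaces the next due instruction from a doctor-set protocol; the times and instruction wording below are illustrative assumptions.

```python
from datetime import datetime

class ProtocolScheduler:
    """Sketch of the timed-guidance idea: a doctor-set protocol lists daily
    measurement times, and the applet surfaces the next due instruction."""

    def __init__(self, daily_times, instruction):
        self.daily_times = sorted(daily_times)  # e.g. ["08:00", "20:00"]
        self.instruction = instruction

    def next_due(self, now):
        current = now.strftime("%H:%M")
        for t in self.daily_times:
            if t >= current:
                return f"{t}: {self.instruction}"
        # All of today's slots have passed: first slot tomorrow.
        return f"{self.daily_times[0]} (tomorrow): {self.instruction}"

sched = ProtocolScheduler(
    ["08:00", "20:00"],
    "place the stethoscope over the heart and press record",
)
assert sched.next_due(datetime(2024, 1, 1, 9, 30)).startswith("20:00")
```

A production applet would also post completion updates to the healthcare provider, as the bullets above describe; that side is omitted here.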
  • the Medaica system enables healthcare professionals to directly conduct remote examination using a virtual examination room hosted on a remote web server.
  • the doctor can open a virtual examination video room, invite the patient to join, and conduct a virtual examination by asking the patient to move the stethoscope to specific areas and select 'record'; the audio recording can be streamed to the remote server, and added to the resources available to the doctor in the virtual examination room so that the doctor can listen to the recording in real-time.
  • the doctor can ask the patient to repeat the recording, or guide the patient to move the stethoscope to a new position, and create a new recording, which can be listened to in real-time.
  • the doctor can edit the recording to eliminate clinically irrelevant sections and can then share a web-link that includes that edited audio file, for example with experts for a second opinion.
  • a telemedicine system comprising multiple medical devices that are each configured to generate patient datasets, and a remote web server connected to each medical device; in which: a medical device is configured to upload or send patient datasets to the remote web server, directly from an internet-connected app running either on the device or on an intermediary device; in which the system is configured to enable a healthcare professional and a patient to communicate via a virtual examination room, and the system is further configured to display a user interface that includes a virtual or graphical body image or body outline and one or more target positions at which a medical device is to be positioned by the patient; and the system is further configured to enable a dynamic interaction between the patient or the healthcare professional and the user interface, to enable the patient to correctly position the medical device at the target position or positions.
  • the system is configured to overlay or integrate a real-time image of the patient with the virtual or graphical body image or body outline to enable a dynamic interaction in which the patient matches or overlaps the two images to enable the patient to position the medical device at the target position or positions.
  • the system is configured to enable a dynamic interaction in which the healthcare professional alters the location of the target position or positions.
  • the patient enters that virtual examination room by entering a code, such as a code provided by the healthcare professional and once both healthcare professional and patient are in the same virtual examination room, the healthcare professional and the patient can communicate by voice and/ or video.
  • the system is configured to enable the code to be provided by the healthcare professional to the patient.
  • the healthcare professional can guide the patient into using the medical device in specific ways defined by the examination protocol and the system is further configured to provide feedback if the patient is operating the medical device in compliance with that protocol.
  • the patient can use their medical device to create datasets which are uploaded to the remote web server and made available automatically and substantially immediately to the healthcare professional to review and/ or record.
  • the user interface is configured to show a body map or body image of a part of a patient’s body with an icon or other mark representing the medical device, in which the icon or mark is movable by a participant in a telemedicine session.
  • the system is configured to enable the healthcare professional to move the icon or mark on the body map or body image and to display to the patient the moving icon or mark to enable the patient to place his/her medical device to overlay the icon or mark on the body map or body image.
  • the icon or mark could be semi-transparent and/ or the same shape as the stethoscope head to make it easier for the patient to position the stethoscope “virtually” under the icon and over the auscultation site.
  • the internet-connected app displays an augmented reality view to guide the patient to find a specific position to place the medical device.
  • the medical device automatically generates a patient dataset when the medical device is positioned at or near the specific position.
  • an augmented reality view is provided that includes an outline of the patient based on sensor data, and the augmented reality view is displayed to both the patient and healthcare professional at the same time.
  • the internet-connected app displays an outline of a torso or other part of the body in a video feed and indicates a specific position on the torso or other body part at which the patient is to place the medical device.
  • the system is configured to provide a patient self-examination mode, in which different target positions, at which the medical device is to be placed, are shown or indicated to the patient on the internet-connected app; and the system is configured to create, manually or automatically, a patient dataset or recording at each specific position.
  • target positions are medically standard positions or are specifically chosen by the healthcare professional.
  • the medical device is a stethoscope and the target positions are specific, standard auscultation positions, or, if the patient is receiving guidance from the healthcare professional, the desired auscultation positions can be moved by the healthcare professional in real time.
  • the patient dataset is an audio or video file or stream.
  • the patient dataset is an auscultation audio or video file or stream.
  • the patient dataset is data relating to the heart, lung or any other organ.
  • the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset.
  • the unique web-link is configured to enable a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links and to initiate a virtual examination of the patient by opening a link to a virtual examination room hosted on the remote web server.
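The guided-positioning interaction above can be sketched as a tolerance check between the device icon and the current target on a normalised body map, with each accepted recording tagged with its position name. The coordinates, tolerance and position names are illustrative assumptions.

```python
import math

def within_target(device_xy, target_xy, tolerance=0.03):
    """True when the stethoscope icon overlaps the target marker on the
    body map. Coordinates are normalised body-map units; the tolerance
    is illustrative, not a clinical specification."""
    return math.hypot(device_xy[0] - target_xy[0],
                      device_xy[1] - target_xy[1]) <= tolerance

class GuidedExam:
    """Walks the patient through a sequence of target positions; each
    recording is tagged with its position name, as described above."""

    def __init__(self, targets):
        self.remaining = list(targets)  # [(name, (x, y)), ...]
        self.recordings = []

    def current_target(self):
        return self.remaining[0][0] if self.remaining else None

    def place_and_record(self, device_xy):
        if not self.remaining:
            return False
        name, xy = self.remaining[0]
        if within_target(device_xy, xy):
            self.recordings.append(name)  # tag the recording with its position
            self.remaining.pop(0)
            return True
        return False

# Illustrative target positions on a normalised torso map.
exam = GuidedExam([("aortic", (0.55, 0.30)), ("mitral", (0.45, 0.45))])
exam.place_and_record((0.551, 0.301))  # close enough: recorded and advances
assert exam.current_target() == "mitral"
```

In the healthcare-professional-guided mode described above, the doctor would simply mutate the remaining target coordinates in real time.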
Doctor Guided Device Icon (Simple non-AR implementation)
  • An app (including a web) view shows a “body map” of a patient’s upper body (e.g. an outline) with an icon/mark representing the device (e.g. a stethoscope) that is movable by a participant (usually the healthcare provider) in the telemedicine session.
  • the doctor can move the icon/ mark on the body map, such that the patient sees the mark moving and the patient can then place his/her device (in the real world) to overlay the mark on the body map to position the device accurately and correctly.
Augmented reality (AR)
  • an augmented reality view includes an outline of the end-user based on sensor data, such as camera or LIDAR data, and the augmented reality view is displayed to both the patient and healthcare professional at the same time.
  • the digital medical device automatically generates a patient dataset when the digital medical device is positioned at or near the specific location.
  • a patient web-app displays an image or outline of a torso in its video feed.
  • the patient positions him/herself into or within the torso image or outline, and is then guided to place the digital medical device at specific position(s) (such as auscultation positions where the device is a stethoscope).
  • the (e.g. auscultation) positions can be sequentially displayed to the patient after each has been recorded. Alternatively, if a specific sequence has been requested by the healthcare professional, that sequence can be displayed.
  • the positions can be altered or moved by the healthcare professional in real time.
  • Each position can be recorded alongside the audio file as tagged references, to further assist in diagnosis and records.
Patient dataset
  • the patient dataset is a file or stream.
  • the patient dataset is an audio or video file or stream.
  • the patient dataset is an auscultation audio or video file or stream.
  • the patient dataset is data relating to the heart, lung or any other organ.
  • use restrictions include: a time period for accessing the shareable link, a predefined number of times the web link is accessible, authorized third parties, compression format, sharing rights, downloading rights, payments.
  • each patient dataset is associated with a secure unique ID.
  • a blockchain server stores an audit trail of all events associated with each patient dataset.
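The audit-trail bullet above can be sketched as a hash chain in which each event commits to its predecessor, so any later edit to the history is detectable. The event fields are illustrative assumptions, not a claim about the published system's ledger format.

```python
import hashlib
import json

class AuditTrail:
    """Sketch of a tamper-evident, blockchain-style audit trail: each
    event embeds the hash of the previous event."""

    def __init__(self):
        self.events = []

    def record(self, dataset_id, action, actor, timestamp):
        prev_hash = self.events[-1]["hash"] if self.events else "0" * 64
        event = {"dataset_id": dataset_id, "action": action,
                 "actor": actor, "timestamp": timestamp,
                 "prev_hash": prev_hash}
        payload = json.dumps(event, sort_keys=True).encode()
        event["hash"] = hashlib.sha256(payload).hexdigest()
        self.events.append(event)

    def verify(self):
        """Recompute every hash; any edit to an earlier event breaks the chain."""
        prev = "0" * 64
        for event in self.events:
            body = {k: v for k, v in event.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != event["hash"]:
                return False
            prev = event["hash"]
        return True

trail = AuditTrail()
trail.record("ds-001", "uploaded", "patient", timestamp=1.0)
trail.record("ds-001", "viewed", "dr_smith", timestamp=2.0)
assert trail.verify()
```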


Abstract

A telemedicine system includes: (a) a medical device that includes a microphone configured (i) to detect and/or record patient sounds, (ii) to generate audio data from those sounds, and (iii) to send that audio data; and (b) a file handling system configured (i) to receive, download and store the audio data from the medical device, and (ii) to make that file available for near-real-time listening to the patient sounds. The medical device can be a digital stethoscope and the patient sounds are then auscultation sounds, e.g. sounds made by the heart, lungs or other organs.

Description

TELEMEDICINE SYSTEM
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on, and claims priority to, U.S. Provisional Application No. 63/305,482, filed on February 1, 2022, the entire contents of which are fully incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The field of the invention relates to a telemedicine system. Telemedicine systems enable remote diagnostics and clinical caring for patients, i.e. when a health professional and patient are not physically present with each other. Telehealth is generally thought of as broader in scope and includes non-clinical health care services; in this specification, the terms 'telemedicine' and 'telehealth' are used interchangeably and so 'telemedicine' should be broadly construed to include telehealth and hence include remote healthcare services that are both clinical and non-clinical.
A portion of the disclosure of this patent document contains material, which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
2. Description of the prior art
Many appreciate that telemedicine is more than just using Skype®, Zoom®, or Facetime®, so that a doctor can look a Patient in the eyes. For telemedicine to be truly useful, the Patient must be able to collect and transmit a variety of data the healthcare professional needs to assess the Patient’s health. Although telemedicine can easily leverage patient-collectable data from simple and affordable devices, such as blood pressure cuffs, heart monitors, pulse oximeters and thermometers, etc., current solutions fail to provide uniform or easy ways for healthcare professionals to acquire more subjective or useful information from patients without a doctor’s or nurse’s supervision, e.g. listening to a patient’s body sounds (auscultation), taking an EKG or performing an ultrasound. Consequently, the inability of telemedicine platforms to easily interoperate with web-enabled electronic/digital medical devices (“Digital Medical Devices” or “DMDs”, also called simply “medical devices” in this specification) has been an inhibitor of telemedicine advancing beyond the use of simple diagnostic sessions, mental health and dermatology.
In an ever-connected world, with increased fears of infections being spread in doctors’ waiting rooms and hospitals, especially in light of the COVID-19 pandemic, patients and healthcare professionals alike need easier, more secure and/ or interoperable telemedicine solutions.
Current telemedicine devices are not patient centric. They have been designed for healthcare professionals and few have been cleared by the FDA to be sold to or used by consumers. In addition, the end-to-end experience often competes with and/ or is too complex or incompatible to use with existing telemedicine systems.
As demand for telemedicine increases, not least due to COVID-19, there is a drive to reduce the cost of medical devices as well as improve the quality and utility of services. The opportunity to truly democratize telemedicine will be unlocked when medical devices are much more affordable, and easy to use with any telemedicine platform.
Some telemedicine systems enable a healthcare professional to listen to auscultation sounds from a medical device, such as a digital stethoscope. These sounds are generally streamed to enable real-time consultation. Live streaming audio can however lead to dropped or delayed data packets; this can result in doctors being unable to accurately detect heart rhythms (e.g. murmurs) or other critical sounds.
Some telemedicine systems enable the patient or a caregiver to record auscultation sounds in their own time, then send those sounds to the healthcare professional. This type of exam is sometimes referred to as a Store and Forward exam or Asynchronous exam.
SUMMARY OF THE INVENTION
The invention, in a first aspect, is a telemedicine system including:
(a) a medical device that includes a microphone system configured (i) to detect and/ or record patient sounds, and (ii) to generate audio data from those sounds, and (iii) to send that audio data;
(b) a file handling system configured (i) to receive, download and store the audio data from the medical device, and (ii) make that file available for near-real-time listening to the patient sounds.
The medical device may be a digital stethoscope and the patient sounds are then sounds such as clinically relevant auscultation sounds, e.g. sounds made by the heart, lungs or other organs. In conventional digital stethoscope devices, the auscultation sounds detected during an audio/video telehealth session would be live streamed in real-time to a physician or other healthcare professional; live streaming provides real-time audio, but can result in dropped or delayed data packets, with the physician then being unable to accurately detect heart rhythms (e.g. murmurs) or other critical sounds and/or other valuable timing information. This invention ensures that the audio file, because it is fully downloaded before it is played back, is at the highest possible quality as soon as possible, which is especially important for clinically relevant auscultation sounds. The file handling system can download the audio data on demand (e.g. a pull service initiated by the recipient) or the audio data can be downloaded automatically (e.g. a push service) by the system, for example at the end of a user action such as releasing a “listen” button on the system interface, typically with a very small latency of 1 or 2 seconds. Alternatively, the file can be downloaded in the background so as to be available as soon as the healthcare professional clicks on the option to review the file.
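The download-before-playback behaviour described above can be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical, not part of any actual Medaica implementation.

```python
# Minimal sketch of a file handling system that only offers an
# auscultation recording for playback once the complete file has
# been received and stored. All names here are illustrative.
import hashlib


class AuscultationFileHandler:
    def __init__(self):
        self._files = {}  # exam_id -> complete audio bytes

    def receive(self, exam_id: str, chunks) -> str:
        """Assemble the complete file from reliably delivered chunks
        and return a digest that can be used to verify integrity."""
        data = b"".join(chunks)
        self._files[exam_id] = data
        return hashlib.sha256(data).hexdigest()

    def ready_for_playback(self, exam_id: str) -> bool:
        """Playback is only offered once the whole file is stored."""
        return exam_id in self._files

    def fetch(self, exam_id: str) -> bytes:
        if not self.ready_for_playback(exam_id):
            raise RuntimeError("download not yet complete")
        return self._files[exam_id]
```

In this sketch the recording is never exposed as a partial stream: the healthcare professional's playback request fails until the complete file is present, mirroring the "source file" quality guarantee described above.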
In one implementation, the audio data may well be live-streamed, e.g. to enable the healthcare professional to guide the patient to accurately position the microphone system, but is then also sent to the file handling system that downloads the audio data; as with any downloading system, the audio data can be played back once the complete audio data file has been downloaded. The live-streamed audio can also be presented to the physician as a lower resolution and/or preview version of the downloaded data file.
As noted above, the file handling system introduces some minor latency, but ensures that the physician can hear the auscultation sounds as clearly and completely as possible, at a quality that is better than live streaming quality affected by dropped and delayed data packets: the downloaded file is the source file. And the physician can replay the audio file, stop and ‘rewind’ it, tag sections of interest in the file, store it in conjunction with medical notes and share the file. So the downloaded data can be played back/ reviewed once it has been downloaded/ sent to the physician, which can be after an imperceptible period, depending on the speed of the physician’s internet connection.
In one example, the audio data file is sent from the medical device to an intermediate device or web server that implements the downloading of the complete audio data file; alternatively, the downloading can be local to the healthcare professional, e.g. at their local PC or smartphone.
In one example, TCP layer protocol processing and IP layer protocol processing (TCP/IP) is used to send the data file from the medical device to the web server and from the server to the healthcare provider’s device. TCP ensures that the data file is not damaged or lost.
The medical device may include (i) a speech microphone configured to detect and/ or record patient speech and (ii) a second microphone configured to detect and/or record patient sounds and generate an audio dataset from those sounds and send the audio dataset to a file handling system for downloading; and in which the speech microphone uses one channel of a stereo channel pair, and the second microphone uses the other channel.
This enables real-time voice communication from the patient to the healthcare professional at the same time as the audio dataset is being shared with the healthcare professional via the file handling system; the telemedicine system is configured to enable the healthcare professional to select whether to listen to real-time voice communication from the patient or to listen to the downloaded patient sounds (e.g. auscultation audio data) sent via the file handling system in near real-time or at any later time.
The system can be further configured to enable the speech microphone to be muted automatically or manually when the healthcare professional is listening to the auscultation sounds (live or forwarded).
The invention is implemented in a system called the Medaica system, which is described in the following sections.
BRIEF DESCRIPTION OF THE FIGURES
Aspects of the invention will now be described, by way of example(s), with reference to the following Figures, which each show features of the invention:
Figure 1 is a simplified cross section of a digital electronic stethoscope.
Figure 2 is a simplified top view and cross section of a digital electronic stethoscope.
Figure 3 is a simplified diagram of the electrical design of the electronic stethoscope.
Figure 4 is a diagram of some of the key players interacting with the Medaica system.
Figure 5 is a diagram of the Medaica platform.
Figure 6 is a system overview of one implementation of the invention.
Figure 7 is a diagram illustrating a patient’s journey.
Figure 8 is a diagram illustrating a patient’s journey.
Figure 9 is a diagram illustrating a doctor’s journey.
Figure 10 is a diagram illustrating a user’s interaction with the playback page.
Figure 11 shows an example of a patient’s web-app displaying an outline of a torso along with a video feed.
Figure 12 shows another example of a patient’s web-app with the graphical interface of a self-exam heart mode.
Figure 13 shows a patient’s web app displaying a countdown and recording quality window.
Figure 14 shows a patient’s web app displaying the torso outline, which shows when each auscultation position has been recorded successfully.
Figure 15 is a flow diagram summarizing the steps of the self-exam procedure.
Figure 16 shows a patient’s web app displaying a specific exam procedure overlaid over a live video image of the user.
Figure 17 shows a graphical interface of front lungs self-examination including a torso outline of a front torso and the required examination positions.
Figure 18 shows a graphical interface of back lungs assisted examination including a torso outline of a back torso and required examination positions.
Figure 19 shows a graphical interface of a video-positioning mode.
Figure 20 shows a simplified flow diagram illustrating when an exam starts.
Figure 21 shows a flow diagram illustrating the different steps according to a self-examination mode, custom examination mode or guided examination mode.
Figure 22 shows a diagram illustrating the system key components.
Figure 23 shows photographs illustrating several digital stethoscope devices.
Figure 24 shows photographs illustrating a number of digital stethoscope devices.
Figure 25 shows photographs illustrating a digital stethoscope device including a dummy socket (210).
Figure 26 shows top-down, side and bottom up views (respectively, descending) of a digital stethoscope device.
Figure 27 shows a screenshot of a healthcare provider interface.
Figure 28 shows a screenshot of a healthcare provider interface including an auscultation (magnified) section.
Figure 29 shows a patient interface including a temporary graphic of an animated help screen.
Figure 30 shows a screenshot of a web page of the healthcare provider user interface.
Figure 31 shows a screenshot of another web page of the healthcare provider user interface.
Figure 32 shows a screenshot of the healthcare provider interface that enables the healthcare provider to create a store and forward exam.
Figure 33 shows a screenshot of the automated message received by the patient from the healthcare provider.
Figure 34 shows a screenshot of a secure page provided to the patient with the step-by- step exam procedure based on the auscultation positions selected by the healthcare provider for them.
Figure 35 shows an instruction page, as displayed by the patient’s software.
Figure 36 shows a screenshot of the patient’s web interface displaying the first auscultation position required.
Figure 37 shows a screenshot of the patient’s web interface enabling the patient to review the first auscultation position recording.
Figure 38 shows an example of heart and lung body maps, as displayed on screen, in which each auscultation position is shown as a numbered circle.
Figure 39 shows an example of the heart body map indicating that the first auscultation position has been successfully recorded.
Figure 40 shows a screenshot of the automated message received by the patient when the examination procedure is completed.
Figure 41 shows a screenshot of the healthcare provider interface with the patient’s exam status updated on the dashboard.
Figure 42 shows an example of a web interface that enables the healthcare provider to view the auscultation file.
Figure 43 shows a screenshot of the automated message displaying the live exam join details.
Figure 44 shows a message displayed to the patient with a weblink and a secure code to enter the examination web room.
Figure 45 shows a window requesting that the patient enter the live exam access code and continue to the ‘Live Exam’ page.
Figure 46 shows a diagram with an example of the live exam patient view on the left and the healthcare provider view on the right.
Figure 47 shows a screenshot of a page or menu available on the healthcare provider side, in which the healthcare provider has control over the patient’s stethoscope listen/record function.
DETAILED DESCRIPTION
In one implementation of the invention, systems and methods are provided to enable a healthcare professional to conduct a remote exam from any web-enabled audio and/ or video platform, not only simplifying telemedicine consultations that would otherwise require special devices and/ or integration of disparate systems but also increasing the value of the telemedicine consultation. The systems and methods produce unique links that are exchanged between patients and healthcare professionals to either review files, such as but not limited to a patient’s auscultation sounds, or for the patient to participate in a virtual exam. The unique link can also be used to control access rights, privacy and enable additional services, such as but not limited to diagnostic analysis, research and verification. The links can additionally contain rules, such as but not limited to permitting third party access right, sharing/ viewing rules and financial controls such as but not limited to subscription usage and per user limits.
We will now describe examples of problems or challenges that have been addressed by implementations of the present invention, such as interoperability and security and tracking usage/permissions.
Interoperability
Interoperability is an often overlooked but major challenge for telemedicine systems. Although the FDA recognizes the challenges of medical device interoperability (see fda.gov/ medical-devices/digital-health/medical-device-interoperability) and mentions that devices with the ability to share information across systems and platforms can improve patient care, reduce errors and adverse events and encourage innovation, the problems of interconnectivity between digital medical devices (DMDs) and telemedicine systems can be far more subtle yet complex.
Telemedicine platforms do not provide uniform or easy support for multiple DMDs. Likewise, many DMDs will not work with any telemedicine system without extensive (and often expensive) technology integration work. This is clearly a problem for both sides of the healthcare value-chain; healthcare professionals would ideally like telemedicine to support the use of most if not all the tools they use in their typical patient exams. If a telemedicine system doesn’t support all their tools, its utility is limited.
These and other related problems are currently inadequately addressed by:
1) DMD manufacturers supplying their own closed/proprietary telemedicine solutions. However, such approaches are not very scalable, as every doctor or hospital wishing to use that DMD will be unable to do so with their existing telemedicine solution, or they will be forced to have multiple telemedicine solutions, one for every DMD. Furthermore, such solutions compete with telemedicine systems, so are unlikely to be widely embraced by those platforms.
2) Integrating the DMD into each telemedicine platform via the telemedicine platform’s Application Program Interfaces (APIs). The problem with this approach is that, with many hundreds of telemedicine solutions and thousands of specific implementations/configurations, each DMD manufacturer would have a very difficult and expensive task to integrate and maintain the end-to-end experience, having to potentially test and update software and hardware every time each telemedicine system is updated.
In addition, as DMDs leverage mobile technology and use wireless interfaces such as Bluetooth, primarily designed for consumers, they fail to address usability problems for healthcare professionals, including: a) a doctor might not wish to use a private device (their own phone) while examining a patient — that phone might ring with a personal call, and it is not ideal for sharing if they only have one DMD in the clinic; and b) Bluetooth can be difficult to use when there is other radio-enabled equipment or metal objects nearby. Furthermore, if a user is talking with a doctor over any web-enabled video channel and they then turn on a Bluetooth DMD, the most likely scenario is that the DMD will take over the audio channel, resulting in the patient being unable to talk to or hear the doctor (the audio will be routed to the DMD). This is a solvable problem, but it requires an interface that can negotiate between the telemedicine and DMD audio channels and switch between them manually or automatically, in a way that does not confuse the patient or doctor. In a time-sensitive consultation, neither the doctor nor the patient wants to waste time with complex interfaces, and both will undoubtedly be put off the experience if that happens.
Devices that have not considered the above scenarios and related user experience issues from the start of their design are invariably ill-suited to telemedicine.
Security and tracking usage/permissions
With each conventional DMD typically being a closed/proprietary system and with HIPAA (The Health Insurance Portability and Accountability Act of 1996) and GDPR (The General Data Protection Regulation 2016/679) requirements, there is a very complicated and politically charged problem to solve. The Medaica solutions provide an intermediary web-hub that operates separately from the telemedicine platform, and can, in its simplest form, work on any web-enabled system and can be simply accessed by a doctor and/ or patient as a new window alongside their existing chosen telemedicine or video/chat/messaging solution, without requiring further integration. This is further enabled with secure web-enabled links that can grant access rights to connect permitted parties and provide features to securely share, review, authenticate files, export files and set rules over timing, sharing rights and business models, payments etc.
We will now describe the Medaica Ml DMD. Ml is a low-cost digital stethoscope that is aimed at telemedicine applications, rather than as a replacement for traditional stethoscopes. As such, it is aimed at the patient rather than the healthcare professional. A more detailed description of Ml now follows.
Ml Low-cost digital stethoscope
Medaica’s system is designed to be hardware agnostic; however, today, there is no plug-and-play device that will result in the simple functionality and affordability required. To that end, Medaica has produced a simple electronic stethoscope, the Ml. A target retail price is for example under $50. A target material cost (bill of materials) is for example under USD $15.
Figure 1 shows a simplified cross section of Ml including examples of dimensions. Figure 2 shows a top view and another cross section of the device including further examples of dimensions. Ml includes a USB microphone. It is mounted in a rigid molded enclosure. The enclosure is in the basic shape of a stethoscope. The front face has a traditional stethoscope diaphragm sealed onto an acoustic chamber into which a microphone, such as an electret or piezo microphone is mounted. In addition to the stethoscope microphone, a second microphone for patient voice, for detecting whether background noises are too loud and could affect the stethoscope microphone, and for noise cancelling, is mounted facing upwards towards the user.
These two microphones are connected respectively to the left and right channels of the USB stereo microphone channel so they can be processed in parallel.
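The parallel processing of the two microphones over one stereo pair can be illustrated with a minimal sketch. The interleaved frame layout (L0, R0, L1, R1, ...) is the standard interleaved PCM convention; the function name and channel assignment here are illustrative only.

```python
# Illustrative sketch: the voice microphone and the stethoscope
# (auscultation) microphone arrive as the left and right channels of
# a single interleaved stereo PCM stream, so they can be separated
# and processed in parallel.
def split_stereo(interleaved):
    """Return (voice_channel, auscultation_channel) from interleaved
    stereo PCM samples laid out as L0, R0, L1, R1, ..."""
    voice = interleaved[0::2]          # left channel: patient voice mic
    auscultation = interleaved[1::2]   # right channel: stethoscope mic
    return voice, auscultation
```

Once separated, the voice channel can feed the real-time conversation (and be muted independently) while the auscultation channel is recorded and sent to the file handling system.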
On the rear face, a small “I’m alive” LED, a '"now recording" LED, and a single user push button are mounted. The device is washable, so the LEDs and button are water resistant (IPX4) and fabricated as a simple membrane, like many medical and household cookery products. The various electrical items are connected to a USB audio bridge IC mounted on a small PCB. The device is large enough to be comfortable in the hand and therefore may contain a significant amount of empty space. This could be filled with ballast to improve the weight and feel of the device. Alternatively, the space may be used for more electronics components and a rechargeable lithium cell battery in more sophisticated and/or wireless versions. Furthermore, the design leaves the head of the device easily viewable when held by the patient, such that in a telemedicine consultation the patient will be able to be guided, either by the user interface or the healthcare professional, optionally using an onscreen target/ pointer via the Medaica system to guide the patient to move the head of device over specific auscultation target areas.
Ml Connectivity
The initial design for Ml is a USB C wired design. Additionally, the device may also support Bluetooth (BT) connectivity. Adding BT connectivity would enable connectivity to supported device platforms and would add the following components: BT transceiver, ISM band antenna, microcontroller capable of implementing the BT stack and application-level encryption, power management device and battery, plus some more UI elements and potentially an MFi (Made for Apple® iPhone®) chip. With USB 2 connectivity only, Ml is compatible with a number of platforms or devices, such as: Windows laptops and PCs, Apple laptops and PCs, Android tablets and some phones (with a USB 2 to USB C adapter, which is readily available) and Apple phones with a USB C to Lightning converter and MFi device.
Ml Mechanical design
The main housing is formed from a target maximum of two injection molded plastic parts. These parts are molded from high density medical grade plastic and have sufficiently thick wall sections as to be acoustically stable. These plastic parts may be finished or plated to give a comfortable and durable finish.
Ml Electrical design
The electronic design is based around a standard USB to audio bridge IC (e.g. CMedia CM6317A). The Left and Right channels are used for the voice and auscultation microphones respectively. Figure 3 shows a simplified diagram of the electrical design.
Ml UX philosophy
Like the hardware, the software must be simple. The website and mobile app can be used by users in “Guest” mode without any user login or sign up. This minimizes additional UX steps which could be life-saving if the user has an emergency and wants the fastest route to getting advice. The website and/ or mobile app recognizes that the Ml device is plugged in (and will indicate if it is not) and can then guide the user on next steps.
When the Ml device is plugged into a computer’s or mobile device’s USB port, a visual indicator, such as the LED glowing in white, indicates that the Ml device is correctly powered and that a data connection exists with the computer or mobile device, i.e. the stethoscope is functioning correctly and is ready for use.
Ml Users
Users of medaica.com include, but are not limited to:
Patients at home, such as consumers who directly connect Ml to PC, Mac or iOS or Android platforms to record heart and/ or lung sounds.
Healthcare practitioners working remotely from patients. They have access to Ml files and can listen to them asynchronously or live on any web-enabled platform.
Researchers/analysts/specialists, subject to access rights to Ml files in order to diagnose, tag sounds, conduct research, teach, and/or to help ML/AI systems learn.
Hospitals/doctors requiring hosted solutions, such as healthcare practitioners who desire access rights to Ml files within their own networks and security requirements.
Assisted living/nursing homes where doctors may visit on a periodic basis, but can still be informed via forwarded auscultation data.
Platform overview
Figure 4 shows a diagram illustrating the different players interacting with the Medaica system. The Medaica system offers a number of product differentiation features, including but not limited to:
For Healthcare professionals:
• Interoperability: Plug and Play solution, works with any existing telehealth system without the need to change systems, workflow, processes or procedures.
• Adds value to telehealth exams by adding more capabilities (extending the clinical exam to the patient’s home).
• No proprietary diagnostics: focus is on simplicity and on end-to-end utility first, not on tech or artificial intelligence (AI).
• Diagnostic analysis, including AI diagnosis, will be provided, including with 3rd party non-proprietary solutions as an additional service.
For Telehealth platforms and other healthcare service providers/developers/device companies:
• Extend platform utility with new services offerings.
• Rapid integration and an optional integration via APIs.
• Increase value to all users.
• Data = value = improved/ stickier services and capabilities.
• Enable incremental revenue (by extending service and/or reach capabilities).
• Increase clinical data, insights.
• Roadmap for 3rd party hardware devices, AI services and EMR providers/integrators.
For Patients
• A simple device: e.g. No Bluetooth™ to pair, no battery to charge.
• Virtual clinic (on demand exam services) added to existing Telehealth experience.
• Works with any video /messaging platform and device (e.g. Zoom™, Facetime™, Teladoc™, on a PC, MAC™, iPhone™, Android™).
• No medical knowledge needed to operate device or software.
• No subscription fees (business model is for exams to be charged via telehealth/data services).
• Designed for consumer use.
• As safe and easy to use as other consumer medical devices — e.g. blood pressure devices and PO2 devices.
• Minimal information required from user (e.g. can use system as guest).
Figure 5 shows a diagram of the system’s platform. At the patient side (51), a patient (52) connects a Medaica Ml stethoscope to a USB port of the patient’s Web-connected mobile or desktop client (53). The patient enters the Medaica Patient Side (51). The software recognizes Medaica Ml UDID and enables recording of auscultation sounds.
In Live mode, a health care professional (HCP) generates and sends an exam room passcode to the patient. Once the patient enters the passcode, the HCP can direct the patient and initiate recording.
In Store and Forward mode, the patient records auscultation sounds, guided by UI and can then send a unique link to those sounds to the Healthcare Professional (HCP).
Auscultation sounds are transmitted via Medaica Servers (54). Medaica Servers (54) include a file handling system, such as a store and forward system, where an auscultation audio data file is downloaded and can be, like any conventional file download service, played back once the complete data file has been downloaded. The file handling system of the Medaica Servers (54) introduces some minor and potentially imperceptible latency, but ensures that the physician can hear the auscultation sounds as clearly and completely as possible, at a quality that is better and more dependable for diagnosis than live streaming quality, which can be affected by dropped and delayed data packets.
TCP/IP may be used to send the data file from the medical device to the web server and from the web server to the healthcare provider’s device. This ensures that the data file is not damaged or lost.
The auscultation sounds web-link is sent to the HCP side.
At the HCP side (55), the HCP (56) visits the Medaica HCP Side. In Live mode, the HCP generates and sends an exam room passcode to the patient. Once the patient enters the passcode, the HCP can direct the patient and initiate recording.
In Store and Forward mode, a link to the patient’s sounds is sent to the HCP for review.
The HCP can choose to listen to auscultation sounds filtered or unfiltered and share, comment and/ or export sounds, according to permissions.
Figure 6 illustrates a further example of the interactions within the Medaica system. A patient (100) is located at a remote location from the health care professional HCP (103).
101 is a web-enabled electronic medical device used for auscultation of body sounds.
102 is a cable connecting the electronic medical device (101) to either a web-enabled computing platform (104) or mobile phone (105).
103 is a healthcare professional such as but not limited to a doctor (and interchangeably referred to as a specialist and/ or clinician in this document) at a different location than the patient.
104 is a web-enabled computing platform such as but not limited to a laptop.
105 is a mobile phone (or other such mobile computing platform), connected to the Internet via cellular or other wireless interconnectivity such as WiFi.
106 is a website (in this embodiment, medaica.com) for recording, storing and controlling access to patients’ uploaded files, such as but not limited to auscultation files. This website can be viewed on any web-enabled devices such as the patient’s laptop (104) or mobile phone (105) or the healthcare professional’s laptop (114) or mobile phone (115).
107 is an example sound file recording via a patient’s web-enabled electronic medical device. Sound file 107 is processed by a file handling system that downloads the complete file before making that file available for playback.
108 is a web-enabled link controlling access to a patient’s auscultation files.
109 is a web-enabled video or telemedicine site. This web-enabled site can be viewed on any web-enabled devices such as the patient’s laptop (104) or mobile phone (105) or the healthcare professional’s laptop (114) or mobile phone (115).
110 is a headset and mic set enabling better listening/ talking experience for the healthcare professional.
111 is wireless connectivity for the electronic medical device, such as but not limited to Bluetooth or WiFi.
112 is cellular connectivity to/from the mobile phone to the cellular network (118).
113 is a cable connecting the headset and mic (110) to either the doctor’s web-enabled computing platform (114) or mobile phone (115).
114 is a web-enabled computing platform such as but not limited to a laptop at the doctor’s location.
115 is a mobile phone connected to the Internet via cellular or other wireless interconnectivity, such as WiFi at the Doctor’s location.
116 is wireless connectivity for the healthcare professional’s headset and mic (110).
117 is the internet.
118 is a cellular network, connected to the internet (117).
119 is a record/play pause/ stop example for recording and reviewing a sound file (107).
Examples of user journeys are now described.
Use Case 1 - Store and Forward (See Figures 6 to 10)
As shown in Figure 6, the Medaica website (106) displays simple instructions for the user (100) to connect and record auscultation sounds from the Ml device (101).
1. “Plug Ml into the USB port of your <PC, Mac, iPhone or Android>”.
When the Ml device is plugged into the USB port of the web-enabled PC or mobile device (104 or 105), the Ml LED is on constantly, medaica.com recognizes it and displays an icon showing it is plugged in and guides the user to the next steps. (If the Ml device is plugged in already, then #1 doesn’t display).
Alternatively, the device (101) may be wirelessly connected, using for example Bluetooth, to the web-enabled PC or mobile device, which consequently would provide additional steps in the user journey.
2. “Using the Exam Positions diagram (not shown), place Ml on a position, then press the Record Button on Ml.”
Alternatively, a start/ stop record button (119) is provided on the website.
Alternatively, the user is guided into position via an Augmented Reality (AR) application.
The Ml device is recognized by the web-enabled platform’s camera (either directly via its shape, color etc., or via an identifying mark/code on Ml). Once recognized by the system, the system shows the user when Ml is over a position to collect sounds, and either auto-starts recording (optionally first showing a countdown) or highlights a start/stop recording button.
The User places the Ml device on a position and presses the Ml record button. Ml LED displays red flashing.
A timer on the website UX displays a countdown (say 20 secs). (This could be greyed out if the Ml device is not plugged in, to help the user understand that the options will be available after a user action.)
Timer displays “Done” at the end of the countdown or when the user presses the Ml Record Button again.
3. Sound file (107) icon displayed with: a web-enabled link (108), which the user can just copy and paste into a telemedicine session, email or text message (the term ‘Telemedicine’ refers to any telemedicine system, such as Teladoc™ or American Well™, including consumer video conferencing such as but not limited to Facetime™, Zoom™ etc.); a Play button to review/erase (119) the recording and go back to #2; and a Send button (not shown) to send a web-enabled link (108) to the sound(s) wav file.
4. User presses Send Button
A window opens showing additional fields for the user to add (for example):
The doctor’s (103) email address (the user is unlikely to have the doctor’s phone number, but this could be an additional field),
The user’s name, and user’s email. (The patient’s information is required here so that the doctor knows they have received a link from a specific patient e.g. John Smith. Also required when multiple users use the same device to help Medaica know where to store data and create different user pages.).
If the user is sending the link via an email, the user may need to add a unique username (if they have not already) and their email (in case the doctor needs to communicate with them). If the user has already added a name or email, then the system will remember that name (via the UDID) and could provide prompts to edit that name/email, add more details, or associate a new file with a new user if being used by multiple users on same device e.g. a family, which the system could confirm when it sees different user names against the same UDID.
In another embodiment, the user might have a unique secure name that only the doctor or the doctor’s system knows (such as but not limited to a patient record number, enabling the patient to exchange details without the Medaica website having the identity of the patient).
In yet another embodiment, the system could enable a blockchain feature that further secures the patient’s details, and would also provide the ability to set further access rights as well as provide audit trails for users to see who has accessed their details and when. In such an embodiment, a “health wallet/pass” would enable the patient to be the secure owner of their own health data, providing not only access to it, but also controlling who, where and when they give such access, and enabling full auditable data if they (or other parties) need proof of info/access.
If the user selects Send without adding their minimum details, the system will prompt them to add an identifying name. The identifier need not be unique as the actual unique identifier is the UDID + the user name. Only if a user creates a new user with the same name will the system protest.
The system can further require the user to confirm if they are the ONLY user of the device, thereby enabling the system to associate new or different users with a device (e.g. family members using the same device) AND a user using more than one device.
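The composite-identity scheme described above (UDID plus user name forming the unique identifier) can be sketched as follows. This is an illustrative sketch only; the `UserRegistry` class and its methods are hypothetical and not part of any published Medaica API.

```python
# Hypothetical sketch: users are identified by the composite key
# (UDID, name), so several family members can share one device.

class UserRegistry:
    def __init__(self):
        self._users = {}  # (udid, lower-cased name) -> user record

    def register(self, udid: str, name: str) -> dict:
        key = (udid, name.lower())
        if key in self._users:
            # Same name on the same device: the system "protests".
            raise ValueError(f"User '{name}' already exists on device {udid}")
        record = {"udid": udid, "name": name, "files": []}
        self._users[key] = record
        return record

    def users_on_device(self, udid: str) -> list:
        # Several users may share the same UDID (e.g. a family).
        return [rec for (u, _), rec in self._users.items() if u == udid]
```

Only re-registering the same name against the same UDID raises an error; the same name on a different device is a different composite key and is accepted.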
Optionally, the SEND window could also have options for a receipt checkbox. Selecting the receipt checkbox enables the user to get a notification that the file has been reviewed (this gives Medaica another chance to get the user’s email address and can also give additional trust to the user that their file has been accessed by the Doctor and/ or not accessed by others).
Optionally, the web-enabled link could have features (like some URL shorteners) that limit the number of times it can be used or set an expiry time. This gives Medaica opportunities, for example, for the doctor to forward the same web link to another doctor as a premium feature, or for the user to limit multiple access and to have the file “expire” for additional security.
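A link with a use limit and an expiry time, as described above, might be modelled along these lines. The `ExpiringLink` class and its default parameters are assumptions for illustration, not a specified implementation.

```python
# Illustrative sketch of a web-enabled link that "expires" after a set
# time or number of uses, similar to some URL shorteners.

import time

class ExpiringLink:
    def __init__(self, target_url: str, max_uses: int = 3,
                 ttl_seconds: float = 3600.0):
        self.target_url = target_url
        self.max_uses = max_uses
        self.expires_at = time.time() + ttl_seconds
        self.uses = 0

    def resolve(self) -> str:
        """Return the target URL, or raise if the link has expired."""
        if time.time() > self.expires_at:
            raise PermissionError("Link has expired")
        if self.uses >= self.max_uses:
            raise PermissionError("Link use limit reached")
        self.uses += 1
        return self.target_url
```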
5. The doctor (103) receives either: a templated email/text from the user via medaica.com containing the web-enabled link to the patient’s sound(s) file which contains the embedded UDID and the patient’s name (or other method of identifying the patient) and email, OR a web-enabled link (108) in their telemedicine session, pasted in by the user.
The Doctor could also receive a direct email/text from the user with the web-enabled link, which behaves the same way as the web-enabled link in the Telemedicine session.
Whether in the telemedicine session (109), text or email, the web-enabled link takes the doctor directly to the sound(s) file webpage (106) where he/ she can listen to the file. Alternatively, the weblink structure can present the file to the doctor on the current webpage being used.
The system can also have an option of generating a web-enabled embed code which, when pasted into the telemedicine system, displays the Medaica “player” with the sound (or other) file(s). In such an embodiment, telemedicine systems could enable the doctor to review the sounds and/or perform a virtual exam without leaving the telemedicine website.
Alternatively, the system might only grant access to the file in a compressed format which would typically be good enough (e.g. CD quality) for most professional use.
However, the uncompressed (RAW) file could be more useful to certain users and applications, for example, for machine learning, AI or other research functions, in which case, that file could be made accessible to authenticated users via their access rights. Alternatively, because the web-enabled link was sent by the user, there is implicit permission from the user for the doctor to access their file, and anyone else reviewing that file does not risk leaking private data, as only the user’s sound file is accessible.
Use Case 2 - Live Stream/Virtual Exam (See Figures 6 to 10)
A virtual exam is typically initiated by the doctor (rationale: otherwise the doctor would be waiting for the user, which is not only less efficient for doctors, but also for the user), via their telemedicine platform of choice (109) and does not require any additional tools or software within their telemedicine platform to operate.
The user (100) has simple instructions from multiple channels: a) medaica.com, b) the M1 device and c) if the M1 was sent to them, via Telemedicine Platform text/email.
1. The doctor (103) visits medaica.com (106) and clicks on the 'clinician’s tab' and can either: click a secure/ temporary pass or enter his/her login/password details.
2. Within the clinician’s tab, the doctor selects “Exam Room”
The Exam Room displays two fields: a room code with a <6> figure random number and a blank 'Doctors’ Invite' code field.
The Exam Room could display reminder text re the patient: e.g. “Ask your Patient to follow these 3 easy steps: 1) Plug in their M1, 2) visit medaica.com, then 3) Enter the 6 figure Exam Room Code under the Exam Room tab. When your Patient does that, they will get a Doctor Invite Code for you.”
3. The patient accesses medaica.com and clicks on the Exam Room tab
The patient sees two blank fields, an Exam Room field and a Doctor’s Invite field.
4. Patient types in the Exam Room number given by their Doctor.
5. The Doctor Invite Code field then displays a <6> figure random number which the patient tells the doctor. Once the doctor types the invite code into his/her screen, the doctor and the patient are in same Exam Room.
The doctor can now listen to the M1 live or nearly live (where the auscultation sounds are not live streamed but instead processed at a file handling system that fully downloads the relevant audio files before enabling them to be played back). Ideally the doctor listens through high quality over-the-ear headphones (110) connected via either wireless (112) or wired (113) such that he/she can hear lower frequency sounds and will guide the patient accordingly. The doctor’s headphones (110) can also be a suitable electronic stethoscope, capable of listening to recorded files on a web-enabled device.
6. If the doctor wishes to explicitly record the sounds from 'Livestream Mode', he/she can select a 'Record' function. Typically, the system would do this automatically such that the doctor has immediate use of the file and can then, if he/she desires, delete the file.
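The Exam Room pairing in steps 1 to 5 above can be sketched as a simple two-code protocol: the doctor opens a room and gets a room code, the patient enters it and receives an invite code, and the doctor types the invite code back in. All class, function and field names here are hypothetical illustrations.

```python
# Minimal sketch of the two-code Exam Room pairing (steps 1-5 above).
# The 6-figure random codes and matching logic are assumptions.

import secrets

def make_code() -> str:
    """Generate a 6-figure random code, e.g. '048213'."""
    return f"{secrets.randbelow(1_000_000):06d}"

class ExamRoomServer:
    def __init__(self):
        self._rooms = {}  # room_code -> {"invite": str|None, "paired": bool}

    def open_room(self) -> str:
        # Step 2: the doctor opens an Exam Room and gets a room code.
        room_code = make_code()
        self._rooms[room_code] = {"invite": None, "paired": False}
        return room_code

    def patient_enters(self, room_code: str) -> str:
        # Step 4: the patient types the room code and receives a
        # Doctor Invite Code to tell the doctor.
        room = self._rooms[room_code]
        room["invite"] = make_code()
        return room["invite"]

    def doctor_joins(self, room_code: str, invite_code: str) -> bool:
        # Step 5: the doctor types the invite code; if it matches,
        # doctor and patient are in the same Exam Room.
        room = self._rooms.get(room_code)
        if room and room["invite"] is not None and room["invite"] == invite_code:
            room["paired"] = True
            return True
        return False
```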
We now list some further features:
1. Other DMDs including other digital stethoscopes, but also devices that record medically-related audio, image or video or other media types that would typically require interpretation by a healthcare professional, can send their files to the Medaica website. These files (which can be processed by the file handling system to download the files) are then able to be accessed by healthcare professionals using the same weblink (i.e. web-enabled link) methods described. The advantage of doing this for the DMD provider is that they do not need to separately integrate their devices into a telemedicine system and the advantage for the healthcare professional is that they can now use multiple DMDs within their chosen telemedicine system.
2. Related to #1, in an Internet of Things (IoT) scenario, it is envisaged that multiple devices will be able to constantly monitor events such as laboured breathing, a baby stopping breathing, a patient’s cough etc. This type of background monitoring is similar to what a device such as Amazon’s Alexa does when it is constantly listening for a user's key commands. In the IoT scenario, these devices, once they detect a potential health-related issue, can then send the files (such as but not limited to an audio file), together with some device and/or patient information, to the Medaica website, where a healthcare professional can review the files to decide if further action is required.
3. It will be appreciated by those skilled in the art, that once such a system has sufficient market acceptance, it can also act as a central hub for research and other related services. Examples include 3rd party diagnosis (which could be via human or machine techniques) and medical insurance intelligence (which can evaluate macro trends to fine-tune products and services). For example, looking at some or all heart-related conditions in a specific geographic location, over a specific age group, over a period of time, could help reveal trends that could be used to pre-emptively prevent patients needing more acute care.
4. The idea of a data avatar is presented to help interested parties (such as but not limited to researchers, insurance providers etc.) generate a generic patient from otherwise private pieces of data. By doing this, the recipient of the data avatar need not know that they have specific data about a patient; rather they have pieces taken from perhaps hundreds, thousands or millions of patients, to create the “typical” patient to be reviewed. The system generating such a data avatar can therefore serve the recipient without the recipient needing to browse through more complex database structures. The resulting file could also contain information that it has data from x number of patients in each of the query categories, which could further give a degree of confidence to the recipient. It is further understood that conducting clinical studies and/or other patient-related studies can be expensive and slow, so such a system could provide a dramatic advantage to the recipient. Furthermore, such a system could not only provide a specific output (the data avatar) but could be configured to require a specific “health query language” as an input to query anonymous bulk user data. This would not only enable the system to provide the appropriate results, but also standardize how multiple users, vendors and models can be uniformly addressed.
There is further potential for such a system to prevent exposure of private data (under HIPAA or GDPR or similar) to outside parties and yet provide compliant/ secure results.
5. In another embodiment, such a system could also provide reputational data to patients (or other interested parties). For example, if a file is reviewed by a 3rd party for a doctor or patient, the system can know that the reviewer has reviewed x files and achieved an accuracy rate of x% (determined by the number of times other reviewers have agreed or disagreed with the first reviewer, or other such techniques). Whilst such methods are known in social media (for example, a product review can display the reviewer’s record of reviewing products, an Uber driver has a reputational score built from multiple rides etc.), these techniques have not been used or able to be provided in healthcare. By providing a system that is not only agnostic to devices and telemedicine systems, but also can support patients using the system in “guest mode” and can provide data avatars, the system is predisposed to being a more trusted interface for all users.
6. In a further embodiment, the website and/or application provides a method of helping the patient correctly position the DMD by providing an Augmented Reality (AR) composite video of the patient and the device. The device is recognised either by its unique shape or a code (or other recognised methods) that the camera can identify. The system traces the outline of the patient and, with the identified DMD, can now direct the patient to move the device to a desired position on the patient. Such a system has additional advantages for educational purposes.
7. In another embodiment, the user sees an outline of a human torso in the video feed, in which the user best positions him/herself. The outline also displays an auscultation target icon. The user moves the stethoscope head to be within the auscultation target and can then start recording the auscultation sound. Similarly, this embodiment can be leveraged by the healthcare professional on the other side of the video feed, by moving the auscultation target to sites that he/she desires to listen to. That target/pointer could also be semi-transparent and/or the same shape as the stethoscope head to make it easier for the patient to position the stethoscope “virtually” under the pointer and over the auscultation site. Furthermore, these sites can be tagged alongside the recordings to aid either store-and-forward diagnosis or archive notes, as each recording will display the target location on the patient’s body where it was captured.
8. In yet another embodiment, the user has the option of a bulk recording then upload function - scenario: nurses or doctors travelling around collecting sample files, then uploading multiple files once they get back online.
In still another embodiment, the system can enable scheduling of auscultation exams, for example, twice a day for 10 days at positions Heart 1 and 2. Such a system can then be used to confirm adherence as well as generate more continuous health data. This could be particularly helpful for applications concerned with Remote Patient Monitoring, as well as Hospital At Home applications and/or preventing/reducing hospital re-admissions.
The interconnected web-app may guide the user to perform a number of examinations, such as:
• Self-examination for the heart, via a number of auscultation (body sound) positions on the chest.
• Self-examination for the lungs (front), via a number of auscultation positions on the chest and 2 on the side.
• Assisted examination, via a number of positions on the chest, on the side and on the back.
• Live examination by a healthcare professional.
Self-examinations and assisted examinations can be done at any time, recording body sounds such as heart and/or lung sounds and then sending those results to a healthcare professional.
Alternatively, the Ml digital stethoscope can be used during a live telehealth session with a healthcare professional listening to heart and lung sounds live, guiding the user, and being able to record auscultation data together with any notes in their electronic medical records, subject to HIPAA compliant permission. This type of examination is called a live examination.
Figure 11 shows an example of a patient’s web-app displaying a mirrored view of an outline of a torso along with a video feed. In this example, the body map is mirrored for interfaces for self-examinations, using a mobile or desktop screen and/ or camera for assistance. It will be appreciated that some embodiments of this invention do not require a mirrored version of the body map. The outline of the torso may also be displayed together with guidelines to help the patient find a specific position to place the digital medical device. The current position of the digital medical device (1) may be displayed alongside previous auscultation positions for which measurements or patient data has been generated. The next sequence of auscultation positions needed may also be displayed, either from a pre-programmed sequence or from the direct guidance of a healthcare professional. The auscultation sites can be moved by the healthcare professional in real time. Each location can be recorded alongside the audio file as tagged references to further assist in diagnosis and records.
Figure 12 shows a further example of a patient’s web-app displaying a self-examination heart mode including a mirrored body map and auscultation (body sound) positions on the chest. In this example, the body map is mirrored for interfaces for self-examinations, using a mobile or desktop screen and/or camera for assistance. It will be appreciated that some embodiments of this invention do not require a mirrored version of the body map. The self-examination displays auscultation positions that a user should be able to reach without assistance. Optionally, the user is also able to select a required assisted examination option. In this example, when selecting a self-examination, a body map shows the body sound (auscultation) positions as if the user was looking in a mirror. Each auscultation position is shown as a numbered circle with the current position to be recorded highlighted, such as the first position.
An example of the self-examination heart procedure guidance for a user using a digital stethoscope is now described:
• A graphical representation of the specific examination procedure is displayed. It displays a torso outline including a sequence of required auscultation positions. The torso graphical representation is configured to guide the patient to use the digital stethoscope M1 at the required auscultation positions for a specific duration and frequency.
• Place the M1 on the highlighted auscultation position (121), either directly on your skin or over light clothing such as a shirt.
• Press either the M1 Start button or the Start button on the screen. Make sure there is no background noise, do not talk and do not move the stethoscope during recording. The M1 LED and the highlighted auscultation circle (121) will flash for 20 seconds indicating recording is in progress.
• During auscultation recording, a countdown and recording quality window (See Figure 13) displays the level of the recording of body sounds in relation to external ambient sounds. The level of sound received by the body microphone and the level of sound received by the ambient microphone are graphically represented. The sound level detected by the microphones is also associated with a specific color. As an example, in 131, the ambient noise displayed on the right of the countdown (133) is grey and indicates no ambient noise. In 132, the ambient noise (134) is displayed in red indicating that it is too loud to achieve a good auscultation recording. If the external sounds are too loud for a good auscultation recording, the recording will stop and a “silence” icon will be displayed.
• As seen in Figure 14, the mirrored torso outline shows when each auscultation position is recorded successfully. In this example, the body map is mirrored for interfaces for self-examinations, using a mobile or desktop screen and/ or camera for assistance. It will be appreciated that some embodiments of this invention do not require a mirrored version of the body map. For example, the previously recorded position turns a different colour, such as green and displays a “tick” (141). The next recording position is then indicated (142).
• The graphical representation is then configured to indicate when the exam is complete. As an example, all completed auscultation positions are displayed green.
• The results can then be sent as a file to a healthcare professional by selecting SEND. The user will get notified once the exam has been reviewed. This can be an instant notification when the healthcare professional has opened and closed the file, or it can be an email confirmation sent to the user including any remarks from the healthcare professional.
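The recording-quality behaviour described in the countdown window above (grey when there is no ambient noise, red when ambient sound is too loud, and stopping the recording with a “silence” icon) could be implemented along these lines. The RMS computation and the threshold values are illustrative assumptions only.

```python
# Hypothetical sketch of the ambient-noise check during auscultation
# recording: compare the ambient microphone level against thresholds
# and map it to the colour shown next to the countdown (Figure 13).

import math

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def ambient_status(ambient_samples, quiet=0.01, loud=0.2):
    """Map the ambient level to a display colour; thresholds assumed."""
    level = rms(ambient_samples)
    if level < quiet:
        return "grey"   # no ambient noise (133 in Figure 13)
    if level < loud:
        return "ok"     # acceptable background level
    return "red"        # too loud for a good recording (134)

def should_stop_recording(ambient_samples):
    # If external sounds are too loud, recording stops and a
    # "silence" icon is displayed.
    return ambient_status(ambient_samples) == "red"
```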
Figure 15 is a flow diagram summarizing the steps of the self-examination procedure for recording phonocardiograms (PCG) from different auscultation positions using a digital stethoscope.
Figure 16 shows a graphical representation of the specific examination procedure overlaid over a live video image of the user (151). The live feed of the user may include the body shown as transparent or semi-transparent, with the rest of the image masked, opaque or solid to avoid the background interfering with the live video image of the user. A torso outline is displayed (152) alongside the current auscultation position of the digital stethoscope (153) and specific auscultation positions (154, 155) required by the exam procedure. In this example, the body map is mirrored for interfaces for self-examinations, using a mobile or desktop screen and/or camera for assistance. It will be appreciated that some embodiments of this invention do not require a mirrored version of the body map. The user positions him/herself inside the torso outline and can then accurately position the M1 over the required auscultation position. The current auscultation position can flash on/off so that when the M1 is in position covered by the circle it is not a confusing image for the user.
Additionally, the software may recognize a symbol on the M1 head and when the user moves the M1 to the correct position, the software can prompt the user accordingly (and/or autostart recording). This can be implemented together with augmented reality techniques.
Figure 17 shows a graphical interface of front lungs self-examination including a mirrored image of a torso outline of a front torso and required examination positions. For lung sound recording, two full deep, slow breaths should be captured.
Figure 18 shows a graphical interface of back lungs assisted-examination including a torso outline of a back torso and required examination positions.
Figure 19 shows a graphical interface of a video-positioning mode. Selecting 'Video Positioning' mode first displays a window asking for permission to use the video camera. For privacy, video-positioning mode is only used for guiding recording positions without recording any video. With video-positioning mode on, the mirrored live video feed of the user is displayed alongside an outline of the body (181) and the current auscultation position displayed as a flashing circle (182). The auscultation icon might need to alternately flash black/white (or other contrasting colors) to make sure that whatever the user is wearing is not confusing the image. The torso outline may also need to have a black/white stroke to make sure it is visible. When the user positions himself inside the body map and holds the M1 at the flashing auscultation position, recording is started when the user pushes either a start button on the digital stethoscope or an icon or symbol on the graphical interface.
Other features include:
• The countdown/record window is automatically displayed (or pops up), such as when the M1 is in position and the user is still and quiet.
• A symbol on the M1 head is recognized by the image processing software and when the user moves the M1 to the correct position, the software prompts the user accordingly and/or auto-starts recording.
• The camera detects an outline of the user and creates a specific body map. This is done by accessing a library of auscultation positions to fit specific body types, or by re-calculating the positions based on the detected outline and specific exam positions.
• The user is able to select a body map based on nearest fit.
• The software automatically selects the nearest fit body map from a library based on the video feed of the user.
For a live exam, a healthcare professional may send the user a link to a virtual room, such as by email or via a text message or any other messaging application. Clicking the link will take the user directly to the virtual exam room. As shown in the flow diagram of Figure 20, if the M1 device is not plugged in, or not recognized by the system, an onscreen message will be displayed such as “plug in your M1 device”. When the healthcare professional is present, the virtual room exam displays his/her name. The healthcare professional then guides the user through the auscultation positions, or moves the auscultation positions to where he/she wants to listen. The healthcare professional is able to control when the M1 starts recording each body sound.
Figure 21 shows a flow diagram summarising the different steps according to a selfexamination mode, custom examination mode or guided examination mode.
Figure 22 shows a diagram summarizing key elements of the system.
Figures 23 and 24 show photographs illustrating a number of digital stethoscope devices. The designs are user-friendly, easy to grip and include at least one button.
As shown in Figure 25, the cable plug can be inserted into a dummy socket (210) in the unit to fold the cable in half when the device is unplugged. This makes the cable much less unwieldy, and easier to stow in a bag.
Figure 26 shows top, side and bottom views of another example of a digital stethoscope device.
Figures 27 to 29 show further examples of healthcare provider and patient interfaces. Figure 27 shows a screenshot of a healthcare provider interface, which enables the healthcare provider to analyze, record or edit auscultation audio files. The user interface may also include for each patient: medication, medical history, previously recorded auscultation audio files, healthcare provider’s notes or any other information associated with the specific patient. Figure 28 shows a screenshot of the healthcare provider interface including an auscultation (magnified) section (281). Figure 29 shows a patient interface including a temporary graphic of an animated help screen (291). The patient interface includes a graphical body outline with a number of target positions at which the medical device is to be positioned by the patient.
High level healthcare programming environment
Instructions, devices and notifications can be "chained" together to help patients perform specific healthcare management protocols. For example, rather than a patient taking a general reading such as recording a heart or lung sound, the system can guide the patient to take specific tests with a specific frequency and can optionally send reminders to the patient as well as updates to the patient's healthcare provider(s) and/or insurer or other parties with appropriate permissions. For example, such a system could guide the patient to use a digital stethoscope to "record heart sounds in Position 3, twice a day, for seven days". In such an example, Position 3 could be a specific instruction with a diagram or video. That specific instruction, frequency and duration can have notifications such that the user is sent reminders, and the healthcare provider is sent results.
One example of the value of such a system: a hospital could for example, set up a "Patient Release Protocol" as a one click "applet" (sending the patient a link to the applet so the Doctor will know if/when the patient is following the release procedure and recovering on plan). Such an “applet" could be different for each healthcare provider, patient and/ or condition and could provide methods for the healthcare provider to brand the experience as well as integrate the outputs into their healthcare records.
When new devices, instruction modules or features are added to such a system, it adds utility to its users. For example, a patient could be taking their temperature and recording heart sounds, capturing the data for the doctor in a fairly automatic and regimented method. 3rd parties could develop simple branded applets. Applets could also be protocols for clinical trials and/or other useful applications.
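A chained protocol such as "record heart sounds in Position 3, twice a day, for seven days" can be represented as an ordered list of steps, and a "Patient Release Protocol" applet is then just such a list. The field names below are assumptions for illustration, not a published Medaica schema.

```python
# Illustrative sketch of "chaining" instructions, devices and
# notifications into a healthcare protocol applet.

from dataclasses import dataclass, field

@dataclass
class ProtocolStep:
    device: str            # e.g. "M1 digital stethoscope"
    instruction: str       # e.g. "record heart sounds in Position 3"
    times_per_day: int
    duration_days: int
    # Parties to notify: reminders to the patient, results to providers.
    notify: list = field(default_factory=lambda: ["patient", "provider"])

    def total_readings(self) -> int:
        """Expected number of readings if the patient fully adheres."""
        return self.times_per_day * self.duration_days

# A hypothetical one-click "Patient Release Protocol" applet.
release_protocol = [
    ProtocolStep("M1 digital stethoscope",
                 "record heart sounds in Position 3",
                 times_per_day=2, duration_days=7),
]
```

Comparing actual readings received against `total_readings()` is one simple way such a system could confirm adherence, as discussed above.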
Telemedicine Device including a ‘room’ or 'patient' microphone
Adding a 'room' or 'patient' microphone (mic) to a telemedicine device allows the patient to continue to communicate with their healthcare provider. As browser security models only allow for a single audio device to be used at any given time, it is, in the prior art, necessary to switch the audio source in the browser. For example, if the patient is on a laptop and using its default mic, they would have to switch the browser audio source to the telemedicine device to perform an exam that required a digital stethoscope microphone. This would cause the user to lose the connection with the built-in mic and their means of verbal communication with their healthcare provider. Adding a second 'room' or 'patient' mic to such a telemedicine device enables the patient and healthcare provider to maintain communications and still capture exam sounds.
Initially, the audio will be delivered over a stereo channel but the app will separate the audio signal into two separate mono feeds and will process each differently. It is noted that such an application can be delivered as a web app (i.e. thin-client) as well as a native desktop or mobile app. The auscultation sound channel will have a gain control so a strong enough signal will be captured for the body recording. In addition, filters such as a low pass filter (or any other processing) may be applied to the sound (typically after the sound has been recorded, maintaining the raw audio file). The auscultation sound channel can be sent via the file handling system as described above, with minor latency, but no loss of quality, and the 'patient' or 'voice' or 'room' channel goes via a streaming path for live conversation where quality is less mission critical. As a result, a clinician can hear auscultation sounds with exactly the same fidelity as if the patient was in their office.
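The channel separation described above (one stereo feed split into auscultation and room mono channels, with gain on the auscultation channel and optional post-recording filtering) can be sketched as follows. This is a minimal sketch assuming interleaved left/right samples; the gain value and the one-pole filter coefficient are illustrative.

```python
# Sketch: separate an interleaved stereo feed into auscultation and
# room mono channels, apply gain, and (after recording) a simple
# one-pole low-pass filter on a copy, keeping the raw file intact.

def split_stereo(interleaved):
    """[L, R, L, R, ...] -> (auscultation, room) mono sample lists."""
    auscultation = interleaved[0::2]   # e.g. left = stethoscope mic
    room = interleaved[1::2]           # e.g. right = room/patient mic
    return auscultation, room

def apply_gain(samples, gain):
    """Boost the auscultation channel so body sounds are strong enough."""
    return [s * gain for s in samples]

def low_pass(samples, alpha=0.1):
    """One-pole low-pass filter; runs on a copy, raw audio is kept."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out
```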
The room channel may also have a gain control but will mainly just be passed on to the room and ultimately the healthcare professional's headphones. The healthcare professional and/ or patient can have control of muting each channel separately if they want to only hear one or the other mic. The system may also automatically mute the room mic when the healthcare professional is listening to and/ or recording the auscultation channel.
In addition, the room mic can be used to capture audio that can be used to reduce or remove non-heartbeat sounds in the heartbeat audio file using standard noise reduction techniques. This specific feature can additionally be used by the system to determine if the room is too noisy for a patient reading and/or if a patient is speaking while the exam is being recorded. This information can then enable the system to display a message to the patient to be silent and/or that there is too much noise to perform the exam.
An audio signal may be used to enable the capture, transmission, storage, and display of data from one or more sensors over a regular USB audio channel. This connection can work in any device that allows a microphone to connect and transmit data to a computer, phone, tablet, etc. The captured data is converted to audio using a predefined system that maps character data to audio frequency bands. Each character (number, letter, or symbol of the digital message) is mapped to a specific, unique frequency band (or mix of frequencies, like DTMF encoding, dual tone multi frequency encoding). In addition, a special “start” and “end” identifier is given a specific, unique frequency band or mix of frequencies as well (and a check sum could be added to ensure that the system has successfully transmitted the data). A set duration is established for all characters of the message so that each tone lasts the same duration. In one method, a sine wave is generated at the specific frequency in the middle of the character's frequency band that matches the current character of the message. Each message starts by sending a “begin” tone at the predefined “begin” frequency for the predefined duration. This is followed by each character’s predefined frequency again at the specified duration. When complete, an “end” tone is sent to complete the message. This loops and continuously updates the message frequencies each time as the data changes. This signal is transmitted over the USB connection as regular audio and re-encoded in the browser as digital data using the same frequency band to character map. This converted data can then be captured, stored, manipulated, displayed to the user, etc. as regular digital data.
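The character-to-frequency scheme described above can be sketched as follows. The sample rate, tone duration, band spacing and "begin"/"end" marker frequencies are illustrative assumptions, and the decoder uses a simple zero-crossing frequency estimate rather than DTMF-style multi-tone detection.

```python
# Hypothetical sketch: map each character to a unique frequency band,
# frame the message with "begin"/"end" tones of fixed duration, send
# as audio, and decode each fixed-length block back to a character.

import math

RATE = 8000          # samples per second (assumed)
DURATION = 0.05      # seconds per tone (assumed fixed for all characters)
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
BAND = 50            # Hz between adjacent character frequencies
BASE = 400           # Hz assigned to the first character
BEGIN_FREQ = 300     # unique "begin" marker frequency
END_FREQ = 350       # unique "end" marker frequency

def char_freq(c):
    return BASE + ALPHABET.index(c) * BAND

def tone(freq):
    """Sine wave at the middle of the character's frequency band."""
    n = int(RATE * DURATION)
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def encode(message):
    samples = tone(BEGIN_FREQ)           # "begin" tone
    for c in message:
        samples += tone(char_freq(c))    # one tone per character
    samples += tone(END_FREQ)            # "end" tone
    return samples

def dominant_freq(block):
    # Count rising zero crossings (pure sine assumed within each block).
    crossings = sum(1 for a, b in zip(block, block[1:]) if a < 0 <= b)
    return crossings / DURATION

def decode(samples):
    n = int(RATE * DURATION)
    blocks = [samples[i:i + n] for i in range(0, len(samples), n)]
    chars = []
    for block in blocks[1:-1]:           # skip begin/end marker tones
        index = round((dominant_freq(block) - BASE) / BAND)
        chars.append(ALPHABET[index])
    return "".join(chars)
```

The zero-crossing estimate is accurate to within about 1/DURATION Hz, which is why the band spacing here is chosen wider than that error; a production decoder would more likely use per-band filtering or an FFT, and would add the checksum mentioned above.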
In another embodiment, the system adds a camera with OCR software to translate any digital readout (for example a blood pressure display) into audio.
Using these methods, such a system can leverage a single mono track in the stereo audio signal of a web video interface and keep a room mic open as well so patients can still talk to their healthcare provider while using and/ or transmitting data from the medical device. This allows integration with any platform that either accepts an audio connection or has a display that can be read by an OCR reader and audio converted.
In the following section, we provide more detail on various details and features of the Medaica system.
Telemedicine
As a preliminary point, the terms 'telemedicine' and 'telehealth' are often used interchangeably in the public domain; Medaica follows that approach. Telemedicine is a subset of telehealth that refers solely to the provision of health care services over audio, video and/or messaging platforms via mobile phones and/or computers. Telemedicine involves the use of telecommunications systems and software to provide clinical services to patients without an in-person visit. Telemedicine technology is frequently used for follow-up visits, management of chronic conditions, medication management, specialist consultation and a host of other clinical services that can be provided remotely. Furthermore, the WHO also uses the term “telematics” as “a composite term for both telemedicine and telehealth, or any health-related activities carried out over distance by means of information communication technologies.”
It is also noted that some high-end telemedicine systems, typically used by hospitals for follow-up visits, often require complex and expensive "medical carts", operated by skilled doctors, nurses or technicians at the patient's location connecting to operators at a medical center. As is often the way with technology advancements, many companies are now providing some or all of these features to doctors and/or patients via more affordable devices and/or smartphones. There is, however, an increasing requirement to address ease-of-use, scalability and security as such systems start to gain wider appeal.
For the purpose of this document, the term 'telemedicine' should be broadly construed to encompass telehealth and telematics, and is not limited to professional or consumer systems.
Additionally, the terms 'doctor', 'healthcare professional', and 'clinician' are interchangeable and may also refer to nurses or any other practitioners who might not be doctors.
Auscultation hub
The Medaica 'Auscultation hub' is a website that stores files, such as but not limited to auscultation recordings from users' devices such as digital stethoscopes. It can include a file handling system as described above, or receive auscultation data that has been processed at a file handling system. The auscultation hub enables easy linking of those recordings to/from health practitioners and telemedicine platforms. The auscultation hub also enables editing of auscultation audio files; for example, a source audio file could be a sound recording lasting 60 seconds or more. But that sound recording could include extraneous noises of no clinical significance; the doctor/healthcare professional can review the complete auscultation audio file from within the auscultation hub and edit out or select sections of clinical relevance. The edited sound recording can be shared, for example with experts for an expert opinion, by sending the expert a weblink that, when selected, opens a website (e.g. the Medaica Auscultation hub) on which the expert can then play back the edited sound recording.
Virtual exam room
The Medaica 'Virtual exam room' enables a doctor/healthcare professional to send a web-enabled link to patients as an invite, with a unique security code, for a virtual exam that will take place in the Medaica virtual exam room. A patient clicking on the web-enabled link is taken to a webpage virtual exam room, accessed by entering their unique code. The virtual exam room can then display instructions, which could include timing for the exam, instructions to be ready to place the stethoscope where the doctor requires it, etc. Additionally, after a predetermined period of time (say 15 minutes), the system can reject the invite code and generate a new one with a new email as an additional security measure.
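The invite-code lifecycle described above can be sketched as follows; the 15-minute window follows the example in the text, while the code format and in-memory storage are illustrative assumptions.

```python
import secrets
import time

INVITE_TTL = 15 * 60  # seconds before a code is rejected (per the example above)

class InviteCodes:
    """Minimal sketch: issue a unique code for an exam, reject it after the
    TTL or after one use, forcing a fresh code to be generated."""

    def __init__(self, now=time.time):
        self._now = now           # injectable clock, useful for testing
        self._codes = {}          # code -> (exam_id, issued_at)

    def issue(self, exam_id):
        code = secrets.token_hex(4)
        self._codes[code] = (exam_id, self._now())
        return code

    def redeem(self, code):
        entry = self._codes.pop(code, None)   # pop: codes are one-time use
        if entry is None:
            return None
        exam_id, issued = entry
        if self._now() - issued > INVITE_TTL:
            return None  # expired: the system would issue a new code/email
        return exam_id
```

In practice the codes would live in server-side storage and the expiry path would trigger the "generate a new one with a new email" step described above.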
The exam session can be recorded and data files sent to 3rd parties to review/diagnose. The doctor/healthcare professional can also edit files and send edited files to other experts, as described in the Auscultation hub section above. In some implementations, the doctor can initiate the record start/stop from the website (i.e. not requiring the patient to initiate from the device).

Web-enabled links to/from auscultation files and/or other medical records
Files/records are not sent to doctors or telemedicine systems. Instead, the Medaica system generates a secure and unique web-enabled link or web link that, when clicked on, takes the recipient to that file. The unique web-enabled link can include metadata such as, but not limited to, date, time, device ID and user info, as well as business model rules such as, but not limited to, access rights, permissions, number of clicks permitted per link, rate per click, billing codes, etc.
The web link could also have a one-time or multiple use feature which could in turn be linked to the user’s membership rights (as could any of the aforementioned features).
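One way to realise such a metadata-carrying, tamper-proof link is to sign the metadata with an HMAC before embedding it in the URL. The sketch below is an assumption, not the Medaica implementation; the domain, field names and `max_clicks` limit are all hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"   # hypothetical signing key, held by the server

def make_web_link(file_id, device_id, max_clicks=1):
    """Generate a unique web-enabled link whose token carries metadata
    (date/time, device ID, permitted clicks) plus an HMAC signature so
    the link cannot be forged or altered."""
    meta = {"file": file_id, "device": device_id,
            "issued": int(time.time()), "max_clicks": max_clicks}
    payload = base64.urlsafe_b64encode(json.dumps(meta).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return f"https://example.invalid/r/{payload}.{sig}"

def read_web_link(link):
    """Return the embedded metadata, or None if the signature is invalid."""
    token = link.rsplit("/", 1)[-1]
    payload, sig = token.rsplit(".", 1)
    expect = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(sig, expect):
        return None
    return json.loads(base64.urlsafe_b64decode(payload))
```

The server would additionally track redemptions per token to enforce the one-time or multiple-use feature tied to the user's membership rights.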
Access rights could be leveraged to subsidize the business model, e.g. assuming access options include telemedicine platforms, insurers, research etc.; if research is enabled, the session could be free to patients if they agree to the terms that their data is being used for research and/or is being supported by a charity, e.g. the Gates Foundation.
Relatedly, the web link could also offer a drop-down menu of compatible telemedicine systems and/or doctors nearby, etc. Referral programs could then support Medaica when Medaica customers link to a specific telemedicine platform.
The system can also have an option of generating a web-enabled embed code which, when pasted into the telemedicine system, displays the Medaica “player” with the sound (or other) file(s). In such an embodiment, telemedicine systems could enable the doctor to review the sounds and/or perform a virtual exam without leaving the telemedicine website.
Security/ permissions
Users can have certain rights to listen, review, tag, annotate, forward, analyze or download files. For example, if a doctor does not have permission, he/she cannot tag the file with an opinion. Similarly, a 3rd party could be supported to give an opinion on the file, but not have permission to re-send the link. (If they cut and pasted the link they received, the system would know it was a one-time review link that had expired, and the system would inform the system owner/user/admin of the attempted impermissible use.)

Watermarks
Sound files can be watermarked such that if they are downloaded or used off-site, it can be easily determined that they are Medaica files. Such watermarks could be overlaid/added to Medaica files in a unique manner that the system would know how to remove or alter (for example, adding new date/user/owner info).
Collaboration and verification features
3rd parties, such as analytical labs and/or researchers, can be granted access to files, either by system admins, or by doctors or other authorized users, to diagnose files and/or enable a second opinion and/or conduct research for local government or other medical research, subject to their access rights.
Researchers could also be granted access to multiple files based on time, type, region etc. 3rd parties could also provide a crowd-sourced human verification diagnostic solution (like CAPTCHA) whereby x people claiming a sound indicates a certain condition increases the confidence that the sound is indeed that condition. This could be further enhanced to give doctors confidence that the diagnosis has been conducted by peers, for example by providing auditable references (e.g. clicking on who reviewed the sample, and how many samples he/she has been credited with correctly reviewing).
Business model(s)
There are numerous anticipated business models including but not limited to:
1. Charging telemedicine platform providers $x for every telemedicine session that leverages a Medaica exam (determined via the web-enabled link data). This could be a revenue share of the incremental reviews generated by such traffic.
2. Telemedicine platform providers could use Medaica devices for customer acquisition — i.e. they send users a Medaica device for free or for a discount if they sign up. They would do this because with Medaica, users will be getting a more useful telemedicine session, and the platform providers will be getting higher revenue and (until Medaica is ubiquitous), a more competitive solution.
3. Medaica could sell direct to end-users (patients) with a coupon for a discount for their first telemedicine session with Company X.

4. Medaica can charge a per click or per seat fee: per click could be based on types of clicks, e.g. a doctor listening to a file is the standard rate, but if she/he forwards the file for diagnosis, that could be a different rate (higher or lower).
5. As mentioned under Smart/Web-enabled links, Medaica could have a third party subsidize each recording and/or click in return for the data/research potential.
6. Insurers will be interested in participating in the ecosystem if a Medaica session can help triage the need for patients to have more expensive exams or in person visits.
Agnostic/ plug and play
Most medical devices have proprietary systems and, in the case of digital stethoscopes, cannot easily interface with telemedicine systems. This is even more challenging with Bluetooth devices, as they can compete with or confuse systems and devices that assume Bluetooth is for communication with the user rather than with a device, and that can rarely handle communicating with both a device and a user (in a telemedicine session, a Bluetooth stethoscope will typically take over the audio channel, making it impossible for the patient to talk to or hear the doctor).
Furthermore, most telemedicine platforms are closed systems and cannot easily enable device integration. Similarly, most medical devices are closed systems and/or have their own telemedicine solutions, making them ill-suited to multiple telemedicine solutions. Even in a well-designed telemedicine system or video platform, using an additional device will invariably require a new window, tab or menu item to be selected; Medaica therefore not only provides the same utility as a well-integrated solution, but does so for ANY video platform. The Virtual Exam Room is simply a new window that can be clicked on outside of the telemedicine screen, without having to launch a complex alternative application.
Healthcare provider and patient's web-app examples
The following screenshots provide an example of the web interfaces as seen by a healthcare provider at a first location and by a patient at a second, different location.
As shown in Figure 30, a user interface, such as an application running on a mobile device or a web-app, enables the healthcare provider to access his account related parameters, such as account settings, patients records and/or exams. The interface is also configured to enable the healthcare provider to create new patient entries and start or configure patient examination procedures.
As shown in Figure 31, once a new patient entry has been created, the healthcare provider may either create a “Store and Forward Exam” or start a live exam (with lossless near real time audio).
Store and Forward Exam
As described above, a 'Store and Forward Exam' is an exam a patient can conduct in their own time for the specific auscultation positions the healthcare provider requires. The healthcare provider may first confirm that the patient has read and understood the instructions before asking them to perform a store and forward exam. The healthcare provider may also confirm that the patient has been successfully guided to use the M1 stethoscope in a live exam before asking them to use it in a store and forward exam.
A store and forward exam may be requested by the healthcare provider or may be self-initiated by the patient and then sent to the healthcare provider.
Figure 32 shows a screenshot of the healthcare provider interface that enables the healthcare provider to create a store and forward exam. The steps taken by the healthcare provider to create the store and forward exam may be as follows:
• Select 'Patients' in the left sidebar, then select 'Create Store and Forward Exam';
• Select 'Patient Details' to see an existing list of patients or add them as a new patient;
• Select ‘Save Changes’;
• The ‘Store and Forward Exam’ template is displayed;
• Enter an 'Exam date' for the patient;
• Enter any additional notes for the patient, such as asking them to contact the office if they require additional assistance;
• Click on the ‘Heart’ and/or ‘Lung’ auscultation positions that the patient needs to record;
• Select ‘Save’.
Figures 33-37 show the corresponding patient’s web interface, following the creation of the ‘Store and Forward Exam’ by the healthcare provider.
As shown in Figure 33, the patient receives an automated message from the healthcare provider with the exam date and a link to the exam and auscultation positions that have been previously selected by the healthcare provider. Alternatively, the exam request may be sent via email, or text message.
The patient is then able to enter a secure page, as shown in Figure 34, with the step-by- step exam procedure based on the auscultation positions selected by the healthcare provider for them.
As shown in Figure 35, the patient’s software displays an instruction page and checks that the patient’s digital stethoscope is connected. If the digital stethoscope is not connected, the system displays an error message and will not progress until the digital stethoscope has been correctly detected.
Once the software confirms the patient’s digital stethoscope is connected, the patient sees the first auscultation recording position displayed, as shown in Figure 36. The patient is guided through each auscultation position via an on-screen body map.
The recording of auscultation sounds may be started via either the on-screen button or via a button located on the digital stethoscope. After a position has been recorded, the patient is able to review the auscultation file to make sure that no room noise or frictional noise was recorded. In the case that room noise or frictional noise are present on the recording, the patient can re-record at the same position, as shown in Figure 37. Once the auscultation position has been correctly recorded, the patient can move to the next position and continue until all required positions have been completed. The system may automatically save each recording once the patient selects ‘next step’.
The web interface includes a room sound indicator that illuminates green when the room is quiet, or red if it is too loud or there is too much frictional movement. If the room indicator is red, the patient is advised to move to a quieter location and/or make sure they are not moving the stethoscope around when recording.
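The room sound indicator can be sketched as a simple RMS-level check on the room-microphone samples; the threshold value here is an illustrative assumption, not a documented Medaica parameter.

```python
import math

# Hypothetical RMS level separating "quiet enough to record" from "too loud".
NOISE_RMS_THRESHOLD = 0.05

def room_indicator(samples):
    """Return 'green' when the room-microphone signal is quiet enough to
    record, or 'red' when the RMS level exceeds the threshold (too much
    room noise or frictional movement)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return "green" if rms <= NOISE_RMS_THRESHOLD else "red"
```

In the web interface this check would run continuously over short windows of the speech-microphone feed, driving the green/red light shown to the patient.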
Figure 37 shows an example of heart and lung body maps, as displayed on screen, in which each auscultation position is shown as a numbered circle.
As each auscultation position is recorded successfully, that position on the Body Map turns green and displays a “tick”, as shown in Figure 38 displaying the heart body map.
When the examination procedure is completed, a message is automatically displayed to the patient, as shown in Figure 39.
When the patient completes their examination, the system notifies the healthcare provider that the recordings are ready to be accessed and reviewed.
On the healthcare provider’s interface, the healthcare provider is then able to see the patient’s exam status updated on the dashboard, marked as an ‘unreviewed exam’ as shown in Figure 40.
Figure 41 shows an example of a web interface that enables the healthcare provider to view the auscultation file. The healthcare provider is able to (a) play or pause the recording of the audio at a specific auscultation position, (b) apply filtering techniques and (c) include notes.
When the auscultation file is playing, the phonocardiogram (PCG) can be expanded to view it in finer detail. Filtering methods, such as bell, diaphragm and/or extended filters, can be switched on to help the healthcare provider focus on specific frequencies. Sounds may also be boosted by sliding a volume slider. A notes field enables any notes to be added for the exam. The healthcare provider is then able to request a live exam or an alternative exam, for example if the healthcare provider is not satisfied that the patient has followed the instructions completely, or if any recordings sound too noisy or too quiet to provide a diagnosis.
The patient will also receive a notification from the healthcare provider once they have reviewed the examination files.
Live exam with lossless near real time audio
To start a live exam with lossless near real time audio, the healthcare provider may select ‘Start Live Exam’ on the healthcare provider interface, as shown in Figure 32.
Figure 43 shows a screenshot of the automated message displaying the live exam join details. The healthcare provider may then select 'Send Join Info' to send an email or a message to the patient with a weblink and a secure code to enter the examination web room, as shown in Figure 44. Alternatively, if the patient is already online, the healthcare provider may ask the patient to manually select 'Live Exam' and type in their unique code.
When the patient clicks the link the healthcare provider sent, a window requesting they enter an access code is displayed, as shown in Figure 45. The patient then enters the access code and continues to the ‘Live Exam’ page.
Figure 46 shows a diagram illustrating the live exam patient view on the left and the healthcare provider view on the right. The healthcare provider is therefore able to see and hear the patient and guide the patient on the correct placement of the stethoscope. Mirroring settings are available, in which the healthcare provider can look at a mirrored image of themselves and a facing image of the patient (i.e. as if the patient was facing the healthcare provider).
Figure 47 shows a screenshot of a page or menu available on the healthcare provider side, in which the healthcare provider has control over the patient's stethoscope listen/record function. Pressing the button enables the healthcare provider to hear the patient's auscultation sounds streamed live and starts background recording of the auscultation sounds. In order to achieve a high quality and clear recording of heart and/or lung sounds, the healthcare provider may remind the patient:
• to make sure they are in a quiet location;
• not to move the stethoscope during recording;
• not to speak during the recording.
While the system is configured to enable the healthcare provider to listen to the patient’s auscultation sounds live (streamed), the system is also configured, via the file handling system, to simultaneously record the auscultation file locally on the patient’s computer. As soon as the healthcare provider releases the listen button or stops the recording, the recorded audio on the patient’s computer is automatically sent to the healthcare provider’s computer as a store and forward .wav file, enabling the healthcare provider to hear lossless (CD quality) audio.
Alternatively, the file can be downloaded in the background, to be available as soon as the healthcare professional clicks on the option to review the file.
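The lossless capture on the patient's side can be sketched with a standard .wav writer. The float-sample input and 44.1 kHz/16-bit mono format below are illustrative assumptions; the text only specifies that the recording is lossless ("CD quality").

```python
import io
import struct
import wave

def record_lossless_wav(samples, rate=44100):
    """Write floating-point samples (range -1.0..1.0) as 16-bit mono PCM
    into an in-memory .wav: a sketch of the lossless file recorded on the
    patient's computer while the compressed live stream goes out in
    parallel."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)      # mono auscultation channel
        w.setsampwidth(2)      # 16-bit PCM
        w.setframerate(rate)
        pcm = b"".join(struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
                       for s in samples)
        w.writeframes(pcm)
    return buf.getvalue()
```

When the healthcare provider releases the listen button, bytes produced this way would be uploaded as the store-and-forward .wav file described above.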
The 'live exam with lossless near real time audio’ approach therefore provides access to a near real time store and forward exam within a live streamed exam and achieves a very different user experience for both the patient and the healthcare provider as compared to the standard standalone live streamed exam.
There are several useful advantages in simultaneously providing a streamed sound and recording a lossless sound, such as if: a) the healthcare provider needs to review a sound again, b) the healthcare provider wants to hear the patient's sound in a format that is suitable for clinical grade analysis, c) the streamed sound was interrupted due to internet quality issues, d) the healthcare provider wants to compare different filters applied to the recorded sound to help focus on specific frequencies.

Appendix 1 - Key Features of the Medaica system
One implementation of this invention envisages an internet-connected app that is hardware agnostic and can hence be easily deployed across all Android and iOS smartphones, as well as PC and Apple desktop computers; virtually any medical device can be easily and cheaply architected to send patient datasets to the smartphone or computer, e.g. over a standard USB cable or wireless connection; and the internet-connected app can then manage the secure transfer of these patient datasets to a web server. Once the datasets are stored on the web server, they can be shared by generating a web link to those specific datasets and sharing that web link; any physician with a web browser can then review those datasets.
One conventional approach when designing telemedicine systems is to build some sort of proprietary and secure data transfer system directly into the medical device or a host computer; this data transfer system can then securely transfer data to a cloud-based telemedicine system. So the architecture is quite simple: the medical device connects to the telemedicine system. In one implementation of this invention, the overall architecture is more complex, because we add in an internet-connected app (resident on the medical device or a connected smartphone etc.) and a web server that the web app communicates with; that web server can then in turn connect to the cloud-based telemedicine system.
So we have added an additional layer of complexity to the overall architecture. But, paradoxically, by adding this extra complexity, we enable simplicity: this approach decouples designers of medical devices from the complex technical challenges of the secure, reliable and accurate transmission of confidential patient data and integration into proprietary telemedicine systems. All they need to include is a standard data transfer system (e.g. a USB cable), so they can focus on doing what they do best, namely designing the best medical devices they can. Likewise, it decouples designers of telemedicine systems from these same technical challenges: they can instead focus on doing what they do best, namely designing systems that best serve the needs of healthcare professionals and patients. One can draw an analogy to the early days of personal computing. If you wanted to build a peripheral device, such as a laser printer, then you would need to understand and implement some sort of data transfer system that enabled you to communicate quite deeply with the computer's hardware, e.g. the memory where documents were stored. This required laser printer designers to master the intricacies of how CPUs and memories operated. But then a universal abstraction layer was added, such as the Windows® operating system. This added an additional layer of complexity to the overall architecture, but was fundamental to enabling overall simplicity: laser printer designers could simply ensure they could work with the Windows operating system and focus on doing what they did best, namely designing laser printers that would work with any computer from any manufacturer, so long as it ran the Windows operating system.
The present invention offers the same potential: it enables medical device vendors to focus on what they do best, designing medical devices that work with any telemedicine system, so long as the medical device can include an internet-connected app or send data to a device like a smartphone that can run an internet-connected app, and so long as the telemedicine system has a web browser. Similarly, it enables telemedicine vendors to focus on what they do best, without having to be concerned about the specifics of how medical devices work, or requiring medical devices to include specific proprietary software.
Since all smartphones etc. run web apps, and all telemedicine systems can use a web browser, this invention can provide a universal backbone connecting in essence any medical device to any telemedicine system. In the following sections, we outline five features of the Medaica system; we list also various optional sub-features for each feature. Note that any feature can be combined with one or more other features; any feature can be combined with any one or more sub-features (whether attributed to that feature or not) and every sub-feature can be combined with one or more other sub-features.
Feature 1: File Handling System
The medical device may be a digital stethoscope and the patient sounds can then be patient sounds such as auscultation sounds, e.g. sounds made by the heart, lungs or other organs. In earlier digital stethoscope devices, these auscultation sounds would be live streamed to a physician or other healthcare professional; as noted earlier, live streaming can however result in dropped or delayed packets, with the physician then being unable to accurately detect heart rhythms (e.g. murmurs) or other critical sounds.
With this feature, the audio data is sent to a file handling system for download and not live real-time streaming, although live streaming remains an option for audio where the highest quality is not essential. In one example, the audio data is sent from the medical device to an intermediate device or web server that implements the file handling system; the audio data is fully downloaded at the intermediate device or web server; playback can take place once the data has been fully downloaded; the intermediate device or web server in turn can provide the file to the PC or smartphone or other device of the healthcare professional; this local device then downloads the file and enables the healthcare professional to listen to the file, replay it, annotate the file with metadata, store it in a digital patient record, share it etc. Alternatively, the intermediate device or web server can stream the file to the healthcare professional’s device; this streaming will however be at higher quality than direct real-time live streaming from the medical device.
The file handling system introduces some minor and potentially imperceptible latency, but ensures that the physician/healthcare professional can hear the auscultation sounds as clearly and completely as possible, at a quality that is better than direct live streaming quality, which can be affected by dropped and delayed packets. The healthcare professional can also receive live-streaming audio, for example to hear the user speaking and to hear audio useful for the accurate positioning of the device (e.g. to hear a heartbeat).
Once the healthcare professional is happy that the stethoscope is in the correct position, they press record on their computer screen, and a timer/countdown is displayed on their screen. The timer UX can show an animated "downloading" bar or dots while the file is being sent to the healthcare professional's computer or local device. It typically takes 1-2 seconds, depending on the internet bandwidth, to receive the fully downloaded file at the healthcare professional's device and for the healthcare professional to be able to start local playback of the fully downloaded file. We can generalise to:
A telemedicine system including:
(a) a medical device that includes a microphone system configured (i) to detect and/ or record patient sounds, and (ii) to generate audio data from those sounds, and (iii) to send that audio data;
(b) a file handling system configured (i) to receive, download and store the audio data from the medical device, and (ii) to make that file available for near-real-time listening to the patient sounds.
Optional features
• The telemedicine system is configured to simultaneously (a) record the file and (b) enable a healthcare professional to listen to the patient sounds in real time.
• The delay between real time and near-real time is less than 30s.
• The delay between real time and near-real time is less than 10s.
• The delay between real time and near-real time is less than 5s.
• The delay between real time and near-real time is less than 2s.
• The telemedicine system is configured to enable the file to be recorded in a format suitable for clinical grade analysis, such as a lossless format.
• The telemedicine system is configured to generate sections or fragments of audio data from the patient sounds and the file handling system is configured to receive, download, and store each section of audio data and to make each section available for near-real-time listening to patient sounds.
• Each section is configured to represent a pre-defined length of audio data, such as 10 seconds of audio data, or 1 second of audio data.
• The system is configured to enable a healthcare professional to select from a remote location when to start listening to patient sounds in real time.
• The system is configured to automatically make the file available to the healthcare provider for listening at the end of an action from the healthcare professional, such as releasing a "listen" button or selecting a "review" button.

• The telemedicine system is configured to store the file locally on a device that is connected to the medical device, such as a mobile device, smartwatch, smartphone, desktop, or laptop.
• TCP layer protocol processing and IP layer protocol processing (TCP/IP) is used to send the file from the medical device to a web server.
• TCP/IP is used to send the file from the web server to a healthcare provider’s device.
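The optional "sections or fragments of audio data" feature above can be sketched as fixed-length chunking, so each section can be uploaded and made available for near-real-time listening as soon as it is complete; the 1-second default below is one of the pre-defined lengths suggested in the text.

```python
def chunk_audio(samples, rate, seconds=1.0):
    """Split a stream of samples into fixed-length sections (e.g. 1 s each).
    The final section may be shorter if the stream ends mid-section."""
    n = int(rate * seconds)
    return [samples[i:i + n] for i in range(0, len(samples), n)]
```

Each section returned here would be handed to the file handling system independently, trading a small per-section latency for early availability.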
The file handling system
• the medical device is a digital stethoscope and the patient sounds are clinically relevant, e.g. auscultation sounds, such as sounds made by the heart, lungs or other organs of a human or indeed any other animal.
• the file handling system is implemented on a web server that receives audio data directly or indirectly from the medical device and is configured for recording, storing and controlling access to uploaded patient datasets that include the audio data processed by the file handling system.
• the file handling system is implemented on an intermediary device that receives audio data directly or indirectly from the medical device and sends processed audio data to a web server that is configured for recording, storing and controlling access to uploaded patient datasets that include the audio data processed by the file handling system.
• the file handling system is implemented on a computer operated by a healthcare professional.
• the file handling system is implemented as a store and forward system.
Second Microphone
• the medical device includes (i) a speech microphone configured to detect and/or record patient speech and (ii) a second microphone configured to detect and/or record patient sounds (e.g. clinically relevant sounds) and generate audio data (e.g. clinically relevant audio data) from those sounds.
• the speech microphone is configured to enable real-time voice communication from the patient to the healthcare professional at the same time as the audio data is being provided to the healthcare professional via the file handling system to enable the healthcare professional to listen to the downloaded audio data in near real-time or at a later time.
• the telemedicine system is configured to enable the healthcare professional to select whether to listen to real-time voice communication from the patient or to listen to the downloaded clinically relevant audio data sent via the file handling system, by muting, fully or partly, either the real-time voice communication or the audio data.
• the speech microphone uses one channel of a stereo channel pair, and the second microphone uses the other channel.
• the system is configured to use the speech microphone to determine unwanted noise or noise that otherwise affects the quality of the audio data and to generate a warning if the unwanted noise exceeds a threshold.
• the speech and clinically relevant audio data are each delivered as an audio signal over a stereo channel and a web app separates the audio signal into two separate mono feeds or channels and processes each differently.
• the clinically relevant audio data channel has a gain control to increase the strength of the signal.
• filters are applied to the speech sounds and also the clinically relevant audio data, after these sounds have been recorded, maintaining a raw audio file or files.
• the healthcare professional and/or patient each have control of muting the speech channel and the audio data channel separately if they want to only hear one or the other channel.
• the speech microphone is used to capture audio that is used to reduce or remove sounds that are not relevant to the audio data.
• the speech microphone output is used to determine if the room a patient is in is too noisy for a patient reading and/or if a patient is speaking while the exam is being recorded, to enable a message to be shown or given to the patient to be silent and/or that there is too much noise to perform the examination.
• the speech channel and the other channel are processed asynchronously, with the other channel being sent via the file handling system.
• each channel is processed to enable noise reduction/cancellation techniques.
• the noise reduction/cancellation techniques involve measuring the timing/phasing of noise detected by the speech microphone compared with the same noise detected by the auscultation microphone.
• each channel is processed to enable compensating for different timing in receiving auscultation sounds in patients with different body masses.
• each channel is processed to enable noise reduction/cancellation techniques at an intermediary device
• the telemedicine system is configured to use the speech microphone to determine unwanted noise or noise that otherwise affects the quality of the audio data and to generate a warning if the unwanted noise exceeds a threshold.
• the clinically relevant audio data is processed to improve the quality of the audio from a clinical or diagnostic perspective.
• the clinically relevant audio data is processed at the file handling system to improve the quality of the audio from a clinical or diagnostic perspective.
• the medical device is a single, unitary device and the speech microphone and the second microphone are integrated into that single, unitary device.
• the medical device comprises two physically separate or separable units, and the speech microphone and the second microphone are integrated into different separate or separable units.
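The stereo-channel scheme in the bullets above — one microphone per channel, split into two mono feeds with independent gain and mute — can be sketched as follows. This is an illustrative sketch only; the function names and sample values are not taken from the actual system.

```python
def split_stereo(frames):
    """Split interleaved (left, right) frames into two mono feeds:
    the speech microphone on one channel, auscultation on the other."""
    speech = [left for left, _ in frames]
    auscultation = [right for _, right in frames]
    return speech, auscultation

def process_channel(samples, gain=1.0, muted=False):
    """Apply a simple gain (or full mute) to one mono channel."""
    if muted:
        return [0.0] * len(samples)
    return [s * gain for s in samples]

# Example: mute speech and boost the (typically weak) auscultation signal.
frames = [(0.1, 0.02), (-0.2, 0.03), (0.05, -0.01)]
speech, ausc = split_stereo(frames)
speech_out = process_channel(speech, muted=True)
ausc_out = process_channel(ausc, gain=10.0)
```

Because each feed is a separate list after the split, the two channels can be filtered, muted or amplified entirely independently, as the bullets above require.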
Remote web server
• the medical device is configured to upload or send patient datasets to a remote web server, from an internet-connected app running either on the device or on an intermediary device
• the remote web server posts or makes available webpages that include patient datasets and can be viewed on any web-enabled device, such as the patient’s laptop, or mobile phone or the healthcare professional’s laptop or mobile phone.
• the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links.
Web-link
• the medical device is configured to upload or send patient datasets to a remote web server, directly from an internet-connected app running either on the medical device or on an intermediary device; the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset that has been processed by the file handling system; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links.
• the unique web-link is configured to enable a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links and to initiate a virtual examination of the patient by opening a link to a virtual examination room hosted on the remote web server.
• the web link is configured to be copied and pasted by the user into a telemedicine session, email message, text message or any other communications system.
• the web link is configured to be sent automatically to the healthcare professional.
• the web link is configured to be sent automatically to the healthcare professional only after the user has confirmed it should be sent by interacting with a web page posted by the web server.
• the web link, when selected by the healthcare professional on their web-enabled device, takes the healthcare professional directly to the patient dataset stored on the web server, to enable the healthcare professional to review that patient dataset.
• the web link is configured to be used to control access rights and privacy controls.
• the web link is configured to be used to control additional healthcare services such as diagnostic analysis and verification.
• the web link contains rules permitting third-party access rights, sharing/viewing rules and financial controls.
• the web link is an HTML hyperlink.
• the web link when selected opens a video conferencing application.
• the web link when selected opens a video conferencing application that is integrated within a telemedicine session
• the web link provides access to the patient dataset to an authorized third party only when the authorized third party has been authenticated by the system and/or patient and/or healthcare provider.
• an authorized third party accesses the patient dataset in real time as it is being created.
• the method enables an authorized third party to start or stop the creation of a patient dataset by at least one of the medical devices.
• the method enables an authorized third party to record the patient dataset.
The intermediary device:
• the intermediary device sends audio data to the web server that is configured for recording, storing and controlling access to uploaded patient datasets that include the clinically relevant audio data processed by the file handling system.
• the medical device is connected or sends data to an intermediary device, such as a laptop or PC, and an internet-connected app running on the intermediary device treats the patient speech and the audio data generated by the medical device in a way that satisfies the standard browser security model of allowing for multiple audio sources to be used at any given time
• the medical device is connected or sends data to a portable intermediary device such as a smartphone or smartwatch, then an internet-connected app running on the portable intermediary device processes both the patient speech and also the audio data generated by the medical device in a way that satisfies the standard smartphone or smartwatch model of allowing for multiple audio sources to be used at any given time only if they are integrated into a single app.
• the intermediary device is a smartphone or laptop or any other computing device that is configured to connect to the medical device and the remote file handling system.
• the medical device connects to the intermediary device using a data cable, such as a USB cable.
• the medical device connects to the intermediary device over short-range wireless, such as Bluetooth.
The medical device:
• the digital stethoscope comprises a first audio sensor that is configured to pick up speech from the patient or sounds from the patient environment and a second audio sensor that is configured to measure or sense clinically relevant body sounds.
• the medical device is any digital medical device that can generate patient data and send that data, directly or via an intermediary device, to a remote web server.
• the medical device is one of the following: digital stethoscope, ultrasound, blood pressure monitoring device or any other digital monitoring devices.
• the medical device is a smart device that is configured to monitor vital signs and other patient parameters for anomalies or events and to automatically send an alert to the remote web-server if an anomaly or event is detected, together with a patient dataset that captures the anomaly or event, and generate a unique web-link that is associated with that patient dataset and to send that unique web-link to a healthcare professional or emergency service.
• the anomaly or event includes an onset of organ failure or malfunction
• the anomaly or event includes an altered breathing rate or cough
• the medical device connects to the intermediary device running the web app over a USB port.
• Audio filters can include compensation for body mass or other human characteristics that are known to alter auscultation sounds, including male/female body differences, age etc. It is known that sound travelling through, say, a heavier patient will have a different frequency response than the same sound travelling through a thinner patient. Likewise, a female patient’s heart and lung sounds might present more quietly, due to the impact of sound travelling through breast tissue. Such audio characteristics can be compensated for using equalisation, compression and/or convolution techniques, much as Digital Audio Workstation (DAW) software can, for example, remove room ambiance and/or compensate for a recording made in a live room and make it sound as if it was made in a carpeted room.
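As a minimal illustration of such compensation — a crude stand-in for the equalisation/convolution techniques mentioned above, with an entirely illustrative coefficient rather than a calibrated value:

```python
def compensate_high_loss(samples, alpha=0.5):
    """Crude high-frequency boost: add a scaled first difference to
    each sample, partially restoring treble content attenuated by body
    tissue. A real system would use proper EQ/convolution filtering."""
    if not samples:
        return []
    out = []
    prev = samples[0]
    for x in samples:
        out.append(x + alpha * (x - prev))
        prev = x
    return out
```

A constant (DC) signal passes through unchanged, while sample-to-sample changes — the high-frequency content most attenuated by tissue — are amplified by the factor `alpha`.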
Cloud based telemedicine server
• the file handling system is implemented in a cloud-based telemedicine server
We can generalise further by expanding the scope from patient sounds to other sorts of patient data (such as ultrasound images) that are conventionally live-streamed to a healthcare professional:
A telemedicine system including:
(a) a medical device that includes one or more sensors configured (i) to detect and/or record patient sounds and/or images, and (ii) to generate data from those sounds and/or images, and (iii) to send that data;
(b) a file handling system configured (i) to download and store the data from the medical device, and (ii) to make that data available for non-real-time listening to and/or viewing of the patient sounds and/or images.
The Optional Features listed above apply equally to this generalised form.
Feature 2: Web-links
In one implementation, a telemedicine system enables patient datasets that are generated from multiple medical devices to be sent to a remote web server or servers. For example, there could be thousands of low-cost stethoscopes, e.g. M1 devices as described in this document, each being used by a patient at home by being plugged into that patient's smartphone using a simple USB cable connection. Each smartphone runs an internet-connected application that records the heart and other body sounds captured by the tethered stethoscope and creates a dataset for each recording. It sends that recording, or patient dataset, to a remote server over the internet. The remote server then associates that recording, or patient dataset, with a unique web-link. The patient's doctor is sent the web-link, or perhaps the server sends the web-link for automatic integration into the electronic records for that patient. In any event, the patient's doctor can then simply click on the web-link and the recording or other patient dataset is made available - e.g. a media player could open within the doctor's browser or dedicated telemedicine application and, when the doctor presses 'play', the sound recording is played back.
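The upload-then-link flow described above might be sketched as follows. The class, method names and base URL here are hypothetical placeholders, not the actual server interface:

```python
import uuid

class RecordingServer:
    """Toy stand-in for the remote web server: stores uploaded patient
    datasets and hands back a unique web-link for each one."""

    def __init__(self, base_url="https://server.example/r/"):
        self.base_url = base_url
        self._store = {}

    def upload(self, patient_id, recording):
        token = uuid.uuid4().hex           # unguessable per-dataset token
        self._store[token] = (patient_id, recording)
        return self.base_url + token       # the unique web-link

    def fetch(self, web_link):
        """What the doctor's browser does when the web-link is opened."""
        token = web_link.rsplit("/", 1)[1]
        return self._store[token]
```

Because each upload mints a fresh random token, every recording gets its own link, and the link itself is the only credential a reviewer needs to locate that specific dataset.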
We can generalise to: A telemedicine system comprising one or more medical devices that are each configured to generate patient datasets, and a remote web server; in which: a medical device is configured to upload or send patient datasets to the remote web server, directly from an internet-connected app running either on the medical device or on an intermediary device; the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links.
And we can further generalise to:
A telemedicine system comprising one or more medical devices that are each configured to generate patient datasets, and a remote web server connected to at least one of the medical devices; in which: a medical device is configured to upload or send patient datasets to the remote web server, directly from an internet-connected app running either on the device or on at least one intermediary device; the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset; and in which the unique web-link is configured to enable a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links and to initiate a virtual examination of the patient by opening a link to a virtual examination room hosted on the remote web server.
Optional features:
Remote web server:
• the remote web server is configured for recording, storing and controlling access to uploaded patient datasets.
• the remote web server posts or makes available webpages that include the patient datasets and can be viewed on any web-enabled device, such as the patient’s laptop or mobile phone or the healthcare professional’s laptop or mobile phone.
• there are multiple remote web servers
Web link:
• the web link is configured to be copied and pasted by the user into a telemedicine session, email message, text message or any other communications system.
• the web link is configured to be sent automatically to the healthcare professional.
• the web link is configured to be sent automatically to the healthcare professional only after the user has confirmed it should be sent by interacting with a web page posted by the web server.
• the web link, when selected by the healthcare professional on their web-enabled device, takes the healthcare professional directly to the patient dataset stored on the web server, to enable the healthcare professional to review that patient dataset.
• the web link is configured to be used to control access rights and privacy controls.
• the web link is configured to be used to control additional healthcare services such as diagnostic analysis and verification.
• the web link contains rules permitting third-party access rights, sharing/viewing rules and financial controls.
• the web link is an HTML hyperlink.
• the web link when selected opens a video conferencing application.
• the web link when selected opens a video conferencing application that is integrated within a telemedicine session
• the web link provides access to the patient dataset to an authorized third party only when the authorized third party has been authenticated by the system and/or patient and/or healthcare provider.
• an authorized third party accesses the patient dataset in real time as it is being created.
• the system and method enable an authorized third party to start or stop the creation of a patient dataset by at least one of the medical devices.
• the system and method enable an authorized third party to record the patient dataset.
• the system and method enable an authorized third party to preview the patient dataset in live streaming mode and then, in near real-time, receive the downloaded higher-quality version of the same dataset without risk of data packet loss.
The intermediary device:
• is a smartphone or laptop or any other computing device that is configured to connect to at least one of the medical devices and the remote web server.
• the medical device connects to the intermediary device using a data cable, such as a USB cable.
• the medical device connects to the intermediary device over short-range wireless, such as Bluetooth.
The medical device:
• the medical device is any digital medical device that can generate patient data and send that data, directly or via an intermediary device, to a remote web server.
• the medical device is one of the following: digital stethoscope, ultrasound, blood pressure monitoring device or any other digital monitoring devices.
• a visual indicator on the digital medical device automatically turns on when a patient dataset is being generated.
• a visual indicator on the digital medical device indicates when sufficient data has been measured to generate a patient dataset.
• a visual indicator on the digital medical device indicates that an authorized third party is accessing, e.g. streaming, the patient dataset.
• the medical device is a smart device that is configured to monitor vital signs and other patient parameters for anomalies or events and to automatically send an alert to the remote web-server if an anomaly or event is detected, together with a patient dataset that captures the anomaly or event, and generate a unique web-link that is associated with that patient dataset and to send that unique web-link to a healthcare professional or emergency service.
• the anomaly or event includes an onset of organ failure or malfunction
• the anomaly or event includes an altered breathing rate or cough
• the medical device connects to the intermediary device running the web app over a USB port.
Feature 3: Second microphone: Telemedicine Audio Systems and Methods
Maintaining communications between a patient and healthcare professional while examination sounds are being shared is currently still a challenging task. In the example we gave above, the patient used a simple stethoscope connected via a USB-C cable to a smartphone; after the patient had completed recording his heart/lung etc. sounds, the recording was sent by the smartphone to the remote server, and a web-link was generated by the server and then sent to the patient's doctor. The doctor could hence review the patient's records a few hours or days etc. after the patient had made the recording by selecting and opening the web-link in a browser. But in the Medaica system, the doctor can start a video or audio examination of a remote patient, and during that examination can choose to listen to the real-time heart/lung sounds being recorded by the stethoscope the patient is using (using for example the web-link sharing process described above), and can also have an audio conversation with the patient because the stethoscope includes two microphones: one for picking up the heart/lung sounds, and a second microphone for picking up the voice of the patient. The doctor, when listening to heart/lung sounds, can mute those sounds fully, and instead listen to the patient talking; the doctor can also partly mute either the heart/lung sounds or the patient's voice; for example, to have the heart/lung sounds as the primary sound and have the patient's voice partly muted and hence at a lower level. Similarly, the doctor may have the patient's voice as the main sound and have the real-time heart/lung sounds muted to a lower level.
Using one microphone per channel, i.e. one microphone on the left channel and the other on the right channel, allows the design to leverage common amp and/or A-D chip designs. Without this design, a system would need a method of switching from the auscultation/stethoscope microphone to the patient voice microphone, which is challenging to engineer since it requires a system-level change. Further, being able to process the sound signals from both microphones in parallel can be very advantageous for various noise reduction/cancellation and enhancement functions. For example, in a noisy environment (e.g. in an ambulance or ER) noise reduction/cancellation techniques can be applied, such as measuring the timing/phasing of noise detected by the voice microphone compared with the same noise detected by the auscultation microphone: this requires simultaneous or parallel processing of the sonic signals from both microphones, and would not be possible if the auscultation/stethoscope microphone could only send signals when the patient voice microphone was off, and vice versa.
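The timing/phasing measurement described above can be illustrated with a brute-force cross-correlation over a short window of parallel samples. The function name and the lag search range are illustrative only:

```python
def best_noise_lag(speech, ausc, max_lag):
    """Estimate the lag (in samples) at which ambient noise picked up
    by the speech microphone best aligns with the same noise in the
    auscultation channel. This requires both channels to be captured
    in parallel, as the one-microphone-per-channel design allows."""
    n = len(speech) - max_lag
    def correlation(lag):
        return sum(speech[i] * ausc[i + lag] for i in range(n))
    return max(range(max_lag + 1), key=correlation)

# Example: a noise click at sample 2 on the speech channel arrives
# 3 samples later on the auscultation channel.
speech = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
ausc   = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
lag = best_noise_lag(speech, ausc, max_lag=4)   # lag == 3
```

Once the lag is known, the noise reference from the speech microphone can be time-shifted and subtracted from the auscultation channel; a production system would do this with proper adaptive filtering rather than this toy search.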
Simultaneous or parallel processing of the sonic signals from both microphones also enables compensating for different timing in receiving auscultation sounds in patients with different body masses: for example, assume the patient voice microphone detects a sound in the room with a given intensity; that same sound will pass through the patient's upper body tissue and be reflected off the ribcage and hard tissue; the auscultation/stethoscope microphone will detect that reflected signal. But the attenuation of the reflected signals increases as body mass increases; hence we are able to approximately infer body mass by measuring the intensity of the reflected signals; we can use that body mass estimation to compensate for the small but different time delay in receiving auscultation sounds in patients with different body masses, and can hence normalise auscultation sounds across patients in a way that compensates for different body mass.
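A toy version of that inference might look like the following. The linear mapping and its constants are purely illustrative assumptions, not calibrated values from the system:

```python
def attenuation_ratio(room_intensity, reflected_intensity):
    """Ratio of the reflected intensity (auscultation microphone) to
    the direct intensity (speech microphone); a lower ratio suggests
    greater body mass, i.e. more tissue attenuation."""
    return reflected_intensity / room_intensity

def delay_correction_ms(ratio, base_ms=0.5, scale_ms=2.0):
    """Map the attenuation ratio to a per-patient timing correction:
    the lower the ratio (heavier patient), the larger the correction.
    Constants here are arbitrary placeholders for calibration data."""
    return base_ms + scale_ms * (1.0 - ratio)
```

The corrected delay would then be used to align auscultation recordings across patients so that, for example, heart-sound timing can be compared on a normalised basis.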
We can generalize to:
A telemedicine system comprising: multiple medical devices that are each configured to generate patient datasets, and a remote web server connected to each medical device; in which: a medical device is configured to upload or send patient datasets to the remote web server, directly from an internet-connected app running either on the device or on an intermediary device; and in which the medical device includes (i) a speech microphone configured to detect and/or record patient speech and (ii) a second microphone in the medical device configured to detect and/or record clinically relevant sounds and generate an audio dataset from those sounds; and in which the internet-connected app is configured to treat that patient speech separately from the audio dataset and is hence configured to enable real-time voice communication from the patient to the healthcare professional at the same time as the audio dataset is being shared with the healthcare professional via the remote web server; and the system is configured to enable the healthcare professional to select whether to listen to real-time voice communication from the patient or to listen to the audio dataset in real-time by muting, fully or partly, either the real-time voice communication or the audio dataset.
Optional features:
• the system is configured to use the speech microphone to determine unwanted noise or noise that otherwise affects the quality of the audio dataset and to generate a warning if the unwanted noise exceeds a threshold.
• where the intermediary device is a laptop or PC, then the internet-connected app treats the patient speech and the audio dataset generated by the medical device in a way that satisfies the standard browser security model of allowing for multiple audio sources to be used at any given time
• where the intermediary device is a smartphone or smartwatch, then the internet-connected app processes both the patient speech and also the audio dataset generated by the medical device in a way that satisfies the standard smartphone or smartwatch model of allowing for multiple audio sources to be used at any given time only if they are integrated into a single app.
• the speech and audio datasets are each delivered over a stereo channel and the web app separates the audio signal into two separate mono feeds and processes each differently.
• the clinically relevant audio dataset channel has a gain control to increase the strength of the signal.
• the clinically relevant audio datasets are processed to improve the quality of the audio from a clinical or diagnostic perspective.
• filters are applied to the speech sounds and also to the clinically relevant sounds, after these sounds have been recorded, while maintaining a raw audio file or files.
• the healthcare professional and/or patient each have control of muting the speech channel and the clinically relevant sound channel separately if they want to only hear one or the other channel.
• the speech microphone is used to capture audio that is used to reduce or remove sounds that are not relevant to the clinically relevant sound channel and hence the audio dataset.
• the speech microphone output is used to determine if the room is too noisy for a patient reading and/or if a patient is speaking when the exam is being recorded, to enable a message to be shown or given to the patient to be silent and/or that there is too much noise to perform the examination.
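The noise check in the bullets above — using the speech-microphone level to gate the examination — could be as simple as an RMS threshold. This is a minimal sketch; the threshold value and warning text are arbitrary placeholders:

```python
import math

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def noise_warning(speech_samples, threshold=0.1):
    """Return a warning message for the patient if the ambient level
    on the speech channel is too high for a clean recording, else None."""
    if rms(speech_samples) > threshold:
        return "There is too much noise to perform the examination."
    return None
```

In practice the app would run this check on short blocks of the speech channel during recording and surface the message to the patient whenever it fires.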
The medical device is a digital stethoscope
• the medical device is a digital stethoscope and the clinically relevant sounds are auscultation sounds.
• The audio dataset channel, e.g. auscultation sound channel, has a gain control so a strong enough signal will be captured for the body recording.
• The digital stethoscope connects to the intermediary device using a USB port.
• the digital stethoscope connects to the intermediary device using short-range wireless.
• the digital stethoscope includes a single visual output and a single button.
• the digital stethoscope is waterproof.
• the digital stethoscope comprises a first audio sensor that is configured to measure or sense body sounds and a second audio sensor that is configured to measure or sense sounds from the patient or the environment around the patient.
• the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links.
Another aspect is a medical device that includes (i) a speech microphone configured to detect and/or record patient speech and (ii) a second microphone configured to detect and/or record clinically relevant sounds and generate an audio dataset from those sounds; in which the speech microphone uses one channel of a stereo channel pair, and the second microphone uses the other channel, and each channel is processed substantially in parallel or simultaneously.
Optional features:
• the medical device is a digital stethoscope and the clinically relevant sounds are auscultation sounds.
• each channel is processed substantially in parallel or simultaneously to enable noise reduction/cancellation techniques.
• the noise reduction/cancellation techniques involve measuring the timing/phasing of noise detected by the speech microphone compared with the same noise detected by the auscultation microphone.
• each channel is processed substantially in parallel or simultaneously to enable compensating for different timing in receiving auscultation sounds in patients with different body masses.
• the system is configured to use the speech microphone to determine unwanted noise or noise that otherwise affects the quality of the audio dataset and to generate a warning if the unwanted noise exceeds a threshold.
• the clinically relevant audio dataset is processed to improve the quality of the audio from a clinical or diagnostic perspective.
• the clinically relevant audio dataset channel has a gain control to increase the strength of the signal.
• filters are applied to the speech sounds and also to the clinically relevant sounds, after these sounds have been recorded, while maintaining a raw audio file or files.
• a healthcare professional and/or patient each have control of muting the speech channel and the clinically relevant sound channel separately if they want to only hear one or the other channel.
• the speech microphone is used to capture audio that is used to reduce or remove sounds that are not relevant to the clinically relevant sound channel and hence the audio dataset.
• the speech microphone output is used to determine if the room is too noisy for a patient reading and/or if a patient is speaking when the exam is being recorded, to enable a message to be shown or given to the patient to be silent and/or that there is too much noise to perform the examination.
• each channel is processed substantially in parallel or simultaneously to enable noise reduction/cancellation techniques at the medical device.
• the medical device is configured to upload or send patient datasets to a remote web server, directly from an internet-connected app running either on the device or on an intermediary device
• each channel is processed substantially in parallel or simultaneously to enable noise reduction/cancellation techniques at the intermediary device
• where the intermediary device is a laptop or PC, then the patient speech and the audio dataset generated by the medical device are treated in a way that satisfies the standard browser security model of allowing for multiple audio sources to be used at any given time
• where the intermediary device is a smartphone or smartwatch, then the patient speech and also the audio dataset generated by the medical device are treated in a way that satisfies the standard smartphone or smartwatch model of allowing for multiple audio sources to be used at any given time only if they are integrated into a single app.
• the medical device is a single, unitary device and the speech microphone and the second microphone are integrated into that single, unitary device.
• the medical device comprises two physically separate or separable units, and the speech microphone and the second microphone are integrated into different separate or separable units.
Feature 4: Healthcare Applet: a High Level Healthcare Programming Environment
The Medaica system is able to generate advice or instructions on when to perform specific healthcare management protocols, such as when specific bodily sounds or functions should be measured. In previous examples, we described how a low-cost stethoscope could be connected to a patient's smartphone, which could in turn send audio etc. recordings to a remote server. In those earlier scenarios, the patient is taken to be manually placing the stethoscope at positions on his or her body that the patient hopes are correct. In the Medaica system, the patient can be guided, by an application running on the smartphone, to position the device at different positions and to then create a recording from each of those positions. For example, the application could provide voice instructions to the patient, such as 'First, place your stethoscope over the heart and press record'. The application could display a graphic indicating on an image of a body where to place the stethoscope. Once that recording has been made, the application could provide another spoken instruction such as 'Now, move the stethoscope down 5 cm'; again a graphic could be shown to guide the patient. The guidance could be timed, so that, for example, at two or three pre-set times each day, the patient would be guided through the steps needed to use the stethoscope in the ways dictated by a protocol set by the patient's doctor.
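The guided protocol described above can be sketched as a data-driven step list that the applet walks through. The step contents, field names and callback signatures are illustrative assumptions only:

```python
# Each step: what to say to the patient, where to place the device,
# and how long to record. A doctor-set protocol would supply this list.
HEART_PROTOCOL = [
    {"say": "Place the stethoscope over the heart and press record.",
     "position": "heart", "seconds": 30},
    {"say": "Now move the stethoscope down 5 cm and record again.",
     "position": "lower chest", "seconds": 30},
]

def run_protocol(protocol, speak, record):
    """Walk the patient through each step: announce the instruction
    (e.g. via text-to-speech), then capture a timed recording."""
    recordings = []
    for step in protocol:
        speak(step["say"])
        recordings.append(record(step["position"], step["seconds"]))
    return recordings
```

The same structure lends itself to scheduling: the app could invoke `run_protocol` at the pre-set times each day and upload the returned recordings as patient datasets.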
We can generalize to:
A telemedicine system comprising multiple medical devices that are each configured to generate patient datasets, and a remote web server; in which: a medical device is configured to upload or send patient datasets to the remote web server, directly from an internet-connected app running either on the device or on an intermediary device; and in which the remote web server hosts or enables access to an applet that, when run on the internet-connected app, provides instructions or guides to the patient to perform specific healthcare management protocols.
Optional features:
• the applet guides the patient to take specific tests with a specific frequency
• the applet sends reminders to the patient as well as updates to the patient's healthcare provider(s) and/or insurer or other parties with appropriate permissions.
• the applet guides the patient to use a digital stethoscope in a specific position, for a specific duration and frequency.
• the applet guides the patient to use a digital stethoscope in a specific position.
• the applet provides instructions or guides with a diagram, animation or video.
• the applet sends patient datasets to the healthcare professional
• the applet monitors compliance with the instructions or guides it provides to the patient
• the applet is a Patient Release Protocol that provides instructions or guides to the patient to perform specific healthcare management protocols relevant to their release from hospital
• the applet integrates patient datasets generated in response to the applet into the healthcare records of the relevant patient.
• the applet provides a protocol for clinical trials.
• the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links.
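The unique web-link mechanism in the last bullet can be sketched as follows: the server mints an unguessable token per patient dataset, and resolving the link returns that dataset for review. This is a hedged illustration; the URL scheme, function names and in-memory store are assumptions for illustration only.

```python
import secrets
from typing import Dict, Optional

_links: Dict[str, str] = {}  # token -> patient dataset id (illustrative in-memory store)

def create_web_link(dataset_id: str, base_url: str = "https://example.invalid/review") -> str:
    """Mint a unique, hard-to-guess web-link associated with one patient dataset."""
    token = secrets.token_urlsafe(16)  # 128 bits of randomness
    _links[token] = dataset_id
    return f"{base_url}/{token}"

def resolve_web_link(url: str) -> Optional[str]:
    """Return the dataset id a web-link points at, or None for an unknown link."""
    return _links.get(url.rsplit("/", 1)[-1])
```

A healthcare professional selecting such a link from a browser or telemedicine application would be routed, via the token, to the specific uploaded dataset.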
Feature 5: Virtual Healthcare Exam Systems and Methods
The Medaica system enables healthcare professionals to directly conduct remote examination using a virtual examination room hosted on a remote web server. For example, extending the use cases described above, the doctor can open a virtual examination video room, invite the patient to join, and conduct a virtual examination by asking the patient to move the stethoscope to specific areas and select 'record'; the audio recording can be streamed to the remote server, and added to the resources available to the doctor in the virtual examination room so that the doctor can listen to the recording in real-time. The doctor can ask the patient to repeat the recording, or guide the patient to move the stethoscope to a new position, and create a new recording, which can be listened to in real-time. The doctor can edit the recording to eliminate clinically irrelevant sections and can then share a web-link that includes that edited audio file, for example with experts for a second opinion.
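The "streamed to the remote server ... listen in real-time" behaviour can be approximated by cutting the recording into short fixed-length sections that are uploaded as they are produced, so the doctor can start listening within seconds rather than waiting for the full recording. A minimal sketch over raw audio bytes; the section length is an assumed parameter.

```python
from typing import List

def make_sections(audio: bytes, section_len: int) -> List[bytes]:
    """Split an audio byte stream into fixed-size sections (the last may be shorter),
    so each section can be uploaded and played back while recording continues."""
    return [audio[i:i + section_len] for i in range(0, len(audio), section_len)]
```

Concatenating the received sections in order reconstructs the original recording exactly, which is what allows simultaneous recording and near-real-time listening.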
We can generalize to:
A telemedicine system comprising multiple medical devices that are each configured to generate patient datasets, and a remote web server connected to each medical device; in which: a medical device is configured to upload or send patient datasets to the remote web server, directly from an internet-connected app running either on the device or on an intermediary device; in which the system is configured to enable a healthcare professional and a patient to communicate via a virtual examination room, and the system is further configured to display a user interface that includes a virtual or graphical body image or body outline and one or more target positions at which a medical device is to be positioned by the patient; and the system is further configured to enable a dynamic interaction between the patient or the healthcare professional and the user interface, to enable the patient to correctly position the medical device at the target position or positions.
Optional features:
• the system is configured to overlay or integrate a real-time image of the patient with the virtual or graphical body image or body outline to enable a dynamic interaction in which the patient matches or overlaps the two images to enable the patient to position the medical device at the target position or positions.
• the system is configured to enable a dynamic interaction in which the healthcare professional alters the location of the target position or positions.
• the patient enters the virtual examination room by entering a code, such as a code provided by the healthcare professional; once both the healthcare professional and the patient are in the same virtual examination room, they can communicate by voice and/ or video.
• the system is configured to enable the code to be provided by the healthcare professional to the patient.
• when the healthcare professional and the patient are communicating via the virtual examination room, the healthcare professional can guide the patient in using the medical device in specific ways defined by the examination protocol, and the system is further configured to provide feedback on whether the patient is operating the medical device in compliance with that protocol.
• once both healthcare professional and patient are in the same virtual examination room, the patient can use their medical device to create datasets which are uploaded to the remote web server and made available automatically and substantially immediately to the healthcare professional to review and/ or record.
• the user interface is configured to show a body map or body image of a part of a patient’s body with an icon or other mark representing the medical device, in which the icon or mark is movable by a participant in a telemedicine session.
• the system is configured to enable the healthcare professional to move the icon or mark on the body map or body image and to display to the patient the moving icon or mark to enable the patient to place his/her medical device to overlay the icon or mark on the body map or body image. The icon or mark could be semi-transparent and/ or the same shape as the stethoscope head to make it easier for the patient to position the stethoscope “virtually” under the icon and over the auscultation site.
• the internet-connected app displays an augmented reality view to guide the patient to find a specific position to place the medical device.
• the medical device automatically generates a patient dataset when the medical device is positioned at or near the specific position.
• an augmented reality view is provided that includes an outline of the patient based on sensor data, and the augmented reality view is displayed to both the patient and healthcare professional at the same time.
• the internet-connected app displays an outline of a torso or other part of the body in a video feed and indicates a specific position on the torso or other body part at which the patient is to place the medical device.
• the system is configured to provide a patient self-examination mode, in which different target positions, at which the medical device is to be placed, are shown or indicated to the patient on the internet-connected app; and the system is configured to create, manually or automatically, a patient dataset or recording at each specific position.
• different target positions are sequentially displayed after each patient dataset or recording at a target position has been completed.
• some or all of the target positions are medically standard positions or are specifically chosen by the healthcare professional.
• the medical device is a stethoscope and the target positions are specific, standard auscultation positions, or, if the patient is receiving guidance from the healthcare professional, the desired auscultation positions can be moved by the healthcare professional in real time.
• data defining the target positions is recorded as part of the related patient dataset.
• the patient dataset is an audio or video file or stream.
• the patient dataset is an auscultation audio or video file or stream.
• the patient dataset is data relating to the heart, lung or any other organ.
• the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset.
• the unique web-link is configured to enable a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links and to initiate a virtual examination of the patient by opening a link to a virtual examination room hosted on the remote web server.
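The target-position interaction above can be sketched as a proximity test in normalized body-map coordinates: the healthcare professional moves the target in real time, and the device can auto-record once the patient places it within a tolerance of that target. The coordinate convention and the tolerance value are illustrative assumptions.

```python
import math

def within_target(device_xy, target_xy, tolerance=0.05):
    """True when the device icon overlaps the target position on the body map
    (positions in normalized 0..1 body-map units; tolerance is an assumed radius)."""
    return math.hypot(device_xy[0] - target_xy[0],
                      device_xy[1] - target_xy[1]) <= tolerance
```

The same test could trigger the automatic creation of a patient dataset when the device reaches a target, as in the optional features above.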
Other generally applicable optional features include:
Doctor Guided Device Icon (Simple non-AR implementation)
• An app (or web) view shows a "body map" of a patient's upper body (e.g. an outline) with an icon/ mark representing the device (e.g. a stethoscope) that is movable by a participant (usually the healthcare provider) in the telemedicine session.
• The doctor can move the icon/ mark on the body map, such that the patient sees the mark moving and the patient can then place his/her device (in the real world) to overlay the mark on the body map to position the device accurately and correctly.
Augmented reality (AR)
• the patient’s web-app displays an augmented reality view to guide the patient to find a specific position to place the digital medical device.
• an augmented reality view includes an outline of the end-user based on sensor data, such as camera or LIDAR data, and the augmented reality view is displayed to both the patient and healthcare professional at the same time.
• the digital medical device automatically generates a patient dataset when the digital medical device is positioned at or near the specific location.
Assisted Exam Interface
• Similar to the AR embodiment described above, a patient’s web-app displays an image or outline of a torso in its video feed. The patient positions him/herself into or within the torso image or outline, and is then guided to place the digital medical device at specific position(s) (such as auscultation positions where the device is a stethoscope).
• In a self-exam mode, the (e.g. auscultation) positions can be sequentially displayed to the patient after each has been recorded. Alternatively, if a specific sequence has been requested by the healthcare professional, that sequence can be displayed.
• If the patient is receiving guidance from the healthcare professional, the positions can be altered or moved by the healthcare professional in real time.
• Each position can be recorded alongside the audio file as tagged references, to further assist in diagnosis and records.
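Tagging each position alongside the audio file, and sequencing the self-exam positions, could look like the sketch below; the field names and the standard auscultation sequence used here are illustrative assumptions.

```python
# Standard auscultation sites, used only as an illustrative self-exam sequence.
STANDARD_SEQUENCE = ["aortic", "pulmonic", "tricuspid", "mitral"]

def tag_recording(dataset, position, audio_file, offset_s):
    """Append a tagged reference (position + file + time offset) to the patient dataset."""
    tag = {"position": position, "file": audio_file, "offset_s": offset_s}
    dataset.append(tag)
    return tag

def next_position(dataset):
    """In self-exam mode, the next standard position to display, or None when done."""
    done = {t["position"] for t in dataset}
    for pos in STANDARD_SEQUENCE:
        if pos not in done:
            return pos
    return None
```

The tagged references travel with the audio, so a reviewing clinician knows exactly which site each recording came from.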
Patient dataset
• the patient dataset is a file or stream.
• the patient dataset is an audio or video file or stream.
• the patient dataset is an auscultation audio or video file or stream.
• the patient dataset is data relating to the heart, lung or any other organ.
Security
• the web link is associated with one or more use restrictions.
• use restrictions include: a time period for accessing the shareable link, a predefined number of times the web link is accessible, authorized third parties, compression format, sharing rights, downloading rights, payments.
• use restrictions include enabling the decryption of the patient datasets.
• use restrictions include enabling diagnostic analysis.
• each patient dataset is encrypted before being saved at the remote web server location.
• each patient dataset is associated with a secure unique ID.
• a secure unique ID is linked to an end-user and a unique device ID.
• a secure unique ID is only identifiable by the healthcare professional.
• the web-link only provides access to the encrypted patient dataset.
• encrypted patient datasets can only be decrypted by an authorized third party.
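Two of the use restrictions listed above, an access time window and a predefined number of accesses, could be enforced per web-link roughly as sketched below (the class and field names are assumptions for illustration).

```python
import time

class RestrictedLink:
    """A web-link that stops resolving after it expires or its uses are exhausted."""

    def __init__(self, expires_at, max_uses):
        self.expires_at = expires_at  # epoch seconds after which access is refused
        self.max_uses = max_uses      # predefined number of permitted accesses
        self.uses = 0

    def try_access(self, now=None):
        """Count and permit one access, or refuse if expired or used up."""
        now = time.time() if now is None else now
        if now > self.expires_at or self.uses >= self.max_uses:
            return False
        self.uses += 1
        return True
```

Other restrictions in the list (sharing rights, downloads, payments, decryption) would hang off the same per-link record.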
Blockchain
• the method uses a blockchain server to store patient datasets.
• only an authorized third party can have access to the blockchain server.
• blockchain server stores an audit trail of all events associated with each patient dataset.
Note
It is to be understood that the above-referenced arrangements are only illustrative of the application for the principles of the present invention. Numerous modifications and alternative arrangements can be devised without departing from the spirit and scope of the present invention. While the present invention has been shown in the drawings and fully described above with particularity and detail in connection with what is presently deemed to be the most practical and preferred example(s) of the invention, it will be apparent to those of ordinary skill in the art that numerous modifications can be made without departing from the principles and concepts of the invention as set forth herein.

Claims

1. A telemedicine system including:
(a) a medical device that includes a microphone system configured (i) to detect and/ or record patient sounds, and (ii) to generate audio data from those sounds, and (iii) to send that audio data;
(b) a file handling system configured (i) to receive, download and store the audio data from the medical device as a file, and (ii) to make that file available for near-real-time listening to the patient sounds.
2. The telemedicine system of Claim 1, in which the medical device is a digital stethoscope and the patient sounds are auscultation sounds, e.g. sounds made by the heart, lungs or other organs.
3. The telemedicine system of any preceding Claim, in which the system is configured to simultaneously (a) record the file and (b) enable a healthcare professional to listen to the patient sounds in real time.
4. The telemedicine system of Claim 3, in which the delay between real time and near-real time is less than 30s.
5. The telemedicine system of Claim 3, in which the delay between real time and near-real time is less than 10s.
6. The telemedicine system of Claim 3, in which the delay between real time and near-real time is less than 5s.
7. The telemedicine system of Claim 3, in which the delay between real time and near-real time is less than 2s.
8. The telemedicine system of any preceding Claim, in which the telemedicine system is configured to enable the file to be recorded in a format suitable for clinical grade analysis, such as a lossless format.
9. The telemedicine system of any preceding Claim, in which the telemedicine system is configured to generate sections or fragments of audio data from the patient sounds and the file handling system is configured to receive, download, and store each section of audio data and to make each section available for near-real-time listening to patient sounds.
10. The telemedicine system of Claim 9, in which each section is configured to represent a pre-defined length of audio data, such as 10 seconds of audio data, or 1 second of audio data.
11. The telemedicine system of any preceding Claim, in which the system is configured to enable a healthcare professional to select from a remote location when to start listening to patient sounds in real time.
12. The telemedicine system of any preceding Claim, in which the system is configured to automatically make the file available to a healthcare provider for listening at the end of an action from the healthcare professional, such as releasing a “listen” button or selecting a “review” button.
13. The telemedicine system of any preceding Claim, in which the system is configured to store the file locally on a device that is connected to the medical device, such as a mobile device, smartwatch, smartphone, desktop, or laptop.
14. The telemedicine system of any preceding Claim, in which TCP layer protocol processing and IP layer protocol processing (TCP/IP) are used to send the file from the medical device to a web server and from the web server to a healthcare provider’s device.
15. The telemedicine system of any preceding Claim, in which the file handling system is implemented on a web server that receives audio data directly or indirectly from the medical device and is configured for recording, storing and controlling access to uploaded patient datasets that include the audio data processed by the file handling system.
16. The telemedicine system of any preceding Claim, in which the file handling system is implemented on an intermediary device that receives audio data directly or indirectly from the medical device and sends processed audio data to a web server that is configured for recording, storing and controlling access to uploaded patient datasets that include the clinically relevant audio data processed by the file handling system.
17. The telemedicine system of any preceding Claim, in which the file handling system is implemented on a computer operated by a healthcare professional.
18. The telemedicine system of any preceding Claim, in which the file handling system is implemented as a store and forward system.
19. The telemedicine system of any preceding Claim, in which the medical device includes (i) a speech microphone configured to detect and/ or record patient speech and (ii) a second microphone configured to detect and/ or record patient sounds and generate audio data from those sounds.
20. The telemedicine system of any of Claim 16-19, in which a speech microphone is configured to enable real-time voice communication from the patient to the healthcare professional at the same time as the clinically relevant audio data is being provided to the healthcare professional via the file handling system to enable the healthcare professional to listen to the clinically relevant audio data in near real-time or at a later time.
21. The telemedicine system of any of Claim 16-19, in which the system is configured to enable the healthcare professional to select whether to listen to real-time voice communication from the patient or to listen to the clinically relevant audio data sent via the file handling system, by muting, fully or partly, either the real-time voice communication or the audio data.
22. The telemedicine system of any preceding Claim, in which a speech microphone uses one channel of a stereo channel pair, and a second microphone uses the other channel.
23. The telemedicine system of any preceding Claim, in which the system is configured to use a speech microphone to determine unwanted noise or noise that otherwise affects the quality of the audio data and to generate a warning if the unwanted noise exceeds a threshold.
24. The telemedicine system of any of Claim 16-23, in which the speech and clinically relevant audio data are each delivered as an audio signal over a stereo channel and a web app separates the audio signal into two separate mono feeds or channels and processes each differently.
25. The telemedicine system of Claim 24, in which the clinically relevant audio data channel has a gain control to increase the strength of the signal.
26. The telemedicine system of any of Claim 16-25, in which filters are applied to the speech sounds and also the clinically relevant audio data, after these sounds have been recorded, maintaining a raw audio file or files.
27. The telemedicine system of any of Claim 20-26, in which a healthcare professional and/ or patient each have control of muting the speech channel and the clinically relevant audio data channel separately if they want to only hear one or the other channel.
28. The telemedicine system of any of Claim 16-27, in which a speech microphone is used to capture audio that is used to reduce or remove sounds that are not relevant to the clinically relevant audio data.
29. The telemedicine system of any preceding Claim in which, a speech microphone output is used to determine if the room a patient is in is too noisy for a patient reading and/ or if a patient is speaking when the exam is being recorded, to enable a message to be shown or given to the patient to be silent and/or that there is too much noise to perform the examination.
30. The telemedicine system of any of Claim 19-29, in which the speech channel and the other channel are processed asynchronously, with the other channel being sent via the file handling system.
31. The telemedicine system of any of Claim 19-30, in which each channel is processed to enable noise reduction/cancellation techniques.
32. The telemedicine system of any of Claim 19-31, in which the noise reduction/cancellation techniques involve measuring the timing/phasing of noise detected by the speech microphone compared with the same noise detected by the auscultation microphone.
33. The telemedicine system of any of Claim 19-32, in which each channel is processed to enable compensating for different timing in receiving auscultation sounds in patients with different body masses.
34. The telemedicine system of any of Claim 19-33, in which each channel is processed to enable noise reduction/cancellation techniques at an intermediary device.
35. The telemedicine system of any of Claim 19-34, in which the telemedicine system is configured to use the speech microphone to determine unwanted noise or noise that otherwise affects the quality of the audio data and to generate a warning if the unwanted noise exceeds a threshold.
36. The telemedicine system of any of Claim 16-35, in which the clinically relevant audio data is processed to improve the quality of the audio from a clinical or diagnostic perspective.
37. The telemedicine system of any of Claim 16-36, in which the clinically relevant audio data is processed at the file handling system to improve the quality of the audio from a clinical or diagnostic perspective.
38. The telemedicine system of any of Claim 16-37, in which the clinically relevant audio data channel has a gain control to increase the strength of the signal.
39. The telemedicine system of any of Claim 19-38, in which the medical device is a single, unitary device and the speech microphone and the second microphone are integrated into that single, unitary device.
40. The telemedicine system of any of Claim 19-39, in which the medical device comprises two physically separate or separable units, and the speech microphone and the second microphone are integrated into different separate or separable units.
41. The telemedicine system of any preceding Claim, in which the medical device is configured to upload or send patient datasets to a remote web server, from an internet-connected app running either on the device or on an intermediary device.
42. The telemedicine system of Claim 41, in which the remote web server posts or makes available webpages that include patient datasets and can be viewed on any web-enabled device, such as the patient’s laptop or mobile phone, or the healthcare professional’s laptop or mobile phone.
43. The telemedicine system of any of Claim 41-42, in which the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links.
44. The telemedicine system of Claim 43, in which the medical device is configured to upload or send patient datasets to a remote web server, directly from an internet-connected app running either on the medical device or on an intermediary device; the remote web server is configured to generate a unique web-link that is associated with a specific patient dataset that has been processed by the file handling system; and in which the unique web-link enables a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links.
45. The telemedicine system of any of Claim 43-44, in which the unique web-link is configured to enable a healthcare professional to review the specific patient dataset by selecting the web-link from within a web browser or from within any dedicated telemedicine application that opens web-links and to initiate a virtual examination of the patient by opening a link to a virtual examination room hosted on the remote web server.
46. The telemedicine system of any of Claim 43-45, in which the web link is configured to be copied and pasted by the user into a telemedicine session, email message, text message or any other communications system.
47. The telemedicine system of any of Claim 43-46, in which the web link is configured to be sent automatically to the healthcare professional.
48. The telemedicine system of any of Claim 43-47, in which the web link is configured to be sent automatically to the healthcare professional only after the user has confirmed it should be sent by interacting with a web page posted by the web server.
49. The telemedicine system of any of Claim 43-48, in which the web link, when selected by the healthcare professional on their web-enabled device, takes the healthcare professional directly to the patient dataset stored on the web server, to enable the healthcare professional to review that patient dataset.
50. The telemedicine system of any of Claim 43-49, in which the web link is configured to be used to control access rights and privacy controls.
51. The telemedicine system of any of Claim 43-50, in which the web link is configured to be used to control additional healthcare services such as diagnostic analysis and verification.
52. The telemedicine system of any of Claim 43-51, in which the web link contains rules permitting third party access rights, sharing/ viewing rules and financial controls.
53. The telemedicine system of any of Claim 43-52, in which the web link is a HTML hyperlink.
54. The telemedicine system of any of Claim 43-53, in which the web link when selected opens a video conferencing application.
55. The telemedicine system of any of Claim 43-54, in which the web link when selected opens a video conferencing application that is integrated within a telemedicine session.
56. The telemedicine system of any of Claim 43-55, in which the web link provides access to the patient dataset to an authorized third party only when the authorized third party has been authenticated by the system and/ or patient and/ or healthcare provider.
57. The telemedicine system of any preceding Claim, in which an authorized third party accesses the patient dataset in real time as it is being created.
58. The telemedicine system of any preceding Claim, in which the system is configured to enable an authorized third party to start or stop the creation of a patient dataset by at least one of the medical devices.
59. The telemedicine system of any preceding Claim, in which the system is configured to enable an authorized third party to record the patient dataset.
60. The telemedicine system of any preceding Claim, in which the intermediary device sends audio data to the web server that is configured for recording, storing and controlling access to uploaded patient datasets that include the clinically relevant audio data processed by the file handling system.
61. The telemedicine system of any preceding Claim, in which the medical device is connected or sends data to an intermediary device, such as a laptop or PC, and an internet-connected app running on the intermediary device treats the patient speech and the audio data generated by the medical device in a way that satisfies the standard browser security model of allowing for multiple audio sources to be used at any given time.
62. The telemedicine system of any preceding Claim, in which the medical device is connected or sends data to a portable intermediary device such as a smartphone or smartwatch, then an internet-connected app running on the portable intermediary device processes both the patient speech and also the audio data generated by the medical device in a way that satisfies the standard smartphone or smartwatch model of allowing for multiple audio sources to be used at any given time only if they are integrated into a single app.
63. The telemedicine system of any preceding Claim, in which the intermediary device is a smartphone, laptop or any other computing device that is configured to connect to the medical device and the remote file handling system.
64. The telemedicine system of any preceding Claim, in which the medical device connects to the intermediary device using a data cable, such as a USB cable.
65. The telemedicine system of any preceding Claim, in which the medical device connects to the intermediary device over short-range wireless, such as Bluetooth.
66. The telemedicine system of any preceding Claim, in which the digital stethoscope comprises a first audio sensor that is configured to pick up speech from the patient or sounds from the patient environment and a second audio sensor that is configured to measure or sense clinically relevant body sounds.
67. The telemedicine system of any preceding Claim, in which the medical device is any digital medical device that can generate patient data and send that data, directly or via an intermediary device, to a remote web server.
68. The telemedicine system of any preceding Claim, in which the medical device is one of the following: digital stethoscope, ultrasound, blood pressure monitoring device or any other digital monitoring devices.
69. The telemedicine system of any preceding Claim, in which the medical device is a smart device that is configured to monitor vital signs and other patient parameters for anomalies or events and to automatically send an alert to the remote web-server if an anomaly or event is detected, together with a patient dataset that captures the anomaly or event, and generate a unique web-link that is associated with that patient dataset and to send that unique web-link to a healthcare professional or emergency service.
72. The telemedicine system of any preceding Claim, in which the medical device connects to the intermediary device running the web app either directly over a USB port or wirelessly, for example via Bluetooth.
71. The telemedicine system of Claim 69 or Claim 70, in which the anomaly or event includes an altered breathing rate or cough.
72. The telemedicine system of any preceding Claim, in which the medical device connects to the intermediary device running the web app over a USB port either directly or wirelessly for example via Bluetooth.
73. The telemedicine system of any preceding Claim, in which the file handling system is implemented in a cloud-based telemedicine server.
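The two-channel arrangement of Claims 22-25 and 29 can be sketched as follows: interleaved stereo samples are split into a speech feed and an auscultation feed, gain is applied to the auscultation feed, and the speech feed's RMS level is checked against a noise threshold to warn the patient. This is a simplified sketch; the sample layout, gain and threshold values are illustrative assumptions.

```python
import math

def split_stereo(samples):
    """Interleaved [L, R, L, R, ...] samples -> (speech, auscultation) mono feeds."""
    return samples[0::2], samples[1::2]

def apply_gain(feed, gain):
    """Boost the (typically quiet) auscultation feed by a gain factor."""
    return [s * gain for s in feed]

def too_noisy(speech_feed, threshold_rms):
    """True when the speech microphone's RMS level suggests the room is too noisy
    (or the patient is speaking) for a clean auscultation recording."""
    rms = math.sqrt(sum(s * s for s in speech_feed) / len(speech_feed))
    return rms > threshold_rms
```

In a browser, the equivalent split would typically be done on the decoded stereo stream before each mono feed is filtered, recorded and muted independently.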
PCT/US2023/012098 2022-02-01 2023-02-01 Telemedicine system Ceased WO2023150153A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263305482P 2022-02-01 2022-02-01
US63/305,482 2022-02-01

Publications (1)

Publication Number Publication Date
WO2023150153A1 true WO2023150153A1 (en) 2023-08-10

Family

ID=87552830



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110190665A1 (en) * 2010-02-01 2011-08-04 3M Innovative Properties Company Electronic stethoscope system for telemedicine applications
US20160015359A1 (en) * 2014-06-30 2016-01-21 The Johns Hopkins University Lung sound denoising stethoscope, algorithm, and related methods
WO2022051269A1 (en) * 2020-09-01 2022-03-10 Medaica Inc. Telemedicine system



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23750133

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 23750133

Country of ref document: EP

Kind code of ref document: A1