GB2631305A - Neurostimulation device positioning method and system - Google Patents
Neurostimulation device positioning method and system
- Publication number
- GB2631305A GB2631305A GB2309666.2A GB202309666A GB2631305A GB 2631305 A GB2631305 A GB 2631305A GB 202309666 A GB202309666 A GB 202309666A GB 2631305 A GB2631305 A GB 2631305A
- Authority
- GB
- United Kingdom
- Prior art keywords
- neurostimulation device
- subject
- target
- body part
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/70—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N1/00—Electrotherapy; Circuits therefor
- A61N1/18—Applying electric currents by contact electrodes
- A61N1/32—Applying electric currents by contact electrodes alternating or intermittent currents
- A61N1/36—Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
- A61N1/36014—External stimulators, e.g. with patch electrodes
- A61N1/3603—Control systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N2/00—Magnetotherapy
- A61N2/004—Magnetotherapy specially adapted for a specific therapy
- A61N2/006—Magnetotherapy specially adapted for a specific therapy for magnetic stimulation of nerve tissue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N2/00—Magnetotherapy
- A61N2/02—Magnetotherapy using magnetic fields produced by coils, including single turn loops or electromagnets
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Animal Behavior & Ethology (AREA)
- Radiology & Medical Imaging (AREA)
- Neurology (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Surgery (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Child & Adolescent Psychology (AREA)
- Developmental Disabilities (AREA)
- Hospice & Palliative Care (AREA)
- Psychiatry (AREA)
- Psychology (AREA)
- Social Psychology (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Image Processing (AREA)
Abstract
A system for determining a target neurostimulation device position, comprising one or more cameras 108 to capture one or more images of a subject's body part, and a neurostimulation device 102. A computer 112 is arranged to receive the images 120 from the cameras, and determine, using computer vision, a 3D target neurostimulation device position 118 located relative to the subject's body part. The target position comprises a 3D translational position metric indicating an area of operation of the neurostimulation device. Also disclosed is a method for determining a target position of a plurality of target positions, comprising obtaining, for each training subject, a neurostimulation device position and an image dataset of a body part of the subject. Each dataset is formatted and provided to a network architecture including a neural network path and a scorer. The images are scored, and the scores are compared to the target device positions to train the network architecture by adjusting its weighting.
Description
NEUROSTIMULATION DEVICE POSITIONING METHOD AND SYSTEM
Field of the Disclosure
The present disclosure relates to a system and method for determining an optimal position for a neurostimulation device and particularly relates to making use of computer vision for optimally positioning a neurostimulation device.
Background to the Disclosure
Neuromonitoring and neurostimulation are limited by tools and methods to determine physical locations either of biological reference points on a subject, or of relevant signals in a tissue region.
Contemporary forms of neurostimulation typically require numerous extensive clinical sessions, the availability of which is determined by the availability of a small number of highly experienced and skilled practitioners. Such sessions are often lengthy and invasive, and typically begin with the precise determination of a target site for application of neurostimulation, which differs according to an individual subject's anatomy. Such determinations depend on the skilled but subjective judgement of the practitioner, so treatment can often lack consistency of quality, and existing methods have so far been unable to address this lack of consistency.
Procedures additionally make extensive use of marker technology which can be unwieldy and uncomfortable, and additionally continues to rely on the proficiency of the practitioners placing the markers.
It is therefore desirable to provide a neurostimulation technique which overcomes these drawbacks, and in particular which provides a simple, minimally invasive procedure with improved accuracy, precision and consistency which does not require use by skilled personnel.
Summary of the Disclosure
The present disclosure is directed to a neurostimulation or neuromonitoring device positioning system and method including receiving images of a body part of a subject from one or more cameras and determining, by processing the one or more images using computer vision, a target location in space on or proximate the body part for positioning the neurostimulation or neuromonitoring device. The target position preferably includes information relating to both a 3D translational state of the neurostimulation or neuromonitoring device, together with a 3D rotational state of the neurostimulation or neuromonitoring device, based on properties of the user's head read by the computer vision. The computer vision preferably infers said properties based on visual light images of the user's body part alone in a markerless fashion. In most preferable embodiments, a trained machine learning architecture is used to determine the target neurostimulation or neuromonitoring device position, such that a simple, consistent and minimally invasive neurostimulation or neuromonitoring approach is provided which requires minimal to no pre-treatment calibration and minimises the skills required from a clinician.
Additionally, the present disclosure intends to eliminate or reduce any source of error in alignment which may be caused by a movement or otherwise of the subject, unrelated to the actions of the clinician.
As such, the present system may in some embodiments permit the imaging of a body part of a subject for determining a target neurostimulation or neuromonitoring device position for output to a user. The user may, using the output, place the neurostimulation or neuromonitoring device at the target position, without the need for the extensive, and sometimes invasive, pre-positioning checks common to current methods. The target neurostimulation or neuromonitoring device position may be associated with a known neurostimulation or neuromonitoring device position associated with a particular desired neurostimulation or neuromonitoring protocol, but tailored to a subject's particular body part dimensions. A personalised and consistent approach is thereby provided.
The present disclosure will now be discussed in relation to a neurostimulation device, but embodiments will be appreciated wherein the term "neurostimulation device" may be substituted with "neuromonitoring device".
In accordance with a first aspect of the present disclosure, there is provided a system for determining a target neurostimulation device position, the system comprising: one or more cameras arranged to capture one or more images of a body part of a subject; a neurostimulation device arranged to provide a neurostimulation to the body part of the subject; and a computer comprising a processor and a computer-readable medium arranged to: receive the one or more images from the one or more cameras; determine, by the processor using computer vision to process the one or more images, a three-dimensional (3D) target neurostimulation device position located relative to the body part of the subject; wherein the target neurostimulation device position comprises a 3D translational position metric indicating an area of operation of the neurostimulation device located relative to the body part of the subject.
It will be understood by the skilled addressee that the term "subject" used herein may refer to a human subject or an animal subject.
The term "3D translational position metric" will be understood by the skilled addressee as representing at least one point in three-dimensional physical or virtual space, and for example may be associated with at least an (x, y, z) coordinate in said space. The 3D translation position metric may additionally, or instead, indicate a plurality of points in said space, such as to define a region or area in said space. The 3D translational position metric may be determined in any suitable manner, for example in accordance with a predetermined reference position or origin position in said space, the 3D translational position metric being determined relative to the predetermined reference position or origin position.
The one or more cameras may, in some embodiments, be preferably arranged to capture one or more first images of the body part of the subject, each of the one or more first images captured of the body part positioned at a first respective angle relative to the one or more cameras; and wherein the one or more cameras are further arranged to subsequently capture one or more second images of the body part of the subject, each of the one or more second images captured of the body part positioned at a second respective angle relative to the one or more cameras; wherein the first respective angle is different to the second respective angle. For example, in some such embodiments, successive sets of images may be obtained using each of the one or more cameras, each successive set of images being obtained from a different angle relative to the body part. Such successive sets of images may, for example, be obtained by repositioning or rotating the one or more cameras about the body part, and/or by rotating or repositioning the body part relative to the one or more cameras.
The processor may be arranged to, using the computer vision to process the one or more images, determine a virtual 3D model of the body part. In such embodiments, the 3D translational position metric may define a point in virtual space relative to said model. In some embodiments, the one or more images may be provided as an input to a trained machine learning architecture for the determination of the target neurostimulation device position. The trained machine learning architecture may be trained using images of a said body part.
Obtaining successive sets of images of the body part obtained at more than one angle relative thereto using the one or more cameras may in some embodiments aid the determination of the virtual 3D model.
In some embodiments, the processor may be arranged to receive or obtain a known neurostimulation device position, for example one associated with a target neurostimulation protocol. The processor may, in such embodiments, be arranged to determine, using computer vision to process the one or more images and the known neurostimulation device position, the 3D target neurostimulation device position. In such embodiments, the determination of the target neurostimulation device position is preferably guided in accordance with a position known to be efficacious for a desired neurostimulation protocol. For example, in embodiments wherein the body part is the head of the subject, the known neurostimulation device position may be associated with a location from the International 10-20 System, or determined in accordance with clinical data defining a desired location in the subject's brain. In embodiments wherein the processor is arranged to determine a 3D model of the body part, the processor may be further arranged to determine, relative to the 3D model and using the known neurostimulation device position, the target neurostimulation device position. In embodiments wherein the processor is arranged to provide the one or more images as input to a trained machine learning architecture, the processor may further provide the known neurostimulation device position as input to the trained machine learning architecture.
The target neurostimulation device position may, in some embodiments, be a first target neurostimulation device position, the first target neurostimulation device position identifying a motor threshold spot on the subject. The processor may, subsequent to placement of the neurostimulation device at the first target neurostimulation device position, determine a second target neurostimulation device position, the second target neurostimulation device position identifying a treatment location. The determining of the second target neurostimulation device position may in some embodiments be associated with a predefined or known sequence of steps for determining a neurostimulation treatment location, for example the 5.5 cm rule.
In some embodiments, the target neurostimulation device position preferably further comprises a 3D rotational position metric indicating an orientation of the neurostimulation device (or the relevant part thereof) relative to the body part. The term "3D rotational position metric" will be understood by the skilled addressee as representing a three-dimensional rotational state (for example representing each of a pitch, yaw and roll state) of the neurostimulation device (or the relevant part thereof). Combining the 3D translational position metric and the 3D rotational position metric preferably provides a target neurostimulation device position which maximises efficacy for a desired neurostimulation protocol.
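As a non-authoritative sketch of how the two metrics might be combined, the snippet below builds a single 6-DoF target pose from an (x, y, z) translation and a roll/pitch/yaw triple; the Euler convention chosen is an assumption, since the disclosure does not fix one.

```python
# Sketch only: combining the 3D translational and 3D rotational position
# metrics into one homogeneous target pose. The 'xyz' Euler order (roll,
# pitch, yaw) is an assumption.
import numpy as np
from scipy.spatial.transform import Rotation


def target_pose_matrix(translation_xyz, roll_pitch_yaw_deg):
    """Return a 4x4 homogeneous transform describing the target device pose."""
    pose = np.eye(4)
    pose[:3, :3] = Rotation.from_euler("xyz", roll_pitch_yaw_deg,
                                       degrees=True).as_matrix()
    pose[:3, 3] = translation_xyz
    return pose


# Example: a point 7 cm above the reference origin with the device pitched 15 degrees.
T = target_pose_matrix([0.0, 0.0, 0.07], [0.0, 15.0, 0.0])
```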
In some embodiments, the neurostimulation device may comprise a position sensor, a motion sensor or an accelerometer, which may be arranged to determine and output a 3D translational position and/or 3D rotational position of the neurostimulation device. Such a sensor may therefore comprise, for example a 3DoF sensor or a 6DoF sensor. The processor may in such embodiments be further arranged to receive the outputted 3D translational position and/or 3D rotational position of the neurostimulation device, and may be further arranged to determine, using the outputted 3D translational position and/or 3D rotational position, that the neurostimulation device is at the target neurostimulation device position.
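One way such a sensor reading might be compared against the target is sketched below, under the assumption of quaternion-valued orientations and purely illustrative tolerances.

```python
# Hedged sketch: decide whether a sensed device pose matches the target pose.
# Tolerances and the quaternion representation are illustrative assumptions.
import numpy as np
from scipy.spatial.transform import Rotation


def at_target(sensed_xyz, sensed_quat, target_xyz, target_quat,
              pos_tol_m=0.005, ang_tol_deg=5.0):
    pos_err = np.linalg.norm(np.asarray(sensed_xyz) - np.asarray(target_xyz))
    # Angle of the relative rotation between sensed and target orientations.
    relative = Rotation.from_quat(target_quat) * Rotation.from_quat(sensed_quat).inv()
    ang_err = np.degrees(relative.magnitude())
    return pos_err <= pos_tol_m and ang_err <= ang_tol_deg
```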
Upon determining that the neurostimulation device is at the target neurostimulation device position, the processor may be further arranged to output, for example to a display of the computer, a signal indicating that the neurostimulation device is at the target neurostimulation device position. The present disclosure in such embodiments may therefore provide for complete guidance of the neurostimulation device to the target position, preferably maximising accuracy and precision of placement, and therefore maximising efficacy of a desired neurostimulation protocol.
In place of, or in addition to, the 3D rotational position metric, the processor may be arranged to receive a signal indicating contact of the neurostimulation device (or said relevant part thereof) with the body part of the subject. In such embodiments, the neurostimulation device may comprise a contact sensor arranged to detect contact of the neurostimulation device, or a desired part thereof, with the body part of the subject. The neurostimulation device may be further arranged to output the signal indicating said contact, for receipt and processing by the processor. In such embodiments, combining the 3D translational position metric with the contact signal (and optionally the 3D rotational position metric) preferably improves accuracy and precision of placement of the neurostimulation device for a desired neurostimulation protocol.
The processor may be arranged to receive data defining the body part; and further arranged to determine, based on the data defining the body part, the known neurostimulation device position. In some embodiments, the data defining the body part may be clinical data indicating or modelling internal anatomy of the body part. For example, in embodiments wherein the body part is the head of the subject, the data defining the body part may include magnetic resonance imaging (MRI) data and/or positron emission tomography (PET) data indicating a location of the dorsolateral prefrontal cortex (DLPFC) of the subject. Any suitable such data may be used depending on the body part for receiving neurostimulation, and means of obtaining such data will be appreciated by the skilled addressee. Such embodiments preferably combine the use of external dimensional information defining the body part obtained using the one or more images, with internal dimensional information defining the body part obtained using the data defining the body part. Such a combination preferably improves the accuracy and precision of the target neurostimulation device position, thereby improving the efficacy obtainable for neurostimulation provided to the area by the neurostimulation device.
The processor may be further arranged to, following said determination of the target neurostimulation device position, output the target neurostimulation device position, for example to a display of the computer for display to a user.
The target neurostimulation device position may be determined in virtual space, for example relative to a 3D model of the body part as described herein, or in real space, for example relative to a determined 3D position of the body part using, for example, depth-sensing information. In some embodiments, the reference position or origin position used to determine the 3D translational position metric may be determined using any suitable method, and for example may be determined by the processor upon receipt of the one or more images, or upon receipt of one or more reference images obtained in the presence or absence of the subject. Such reference images may be captured including a predefined reference marker positioned at a known position, for example relative to the one or more cameras. Said reference marker at the known position may be used to determine the reference position or origin position by the processor, for example prior to determining the 3D target neurostimulation device position. Other markerless embodiments may make use of environment mapping, such as using an infra-red time-of-flight (TOF) depth sensor, in order to obtain spatial information defining the environment surrounding the neurostimulation device. In such embodiments, the 3D translational position metric may be determined relative to the obtained environment mapping. The system may in such a way be either arranged to perform a calibration step prior to determining the 3D neurostimulation device position, or provide real-time relative spatial information for the purposes of determining the target neurostimulation device position.
The 3D translational position metric therefore indicates a location in space for positioning a neurostimulation device, or a part thereof (such as a relevant part intended for providing neurostimulation to the body part), and may be relative to a known reference or origin location, or relative to known positions within a mapped environment determined using environment mapping. As noted herein, the 3D translational position metric may additionally, or instead, indicate a plurality of points in said space, such as to define an area. For example, the area may be a suggested area of operation for a particular neurostimulation protocol, and may include a plurality of "points" within said area to be achieved in sequence. Said area may take any suitable shape depending on the subject or intended neurostimulation protocol.
In some embodiments, the area of operation is preferably one selected from: a location in space of a portion of the neurostimulation device; a point of contact between the body part of the subject and the neurostimulation device; an area of contact between the body part of the subject and the neurostimulation device. In some embodiments, at the area of operation, the neurostimulation device is preferably positioned at a predefined non-zero distance away from the body part. The operation of the neurostimulation device may therefore require contact between the neurostimulation device (or a relevant therapeutic part thereof) and the body part in some embodiments, and may in other embodiments require only proximity of the neurostimulation device (or a relevant therapeutic part thereof) to the body part. In contactless embodiments, a space may be provided between the neurostimulation device and the subject, which may facilitate the provision of air flow, improved comfort, noise isolation, temperature or heating protection or any combination of these potential benefits.
In some embodiments, the system preferably comprises at least two said cameras, each of the at least two cameras positioned to capture respective said images at different angles. The capturing of the images at different angles preferably improves the speed and accuracy of operation of the computer vision, for example for determining a 3D model of the body part or for use in a trained machine learning architecture. In some embodiments, the system preferably comprises a first said camera and a second said camera; wherein the first camera is positioned to capture a first said image at a first angle relative to the body part of the subject; and wherein the second camera is positioned to capture a second said image at a second angle relative to the body part of the subject. In some embodiments, the first and second cameras may be arranged to capture a first set of images comprising the first image and the second image. In such embodiments, the first and second cameras are preferably further arranged to subsequently capture a second set of images comprising a third and fourth image, wherein the first camera is positioned to capture the third image at a third angle relative to the body part of the subject; and wherein the second camera is positioned to capture the fourth image at a fourth angle relative to the body part of the subject. As such the first and second cameras may each be arranged to capture successive sets of images of the body part, each set of images captured at a different angle relative thereto. Such successive sets of images may, for example, be captured by repositioning or rotating each of the first and second cameras about the body part, and/or by rotating or repositioning the body part relative to the first and second cameras. In most preferable embodiments, when capturing the first and second sets of images, the first and second cameras are positioned at a known angle relative to one another. In some embodiments, the known angle remains the same for capturing both the first and second sets of images.
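Where two calibrated cameras view the body part from different known angles, a feature seen in both images can be triangulated into a 3D point. The following is only a sketch using OpenCV; it assumes the 3x4 projection matrices P1 and P2 are already known from a prior calibration, and the names are illustrative.

```python
# Illustrative two-camera triangulation of a body-part feature seen by both
# cameras. Assumes pre-computed projection matrices; not taken from the disclosure.
import numpy as np
import cv2


def triangulate_feature(P1, P2, pixel_cam1, pixel_cam2):
    """Recover a 3D point from the same feature observed in both images."""
    pts1 = np.asarray(pixel_cam1, dtype=float).reshape(2, 1)
    pts2 = np.asarray(pixel_cam2, dtype=float).reshape(2, 1)
    homogeneous = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4x1 homogeneous point
    return (homogeneous[:3] / homogeneous[3]).ravel()        # (x, y, z)
```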
In some preferable embodiments, each said camera is preferably positioned at a non-zero distance relative to one another, said distance preferably between 3 and 24 inches, and more preferably between 6 and 12 inches. While a greater distance between said cameras provides a preferable ratio between number of cameras and accuracy of the computer vision, thereby optimising resource usage and performance, a distance greater than the ranges described, and particularly greater than 12 inches, may hinder usability based on the space available for the system. Each said camera is preferably positioned at a non-zero distance from the body part of the subject, said distance preferably between 1 and 5 feet, and most preferably approximately 3 feet. While greater proximity to the subject is preferable for maximising accuracy, this reduces subject comfort.
In preferable embodiments wherein the body part is a head, the desired neurostimulation protocol may require neurostimulation of a front region of the subject's head or a rear region of the subject's head. In embodiments requiring neurostimulation of a front region of the subject's head, the one or more images preferably comprise at least one feature of the face of the subject (for example, one or both eyes, one or both ears, nose, mouth). Such features preferably improve accuracy of the computer vision. The one or more images may preferably comprise a first image depicting at least a portion of a first temple of the subject, and a second image depicting at least a portion of the second temple of the subject. In embodiments requiring neurostimulation of a rear region of the subject's head, the one or more images preferably additionally, or instead, comprise at least one feature located at the rear of the subject's head. Features of the face and temple region of the head are preferably beneficial for computer vision whether for neurostimulation of the front or rear of the subject's head.
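Purely as an example of markerless detection of such facial features, an off-the-shelf landmark detector could be applied to each visible light image; MediaPipe FaceMesh is used below only for illustration and is not named in the disclosure.

```python
# Sketch, assuming the MediaPipe FaceMesh solution is available; the disclosure
# does not specify a particular landmark detector.
import cv2
import mediapipe as mp


def detect_face_landmarks(bgr_image):
    """Return (x, y, z) facial landmarks normalised to the image dimensions."""
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1) as face_mesh:
        results = face_mesh.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return []
    return [(lm.x, lm.y, lm.z)
            for lm in results.multi_face_landmarks[0].landmark]
```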
In some embodiments, at least one of the one or more images is preferably a visible light image. The use of visible light images in some embodiments preferably reduces the barrier to uptake by minimising hardware requirements, particularly since visible light cameras and camera sensors can in some cases be more ubiquitously obtainable when compared with other forms of camera technology. In some embodiments, each of the one or more images is preferably a visible light image. In some embodiments, the one or more images comprise a mix of visible light images and infra-red images. Infra-red images may additionally be used in environment mapping or depth sensing for determining a location of the body part and/or the neurostimulation device in real space.
In some preferable embodiments, each of the visible light images comprises features of the body part without additional tracking technology such as fiducial markers (e.g. Stag and ArUco fiducials). A markerless system and method are preferred for the present disclosure, but embodiments will be appreciated wherein the computer vision makes use of fiducial markers for determination of the target neurostimulation device position. In some embodiments wherein the one or more images may comprise at least a portion of the neurostimulation device, the neurostimulation device may comprise one or more surface features arranged to be detected by the one or more processors as indicating the presence and/or translational position and/or rotational position of the neurostimulation device. The one or more surface features may include, for example, any suitable fiducial markers (e.g. Stag and ArUco fiducials), or may include a distinct pattern or shape specific to said portion of the neurostimulation device, which may be formed by, for example, injection molding. In such embodiments, the neurostimulation device may comprise a plurality of surface positions located thereon, each surface position in the plurality of surface positions comprising a different said surface feature. The system may comprise a surface feature mapping comprising a spatial mapping of each said surface feature to a corresponding surface position, for access by the processor. Said mapping may, in such embodiments, be accessed and used by the processor to determine the three-dimensional (3D) target neurostimulation device position and/or a current three-dimensional (3D) neurostimulation device position.
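For the optional fiducial-based variant, a sketch of detecting ArUco markers on the device housing and looking them up in a surface feature mapping is given below. The id-to-position table is a hypothetical example, and the OpenCV interface shown is the 4.7+ ArucoDetector API, which differs in older versions.

```python
# Sketch only: detect ArUco fiducials on the device and map marker ids to
# named surface positions. The mapping values are hypothetical.
import cv2

SURFACE_FEATURE_MAP = {0: "coil_centre", 1: "handle_base", 2: "housing_left"}  # assumed


def detect_device_markers(gray_image):
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _rejected = detector.detectMarkers(gray_image)
    if ids is None:
        return {}
    return {SURFACE_FEATURE_MAP.get(int(marker_id), f"marker_{int(marker_id)}"): c
            for marker_id, c in zip(ids.ravel(), corners)}
```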
In some embodiments, the neurostimulation device preferably comprises a position and/or orientation sensor, wherein the position sensor is arranged to detect the 3D translational position of the neurostimulation device, and wherein the orientation sensor is arranged to detect the 3D rotational position of the neurostimulation device. The neurostimulation device in such embodiments is preferably arranged to output the 3D translational position and/or the 3D rotational position to the computer. In such embodiments the processor is preferably arranged to determine, using the received 3D translational position and/or the 3D rotational position, that the neurostimulation device is at the target neurostimulation device position. The processor may be further arranged to, upon said determination, output an alert to a user indicating that the neurostimulation device is at the target neurostimulation device position. The processor may additionally, or instead, be arranged to, upon said determination, configure the neurostimulation device for providing a neurostimulation protocol.
In some embodiments, the neurostimulation device is preferably one selected from: a transcranial magnetic stimulation device; a transcranial electric stimulation device; a peripheral magnetic stimulation device; a peripheral electric stimulation device; a low intensity focused ultrasound stimulation device; an infrared optical stimulation device. The neurostimulation device may additionally comprise a focused ultrasound device. The type of neurostimulation device will be selected in accordance with the intended form of neurostimulation to be applied to the subject. The body part may therefore be any body part suitable for receiving neurostimulation. In preferable embodiments, the body part is a head, and in such embodiments, the neurostimulation device is preferably a transcranial magnetic stimulation device or a transcranial electric stimulation device, and most preferably a transcranial magnetic stimulation device.
In accordance with a second aspect of the present disclosure, there is provided a computer-implemented method for determining a target neurostimulation device position, the method comprising: receiving one or more images in a computer comprising a processor and a computer-readable medium, the one or more images depicting a body part of a subject; and determining, by the processor using computer vision to process the one or more images, a three-dimensional (3D) target neurostimulation device position located relative to the body part of the subject; wherein the target neurostimulation device position comprises a 3D translational position metric indicating an area of operation of the neurostimulation device located relative to the body part of the subject.
It will be appreciated that the method may be performed by a system in accordance with the first aspect, and any features described as being suitable for incorporating into the system of the first aspect are therefore intended as being suitable for use in a method of the second aspect.
In some embodiments, the target neurostimulation device position preferably further comprises a 3D rotational position metric indicating an orientation of the neurostimulation device relative to the body part.
In some embodiments the area of operation is preferably one selected from: a location in space of a portion of the neurostimulation device; a point of contact between the body part of the subject and the neurostimulation device; an area of contact between the body part of the subject and the neurostimulation device.
In some embodiments, at the area of operation, the neurostimulation device is preferably positioned at a predefined non-zero distance away from the body part.
In some embodiments, each of the one or more images preferably depicts the body part of the subject imaged from a different angle relative thereto. In some embodiments, the step of receiving the one or more images preferably comprises: receiving a first set of images of the body part, each image of the first set of images captured at a first respective angle relative to the body part; and subsequently receiving a second set of images of the body part, each image of the second set of images captured at a second respective angle relative to the body part; wherein the first respective angle is different to the second respective angle. For example, in some such embodiments, successive sets of images may be captured using each of the one or more cameras, each successive set of images being obtained from a different angle relative to the body part. Such successive sets of images may, for example, be obtained by repositioning or rotating the one or more cameras about the body part, and/or by rotating or repositioning the body part relative to the one or more cameras.
The body part will be appreciated as any body part suitable for receiving neurostimulation, and in most preferable embodiments is a head of the subject. In some embodiments wherein the body part is a head of the subject, the one or more images preferably comprise a first image depicting at least a portion of a first temple of the subject, and a second image depicting at least a portion of a second temple of the subject. Embodiments will be appreciated wherein one or more of the images may include any suitable markers for determining a head or a head position, for example one or more temples of the subject; one or more pre-auricular regions of the subject; and/or a nasion of the subject.
In some embodiments, one or more of said images preferably depicts at least a portion of the neurostimulation device. In such embodiments, the processor may be arranged to determine, using the one or more images, a current neurostimulation device position. The processor may be further arranged to, using said determination, further determine whether the current neurostimulation device position and the target neurostimulation device position are the same. In such embodiments, the processor may be further arranged to, based on said further determination, provide an output indicating that the neurostimulation device is at the target neurostimulation device position.
In some embodiments, the processor may be arranged to determine a degree of error between the current neurostimulation device position and the target neurostimulation device position, which may comprise one or more values indicating a relative difference between the current neurostimulation device position and the target neurostimulation device position, or between corresponding components thereof. The processor may be arranged to determine a confidence value associated with the target neurostimulation device position, the current neurostimulation device position, or the degree of error, the confidence value indicating a statistical confidence in the respective associated parameter.
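A minimal sketch of such a degree of error and an accompanying confidence value follows; the per-axis decomposition and the reprojection-based confidence heuristic are assumptions made for illustration only.

```python
# Illustrative only: per-axis error between current and target positions, and a
# simple 0..1 confidence heuristic derived from reprojection residuals.
import numpy as np


def degree_of_error(current_xyz, target_xyz):
    delta = np.asarray(target_xyz, dtype=float) - np.asarray(current_xyz, dtype=float)
    return {"dx": float(delta[0]), "dy": float(delta[1]), "dz": float(delta[2]),
            "euclidean": float(np.linalg.norm(delta))}


def confidence_from_reprojection(residuals_px, scale_px=2.0):
    """Map the mean landmark reprojection residual (pixels) to a 0..1 score."""
    return float(np.exp(-np.mean(np.abs(residuals_px)) / scale_px))
```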
In some embodiments, at least one of the one or more images is preferably a visible light image. In some embodiments, each of the one or more images is preferably a visible light image. In some embodiments, the one or more images comprise a mix of visible light images and infra-red images. It will be appreciated that while visible light images are preferable for the present invention due to simplicity of hardware required, in some instances infra-red imaging may be less susceptible to occlusion of the body part. Thus for the purposes of determining the target neurostimulation device position, infra-red imaging in combination with visible light imaging may provide a more precise determination using fewer images obtained at fewer angles relative to the body part.
In some embodiments, the method preferably further comprises: positioning the neurostimulation device at the target neurostimulation device position. In some embodiments, the neurostimulation device may comprise a positioning system arranged to position the neurostimulation device at the target neurostimulation device position. In such embodiments, such positioning may be instructed by the processor following determination of the target neurostimulation device position.
In some embodiments, the method preferably further comprises: determining that the neurostimulation device is at the target neurostimulation device position. Such a determination may be performed using the computer vision, or upon receipt of a signal from a position sensor, movement sensor or accelerometer of the neurostimulation device as described herein.
In some embodiments, the method preferably further comprises either: i. receiving, by the processor, a target neurostimulation protocol, or ii. determining, by the processor using computer vision to process the one or more images, a target neurostimulation protocol.
In some embodiments, the method preferably further comprises: after determining that the neurostimulation device is at the target neurostimulation device position, configuring, by the processor, the neurostimulation device for applying the target neurostimulation protocol.
In some embodiments, the method preferably further comprises: instructing, by the processor, one or more cameras to obtain the one or more images. In some embodiments, the one or more cameras comprise image sensors arranged to obtain visible light images. In some embodiments, the one or more cameras comprises at least two cameras.
In some embodiments, the method preferably further comprises: storing the one or more images on the computer readable medium. In some embodiments, the method preferably comprises receiving by the processor from the computer readable storage medium, a known neurostimulation device position. The processor may, in such embodiments, determine, using computer vision to process the one or more images and using the known neurostimulation device position, the three-dimensional (3D) target neurostimulation device position located relative to the body part of the subject. The known neurostimulation device position may be as described herein.
In some embodiments, the method preferably further comprises: outputting, by the processor to a display of the computer, the target neurostimulation device position.
In some embodiments, the computer vision comprises a machine learning architecture trained on a plurality of images, each said image depicting a said body part of a subject.
In accordance with a third aspect of the present disclosure, there is provided a computer program product stored on a processing device, the computer program product comprising instructions which when executed by the processing device, are arranged to perform a method in accordance with the second aspect. It will further be appreciated that the method may be performed using a system in accordance with the first aspect.
In accordance with a fourth aspect of the present disclosure, there is provided a computer-implemented method for determining a target neurostimulation device position of a plurality of target neurostimulation device positions for a test subject, the method comprising: obtaining, for each respective training subject in a plurality of training subjects: (i) a target neurostimulation device position and (ii) an image dataset that includes one or more images of a body part of the subject, thereby obtaining a plurality of image data sets; formatting each image dataset into a corresponding formatted image data set, thereby creating a plurality of formatted image data sets; providing the plurality of formatted image data sets to a network architecture that includes at least (i) a first convolutional neural network path comprising a first plurality of layers including at least a first convolutional layer associated with at least a first filter comprising a first set of filter weights and (ii) a scorer; obtaining a plurality of scores from the scorer, wherein each score in the plurality of scores corresponds to an input of one of the formatted image data sets in the plurality of formatted image data sets into the network architecture; using a comparison of respective scores in the plurality of scores to the corresponding target neurostimulation device position of the corresponding training subject in the plurality of training subjects to adjust at least the first set of filter weights, thereby training the network architecture to determine a target neurostimulation device position in the plurality of target neurostimulation device positions.
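A hedged sketch of the kind of network the fourth aspect describes is given below: a convolutional path feeding a scorer whose output is compared against the known target device position to adjust the filter weights. Layer sizes, the six-value pose encoding, the loss and the optimiser are all illustrative assumptions rather than details of the disclosure.

```python
# Sketch under stated assumptions: (i) a convolutional path and (ii) a scorer,
# trained by comparing scores to target device positions (here a 6-value pose).
import torch
import torch.nn as nn


class PositioningNet(nn.Module):
    def __init__(self, pose_dim: int = 6):  # x, y, z, roll, pitch, yaw (assumed)
        super().__init__()
        self.conv_path = nn.Sequential(      # (i) convolutional neural network path
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.scorer = nn.Linear(32, pose_dim)  # (ii) scorer

    def forward(self, images):                 # images: (batch, 3, H, W)
        features = self.conv_path(images).flatten(1)
        return self.scorer(features)           # score = predicted device position


def train_step(model, optimiser, formatted_images, target_positions):
    """One update: compare scores to target positions and adjust filter weights."""
    optimiser.zero_grad()
    scores = model(formatted_images)
    loss = nn.functional.mse_loss(scores, target_positions)
    loss.backward()
    optimiser.step()
    return loss.item()
```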
In some embodiments, the method preferably further comprises: using the trained network architecture to determine the target neurostimulation device position of the plurality of target neurostimulation device positions for the test subject, using one or more images of a said body part of the test subject as input to the trained network architecture.
In accordance with a fifth aspect of the present disclosure, there is provided a computer-implemented method for determining a target neurostimulation device position of a plurality of target neurostimulation device positions for a subject, the method comprising: obtaining one or more images of a body part of the subject; formatting each image dataset into a corresponding formatted image data set; providing the formatted image data set to a network architecture that includes at least (i) a first convolutional neural network path comprising a first plurality of layers including at least a first convolutional layer associated with at least a first filter comprising a first set of filter weights and (ii) a scorer; obtaining a first score from the scorer, wherein the score corresponds to an input of one of the formatted image data sets into the network architecture; and using the first score to determine the target neurostimulation device position of the plurality of target neurostimulation device positions for the subject.
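Continuing the same sketch, inference for this aspect might simply pass the formatted images of the subject through the trained network and read the score back as the target device position; how the score is interpreted is again an assumption.

```python
# Sketch: using the trained network sketched above to score images of a new subject.
import torch


@torch.no_grad()
def predict_target_position(trained_model, formatted_images):
    trained_model.eval()
    score = trained_model(formatted_images)   # (1, 6) pose estimate
    return score.squeeze(0).tolist()          # [x, y, z, roll, pitch, yaw]
```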
In accordance with a sixth aspect of the present disclosure, there is provided a computer-implemented method for determining a target neurostimulation device position for a subject, the method comprising: obtaining one or more images of a body part of the subject; determining, from the one or more images, a three-dimensional (3D) model of the body part; receiving a known neurostimulation device position of a plurality of known neurostimulation device positions; and using the 3D model and the known neurostimulation device position to determine the target neurostimulation device position for the subject.
It will be appreciated that any features described herein as being suitable for incorporation into one or more aspects or embodiments of the present disclosure are intended to be generalizable across any and all aspects and embodiments of the present disclosure. In particular, any mention of the term "neurostimulation" or "neurostimulation device" is intended to be interchangeable with the term "neuromonitoring" or "neuromonitoring device" respectively. Other aspects of the present disclosure can be understood by those skilled in the art in light of the description, the claims, and the drawings of the present disclosure. The foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the claims.
Detailed Description
Specific embodiments will now be described by way of example only, and with reference to the accompanying drawings, in which:
FIG. 1 provides a perspective view of an example embodiment of a system in accordance with the first aspect of the present disclosure, wherein the body part is a head of a subject;
FIG. 2 provides a plan view of a head of a subject depicting known neurostimulation locations in accordance with the International 10-20 System;
FIG. 3 provides a perspective view of an example neurostimulation device for use in a system according to the first aspect, the example neurostimulation device taking the form of a transcranial magnetic stimulation (TMS) device;
FIG. 4 provides a 3D perspective view of a head of the subject of FIG. 1, depicting the determination of the target neurostimulation device position, which in the example shown comprises a plurality of 3D translational position metrics to define an area of operation, and a 3D rotational position metric;
FIG. 5 provides an alternative perspective view to that shown in FIG. 4, which shows a 3D translational position metric defining a single point of operation, and a 3D rotational position metric;
FIG. 6 depicts a flow chart detailing steps of an example embodiment of a method in accordance with the second aspect;
FIG. 7 depicts a flow chart detailing steps of an example embodiment of a method in accordance with the fourth aspect;
FIG. 8 depicts a flow chart detailing steps of an example embodiment of a method in accordance with the fifth aspect; and
FIG. 9 depicts a flow chart detailing steps of an example embodiment of a method in accordance with the sixth aspect.
Referring to FIG. 1, a perspective view of an example embodiment of a system 100 in accordance with the first aspect of the present disclosure is shown. The particular embodiment 100 shown comprises a neurostimulation device 102, which in this case is a transcranial magnetic stimulation (TMS) device 102. The TMS device 102 comprises a therapeutic portion (shown in FIG. 1) in communication with a power supply (not shown) arranged to drive the TMS device 102 to provide neurostimulation to the head 104 of a subject 106. The system 100 further comprises two cameras 108 (which in this case are visible light cameras) positioned to capture images of the head 104 at different angles to one another. In particular, each of the two cameras 108 is positioned to capture an image comprising a respective temple region 110 of the head 104 of the subject 106. The system 100 further comprises a computer 112 comprising a processor 114 and a computer-readable storage medium 116, the computer 112 being arranged to receive the images 120 from each of the cameras 108 and store these in the computer readable storage medium 116. The computer readable storage medium 116 comprises computer vision instructions 122 stored therein which, when executed by the processor 114, cause the processor 114 to determine, using the computer vision to process the images 120, a three-dimensional (3D) target neurostimulation device position 118 located relative to the head 104 of the subject 106. The target neurostimulation device position 118 comprises a 3D translational position metric indicating an area of operation 120 of the TMS device 102 located relative to the head 104 of the subject 106.
In use, a neurostimulation protocol is intended for provision to the subject 106. In contemporary neurostimulation procedures, such as TMS for example, an electrical coil may be placed near or adjacent a subject's head, traditionally directly outside an area on the patient's brain intended to receive electromagnetic stimulation. The alignment of the coil to the intended location is important both translationally (in a three dimensional orthonormal coordinate system, such as a Euclidean x, y, z space) and rotationally (such as in a three rotational axis system, for instance in a roll, pitch and yaw sense). Alignment of the coil of the TMS device, as with other similar neurostimulation techniques, is conventionally afforded by one of two largely manual mechanisms.
With reference to the first manual approach, this approach relies on numerous direct physical measurements of a subject's head. As an example, in a common TMS treatment protocol to address depression, a subject may be seated in a chair and wear a temporary cap. The cap may be carefully aligned to the patient's head (making and manually recording a series of measurements, for instance, between the perimeter of the cap and biomarkers such as the tops of the ears, the nose, or a bone prominence on the back of the head referred to as the "inion" 202). A marking system may subsequently also be used by making measurements from such biomarkers to an intended known location for neurostimulation, such as those of the International 10-20 System 200 shown in FIG. 2. For instance, the C3 location (as seen in FIG. 2) may be found by measuring approximately 60% of the curved path distance along the surface of the scalp from the left preauricular point, in front of the ear, up to the top of the head, or "vertex". A brief series of individual TMS pulses at and subsequently around this point may be used to determine both the location of the "motor threshold" hotspot and the power level appropriate for driving a neuronal response from the subject for subsequent treatment of a neurological condition such as depression. For some protocols, said optimum location may be marked after sequentially attempting individual pulses around the C3 location to find a patient-specific location slightly away from C3 that most effectively targets the abductor pollicis brevis (APB) nerve center in the cortex. In some major depressive disorder (MDD) treatment protocols, after determining such a motor threshold "hotspot", another location may be marked directly forward a specified distance (possibly 5 or 5.5 cm) along a sagittal plane. Subsequent repetitive treatment of TMS pulses may be provided at this "area of operation". In both cases (in finding the hotspot and in the area of operation), a TMS clinician has used a manual process of seeing biomarkers, and then seeing, measuring and marking a location for a coil to be situated. In both cases, a TMS clinician sees and moves a coil into the required location.
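For orientation, the two manual rules just described reduce to simple arithmetic; the sketch below uses example numbers only and is illustrative rather than a clinical procedure.

```python
# Worked example of the manual rules described above (illustrative values only).
def c3_arc_distance_cm(preauricular_to_vertex_cm: float, fraction: float = 0.6) -> float:
    """C3 lies roughly 60% of the scalp arc from the left preauricular point to the vertex."""
    return fraction * preauricular_to_vertex_cm


def treatment_site_offset_cm(rule_cm: float = 5.5) -> float:
    """Forward offset along the sagittal plane from the motor hotspot ('5.5 cm rule')."""
    return rule_cm


# Example: for an 18 cm preauricular-to-vertex arc, C3 sits about 10.8 cm along it,
# and the treatment site is marked 5.5 cm forward of the motor hotspot.
print(c3_arc_distance_cm(18.0), treatment_site_offset_cm())
```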
Such a process requires multiple manual steps and relies extensively upon a clinician's skill and subjective judgment which may thus limit precision and reproducibility as any clinician may be prone to error.
A second approach attempts to improve the ease of finding the motor hotspot by providing a real-time graphical interface to the clinician indicating the location of the head and the location of the neurostimulation device during said locating. This approach makes use of physical infra-red reflectors located on a bracket worn by the user, the reflectors providing locating feedback to the interface regarding the location of the subject's head. Since the infra-red reflective bracket is merely applied to an unspecified location on the subject's head, the specific location and orientation of the subject's head must still be calibrated separately.
Therefore, a subsequent series of steps is required in which a tracking wand is used, the wand taking the form of a second bracket with known geometry containing multiple IR retroreflective spheres. The bracket comprises a pointed end that can be used by a clinician to touch a subject's biological features, for instance in the case of TMS the two pre-auricular points in front of each ear (see 204 in FIG. 2) and the nasion 206 near the patient's nose. Upon touching the wand to each designated location, the graphical user interface is updated to represent the user's head. A similar retrofit bracket may be used for the TMS device to provide similar locating functionality. A process of finding the motor hotspot may then occur as before, this time with an easier process of finding the initial C3 location, and an easier mechanism of "recording" the motor threshold. A process of moving from the motor hotspot to the area of operation then occurs in a similar manner.
This second process has advantages over the first, more manual, process because it automates some of the steps and provides visual feedback to allow clinicians to track positions. However, it adds additional cost and complexity, and still leaves a risk that the clinician misidentifies the initial locations of the pre-auricular points and nasion with the wand; the precision of the alignment therefore remains clinician-dependent in a way that has not been eliminated.
The present system as exemplified by the example 100 in FIG. 1 provides an improvement over these manual methods by reducing resource cost and complexity, while improving accuracy, precision and overall consistency in treatment and deskilling the process of providing a neurostimulation protocol. In the particular example described in relation to FIG. 1, two visible light cameras are used to capture visible light images comprising regions of the subject's head, said images being processed using computer vision to provide a target neurostimulation device position. In the case of TMS, the target neurostimulation device position may be any position known to have efficacy for a particular protocol, such as those of the established International 10-20 System shown in FIG. 2. Additionally, the target neurostimulation device position may be an area occupying more than one of the positions shown (such as in a case with multiple stimulators) or a region not identified with a common 10-20 point, and may be driven for example by MRI or PET data identifying a location of a specific target region of a subject's brain. In most preferable embodiments the target neurostimulation device position comprises not only a 3D translational position metric indicating a point in space relative to the subject's body part for applying the neurostimulation protocol, but also a 3D rotational position metric indicating, for example, a pitch, yaw and roll state of the neurostimulation device in order to maximise precision.
Referring to FIG. 3, an example neurostimulation device 300 is shown, which takes the form of a TMS device 300 suitable for use with the system of FIG. 1. The TMS device 300 comprises electromagnetic coils 302 positioned to apply a transcranial electromagnetic stimulation 304 to an appropriate portion of a subject's scalp. In this drawing, a coil resembling a "figure of eight" with two nominally circular windings is depicted, but it should be understood that the depicted stimulator may be replaced with any suitable stimulator design.
As shown in FIG. 4, using the system of FIG. 1 a particular 3D target neurostimulation device position may be determined making use of computer-vision related processing of the images captured by the cameras. In the particular example shown, the images are provided as input to a trained neural network architecture which provides, as an output, a 3D translational position metric which defines an area 402 in Euclidean space (associated with an (x, y, z) coordinate) for positioning the neurostimulation device 404, together with a 3D rotational position metric 406 describing an intended pitch, yaw and roll state of the device 404 at the area 402 on the subject's head 400. Embodiments will be appreciated wherein the images are provided together with MRI/PET data identifying a target location of the subject's brain, as input to the trained neural network architecture.
FIG. 5 depicts a process corresponding to that shown in FIG. 4, in which the images are provided, together with a known target location (identified as C3 from the International 10-20 System), as input to a trained neural network architecture which provides, as an output, a 3D translational position metric which defines a precise point 502 in Euclidean space (associated with an (x, y, z) coordinate) for positioning the neurostimulation device 504, together with a 3D rotational position metric 506 describing an intended pitch, yaw and roll state of the device 504 at the point 502.
It will be appreciated that any suitable computer vision technique may be used.
Referring to FIG. 6 a flow chart is shown detailing steps of an example embodiment of a method 600 in accordance with the second aspect, the method comprising: receiving one or more images in a computer comprising a processor and a computer-readable medium, the one or more images depicting a body part of a subject 602; and determining, by the processor using computer vision to process the one or more images, a three-dimensional (3D) target neurostimulation device position located relative to the body part of the subject; wherein the target neurostimulation device position comprises a 3D translational position metric indicating an area of operation of the neurostimulation device located relative to the body part of the subject 604.
It will be appreciated that the computer vision may be used to determine the target neurostimulation device position in any suitable way, for example using a trained machine learning architecture as described herein and in relation to FIGS. 4 and 5. The computer vision in most preferable embodiments requires only visible light images of the subject's body part, captured in a markerless fashion. Embodiments will of course be appreciated wherein suitable markers are used to augment the computer vision determination of the target neurostimulation device position, for example in an initial calibration procedure in the absence of or in the presence of the subject.
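To make the markerless image-to-position flow of method 600 concrete, the following sketch (reusing the illustrative PoseRegressionNet and TargetDevicePosition sketched above, and assuming the two camera images are already formatted as (3, H, W) tensors) shows how a processor might combine the two views and read out a single target device pose:

```python
def determine_target_position(model: PoseRegressionNet,
                              left_image: torch.Tensor,
                              right_image: torch.Tensor) -> TargetDevicePosition:
    """Markerless estimate of the target device pose from two visible-light views."""
    stacked = torch.cat([left_image, right_image], dim=0).unsqueeze(0)  # (1, 6, H, W)
    with torch.no_grad():
        translation, rotation = model(stacked)
    x, y, z = translation.squeeze(0).tolist()
    pitch, yaw, roll = rotation.squeeze(0).tolist()
    return TargetDevicePosition(x, y, z, pitch, yaw, roll)
```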
Referring to FIG. 7 a flow chart is shown detailing steps of an example embodiment of a method 700 in accordance with the fourth aspect, the method comprising: obtaining, for each respective training subject in a plurality of training subjects: (i) a target neurostimulation device position and (ii) an image dataset that includes one or more images of a body part of the subject, thereby obtaining a plurality of image data sets 702; formatting each image dataset into a corresponding formatted image data set, thereby creating a plurality of formatted image data sets 704; providing the plurality of formatted image data sets to a network architecture that includes at least (i) a first convolutional neural network path comprising a first plurality of layers including at least a first convolutional layer associated with at least a first filter comprising a first set of filter weights and (ii) a scorer 706; obtaining a plurality of scores from the scorer, wherein each score in the plurality of scores corresponds to an input of one of the formatted image data sets in the plurality of formatted image data sets into the network architecture 708; using a comparison of respective scores in the plurality of scores to the corresponding target neurostimulation device position of the corresponding training subject in the plurality of training subjects to adjust at least the first set of filter weights, thereby training the network architecture to determine a target neurostimulation device position, in the plurality of target neurostimulation device positions 710.
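A minimal training loop consistent with the steps of method 700 might look as follows. Here the network output is treated as the "score", mean-squared error is assumed as the comparison against each training subject's known target device position, and Adam is assumed as the optimiser that adjusts the filter weights; none of these specific choices is mandated by the method:

```python
import torch
import torch.nn as nn

def train_pose_network(model: nn.Module, formatted_batches, epochs: int = 10, lr: float = 1e-4):
    """Illustrative training loop: compare scorer outputs with known target
    device positions and adjust the convolutional filter weights accordingly.
    `formatted_batches` is assumed to yield (images, target_pose) tensor pairs."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for _ in range(epochs):
        for images, target_pose in formatted_batches:
            translation, rotation = model(images)              # scorer output
            score = torch.cat([translation, rotation], dim=1)  # (N, 6) "score"
            loss = criterion(score, target_pose)               # comparison step
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return model
```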
The method 700 shown in FIG. 7 represents a method of training a neural network architecture for determining a target neurostimulation device position when taking only images as input. Embodiments will be appreciated wherein the input images are accompanied by a known neurostimulation device position as herein described.
Referring to FIG. 8, a flow chart is shown detailing steps of an example embodiment of a method 800 in accordance with the fifth aspect, the method comprising: obtaining one or more images of a body part of the subject 802; formatting each image dataset into a corresponding formatted image data set 804; providing the image data set to a network architecture that includes at least (i) a first convolutional neural network path comprising a first plurality of layers including at least a first convolutional layer associated with at least a first filter comprising a first set of filter weights and (ii) a scorer 806; obtaining a first score from the scorer, wherein the score corresponds to an input of one of the formatted image data sets in the plurality of formatted image data sets into the network architecture 808; and using the first score to determine the target neurostimulation device position of the plurality of target neurostimulation device positions for the subject 810.
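Where the plurality of target neurostimulation device positions corresponds to a discrete set of candidate sites (for example a subset of 10-20 locations), the first score of method 800 could simply be used to select the highest-scoring candidate. The classification-style scorer, the site list and the function name below are illustrative assumptions only:

```python
import torch

CANDIDATE_SITES = ["F3", "F4", "C3", "C4", "Cz", "P3", "P4"]  # illustrative subset

def select_target_site(scorer: torch.nn.Module, formatted_images: torch.Tensor) -> str:
    """Pick the candidate device position whose score is highest for this subject.
    `scorer` is assumed to output one score per candidate site, shape (1, n_sites)."""
    with torch.no_grad():
        scores = scorer(formatted_images)
    return CANDIDATE_SITES[int(scores.argmax(dim=1))]
```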
Referring to FIG. 9, a flow chart is shown detailing steps of an example embodiment of a method 900 in accordance with the sixth aspect, the method comprising: obtaining one or more images of a body part of the subject; determining, from the one or more images, a three-dimensional (3D) model of the body part 902; receiving a known neurostimulation device position of a plurality of known neurostimulation device positions 904; and using the 3D model and the known neurostimulation device position to determine the target neurostimulation device position for the subject 906.
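One deliberately simplified way to realise method 900 is sketched below: a known 10-20 label is mapped onto a scalp point cloud reconstructed from the images, by interpolating between the cranial landmarks and snapping to the nearest reconstructed surface point. The fractional coordinates, the linear interpolation and the NumPy representation are illustrative assumptions; the true 10-20 system measures fractional distances along the scalp arcs rather than along straight lines:

```python
import numpy as np

# Hypothetical fractional positions along the nasion-inion (front-back) and
# left-right pre-auricular directions; values are illustrative only.
TEN_TWENTY_FRACTIONS = {"Cz": (0.5, 0.5), "C3": (0.5, 0.3), "C4": (0.5, 0.7)}

def locate_on_head_model(scalp_points: np.ndarray,
                         nasion: np.ndarray, inion: np.ndarray,
                         left_pa: np.ndarray, right_pa: np.ndarray,
                         label: str) -> np.ndarray:
    """Map a known 10-20 label onto an (N, 3) scalp point cloud whose four
    cranial landmarks have already been identified by the vision stage."""
    front_back, left_right = TEN_TWENTY_FRACTIONS[label]
    # Crude provisional estimate: average of the two landmark interpolations.
    provisional = (nasion * (1 - front_back) + inion * front_back
                   + left_pa * (1 - left_right) + right_pa * left_right) / 2.0
    # Snap the estimate to the nearest point on the reconstructed scalp surface.
    distances = np.linalg.norm(scalp_points - provisional, axis=1)
    return scalp_points[np.argmin(distances)]
```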
It will be appreciated that the above described embodiments are given as examples only and that alternatives are also considered within the scope of the disclosure. For example, the embodiments described relate to neurostimulation of a head of a subject, specifically transcranial magnetic stimulation. Embodiments will, however, be appreciated using any suitable form of neurostimulation, and the body part may be any suitable body part.
Additionally, embodiments will be appreciated utilising "neuromonitoring" instead of "neurostimulation". The above-described embodiments are provided by way of example only and are not intended to limit the broader disclosure as set forth herein and in the appended claims. Additionally, in the presently described example embodiments, a first camera images a first temple of the subject, and a second camera images a second temple of the subject. In many embodiments the two cameras preferably have largely overlapping fields of view. For example, the first and second cameras may both be centered on the middle of the forehead while being positioned at different angles relative thereto. The first camera may therefore see a little more of the left of the head than the second camera, and conversely the second camera may see a little more of the right of the head than the first camera.
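One benefit of largely overlapping fields of view is that any facial landmark visible in both images can be triangulated into 3D once the cameras have been calibrated. The OpenCV-based sketch below illustrates the principle; the function and variable names are assumptions, and calibration (giving the two 3x4 projection matrices) is presumed to have been performed in advance:

```python
import cv2
import numpy as np

def triangulate_landmark(P_left: np.ndarray, P_right: np.ndarray,
                         uv_left: np.ndarray, uv_right: np.ndarray) -> np.ndarray:
    """Recover the 3D position of a landmark seen by both calibrated cameras.
    P_left/P_right are 3x4 projection matrices; uv_left/uv_right are the (2, 1)
    pixel coordinates of the same landmark in the left and right images."""
    homogeneous = cv2.triangulatePoints(P_left, P_right, uv_left, uv_right)  # (4, 1)
    return (homogeneous[:3] / homogeneous[3]).ravel()  # Euclidean (x, y, z)
```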
Claims (25)
- CLAIMS
- 1. A system for determining a target neurostimulation device position, the system comprising: one or more cameras arranged to capture one or more images of a body part of a subject; a neurostimulation device arranged to provide a neurostimulation to the body part of the subject; and a computer comprising a processor and a computer-readable medium arranged to: receive the one or more images from the one or more cameras; determine, by the processor using computer vision to process the one or more images, a three-dimensional (3D) target neurostimulation device position located relative to the body part of the subject; wherein the target neurostimulation device position comprises a 3D translational position metric indicating an area of operation of the neurostimulation device located relative to the body part of the subject.
- 2. A system as claimed in claim 1, wherein the target neurostimulation device position further comprises a 3D rotational position metric indicating an orientation of the neurostimulation device relative to the body part.
- 3. A system as claimed in claim 1 or claim 2, wherein the body part is a head.
- 4. A system as claimed in claim 1, claim 2 or claim 3, comprising a first said camera and a second said camera; wherein the first camera is positioned to capture a first said image at a first angle relative to the body part of the subject; and wherein the second camera is positioned to capture a second said image at a second angle relative to the body part of the subject.
- 5. A system as claimed in any one of the preceding claims, wherein each of the one or more images is a visible light image.
- 6. A system as claimed in any one of the preceding claims, wherein the neurostimulation device is one selected from: a transcranial magnetic stimulation device; a transcranial electric stimulation device; a peripheral magnetic stimulation device; a peripheral electric stimulation device; a low intensity focused ultrasound stimulation device; an infrared optical stimulation device.
- 7. A computer-implemented method for determining a target neurostimulation device position, the method comprising: receiving one or more images in a computer comprising a processor and a computer-readable medium, the one or more images depicting a body part of a subject; and determining, by the processor using computer vision to process the one or more images, a three-dimensional (3D) target neurostimulation device position located relative to the body part of the subject; wherein the target neurostimulation device position comprises a 3D translational position metric indicating an area of operation of the neurostimulation device located relative to the body part of the subject.
- 8. A method as claimed in claim 7, wherein the target neurostimulation device position further comprises a 3D rotational position metric indicating an orientation of the neurostimulation device relative to the body part.
- 9. A method as claimed in claim 7 or claim 8, wherein the body part is a head.
- 10. A method as claimed in claim 9, wherein the one or more images comprise a first image depicting at least a portion of a first temple of the subject, and a second image depicting at least a portion of a second temple of the subject.
- 11. A method as claimed in any one of claims 7 to 10, wherein each of the one or more images is a visible light image.
- 12. A method as claimed in any one of claims 7 to 11, wherein the method further comprises: positioning the neurostimulation device at the target neurostimulation device position.
- 13. A method as claimed in any one of claims 7 to 12, wherein the method further comprises: determining that the neurostimulation device is at the target neurostimulation device position.
- 14. A method as claimed in claim 13, wherein the method further comprises either: i. receiving, by the processor, a target neurostimulation protocol, or ii. determining, by the processor using computer vision to process the one or more images, a target neurostimulation protocol.
- 15. A method as claimed in claim 14, wherein the method further comprises: after determining that the neurostimulation device is at the target neurostimulation device position, configuring, by the processor, the neurostimulation device for applying the target neurostimulation protocol.
- 16. A method as claimed in any one of claims 7 to 15, wherein the method further comprises: instructing, by the processor, one or more cameras to obtain the one or more images.
- 17. A method as claimed in claim 16, wherein the one or more cameras comprise image sensors arranged to obtain visible light images.
- 18. A method as claimed in claim 16 or claim 17, wherein the one or more cameras comprises at least two cameras.
- 19. A method as claimed in any one of claims 7 to 18, wherein the method further comprises: storing the one or more images on the computer readable medium.
- 20. A method as claimed in any one of claims 7 to 19, wherein the method further comprises: outputting, by the processor to a display of the computer, the target neurostimulation device position.
- 21. A method as claimed in any one of claims 7 to 20, wherein the computer vision comprises a machine learning architecture trained on a plurality of images, each said image depicting a said body part of a subject.
- 22. A computer program product stored on a processing device, the computer program product comprising instructions which when executed by the processing device, are arranged to perform a method of any one of claims 7 to 21.
- 23. A computer-implemented method for determining a target neurostimulation device position of a plurality of target neurostimulation device positions for a test subject, the method comprising: obtaining, for each respective training subject in a plurality of training subjects: (i) a target neurostimulation device position and (ii) an image dataset that includes one or more images of a body part of the subject, thereby obtaining a plurality of image data sets; formatting each image dataset into a corresponding formatted image data set, thereby creating a plurality of formatted image data sets; providing the plurality of formatted image data sets to a network architecture that includes at least (i) a first convolutional neural network path comprising a first plurality of layers including at least a first convolutional layer associated with at least a first filter comprising a first set of filter weights and (ii) a scorer; obtaining a plurality of scores from the scorer, wherein each score in the plurality of scores corresponds to an input of one of the formatted image data sets in the plurality of formatted image data sets into the network architecture; using a comparison of respective scores in the plurality of scores to the corresponding target neurostimulation device position of the corresponding training subject in the plurality of training subjects to adjust at least the first set of filter weights, thereby training the network architecture to determine a target neurostimulation device position, in the plurality of target neurostimulation device positions.
- 24. A computer implemented method as claimed in claim 23, further comprising: using the trained network architecture to determine the target neurostimulation device position of the plurality of target neurostimulation device positions for the test subject, using one or more images of a said body part of the test subject as input to the trained network architecture.
- 25. A computer-implemented method for determining a target neurostimulation device position of a plurality of target neurostimulation device positions for a subject, the method comprising: obtaining one or more images of a body part of the subject; formatting each image dataset into a corresponding formatted image data set; providing the image data set to a network architecture that includes at least (i) a first convolutional neural network path comprising a first plurality of layers including at least a first convolutional layer associated with at least a first filter comprising a first set of filter weights and (ii) a scorer; obtaining a first score from the scorer, wherein the score corresponds to an input of one of the formatted image data sets in the plurality of formatted image data sets into the network architecture; and using the first score to determine the target neurostimulation device position of the plurality of target neurostimulation device positions for the subject.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2309666.2A GB2631305A (en) | 2023-06-27 | 2023-06-27 | Neurostimulation device positioning method and system |
| PCT/GB2024/051617 WO2025003647A1 (en) | 2023-06-27 | 2024-06-25 | Neurostimulation device positioning method and system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2309666.2A GB2631305A (en) | 2023-06-27 | 2023-06-27 | Neurostimulation device positioning method and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB202309666D0 GB202309666D0 (en) | 2023-08-09 |
| GB2631305A true GB2631305A (en) | 2025-01-01 |
Family
ID=87517679
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB2309666.2A Pending GB2631305A (en) | 2023-06-27 | 2023-06-27 | Neurostimulation device positioning method and system |
Country Status (2)
| Country | Link |
|---|---|
| GB (1) | GB2631305A (en) |
| WO (1) | WO2025003647A1 (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130345491A1 (en) * | 2011-03-09 | 2013-12-26 | A School Corporation Kansai University | Image data processing device and transcranial magnetic stimulation apparatus |
| US20200075171A1 (en) * | 2013-03-15 | 2020-03-05 | Empi, Inc. | Personalized image-based guidance for energy-based therapeutic devices |
| US20210390771A1 (en) * | 2019-02-26 | 2021-12-16 | Wuhan Znion Technology Co., Ltd. | Camera-based transcranial magnetic stimulation diagnosis and treatment head modeling system |
| US20220036584A1 (en) * | 2018-09-27 | 2022-02-03 | Wuhan Znion Technology Co., Ltd | Transcranial magnetic stimulation (tms) positioning and navigation method for tms treatment |
| US20220096853A1 (en) * | 2020-09-30 | 2022-03-31 | Novocure Gmbh | Methods and systems for transducer array placement and skin surface condition avoidance |
| WO2022182060A1 (en) * | 2021-02-26 | 2022-09-01 | 주식회사 에이티앤씨 | Brain stimulating device including navigation device for guiding position of coil and method thereof |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP6844019B2 (en) * | 2017-02-28 | 2021-03-17 | ブレインラボ アーゲー | Optimal selection and placement of electrodes for deep brain stimulation based on a model of stimulation field |
| KR102060483B1 (en) * | 2017-09-11 | 2019-12-30 | 뉴로핏 주식회사 | Method and program for navigating tms stimulation |
- 2023-06-27: GB application GB2309666.2A, publication GB2631305A (en), status: pending
- 2024-06-25: WO application PCT/GB2024/051617, publication WO2025003647A1 (en), status: pending
Also Published As
| Publication number | Publication date |
|---|---|
| GB202309666D0 (en) | 2023-08-09 |
| WO2025003647A1 (en) | 2025-01-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12020804B2 (en) | Head modeling for a therapeutic or diagnostic procedure | |
| RU2492884C2 (en) | Method and apparatus for tracking position of therapeutic ultrasonic sensor | |
| CA2948257C (en) | Operating room safety zone | |
| US6961405B2 (en) | Method and apparatus for target position verification | |
| Ettinger et al. | Experimentation with a transcranial magnetic stimulation system for functional brain mapping | |
| US10675479B2 (en) | Operation teaching device and transcranial magnetic stimulation device | |
| CN112220557A (en) | Operation navigation and robot arm device for craniocerebral puncture and positioning method | |
| JP2019063527A (en) | Interactive display of selected ECG channels | |
| CN110327016A (en) | Intelligent minimally invasive diagnosis and treatment integral system based on optical image and optical therapeutic | |
| Leuze et al. | Mixed-reality guidance for brain stimulation treatment of depression | |
| Ettinger et al. | Non-invasive functional brain mapping using registered transcranial magnetic stimulation | |
| GB2631305A (en) | Neurostimulation device positioning method and system | |
| JP2024508838A (en) | Brain stimulation device and method including navigation device for coil position guidance | |
| Leuze et al. | Landmark-based mixed-reality perceptual alignment of medical imaging data and accuracy validation in living subjects | |
| JP2025535774A (en) | Neuronavigation transcranial brain energy delivery and detection system and method | |
| JP6795744B2 (en) | Medical support method and medical support device | |
| Romero et al. | Brain mapping using transcranial magnetic stimulation | |
| Bai et al. | Robot-assisted Transcranial Magnetic Stimulation (Robo-TMS): A Review | |
| CN109199551B (en) | Individualized Brain Spatial Stereotaxic Technology | |
| Zhang | 3D Reconstruction of Large-scale Head Surface with a Binocular Camera in Transcranial Magnetic Stimulation Navigation | |
| US20250121215A1 (en) | Tfus system configured with simplified probes | |
| Zhang | Multi-Region Weighted Registration Method for Facial Deformation in Transcranial Magnetic Stimulation Navigation | |
| WO2025027502A1 (en) | System and method of patient registration | |
| Truong et al. | Virtual Neuronavigation for Parcel-guided TMS | |
| Yasumuro et al. | Coil positioning system for repetitive transcranial magnetic stimulation treatment by ToF camera ego-motion |