SE1951443A1 - Improving machine learning for monitoring a person - Google Patents
Info
- Publication number
- SE1951443A1
- Authority
- SE
- Sweden
- Prior art keywords
- machine learning
- person
- data feed
- state
- label
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1113—Local tracking of patients, e.g. in a hospital or private home
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1116—Determining posture transitions
- A61B5/1117—Fall detection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique
- A61B5/1128—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique using image analysis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/113—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb occurring during breathing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4803—Speech analysis specially adapted for diagnostic purposes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/60—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/117—Identification of persons
- A61B5/1171—Identification of persons based on the shapes or appearances of their bodies or parts thereof
- A61B5/1176—Recognition of faces
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Public Health (AREA)
- Theoretical Computer Science (AREA)
- Veterinary Medicine (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Heart & Thoracic Surgery (AREA)
- Pathology (AREA)
- Physiology (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Dentistry (AREA)
- Fuzzy Systems (AREA)
- Computational Linguistics (AREA)
- Psychiatry (AREA)
- Signal Processing (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Alarm Systems (AREA)
- Image Analysis (AREA)
Abstract
According to a first aspect, it is provided a method for improving machine learning based on respective data feeds for monitoring a person, the method being performed by a machine learning device. The method comprises the steps of: determining, based on a first input data feed and a first machine learning model, that the person is in a first state, wherein the first state is associated with a first label; and training a second machine learning model which is based on a second data feed, the second data feed at least partly overlapping the first data feed in time, the second data feed being captured using a second data capturing device, wherein the training comprises providing the first label to the second machine learning model.
Description
IMPROVING MACHINE LEARNING FOR MONITORING A PERSON

TECHNICAL FIELD
[0001] The present disclosure relates to the field of machine learning models based on a plurality of data feeds for monitoring a person, and in particular to training such a machine learning model.
BACKGROUND
[0002] New technology opens up new opportunities. For instance, the evolution of digital cameras and communication technologies enables monitoring of people using video surveillance at relatively low cost. This can be particularly useful for elderly or disabled people, who in this way can enjoy greatly improved quality of life by living in their own home instead of in a staffed care facility.
[0003] Video surveillance is certainly useful, but privacy issues arise. Hardly anyone enjoys being continuously monitored by video surveillance, even when the purpose is to detect when the person needs help.
[0004] One way to reduce the privacy concern is to use machine learning models, instead of manual monitoring, to determine the events that affect the state of a person. However, machine learning models need to be trained, which requires labelling both training data and validation data and providing this data to the machine learning model. This need for labelled training data requires a lot of time and resources.
SUMMARY
[0005] One object is to reduce the amount of manual work needed for obtaining training data for machine learning models used for monitoring people.
[0006] According to a first aspect, it is provided a method for improving machine learning based on respective data feeds for monitoring a person, the method being performed by a machine learning device. The method comprises the steps of: determining, based on a first input data feed and a first machine learning model, that the person is in a first state, wherein the first state is associated with a first label; and training a second machine learning model which is based on a second data feed, the second data feed at least partly overlapping the first data feed in time, the second data feed being captured using a second data capturing device, wherein the training comprises providing the first label to the second machine learning model.
[0007] The step of training may comprise providing a time indication associated with when the first label was determined.
[0008] The first state may be one of a plurality of possible states that can be determined for the person, wherein each one of the plurality of states is associated with a respective label indicating the particular state.
[0009] The step of determining that the person is in a first state may comprise determining that the person is in the first state with a confidence level above a threshold value.
[0010] The method may further comprise the step of: training a third machine learning model which is based on a third data feed, the third data feed at least partly overlapping the first data feed in time, the third data feed being captured using a third data capturing device, wherein the training comprises providing the first label to the third machine learning model.
[0011] According to a second aspect, it is provided a machine learning device for improving machine learning based on respective data feeds for monitoring a person. The machine learning device comprises: a processor; and a memory storing instructions that, when executed by the processor, cause the machine learning device to: determine, based on a first input data feed and a first machine learning model, that the person is in a first state, wherein the first state is associated with a first label; and train a second machine learning model which is based on a second data feed, the second data feed at least partly overlapping the first data feed in time, the second data feed being captured using a second data capturing device, wherein the training comprises providing the first label to the second machine learning model.
[0012] The instructions to train may comprise instructions that, when executed by the processor, cause the machine learning device to provide a time indication associated with when the first label was determined.
[0013] The first state may be one of a plurality of possible states that can be determined for the person, wherein each one of the plurality of states is associated with a respective label indicating the particular state.
[0014] The instructions to determine that the person is in a first state may comprise instructions that, when executed by the processor, cause the machine learning device to determine that the person is in the first state with a confidence level above a threshold value.
[0015] The machine learning device may further comprise instructions that, when executed by the processor, cause the machine learning device to: train a third machine learning model which is based on a third data feed, the third data feed at least partly overlapping the first data feed in time, the third data feed being captured using a third data capturing device, wherein the training comprises providing the first label to the third machine learning model.
[0016] According to a third aspect, it is provided a computer program for improving machine learning based on respective data feeds for monitoring a person. The computer program comprises computer program code which, when run on a machine learning device, causes the machine learning device to: determine, based on a first input data feed and a first machine learning model, that the person is in a first state, wherein the first state is associated with a first label; and train a second machine learning model which is based on a second data feed, the second data feed at least partly overlapping the first data feed in time, the second data feed being captured using a second data capturing device, wherein the training comprises providing the first label to the second machine learning model.
[0017] According to a fourth aspect, it is provided a computer program product comprising a computer program according to the third aspect and a computer readable means on which the computer program is stored.
[0018] Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to "a/an/the element, apparatus, component, means, step, etc." are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] Aspects and embodiments are now described, by way of example, with reference to the accompanying drawings, in which:
[0020] Fig 1 is a schematic diagram illustrating an environment in which embodiments presented herein can be applied;
[0021] Fig 2 is a flow chart illustrating embodiments of methods for improving machine learning for monitoring a person;
[0022] Fig 3 is a schematic diagram illustrating components of the machine learning device of Fig 1; and
[0023] Fig 4 shows one example of a computer program product comprising computer readable means.
DETAILED DESCRIPTION
[0024] The aspects of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the invention are shown. These aspects may, however, be embodied in many different forms and should not be construed as limiting; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and fully convey the scope of all aspects of the invention to those skilled in the art. Like numbers refer to like elements throughout the description.

[0025] Fig 1 is a schematic diagram illustrating an environment in which embodiments presented herein can be applied. A person 5 to be monitored is at least part of the time present in a physical space 14. The physical space 14 can e.g. be a room, a flat, a house, an office, etc. A machine learning device 1 is configured to monitor the person based on a plurality of capturing devices 3a-c. In this example, there are three capturing devices 3a-c, but there can be any number of capturing devices as long as there are at least two. The capturing devices 3a-c can be based on any one or more of video, audio, radar, infrared sensor, etc. In some way, each one of the capturing devices contributes to monitoring the person 5.
[0026] For each capturing device 3a-c, there are one or more respective machine learning (ML) models 4a-c. So, for instance, one capturing device 3a in the form of a camera can be connected to a video ML model 4a, one capturing device 3b in the form of a microphone can be connected to an audio ML model 4b, and one capturing device 3c in the form of a radar can be connected to a radar ML model 4c. Other capturing devices, such as an infrared heat camera, can be used analogously.
[0027] As mentioned, each capturing device 3a-c can be connected to a plurality of ML models. For instance, a video feed (from a capturing device being a camera) can be used by one model for activity recognition, by another model for detection of physical objects, and by yet another model for face recognition. In another example, a radar feed (from a capturing device being a radar sensor) can be used by one model for movement recognition, by another model for breathing recognition, and by yet another model for posture recognition. In another example, an audio feed (from a capturing device being a microphone) can be used by one model for speech recognition, by another model for tonality recognition, and by yet another model for distress recognition.
[0028] The machine learning device 1 determines the state of the person 5 based on one or more of the ML models 4a-c. The states that are determined are those that may be used to reflect the safety or health state of the person.
[0029] The machine learning device 1 is connected to a network 6, which can be an internet protocol (IP) based network. The network 6 can e.g. comprise any one or more of a local wireless network, a cellular network, a wired local area network, a wide area network (such as the Internet), etc. Also connected to the network 6 is an alarm centre 7. The alarm centre 7 can e.g. comprise a server with which the machine learning device 1 can communicate to alert when an alarm occurs. The alarm centre 7 can also be a manned alarm centre that can send out caretakers or medical personnel to the physical space 14 when the person 5 is in need of help, e.g. if the person 5 has fallen to the floor and is unable to get up. The alarm centre 7 can be connected to a large number of corresponding machine learning devices 1.
[0030] The machine learning device 1 can be located at the site of the physical space 14 and the capturing devices 3a-c, or the machine learning device 1 can be located remotely from the physical space and the capturing devices 3a-c, in which case the capturing devices 3a-c and the machine learning device 1 can communicate over the network 6.
[0031] One type of state of the person that can be determined is whether the person is present or absent in the physical space. Consider e.g. the situation that the person is suffering from dementia and leaves the home in the middle of the night. The presence or absence of the person can then reliably be determined using e.g. an infrared sensor.
[0032] Another group of states that can be determined relates to the motion of the person. For instance, if the person lies in bed with no motion whatsoever, this can be interpreted as the person being in need of help. Another such state is when the person lies on a sofa for more than a threshold number of hours, which can likewise be interpreted as the person needing help.
[0033] Another group of states that can be determined relates to the posture of the person. For instance, if the person is lying down in bed, this can be a normal situation, but if the person is lying on the floor for more than a short time, this can be interpreted as the person being in need of help.
[0034] Another type of state that can be determined is distress of the person. For instance, if the person is screaming or moaning in a particular way, this can be interpreted as the person being in distress and in need of help. This can be determined reliably e.g. using a microphone and an associated audio-based ML model.
[0035] Fig 2 is a flow chart illustrating embodiments of methods for improving machine learning. The machine learning is based on respective data feeds for monitoring a person. The method is performed by the machine learning device.
[0036] In a determine state step 42, the machine learning device determines, based on a first input data feed and a first machine learning model, that the person is in a first state. This determination is based on inference using the first machine learning model. The first state is associated with a first label. Non-limiting examples of states, and thus labels, all relating to the person, are: absent, present, lying in bed, lying on floor, breathing, distress, etc., e.g. as those exemplified above. It is to be noted that the first input data feed and the first machine learning model can vary between iterations; these are selected based on what state can reliably be determined.
[0037] Optionally, this step comprises determining that the person is in the first state with a confidence level above a threshold value.
[0038] The first state is one of a plurality of possible states that can be determined for the person. Each one of the plurality of states is associated with a respective label indicating the particular state.
[0039] In a train 2nd ML model step 44, the machine learning device trains a second machine learning model which is based on a second data feed. The second data feed at least partly overlaps the first data feed in time. In particular, the second data feed covers a time in which the first state is determined using the first data feed. The second data feed is captured using a second data capturing device. The training comprises providing the first label to the second machine learning model. A time indication associated with when the first label was determined can be provided along with the first label. The time indication can be implicit, i.e. the first state is taken to apply at the time the label is provided, which works when the latency of the processing has no great effect. Alternatively, the time indication is explicitly indicated, e.g. as a time stamp.
[0040] In an optional train 3rd ML model step 46, the machine learning device trains a third machine learning model which is based on a third data feed. The third data feed at least partly overlaps the first data feed in time. The third data feed is captured using a third data capturing device. The training comprises providing the first label to the third machine learning model. An implicit or explicit time indication associated with when the first label was determined can be provided along with the first label, as described above for step 44. Further ML models can be trained analogously. In other words, a single label inference using one ML model can be used to train one or multiple other ML models.
[0041] It is to be noted that for another state, the roles can rotate or reverse, such that an ML model that is trained in step 44 or 46 for one state and label can be the ML model that is used to infer a state and label in step 42 in another iteration of the method.
[0042] Using embodiments presented herein, ML models can be trained by exploiting the certainty of determination for a particular data feed. For instance, a data feed based on infrared sensors may be very reliably used in a first ML model to determine presence/absence, which can be used to train other ML models. Another example is distress, which can be reliably determined using a first ML model based on a data feed of audio data. Another example is radar, which can be used as a data feed in a first ML model to reliably detect posture, such as the person lying in bed or on the floor. By using the reliable determination of state in one model to provide a label to one or more other ML models, the ML models are trained automatically while in use, which significantly reduces the need for training based on manual labelling. By using the ML data from one model to train another model, human bias in training ML models is reduced or even eliminated. Moreover, compared to manual training of ML models used in the prior art, since embodiments presented herein are based on ML determination of state, no person needs to monitor the data feed to determine the state and label. Hence, privacy is improved for training ML models by using the embodiments presented herein.
[0043] Additionally, compared to manual labelling for ML model training, the embodiments presented herein can be applied on vast amounts of data at little or no additional cost. This improves the training, since more data can form part of the training, even while achieving a great reduction of cost. The embodiments presented herein might be deployed in live systems, where privacy is preserved even with continuous training.
[0044] Embodiments presented herein are particularly applicable for monitoring people, such as elderly people in their homes. In a normal day-to-day situation, largely the same people reappear. Hence, the context is relatively static, and the advantages of the different capturing devices in determining various states can be exploited.
[0045] Fig 3 is a schematic diagram illustrating components of the machine learning device 1 of Fig 1. A processor 60 is provided using any combination of one or more of a suitable central processing unit (CPU), multiprocessor, microcontroller, digital signal processor (DSP), etc., capable of executing software instructions 67 stored in a memory 64, which can thus be a computer program product. The processor 60 could alternatively be implemented using an application specific integrated circuit (ASIC), field programmable gate array (FPGA), etc. The processor 60 can be configured to execute the method described with reference to Fig 2 above.
[0046] The memory 64 can be any combination of random-access memory (RAM) and/or read-only memory (ROM). The memory 64 also comprises persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid-state memory or even remotely mounted memory.
[0047] A data memory 66 is also provided for reading and/or storing data during execution of software instructions in the processor 60. The data memory 66 can be any combination of RAM and/or ROM.
[0048] The machine learning device 1 further comprises an I/O interface 62 for communicating with external and/or internal entities. For instance, the I/O interface 62 allows the machine learning device 1 to communicate over the network 6. Optionally, the I/O interface 62 also includes a user interface.
[0049] Other components of the machine learning device 1 are omitted in order not to obscure the concepts presented herein.
[0050] Fig 4 shows one example of a computer program product 90 comprising computer readable means. On this computer readable means, a computer program 91 can be stored, which computer program can cause a processor to execute a method according to embodiments described herein. In this example, the computer program product is an optical disc, such as a CD (compact disc), a DVD (digital versatile disc) or a Blu-Ray disc. As explained above, the computer program product could also be embodied in a memory of a device, such as the computer program product 64 of Fig 3. While the computer program 91 is here schematically shown as a track on the depicted optical disc, the computer program can be stored in any way which is suitable for the computer program product, such as a removable solid-state memory, e.g. a Universal Serial Bus (USB) drive.
[0051] The aspects of the present disclosure have mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the invention, as defined by the appended patent claims. Thus, while various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Claims (12)
1. A method for improving machine learning training based on respective data feeds for monitoring a person, the method being performed by a machine learning device (1), the method comprising the steps of: determining (42), based on a first input data feed and a first machine learning model, that the person is in a first state, wherein the first state is associated with a first label; and training (44) a second machine learning model which is based on a second data feed, the second data feed at least partly overlapping the first data feed in time, the second data feed being captured using a second data capturing device, wherein the training comprises providing the first label to the second machine learning model.
2. The method according to claim 1, wherein the step of training (44) comprises providing a time indication associated with when the first label was determined.
3. The method according to any one of the preceding claims, wherein the first state is one of a plurality of possible states that can be determined for the person, wherein each one of the plurality of states is associated with a respective label indicating the particular state.
4. The method according to any one of the preceding claims, wherein the step of determining (42) that the person is in a first state comprises determining that the person is in the first state with a confidence level above a threshold value.
5. The method according to any one of the preceding claims, further comprising the step of: training (46) a third machine learning model which is based on a third data feed, the third data feed at least partly overlapping the first data feed in time, the third data feed being captured using a third data capturing device, wherein the training comprises providing the first label to the third machine learning model.
6. A machine learning device (1) for improving machine learning based on respective data feeds for monitoring a person, the machine learning device (1) comprising: a processor (60); and a memory (64) storing instructions (67) that, when executed by the processor, cause the machine learning device (1) to: determine, based on a first input data feed and a first machine learning model, that the person is in a first state, wherein the first state is associated with a first label; and train a second machine learning model which is based on a second data feed, the second data feed at least partly overlapping the first data feed in time, the second data feed being captured using a second data capturing device, wherein the training comprises providing the first label to the second machine learning model.
7. The machine learning device (1) according to claim 6, wherein the instructions to train comprise instructions (67) that, when executed by the processor, cause the machine learning device (1) to provide a time indication associated with when the first label was determined.
8. The machine learning device (1) according to claim 6 or 7, wherein the first state is one of a plurality of possible states that can be determined for the person, wherein each one of the plurality of states is associated with a respective label indicating the particular state.
9. The machine learning device (1) according to any one of claims 6 to 8, wherein the instructions to determine that the person is in a first state comprise instructions (67) that, when executed by the processor, cause the machine learning device (1) to determine that the person is in the first state with a confidence level above a threshold value.
10. The machine learning device (1) according to any one of claims 6 to 9, further comprising instructions (67) that, when executed by the processor, cause the machine learning device (1) to: train a third machine learning model which is based on a third data feed, the third data feed at least partly overlapping the first data feed in time, the third data feed being captured using a third data capturing device, wherein the training comprises providing the first label to the third machine learning model.
11. A computer program (67, 91) for improving machine learning based on respective data feeds for monitoring a person, the computer program comprising computer program code which, when run on a machine learning device (1), causes the machine learning device (1) to: determine, based on a first input data feed and a first machine learning model, that the person is in a first state, wherein the first state is associated with a first label; and train a second machine learning model which is based on a second data feed, the second data feed at least partly overlapping the first data feed in time, the second data feed being captured using a second data capturing device, wherein the training comprises providing the first label to the second machine learning model.
12. A computer program product (64, 90) comprising a computer program according to claim 11 and a computer readable means on which the computer program is stored.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| SE1951443A SE1951443A1 (en) | 2019-12-12 | 2019-12-12 | Improving machine learning for monitoring a person |
| PCT/EP2020/085462 WO2021116262A1 (en) | 2019-12-12 | 2020-12-10 | Improving machine learning for monitoring a person |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| SE1951443A SE1951443A1 (en) | 2019-12-12 | 2019-12-12 | Improving machine learning for monitoring a person |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| SE1951443A1 | 2021-06-13 |
Family
ID=73839031
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| SE1951443A SE1951443A1 (en) | 2019-12-12 | 2019-12-12 | Improving machine learning for monitoring a person |
Country Status (2)
| Country | Link |
|---|---|
| SE (1) | SE1951443A1 (en) |
| WO (1) | WO2021116262A1 (en) |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170223190A1 (en) * | 2014-03-14 | 2017-08-03 | Directly, Inc. | Cluster based crm |
| CN108564134A (en) * | 2018-04-27 | 2018-09-21 | 网易(杭州)网络有限公司 | Data processing method, device, computing device and medium |
| US20180350069A1 (en) * | 2017-06-01 | 2018-12-06 | International Business Machines Corporation | Neural network classification |
| US20190074079A1 (en) * | 2017-08-09 | 2019-03-07 | Varian Medical Systems International Ag | Radiotherapy treatment planning using artificial intelligence (ai) engines |
| CN109492612A (en) * | 2018-11-28 | 2019-03-19 | 平安科技(深圳)有限公司 | Fall detection method and its falling detection device based on skeleton point |
| WO2019070763A1 (en) * | 2017-10-02 | 2019-04-11 | New Sun Technologies, Inc. | Caregiver mediated machine learning training system |
| CN109711545A (en) * | 2018-12-13 | 2019-05-03 | 北京旷视科技有限公司 | Creation method, device, system and the computer-readable medium of network model |
| US20190251340A1 (en) * | 2018-02-15 | 2019-08-15 | Wrnch Inc. | Method and system for activity classification |
| US20190272725A1 (en) * | 2017-02-15 | 2019-09-05 | New Sun Technologies, Inc. | Pharmacovigilance systems and methods |
| US20190325269A1 (en) * | 2018-04-20 | 2019-10-24 | XNOR.ai, Inc. | Image Classification through Label Progression |
| WO2019211089A1 (en) * | 2018-04-30 | 2019-11-07 | Koninklijke Philips N.V. | Adapting a machine learning model based on a second set of training data |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170132528A1 (en) * | 2015-11-06 | 2017-05-11 | Microsoft Technology Licensing, Llc | Joint model training |
-
2019
- 2019-12-12 SE SE1951443A patent/SE1951443A1/en not_active Application Discontinuation
-
2020
- 2020-12-10 WO PCT/EP2020/085462 patent/WO2021116262A1/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021116262A1 (en) | 2021-06-17 |
Similar Documents
| Publication | Title |
|---|---|
| US11631407B2 | Smart speaker system with cognitive sound analysis and response |
| DK2353153T3 | SYSTEM FOR TRACKING PERSON'S PRESENCE IN A BUILDING, PROCEDURE AND COMPUTER PROGRAM PRODUCT |
| JP7162412B2 | detection recognition system |
| US10832673B2 | Smart speaker device with cognitive sound analysis and response |
| US10424175B2 | Motion detection system based on user feedback |
| CA3091327A1 | Gunshot detection system with ambient noise modeling and monitoring |
| US20150194034A1 | Systems and methods for detecting and/or responding to incapacitated person using video motion analytics |
| WO2018152009A1 | Entity-tracking computing system |
| JP2016067641A | Fall detection processing device and fall detection system |
| US20210365674A1 | System and method for smart monitoring of human behavior and anomaly detection |
| US20210304339A1 | System and a method for locally assessing a user during a test session |
| EP4135569A1 | System and method for providing a health care related service |
| JP6663703B2 | Watching system |
| US11076778B1 | Hospital bed state detection via camera |
| SE1951443A1 | Improving machine learning for monitoring a person |
| US20230317086A1 | Privacy-preserving sound representation |
| KR102606304B1 | System and method for monitoring indoor space using air quality information based on Artificial Intelligence |
| SE1951444A1 | Processing an input media feed |
| US12322261B2 | Premises monitoring using acoustic models of premises |
| CA3263132A1 | Method and system for identifying home hazards and unsafe conditions using artificial intelligence |
| US20220060473A1 | Security system |
| EP4083952A1 | Electronic monitoring system using push notifications with custom audio alerts |
| JP2025154486A | Life monitoring system and program |
| US10255775B2 | Intelligent motion detection |
| SE2051362A1 | Enabling training of a machine-learning model for trigger-word detection |
Legal Events
| Code | Title |
|---|---|
| NAV | Patent application has lapsed |