
WO2025018989A1 - Wearable user identity profile - Google Patents

Wearable user identity profile

Info

Publication number
WO2025018989A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
computing device
physiological characteristics
wearable computing
machine learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2023/028049
Other languages
French (fr)
Inventor
Shahid Hussain
Shreerag Jayakrishnan
Yifei Zhang
Kamran Mustafa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to PCT/US2023/028049 priority Critical patent/WO2025018989A1/en
Publication of WO2025018989A1 publication Critical patent/WO2025018989A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/06 Authentication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/30 Security of mobile devices; Security of mobile applications
    • H04W12/33 Security of mobile devices; Security of mobile applications using wearable devices, e.g. using a smartwatch or smart-glasses

Definitions

  • A computing device, such as a wearable computing device, may authenticate one or more users and permit access to increased functionality of the computing device and, in some instances, other devices in the user’s ecosystem, based on unique user profiles associated with physiological characteristics of the one or more users.
  • a user may be granted increased access to a wearable computing device based on the wearable computing device determining with high confidence that the user’s detected physiological characteristics match those of a unique user profile stored within the wearable computing device.
  • a method includes detecting, by a wearable computing device operating in a reduced access mode, a first user input to unlock the wearable computing device. The method further includes, responsive to detecting the first user input, detecting, by one or more sensors of the wearable computing device, one or more physiological characteristics of a first user. The method further includes determining, by the wearable computing device and based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user.
  • the method further includes, responsive to determining that the user profile was not previously created for the first user, creating, by the wearable computing device, the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user.
  • the method further includes training, by the wearable computing device, and using the one or more physiological characteristics of the first user, the first machine learning model.
  • a wearable computing device operating in a reduced access mode comprises one or more processors.
  • the wearable computing device further comprises one or more sensors configured to, responsive to detecting a first user input to unlock the wearable computing device, detect one or more physiological characteristics of a first user.
  • the wearable computing device further comprises one or more storage devices that store instructions that, when executed by the one or more processors, cause the one or more processors to, responsive to the one or more sensors detecting one or more physiological characteristics of the first user, determine, based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user.
  • the one or more processors are further configured to, responsive to determining that the user profile was not previously created for the first user, create the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user.
  • the one or more processors are further configured to train, using the one or more physiological characteristics of the first user, the first machine learning model.
  • a non-transitory computer-readable storage medium is encoded with instructions that, when executed by one or more processors, cause the one or more processors to, responsive to one or more sensors detecting one or more physiological characteristics of a first user, determine, based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user.
  • the instructions are further configured to cause the one or more processors to, responsive to determining that the user profile was not previously created for the first user, create the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user.
  • the instructions are further configured to cause the one or more processors to train, using the one or more physiological characteristics of the first user, the first machine learning model.
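To make the claimed flow concrete, the following is a minimal, self-contained Python sketch of the profile lookup-and-creation logic described above. All names (ProfileStore, handle_unlock, the trait dictionary, the matching tolerance) are hypothetical illustrations, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Hypothetical per-user profile holding traits and collected samples."""
    traits: dict
    samples: list = field(default_factory=list)

class ProfileStore:
    """Toy in-memory profile store; real matching is device-specific."""
    def __init__(self):
        self.profiles = []

    def find_match(self, traits, tol=0.05):
        # Naive matching: every numeric trait within a relative tolerance.
        for p in self.profiles:
            if all(abs(p.traits[k] - v) <= tol * abs(v) for k, v in traits.items()):
                return p
        return None

    def create(self, traits):
        profile = UserProfile(dict(traits))
        self.profiles.append(profile)
        return profile

def handle_unlock(store, credentials_ok, traits, consent=True):
    """Sketch of the flow: credential check, sensing, profile lookup/creation."""
    if not credentials_ok:               # first user input (PIN, code, etc.) failed
        return "reduced access"
    profile = store.find_match(traits)   # were these characteristics seen before?
    if profile is None and consent:      # new user: create a profile with consent
        profile = store.create(traits)
    if profile is not None:
        profile.samples.append(traits)   # accumulate data to train the user's model
    return "increased access"

store = ProfileStore()
print(handle_unlock(store, True, {"base_heart_rate": 62.0, "skin_temp_c": 33.1}))
```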
  • FIG. 1 is an example wearable computing device configured to automatically authenticate a user based on one or more physiological characteristics of the user, in accordance with one or more techniques of the present disclosure.
  • FIG. 2 is a block diagram further illustrating an example wearable computing device configured to automatically authenticate a user based on one or more physiological characteristics of the user, in accordance with one or more techniques of the present disclosure.
  • FIG. 3 is a block diagram illustrating an example data processing module configured to generate biometric data based on one or more physiological characteristics of a user.
  • FIG. 4 is a block diagram further illustrating an example wearable computing device in communication with a companion device configured to automatically authenticate a user, in accordance with one or more techniques of the present disclosure.
  • FIG. 5 is a flow chart illustrating an example operation of a computing device that automatically authenticates a user based on one or more physiological characteristics of the user, in accordance with one or more techniques of the present disclosure.
  • FIG. 1 is a block diagram illustrating an example wearable computing device 102 configured to automatically authenticate a user 114 based on one or more physiological characteristics of user 114, in accordance with one or more techniques of the present disclosure.
  • the techniques of the present disclosure may enable wearable computing device 102 to authenticate one or more users, such as user 114, and permit access to wearable computing device 102, as well as other devices in the user’s ecosystem, based on unique user profiles associated with user physiological characteristics.
  • wearable computing device 102 includes one or more user interface (UI) components 132 including one or more sensors 104, and at least one user interface (UI) module 106.
  • wearable computing device 102 may include additional components not shown in FIG. 1.
  • Examples of wearable computing device 102 may include, but are not limited to, portable, mobile, or other devices, such as mobile phones (including smartphones), wearable computing devices (e.g., smart watches, smart glasses, digital bracelets, etc.), laptop computers, desktop computers, tablet computers, smart television platforms, server computers, mainframes, infotainment systems (e.g., vehicle head units), etc.
  • While user interface module 106 is shown in the example of FIG. 1 as being located within wearable computing device 102, in other examples, all or part of the functionality provided by UI module 106 (and other modules in other figures described herein) may be delegated to a cloud computing system and/or a companion device.
  • computing device 102 includes one or more user interface components 132 (“UI components 132”).
  • UI components 132 of computing device 102 may be configured to function as input devices and/or output devices for computing device 102.
  • UI components 132 may be implemented using various technologies. For instance, UI components 132 may be configured to receive input from user 114 through tactile, audio, and/or video feedback. Examples of input devices include a presence-sensitive display, a presence-sensitive or touch-sensitive input device, a voice responsive system, video camera, microphone or any other type of device for detecting input from user 114.
  • a presence-sensitive display includes a touch-sensitive or presence-sensitive input screen, such as a resistive touchscreen, a surface acoustic wave touchscreen, a capacitive touchscreen, a projective capacitance touchscreen, a pressure sensitive screen, an acoustic pulse recognition touchscreen, or another presence-sensitive technology.
  • UI components 132 of computing device 102 may include a presence-sensitive device that may receive tactile input from user 114.
  • UI components 132 may receive indications of the tactile input by detecting one or more gestures from user 114 (e.g., when user 114 touches or points to one or more locations of UI components 132 with a finger or a stylus pen).
  • UI components 132 may additionally or alternatively be configured to function as output devices by providing output to user 114 using tactile, audio, or video stimuli.
  • output devices include a sound card, a video graphics adapter card, or any of one or more display devices, such as a liquid crystal display (LCD), dot matrix display, light emitting diode (LED) display, microLED, miniLED, organic light-emitting diode (OLED) display, e-ink, or similar monochrome or color display capable of outputting visible information to user 114.
  • Additional examples of an output device include a speaker, a haptic device, or other device that can generate intelligible output to user 114.
  • UI components 132 may present output to user 114 as a graphical user interface that may be associated with functionality provided by computing device 102.
  • UI components 132 may present various user interfaces of applications executing at or accessible by computing device 102 (e.g., an electronic message application, an Internet browser application, etc.).
  • User 114 may interact with a respective user interface of an application to cause computing device 102 to perform operations relating to a function.
  • UI components 132 of computing device 102 may detect two-dimensional and/or three-dimensional gestures as input from user 114.
  • UI components 132 include one or more sensors 104.
  • Sensor 104 may be configured to, responsive to computing device 102 detecting a user input from user 114 to unlock computing device 102, detect one or more physiological characteristics of user 114. For instance, sensor 104 may detect user 114’s movement (e.g., gait or moving a hand, an arm, face, an eye, etc.) within a threshold distance of sensor 104.
  • Sensor 104 may determine a two- or three-dimensional vector representation of the movement and correlate the vector representation to a gesture input (e.g., a hand-wave, a facial expression, etc.) that has multiple dimensions.
  • sensor 104 may, in some examples, detect a multi-dimensional gesture without requiring user 114 to gesture at or near a screen or surface at which UI components 132 output information for display. Instead, sensor 104 may detect a multi-dimensional gesture performed at or near sensor 104, which may or may not be located near the screen or surface at which UI components 132 output information for display.
  • sensor 104 is configured to detect one or more physiological characteristics of user 114 that include, but are not limited to, one or more of base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, voice, etc.
  • sensor 104 is a photoplethysmography (PPG) sensor, which measures changes in blood volume in user 114’s skin through the use of light sensors.
  • a PPG sensor is commonly used in smartwatches to monitor heart rate and can also be used to detect the unique pattern of blood flow in a user's wrist, which can then be used to verify their identity when attempting to unlock wearable computing device 102.
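As a rough illustration of the kind of signal a PPG sensor provides, the sketch below estimates a base heart rate from a synthetic PPG trace by counting upward zero crossings. The signal, sampling rate, and smoothing window are invented for the example; real PPG processing is more involved.

```python
import numpy as np

# Synthetic PPG trace: 10 s at 50 Hz with a ~1.1 Hz (66 bpm) pulse plus noise.
fs = 50
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.1 * t) + 0.1 * np.random.randn(t.size)

# Smooth, remove the mean, then count upward zero crossings as beats.
x = np.convolve(ppg - ppg.mean(), np.ones(5) / 5, mode="same")
beats = np.sum((x[:-1] < 0) & (x[1:] >= 0))
print(f"estimated base heart rate: {beats / t[-1] * 60:.0f} bpm")  # ~66
```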
  • sensor 104 is an electrocardiogram (ECG) sensor.
  • An ECG sensor detects electrical activity in the heart and can be used to generate a unique biometric signature for each user of computing device 102.
  • sensor 104 may be any other biometric sensor used to collect biometric data from a user, such as facial recognition or fingerprint sensors.
  • sensor 104 may capture unique physical features of user 114, such as the pattern of their fingerprints or the structure of their face.
  • sensors 104 may include motion sensors (e.g., accelerometer, gyroscope, compass, etc.), audio and/or visual sensors (e.g., microphones, still and/or video cameras, etc.), or other types of sensors (e.g., pressure sensors, light sensors, proximity sensors, ultrasonic sensors, global positioning system sensors, etc.).
  • computing device 102 further includes user interface (UI) module 106.
  • Module 106 may perform operations described herein using hardware, software, firmware, or a mixture thereof residing in and/or executing at computing device 102.
  • Computing device 102 may execute module 106 with one processor or with multiple processors.
  • computing device 102 may execute module 106 as a virtual machine executing on underlying hardware.
  • Module 106 may execute as one or more services of an operating system or computing platform or may execute as one or more executable programs at an application layer of a computing platform.
  • UI module 106 may be operable by computing device 102 to perform one or more functions, such as receive input and send indications of such input to other components associated with computing device 102.
  • UI module 106 may also receive data from components associated with computing device 102. Using the data received, UI module 106 may cause other components associated with computing device 102, such as UI components 132, to provide output based on the data. For instance, as described above, UI module 106 may send data to UI components 132 of computing device 102 to display GUI 101 to user 114 when computing device 102 is operating in the reduced access mode, and display GUI 105 when computing device 102 is operating in the increased access mode.
  • GUI 101 may be generated by user interface module 106 and configured to display a limited amount of information or computing device functionalities to user 114.
  • GUI 101 comprises applications 103A-103D.
  • Applications 103A-103D may provide user 114 functionalities and information that are not user-specific, e.g., application 103A may include information pertaining to the time, application 103B may include a calculator, etc.
  • the user input (such as a PIN, code, or other user credentials) may be received by computing device 102 via another GUI that user 114 interacts with, such as a login screen. Responsive to detecting the user input from user 114, one or more sensors 104 of computing device 102 detects one or more physiological characteristics of user 114 (e.g., base heart rate, skin tone, voice, etc.). Computing device 102 then determines, based on the one or more physiological characteristics of user 114, whether a unique user profile was previously created for user 114. Responsive to determining that the unique user profile was not previously created for user 114, computing device 102 may then create the unique user profile for user 114, which may include a unique user identifier (ID).
  • user 114 may be provided with an opportunity to provide input to control whether programs or features of computing device 102 can collect and make use of user information (e.g., user 114’s biometric data, information about user 114’s current location, current speed, motion, location history, etc.), or to dictate whether and/or how computing device 102 may receive content that may be relevant to user 114.
  • certain data may be treated in one or more ways before it is stored or used by computing device 102 so that personally identifiable information is removed.
  • a user’s identity may be treated so that no personally identifiable information can be determined about the user, or a user’s geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined.
  • user 114 may have control over how information is collected about them and used by computing device 102. For example, responsive to user 114 confirming they are a new user of computing device 102 and providing explicit consent for computing device 102 to store user 114’s data, computing device 102 may then create the user profile for user 114, wherein the user profile for user 114 comprises the one or more physiological characteristics of user 114. The user profile for user 114 may then be stored by computing device 102 in a memory.
  • GUI 105 comprises applications 103A-103I.
  • applications 103A-103D may still provide user 114 functionalities and information that are not user-specific (e.g., application 103A may include information pertaining to the time, application 103B may include a calculator, etc.).
  • Applications 103E-103I may provide user 114 functionalities and information that are user-specific (e.g., application 103E may include health information pertaining to user 114, application 103F may include functionality that allows user 114 to access other devices associated with user 114, etc.). As shown in the example of FIG. 1, applications 103E-103I are accessible to user 114 while wearable computing device 102 is operating in the increased access mode, and are not accessible to user 114 while wearable computing device 102 is operating in the reduced access mode.
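A minimal sketch of the access-mode gating described above might look as follows; the app registry and mode names are hypothetical stand-ins for applications 103A-103I.

```python
# Hypothetical registry: user-specific apps are hidden in the reduced access mode.
APPS = {
    "clock":      {"user_specific": False},   # cf. application 103A
    "calculator": {"user_specific": False},   # cf. application 103B
    "health":     {"user_specific": True},    # cf. application 103E
    "my_devices": {"user_specific": True},    # cf. application 103F
}

def visible_apps(mode):
    """Reduced access mode shows only non-user-specific apps."""
    return [name for name, meta in APPS.items()
            if mode == "increased" or not meta["user_specific"]]

print(visible_apps("reduced"))    # ['clock', 'calculator']
print(visible_apps("increased"))  # all four apps
```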
  • FIG. 2 is a block diagram further illustrating an example wearable computing device configured to automatically authenticate a user based on one or more physiological characteristics of the user, in accordance with one or more techniques of the present disclosure.
  • Computing device 202 may be similar to computing device 102 of FIG. 1.
  • computing device 202 includes user interface components 232 including one or more sensors 204, processors 224, communication units 228, and one or more storage devices 238.
  • Storage device 238 further includes user interface module 206, resolution module 210, operating system 222, user profile data store 212, and analysis module 208.
  • User interface components 232, one or more sensors 204, and user interface module 206 may be similar to user interface components 132, one or more sensors 104, and user interface module 106, respectively, as described with respect to FIG. 1.
  • the one or more communication units 228 of computing device 202 may communicate with external devices by transmitting and/or receiving data at computing device 202, such as to and from remote computer systems or companion devices.
  • Example communication units 228 include a network interface card (e.g., such as an Ethernet card), an optical transceiver, a radio frequency transceiver, or any other type of device that can send and/or receive information.
  • Other examples of communication units 228 may be devices configured to transmit and receive signals such as Ultrawideband®, Bluetooth®, GPS, 3G, 4G, and Wi-Fi®, as may be found in computing devices such as mobile devices and the like.
  • communication channels 230 may interconnect each of the components as shown for inter-component communications (physically, communicatively, and/or operatively).
  • communication channels 230 may include a system bus, a network connection (e.g., to a wireless connection as described above), one or more inter-process communication data structures, or any other components for communicating data between hardware and/or software locally or remotely.
  • User interface module 206, analysis module 208, resolution module 210, user profile data store 212, and operating system 222 may perform operations described herein using software, hardware, firmware, or a mixture of hardware, software, and firmware residing in and executing on computing device 202 or at one or more other remote computing devices (e.g., cloud-based application - not shown) or companion devices.
  • Computing device 202 may execute one or more of modules 206-222 with one or more processors 224, or may execute any or part of one or more of modules 206-222 as or within a virtual machine executing on underlying hardware.
  • modules 206-222 may be implemented in various ways, for example, as a downloadable or pre-installed application, remotely as a cloud application, or as part of the operating system of computing device 202.
  • Other examples of computing device 202 that implement techniques of this disclosure may include additional components not shown in FIG. 2.
  • one or more processors 224 may implement functionality and/or execute instructions within computing device 202.
  • one or more processors 224 may receive and execute instructions that provide the functionality of UIC 232, communication units 228, one or more storage devices 238 and an operating system to perform one or more operations as described herein.
  • processors 224 may receive and execute instructions that provide the functionality of some or all of modules 206-222 to perform one or more operations and various functions described herein.
  • the one or more processors 224 include a central processing unit (CPU). Examples of processors include, but are not limited to, a digital signal processor (DSP), a general-purpose microprocessor, a tensor processing unit (TPU), a neural processing unit (NPU), a neural processing engine, a core of a CPU, VPU, GPU, TPU, or NPU or another processing device, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other equivalent integrated or discrete logic circuitry.
  • One or more storage devices 238 within computing device 202 may store information for processing during operation of computing device 202 (e.g., computing device 202 may store data that modules 206-222 may access during execution at computing device 202, including user profile data store 212).
  • storage device 238 is a temporary memory, meaning that a primary purpose of storage device 238 is not long-term storage.
  • Storage devices 238 on computing device 202 may be configured for short-term storage of information as volatile memory and therefore not retain stored contents if powered off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art.
  • Storage devices 238 may be configured to store larger amounts of information than volatile memory.
  • Storage devices 238 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
  • Storage devices 238 may store program instructions and/or information (e.g., data) associated with modules 206-222.
  • Operating system 222 controls the operation of components of computing device 202.
  • operating system 222 facilitates the communication of modules 206-222 with processors 224, one or more UI components 232 including one or more sensors 204, one or more communication units 228, and one or more communication channels 230.
  • Modules 206-222 may each include program instructions and/or data that are executable by computing device 202 (e.g., by one or more processors 224).
  • UI module 206, analysis module 208, and resolution module 210 can each include instructions that cause computing device 202 to perform one or more of the operations and actions described in the present disclosure.
  • UI module 206 may cause UI components 232 to output a GUI for display, in which user 214 of computing device 202 may view output and/or provide input at UI components 232.
  • UI module 206 and UI components 232 may receive one or more indications of input from user 214 as he or she interacts with the graphical user interface.
  • UI module 206 and UI components 232 may interpret inputs detected at UI components 232 (e.g., as user 214 provides one or more gestures at one or more locations of UI components 232 at which the graphical user interface is displayed) and may relay information about the inputs detected at UI components 232 to one or more associated platforms, operating systems, applications, and/or services executing at computing device 202 to cause computing device 202 to perform various functions.
  • UI module 206 may receive information and instructions from one or more associated platforms, operating systems, applications, and/or services executing at computing device 202 for generating a graphical user interface.
  • UI module 206 may act as an intermediary between the one or more associated platforms, operating systems, applications, and/or services executing at computing device 202 and various output devices of computing device 202 (e.g., speakers, LED indicators, audio or electrostatic haptic output devices, etc.) to produce output (e.g., a graphic, a flash of light, a sound, a haptic response, etc.) with computing device 202.
  • output devices of computing device 202 e.g., speakers, LED indicators, audio or electrostatic haptic output devices, etc.
  • User profile data store 212 may represent any suitable storage medium for storing data.
  • user profile data store 212 may store all data received by users of computing device 202 that have provided their explicit consent for computing device 202 to receive their data.
  • user profile data store 212 may be indexed by the unique user IDs or other information provided by a user input (e.g., a unique PIN, code, username, etc.) included in the plurality of user profiles.
  • computing device 202 may determine whether user 214 is an authenticated user by determining whether the one or more physiological characteristics of user 214 match one or more physiological characteristics included in a plurality of stored user profiles stored in user profile data store 212.
  • user interface module 206 of wearable computing device 202 may generate a user interface for display on user interface components 232 that prompts user 214 to confirm that they are a new user and to provide explicit consent for computing device 202 to store user 214’s biometric data in user profile data store 212.
  • computing device 202 may then create the user profile for user 214, wherein the user profile for user 214 comprises the one or more physiological characteristics of user 214, and the user profile for user 214 is stored in user profile data store 212.
  • user 214 may wear computing device 202 for a first time, in which user 214 may provide an input (e.g., a unique ID, PIN, code, username, etc.) to computing device 202 prior to accessing computing device 202 operating in the reduced access mode (e.g., GUI 101 of FIG. 1).
  • Computing device 202 may determine, based on user 214’s physiological characteristics detected by sensors 204 and user profiles stored in user profile data store 212, whether a user profile was previously created for user 214.
  • analysis module 208 may be configured to determine whether a user profile was previously created for user 214 by determining whether the one or more physiological characteristics of user 214 match one or more physiological characteristics included in a plurality of user profiles stored within user profile data store 212.
  • Analysis module 208 may receive information from one or more sensors 204 and store at least an indication of the information received from sensors 204 in user profile data store 212. Responsive to analysis module 208 determining that the information received from sensors 204 does not match any information in the stored user profiles, computing device 202 may prompt user 214 to confirm they are a new user and provide explicit consent for computing device 202 to store user 214’s data.
  • computing device 202 may prompt user 214 to confirm whether they would like to change the unique ID, PIN, code, username, etc. provided as input to access or manually unlock computing device 202.
  • computing device 202 may assign a new unique ID, PIN, code, username, etc. to user 214 that user 214 must provide as input to access or manually unlock computing device 202.
  • user 214 may then access computing device 202 operating in the increased access mode (e.g., GUI 105 of FIG. 1).
  • user 214 may only access computing device 202 operating in the reduced access mode (e.g., GUI 101 of FIG. 1).
  • sensors 204 may detect one or more physiological characteristics of user 214 and store them in user 214’s user profile.
  • Computing device 202 may further generate, based on the one or more physiological characteristics of user 214, a training biometric data set that is also stored in user 214’s user profile.
  • User 214’s user profile may further include a machine learning model that may be trained on the stored training biometric data set. Responsive to training the machine learning model with a threshold amount of the stored training biometric data set, computing device 202 may then use the machine learning model for automatically authenticating user 214, in which user 214 may access computing device 202 operating in the increased access mode (e.g., GUI 105 of FIG. 1) without having to manually unlock computing device 202.
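One plausible realization of a per-user model of this kind is a one-class classifier trained only on the enrolled user's readings. The sketch below uses scikit-learn's OneClassSVM with invented feature values and an illustrative training threshold; the disclosure does not specify a particular model or threshold.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical training set: [base heart rate, skin temperature] for one user.
rng = np.random.default_rng(0)
samples = rng.normal([62.0, 33.0], [2.0, 0.3], size=(40, 2))

TRAIN_THRESHOLD = 30  # illustrative "threshold amount" of stored biometric data
model = None
if len(samples) >= TRAIN_THRESHOLD:
    model = OneClassSVM(nu=0.1, gamma="scale").fit(samples)

# A later reading: +1 means the model attributes it to the enrolled user.
reading = np.array([[61.5, 33.2]])
if model is not None and model.predict(reading)[0] == 1:
    print("automatically authenticated: increased access mode")
else:
    print("not recognized: reduced access mode")
```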
  • Computing device 202 may be further configured to monitor information generated by sensors 204 while user 214 is wearing computing device 202.
  • analysis module 208 may monitor sensor information and store the sensor information in user profile data store 212.
  • Analysis module 208 may periodically or continually receive and store the sensor information.
  • analysis module 208 may analyze the sensor information using machine learning techniques to determine a likelihood that the sensor information corresponds to a unique user profile stored within computing device 202 and an authenticated user of computing device 202.
  • analysis module 208 may apply an analysis of the sensor data, both the sensor data currently being received as well as the previously received sensor data (e.g., stored within a memory of wearable computing device 202 and/or within user profile data store 212), and output a confidence score.
  • the analysis may be a machine learning algorithm, a rule base, a decision tree, mathematical optimization, or any other algorithm suitable for determining a likelihood that the sensor data corresponds to a unique user profile stored within computing device 202 and an authenticated user of computing device 202.
  • analysis module 208 may periodically store a determined confidence score in user profile data store 212.
  • analysis module 208 may also analyze application usage information, such as the duration, frequency, location, time, etc., of various applications installed at or otherwise executable by computing device 202. At least periodically, analysis module 208 analyzes the sensor information to determine a likelihood that the sensor information corresponds to an authenticated user of computing device 202. For example, analysis module 208 may apply an analysis of the sensor data, both the sensor data currently being received as well as the previously received sensor data (e.g., stored within user profile data store 212), and construct a confidence score.
  • analysis module 208 further includes data processing module 216, machine learning module 218, and training module 220.
  • Data processing module 216 may be configured to receive information from sensors 204 or UI components 232 and generate data that can be stored in user profile data store 212. In some examples, the data stored in user profile data store 212 may be preprocessed by data processing module 216.
  • Data processing module 216 may be configured as a module for processing data stored in user profile data store 212 prior to analysis module 208 sending the data to other components or modules of computing device 202 and/or implementing training module 220 and machine learning module 218.
  • Machine learning module 218 may include one or more machine learning algorithms for determining a likelihood that the sensor data corresponds to an authenticated user of computing device 202.
  • Machine learning module 218 may further be trained over time by training module 220, in which training module 220 may use historical user data stored in user profile data store 212 to train and test the one or more machine learning models.
  • a single machine learning model may exist for each user profile stored in user profile data store 212. In this way, computing device 202 may apply different machine learning models to different users of computing device 202, in which each of the different machine learning models are trained on the data received from a single user.
  • the output of the machine learning module 218 may include a confidence score.
  • Analysis module 208 may determine whether the confidence score satisfies a confidence score threshold, and responsive to determining that the confidence score satisfies the confidence score threshold, analysis module 208 may further determine that user 214 is included in the set of authenticated users.
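The threshold test itself reduces to a simple comparison; the 0.9 value below is an invented placeholder, as the disclosure does not specify a particular threshold.

```python
def is_authenticated(confidence_score, threshold=0.9):
    """True when the model's confidence score satisfies the threshold."""
    return confidence_score >= threshold

print(is_authenticated(0.97))  # True: user 214 treated as authenticated
print(is_authenticated(0.42))  # False: device remains in reduced access mode
```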
  • machine learning module 218 includes a machine-learned model trained to receive input data of one or more types and, in response, provide output data of one or more types.
  • the input data may include one or more features that are associated with an instance or an example.
  • the one or more features associated with the instance or example can be organized into a feature vector.
  • the output data can include one or more predictions. Predictions can also be referred to as inferences.
  • machine learning module 218 can output a prediction for such instance based on the features.
  • Machine learning module 218 can be or include one or more of various different types of machine-learned models. In particular, in some implementations, machine learning module 218 can perform classification, regression, clustering, anomaly detection, recommendation generation, and/or other tasks.
  • machine learning module 218 can perform various types of classification based on the input data.
  • machine learning module 218 can perform binary classification or multiclass classification.
  • In binary classification, the output data can include a classification of the input data into one of two different classes.
  • In multiclass classification, the output data can include a classification of the input data into one (or more) of more than two classes.
  • the classifications can be single label or multi-label.
  • Machine learning module 218 may perform discrete categorical classification in which the input data is simply classified into one or more classes or categories.
  • machine learning module 218 can perform classification in which machine learning module 218 provides, for each of one or more classes, a numerical value descriptive of a degree to which it is believed that the input data should be classified into the corresponding class.
  • the numerical values provided by machine learning module 218 can be referred to as “confidence scores” that are indicative of a respective confidence associated with classification of the input into the respective class.
  • the confidence scores can be compared to one or more thresholds to render a discrete categorical prediction. In some implementations, only a certain number of classes (e.g., one) with the relatively largest confidence scores can be selected to render a discrete categorical prediction.
  • Machine learning module 218 may output a probabilistic classification. For example, machine learning module 218 may predict, given a sample input, a probability distribution over a set of classes. Thus, rather than outputting only the most likely class to which the sample input should belong, machine learning module 218 can output, for each class, a probability that the sample input belongs to such class. In some implementations, the probability distribution over all possible classes can sum to one. In some implementations, a Softmax function, or other type of function or layer can be used to squash a set of real values respectively associated with the possible classes to a set of real values in the range (0, 1) that sum to one.
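For reference, a standard softmax of the kind mentioned above can be written in a few lines; the example scores are arbitrary.

```python
import numpy as np

def softmax(z):
    """Squash real-valued scores into probabilities in (0, 1) that sum to one."""
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 0.5, -1.0])  # per-class scores, e.g., one per user profile
print(softmax(scores))               # approx. [0.79, 0.18, 0.04]
print(softmax(scores).sum())         # 1.0
```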
  • the probabilities provided by the probability distribution can be compared to one or more thresholds to render a discrete categorical prediction.
  • machine learning module 218 may be trained using supervised learning techniques. For example, machine learning module 218 may be trained on a training dataset that includes training examples labeled as belonging (or not belonging) to one or more classes.
  • machine learning module 218 can perform regression to provide output data in the form of a continuous numeric value.
  • the continuous numeric value can correspond to any number of different metrics or numeric representations, including, for example, currency values, scores, or other numeric representations.
  • machine learning module 218 can perform linear regression, polynomial regression, or nonlinear regression.
  • machine learning module 218 can perform simple regression or multiple regression.
  • a Softmax function or other function or layer can be used to squash a set of real values respectively associated with two or more possible classes to a set of real values in the range (0, 1) that sum to one.
  • Machine learning module 218 may perform various types of clustering. For example, machine learning module 218 can identify one or more previously-defined clusters to which the input data most likely corresponds. Machine learning module 218 may identify one or more clusters within the input data. That is, in instances in which the input data includes multiple objects, documents, or other entities, machine learning module 218 can sort the multiple entities included in the input data into a number of clusters. In some implementations in which machine learning module 218 performs clustering, machine learning module 218 can be trained using unsupervised learning techniques.
  • Machine learning module 218 may perform anomaly detection or outlier detection. For example, machine learning module 218 can identify input data that does not conform to an expected pattern or other characteristic (e.g., as previously observed from previous input data). As examples, the anomaly detection can be used for fraud detection or system failure detection.
  • machine learning module 218 can provide output data in the form of one or more recommendations.
  • machine learning module 218 can be included in a recommendation system or engine.
  • machine learning module 218 can output a suggestion or recommendation of one or more additional entities that, based on the previous outcomes, are expected to have a desired outcome (e.g., elicit a score, ranking, or rating indicative of success or enjoyment).
  • Machine learning module 218 may, in some cases, act as an agent within an environment. For example, machine learning module 218 can be trained using reinforcement learning, which will be discussed in further detail below.
  • machine learning module 218 can be a parametric model while, in other implementations, machine learning module 218 can be a non-parametric model. In some implementations, machine learning module 218 can be a linear model while, in other implementations, machine learning module 218 can be a non-linear model.
  • machine learning module 218 can be or include one or more of various different types of machine-learned models. Examples of such different types of machine-learned models are provided below for illustration. One or more of the example models described below can be used (e.g., combined) to provide the output data in response to the input data. Additional models beyond the example models provided below can be used as well.
  • machine learning module 218 can be or include one or more classifier models such as, for example, linear classification models; quadratic classification models; etc.
  • Machine learning module 218 may be or include one or more regression models such as, for example, simple linear regression models; multiple linear regression models; logistic regression models; stepwise regression models; multivariate adaptive regression splines; locally estimated scatterplot smoothing models; etc.
  • machine learning module 218 can be or include one or more decision tree-based models such as, for example, classification and/or regression trees; iterative dichotomiser 3 decision trees; C4.5 decision trees; chi-squared automatic interaction detection decision trees; decision stumps; conditional decision trees; etc.
  • Machine learning module 218 may be or include one or more kernel machines.
  • machine learning module 218 can be or include one or more support vector machines.
  • Machine learning module 218 may be or include one or more instance-based learning models such as, for example, learning vector quantization models; self-organizing map models; locally weighted learning models; etc.
  • machine learning module 218 can be or include one or more nearest neighbor models such as, for example, k-nearest neighbor classification models; k-nearest neighbors regression models; etc.
  • Machine learning module 218 can be or include one or more Bayesian models such as, for example, naive Bayes models; Gaussian naive Bayes models; multinomial naive Bayes models; averaged one-dependence estimators; Bayesian networks; Bayesian belief networks; hidden Markov models; etc.
  • machine learning module 218 can be or include one or more artificial neural networks (also referred to simply as neural networks).
  • a neural network can include a group of connected nodes, which also can be referred to as neurons or perceptrons.
  • a neural network can be organized into one or more layers. Neural networks that include multiple layers can be referred to as “deep” networks.
  • a deep network can include an input layer, an output layer, and one or more hidden layers positioned between the input layer and the output layer. The nodes of the neural network can be fully connected or non-fully connected.
  • Machine learning module 218 can be or include one or more feed forward neural networks.
  • In feed forward networks, the connections between nodes do not form a cycle.
  • each connection can connect a node from an earlier layer to a node from a later layer.
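A minimal feed forward network with the structure just described (input layer, hidden layers, output layer, no cycles) can be sketched in PyTorch as follows; the layer sizes and two-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal "deep" feed forward network: every connection goes to a later layer.
model = nn.Sequential(
    nn.Linear(8, 16),   # input layer: 8 hypothetical biometric features
    nn.ReLU(),
    nn.Linear(16, 16),  # hidden layer
    nn.ReLU(),
    nn.Linear(16, 2),   # output layer: enrolled user vs. not
)

x = torch.randn(1, 8)                  # one feature vector
print(torch.softmax(model(x), dim=1))  # probabilities over the two classes
```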
  • machine learning module 218 can be or include one or more recurrent neural networks.
  • at least some of the nodes of a recurrent neural network can form a cycle.
  • Recurrent neural networks can be especially useful for processing input data that is sequential in nature.
  • a recurrent neural network can pass or retain information from a previous portion of the input data sequence to a subsequent portion of the input data sequence through the use of recurrent or directed cyclical node connections.
  • sequential input data can include time-series data (e.g., sensor data versus time or imagery captured at different times).
  • a recurrent neural network can analyze sensor data versus time to detect or predict a swipe direction, to perform handwriting recognition, etc.
  • Sequential input data may include words in a sentence (e.g., for natural language processing, speech detection or processing, etc.); notes in a musical composition; sequential actions taken by a user (e.g., to detect or predict sequential application usage); sequential object states; etc.
  • Example recurrent neural networks include long short-term memory (LSTM) recurrent neural networks; gated recurrent units; bi-directional recurrent neural networks; continuous time recurrent neural networks; neural history compressors; echo state networks; Elman networks; Jordan networks; recursive neural networks; Hopfield networks; fully recurrent networks; sequence-to-sequence configurations; etc.
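As a sketch of how a recurrent model could consume sequential sensor data of this kind, the following PyTorch snippet classifies a window of time-series readings with an LSTM; the channel count, sequence length, and hidden size are invented for illustration.

```python
import torch
import torch.nn as nn

# LSTM over a window of sequential sensor data (batch, time, channels).
lstm = nn.LSTM(input_size=3, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)  # two classes: enrolled user vs. not

seq = torch.randn(1, 100, 3)        # 100 time steps of 3 sensor channels
_, (h_n, _) = lstm(seq)             # h_n: final hidden state, shape (1, 1, 32)
logits = head(h_n[-1])              # classify from the last hidden state
print(torch.softmax(logits, dim=1))
```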
  • machine learning module 218 can be or include one or more convolutional neural networks.
  • a convolutional neural network can include one or more convolutional layers that perform convolutions over input data using learned filters.
  • Filters can also be referred to as kernels.
  • Convolutional neural networks can be especially useful for vision problems such as when the input data includes imagery such as still images or video. However, convolutional neural networks can also be applied for natural language processing.
  • machine learning module 218 can be or include one or more generative networks such as, for example, generative adversarial networks.
  • Generative networks can be used to generate new data such as new images or other content.
  • Machine learning module 218 may be or include an autoencoder.
  • the aim of an autoencoder is to learn a representation (e.g., a lower- dimensional encoding) for a set of data, typically for the purpose of dimensionality reduction.
  • an autoencoder can seek to encode the input data and then provide output data that reconstructs the input data from the encoding.
  • the autoencoder concept has become more widely used for learning generative models of data.
  • the autoencoder can include additional losses beyond reconstructing the input data.
  • Machine learning module 218 may be or include one or more other forms of artificial neural networks such as, for example, deep Boltzmann machines; deep belief networks; stacked autoencoders; etc. Any of the neural networks described herein can be combined (e.g., stacked) to form more complex networks.
  • One or more neural networks can be used to provide an embedding based on the input data.
  • the embedding can be a representation of knowledge abstracted from the input data into one or more learned dimensions.
  • embeddings can be a useful source for identifying related entities.
  • embeddings can be extracted from the output of the network, while in other instances embeddings can be extracted from any hidden node or layer of the network (e.g., a close to final but not final layer of the network).
  • Embeddings can be useful for performing auto suggest next video, product suggestion, entity or object recognition, etc.
  • embeddings can be useful inputs for downstream models. For example, embeddings can be useful to generalize input data (e.g., search queries) for a downstream model or processing system.
  • Machine learning module 218 may include one or more clustering models such as, for example, k-means clustering models; k-medians clustering models; expectation maximization models; hierarchical clustering models; etc.
  • machine learning module 218 can perform one or more dimensionality reduction techniques such as, for example, principal component analysis; kernel principal component analysis; graph-based kernel principal component analysis; principal component regression; partial least squares regression; Sammon mapping; multidimensional scaling; projection pursuit; linear discriminant analysis; mixture discriminant analysis; quadratic discriminant analysis; generalized discriminant analysis; flexible discriminant analysis; autoencoding; etc.
  • machine learning module 218 can perform or be subjected to one or more reinforcement learning techniques such as Markov decision processes; dynamic programming; Q functions or Q-learning; value function approaches; deep Q-networks; differentiable neural computers; asynchronous advantage actor-critics; deterministic policy gradient; etc.
  • machine learning module 218 can be an autoregressive model.
  • an autoregressive model can specify that the output data depends linearly on its own previous values and on a stochastic term.
  • an autoregressive model can take the form of a stochastic difference equation.
  • One example autoregressive model is WaveNet, which is a generative model for raw audio.
  • machine learning module 218 can include or form part of a multiple model ensemble.
  • bootstrap aggregating can be performed, which can also be referred to as “bagging.”
  • In bootstrap aggregating, a training dataset is split into a number of subsets (e.g., through random sampling with replacement) and a plurality of models are respectively trained on the number of subsets.
  • respective outputs of the plurality of models can be combined (e.g., through averaging, voting, or other techniques) and used as the output of the ensemble.
  • One example ensemble is a random forest, which can also be referred to as a random decision forest. Random forests are an ensemble learning technique for classification, regression, and other tasks.
  • Random forests are generated by producing a plurality of decision trees at training time.
  • the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees can be used as the output of the forest. Random decision forests can correct for decision trees’ tendency to overfit their training set.
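A random forest of the kind described is readily sketched with scikit-learn; the synthetic two-class data below merely stands in for "enrolled user vs. not".

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data for a two-class authentication-style problem.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging of many decision trees; the forest votes across its trees.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {forest.score(X_te, y_te):.2f}")
```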
  • Another example ensemble technique is stacking, which can, in some instances, be referred to as stacked generalization. Stacking includes training a combiner model to blend or otherwise combine the predictions of several other machine-learned models. Thus, a plurality of machine-learned models (e.g., of same or different type) can be trained based on training data.
  • Boosting can include incrementally building an ensemble by iteratively training weak models and then adding to a final strong model. For example, in some instances, each new model can be trained to emphasize the training examples that previous models misinterpreted (e.g., misclassified). For example, a weight associated with each of such misinterpreted examples can be increased.
  • One example boosting technique is AdaBoost, which can also be referred to as Adaptive Boosting.
  • Other example boosting techniques include LPBoost; TotalBoost; BrownBoost; xgboost; MadaBoost; LogitBoost; gradient boosting; etc.
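Boosting as described (iteratively reweighting misclassified examples and adding weak learners) can likewise be sketched with scikit-learn's AdaBoostClassifier on the same kind of synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Each round upweights previously misclassified examples and adds a weak learner.
boosted = AdaBoostClassifier(n_estimators=50, random_state=1).fit(X_tr, y_tr)
print(f"held-out accuracy: {boosted.score(X_te, y_te):.2f}")
```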
  • an ensemble can include a top level machine-learned model or a heuristic function to combine and/or weight the outputs of the models that form the ensemble.
  • machine learning module 218 can be used to preprocess the input data for subsequent input into another model.
  • machine learning module 218 can perform dimensionality reduction techniques and embeddings (e.g., matrix factorization, principal components analysis, singular value decomposition, word2vec/GLOVE, and/or related approaches); clustering; and even classification and regression for downstream consumption. Many of these techniques have been discussed above and will be further discussed below.
  • machine learning module 218 can be trained or otherwise configured to receive the input data and, in response, provide the output data.
  • the input data can include different types, forms, or variations of input data.
  • the input data can include features that describe the content (or portion of content) initially selected by the user, e.g., content of user-selected document or image, links pointing to the user selection, links within the user selection relating to other files available on device or cloud, metadata of user selection, etc.
  • the input data includes the context of user usage, either obtained from app itself or from other sources. Examples of usage context include breadth of share (sharing publicly, or with a large group, or privately, or a specific person), context of share, etc.
  • additional input data can include the state of the device, e.g., the location of the device, the apps running on the device, etc.
  • machine learning module 218 can receive and use the input data in its raw form.
  • the raw input data can be preprocessed.
  • machine learning module 218 can receive and use the preprocessed input data.
  • preprocessing the input data can include extracting one or more additional features from the raw input data.
  • feature extraction techniques can be applied to the input data to generate one or more new, additional features.
• Example feature extraction techniques include edge detection; corner detection; blob detection; ridge detection; scale-invariant feature transform; motion detection; optical flow; Hough transform; etc.
  • the extracted features can include or be derived from transformations of the input data into other domains and/or dimensions.
  • the extracted features can include or be derived from transformations of the input data into the frequency domain. For example, wavelet transformations and/or fast Fourier transforms can be performed on the input data to generate additional features.
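• a minimal sketch of frequency-domain feature extraction, assuming a synthetic one-dimensional sensor signal and an assumed 100 Hz sample rate:

```python
# Minimal sketch: frequency-domain feature extraction (illustrative only).
# A raw signal is transformed with a fast Fourier transform, and the
# dominant frequency magnitudes are kept as additional model features.
import numpy as np

fs = 100.0                               # assumed sample rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)          # two seconds of samples
signal = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

# Keep the k strongest frequency components (frequency, magnitude) as features.
k = 3
top = np.argsort(spectrum)[-k:]
features = np.stack([freqs[top], spectrum[top]], axis=1)
print(features)
```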
  • the extracted features can include statistics calculated from the input data or certain portions or dimensions of the input data.
  • Example statistics include the mode, mean, maximum, minimum, or other metrics of the input data or portions thereof.
  • the input data can be sequential in nature.
  • the sequential input data can be generated by sampling or otherwise segmenting a stream of input data.
  • frames can be extracted from a video.
  • sequential data can be made non-sequential through summarization.
  • portions of the input data can be imputed.
  • additional synthetic input data can be generated through interpolation and/or extrapolation.
  • some or all of the input data can be scaled, standardized, normalized, generalized, and/or regularized.
• Example regularization techniques include ridge regression; least absolute shrinkage and selection operator (LASSO); elastic net; least-angle regression; cross-validation; L1 regularization; L2 regularization; etc.
  • some or all of the input data can be normalized by subtracting the mean across a given dimension's feature values from each individual feature value and then dividing by the standard deviation or other metric.
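• a minimal sketch of that per-dimension normalization, using a small made-up feature matrix:

```python
# Minimal sketch: per-dimension z-score normalization (illustrative only).
import numpy as np

X = np.array([[1.0, 200.0], [2.0, 180.0], [3.0, 220.0]])

# Subtract each dimension's mean from its feature values, then divide by
# that dimension's standard deviation.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_norm)
```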
• some or all of the input data can be quantized or discretized.
  • qualitative features or variables included in the input data can be converted to quantitative features or variables.
  • one hot encoding can be performed.
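• a minimal one-hot encoding sketch; the "wear_position" variable and its category values are hypothetical stand-ins for a qualitative input feature:

```python
# Minimal sketch: converting a qualitative variable via one-hot encoding
# (illustrative only; the category values here are made up).
import pandas as pd

wear_position = pd.Series(["wrist", "ankle", "wrist", "chest"])
# Each category becomes its own indicator column.
print(pd.get_dummies(wear_position))
```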
  • dimensionality reduction techniques can be applied to the input data prior to input into machine learning module 218.
  • dimensionality reduction techniques including, for example, principal component analysis; kernel principal component analysis; graph-based kernel principal component analysis; principal component regression; partial least squares regression; Sammon mapping; multidimensional scaling; projection pursuit; linear discriminant analysis; mixture discriminant analysis; quadratic discriminant analysis; generalized discriminant analysis; flexible discriminant analysis; autoencoding; etc.
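• a minimal dimensionality-reduction sketch using principal component analysis on random stand-in data (scikit-learn is used purely for illustration):

```python
# Minimal sketch: dimensionality reduction with principal component analysis
# (illustrative only).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))       # 200 samples, 10 raw features

pca = PCA(n_components=3)            # project onto 3 principal components
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)               # (200, 3)
print(pca.explained_variance_ratio_)
```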
  • the input data can be intentionally deformed in any number of ways to increase model robustness, generalization, or other qualities.
  • Example techniques to deform the input data include adding noise; changing color, shade, or hue; magnification; segmentation; amplification; etc.
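• a minimal sketch of deforming input data by adding noise; the noise scale and number of noisy copies are arbitrary choices:

```python
# Minimal sketch: deforming input data with additive noise to improve
# robustness (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))

def augment_with_noise(X, scale=0.05, copies=4, rng=rng):
    """Return the original rows plus noisy copies of each row."""
    noisy = [X + rng.normal(scale=scale, size=X.shape) for _ in range(copies)]
    return np.vstack([X, *noisy])

X_aug = augment_with_noise(X)
print(X_aug.shape)  # (500, 8): originals plus 4 noisy copies
```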
  • machine learning module 218 can provide the output data.
  • the output data can include different types, forms, or variations of output data.
  • the output data can include content, either stored locally on the user device or in the cloud, that is relevantly shareable along with the initial content selection.
  • the output data can include various types of classification data (e.g., binary classification, multiclass classification, single label, multi- label, discrete classification, regressive classification, probabilistic classification, etc.) or can include various types of regressive data (e.g., linear regression, polynomial regression, nonlinear regression, simple regression, multiple regression, etc.).
  • the output data can include clustering data, anomaly detection data, recommendation data, or any of the other forms of output data discussed above.
  • the output data can influence downstream processes or decision making.
  • the output data can be interpreted and/or acted upon by a rules-based regulator.
• the present disclosure provides techniques that include or otherwise leverage one or more machine-learned models to suggest content, either stored locally on the user's device or in the cloud, that is relevantly shareable along with the initial content selection based on features of the initial content selection.
  • Any of the different types or forms of input data described above can be combined with any of the different types or forms of machine-learned models described above to provide any of the different types or forms of output data described above.
  • Example computing devices include user computing devices (e.g., laptops, desktops, and mobile computing devices such as tablets, smartphones, wearable computing devices, etc.); embedded computing devices (e.g., devices embedded within a vehicle, camera, image sensor, industrial machine, satellite, gaming console or controller, or home appliance such as a refrigerator, thermostat, energy meter, home energy manager, smart home assistant, etc.); server computing devices (e.g., database servers, parameter servers, file servers, mail servers, print servers, web servers, game servers, application servers, etc.); dedicated, specialized model processing or training devices; virtual computing devices; other computing devices or computing infrastructure; or combinations thereof.
  • computing device 202 may communicate over a network with an example server computing system that includes a machine-learned model.
  • a server device may store and implement machine learning module 218.
  • output data obtained through machine learning module 218 at a server device can be used to improve other server tasks or can be used by other non-user devices to improve services performed by or for such other non-user devices.
  • the output data can improve other downstream processes performed by server device for a computing device of a user or embedded computing device.
  • output data obtained through implementation of machine learning module 218 at a server device can be sent to and used by a user computing device, such as computing device 202, an embedded computing device.
  • the server device can be said to perform machine learning as a service.
  • different respective portions of machine learning module 218 can be stored at and/or implemented by some combination of a user computing device; an embedded computing device; a server computing device; etc. In other words, portions of machine learning module 218 may be distributed in whole or in part amongst computing device 202 and a server device.
  • Computing device 202 and/or the server device may perform graph processing techniques or other machine learning techniques using one or more machine learning platforms, frameworks, and/or libraries, such as, for example, TensorFlow, Caffe/Caffe2, Theano, Torch/PyTorch, MXnet, CNTK, etc.
  • Computing device 202 and/or the server device may be distributed at different physical locations and connected via one or more networks. If configured as distributed computing devices, computing device 202 and/or the server device may operate according to sequential computing architectures, parallel computing architectures, or combinations thereof. In one example, distributed computing devices can be controlled or guided through use of a parameter server.
• multiple instances of machine learning module 218 can be parallelized to provide increased processing throughput.
  • the multiple instances of machine learning module 218 can be parallelized on a single processing device or computing device or parallelized across multiple processing devices or computing devices.
  • Each computing device that implements machine learning module 218 or other aspects of the present disclosure can include a number of hardware components that enable performance of the techniques described herein.
  • each computing device can include one or more memory devices that store some or all of machine learning module 218.
  • machine learning module 218 can be a structured numerical representation that is stored in memory.
  • the one or more memory devices can also include instructions for implementing machine learning module 218 or performing other operations.
  • Example memory devices include RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof.
  • Each computing device can also include one or more processing devices that implement some or all of machine learning module 218 and/or perform other related operations.
  • Example processing devices include one or more of: a central processing unit (CPU); a visual processing unit (VPU); a graphics processing unit (GPU); a tensor processing unit (TPU); a neural processing unit (NPU); a neural processing engine; a core of a CPU, VPU, GPU, TPU, NPU or other processing device; an application specific integrated circuit (ASIC); a field programmable gate array (FPGA); a co-processor; a controller; or combinations of the processing devices described above.
  • Processing devices can be embedded within other hardware components such as, for example, an image sensor, accelerometer, etc.
  • Machine learning module 218 described herein can be trained with training module 220 and then provided for storage and/or implementation at one or more computing devices, such as computing device 202.
  • training module 220 executes locally at computing device 202.
  • training module 220 can be separate from computing device 202 or any other computing device that implements machine learning module 218.
  • machine learning module 218 may be trained in an offline fashion or an online fashion.
• in offline training (also known as batch learning), machine learning module 218 is trained on the entirety of a static set of training data.
• in online training, machine learning module 218 is continuously trained (or re-trained) as new training data becomes available (e.g., while the model is used to perform inference).
  • Training module 220 may perform centralized training of machine learning module 218 (e.g., based on a centrally stored dataset).
  • decentralized training techniques such as distributed training, federated learning, or the like can be used to train, update, or personalize machine learning module 218.
  • Machine learning module 218 described herein can be trained according to one or more of various different training types or techniques.
  • machine learning module 218 can be trained by training module 220 using supervised learning, in which machine learning module 218 is trained on a training dataset that includes instances or examples that have labels.
• the labels can be manually applied by experts, generated through crowd-sourcing, or provided by other techniques (e.g., by physics-based or complex mathematical models).
  • the training examples can be provided by the user computing device. In some implementations, this process can be referred to as personalizing the model.
  • Training data used by training module 220 can include, upon user permission for use of such data for training, anonymized usage logs of sharing flows, e.g., content items that were shared together, bundled content pieces already identified as belonging together, e.g., from entities in a knowledge graph, etc.
  • training data can include examples of input data that have been assigned labels that correspond to output data.
  • machine learning module 218 can be trained by optimizing an objective function.
  • the objective function may be or include a loss function that compares (e.g., determines a difference between) output data generated by the model from the training data and labels (e.g., ground-truth labels) associated with the training data.
  • the loss function can evaluate a sum or mean of squared differences between the output data and the labels.
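• a minimal sketch of such a loss function, evaluating the mean of squared differences between output data and ground-truth labels:

```python
# Minimal sketch: a mean-squared-error loss comparing model outputs with
# ground-truth labels (illustrative only).
import numpy as np

def mse_loss(outputs, labels):
    """Mean of squared differences between outputs and labels."""
    outputs = np.asarray(outputs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return np.mean((outputs - labels) ** 2)

print(mse_loss([0.9, 0.2, 0.4], [1.0, 0.0, 0.5]))  # 0.02
```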
  • the objective function may be or include a cost function that describes a cost of a certain outcome or output data.
• other example objective functions include margin-based techniques such as, for example, triplet loss or maximum-margin training.
  • optimization techniques can be performed to optimize an objective function.
  • the optimization technique(s) can minimize or maximize the objective function.
  • Example optimization techniques include Hessian-based techniques and gradient-based techniques, such as, for example, coordinate descent; gradient descent (e.g., stochastic gradient descent); subgradient techniques; etc.
  • Other optimization techniques include black box optimization techniques and heuristics.
• backward propagation of errors can be used in conjunction with an optimization technique (e.g., gradient-based techniques) to train machine learning module 218 (e.g., when the machine-learned model is a multi-layer model such as an artificial neural network).
  • an iterative cycle of propagation and model parameter (e.g., weights) update can be performed to train machine learning module 218.
• Example backpropagation techniques include truncated backpropagation through time, Levenberg-Marquardt backpropagation, etc.
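• a minimal sketch of backpropagation paired with stochastic gradient descent on a small multi-layer network, using PyTorch purely for illustration and random stand-in data:

```python
# Minimal sketch: training a small multi-layer network with backpropagation
# and stochastic gradient descent (illustrative only).
import torch
from torch import nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

X = torch.randn(64, 8)   # random stand-in input data
y = torch.randn(64, 1)   # random stand-in labels

# Iterative cycle of propagation and model parameter (weight) updates.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass and loss evaluation
    loss.backward()               # backward propagation of errors
    optimizer.step()              # gradient-based parameter update
```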
  • machine learning module 218 described herein can be trained using unsupervised learning techniques.
  • Unsupervised learning can include inferring a function to describe hidden structure from unlabeled data. For example, a classification or categorization may not be included in the data.
  • Unsupervised learning techniques can be used to produce machine-learned models capable of performing clustering, anomaly detection, learning latent variable models, or other tasks.
  • Machine learning module 218 can be trained using semi-supervised techniques which combine aspects of supervised learning and unsupervised learning.
  • Machine learning module 218 can be trained or otherwise generated through evolutionary techniques or genetic algorithms.
  • machine learning module 218 described herein can be trained using reinforcement learning.
• in reinforcement learning, an agent (e.g., a model) can take actions in an environment and learn to maximize a reward signal resulting from those actions.
  • Reinforcement learning can differ from the supervised learning problem in that correct input/output pairs are not presented, nor sub-optimal actions explicitly corrected.
  • one or more generalization techniques can be performed during training to improve the generalization of machine learning module 218.
  • Example generalization techniques can help reduce overfitting of machine learning module 218 to the training data.
  • Example generalization techniques include dropout techniques; weight decay techniques; batch normalization; early stopping; subset selection; stepwise selection; etc.
  • machine learning module 218 described herein can include or otherwise be impacted by a number of hyperparameters, such as, for example, learning rate, number of layers, number of nodes in each layer, number of leaves in a tree, number of clusters; etc.
  • Hyperparameters can affect model performance. Hyperparameters can be hand selected or can be automatically selected through application of techniques such as, for example, grid search; black box optimization techniques (e.g., Bayesian optimization, random search, etc.); gradient-based optimization; etc.
  • Example techniques and/or tools for performing automatic hyperparameter optimization include Hyperopt; Auto-WEKA; Spearmint; Metric Optimization Engine (MOE); etc.
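• a minimal grid-search sketch (scikit-learn's GridSearchCV is used purely for illustration; the hyperparameter grid is arbitrary):

```python
# Minimal sketch: hyperparameter selection via grid search with
# cross-validation (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5, None]},
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_)
```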
  • various techniques can be used to optimize and/or adapt the learning rate when the model is trained.
  • Example techniques and/or tools for performing learning rate optimization or adaptation include Adagrad; Adaptive Moment Estimation (ADAM); Adadelta; RMSprop; etc.
  • transfer learning techniques can be used to provide an initial model from which to begin training of machine learning module 218 described herein.
  • machine learning module 218 described herein can be included in different portions of computer-readable code on a computing device.
  • machine learning module 218 can be included in a particular application or program and used (e.g., exclusively) by such particular application or program.
  • a computing device can include a number of applications and one or more of such applications can contain its own respective machine learning library and machine-learned model(s).
  • machine learning module 218 described herein can be included in an operating system of a computing device (e.g., in a central intelligence layer of an operating system) and can be called or otherwise used by one or more applications that interact with the operating system.
  • each application can communicate with the central intelligence layer (and model(s) stored therein) using an application programming interface (API) (e.g., a common, public API across all applications).
  • the central intelligence layer can communicate with a central device data layer.
  • the central device data layer can be a centralized repository of data for the computing device.
  • the central device data layer can communicate with a number of other components of the computing device, such as, for example, one or more sensors, a context manager, a device state component, and/or additional components.
  • the central device data layer can communicate with each device component using an API (e.g., a private API).
  • Databases and applications can be implemented on a single system or distributed across multiple systems. Distributed components can operate sequentially or in parallel.
  • machine learning techniques described herein are readily interchangeable and combinable. Although certain example techniques have been described, many others exist and can be used in conjunction with aspects of the present disclosure.
  • a first set of physical characteristics may include one or more physiological characteristics of the user 214 detected by sensors 204.
  • Computing device 202 may determine, based on the sensor data generated by sensors 204, whether computing device 202 is being worn.
• computing device 202 may then apply machine learning module 218 to a second set of physical characteristics detected by sensors 204 to generate an output.
  • Computing device 202 may then determine, based on the output of machine learning module 218, whether user 214 is included in a set of authenticated users. Responsive to determining that user 214 is included in the set of authenticated users, computing device 202 may then automatically transition from operating in the reduced access mode to operating in the increased access mode, wherein, while operating in the increased access mode, additional functionality of the computing device 202 is accessible to user 214 that is not accessible to user 214 while computing device 202 is operating in the reduced access mode.
  • resolution module 210 may be configured to determine whether or not user 214 is authenticated based on the confidence score outputted by machine learning module 218, additional information received from one or more of modules 206-222, and/or information stored by user profile data store 212.
  • Resolution module 210 may require the confidence score to satisfy a strict threshold (i.e., require a higher likelihood or confidence that computing device 202 is being used by an authenticated user before computing device 202 transitions from the reduced access mode to an increased access mode).
  • analysis module 208 may output a confidence score to resolution module 210, and responsive to resolution module 210 determining that the confidence score satisfies a specified confidence score threshold, user 214 may be considered an authenticated user and use computing device 202 in the increased access mode.
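• a hypothetical sketch of that confidence-score check; the function name, threshold value, and return labels are illustrative and not taken from this disclosure:

```python
# Hypothetical sketch of the confidence-score check described above; the
# function name, default threshold, and return labels are illustrative.
def resolve_authentication(confidence_score: float,
                           threshold: float = 0.9) -> str:
    """Authenticate when the model's confidence satisfies the threshold."""
    if confidence_score >= threshold:
        return "authenticated"          # increased access mode permitted
    return "reauthentication_required"  # e.g., prompt a security challenge

print(resolve_authentication(0.95))  # authenticated
print(resolve_authentication(0.40))  # reauthentication_required
```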
  • resolution module 210 may alter the confidence score threshold based on the amount of user data that training module 220 has used to train a machine learning model.
  • training module 220 may continuously train a machine learning model included in machine learning module 218 until the machine learning model is trained on a threshold amount of training data.
  • analysis module 208 may periodically store confidence scores determined by machine learning module 218 in user profile data store 212.
  • resolution module 210 may query user profile data store 212 to retrieve past user data for comparison to the current user data in order to determine whether the current user data is typical.
  • the past user data may include past biometric information, location information, date and time information, etc.
  • Resolution module 210 may compare such information to information received from one or more of modules 206-222 as well as the confidence score received from analysis module 208 in order to determine whether to authenticate, reject, or require reauthentication (including which level of reauthentication is required, such as low or high security level reauthentication).
  • an owner or authoritative user of computing device 202 may select a confidence score threshold.
  • resolution module 210 may compare a confidence score determined by analysis module 208 and a selected confidence score (e.g., stored in user profile data store 212, stored in a cloud computing system, etc.) in order to determine that user 214 is authenticated, rejected, or requires reauthentication.
  • the owner of computing device 202 that desires to minimize the potential for access to sensitive information by unauthenticated persons may select a high confidence score, and resolution module 210 may only authenticate user 214 if the confidence score determined by analysis module 208 is above the selected high confidence score.
  • resolution module 210 may further determine if a higher level of reauthentication (i.e., greater security measure) is required, which may detract from the user experience, or a lower level of reauthentication (i.e., lower security measure) is required, which may result in a smoother user experience. In order to satisfy the lower level reauthentication requirement, resolution module 210 may use less secure data, such as GPS location information, network neighborhood information determined using Wi-Fi, etc.
• resolution module 210 may use more reliable data for particularly identifying user 214, such as the one or more physiological characteristics of user 214, passwords, PIN patterns, visual data for facial recognition, motion data (e.g., when requiring the user to perform a particular gesture using computing device 202), etc. While the various types of data are described as being used for lower level or higher level reauthentication requirements, any of the various types of data may be used for either or both levels of reauthentication requirement, and a user may configure which types of data may be used for each level of reauthentication requirement. In some examples, a security challenge required to reauthenticate the user may be performed using computing device 202.
• user 214 may enter a password, which is provided to resolution module 210, by providing input to a user interface generated by user interface module 206 for display on user interface component 232.
  • user 214 may place his/her finger on a sensor 204 of computing device 202, and computing device 202 may generate the fingerprint biometric information and provide it to resolution module 210.
  • reauthentication processes may be performed by analysis module 208 and resolution module 210.
  • resolution module 210 may receive a current location of computing device 202 from data processing module 216 and compare the current location to previous locations of computing device 202 retrieved from user profile data store 212. If the current location does not correspond to a location computing device 202 previously visited or infrequently visited as determined based on the previous location information, resolution module 210 may increase the confidence score threshold, thus making it less likely that user 214 will be authenticated without at least some level of reauthentication.
• data processing module 216 may receive image data captured by one of UI components 232 (e.g., video data, still image data, etc. captured by a camera) and determine if the image data includes one or more individuals. In some examples, data processing module 216 may determine if the image data includes one or more faces. If the image data includes the face of an authenticated user, data processing module 216 may determine that the authenticated user is currently using computing device 202. If the image data does not include the face of an authenticated user, data processing module 216 may determine that an authenticated user is not currently using computing device 202. In either instance, data processing module 216 may provide a result of the determination to resolution module 210. Resolution module 210 may decrease the confidence score threshold in response to data processing module 216 determining that an authenticated user is currently using computing device 202 and vice versa.
  • resolution module 210 may cause UI module 206 to output instructions for user 214 of computing device 202 to complete a security challenge and how to complete the security challenge.
  • user 214 may be required to submit to a facial recognition process, provide a fingerprint for fingerprint authentication, enter a passcode, perform an input pattern, provide a voice sample for voice authentication, move computing device 202 in a particular pattern, etc.
  • resolution module 210 may use information from one or more of modules 206-222 to complete the reauthentication and determine whether or not the user will be authenticated.
  • computing device 202 may be configured to predictively authenticate user 214.
• computing device 202 may determine a confidence score prior to user 214 initiating an unlock of computing device 202 and use the predetermined confidence score as well as other sensor information and stored user data to authenticate user 214 without requiring user 214 to manually unlock computing device 202.
  • FIG. 3 is a block diagram illustrating an example data processing module 316 configured to generate biometric data based on one or more physiological characteristics of a user.
  • Data processing module 316 may be similar if not substantially similar to data processing module 216 of FIG. 2.
  • data processing module 316 may be configured to receive information indicating a user’s physiological characteristics, such as base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, gait, or voice, and generate data that can be stored in a user’s profile.
• data processing module 316 further includes fingerprint detection module 338, skin color detection module 340, device location module 342, motion detection module 344, heart rate detection module 346, voice detection module 348, and data preprocessing module 350.
  • Fingerprint detection module 338 may receive fingerprint information from a fingerprint sensor (e.g., one or more of sensors 204 of FIG. 2) and/or a user interface component (i.e., in examples where user interface components 232 of FIG. 2 include a presence-sensitive input device capable of capturing a fingerprint).
  • Skin color detection module 340 may be configured to receive visual data from an image sensor or through an input mechanism capable of capturing skin color details, such as a specialized sensor or touch-sensitive display. Skin color detection module 340 may analyze the received skin color data to determine the skin color characteristics of a user.
  • Device location module 342 may be configured to determine the location of computing device 202 by accessing location data from various sources (e.g., GPS receivers, Wi-Fi positioning systems, cellular network signals, or other location-aware technologies available on the computing device). Device location module 342 may compare the obtained location data with historical location data stored in user’s profile to determine whether the user is in a location that is typical for the user. Motion detection module 344 may be configured to gather data from motion sensors integrated within computing device 202, such as accelerometers, gyroscopes, or magnetometers. These sensors may capture changes in motion, orientation, and position of the computing device, which may be used in gesture recognition. Motion detection module 344 may compare motion data with the occurrence and characteristics of a user’s historical frequent motions.
  • Heart rate detection module 346 may be configured to measure and monitor the user's heart rate using specialized sensors integrated within the computing device, such as optical heart rate sensors or electrodes. These sensors may capture changes in blood flow and heartbeat patterns to accurately determine the user's heart rate. Heart rate detection module 346 may compare the detected heart rate data with the user’s historical heart rate data.
• Voice detection module 348 may be configured to process audio input from the computing device's microphone or other audio input sources. Voice detection module 348 may detect human voice patterns within the captured sound and use voice recognition technology. Voice detection module 348 may compare the user's voice data with voice data stored in the user's profile.
  • one or more of modules 338-348 may be implemented by the computing device while a user is wearing the computing device. Further, while modules 338-348 are example modules configured to detect or determine the example physiological characteristics of a user described herein, computing device 202 may include other modules configured to detect other physiological characteristics of a user.
• fingerprint detection module 338, skin color detection module 340, device location module 342, motion detection module 344, heart rate detection module 346, and voice detection module 348 may compare biometric data to stored biometric data of an authenticated user of computing device 202 that is stored in user profile data store 312. If the captured data sufficiently matches the stored data in user profile data store 312, modules 338-348 may provide, to resolution module 210 of FIG. 2, an indication that the current user of computing device 202 is an authenticated user. Similarly, if the data does not match, modules 338-348 may provide, to resolution module 210, an indication that the current user is not an authenticated user. As described above, resolution module 210 may adjust the confidence score threshold based on the result of the comparison received from modules 338-348 (i.e., increasing the confidence score threshold if the user is not an authenticated user and vice versa).
  • data preprocessing module 350 may be configured as a module for processing data received by modules 338-348 (hereinafter referred to as “input data”) or any other data stored in user profile data store 312 prior to computing device 202 sending the data to other components or modules and/or implementing machine learning techniques.
  • information indicating a user’s skin color determined by skin color detection module 340 may be sent to data preprocessing module 350, wherein data preprocessing module 350 processes the information and performs steps to transform the information into data that can be used by a machine learning model or other components of computing device 202.
  • preprocessing the input data may include extracting one or more additional features from raw input data.
  • feature extraction techniques may be applied to the input data to generate one or more new, additional features.
• Example feature extraction techniques include edge detection; corner detection; blob detection; ridge detection; scale-invariant feature transform; motion detection; optical flow; Hough transform; etc.
  • the extracted features may include or be derived from transformations of the input data into other domains and/or dimensions.
  • the extracted features may include or be derived from transformations of the input data into the frequency domain. For example, wavelet transformations and/or fast Fourier transforms may be performed on the input data to generate additional features.
  • the extracted features may include statistics calculated from the input data or certain portions or dimensions of the input data.
  • Example statistics include the mode, mean, maximum, minimum, or other metrics of the input data or portions thereof.
  • the input data may be sequential in nature.
  • the sequential input data may be generated by sampling or otherwise segmenting a stream of input data.
  • frames may be extracted from a video.
  • sequential data may be made non-sequential through summarization.
  • portions of the input data may be imputed.
  • additional synthetic input data may be generated through interpolation and/or extrapolation.
  • some or all of the input data may be scaled, standardized, normalized, generalized, and/or regularized.
• Example regularization techniques include ridge regression; least absolute shrinkage and selection operator (LASSO); elastic net; least-angle regression; cross-validation; L1 regularization; L2 regularization; etc.
  • some or all of the input data may be normalized by subtracting the mean across a given dimension’s feature values from each feature value and then dividing by the standard deviation or another metric.
• some or all of the input data may be quantized or discretized.
  • qualitative features or variables included in the input data may be converted to quantitative features or variables. For example, one hot encoding may be performed.
  • dimensionality reduction techniques may be applied to the input data.
  • dimensionality reduction techniques including, for example, principal component analysis; kernel principal component analysis; graph-based kernel principal component analysis; principal component regression; partial least squares regression; Sammon mapping; multidimensional scaling; projection pursuit; linear discriminant analysis; mixture discriminant analysis; quadratic discriminant analysis; generalized discriminant analysis; flexible discriminant analysis; autoencoding; etc.
  • Data preprocessing module 350 may send processed input data to user profile data store 312, in which computing device 202 may then access and use the processed input data to determine whether a user is an authenticated user.
  • the unique physiological characteristics of different users can be used to automatically authenticate users and provide them access to a computing device while still protecting each user’s sensitive information included in their user profiles.
  • the computing device operating in the reduced access mode may detect a second user input to unlock the computing device. Responsive to detecting the second user input, one or more sensors of the computing device may detect one or more physiological characteristics, such as those described above, of a second user. The computing device may then determine, based on the one or more physiological characteristics of the second user, whether a user profile was previously created for the second user.
  • the computing device may then create the user profile for the second user, wherein the user profile for the second user includes a second machine learning model for automatically authenticating the second user based on the one or more detected physiological characteristics of the second user.
  • the second machine learning model may further be trained using the one or more physiological characteristics of the second user, in which the physiological characteristics of the second user are detected or determined by one or more of modules 338-350.
  • FIG. 4 is a block diagram further illustrating an example computing device in communication with a companion device configured to automatically authenticate a user, in accordance with one or more techniques of the present disclosure.
• Computing device 402 may be similar to, if not substantially similar to, computing device 102 and 202 of FIG. 1 and FIG. 2, respectively.
  • Computing device 402 may determine that a companion device 462 from a plurality of companion devices operating in a reduced access mode is proximate or adjacent to computing device 402.
• “proximate” may be defined as within the range over which wireless communication (e.g., via wireless networks including Bluetooth, 3G, LTE, and Wi-Fi wireless networks, which operate at radio frequencies of approximately 9 kHz to 300 GHz) is supported.
  • computing device 402 may determine that companion device 462 is proximate to computing device 402 when companion device 462 is within approximately 10 meters from computing device 402. In another example, if computing device 402 and companion device 462 are in communication via Wi-Fi, computing device 402 may determine that companion device 462 is proximate to computing device 402 when companion device 462 is within approximately 100 meters from computing device 402. In other words, computing device 402 may determine that companion device 462 is proximate to computing device 402 when the distance between companion device 462 and computing device 402 supports communication between companion device 462 and computing device 402.
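• a hypothetical sketch of that proximity determination, using the approximate Bluetooth and Wi-Fi ranges mentioned above; the function and table names are illustrative and not taken from this disclosure:

```python
# Hypothetical sketch of the proximity determination described above; the
# range figures are the approximate values from the text.
APPROX_RANGE_METERS = {"bluetooth": 10.0, "wifi": 100.0}

def is_proximate(distance_meters: float, protocol: str) -> bool:
    """A companion device is proximate when the distance between the devices
    still supports communication over the shared wireless protocol."""
    return distance_meters <= APPROX_RANGE_METERS[protocol]

print(is_proximate(8.0, "bluetooth"))   # True
print(is_proximate(50.0, "bluetooth"))  # False
print(is_proximate(50.0, "wifi"))       # True
```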
  • Telemetry module 460 of computing device 402 and telemetry module 464 of companion device 462 may be used to communicate with each other or other external devices via one or more networks, such as the one or more wireless networks described above.
  • companion device 462 utilizes telemetry module 464 to wirelessly communicate with telemetry module 460 of computing device 402.
• responsive to determining that companion device 462 is proximate to computing device 402 and that user 414 is included in the set of authenticated users, computing device 402 may then send, to companion device 462, the output of analysis module 408, which may be substantially similar to analysis module 208 of FIG. 2.
  • companion device 462 may then determine, based on the output, whether user 414 is included in a set of authenticated users for computing device 402. Responsive to determining that user 414 is included in the set of authenticated users for computing device 402, computing device 402 may provide authentication information to companion device 462 that causes companion device 462 to automatically transition from operating in the reduced access mode to operating in an increased access mode. Furthermore, as shown in the example of FIG. 4, user interface components 468 (which may be substantially similar to user interface components 132 of FIG. 1) may generate a user interface for display on companion device 462 that indicates companion device 462 is unlocked and/or user 414 can access companion device 462 operating in the increased access mode.
  • user interface components 468 may generate GUI 470, which displays “DEVICE UNLOCKED” to user 414.
  • additional functionality of companion device 462 may be accessible to user 414 that is not accessible to user 414 while companion device 462 is operating in the reduced access mode.
  • FIG. 5 is a flow chart illustrating an example operation of a computing device that automatically authenticates a user based on one or more physiological characteristics of the user, in accordance with one or more techniques of the present disclosure.
  • the components of FIG. 5 are described with respect to FIG. 2.
  • Computing device 202 operating in a reduced access mode detects, via user interface components 232, a first user input from a first user 214 to unlock computing device 202 (572). Responsive to detecting the first user input from user 214, one or more sensors 204 of computing device 202 detect one or more physiological characteristics of first user 214 (574). Computing device 202 then determines, based on the one or more physiological characteristics of user 214, whether a user profile was previously created for first user 214 (576).
• computing device 202 may determine whether the one or more physiological characteristics of first user 214 match one or more physiological characteristics included in a plurality of stored user profiles stored in user profile data store 212. Responsive to determining that the user profile was not previously created for first user 214 (NO), computing device 202 creates the user profile for first user 214 (578). In some examples, computing device 202 may prompt the first user to confirm whether they are a new user of computing device 202 via user interface components 232 before creating the user profile for first user 214.
  • the user profile for first user 214 may comprise the one or more physiological characteristics of first user 214, and is stored by computing device 202 in user profile data store 212.
  • a first set of physical characteristics includes the one or more physiological characteristics of first user 214.
  • computing device 202 determines, based on sensor data generated by sensors 204 of computing device 202, whether computing device 202 is being worn. Responsive to determining that computing device 202 is being worn, and responsive to computing device 202 creating a user profile for first user 214 or determining that the user profile was created for first user 214 (YES), machine learning module 218 applies a first machine learning model included in the user profile for first user 214 for automatically authenticating first user 214 based on the one or more physiological characteristics of first user 214 (580).
• analysis module 208 may apply the first machine learning model to a second set of physical characteristics detected by sensors 204 to generate an output.
  • the output of the first machine learning model includes a confidence score.
  • Resolution module 210 of computing device 202 may determine whether the confidence score satisfies a confidence score threshold. Responsive to resolution module 210 determining that the confidence score satisfies the confidence score threshold, computing device 202 may then determine that first user 214 is included in the set of authenticated users.
  • Computing device 202 trains, using the one or more physiological characteristics of first user 214, the first machine learning model (582).
  • training module 220 may train the first machine learning model executed by machine learning module 218.
  • Training the first machine learning model may further comprise generating, by computing device 202, and based on the one or more physiological characteristics of first user 214, a training biometric data set.
  • the training biometric data set may include data based on physiological characteristics including one or more of base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, gait, or voice that are detected or determined by data processing module 216.
  • the training biometric data set may then be stored in a data store, and the first machine learning model may be trained using a portion of the stored training biometric data set.
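• a hypothetical sketch of training a model on a portion of a stored training biometric data set; the feature columns are synthetic stand-ins for the physiological characteristics named above, and the model choice is arbitrary:

```python
# Hypothetical sketch: train on a portion of a stored biometric data set.
# Columns: base heart rate, skin temperature, gait feature (all synthetic).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(loc=[65.0, 33.0, 1.1], scale=[5.0, 0.5, 0.1], size=(200, 3))
y = rng.integers(0, 2, size=200)  # 1 = sample from the enrolled user

X_train, X_held_out, y_train, y_held_out = train_test_split(
    X, y, test_size=0.25, random_state=0)  # train on a portion of the set

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_held_out, y_held_out))
```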
  • computing device 202 may use the first machine learning model for automatically authenticating first user 214.
  • computing device 202 may then automatically transition from operating in the reduced access mode to operating in an increased access mode, wherein, while operating in the increased access mode, additional functionality of computing device 202 is accessible to first user 214 that is not accessible to first user 214 while computing device 202 is operating in the reduced access mode.
  • Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol.
  • computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory or (2) a communication medium such as a signal or carrier wave.
  • Data storage media may be any available media that may be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure.
  • a computer program product may include a computer-readable medium.
  • such computer-readable storage media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection is properly termed a computer- readable medium.
• if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or the wireless technologies are included in the definition of medium.
  • computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media.
• Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
• instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
• the term "processor," as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
  • the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
  • the techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set).
• Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
  • a computer-readable storage medium comprises a non-transitory medium.
  • the term “non-transitory” indicates that the storage medium is not embodied in a carrier wave or a propagated signal.
  • a non-transitory storage medium may store data that can, over time, change (e.g., in RAM or cache).
  • Example 1 A method includes detecting, by a wearable computing device operating in a reduced access mode, a first user input to unlock the wearable computing device; responsive to detecting the first user input, detecting, by one or more sensors of the wearable computing device, one or more physiological characteristics of a first user; determining, by the wearable computing device and based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user; responsive to determining that the user profile was not previously created for the first user, creating, by the wearable computing device, the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user; and training, by the wearable computing device, and using the one or more physiological characteristics of the first user, the first machine learning model.
  • Example 2 The method of example 1, wherein determining whether a user profile was previously created for the first user further comprises: determining, by the wearable computing device, whether the one or more physiological characteristics of the first user matches one or more physiological characteristics included in a plurality of stored user profiles stored in a memory; responsive to determining that the one or more physiological characteristics of the first user do not match one or more physiological characteristics included in the plurality of stored user profiles, prompting, by the wearable computing device, the first user to confirm whether they are a new user of the wearable computing device; and responsive to the first user confirming they are a new user of the wearable computing device, creating, by the wearable computing device, the user profile for the first user, wherein the user profile for the first user comprises the one or more physiological characteristics of the first user, and wherein the user profile for the first user is stored by the wearable computing device in the memory.
  • Example 3 The method of any of examples 1 through 2, wherein training the first machine learning model further comprises: generating, by the wearable computing device, and based on the one or more physiological characteristics of the first user, a training biometric data set; and training the first machine learning model using a portion of the training biometric data set.
  • Example 4 The method of example 3, wherein responsive to training the first machine learning model with a threshold amount of the training biometric data set, the wearable computing device uses the first machine learning model for automatically authenticating the first user.
• Example 5 The method of any of examples 1 through 4, wherein a first set of physical characteristics includes the one or more physiological characteristics of the first user, the method further comprising: determining, by the wearable computing device and based on sensor data generated by at least one of the one or more sensors of the wearable computing device, whether the wearable computing device is being worn; and responsive to determining that the wearable computing device is being worn: applying, by the wearable computing device, the first machine learning model to a second set of physical characteristics detected by the one or more sensors to generate an output; determining, by the wearable computing device and based on the output of the first machine learning model, whether the first user is included in a set of authenticated users; and responsive to determining that the first user is included in the set of authenticated users, automatically transitioning, by the wearable computing device, from operating in the reduced access mode to operating in an increased access mode, wherein, while operating in the increased access mode, additional functionality of the wearable computing device is accessible to the first user that is not accessible to the first user while the wearable computing device is operating in the reduced access mode.
  • Example 6 The method of example 5, wherein the output of the first machine learning model includes a confidence score, the method further comprising: determining, by the wearable computing device, whether the confidence score satisfies a confidence score threshold; and responsive to the wearable computing device determining that the confidence score satisfies the confidence score threshold, determining that the first user is included in the set of authenticated users.
  • Example 7 The method of example 5, the method further comprising: determining, by the wearable computing device, that a companion device from a plurality of companion devices operating in a reduced access mode is proximate to the wearable computing device; and responsive to determining that the companion device is proximate to the wearable computing device and that the first user is included in the set of authenticated users, sending, from the wearable computing device and to the companion device, authentication information, wherein the companion device automatically transitions from operating in the reduced access mode to operating in an increased access mode responsive to receiving the authentication information, and wherein, while operating in the increased access mode, additional functionality of the companion device is accessible to the first user that is not accessible to the first user while the companion device is operating in the reduced access mode.
  • Example 8 The method of any of examples 1 through 7, wherein the physiological characteristics include one or more of base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, gait, or voice.
  • Example 9 The method of any of examples 1 through 8, the method further comprising: detecting, by the wearable computing device operating in the reduced access mode, a second user input to unlock the wearable computing device; responsive to detecting the second user input, detecting, by the one or more sensors of the wearable computing device, one or more physiological characteristics of a second user; determining, by the wearable computing device and based on the one or more physiological characteristics of the second user, whether a user profile was previously created for the second user; responsive to determining that the user profile was not previously created for the second user, creating, by the wearable computing device, the user profile for the second user, wherein the user profile for the second user includes a second machine learning model for automatically authenticating the second user based on the one or more physiological characteristics of the second user; and training, by the wearable computing device and using the one or more physiological characteristics of the second user, the second machine learning model.
  • Example 10 A wearable computing device operating in a reduced access mode comprising: one or more processors; one or more sensors configured to, responsive to detecting a first user input to unlock the wearable computing device, detect one or more physiological characteristics of a first user; and one or more storage devices that store instructions that, when executed by the one or more processors, cause the one or more processors to: responsive to the one or more sensors detecting one or more physiological characteristics of the first user, determine, based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user; responsive to determining that the user profile was not previously created for the first user, create the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user; and train, using the one or more physiological characteristics of the first user, the first machine learning model.
  • Example 11 The wearable computing device of example 10, wherein to determine whether a user profile was previously created for the first user, the one or more processors are further configured to: determine whether the one or more physiological characteristics of the first user match one or more physiological characteristics included in a plurality of user profiles stored in a memory; responsive to determining that the one or more physiological characteristics of the first user do not match one or more physiological characteristics included in the plurality of user profiles, prompt the first user to confirm whether they are a new user of the wearable computing device; and responsive to the first user confirming they are a new user of the wearable computing device, create the user profile for the first user, wherein the user profile for the first user comprises the one or more physiological characteristics of the first user, and wherein the user profile for the first user is stored by the wearable computing device in the memory.
  • Example 12 The wearable computing device of any of examples 10 through 11, wherein to train the first machine learning model, the one or more processors are further configured to: generate, based on the one or more physiological characteristics of the first user, a training biometric data set; and train the first machine learning model using a portion of the training biometric data set.
  • Example 13 The wearable computing device of example 12, wherein responsive to training the first machine learning model with a threshold amount of the training biometric data set, the one or more processors are further configured to use the first machine learning model for automatically authenticating the first user.
  • Example 14 The wearable computing device of any of examples 10 through 13, wherein a first set of physical characteristics includes the one or more physiological characteristics of the first user, and wherein the one or more processors are further configured to: determine, based on sensor data generated by at least one of the one or more sensors of the wearable computing device, whether the wearable computing device is being worn; and responsive to determining that the wearable computing device is being worn: apply the first machine learning model to a second set of physical characteristics detected by the one or more sensors to generate an output; determine, based on the output of the first machine learning model, whether the first user is included in a set of authenticated users; and responsive to determining that the first user is included in the set of authenticated users, automatically transition from operating in the reduced access mode to operating in an increased access mode, wherein, while operating in the increased access mode, additional functionality of the wearable computing device is accessible to the first user that is not accessible to the first user while the wearable computing device is operating in the reduced access mode.
  • Example 15 The wearable computing device of example 14, wherein the output of the first machine learning model includes a confidence score, and wherein the one or more processors are further configured to: determine whether the confidence score satisfies a confidence score threshold; and responsive to determining that the confidence score satisfies the confidence score threshold, determine that the first user is included in the set of authenticated users.
  • Example 16 The wearable computing device of example 14, wherein the one or more processors are further configured to: determine that a companion device from a plurality of companion devices operating in a reduced access mode is proximate to the wearable computing device; and responsive to determining that the companion device is proximate to the wearable computing device and that the first user is included in the set of authenticated users, send, from the wearable computing device and to the companion device, authentication information, wherein the companion device automatically transitions from operating in the reduced access mode to operating in an increased access mode responsive to receiving the authentication information, and wherein, while operating in the increased access mode, additional functionality of the companion device is accessible to the first user that is not accessible to the first user while the companion device is operating in the reduced access mode.
  • Example 17 The wearable computing device of any of examples 10 through 16, wherein the physiological characteristics include one or more of base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, gait, or voice.
  • Example 18 The wearable computing device of any of examples 10 through 17, wherein the one or more sensors of the wearable computing device detect one or more physiological characteristics of a second user responsive to the wearable computing device detecting a second user input to unlock the wearable computing device, and wherein the one or more processors are further configured to: determine, based on the one or more physiological characteristics of the second user, whether a user profile was previously created for the second user; responsive to determining that the user profile was not previously created for the second user, create the user profile for the second user, wherein the user profile for the second user includes a second machine learning model for automatically authenticating the second user based on the one or more physiological characteristics of the second user; and train, using the one or more physiological characteristics of the second user, the second machine learning model.
  • Example 19 A non-transitory computer-readable storage medium encoded with instructions that, when executed by one or more processors, cause the one or more processors to: responsive to one or more sensors detecting one or more physiological characteristics of a first user, determine, based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user; responsive to determining that the user profile was not previously created for the first user, create the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user; and train, using the one or more physiological characteristics of the first user, the first machine learning model.
  • Example 21 The non-transitory computer-readable storage medium of any of examples 19 through 20, wherein to train the first machine learning model, the one or more processors are further configured to: generate, based on the one or more physiological characteristics of the first user, a training biometric data set; and train the first machine learning model using a portion of the training biometric data set.
  • Example 23 The non-transitory computer-readable storage medium of any of examples 19 through 22, wherein a first set of physical characteristics includes the one or more physiological characteristics of the first user, and wherein the one or more processors are further configured to: determine, based on sensor data generated by at least one of the one or more sensors of the wearable computing device, whether the wearable computing device is being worn; and responsive to determining that the wearable computing device is being worn: apply the first machine learning model to a second set of physical characteristics detected by the one or more sensors to generate an output; determine, based on the output of the first machine learning model, whether the first user is included in a set of authenticated users; and responsive to determining that the first user is included in the set of authenticated users, automatically transition from operating in the reduced access mode to operating in an increased access mode, wherein, while operating in the increased access mode, additional functionality of the wearable computing device is accessible to the first user that is not accessible to the first user while the wearable computing device is operating in the reduced access mode.
  • Example 24 The non-transitory computer-readable storage medium of example 23, wherein the output of the first machine learning model includes a confidence score, and wherein the one or more processors are further configured to: determine whether the confidence score satisfies a confidence score threshold; and responsive to determining that the confidence score satisfies the confidence score threshold, determine that the first user is included in the set of authenticated users.
  • Example 25 The non-transitory computer-readable storage medium of example 24, wherein the one or more processors are further configured to: determine that a companion device from a plurality of companion devices operating in a reduced access mode is proximate to the wearable computing device; and responsive to determining that the companion device is proximate to the wearable computing device and that the first user is included in the set of authenticated users, send, from the wearable computing device and to the companion device, authentication information, wherein the companion device automatically transitions from operating in the reduced access mode to operating in an increased access mode responsive to receiving the authentication information, and wherein, while operating in the increased access mode, additional functionality of the companion device is accessible to the first user that is not accessible to the first user while the companion device is operating in the reduced access mode.

Abstract

A wearable computing device operating in a reduced access mode detects physiological characteristics of a user. The computing device determines, based on the physiological characteristics of the user, whether a user profile was previously created for the user. Responsive to determining that the profile was not previously created for the user, the computing device creates the user profile for the user that includes a machine learning model for automatically authenticating the user based on the physiological characteristics of the user. The computing device trains, using the physiological characteristics of the user, the machine learning model. The computing device determines, based on the output of the machine learning model, whether the user is included in a set of authenticated users, and responsive to determining that the user is included in the set of authenticated users, automatically transitions from operating in the reduced access mode to operating in an increased access mode.

Description

WEARABLE USER IDENTITY PROFILE
BACKGROUND
[0001] There are a number of commonly used techniques to authenticate users to use or gain access to devices. In the interests of privacy and security, it is important that these techniques identify users correctly and with high confidence. Some of these techniques include two-factor authentication or biometrics (e.g., facial recognition, fingerprinting, or iris recognition). However, two-factor authentication may be difficult for users with only one device, and some devices utilizing biometrics, such as wearable devices, may not be configured to enable more than one user to securely access the device.
SUMMARY
[0002] In general, techniques of this disclosure enable a computing device, such as a wearable computing device, to authenticate one or more users and permit access to increased functionality of the computing device and, in some instances, other devices in the user’s ecosystem, based on unique user profiles associated with physiological characteristics of the one or more users. For instance, a user may be granted increased access to a wearable computing device based on the wearable computing device determining with high confidence that the user’s detected physiological characteristics match those of a unique user profile stored within the wearable computing device.
[0003] In some examples, a method includes detecting, by a wearable computing device operating in a reduced access mode, a first user input to unlock the wearable computing device. The method further includes, responsive to detecting the first user input, detecting, by one or more sensors of the wearable computing device, one or more physiological characteristics of a first user. The method further includes determining, by the wearable computing device and based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user. The method further includes, responsive to determining that the user profile was not previously created for the first user, creating, by the wearable computing device, the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user. The method further includes training, by the wearable computing device and using the one or more physiological characteristics of the first user, the first machine learning model.

[0004] In some examples, a wearable computing device operating in a reduced access mode comprises one or more processors. The wearable computing device further comprises one or more sensors configured to, responsive to detecting a first user input to unlock the wearable computing device, detect one or more physiological characteristics of a first user. The wearable computing device further comprises one or more storage devices that store instructions that, when executed by the one or more processors, cause the one or more processors to, responsive to the one or more sensors detecting one or more physiological characteristics of the first user, determine, based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user. The one or more processors are further configured to, responsive to determining that the user profile was not previously created for the first user, create the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user. The one or more processors are further configured to train, using the one or more physiological characteristics of the first user, the first machine learning model.
[0005] In some examples, a non-transitory computer-readable storage medium is encoded with instructions that, when executed by one or more processors, cause the one or more processors to, responsive to one or more sensors detecting one or more physiological characteristics of a first user, determine, based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user. The instructions are further configured to cause the one or more processors to, responsive to determining that the user profile was not previously created for the first user, create the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user. The instructions are further configured to cause the one or more processors to train, using the one or more physiological characteristics of the first user, the first machine learning model.

[0006] The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
[0007] FIG. 1 is an example wearable computing device configured to automatically authenticate a user based on one or more physiological characteristics of the user, in accordance with one or more techniques of the present disclosure.
[0008] FIG. 2 is a block diagram further illustrating an example wearable computing device configured to automatically authenticate a user based on one or more physiological characteristics of the user, in accordance with one or more techniques of the present disclosure.
[0009] FIG. 3 is a block diagram illustrating an example data processing module configured to generate biometric data based on one or more physiological characteristics of a user.
[0010] FIG. 4 is a block diagram further illustrating an example wearable computing device in communication with a companion device configured to automatically authenticate a user, in accordance with one or more techniques of the present disclosure.
[0011] FIG. 5 is a flow chart illustrating an example operation of a computing device that automatically authenticates a user based on one or more physiological characteristics of the user, in accordance with one or more techniques of the present disclosure.
DETAILED DESCRIPTION
[0012] FIG. 1 is a block diagram illustrating an example wearable computing device 102 configured to automatically authenticate a user 114 based on one or more physiological characteristics of user 114, in accordance with one or more techniques of the present disclosure. Specifically, the techniques of the present disclosure may enable wearable computing device 102 to authenticate one or more users, such as user 114, and permit access to wearable computing device 102, as well as other devices in the user’s ecosystem, based on unique user profiles associated with user physiological characteristics.

[0013] As shown in the example of FIG. 1, wearable computing device 102 includes one or more user interface (UI) components 132 including one or more sensors 104, and at least one user interface (UI) module 106. Other examples of wearable computing device 102 that implement techniques of this disclosure may include additional components not shown in FIG. 1. Examples of wearable computing device 102 may include, but are not limited to, portable, mobile, or other devices, such as mobile phones (including smartphones), wearable computing devices (e.g., smart watches, smart glasses, digital bracelets, etc.), laptop computers, desktop computers, tablet computers, smart television platforms, server computers, mainframes, infotainment systems (e.g., vehicle head units), etc. While user interface module 106 is shown in the example of FIG. 1 as being located within wearable computing device 102, in other examples, all or part of the functionality provided by user interface module 106 (and other modules in other figures described herein) may be delegated to a cloud computing system and/or a companion device.
[0014] As further shown in the example of FIG. 1, computing device 102 includes one or more user interface components 132 (“UI components 132”). UI components 132 of computing device 102 may be configured to function as input devices and/or output devices for computing device 102. UI components 132 may be implemented using various technologies. For instance, UI components 132 may be configured to receive input from user 114 through tactile, audio, and/or video feedback. Examples of input devices include a presence-sensitive display, a presence-sensitive or touch-sensitive input device, a voice responsive system, video camera, microphone, or any other type of device for detecting input from user 114. In some examples, a presence-sensitive display includes a touch-sensitive or presence-sensitive input screen, such as a resistive touchscreen, a surface acoustic wave touchscreen, a capacitive touchscreen, a projective capacitance touchscreen, a pressure sensitive screen, an acoustic pulse recognition touchscreen, or another presence-sensitive technology. That is, UI components 132 of computing device 102 may include a presence-sensitive device that may receive tactile input from user 114. UI components 132 may receive indications of the tactile input by detecting one or more gestures from user 114 (e.g., when user 114 touches or points to one or more locations of UI components 132 with a finger or a stylus pen).

[0015] UI components 132 may additionally or alternatively be configured to function as output devices by providing output to user 114 using tactile, audio, or video stimuli. Examples of output devices include a sound card, a video graphics adapter card, or any of one or more display devices, such as a liquid crystal display (LCD), dot matrix display, light emitting diode (LED) display, microLED, miniLED, organic light-emitting diode (OLED) display, e-ink, or similar monochrome or color display capable of outputting visible information to user 114. Additional examples of an output device include a speaker, a haptic device, or other device that can generate intelligible output to user 114. For instance, UI components 132 may present output to user 114 as a graphical user interface that may be associated with functionality provided by computing device 102. In this way, UI components 132 may present various user interfaces of applications executing at or accessible by computing device 102 (e.g., an electronic message application, an Internet browser application, etc.). User 114 may interact with a respective user interface of an application to cause computing device 102 to perform operations relating to a function.
[0016] In some examples, UI components 132 of computing device 102 may detect two-dimensional and/or three-dimensional gestures as input from user 114. As shown in the example of FIG. 1, UI components 132 include one or more sensors 104. Sensor 104 may be configured to, responsive to computing device 102 detecting a user input from user 114 to unlock computing device 102, detect one or more physiological characteristics of user 114. For instance, sensor 104 may detect user 114’s movement (e.g., gait or moving a hand, an arm, face, an eye, etc.) within a threshold distance of sensor 104. Sensor 104 may determine a two- or three-dimensional vector representation of the movement and correlate the vector representation to a gesture input (e.g., a hand-wave, a facial expression, etc.) that has multiple dimensions. In other words, sensor 104 may, in some examples, detect a multi-dimensional gesture without requiring user 114 to gesture at or near a screen or surface at which UI components 132 output information for display. Instead, sensor 104 may detect a multi-dimensional gesture performed at or near sensor 104, which may or may not be located near the screen or surface at which UI components 132 output information for display. In some examples, sensor 104 is configured to detect one or more physiological characteristics of user 114 that include, but are not limited to, one or more of base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, voice, etc.

[0017] In some examples, sensor 104 is a photoplethysmography (PPG) sensor, which measures changes in blood volume in user 114’s skin through the use of light sensors. A PPG sensor is commonly used in smartwatches to monitor heart rate and can also be used to detect the unique pattern of blood flow in a user's wrist, which can then be used to verify their identity when attempting to unlock wearable computing device 102. In another example, sensor 104 is an electrocardiogram (ECG) sensor. An ECG sensor detects electrical activity in the heart and can be used to generate a unique biometric signature for each user of computing device 102. In other examples, sensor 104 may be any other biometric sensor used to collect biometric data from a user, such as facial recognition or fingerprint sensors. In these examples, sensor 104 may capture unique physical features of user 114, such as the pattern of their fingerprints or the structure of their face. In some examples, sensors 104 may include motion sensors (e.g., accelerometer, gyroscope, compass, etc.), audio and/or visual sensors (e.g., microphones, still and/or video cameras, etc.), or other types of sensors (e.g., pressure sensors, light sensors, proximity sensors, ultrasonic sensors, global positioning system sensors, etc.).
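Purely as an illustrative sketch, and not as part of the disclosed subject matter, the following Python fragment shows one conventional way raw PPG samples of the kind described in [0017] might be reduced to a heart-rate feature via peak detection. The function name, the 25 Hz sampling rate, and the inter-beat spacing are assumptions for illustration only; a production wearable would additionally filter the signal and reject motion artifacts.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_heart_rate(ppg: np.ndarray, fs: float = 25.0) -> float:
    """Estimate beats per minute from a window of raw PPG samples."""
    # Remove the DC offset so systolic peaks stand out from the baseline.
    centered = ppg - np.mean(ppg)
    # Require successive peaks to be at least 0.3 s apart (< 200 bpm).
    peaks, _ = find_peaks(centered, distance=max(1, int(0.3 * fs)))
    if len(peaks) < 2:
        return 0.0  # Too few beats in the window to estimate a rate.
    ibi = np.diff(peaks) / fs  # Inter-beat intervals in seconds.
    return 60.0 / float(np.mean(ibi))
```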
[0018] In the example of FIG. 1, computing device 102 further includes user interface (UI) module 106. Module 106 may perform operations described herein using hardware, software, firmware, or a mixture thereof residing in and/or executing at computing device 102. Computing device 102 may execute module 106 with one processor or with multiple processors. In some examples, computing device 102 may execute module 106 as a virtual machine executing on underlying hardware. Module 106 may execute as one or more services of an operating system or computing platform or may execute as one or more executable programs at an application layer of a computing platform.
[0019] UI module 106, as shown in the example of FIG. 1, may be operable by computing device 102 to perform one or more functions, such as receive input and send indications of such input to other components associated with computing device 102. UI module 106 may also receive data from components associated with computing device 102. Using the data received, UI module 106 may cause other components associated with computing device 102, such as UI components 132, to provide output based on the data. For instance, as described above, UI module 106 may send data to UI components 132 of computing device 102 to display GUI 101 to user 114 when computing device 102 is operating in the reduced access mode, and display GUI 105 when computing device 102 is operating in the increased access mode.
[0020] In accordance with techniques of this disclosure, user 114 may be granted increased access to computing device 102 based on computing device 102 determining with high confidence that the detected physiological characteristics of user 114 match those of a unique user profile stored within wearable computing device 102. For example, computing device 102 operating in a reduced access mode, as shown in graphical user interface (GUI) 101, detects a user input from user 114 to unlock computing device 102. GUI 101 may be generated by user interface module 106 and configured to display a limited amount of information or computing device functionalities to user 114. For example, as shown in the example of FIG. 1, GUI 101 comprises applications 103A-103D. Applications 103A-103D may provide user 114 functionalities and information that are not user-specific, e.g., application 103A may include information pertaining to the time, application 103B may include a calculator, etc.
[0021] In some examples, the user input (such as a PIN, code, or other user credentials) may be received by computing device 102 via another GUI that user 114 interacts with, such as a login screen. Responsive to detecting the user input from user 114, one or more sensors 104 of computing device 102 detects one or more physiological characteristics of user 114 (e.g., base heart rate, skin tone, voice, etc.). Computing device 102 then determines, based on the one or more physiological characteristics of user 114, whether a unique user profile was previously created for user 114. Responsive to determining that the unique user profile was not previously created for user 114, computing device 102 may then create the unique user profile for user 114, which may include a unique user identifier (ID).
[0022] In general, user 114 may be provided with an opportunity to provide input to control whether programs or features of computing device 102 can collect and make use of user information (e.g., user 114’s biometric data, information about user 114’s current location, current speed, motion, location history, etc.), or to dictate whether and/or how computing device 102 may receive content that may be relevant to user 114. In addition, certain data may be treated in one or more ways before it is stored or used by computing device 102 so that personally identifiable information is removed. For example, a user’s identity may be treated so that no personally identifiable information can be determined about the user, or a user’s geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, user 114 may have control over how information is collected about them and used by computing device 102. For example, responsive to user 114 confirming they are a new user of computing device 102 and providing explicit consent for computing device 102 to store user 114’s data, computing device 102 may then create the user profile for user 114, wherein the user profile for user 114 comprises the one or more physiological characteristics of user 114. The user profile for user 114 may then be stored by computing device 102 in a memory.
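As a minimal sketch of the data treatment described above, the fragment below pseudonymizes the identity and coarsens the location before storage. The field names and the city-scale rounding are hypothetical choices; hashing alone is not a complete anonymization scheme and is shown only to illustrate the idea of removing personally identifiable detail.

```python
import hashlib

def minimize_record(record: dict) -> dict:
    """Strip identifying detail from a record before it is stored."""
    out = dict(record)
    # Replace the user identity with an opaque, one-way pseudonym.
    out["user_id"] = hashlib.sha256(record["user_id"].encode()).hexdigest()[:16]
    # Coarsen coordinates to roughly city scale (~10 km) so a particular
    # location of the user cannot be determined.
    out["lat"] = round(record["lat"], 1)
    out["lon"] = round(record["lon"], 1)
    out.pop("name", None)  # Drop directly identifying fields entirely.
    return out
```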
[0023] Responsive to computing device 102 determining that user 114 is an authenticated user, which may be determined from whether user 114’s detected physiological characteristics match the one or more physiological characteristics stored in user 114’s user profile, computing device 102 may then automatically transition from operating in the reduced access mode to operating in an increased access mode, which is shown as example GUI 105 in FIG. 1. For example, GUI 105 comprises applications 103A-103I. In this example, applications 103A-103D may still provide user 114 functionalities and information that are not user-specific (e.g., application 103A may include information pertaining to the time, application 103B may include a calculator, etc.). Applications 103E-103I, however, may provide user 114 functionalities and information that are user-specific (e.g., application 103E may include health information pertaining to user 114, application 103F may include functionality that allows user 114 to access other devices associated with user 114, etc.). As shown in the example of FIG. 1, applications 103E-103I are accessible to user 114 while wearable computing device 102 is operating in the increased access mode, and are not accessible to user 114 while wearable computing device 102 is operating in the reduced access mode.
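One way the two modes might gate a launcher GUI such as GUI 101/GUI 105 is sketched below; the mode names, the app registry, and the user-specific flags are assumptions for illustration rather than details of the disclosure.

```python
from enum import Enum

class AccessMode(Enum):
    REDUCED = "reduced"
    INCREASED = "increased"

# Hypothetical registry: app name -> whether it exposes user-specific data
# (compare applications 103A-103D versus 103E-103I above).
APPS = {
    "clock": False,
    "calculator": False,
    "health": True,
    "device_control": True,
}

def visible_apps(mode: AccessMode) -> list[str]:
    """Return the applications a launcher should expose in the given mode."""
    if mode is AccessMode.INCREASED:
        return list(APPS)  # Increased access: everything is available.
    # Reduced access: only apps that are not user-specific are shown.
    return [name for name, user_specific in APPS.items() if not user_specific]
```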
[0024] FIG. 2 is a block diagram further illustrating an example wearable computing device configured to automatically authenticate a user based on one or more physiological characteristics of the user, in accordance with one or more techniques of the present disclosure. Computing device 202 may be similar to computing device 102 of FIG. 1. As shown in FIG. 2, computing device 202 includes user interface components 232 including one or more sensors 204, processors 224, communication units 228, and one or more storage devices 238. Storage device 238 further includes user interface module 206, resolution module 210, operating system 222, user profile data store 212, and analysis module 208. User interface components 232, one or more sensors 204, and user interface module 206 may be similar to user interface components 132, one or more sensors 104, and user interface module 106, respectively, as described with respect to FIG. 1.
[0025] The one or more communication units 228 of computing device 202, for example, may communicate with external devices by transmitting and/or receiving data at computing device 202, such as to and from remote computer systems or companion devices. Example communication units 228 include a network interface card (e.g., an Ethernet card), an optical transceiver, a radio frequency transceiver, or any other type of device that can send and/or receive information. Other examples of communication units 228 may be devices configured to transmit and receive Ultrawideband®, Bluetooth®, GPS, 3G, 4G, Wi-Fi®, and similar signals that may be found in computing devices, such as mobile devices and the like.
[0026] As shown in the example of FIG. 2, communication channels 230 may interconnect each of the components as shown for inter-component communications (physically, communicatively, and/or operatively). In some examples, communication channels 230 may include a system bus, a network connection (e.g., a wireless connection as described above), one or more inter-process communication data structures, or any other components for communicating data between hardware and/or software locally or remotely.
[0027] User interface module 206, analysis module 208, resolution module 210, user profile data store 212, and operating system 222 (hereinafter “modules 206-222”) may perform operations described herein using software, hardware, firmware, or a mixture of hardware, software, and firmware residing in and executing on computing device 202 or at one or more other remote computing devices (e.g., a cloud-based application, not shown) or companion devices. Computing device 202 may execute one or more of modules 206-222 with one or more processors 224 or may execute any or part of one or more of modules 206-222 as or within a virtual machine executing on underlying hardware. One or more of modules 206-222 may be implemented in various ways, for example, as a downloadable or pre-installed application, remotely as a cloud application, or as part of the operating system of computing device 202. Other examples of computing device 202 that implement techniques of this disclosure may include additional components not shown in FIG. 2.

[0028] In the example of FIG. 2, one or more processors 224 may implement functionality and/or execute instructions within computing device 202. For example, one or more processors 224 may receive and execute instructions that provide the functionality of UI components 232, communication units 228, one or more storage devices 238, and operating system 222 to perform one or more operations as described herein. For example, one or more processors 224 may receive and execute instructions that provide the functionality of some or all of modules 206-222 to perform one or more operations and various functions described herein. The one or more processors 224 include a central processing unit (CPU). Examples of CPUs include, but are not limited to, a digital signal processor (DSP), a general-purpose microprocessor, a tensor processing unit (TPU), a neural processing unit (NPU), a neural processing engine, a core of a CPU, VPU, GPU, TPU, NPU, or another processing device, an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other equivalent integrated or discrete logic circuitry.
[0029] One or more storage devices 238 within computing device 202 may store information for processing during operation of computing device 202 (e.g., computing device 202 may store data that modules 206-222 may access during execution at computing device 202, including user profile data store 212). In some examples, storage device 238 is a temporary memory, meaning that a primary purpose of storage device 238 is not long-term storage. Storage devices 238 on computing device 202 may be configured for short-term storage of information as volatile memory and therefore do not retain stored contents if powered off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art.
[0030] Storage devices 238, in some examples, also include one or more computer-readable storage media. Storage devices 238 may be configured to store larger amounts of information than volatile memory. Storage devices 238 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. Storage devices 238 may store program instructions and/or information (e.g., data) associated with modules 206-222.

[0031] Operating system 222, in some examples, controls the operation of components of computing device 202. For example, operating system 222 facilitates the communication of modules 206-222 with processors 224, one or more UI components 232 including one or more sensors 204, one or more communication units 228, and one or more communication channels 230. Modules 206-222 may each include program instructions and/or data that are executable by computing device 202 (e.g., by one or more processors 224). As one example, UI module 206, analysis module 208, and resolution module 210 can each include instructions that cause computing device 202 to perform one or more of the operations and actions described in the present disclosure.
[0032] As described previously, UI module 206 may cause UI components 232 to output a GUI for display, in which user 214 of computing device 202 may view output and/or provide input at UI components 232. UI module 206 and UI components 232 may receive one or more indications of input from user 214 as he or she interacts with the graphical user interface. UI module 206 and UI components 232 may interpret inputs detected at UI components 232 (e.g., as user 214 provides one or more gestures at one or more locations of UI components 232 at which the graphical user interface is displayed) and may relay information about the inputs detected at UI components 232 to one or more associated platforms, operating systems, applications, and/or services executing at computing device 202 to cause computing device 202 to perform various functions.
[0033] UI module 206 may receive information and instructions from one or more associated platforms, operating systems, applications, and/or services executing at computing device 202 for generating a graphical user interface. In addition, UI module 206 may act as an intermediary between the one or more associated platforms, operating systems, applications, and/or services executing at computing device 202 and various output devices of computing device 202 (e.g., speakers, LED indicators, audio or electrostatic haptic output devices, etc.) to produce output (e.g., a graphic, a flash of light, a sound, a haptic response, etc.) with computing device 202.
[0034] User profile data store 212 may represent any suitable storage medium for storing data. In some examples, user profile data store 212 may store all data received from users of computing device 202 that have provided their explicit consent for computing device 202 to receive their data. In some examples, user profile data store 212 may be indexed by the unique user IDs or other information provided by a user input (e.g., a unique PIN, code, username, etc.) included in the plurality of user profiles. As described above with respect to computing device 102, computing device 202 may determine whether user 214 is an authenticated user by determining whether the one or more physiological characteristics of user 214 match one or more physiological characteristics included in a plurality of user profiles stored in user profile data store 212. Responsive to wearable computing device 202 determining that the one or more physiological characteristics of user 214 do not match one or more physiological characteristics included in the plurality of user profiles stored in user profile data store 212, user interface module 206 of wearable computing device 202 may generate a user interface for display on user interface components 232 that prompts user 214 to confirm that they are a new user and to provide explicit consent for computing device 202 to store user 214’s biometric data in user profile data store 212. Responsive to user 214 confirming they are a new user of wearable computing device 202 and providing their explicit consent, computing device 202 may then create the user profile for user 214, wherein the user profile for user 214 comprises the one or more physiological characteristics of user 214, and the user profile for user 214 is stored in user profile data store 212.
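A minimal sketch of the match step described above might compare each detected characteristic against the stored profiles within a per-characteristic tolerance. The characteristic names, tolerance values, and dictionary layout below are assumptions for illustration only.

```python
# Hypothetical tolerances, in the units of the corresponding reading.
TOLERANCES = {"base_heart_rate": 5.0, "skin_temperature": 0.5}

def find_matching_profile(detected: dict, profiles: list):
    """Return the first stored profile whose characteristics all fall within
    tolerance of the detected values, or None so the device can prompt the
    wearer to confirm they are a new user."""
    for profile in profiles:
        stored = profile["characteristics"]
        if all(
            abs(detected[key] - stored[key]) <= tol
            for key, tol in TOLERANCES.items()
            if key in detected and key in stored
        ):
            return profile
    return None
```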
[0035] For example, user 214 may wear computing device 202 for the first time, in which user 214 may provide an input (e.g., a unique ID, PIN, code, username, etc.) to computing device 202 prior to accessing computing device 202 operating in the reduced access mode (e.g., GUI 101 of FIG. 1). Computing device 202 may determine, based on user 214’s physiological characteristics detected by sensors 204 and user profiles stored in user profile data store 212, whether a user profile was previously created for user 214. Specifically, analysis module 208 may be configured to determine whether a user profile was previously created for user 214 by determining whether the one or more physiological characteristics of user 214 match one or more physiological characteristics included in a plurality of user profiles stored within user profile data store 212. Analysis module 208 may receive information from one or more sensors 204 and store at least an indication of the information received from sensors 204 in user profile data store 212. Responsive to analysis module 208 determining that the information received from sensors 204 does not match any information in the stored user profiles, computing device 202 may prompt user 214 to confirm they are a new user and provide explicit consent for computing device 202 to store user 214’s data. In some examples, while creating the user profile for user 214, computing device 202 may prompt user 214 to confirm whether they would like to change the unique ID, PIN, code, username, etc. provided as input to access or manually unlock computing device 202. In some examples, computing device 202 may assign a new unique ID, PIN, code, username, etc. to user 214 that user 214 must provide as input to access or manually unlock computing device 202. Responsive to computing device 202 creating a user profile for user 214, user 214 may then access computing device 202 operating in the increased access mode (e.g., GUI 105 of FIG. 1). In some examples, user 214 may only access computing device 202 operating in the reduced access mode (e.g., GUI 101 of FIG. 1) until user 214 manually unlocks computing device 202 using the unique ID, PIN, code, username, etc. and/or until computing device 202 determines with high confidence that user 214 is an authenticated user of computing device 202. As described above, with each subsequent manual unlock, i.e., responsive to computing device 202 receiving an input (the unique ID, PIN, code, username, etc.) from user 214, sensors 204 may detect one or more physiological characteristics of user 214 and store them in user 214’s user profile. Computing device 202 may further generate, based on the one or more physiological characteristics of user 214, a training biometric data set that is also stored in user 214’s user profile. User 214’s user profile may further include a machine learning model that may be trained on the stored training biometric data set. Responsive to training the machine learning model with a threshold amount of the stored training biometric data set, computing device 202 may then use the machine learning model for automatically authenticating user 214, in which user 214 may access computing device 202 operating in the increased access mode (e.g., GUI 105 of FIG. 1) without having to manually unlock computing device 202.
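The enrollment flow above, in which each manual unlock contributes a trusted, labeled biometric sample until the per-user model has seen enough data, can be sketched as follows. The sample threshold of 50 and the class and method names are illustrative assumptions, not parameters taken from this disclosure.

```python
from dataclasses import dataclass, field

TRAINING_THRESHOLD = 50  # Hypothetical number of labeled samples required.

@dataclass
class UserProfile:
    user_id: str
    training_set: list = field(default_factory=list)
    model_trained: bool = False

    def record_manual_unlock(self, characteristics: dict) -> None:
        """Each manual unlock yields a trusted, labeled biometric sample."""
        self.training_set.append(characteristics)
        if not self.model_trained and len(self.training_set) >= TRAINING_THRESHOLD:
            self._train_model()

    def _train_model(self) -> None:
        # Placeholder for fitting the per-user machine learning model on a
        # portion of the accumulated training biometric data set.
        self.model_trained = True

def can_auto_unlock(profile: UserProfile) -> bool:
    """Automatic authentication is used only once the model is trained;
    until then the wearer must manually unlock the device."""
    return profile.model_trained
```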
[0036] Computing device 202 may be further configured to monitor information generated by sensors 204 while user 214 is wearing computing device 202. For example, analysis module 208 may monitor sensor information and store the sensor information in user profile data store 212. Analysis module 208 may periodically or continually receive and store the sensor information. At least periodically, analysis module 208 may analyze the sensor information using machine learning techniques to determine a likelihood that the sensor information corresponds to a unique user profile stored within computing device 202 and an authenticated user of computing device 202. For example, analysis module 208 may apply an analysis of the sensor data, both the sensor data currently being received as well as the previously received sensor data (e.g., stored within a memory of wearable computing device 202 and/or within user profile data store 212), and output a confidence score. The analysis may be a machine learning algorithm, a rule base, a decision tree, mathematical optimization, or any other algorithm suitable for determining a likelihood that the sensor data corresponds to a unique user profile stored within computing device 202 and an authenticated user of computing device 202. In various instances, analysis module 208 may periodically store a determined confidence score in user profile data store 212.
[0037] In some examples, analysis module 208 may also analyze application usage information, such as the duration, frequency, location, time, etc., of various applications installed at or otherwise executable by computing device 202. At least periodically, analysis module 208 analyzes the sensor information to determine a likelihood that the sensor information corresponds to an authenticated user of computing device 202. For example, analysis module 208 may apply an analysis of the sensor data, both the sensor data currently being received as well as the previously received sensor data (e.g., stored within user profile data store 212), and construct a confidence score.
[0038] As shown in the example of FIG. 2, analysis module 208 further includes data processing module 216, machine learning module 218, and training module 220. Data processing module 216 may be configured to receive information from sensors 204 or UI components 232 and generate data that can be stored in user profile data store 212. In some examples, the data stored in user profile data store 212 may be preprocessed by data processing module 216. Data processing module 216 may be configured as a module for processing data stored in user profile data store 212 prior to analysis module 208 sending the data to other components or modules of computing device 202 and/or implementing training module 220 and machine learning module 218.
[0039] Machine learning module 218 may include one or more machine learning algorithms for determining a likelihood that the sensor data corresponds to an authenticated user of computing device 202. Machine learning module 218 may further be trained over time by training module 220, in which training module 220 may use historical user data stored in user profile data store 212 to train and test the one or more machine learning models. In some examples, a single machine learning model may exist for each user profile stored in user profile data store 212. In this way, computing device 202 may apply different machine learning models to different users of computing device 202, in which each of the different machine learning models is trained on the data received from a single user.
[0040] The output of machine learning module 218 may include a confidence score. Analysis module 208 may determine whether the confidence score satisfies a confidence score threshold, and responsive to determining that the confidence score satisfies the confidence score threshold, analysis module 208 may further determine that user 214 is included in the set of authenticated users.
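The decision described in the preceding paragraph reduces to a single comparison; the threshold value below is an illustrative assumption.

```python
CONFIDENCE_THRESHOLD = 0.9  # Hypothetical operating point.

def is_authenticated(confidence_score: float) -> bool:
    """A score satisfying the threshold places the wearer in the set of
    authenticated users; otherwise the device remains in the reduced
    access mode and may fall back to a manual unlock."""
    return confidence_score >= CONFIDENCE_THRESHOLD
```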
[0041] In some implementations, machine learning module 218 includes a machine-learned model trained to receive input data of one or more types and, in response, provide output data of one or more types. The input data may include one or more features that are associated with an instance or an example. In some implementations, the one or more features associated with the instance or example can be organized into a feature vector. In some implementations, the output data can include one or more predictions. Predictions can also be referred to as inferences. Thus, given features associated with a particular instance, machine learning module 218 can output a prediction for such instance based on the features.
[0042] Machine learning module 218 can be or include one or more of various different types of machine-learned models. In particular, in some implementations, machine learning module 218 can perform classification, regression, clustering, anomaly detection, recommendation generation, and/or other tasks.
[0043] In some implementations, machine learning module 218 can perform various types of classification based on the input data. For example, machine learning module 218 can perform binary classification or multiclass classification. In binary classification, the output data can include a classification of the input data into one of two different classes. In multiclass classification, the output data can include a classification of the input data into one (or more) of more than two classes. The classifications can be single label or multi-label. Machine learning module 218 may perform discrete categorical classification in which the input data is simply classified into one or more classes or categories.

[0044] In some implementations, machine learning module 218 can perform classification in which machine learning module 218 provides, for each of one or more classes, a numerical value descriptive of a degree to which it is believed that the input data should be classified into the corresponding class. In some instances, the numerical values provided by machine learning module 218 can be referred to as “confidence scores” that are indicative of a respective confidence associated with classification of the input into the respective class. In some implementations, the confidence scores can be compared to one or more thresholds to render a discrete categorical prediction. In some implementations, only a certain number of classes (e.g., one) with the relatively largest confidence scores can be selected to render a discrete categorical prediction.
[0045] Machine learning module 218 may output a probabilistic classification. For example, machine learning module 218 may predict, given a sample input, a probability distribution over a set of classes. Thus, rather than outputting only the most likely class to which the sample input should belong, machine learning module 218 can output, for each class, a probability that the sample input belongs to such class. In some implementations, the probability distribution over all possible classes can sum to one. In some implementations, a Softmax function or other type of function or layer can be used to squash a set of real values respectively associated with the possible classes to a set of real values in the range (0, 1) that sum to one.
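The squashing just described is the standard Softmax function; a minimal, numerically stable version is sketched below. The two-class example is an assumption for illustration.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Map real-valued class scores to probabilities in (0, 1) summing to one."""
    shifted = logits - np.max(logits)  # Subtract the max for numerical stability.
    exp = np.exp(shifted)
    return exp / np.sum(exp)

# Example: scores for (authenticated, not authenticated).
probs = softmax(np.array([2.0, -1.0]))  # -> approximately [0.953, 0.047]
```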
[0046] In some examples, the probabilities provided by the probability distribution can be compared to one or more thresholds to render a discrete categorical prediction. In some implementations, only a certain number of classes (e.g., one) with the relatively largest predicted probability can be selected to render a discrete categorical prediction.
[0047] In cases in which machine learning module 218 performs classification, machine learning module 218 may be trained using supervised learning techniques. For example, machine learning module 218 may be trained on a training dataset that includes training examples labeled as belonging (or not belonging) to one or more classes.
[0048] In some implementations, machine learning module 218 can perform regression to provide output data in the form of a continuous numeric value. The continuous numeric value can correspond to any number of different metrics or numeric representations, including, for example, currency values, scores, or other numeric representations. As examples, machine learning module 218 can perform linear regression, polynomial regression, or nonlinear regression. As examples, machine learning module 218 can perform simple regression or multiple regression. As described above, in some implementations, a Softmax function or other function or layer can be used to squash a set of real values respectively associated with two or more possible classes to a set of real values in the range (0, 1) that sum to one.
[0049] Machine learning module 218 may perform various types of clustering. For example, machine learning module 218 can identify one or more previously-defined clusters to which the input data most likely corresponds. Machine learning module 218 may identify one or more clusters within the input data. That is, in instances in which the input data includes multiple objects, documents, or other entities, machine learning module 218 can sort the multiple entities included in the input data into a number of clusters. In some implementations in which machine learning module 218 performs clustering, machine learning module 218 can be trained using unsupervised learning techniques.
[0050] Machine learning module 218 may perform anomaly detection or outlier detection. For example, machine learning module 218 can identify input data that does not conform to an expected pattern or other characteristic (e.g., as previously observed from previous input data). As examples, the anomaly detection can be used for fraud detection or system failure detection.
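A simple outlier check of the kind described, flagging a reading that does not conform to the previously observed pattern, might use a z-score; the three-standard-deviation cutoff below is a conventional assumption rather than a detail of the disclosure.

```python
import numpy as np

def is_anomalous(reading: float, history: np.ndarray, cutoff: float = 3.0) -> bool:
    """Flag a reading more than `cutoff` standard deviations away from
    the mean of previously observed readings."""
    mu, sigma = float(np.mean(history)), float(np.std(history))
    if sigma == 0.0:
        return reading != mu  # Degenerate history: any deviation is anomalous.
    return abs(reading - mu) / sigma > cutoff
```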
[0051] In some implementations, machine learning module 218 can provide output data in the form of one or more recommendations. For example, machine learning module 218 can be included in a recommendation system or engine. As an example, given input data that describes previous outcomes for certain entities (e.g., a score, ranking, or rating indicative of an amount of success or enjoyment), machine learning module 218 can output a suggestion or recommendation of one or more additional entities that, based on the previous outcomes, are expected to have a desired outcome (e.g., elicit a score, ranking, or rating indicative of success or enjoyment). As one example, given input data descriptive of a context of a computing device, such as computing device 202, a recommendation system can output a suggestion or recommendation of an application that the user might enjoy or wish to download to computing device 202.

[0052] Machine learning module 218 may, in some cases, act as an agent within an environment. For example, machine learning module 218 can be trained using reinforcement learning, which will be discussed in further detail below.
[0053] In some implementations, machine learning module 218 can be a parametric model while, in other implementations, machine learning module 218 can be a non-parametric model. In some implementations, machine learning module 218 can be a linear model while, in other implementations, machine learning module 218 can be a non-linear model.
[0054] As described above, machine learning module 218 can be or include one or more of various different types of machine-learned models. Examples of such different types of machine-learned models are provided below for illustration. One or more of the example models described below can be used (e.g., combined) to provide the output data in response to the input data. Additional models beyond the example models provided below can be used as well.
[0055] In some implementations, machine learning module 218 can be or include one or more classifier models such as, for example, linear classification models; quadratic classification models; etc. Machine learning module 218 may be or include one or more regression models such as, for example, simple linear regression models; multiple linear regression models; logistic regression models; stepwise regression models; multivariate adaptive regression splines; locally estimated scatterplot smoothing models; etc.
[0056] In some examples, machine learning module 218 can be or include one or more decision tree-based models such as, for example, classification and/or regression trees; iterative dichotomiser 3 decision trees; C4.5 decision trees; chi-squared automatic interaction detection decision trees; decision stumps; conditional decision trees; etc.
[0057] Machine learning module 218 may be or include one or more kernel machines. In some implementations, machine learning module 218 can be or include one or more support vector machines. Machine learning module 218 may be or include one or more instance-based learning models such as, for example, learning vector quantization models; self-organizing map models; locally weighted learning models; etc. In some implementations, machine learning module 218 can be or include one or more nearest neighbor models such as, for example, k-nearest neighbor classification models; k-nearest neighbor regression models; etc. Machine learning module 218 can be or include one or more Bayesian models such as, for example, naive Bayes models; Gaussian naive Bayes models; multinomial naive Bayes models; averaged one-dependence estimators; Bayesian networks; Bayesian belief networks; hidden Markov models; etc.
[0058] In some implementations, machine learning module 218 can be or include one or more artificial neural networks (also referred to simply as neural networks). A neural network can include a group of connected nodes, which also can be referred to as neurons or perceptrons. A neural network can be organized into one or more layers. Neural networks that include multiple layers can be referred to as “deep” networks. A deep network can include an input layer, an output layer, and one or more hidden layers positioned between the input layer and the output layer. The nodes of the neural network can be fully connected or non-fully connected.
[0059] Machine learning module 218 can be or include one or more feed forward neural networks. In feed forward networks, the connections between nodes do not form a cycle. For example, each connection can connect a node from an earlier layer to a node from a later layer.
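For illustration, a minimal feed-forward network of the kind described above can be sketched in PyTorch (one of the frameworks mentioned later in this disclosure); the layer sizes are arbitrary assumptions:

```python
import torch
import torch.nn as nn

# A minimal "deep" feed-forward network: input layer -> hidden layer -> output layer.
# Every connection runs from an earlier layer to a later layer, so no cycle forms.
model = nn.Sequential(
    nn.Linear(16, 32),  # input layer to hidden layer
    nn.ReLU(),
    nn.Linear(32, 2),   # hidden layer to output layer (e.g., two classes)
)

x = torch.randn(4, 16)  # a batch of four 16-dimensional input vectors
print(model(x).shape)   # torch.Size([4, 2])
```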
[0060] In some instances, machine learning module 218 can be or include one or more recurrent neural networks. In some instances, at least some of the nodes of a recurrent neural network can form a cycle. Recurrent neural networks can be especially useful for processing input data that is sequential in nature. In particular, in some instances, a recurrent neural network can pass or retain information from a previous portion of the input data sequence to a subsequent portion of the input data sequence through the use of recurrent or directed cyclical node connections.
[0061] In some examples, sequential input data can include time-series data (e.g., sensor data versus time or imagery captured at different times). For example, a recurrent neural network can analyze sensor data versus time to detect or predict a swipe direction, to perform handwriting recognition, etc. Sequential input data may include words in a sentence (e.g., for natural language processing, speech detection or processing, etc.); notes in a musical composition; sequential actions taken by a user (e.g., to detect or predict sequential application usage); sequential object states; etc.
[0062] Example recurrent neural networks include long short-term memory (LSTM) recurrent neural networks; gated recurrent units; bi-directional recurrent neural networks; continuous time recurrent neural networks; neural history compressors; echo state networks; Elman networks; Jordan networks; recursive neural networks; Hopfield networks; fully recurrent networks; sequence-to-sequence configurations; etc.
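As a hedged sketch of how a recurrent network retains information across a sequence, the following PyTorch example feeds hypothetical time-series sensor data through an LSTM; the sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

# An LSTM consumes a sequence (e.g., sensor readings over time) and carries
# information from earlier timesteps forward via recurrent connections.
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

seq = torch.randn(4, 20, 8)      # batch of 4 sequences, 20 timesteps, 8 features
outputs, (h_n, c_n) = lstm(seq)  # outputs: per-timestep states; h_n: final state
print(outputs.shape, h_n.shape)  # torch.Size([4, 20, 16]) torch.Size([1, 4, 16])
```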
[0063] In some implementations, machine learning module 218 can be or include one or more convolutional neural networks. In some instances, a convolutional neural network can include one or more convolutional layers that perform convolutions over input data using learned filters.
[0064] Filters can also be referred to as kernels. Convolutional neural networks can be especially useful for vision problems such as when the input data includes imagery such as still images or video. However, convolutional neural networks can also be applied for natural language processing.
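A hedged PyTorch sketch of a single convolutional layer applying learned filters (kernels) over an image follows; the channel counts and image size are assumptions:

```python
import torch
import torch.nn as nn

# One convolutional layer sliding 8 learned 3x3 filters (kernels) over the input.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)

image = torch.randn(1, 3, 32, 32)  # one RGB image, 32x32 pixels
features = conv(image)
print(features.shape)              # torch.Size([1, 8, 32, 32]): one map per filter
```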
[0065] In some examples, machine learning module 218 can be or include one or more generative networks such as, for example, generative adversarial networks. Generative networks can be used to generate new data such as new images or other content.
[0066] Machine learning module 218 may be or include an autoencoder. In some instances, the aim of an autoencoder is to learn a representation (e.g., a lower-dimensional encoding) for a set of data, typically for the purpose of dimensionality reduction. For example, in some instances, an autoencoder can seek to encode the input data and then provide output data that reconstructs the input data from the encoding. Recently, the autoencoder concept has become more widely used for learning generative models of data. In some instances, the autoencoder can include additional losses beyond reconstructing the input data.
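For illustration only, a minimal autoencoder matching the description above can be sketched in PyTorch; the 64-to-8-dimensional encoding is an arbitrary assumption:

```python
import torch
import torch.nn as nn

# The encoder compresses the input to a lower-dimensional code; the decoder
# attempts to reconstruct the original input from that code.
encoder = nn.Sequential(nn.Linear(64, 8), nn.ReLU())  # 64 -> 8 dimensions
decoder = nn.Sequential(nn.Linear(8, 64))             # 8 -> 64 dimensions

x = torch.randn(16, 64)
code = encoder(x)                # the learned low-dimensional representation
reconstruction = decoder(code)
loss = nn.functional.mse_loss(reconstruction, x)  # reconstruction loss to minimize
print(loss.item())
```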
[0067] Machine learning module 218 may be or include one or more other forms of artificial neural networks such as, for example, deep Boltzmann machines; deep belief networks; stacked autoencoders; etc. Any of the neural networks described herein can be combined (e.g., stacked) to form more complex networks.
[0068] One or more neural networks can be used to provide an embedding based on the input data. For example, the embedding can be a representation of knowledge abstracted from the input data into one or more learned dimensions. In some instances, embeddings can be a useful source for identifying related entities. In some instances, embeddings can be extracted from the output of the network, while in other instances embeddings can be extracted from any hidden node or layer of the network (e.g., a close to final but not final layer of the network). Embeddings can be useful for performing auto-suggestion of a next video, product suggestion, entity or object recognition, etc. In some instances, embeddings can be useful inputs for downstream models. For example, embeddings can be useful to generalize input data (e.g., search queries) for a downstream model or processing system.
[0069] Machine learning module 218 may include one or more clustering models such as, for example, k-means clustering models; k-medians clustering models; expectation maximization models; hierarchical clustering models; etc.
[0070] In some implementations, machine learning module 218 can perform one or more dimensionality reduction techniques such as, for example, principal component analysis; kernel principal component analysis; graph-based kernel principal component analysis; principal component regression; partial least squares regression; Sammon mapping; multidimensional scaling; projection pursuit; linear discriminant analysis; mixture discriminant analysis; quadratic discriminant analysis; generalized discriminant analysis; flexible discriminant analysis; autoencoding; etc.
[0071] In some implementations, machine learning module 218 can perform or be subjected to one or more reinforcement learning techniques such as Markov decision processes; dynamic programming; Q functions or Q-learning; value function approaches; deep Q-networks; differentiable neural computers; asynchronous advantage actor-critics; deterministic policy gradient; etc.
[0072] In some implementations, machine learning module 218 can be an autoregressive model. In some instances, an autoregressive model can specify that the output data depends linearly on its own previous values and on a stochastic term. In some instances, an autoregressive model can take the form of a stochastic difference equation. One example autoregressive model is WaveNet, which is a generative model for raw audio.
[0073] In some implementations, machine learning module 218 can include or form part of a multiple model ensemble. As one example, bootstrap aggregating can be performed, which can also be referred to as “bagging.” In bootstrap aggregating, a training dataset is split into a number of subsets (e.g., through random sampling with replacement) and a plurality of models are respectively trained on the number of subsets. At inference time, respective outputs of the plurality of models can be combined (e.g., through averaging, voting, or other techniques) and used as the output of the ensemble.
[0074] One example ensemble is a random forest, which can also be referred to as a random decision forest. Random forests are an ensemble learning technique for classification, regression, and other tasks. Random forests are generated by producing a plurality of decision trees at training time. In some instances, at inference time, the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees can be used as the output of the forest. Random decision forests can correct for decision trees' tendency to overfit their training set.
[0075] Another example ensemble technique is stacking, which can, in some instances, be referred to as stacked generalization. Stacking includes training a combiner model to blend or otherwise combine the predictions of several other machine-learned models. Thus, a plurality of machine-learned models (e.g., of same or different type) can be trained based on training data. In addition, a combiner model can be trained to take the predictions from the other machine-learned models as inputs and, in response, produce a final inference or prediction. In some instances, a single-layer logistic regression model can be used as the combiner model.
[0076] Another example ensemble technique is boosting. Boosting can include incrementally building an ensemble by iteratively training weak models and then adding to a final strong model. For example, in some instances, each new model can be trained to emphasize the training examples that previous models misinterpreted (e.g., misclassified). For example, a weight associated with each of such misinterpreted examples can be increased. One common implementation of boosting is AdaBoost, which can also be referred to as Adaptive Boosting. Other example boosting techniques include LPBoost; TotalBoost; BrownBoost; XGBoost; MadaBoost; LogitBoost; gradient boosting; etc. Furthermore, any of the models described above (e.g., regression models and artificial neural networks) can be combined to form an ensemble. As an example, an ensemble can include a top-level machine-learned model or a heuristic function to combine and/or weight the outputs of the models that form the ensemble.
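As a hedged illustration of bootstrap aggregating, the sketch below trains a random forest with scikit-learn on synthetic labeled data; the dataset and parameters are assumptions, not the disclosed training data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic labeled data standing in for any classification task.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# A random forest: many decision trees trained on bootstrap samples of the data;
# at inference time, the mode of the trees' predicted classes is the output.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict(X[:5]))
```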
[0077] In some implementations, multiple machine-learned models (e.g., that form an ensemble) can be linked and trained jointly (e.g., through backpropagation of errors sequentially through the model ensemble). However, in some implementations, only a subset (e.g., one) of the jointly trained models is used for inference.
[0078] In some implementations, machine learning module 218 can be used to preprocess the input data for subsequent input into another model. For example, machine learning module 218 can perform dimensionality reduction techniques and embeddings (e.g., matrix factorization, principal components analysis, singular value decomposition, word2vec/GLOVE, and/or related approaches); clustering; and even classification and regression for downstream consumption. Many of these techniques have been discussed above and will be further discussed below.
[0079] As discussed above, machine learning module 218 can be trained or otherwise configured to receive the input data and, in response, provide the output data. The input data can include different types, forms, or variations of input data. As examples, in various implementations, the input data can include features that describe the content (or portion of content) initially selected by the user, e.g., content of a user-selected document or image, links pointing to the user selection, links within the user selection relating to other files available on device or cloud, metadata of the user selection, etc. Additionally, with user permission, the input data can include the context of user usage, obtained either from the app itself or from other sources. Examples of usage context include breadth of share (sharing publicly, or with a large group, or privately, or with a specific person), context of share, etc. When permitted by the user, additional input data can include the state of the device, e.g., the location of the device, the apps running on the device, etc.
[0080] In some implementations, machine learning module 218 can receive and use the input data in its raw form. In some implementations, the raw input data can be preprocessed. Thus, in addition or alternatively to the raw input data, machine learning module 218 can receive and use the preprocessed input data.
[0081] In some implementations, preprocessing the input data can include extracting one or more additional features from the raw input data. For example, feature extraction techniques can be applied to the input data to generate one or more new, additional features. Example feature extraction techniques include edge detection; corner detection; blob detection; ridge detection; scale-invariant feature transform; motion detection; optical flow; Hough transform; etc.
[0082] In some implementations, the extracted features can include or be derived from transformations of the input data into other domains and/or dimensions. As an example, the extracted features can include or be derived from transformations of the input data into the frequency domain. For example, wavelet transformations and/or fast Fourier transforms can be performed on the input data to generate additional features.
[0083] In some implementations, the extracted features can include statistics calculated from the input data or certain portions or dimensions of the input data. Example statistics include the mode, mean, maximum, minimum, or other metrics of the input data or portions thereof. [0084] In some implementations, as described above, the input data can be sequential in nature. In some instances, the sequential input data can be generated by sampling or otherwise segmenting a stream of input data. As one example, frames can be extracted from a video. In some implementations, sequential data can be made non-sequential through summarization.
[0085] As another example preprocessing technique, portions of the input data can be imputed. For example, additional synthetic input data can be generated through interpolation and/or extrapolation.
[0086] As another example preprocessing technique, some or all of the input data can be scaled, standardized, normalized, generalized, and/or regularized. Example regularization techniques include ridge regression; least absolute shrinkage and selection operator (LASSO); elastic net; least-angle regression; cross-validation; L1 regularization; L2 regularization; etc. As one example, some or all of the input data can be normalized by subtracting the mean across a given dimension's feature values from each individual feature value and then dividing by the standard deviation or other metric.
[0087] As another example preprocessing technique, some or all of the input data can be quantized or discretized. In some cases, qualitative features or variables included in the input data can be converted to quantitative features or variables. For example, one-hot encoding can be performed.
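The two preprocessing steps just described (normalization by mean and standard deviation, and one-hot encoding of a qualitative variable) can be sketched as follows; the feature values are hypothetical:

```python
import numpy as np

# Normalize: subtract each dimension's mean, then divide by its standard deviation.
features = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
normalized = (features - features.mean(axis=0)) / features.std(axis=0)

# One-hot encode: convert a qualitative variable into quantitative indicator columns.
categories = np.array([0, 2, 1])  # three observed category indices
one_hot = np.eye(3)[categories]   # rows: [1,0,0], [0,0,1], [0,1,0]
print(normalized, one_hot, sep="\n")
```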
[0088] In some examples, dimensionality reduction techniques can be applied to the input data prior to input into machine learning module 218. Several examples of dimensionality reduction techniques are provided above, including, for example, principal component analysis; kernel principal component analysis; graph-based kernel principal component analysis; principal component regression; partial least squares regression; Sammon mapping; multidimensional scaling; projection pursuit; linear discriminant analysis; mixture discriminant analysis; quadratic discriminant analysis; generalized discriminant analysis; flexible discriminant analysis; autoencoding; etc.
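For illustration, principal component analysis (the first technique listed above) might be applied to input data as in the following sketch; the 10-dimensional inputs are hypothetical:

```python
import numpy as np
from sklearn.decomposition import PCA

# Reduce hypothetical 10-dimensional input vectors to 2 principal components
# before passing them to a machine-learned model.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))

reduced = PCA(n_components=2).fit_transform(X)
print(reduced.shape)  # (100, 2)
```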
[0089] In some implementations, during training, the input data can be intentionally deformed in any number of ways to increase model robustness, generalization, or other qualities. Example techniques to deform the input data include adding noise; changing color, shade, or hue; magnification; segmentation; amplification; etc.
[0090] In response to receipt of the input data, machine learning module 218 can provide the output data. The output data can include different types, forms, or variations of output data. As examples, in various implementations, the output data can include content, either stored locally on the user device or in the cloud, that is relevantly shareable along with the initial content selection.
[0091] As discussed above, in some implementations, the output data can include various types of classification data (e.g., binary classification, multiclass classification, single label, multi-label, discrete classification, regressive classification, probabilistic classification, etc.) or can include various types of regressive data (e.g., linear regression, polynomial regression, nonlinear regression, simple regression, multiple regression, etc.). In other instances, the output data can include clustering data, anomaly detection data, recommendation data, or any of the other forms of output data discussed above.
[0092] In some implementations, the output data can influence downstream processes or decision making. As one example, in some implementations, the output data can be interpreted and/or acted upon by a rules-based regulator.
[0093] The present disclosure provides techniques that include or otherwise leverage one or more machine-learned models to suggest content, either stored locally on the user device or in the cloud, that is relevantly shareable along with the initial content selection based on features of the initial content selection. Any of the different types or forms of input data described above can be combined with any of the different types or forms of machine-learned models described above to provide any of the different types or forms of output data described above.
[0094] The techniques of the present disclosure can be implemented by or otherwise executed on one or more computing devices. Example computing devices include user computing devices (e.g., laptops, desktops, and mobile computing devices such as tablets, smartphones, wearable computing devices, etc.); embedded computing devices (e.g., devices embedded within a vehicle, camera, image sensor, industrial machine, satellite, gaming console or controller, or home appliance such as a refrigerator, thermostat, energy meter, home energy manager, smart home assistant, etc.); server computing devices (e.g., database servers, parameter servers, file servers, mail servers, print servers, web servers, game servers, application servers, etc.); dedicated, specialized model processing or training devices; virtual computing devices; other computing devices or computing infrastructure; or combinations thereof.
[0095] In some examples, computing device 202 may communicate over a network with an example server computing system that includes a machine-learned model. For example, a server device may store and implement machine learning module 218. In some instances, output data obtained through machine learning module 218 at a server device can be used to improve other server tasks or can be used by other non-user devices to improve services performed by or for such other non-user devices. For example, the output data can improve other downstream processes performed by the server device for a computing device of a user or an embedded computing device. In other instances, output data obtained through implementation of machine learning module 218 at a server device can be sent to and used by a user computing device, such as computing device 202, or an embedded computing device. For example, the server device can be said to perform machine learning as a service.
[0096] In yet other implementations, different respective portions of machine learning module 218 can be stored at and/or implemented by some combination of a user computing device; an embedded computing device; a server computing device; etc. In other words, portions of machine learning module 218 may be distributed in whole or in part amongst computing device 202 and a server device.
[0097] Computing device 202 and/or the server device may perform graph processing techniques or other machine learning techniques using one or more machine learning platforms, frameworks, and/or libraries, such as, for example, TensorFlow, Caffe/Caffe2, Theano, Torch/PyTorch, MXnet, CNTK, etc. Computing device 202 and/or the server device may be distributed at different physical locations and connected via one or more networks. If configured as distributed computing devices, computing device 202 and/or the server device may operate according to sequential computing architectures, parallel computing architectures, or combinations thereof. In one example, distributed computing devices can be controlled or guided through use of a parameter server.
[0098] In some implementations, multiple instances of machine learning module 218 can be parallelized to provide increased processing throughput. For example, the multiple instances of machine learning module 218 can be parallelized on a single processing device or computing device or parallelized across multiple processing devices or computing devices.
[0099] Each computing device that implements machine learning module 218 or other aspects of the present disclosure can include a number of hardware components that enable performance of the techniques described herein. For example, each computing device can include one or more memory devices that store some or all of machine learning module 218. For example, machine learning module 218 can be a structured numerical representation that is stored in memory. The one or more memory devices can also include instructions for implementing machine learning module 218 or performing other operations. Example memory devices include RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof.
[0100] Each computing device can also include one or more processing devices that implement some or all of machine learning module 218 and/or perform other related operations. Example processing devices include one or more of: a central processing unit (CPU); a visual processing unit (VPU); a graphics processing unit (GPU); a tensor processing unit (TPU); a neural processing unit (NPU); a neural processing engine; a core of a CPU, VPU, GPU, TPU, NPU or other processing device; an application specific integrated circuit (ASIC); a field programmable gate array (FPGA); a co-processor; a controller; or combinations of the processing devices described above. Processing devices can be embedded within other hardware components such as, for example, an image sensor, accelerometer, etc.
[0101] Hardware components (e.g., memory devices and/or processing devices) can be spread across multiple physically distributed computing devices and/or virtually distributed computing systems.
[0102] Machine learning module 218 described herein can be trained with training module 220 and then provided for storage and/or implementation at one or more computing devices, such as computing device 202. For example, training module 220 executes locally at computing device 202. However, in some examples, training module 220 can be separate from computing device 202 or any other computing device that implements machine learning module 218.
[0103] In some implementations, machine learning module 218 may be trained in an offline fashion or an online fashion. In offline training (also known as batch learning), machine learning module 218 is trained on the entirety of a static set of training data. In online learning, machine learning module 218 is continuously trained (or re-trained) as new training data becomes available (e.g., while the model is used to perform inference).
[0104] Training module 220 may perform centralized training of machine learning module 218 (e.g., based on a centrally stored dataset). In other implementations, decentralized training techniques such as distributed training, federated learning, or the like can be used to train, update, or personalize machine learning module 218.
[0105] Machine learning module 218 described herein can be trained according to one or more of various different training types or techniques. For example, in some implementations, machine learning module 218 can be trained by training module 220 using supervised learning, in which machine learning module 218 is trained on a training dataset that includes instances or examples that have labels. The labels can be manually applied by experts, generated through crowd-sourcing, or provided by other techniques (e.g., by physics-based or complex mathematical models). In some implementations, if the user has provided consent, the training examples can be provided by the user computing device. In some implementations, this process can be referred to as personalizing the model.
[0106] Training data used by training module 220 can include, upon user permission for use of such data for training, anonymized usage logs of sharing flows, e.g., content items that were shared together, bundled content pieces already identified as belonging together, e.g., from entities in a knowledge graph, etc. In some implementations, training data can include examples of input data that have been assigned labels that correspond to output data.
[0107] In some implementations, machine learning module 218 can be trained by optimizing an objective function. For example, in some implementations, the objective function may be or include a loss function that compares (e.g., determines a difference between) output data generated by the model from the training data and labels (e.g., ground-truth labels) associated with the training data. For example, the loss function can evaluate a sum or mean of squared differences between the output data and the labels. In some examples, the objective function may be or include a cost function that describes a cost of a certain outcome or output data. Other examples of an objective function can include margin-based techniques such as, for example, triplet loss or maximum-margin training.
[0108] One or more of various optimization techniques can be performed to optimize an objective function. For example, the optimization technique(s) can minimize or maximize the objective function. Example optimization techniques include Hessian-based techniques and gradient-based techniques, such as, for example, coordinate descent; gradient descent (e.g., stochastic gradient descent); subgradient techniques; etc. Other optimization techniques include black box optimization techniques and heuristics.
[0109] In some implementations, backward propagation of errors can be used in conjunction with an optimization technique (e.g., gradient-based techniques) to train machine learning module 218 (e.g., when the machine-learned model is a multi-layer model such as an artificial neural network). For example, an iterative cycle of propagation and model parameter (e.g., weights) update can be performed to train machine learning module 218. Example backpropagation techniques include truncated backpropagation through time, Levenberg-Marquardt backpropagation, etc.
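A hedged PyTorch sketch of the iterative cycle just described (forward pass, loss evaluation, backward propagation of errors, parameter update) follows; the model, data, and learning rate are illustrative assumptions:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)                                   # a minimal one-layer model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # stochastic gradient descent
loss_fn = nn.MSELoss()  # mean of squared differences, as described above

inputs, labels = torch.randn(32, 8), torch.randn(32, 1)   # hypothetical training data
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # compare output data to labels
    loss.backward()                        # backpropagate errors through the model
    optimizer.step()                       # update the model parameters (weights)
```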
[0110] In some implementations, machine learning module 218 described herein can be trained using unsupervised learning techniques. Unsupervised learning can include inferring a function to describe hidden structure from unlabeled data. For example, a classification or categorization may not be included in the data. Unsupervised learning techniques can be used to produce machine-learned models capable of performing clustering, anomaly detection, learning latent variable models, or other tasks.
[0111] Machine learning module 218 can be trained using semi-supervised techniques which combine aspects of supervised learning and unsupervised learning. Machine learning module 218 can be trained or otherwise generated through evolutionary techniques or genetic algorithms. In some implementations, machine learning module 218 described herein can be trained using reinforcement learning. In reinforcement learning, an agent (e.g., model) can take actions in an environment and learn to maximize rewards and/or minimize penalties that result from such actions. Reinforcement learning can differ from the supervised learning problem in that correct input/output pairs are not presented, nor sub-optimal actions explicitly corrected.
[0112] In some implementations, one or more generalization techniques can be performed during training to improve the generalization of machine learning module 218.
Generalization techniques can help reduce overfitting of machine learning module 218 to the training data. Example generalization techniques include dropout techniques; weight decay techniques; batch normalization; early stopping; subset selection; stepwise selection; etc.
[0113] In some implementations, machine learning module 218 described herein can include or otherwise be impacted by a number of hyperparameters, such as, for example, learning rate, number of layers, number of nodes in each layer, number of leaves in a tree, number of clusters, etc. Hyperparameters can affect model performance. Hyperparameters can be hand-selected or can be automatically selected through application of techniques such as, for example, grid search; black box optimization techniques (e.g., Bayesian optimization, random search, etc.); gradient-based optimization; etc. Example techniques and/or tools for performing automatic hyperparameter optimization include Hyperopt; Auto-WEKA; Spearmint; Metric Optimization Engine (MOE); etc.
[0114] In some implementations, various techniques can be used to optimize and/or adapt the learning rate when the model is trained. Example techniques and/or tools for performing learning rate optimization or adaptation include Adagrad; Adaptive Moment Estimation (ADAM); Adadelta; RMSprop; etc.
[0115] In some implementations, transfer learning techniques can be used to provide an initial model from which to begin training of machine learning module 218 described herein. [0116] In some implementations, machine learning module 218 described herein can be included in different portions of computer-readable code on a computing device. In one example, machine learning module 218 can be included in a particular application or program and used (e.g., exclusively) by such particular application or program. Thus, in one example, a computing device can include a number of applications and one or more of such applications can contain its own respective machine learning library and machine-learned model(s).
[0117] In another example, machine learning module 218 described herein can be included in an operating system of a computing device (e.g., in a central intelligence layer of an operating system) and can be called or otherwise used by one or more applications that interact with the operating system. In some implementations, each application can communicate with the central intelligence layer (and model(s) stored therein) using an application programming interface (API) (e.g., a common, public API across all applications).
[0118] In some implementations, the central intelligence layer can communicate with a central device data layer. The central device data layer can be a centralized repository of data for the computing device. The central device data layer can communicate with a number of other components of the computing device, such as, for example, one or more sensors, a context manager, a device state component, and/or additional components. In some implementations, the central device data layer can communicate with each device component using an API (e.g., a private API).
[0119] The technology discussed herein makes reference to servers, databases, software applications, and other computer-based systems, as well as actions taken and information sent to and from such systems. The inherent flexibility of computer-based systems allows for a great variety of possible configurations, combinations, and divisions of tasks and functionality between and among components. For instance, processes discussed herein can be implemented using a single device or component or multiple devices or components working in combination.
[0120] Databases and applications can be implemented on a single system or distributed across multiple systems. Distributed components can operate sequentially or in parallel. [0121] In addition, the machine learning techniques described herein are readily interchangeable and combinable. Although certain example techniques have been described, many others exist and can be used in conjunction with aspects of the present disclosure.
[0122] A brief overview of example machine-learned models and associated techniques has been provided by the present disclosure. For additional details, readers should review the following references: Machine Learning A Probabilistic Perspective (Murphy); Rules of Machine Learning: Best Practices for ML Engineering (Zinkevich); Deep Learning (Goodfellow); Reinforcement Learning: An Introduction (Sutton); and Artificial Intelligence: A Modern Approach (Norvig).
[0123] In one example, a first set of physical characteristics may include one or more physiological characteristics of the user 214 detected by sensors 204. Computing device 202 may determine, based on the sensor data generated by sensors 204, whether computing device 202 is being worn. Responsive to determining that computing device 202 is being worn, computing device 202 may then apply machine learning module 218 to a second set of physical characteristics detected by sensors 204 to generate an output. Computing device 202 may then determine, based on the output of machine learning module 218, whether user 214 is included in a set of authenticated users. Responsive to determining that user 214 is included in the set of authenticated users, computing device 202 may then automatically transition from operating in the reduced access mode to operating in the increased access mode, wherein, while operating in the increased access mode, additional functionality of the computing device 202 is accessible to user 214 that is not accessible to user 214 while computing device 202 is operating in the reduced access mode.
[0124] Specifically, resolution module 210 may be configured to determine whether or not user 214 is authenticated based on the confidence score outputted by machine learning module 218, additional information received from one or more of modules 206-222, and/or information stored by user profile data store 212. Resolution module 210 may require the confidence score to satisfy a strict threshold (i.e., require a higher likelihood or confidence that computing device 202 is being used by an authenticated user before computing device 202 transitions from the reduced access mode to an increased access mode). For example, analysis module 208 may output a confidence score to resolution module 210, and responsive to resolution module 210 determining that the confidence score satisfies a specified confidence score threshold, user 214 may be considered an authenticated user and use computing device 202 in the increased access mode.
[0125] In some examples, resolution module 210 may alter the confidence score threshold based on the amount of user data that training module 220 has used to train a machine learning model. In some examples, training module 220 may continuously train a machine learning model included in machine learning module 218 until the machine learning model is trained on a threshold amount of training data. In various instances, analysis module 208 may periodically store confidence scores determined by machine learning module 218 in user profile data store 212. [0126] In some examples, resolution module 210 may query user profile data store 212 to retrieve past user data for comparison to the current user data in order to determine whether the current user data is typical. The past user data may include past biometric information, location information, date and time information, etc. Resolution module 210 may compare such information to information received from one or more of modules 206-222 as well as the confidence score received from analysis module 208 in order to determine whether to authenticate, reject, or require reauthentication (including which level of reauthentication is required, such as low or high security level reauthentication).
[0127] In some instances, an owner or authoritative user of computing device 202 may select a confidence score threshold. For example, resolution module 210 may compare a confidence score determined by analysis module 208 and a selected confidence score (e.g., stored in user profile data store 212, stored in a cloud computing system, etc.) in order to determine that user 214 is authenticated, rejected, or requires reauthentication. As an example, the owner of computing device 202 that desires to minimize the potential for access to sensitive information by unauthenticated persons may select a high confidence score, and resolution module 210 may only authenticate user 214 if the confidence score determined by analysis module 208 is above the selected high confidence score.
[0128] If reauthentication is required, resolution module 210 may further determine whether a higher level of reauthentication (i.e., a greater security measure) is required, which may detract from the user experience, or a lower level of reauthentication (i.e., a lesser security measure) is required, which may result in a smoother user experience. In order to satisfy the lower level reauthentication requirement, resolution module 210 may use less secure data, such as GPS location information, network neighborhood information determined using Wi-Fi, etc. In order to satisfy the higher level reauthentication requirement, resolution module 210 may use more reliable data for particularly identifying user 214, such as the one or more physiological characteristics of user 214, passwords, pin patterns, visual data for facial recognition, motion data (e.g., when requiring the user to perform a particular gesture using computing device 202), etc. While the various types of data are described as being used for lower level or higher level reauthentication requirements, any of the various types of data may be used for either or both levels of reauthentication requirement, and a user may configure which types of data may be used for each level of reauthentication requirement.
[0129] In some examples, a security challenge required to reauthenticate the user may be performed using computing device 202. For example, if resolution module 210 requires user 214 to enter a password in order to be reauthenticated, user 214 may be able to enter the password by providing input to a user interface generated by user interface module 206 for display on user interface component 232. As another example, user 214 may place his/her finger on a sensor 204 of computing device 202, and computing device 202 may generate the fingerprint biometric information and provide it to resolution module 210. In other examples, reauthentication processes may be performed by analysis module 208 and resolution module 210.
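For illustration only, the tiered resolution logic described above might be sketched as follows; the function name, threshold values, and return labels are hypothetical, not the actual interfaces of resolution module 210:

```python
# Hypothetical thresholds; an owner-selected value could replace either constant.
HIGH_SECURITY_THRESHOLD = 0.95
LOW_SECURITY_THRESHOLD = 0.80

def resolve(confidence_score: float) -> str:
    """Map a model confidence score to an authentication decision."""
    if confidence_score >= HIGH_SECURITY_THRESHOLD:
        return "authenticate"        # transition to the increased access mode
    if confidence_score >= LOW_SECURITY_THRESHOLD:
        return "reauthenticate_low"  # e.g., location or network neighborhood checks
    return "reauthenticate_high"     # e.g., fingerprint, passcode, facial recognition

print(resolve(0.97))  # "authenticate"
```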
[0130] For example, resolution module 210 may receive a current location of computing device 202 from data processing module 216 and compare the current location to previous locations of computing device 202 retrieved from user profile data store 212. If the current location does not correspond to a location computing device 202 previously visited or infrequently visited as determined based on the previous location information, resolution module 210 may increase the confidence score threshold, thus making it less likely that user 214 will be authenticated without at least some level of reauthentication.
[0131] As another example, data processing module 216 may receive image data captured by one of UI components 232 (e.g., video data, still image data, etc. captured by a camera) and determine if the image data includes one or more individuals. In some examples, data processing module 216 may determine if the image data includes one or more faces. If the image data includes the face of an authenticated user, data processing module 216 may determine that the authenticated user is currently using computing device 202. If the image data does not include the face of an authenticated user, data processing module 216 may determine that an authenticated user is not currently using computing device 202. In either instance, data processing module 216 may provide a result of the determination to resolution module 210. Resolution module 210 may decrease the confidence score threshold in response to data processing module 216 determining that an authenticated user is currently using computing device 202 and vice versa.
[0132] In examples where resolution module 210 determines that reauthentication is required, resolution module 210 may cause UI module 206 to output instructions for user 214 of computing device 202 to complete a security challenge and how to complete the security challenge. Depending on the security challenge, user 214 may be required to submit to a facial recognition process, provide a fingerprint for fingerprint authentication, enter a passcode, perform an input pattern, provide a voice sample for voice authentication, move computing device 202 in a particular pattern, etc. Regardless of the security challenge required, resolution module 210 may use information from one or more of modules 206-222 to complete the reauthentication and determine whether or not the user will be authenticated.
[0133] In this way, computing device 202 may be configured to predictively authenticate user 214. That is, computing device 202 may determine a confidence score prior to user 214 initiating an unlock of computing device 202 and use the previously determined confidence score as well as other sensor information and stored user data to authenticate user 214 without requiring user 214 to manually unlock computing device 202.
[0134] FIG. 3 is a block diagram illustrating an example data processing module 316 configured to generate biometric data based on one or more physiological characteristics of a user. Data processing module 316 may be similar if not substantially similar to data processing module 216 of FIG. 2. As described above, data processing module 316 may be configured to receive information indicating a user’s physiological characteristics, such as base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, gait, or voice, and generate data that can be stored in a user’s profile. As shown in FIG. 3, data processing module 316 further includes fingerprint detection module 338, skin color detection module 340, device location module 342, motion detection module 344, heart rate detection module 346, voice detection module 348, and data preprocessing module 350.
[0135] Fingerprint detection module 338 may receive fingerprint information from a fingerprint sensor (e.g., one or more of sensors 204 of FIG. 2) and/or a user interface component (i.e., in examples where user interface components 232 of FIG. 2 include a presence-sensitive input device capable of capturing a fingerprint). Skin color detection module 340 may be configured to receive visual data from an image sensor or through an input mechanism capable of capturing skin color details, such as a specialized sensor or touch-sensitive display. Skin color detection module 340 may analyze the received skin color data to determine the skin color characteristics of a user. Device location module 342 may be configured to determine the location of computing device 202 by accessing location data from various sources (e.g., GPS receivers, Wi-Fi positioning systems, cellular network signals, or other location-aware technologies available on the computing device). Device location module 342 may compare the obtained location data with historical location data stored in the user’s profile to determine whether the user is in a location that is typical for the user. Motion detection module 344 may be configured to gather data from motion sensors integrated within computing device 202, such as accelerometers, gyroscopes, or magnetometers. These sensors may capture changes in motion, orientation, and position of the computing device, which may be used in gesture recognition. Motion detection module 344 may compare motion data with the occurrence and characteristics of a user’s historical frequent motions. Heart rate detection module 346 may be configured to measure and monitor the user's heart rate using specialized sensors integrated within the computing device, such as optical heart rate sensors or electrodes. These sensors may capture changes in blood flow and heartbeat patterns to accurately determine the user's heart rate. Heart rate detection module 346 may compare the detected heart rate data with the user’s historical heart rate data. Voice detection module 348 may be configured to process audio input from the computing device's microphone or other audio input sources. Voice detection module 348 may detect human voice patterns within the captured sound and use voice recognition technology. Voice detection module 348 may compare the user's voice data with voice data stored in the user’s profile. In some examples, one or more of modules 338-348 may be implemented by the computing device while a user is wearing the computing device.
Further, while modules 338-348 are example modules configured to detect or determine the example physiological characteristics of a user described herein, computing device 202 may include other modules configured to detect other physiological characteristics of a user.
[0136] As described, fingerprint detection module 338, skin color detection module 340, device location module 342, motion detection module 344, heart rate detection module 346, and voice detection module 348 may compare biometric data to stored biometric data of an authenticated user of computing device 202 that is stored in user profile data store 312. If the captured data sufficiently matches the stored data in user profile data store 312, modules 338-348 may provide, to resolution module 210 of FIG. 2, an indication that the current user of computing device 202 is an authenticated user. Similarly, if the data does not match, modules 338-348 may provide, to resolution module 210, an indication that the current user is not an authenticated user. As described above, resolution module 210 may adjust the confidence score threshold based on the result of the comparison received from modules 338-348 (i.e., increasing the confidence score threshold if the user is not an authenticated user and vice versa).
[0137] In some examples, the data received by modules 338-348 or stored in user profile data store 312 may be preprocessed by data preprocessing module 350. Specifically, data preprocessing module 350 may be configured as a module for processing data received by modules 338-348 (hereinafter referred to as “input data”) or any other data stored in user profile data store 312 prior to computing device 202 sending the data to other components or modules and/or implementing machine learning techniques. For example, information indicating a user’s skin color determined by skin color detection module 340 may be sent to data preprocessing module 350, wherein data preprocessing module 350 processes the information and performs steps to transform the information into data that can be used by a machine learning model or other components of computing device 202.
[0138] In some implementations, preprocessing the input data may include extracting one or more additional features from raw input data. For example, feature extraction techniques may be applied to the input data to generate one or more new, additional features. Example feature extraction techniques include edge detection; corner detection; blob detection; ridge detection; scale-invariant feature transform; motion detection; optical flow; Hough transform; etc.
[0139] In some implementations, the extracted features may include or be derived from transformations of the input data into other domains and/or dimensions. As an example, the extracted features may include or be derived from transformations of the input data into the frequency domain. For example, wavelet transformations and/or fast Fourier transforms may be performed on the input data to generate additional features.
[0140] In some implementations, the extracted features may include statistics calculated from the input data or certain portions or dimensions of the input data. Example statistics include the mode, mean, maximum, minimum, or other metrics of the input data or portions thereof.
[0141] In some implementations, as described above, the input data may be sequential in nature. In some instances, the sequential input data may be generated by sampling or otherwise segmenting a stream of input data. As one example, frames may be extracted from a video. In some implementations, sequential data may be made non-sequential through summarization.
[0142] As another example of data preprocessing techniques, portions of the input data may be imputed. For example, additional synthetic input data may be generated through interpolation and/or extrapolation.
[0143] As another example of data preprocessing techniques, some or all of the input data may be scaled, standardized, normalized, generalized, and/or regularized. Example regularization techniques include ridge regression; least absolute shrinkage and selection operator (LASSO); elastic net; least-angle regression; cross-validation; L1 regularization; L2 regularization; etc. For example, some or all of the input data may be normalized by subtracting the mean across a given dimension’s feature values from each feature value and then dividing by the standard deviation or another metric.
[0144] As another example of data preprocessing techniques, some or all of the input data may be quantized or discretized. In some cases, qualitative features or variables included in the input data may be converted to quantitative features or variables. For example, one-hot encoding may be performed.
[0145] In some examples, dimensionality reduction techniques may be applied to the input data. Several examples of dimensionality reduction techniques are provided above, including, for example, principal component analysis; kernel principal component analysis; graph-based kernel principal component analysis; principal component regression; partial least squares regression; Sammon mapping; multidimensional scaling; projection pursuit; linear discriminant analysis; mixture discriminant analysis; quadratic discriminant analysis; generalized discriminant analysis; flexible discriminant analysis; autoencoding; etc.
[0146] Data preprocessing module 350 may send processed input data to user profile data store 312, in which computing device 202 may then access and use the processed input data to determine whether a user is an authenticated user.
[0147] In this way, the unique physiological characteristics of different users can be used to automatically authenticate users and provide them access to a computing device while still protecting each user’s sensitive information included in their user profiles. For example, the computing device operating in the reduced access mode may detect a second user input to unlock the computing device. Responsive to detecting the second user input, one or more sensors of the computing device may detect one or more physiological characteristics, such as those described above, of a second user. The computing device may then determine, based on the one or more physiological characteristics of the second user, whether a user profile was previously created for the second user. Responsive to determining that the user profile was not previously created for the second user, the computing device may then create the user profile for the second user, wherein the user profile for the second user includes a second machine learning model for automatically authenticating the second user based on the one or more detected physiological characteristics of the second user. The second machine learning model may further be trained using the one or more physiological characteristics of the second user, in which the physiological characteristics of the second user are detected or determined by one or more of modules 338-350.
[0148] FIG. 4 is a block diagram further illustrating an example computing device in communication with a companion device configured to automatically authenticate a user, in accordance with one or more techniques of the present disclosure. Computing device 402 may be similar to, if not substantially similar to, computing device 102 and computing device 202 of FIG. 1 and FIG. 2, respectively. Computing device 402 may determine that a companion device 462 from a plurality of companion devices operating in a reduced access mode is proximate or adjacent to computing device 402. In some examples, “proximate” may be defined as within a range over which wireless communication (e.g., wireless networks including Bluetooth, 3G, LTE, and Wi-Fi wireless networks) spans (e.g., 9 kHz to 300 GHz). For example, if computing device 402 and companion device 462 are in communication via Bluetooth, computing device 402 may determine that companion device 462 is proximate to computing device 402 when companion device 462 is within approximately 10 meters from computing device 402. In another example, if computing device 402 and companion device 462 are in communication via Wi-Fi, computing device 402 may determine that companion device 462 is proximate to computing device 402 when companion device 462 is within approximately 100 meters from computing device 402. In other words, computing device 402 may determine that companion device 462 is proximate to computing device 402 when the distance between companion device 462 and computing device 402 supports communication between companion device 462 and computing device 402.
[0149] Telemetry module 460 of computing device 402 and telemetry module 464 of companion device 462 may be used to communicate with each other or other external devices via one or more networks, such as the one or more wireless networks described above. In some examples, such as in the example of FIG. 4, companion device 462 utilizes telemetry module 464 to wirelessly communicate with telemetry module 460 of computing device 402.
[0150] Responsive to determining that companion device 462 is proximate to computing device 402 and that user 414 is included in the set of authenticated users, computing device 402 may then send, to companion device 462, the output of analysis module 408, which may be substantially similar to analysis module 208 of FIG. 2.
[0151] Responsive to receiving the output of analysis module 408, companion device 462 may then determine, based on the output, whether user 414 is included in a set of authenticated users for computing device 402. Responsive to determining that user 414 is included in the set of authenticated users for computing device 402, computing device 402 may provide authentication information to companion device 462 that causes companion device 462 to automatically transition from operating in the reduced access mode to operating in an increased access mode. Furthermore, as shown in the example of FIG. 4, user interface components 468 (which may be substantially similar to user interface components 132 of FIG. 1) may generate a user interface for display on companion device 462 that indicates companion device 462 is unlocked and/or user 414 can access companion device 462 operating in the increased access mode. For example, user interface components 468 may generate GUI 470, which displays “DEVICE UNLOCKED” to user 414. As described above, while operating in the increased access mode, additional functionality of companion device 462 may be accessible to user 414 that is not accessible to user 414 while companion device 462 is operating in the reduced access mode.
[0152] FIG. 5 is a flow chart illustrating an example operation of a computing device that automatically authenticates a user based on one or more physiological characteristics of the user, in accordance with one or more techniques of the present disclosure. The operation of FIG. 5 is described with respect to the components of computing device 202 of FIG. 2. Computing device 202 operating in a reduced access mode detects, via user interface components 232, a first user input from a first user 214 to unlock computing device 202 (572). Responsive to detecting the first user input from user 214, one or more sensors 204 of computing device 202 detect one or more physiological characteristics of first user 214 (574). Computing device 202 then determines, based on the one or more physiological characteristics of user 214, whether a user profile was previously created for first user 214 (576). In some examples, to determine whether a user profile was previously created for first user 214, computing device 202 may determine whether the one or more physiological characteristics of first user 214 match one or more physiological characteristics included in a plurality of user profiles stored in user profile data store 212. Responsive to determining that the user profile was not previously created for first user 214 (NO), computing device 202 creates the user profile for first user 214 (578). In some examples, computing device 202 may prompt first user 214, via user interface components 232, to confirm whether they are a new user of computing device 202 before creating the user profile for first user 214. The user profile for first user 214 may comprise the one or more physiological characteristics of first user 214, and is stored by computing device 202 in user profile data store 212.
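Steps 572 through 578 amount to a lookup-or-enroll branch over the stored profiles. A minimal sketch follows, in which the profile store, the similarity test, and every identifier are assumptions; the disclosure leaves the matching logic open.

```python
# Illustrative lookup-or-enroll sketch for steps 572-578. The profile
# store, the similarity test, and all names are assumptions; the
# disclosure does not fix how characteristics are compared.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class UserProfile:
    characteristics: dict[str, float]
    model: object | None = None  # per-user authentication model


profile_store: list[UserProfile] = []  # stands in for data store 212


def matches(a: dict[str, float], b: dict[str, float], tol: float = 0.1) -> bool:
    """Assumed matching rule: every shared characteristic within tol."""
    shared = a.keys() & b.keys()
    return bool(shared) and all(abs(a[k] - b[k]) <= tol for k in shared)


def lookup_or_enroll(observed: dict[str, float]) -> UserProfile:
    for profile in profile_store:
        if matches(profile.characteristics, observed):
            return profile  # step 576: profile previously created (YES)
    # Step 578: no match; create and store a new profile (a real device
    # would first prompt the user to confirm they are a new user).
    profile = UserProfile(characteristics=dict(observed))
    profile_store.append(profile)
    return profile


first = lookup_or_enroll({"base_heart_rate": 62.0, "skin_temp_c": 33.1})
again = lookup_or_enroll({"base_heart_rate": 62.0, "skin_temp_c": 33.1})
assert first is again  # a second unlock attempt finds the stored profile
```

The dictionary-of-readings representation is purely for illustration; a real system would likely compare learned embeddings rather than raw values.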
[0153] In some examples, a first set of physical characteristics includes the one or more physiological characteristics of first user 214. In some examples, computing device 202 determines, based on sensor data generated by sensors 204 of computing device 202, whether computing device 202 is being worn. Responsive to determining that computing device 202 is being worn, and responsive to computing device 202 creating a user profile for first user 214 or determining that the user profile was previously created for first user 214 (YES), machine learning module 218 applies a first machine learning model included in the user profile for first user 214 for automatically authenticating first user 214 based on the one or more physiological characteristics of first user 214 (580). In some examples, analysis module 208 may apply the first machine learning model to a second set of physical characteristics detected by sensors 204 to generate an output. In some examples, the output of the first machine learning model includes a confidence score. Resolution module 210 of computing device 202 may determine whether the confidence score satisfies a confidence score threshold. Responsive to resolution module 210 determining that the confidence score satisfies the confidence score threshold, computing device 202 may determine that first user 214 is included in the set of authenticated users.
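The authentication decision of step 580 then reduces to comparing the model's confidence score against a threshold. In the hedged sketch below, the model is stubbed out and the 0.9 threshold is an assumed value; neither the model family nor the threshold is specified by the disclosure.

```python
# Illustrative confidence-threshold check for step 580. The model stub,
# the 0.9 threshold, and all names are assumptions.
from __future__ import annotations

CONFIDENCE_THRESHOLD = 0.9  # assumed value; not specified in the disclosure


def model_confidence(observed: dict[str, float],
                     enrolled: dict[str, float]) -> float:
    """Stub for the per-user model: confidence decays with the mean
    relative deviation between observed and enrolled characteristics."""
    shared = observed.keys() & enrolled.keys()
    if not shared:
        return 0.0
    deviation = sum(abs(observed[k] - enrolled[k]) / max(abs(enrolled[k]), 1e-9)
                    for k in shared) / len(shared)
    return max(0.0, 1.0 - deviation)


def is_authenticated(observed: dict[str, float],
                     enrolled: dict[str, float]) -> bool:
    # Resolution step: the user is authenticated only when the score
    # satisfies the threshold.
    return model_confidence(observed, enrolled) >= CONFIDENCE_THRESHOLD


enrolled = {"base_heart_rate": 62.0, "skin_temp_c": 33.1}
assert is_authenticated({"base_heart_rate": 63.0, "skin_temp_c": 33.0}, enrolled)
assert not is_authenticated({"base_heart_rate": 95.0, "skin_temp_c": 36.5}, enrolled)
```

In practice the threshold would be tuned to trade false accepts against false rejects rather than fixed at an arbitrary value.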
[0154] Computing device 202 trains, using the one or more physiological characteristics of first user 214, the first machine learning model (582). For example, training module 220 may train the first machine learning model executed by machine learning module 218. Training the first machine learning model may further comprise generating, by computing device 202, and based on the one or more physiological characteristics of first user 214, a training biometric data set. The training biometric data set may include data based on physiological characteristics including one or more of base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, gait, or voice that are detected or determined by data processing module 216. The training biometric data set may then be stored in a data store, and the first machine learning model may be trained using a portion of the stored training biometric data set. In some examples, responsive to training the first machine learning model with a threshold amount of the stored training biometric data set, computing device 202 may use the first machine learning model for automatically authenticating first user 214.
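Step 582 can be read as accumulating a training biometric data set and placing the model in service only once a threshold amount of data has been stored. Everything in the sketch below, including the sample-count threshold and the toy centroid "model", is an assumption for illustration.

```python
# Illustrative training sketch for step 582: accumulate a biometric
# training set and mark the model usable only after a threshold amount
# of data has been stored. The centroid "model" and MIN_SAMPLES are
# assumptions, not the disclosed training procedure.
from __future__ import annotations

from statistics import fmean

MIN_SAMPLES = 20  # assumed threshold amount of stored training data


class CentroidModel:
    """Toy stand-in for the first machine learning model: stores the
    per-characteristic mean of the training biometric data set."""

    def __init__(self) -> None:
        self.samples: list[dict[str, float]] = []  # stored training set
        self.centroid: dict[str, float] = {}

    def add_sample(self, sample: dict[str, float]) -> None:
        self.samples.append(sample)
        # Retrain on a portion of the stored set (here, for simplicity,
        # all of it).
        keys = set().union(*self.samples)
        self.centroid = {k: fmean(s[k] for s in self.samples if k in s)
                         for k in keys}

    @property
    def ready(self) -> bool:
        # Only authenticate once the threshold amount has been reached.
        return len(self.samples) >= MIN_SAMPLES


model = CentroidModel()
for i in range(MIN_SAMPLES):
    model.add_sample({"base_heart_rate": 60.0 + (i % 5)})
assert model.ready and 60.0 <= model.centroid["base_heart_rate"] <= 64.0
```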
[0155] Responsive to determining that first user 214 is included in the set of authenticated users, computing device 202 may then automatically transition from operating in the reduced access mode to operating in an increased access mode, wherein, while operating in the increased access mode, additional functionality of computing device 202 is accessible to first user 214 that is not accessible to first user 214 while computing device 202 is operating in the reduced access mode.
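Finally, the access-mode transition itself can be thought of as a gate on which functionality is reachable. A minimal sketch with hypothetical feature names and mode strings:

```python
# Illustrative sketch of gating functionality by access mode. The
# feature names and mode strings are hypothetical.
from __future__ import annotations

REDUCED_FEATURES = {"clock", "emergency_call"}
INCREASED_FEATURES = REDUCED_FEATURES | {"messages", "payments", "health_data"}


class Wearable:
    def __init__(self) -> None:
        self.mode = "reduced_access"

    def on_user_authenticated(self) -> None:
        self.mode = "increased_access"  # automatic transition

    def available_features(self) -> set[str]:
        return (INCREASED_FEATURES if self.mode == "increased_access"
                else REDUCED_FEATURES)


device = Wearable()
assert "payments" not in device.available_features()
device.on_user_authenticated()
assert "payments" in device.available_features()
```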
[0156] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that may be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
[0157] By way of example, and not limitation, such computer-readable storage media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0158] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
[0159] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
[0160] It is to be recognized that, depending on the example, certain acts or events of any of the techniques described herein may be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.
[0161] In some examples, a computer-readable storage medium comprises a non-transitory medium. The term “non-transitory” indicates that the storage medium is not embodied in a carrier wave or a propagated signal. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in RAM or cache).
[0162] Example 1: A method includes detecting, by a wearable computing device operating in a reduced access mode, a first user input to unlock the wearable computing device; responsive to detecting the first user input, detecting, by one or more sensors of the wearable computing device, one or more physiological characteristics of a first user; determining, by the wearable computing device and based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user; responsive to determining that the user profile was not previously created for the first user, creating, by the wearable computing device, the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user; and training, by the wearable computing device, and using the one or more physiological characteristics of the first user, the first machine learning model.
[0163] Example 2: The method of example 1, wherein determining whether a user profile was previously created for the first user further comprises: determining, by the wearable computing device, whether the one or more physiological characteristics of the first user match one or more physiological characteristics included in a plurality of stored user profiles stored in a memory; responsive to determining that the one or more physiological characteristics of the first user do not match one or more physiological characteristics included in the plurality of stored user profiles, prompting, by the wearable computing device, the first user to confirm whether they are a new user of the wearable computing device; and responsive to the first user confirming they are a new user of the wearable computing device, creating, by the wearable computing device, the user profile for the first user, wherein the user profile for the first user comprises the one or more physiological characteristics of the first user, and wherein the user profile for the first user is stored by the wearable computing device in the memory.
[0164] Example 3: The method of any of examples 1 through 2, wherein training the first machine learning model further comprises: generating, by the wearable computing device, and based on the one or more physiological characteristics of the first user, a training biometric data set; and training the first machine learning model using a portion of the training biometric data set.
[0165] Example 4: The method of example 3, wherein responsive to training the first machine learning model with a threshold amount of the training biometric data set, the wearable computing device uses the first machine learning model for automatically authenticating the first user.
[0166] Example 5: The method of any of examples 1 through 4, wherein a first set of physical characteristics includes the one or more physiological characteristics of the first user, the method further comprising: determining, by the wearable computing device and based on sensor data generated by at least one of the one or more sensors of the wearable computing device, whether the wearable computing device is being worn; and responsive to determining that the wearable computing device is being worn: applying, by the wearable computing device, the first machine learning model to a second set of physical characteristics detected by the one or more sensors to generate an output; determining, by the wearable computing device and based on the output of the first machine learning model, whether the first user is included in a set of authenticated users; and responsive to determining that the first user is included in the set of authenticated users, automatically transitioning, by the wearable computing device, from operating in the reduced access mode to operating in an increased access mode, wherein, while operating in the increased access mode, additional functionality of the wearable computing device is accessible to the first user that is not accessible to the first user while the wearable computing device is operating in the reduced access mode.
[0167] Example 6: The method of example 5, wherein the output of the first machine learning model includes a confidence score, the method further comprising: determining, by the wearable computing device, whether the confidence score satisfies a confidence score threshold; and responsive to the wearable computing device determining that the confidence score satisfies the confidence score threshold, determining that the first user is included in the set of authenticated users.
[0168] Example 7: The method of example 5, the method further comprising: determining, by the wearable computing device, that a companion device from a plurality of companion devices operating in a reduced access mode is proximate to the wearable computing device; and responsive to determining that the companion device is proximate to the wearable computing device and that the first user is included in the set of authenticated users, sending, from the wearable computing device and to the companion device, authentication information, wherein the companion device automatically transitions from operating in the reduced access mode to operating in an increased access mode responsive to receiving the authentication information, and wherein, while operating in the increased access mode, additional functionality of the companion device is accessible to the first user that is not accessible to the first user while the companion device is operating in the reduced access mode.
[0169] Example 8: The method of any of examples 1 through 7, wherein the physiological characteristics include one or more of base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, gait, or voice.
[0170] Example 9: The method of any of examples 1 through 8, the method further comprising: detecting, by the wearable computing device operating in the reduced access mode, a second user input to unlock the wearable computing device; responsive to detecting the second user input, detecting, by the one or more sensors of the wearable computing device, one or more physiological characteristics of a second user; determining, by the wearable computing device and based on the one or more physiological characteristics of the second user, whether a user profile was previously created for the second user; responsive to determining that the user profile was not previously created for the second user, creating, by the wearable computing device, the user profile for the second user, wherein the user profile for the second user includes a second machine learning model for automatically authenticating the second user based on the one or more physiological characteristics of the second user; and training, by the wearable computing device and using the one or more physiological characteristics of the second user, the second machine learning model.
[0171] Example 10: A wearable computing device operating in a reduced access mode comprising: one or more processors; one or more sensors configured to, responsive to detecting a first user input to unlock the wearable computing device, detect one or more physiological characteristics of a first user; and one or more storage devices that store instructions that, when executed by the one or more processors, cause the one or more processors to: responsive to the one or more sensors detecting one or more physiological characteristics of the first user, determine, based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user; responsive to determining that the user profile was not previously created for the first user, create the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user; and train, using the one or more physiological characteristics of the first user, the first machine learning model.
[0172] Example 11. The wearable computing device of example 10, wherein to determine whether a user profile was previously created for the first user, the one or more processors are further configured to: determine whether the one or more physiological characteristics of the first user match one or more physiological characteristics included in a plurality of stored user profiles stored in a memory; responsive to determining that the one or more physiological characteristics of the first user do not match one or more physiological characteristics included in the plurality of stored user profiles, prompt the first user to confirm whether they are a new user of the wearable computing device; and responsive to the first user confirming they are a new user of the wearable computing device, create the user profile for the first user, wherein the user profile for the first user comprises the one or more physiological characteristics of the first user, and wherein the user profile for the first user is stored by the wearable computing device in the memory.

[0173] Example 12. The wearable computing device of any of examples 10 through 11, wherein to train the first machine learning model, the one or more processors are further configured to: generate, based on the one or more physiological characteristics of the first user, a training biometric data set; and train the first machine learning model using a portion of the training biometric data set.
[0174] Example 13. The wearable computing device of example 12, wherein responsive to training the first machine learning model with a threshold amount of the training biometric data set, the one or more processors are further configured to use the first machine learning model for automatically authenticating the first user.
[0175] Example 14. The wearable computing device of any of examples 10 through 13, wherein a first set of physical characteristics includes the one or more physiological characteristics of the first user, and wherein the one or more processors are further configured to: determine, based on sensor data generated by at least one of the one or more sensors of the wearable computing device, whether the wearable computing device is being worn; and responsive to determining that the wearable computing device is being worn: apply the first machine learning model to a second set of physical characteristics detected by the one or more sensors to generate an output; determine, based on the output of the first machine learning model, whether the first user is included in a set of authenticated users; and responsive to determining that the first user is included in the set of authenticated users, automatically transition from operating in the reduced access mode to operating in an increased access mode, wherein, while operating in the increased access mode, additional functionality of the wearable computing device is accessible to the first user that is not accessible to the first user while the wearable computing device is operating in the reduced access mode.
[0176] Example 15. The wearable computing device of example 14, wherein the output of the first machine learning model includes a confidence score, and wherein the one or more processors are further configured to: determine whether the confidence score satisfies a confidence score threshold; and responsive to determining that the confidence score satisfies the confidence score threshold, determine that the first user is included in the set of authenticated users.
[0177] Example 16. The wearable computing device of example 14, wherein the one or more processors are further configured to: determine that a companion device from a plurality of companion devices operating in a reduced access mode is proximate to the wearable computing device; and responsive to determining that the companion device is proximate to the wearable computing device and that the first user is included in the set of authenticated users, send, from the wearable computing device and to the companion device, authentication information, wherein the companion device automatically transitions from operating in the reduced access mode to operating in an increased access mode responsive to receiving the authentication information, and wherein, while operating in the increased access mode, additional functionality of the companion device is accessible to the first user that is not accessible to the first user while the companion device is operating in the reduced access mode.
[0178] Example 17. The wearable computing device of any of examples 10 through 16, wherein the physiological characteristics include one or more of base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, gait, or voice.
[0179] Example 18. The wearable computing device of any of examples 10 through 17, wherein the one or more sensors of the wearable computing device detect one or more physiological characteristics of a second user responsive to the wearable computing device detecting a second user input to unlock the wearable computing device, and wherein the one or more processors are further configured to: determine, based on the one or more physiological characteristics of the second user, whether a user profile was previously created for the second user; responsive to determining that the user profile was not previously created for the second user, create the user profile for the second user, wherein the user profile for the second user includes a second machine learning model for automatically authenticating the second user based on the one or more physiological characteristics of the second user; and train, using the one or more physiological characteristics of the second user, the second machine learning model.
[0180] Example 19. A non-transitory computer-readable storage medium encoded with instructions that, when executed by one or more processors, cause one or more processors to: responsive to one or more sensors detecting one or more physiological characteristics of a first user, determine, based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user; responsive to determining that the user profile was not previously created for the first user, create the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user; and train, using the one or more physiological characteristics of the first user, the first machine learning model.
[0181] Example 20. The non-transitory computer-readable storage medium of example 19, wherein to determine whether a user profile was previously created for the first user, the one or more processors are further configured to: determine whether the one or more physiological characteristics of the first user match one or more physiological characteristics included in a plurality of stored user profiles stored in a memory; responsive to determining that the one or more physiological characteristics of the first user do not match one or more physiological characteristics included in the plurality of stored user profiles, prompt the first user to confirm whether they are a new user; and responsive to the first user confirming they are a new user, create the user profile for the first user, wherein the user profile for the first user comprises the one or more physiological characteristics of the first user, and wherein the user profile for the first user is stored in the memory.
[0182] Example 21. The non-transitory computer-readable storage medium of any of examples 19 through 20, wherein to train the first machine learning model, the one or more processors are further configured to: generate, based on the one or more physiological characteristics of the first user, a training biometric data set; and train the first machine learning model using a portion of the training biometric data set.
[0183] Example 22. The non-transitory computer-readable storage medium of example 21, wherein responsive to training the first machine learning model with a threshold amount of the training biometric data set, the one or more processors are further configured to use the first machine learning model for automatically authenticating the first user.
[0184] Example 23. The non-transitory computer-readable storage medium of any of examples 19 through 22, wherein a first set of physical characteristics includes the one or more physiological characteristics of the first user, and wherein the one or more processors are further configured to: determine, based on sensor data generated by at least one of the one or more sensors of the wearable computing device, whether the wearable computing device is being worn; and responsive to determining that the wearable computing device is being worn: apply the first machine learning model to a second set of physical characteristics detected by the one or more sensors to generate an output; determine, based on the output of the first machine learning model, whether the first user is included in a set of authenticated users; and responsive to determining that the first user is included in the set of authenticated users, automatically transition from operating in the reduced access mode to operating in an increased access mode, wherein, while operating in the increased access mode, additional functionality of the wearable computing device is accessible to the first user that is not accessible to the first user while the wearable computing device is operating in the reduced access mode.
[0185] Example 24. The non-transitory computer-readable storage medium of example 23, wherein the output of the first machine learning model includes a confidence score, and wherein the one or more processors are further configured to: determine whether the confidence score satisfies a confidence score threshold; and responsive to determining that the confidence score satisfies the confidence score threshold, determine that the first user is included in the set of authenticated users.
[0186] Example 25. The non-transitory computer-readable storage medium of example 24, wherein the one or more processors are further configured to: determine that a companion device from a plurality of companion devices operating in a reduced access mode is proximate to the wearable computing device; and responsive to determining that the companion device is proximate to the wearable computing device and that the first user is included in the set of authenticated users, send, from the wearable computing device and to the companion device, authentication information, wherein the companion device automatically transitions from operating in the reduced access mode to operating in an increased access mode responsive to receiving the authentication information, and wherein, while operating in the increased access mode, additional functionality of the companion device is accessible to the first user that is not accessible to the first user while the companion device is operating in the reduced access mode.
[0187] Example 26. The non-transitory computer-readable storage medium of any of examples 19 through 25, wherein the physiological characteristics include one or more of base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, gait, or voice.

[0188] Example 27. The non-transitory computer-readable storage medium of any of examples 19 through 26, wherein the one or more sensors of the wearable computing device detect one or more physiological characteristics of a second user responsive to the wearable computing device detecting a second user input to unlock the wearable computing device, and wherein the one or more processors are further configured to: determine, based on the one or more physiological characteristics of the second user, whether a user profile was previously created for the second user; responsive to determining that the user profile was not previously created for the second user, create the user profile for the second user, wherein the user profile for the second user includes a second machine learning model for automatically authenticating the second user based on the one or more physiological characteristics of the second user; and train, using the one or more physiological characteristics of the second user, the second machine learning model.
[0189] Various examples have been described. These and other examples are within the scope of the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method comprising: detecting, by a wearable computing device operating in a reduced access mode, a first user input to unlock the wearable computing device; responsive to detecting the first user input, detecting, by one or more sensors of the wearable computing device, one or more physiological characteristics of a first user; determining, by the wearable computing device and based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user; responsive to determining that the user profile was not previously created for the first user, creating, by the wearable computing device, the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user; and training, by the wearable computing device, and using the one or more physiological characteristics of the first user, the first machine learning model.
2. The method of claim 1, wherein determining whether a user profile was previously created for the first user further comprises: determining, by the wearable computing device, whether the one or more physiological characteristics of the first user match one or more physiological characteristics included in a plurality of stored user profiles stored in a memory; responsive to determining that the one or more physiological characteristics of the first user do not match one or more physiological characteristics included in the plurality of stored user profiles, prompting, by the wearable computing device, the first user to confirm whether they are a new user of the wearable computing device; and responsive to the first user confirming they are a new user of the wearable computing device, creating, by the wearable computing device, the user profile for the first user, wherein the user profile for the first user comprises the one or more physiological characteristics of the first user, and wherein the user profile for the first user is stored by the wearable computing device in the memory.
3. The method of any of claims 1-2, wherein training the first machine learning model further comprises: generating, by the wearable computing device, and based on the one or more physiological characteristics of the first user, a training biometric data set; and training the first machine learning model using a portion of the training biometric data set.
4. The method of any of claims 1-3, wherein responsive to training the first machine learning model with a threshold amount of the training biometric data set, the wearable computing device uses the first machine learning model for automatically authenticating the first user.
5. The method of any of claims 1-4, wherein a first set of physical characteristics includes the one or more physiological characteristics of the first user, the method further comprising: determining, by the wearable computing device and based on sensor data generated by at least one of the one or more sensors of the wearable computing device, whether the wearable computing device is being worn; and responsive to determining that the wearable computing device is being worn: applying, by the wearable computing device, the first machine learning model to a second set of physical characteristics detected by the one or more sensors to generate an output; determining, by the wearable computing device and based on the output of the first machine learning model, whether the first user is included in a set of authenticated users; and responsive to determining that the first user is included in the set of authenticated users, automatically transitioning, by the wearable computing device, from operating in the reduced access mode to operating in an increased access mode, wherein, while operating in the increased access mode, additional functionality of the wearable computing device is accessible to the first user that is not accessible to the first user while the wearable computing device is operating in the reduced access mode.
6. The method of any of claims 1-5, wherein the output of the first machine learning model includes a confidence score, the method further comprising: determining, by the wearable computing device, whether the confidence score satisfies a confidence score threshold; and responsive to the wearable computing device determining that the confidence score satisfies the confidence score threshold, determining that the first user is included in the set of authenticated users.
7. The method of any of claims 1-6, further comprising: determining, by the wearable computing device, that a companion device from a plurality of companion devices operating in a reduced access mode is proximate to the wearable computing device; and responsive to determining that the companion device is proximate to the wearable computing device and that the first user is included in the set of authenticated users, sending, from the wearable computing device and to the companion device, authentication information, wherein the companion device automatically transitions from operating in the reduced access mode to operating in an increased access mode responsive to receiving the authentication information, and wherein, while operating in the increased access mode, additional functionality of the companion device is accessible to the first user that is not accessible to the first user while the companion device is operating in the reduced access mode.
8. The method of any of claims 1-7, wherein the physiological characteristics include one or more of base heart rate, skin color, blood pressure reading, skin temperature, electrocardiogram reading, gait, or voice.
9. The method of any of claims 1-8, further comprising: detecting, by the wearable computing device operating in the reduced access mode, a second user input to unlock the wearable computing device; responsive to detecting the second user input, detecting, by the one or more sensors of the wearable computing device, one or more physiological characteristics of a second user; determining, by the wearable computing device and based on the one or more physiological characteristics of the second user, whether a user profile was previously created for the second user; responsive to determining that the user profile was not previously created for the second user, creating, by the wearable computing device, the user profile for the second user, wherein the user profile for the second user includes a second machine learning model for automatically authenticating the second user based on the one or more physiological characteristics of the second user; and training, by the wearable computing device and using the one or more physiological characteristics of the second user, the second machine learning model.
10. A wearable computing device operating in a reduced access mode comprising: one or more processors; one or more sensors configured to, responsive to detecting a first user input to unlock the wearable computing device, detect one or more physiological characteristics of a first user; and one or more storage devices that store instructions that, when executed by the one or more processors, cause the one or more processors to: responsive to the one or more sensors detecting one or more physiological characteristics of the first user, determine, based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user; responsive to determining that the user profile was not previously created for the first user, create the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user; and train, using the one or more physiological characteristics of the first user, the first machine learning model.
11. The wearable computing device of claim 10, wherein to determine whether a user profile was previously created for the first user, the one or more processors are further configured to: determine whether the one or more physiological characteristics of the first user match one or more physiological characteristics included in a plurality of stored user profiles stored in a memory; responsive to determining that the one or more physiological characteristics of the first user do not match one or more physiological characteristics included in the plurality of stored user profiles, prompt the first user to confirm whether they are a new user of the wearable computing device; and responsive to the first user confirming they are a new user of the wearable computing device, create the user profile for the first user, wherein the user profile for the first user comprises the one or more physiological characteristics of the first user, and wherein the user profile for the first user is stored by the wearable computing device in the memory.
12. The wearable computing device of claim 11, comprising means for performing any of the methods of claims 3-9.
13. A non-transitory computer-readable storage medium encoded with instructions that, when executed by one or more processors, cause the one or more processors to: responsive to one or more sensors detecting one or more physiological characteristics of a first user, determine, based on the one or more physiological characteristics of the first user, whether a user profile was previously created for the first user; responsive to determining that the user profile was not previously created for the first user, create the user profile for the first user, wherein the user profile for the first user includes a first machine learning model for automatically authenticating the first user based on the one or more physiological characteristics of the first user; and train, using the one or more physiological characteristics of the first user, the first machine learning model.
14. The non-transitory computer-readable storage medium of claim 13, wherein to determine whether a user profile was previously created for the first user, the instructions are further configured to cause the one or more processors to: determine whether the one or more physiological characteristics of the first user match one or more physiological characteristics included in a plurality of stored user profiles stored in a memory; responsive to determining that the one or more physiological characteristics of the first user do not match one or more physiological characteristics included in the plurality of stored user profiles, prompt the first user to confirm whether they are a new user; and responsive to the first user confirming they are a new user, create the user profile for the first user, wherein the user profile for the first user comprises the one or more physiological characteristics of the first user, and wherein the user profile for the first user is stored in the memory.
15. The non-transitory computer-readable storage medium of claim 14, the instructions further configured to cause the one or more processors to perform any of the methods of claims 3-9.