
WO2024258782A1 - Systems and methods for selecting and providing media content to improve neurotransmitter levels - Google Patents

Info

Publication number
WO2024258782A1
WO2024258782A1 (PCT/US2024/033243)
Authority
WO
WIPO (PCT)
Prior art keywords
user
media content
reward
molecule
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/033243
Other languages
French (fr)
Inventor
Axel Bouchon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Matter Neuroscience Inc
Original Assignee
Matter Neuroscience Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matter Neuroscience Inc filed Critical Matter Neuroscience Inc
Publication of WO2024258782A1
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0631 Recommending goods or services

Definitions

  • This disclosure relates generally to systems and methods for selecting and providing personalized media content to a user to modulate the neurotransmitter activity of the user.
  • media services such as streaming services for music and television, may curate collections of media content that are associated with a particular mood or emotion (e.g., an energizing playlist for the gym including upbeat music, or a date night playlist including songs typically associated with romance).
  • these media collections are generally compiled based on presumed associations between the media content and certain emotions or by algorithms that use generalized information to categorize the content into moods, rather than user-specific information that correlates the emotional response of a particular user to a particular piece of media and certain personal, positive memories associated with the piece of media.
  • currently available media services and applications may not account for user-specific experiences with various media content that cause the media content to elicit a particular emotional state and/or neurotransmitter activity in a user.
  • This user-specific information may be provided as input into an application, such as an application for a mobile device, which compiles the media content and creates associations between the media content and a physiological state of a user at the time at which the user consumed the media content.
  • the user may then recall specific media content from the application, such as a song or playlist, by querying the application to present media associated with a particular emotional or physiological state of the user.
  • the application can enable users to address deficiencies in their current mood and to promote overall emotional and physiological wellbeing, such as by modulating the level of various neurotransmitters in the nervous system.
  • Such media content applications can be deployed consciously by a user as a highly personalized positive emotion booster.
  • These applications may also generate media content sua sponte (i.e., without user input) by predicting or detecting a user’s current emotional state and automatically presenting media content to meet the user’s needs.
  • a system for selecting and providing media content is provided.
  • the system comprises one or more processors configured to: receive first information indicating an association between a first reward molecule of a user and a first media content object; generate and store first data indicating an association between the first media content object and the first reward molecule; receive second information indicating a current level of the first reward molecule for the user; select, based at least in part on the current level of the first reward molecule and the first stored data, the first media content object; and provide the first media content object to the user.
  • receiving the first information comprises receiving an input from the user explicitly indicating the association between the first media content object and the first reward molecule.
  • receiving the first information comprises receiving an input from the user indicating an association between the first media content object and a location, time, date, event, individual, or group of individuals.
  • receiving the first information comprises receiving an input from the user indicating an association between the first media content object and a first emotional state of the user and determining, based at least in part on the first emotional state of the user, the reward molecule.
  • determining the reward molecule comprises applying the first emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
  • the first emotional state of the user comprises an emotional state from the group comprising: enthusiasm, sexual desire, recognition, nurturant/family love, contentment, friendship/attachment love, amusement, pleasure, and gratitude.
  • receiving the first information comprises receiving information from a first sensor indicating a first physiological state of the user; receiving information indicating that the user was exposed to the first media content object during the time at which the user experienced the first physiological state; and determining, based at least in part on the first physiological state of the user, the reward molecule.
  • determining the reward molecule comprises: providing the received information from the first sensor indicating the first physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; and receiving, from the machine-learning algorithm, output data comprising an indication of the reward molecule.
  • determining the reward molecule comprises: providing the received information from the first sensor indicating the first physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; receiving, from the machine-learning algorithm, output data comprising an indication of a first emotional state of the user; and determining, based at least in part on the first emotional state of the user, the reward molecule.
  • determining the reward molecule comprises applying the first emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
  • receiving the second information comprises: receiving an input from the user explicitly indicating the current level of the reward molecule.
  • receiving the second information comprises: receiving an input from the user indicating a second emotional state of the user; and determining, based at least in part on the second emotional state of the user, the current level of the reward molecule.
  • receiving the second information comprises receiving an input from the user indicating a location, time, date, event, individual, or group of individuals and determining, based at least in part on the input, the current level of the reward molecule.
  • receiving the second information comprises receiving the second information from a prediction model configured to predict the current level of the reward molecule for the user.
  • determining the current level of the reward molecule comprises applying the second emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
  • receiving the second information comprises: receiving information from a second sensor indicating a second physiological state of the user; and determining, based at least in part on the second physiological state of the user, the current level of the reward molecule.
  • determining the current level of the reward molecule comprises: providing the received information from the second sensor indicating the second physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; and receiving, from the machine-learning algorithm, output data comprising an indication of the current level of the reward molecule.
  • determining the current level of the reward molecule comprises: providing the received information from the second sensor indicating the second physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; receiving, from the machine-learning algorithm, output data comprising an indication of a second emotional state of the user; and determining, based at least in part on the second emotional state of the user, the current level of the reward molecule.
  • determining the current level of the reward molecule comprises applying the second emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
  • providing the first media content object comprises causing one or more speakers of the system to output audio content of the first media content object.
  • providing the first media content object comprises causing one or more displays of the system to display an interactive affordance to the user prompting the user to play audio content of the first media content object.
  • an identity of the reward molecule is selected from a group consisting of dopamine, serotonin, testosterone, oxytocin, cannabinoids, and opioids.
  • a method for selecting and providing media content is provided.
  • the method is performed by a system comprising one or more processors, and comprises: receiving first information indicating an association between a first reward molecule of a user and a first media content object; generating and storing first data indicating an association between the first media content object and the first reward molecule; receiving second information indicating a current level of the first reward molecule for the user; selecting, based at least in part on the current level of the first reward molecule and the first stored data, the first media content object; and providing the first media content object to the user.
  • a non-transitory computer-readable storage medium storing instructions for selecting and providing media content.
  • the instructions are configured to be executed by one or more processors of a system to cause the system to: receive first information indicating an association between a first reward molecule of a user and a first media content object; generate and store first data indicating an association between the first media content object and the first reward molecule; receive second information indicating a current level of the first reward molecule for the user; select, based at least in part on the current level of the first reward molecule and the first stored data, the first media content object; and provide the first media content object to the user.
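The receive/store/select/provide flow recited above can be sketched in a few lines of Python. This is an illustrative sketch only; all function names, data structures, and the deficiency threshold are hypothetical, not taken from the disclosure:

```python
# Illustrative sketch: store user-specific associations between media content
# objects and reward molecules, then select content associated with a molecule
# whose current level is deficient. All names here are hypothetical.

def store_association(store, media_id, reward_molecule):
    """Record that this media object is associated, for this user,
    with elevated levels of the given reward molecule."""
    store.setdefault(reward_molecule, []).append(media_id)

def select_media(store, current_levels, threshold=0.5):
    """Pick a media object associated with the most deficient molecule.

    current_levels maps each reward molecule to a normalized level in [0, 1];
    a molecule below `threshold` is treated as deficient.
    """
    deficient = {m: lvl for m, lvl in current_levels.items() if lvl < threshold}
    if not deficient:
        return None
    # Target the molecule with the lowest current level for which at least
    # one associated media content object has been stored.
    for molecule in sorted(deficient, key=deficient.get):
        if store.get(molecule):
            return store[molecule][0]
    return None

store = {}
store_association(store, "song-A", "dopamine")
store_association(store, "song-B", "serotonin")
print(select_media(store, {"dopamine": 0.8, "serotonin": 0.2}))  # → song-B
```

A real implementation would also weight candidates by the strength of the stored association and by context, but the claim language above only requires the select step to depend on the current level and the stored data.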
  • FIG. 1 illustrates an exemplary system for providing media content to a user based on the emotional state of the user, according to some embodiments of the present disclosure.
  • FIG. 2 illustrates an exemplary positive emotion to neurotransmitter (PE-NT) matrix, according to some examples of the present disclosure.
  • FIG. 3 illustrates an exemplary method for selecting and providing media content to a user, according to some examples of the present disclosure.
  • FIG. 4 illustrates an exemplary computing device, according to examples of the present disclosure.
  • Certain aspects of the present disclosure include process steps and instructions that may be described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present disclosure could be embodied in software, firmware, or hardware, and, when embodied in software, they could be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
  • the present disclosure also relates to a system and devices for performing the operations herein.
  • This system and/or devices may be specially constructed for the required purposes, or they may comprise general-purpose computers selectively activated or reconfigured by a computer program stored in the computer(s).
  • a computer program may be stored in a non-transitory, computer-readable storage medium such as, but not limited to, any type of disk, including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application-specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions and each coupled to a computer system bus.
  • the computing devices referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
  • “Real time” or “real-time,” as used interchangeably herein, generally refers to an event (e.g., an operation, a process, a method, a technique, a computation, a calculation, an analysis, a visualization, an optimization, etc.) that is performed using recently obtained (e.g., collected or received) data.
  • a real time event may be performed almost immediately or within a short enough time span, such as within at least 1 millisecond (ms), 5 ms, 0.01 seconds, 0.05 seconds, 0.1 seconds, 0.5 seconds, 1 second, 0.1 minute, 0.5 minutes, 1 minute, or more.
  • a real time event may be performed almost immediately or within a short enough time span, such as within at most 1 second, 0.5 seconds, 0.1 seconds, 0.05 seconds, 0.01 seconds, 5 ms, 1 ms, or less.
  • a system for selecting and presenting media content to a user to address a deficiency in reward molecules of the user, such as neurotransmitters of the nervous system or some other biomolecule known to affect a user’s mental and physical wellbeing.
  • the identity of a reward molecule is selected from a group consisting of dopamine, serotonin, testosterone, oxytocin, cannabinoids, and opioids.
  • the media content may be any form of media, such as auditory media (e.g., songs, podcasts, speeches, audio recordings, and/or compilations thereof), visual media (e.g., visual artwork or photographs), or mixed media (e.g., media that includes both auditory and visual stimuli).
  • FIG. 1 illustrates an exemplary system 100, according to examples of the disclosure.
  • the system 100 includes a user-controlled device 102, such as a phone or personal computer, and, optionally, a remote system 110.
  • the device 102 may be configured to provide a user interface 103 (e.g., a graphical user interface presented on a touchscreen display, a keyboard or keypad, a voice-recognition device, etc.), one or more applications 105 (e.g., a mobile device application operable on device 102), one or more processors 106, local data storage 107, and a network communication device 108.
  • the device 102 is a handheld electronic device such as a phone or a tablet, and the user can engage with an application 105 of the device to input information about media content and/or an emotional state, e.g., by using a user interface 103 of the device 102 to provide the information to an application 105 stored on the device 102.
  • the device 102 may include a wearable device such as a watch, glasses, a head-mounted device, or another device configured to be worn by a user of the system 100.
  • a wearable device may be configured to receive information about the emotional state or neurotransmitter activity of the user using one or more sensors 104 configured to measure physiological information about the user indicative of an emotional state or the activity of one or more neurotransmitters.
  • the wearable device may include one or more sensors to measure one or more parameters including, but not limited to: Cardiac Interbeat Interval (CBI or IBI), ms, as measured, for example, by (1) sensors based on electrical activity (such as electrocardiogram sensors using wet electrodes, dry electrodes, or capacitive electrodes), (2) sensors detecting arterial pulse using photoplethysmography (PPG), or sensors such as PhysioCam (PhyC), a non-contact system capable of measuring arterial pulse with sufficient precision to derive HRV during different challenges, or (3) sensors based on mechanical activity, such as ballistocardiogram (BCG) sensors (using, e.g., hydraulic sensors, EMFi film sensors, or accelerometers), radio frequency sensors, or seismocardiogram (SCG) sensors;
  • Cardiac Pre-Ejection Period (PEP), ms, as measured, for example, by one or more of the same or similar sensor types as described above with reference to IBI (optionally, with a preference for Forcecardiography and/or Seismocardiography).
  • PEP may be measured by simultaneously collecting both ECG, as described earlier, and impedance cardiography;
  • Skin Conductance Responses (SCRs), as measured, for example, by sensors detecting galvanic skin response such as Ag/AgCl, stainless steel, silver, brass, and gold electrodes, the Flexcomp Infiniti physiological monitoring and data acquisition unit, the Empatica E4 and Refa System, Microsoft Band 2, Health Sensor Platform, BITalino, Polar H6, the wearable Zephyr BioHarness 3, and/or Obimon EDA;
  • Respiratory Sinus Arrhythmia (RSA), ms², as measured, for example, by electrocardiogram sensors such as any one or more of those described above;
  • Mean Arterial Pressure (MAP), mmHg.
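Several of the parameters above are derived from the cardiac interbeat intervals; the disclosure mentions measuring IBI with sufficient precision to derive HRV. As a generic illustration (not a formula from the disclosure), a standard HRV summary statistic such as RMSSD can be computed from an IBI series:

```python
# RMSSD (root mean square of successive differences) is a widely used
# heart-rate-variability statistic computable from a series of cardiac
# interbeat intervals (IBI) in milliseconds. Generic illustration only.
import math

def rmssd(ibi_ms):
    """RMSSD over a list of interbeat intervals in milliseconds."""
    if len(ibi_ms) < 2:
        raise ValueError("need at least two intervals")
    # Successive differences between adjacent intervals.
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(round(rmssd([800, 810, 790, 805]), 2))  # → 15.55
```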
  • Measured parameters may be stored in local data storage 107 provided as a part of device 102 for local analysis by one or more processors 106 provided as a part of device 102. Additionally or alternatively, measured parameters may be transmitted via network communication device 108 to remote system 110, for example for remote storage, remote display, and/or remote data processing.
  • the device 102 is configured for communicating with a remote system 110 via a network communication device 108.
  • the remote system 110 may include a network communication device 118 configured to receive data from and/or send data to one or more devices, such as device 102.
  • remote system 110 may receive various information indicating an association between one or more neurotransmitters of the user and media content consumed by the user from the device 102 and/or communicate various information about the media content and/or association with neurotransmitters to the device 102.
  • the remote system may further include one or more processors 116 and data storage 117.
  • Receiving first input associating media (e.g., a song) with an emotional state
  • a user of the system 100 may use the device 102 to play various forms of media content throughout the day, such as songs from a streaming application or images from a photo application.
  • the user may also experience media content in the ambient environment (e.g., outside of the device 102), such as a song played at a concert or in a public space, a television series viewed on a television, or a piece of artwork viewed at a museum.
  • the device 102 may be configured to monitor the environment to detect media content in the ambient environment (e.g., songs played in an environment may be detected by microphones of device 102) and/or to detect media content via electronic monitoring of one or more other devices or systems (e.g., the device may monitor, via a network, and detect when media is played using another network-connected device).
  • the device 102 may be used to record various information about media content consumed by a user and their emotional state while consuming the media content on the application 105 using the user interface 103.
  • a user may record the form of the media content, the genre of media content, the artist or title of the media content and/or the viewing location of the media content, as well as the emotional state of the user, the intensity of the emotion experienced, the neurotransmitter activity of the user, and/or other physiological information about the user while the user consumed the media content.
  • the user may also input information about other persons involved in the memory or who were present while consuming the media content.
  • the user may upload the media content to the application 105, such as an image or a snippet of a song associated with the particular emotional state or the activity of one or more neurotransmitters.
  • the device 102 is configured to accept an input from a user, e.g., on a user interface 103, regarding the media content and/or emotional state experienced by the user.
  • the user interface 103 of the device 102 can include one or more user selectable buttons, a touchscreen display, a keypad, a voice-control device, or some other means for inputting information from a user.
  • the user interface 103 is configured to display various categories associated with emotional states and the user is prompted to categorize their emotional state based on the one or more predefined categories. For instance, in one or more examples, the user may be prompted to input information by selecting one or more categories representing various positive emotional states, such as: enthusiasm; sexual desire; recognition/pride; nurturant/family love; contentment; friendship love; amusement; pleasure; and gratitude.
  • a user of the device 102 may record the information about the media content and/or their associated emotional state in natural language. For example, a user may input information into application 105 by typing words or speaking into a user interface 103 of the device 102.
  • the device 102 may be configured to determine, based on the natural language input, one or more emotional states of the user.
  • the processor 106 may include a natural language processing module configured to determine, based on the natural language input, one or more categories representing various emotional states of the user. In some examples, the various emotional states of the user may be defined by the categories above.
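In its simplest form, the natural language processing module described above could be a keyword matcher over the predefined emotion categories. The sketch below is purely illustrative (a production module would more likely use a trained classifier, and the keyword lists are hypothetical):

```python
# Minimal sketch of a natural-language-to-emotion-category mapper.
# The category names follow the disclosure; the keyword lists and matching
# strategy are hypothetical illustrations only.

EMOTION_KEYWORDS = {
    "enthusiasm": {"excited", "pumped", "energized"},
    "contentment": {"calm", "peaceful", "content"},
    "gratitude": {"grateful", "thankful"},
    "amusement": {"funny", "laughed", "hilarious"},
}

def categorize(text):
    """Return the emotion categories whose keywords appear in the input."""
    words = set(text.lower().split())
    return sorted(cat for cat, kws in EMOTION_KEYWORDS.items() if words & kws)

print(categorize("I laughed so hard, feeling thankful for this song"))
# → ['amusement', 'gratitude']
```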
  • More advanced users may be capable of identifying, based on their own perceived emotional state, one or more neurotransmitters associated with the emotional state.
  • a user may input one or more neurotransmitters that they believe are elevated or active while consuming a particular media content object.
  • the user may forego entering an emotional state associated with the media content object.
  • the application 105 may prompt the user to input information about media content and their associated emotional state or neurotransmitter activity, e.g., by displaying a prompt or notification on a user interface 103 of the device 102.
  • the user interface 103 may display a prompt if the device determines that the user is consuming media content.
  • the user may choose to ignore the prompt and the information will not be used to generate data associating the user’s emotional state and/or neurotransmitter levels associated with the media content.
  • the application 105 may prompt the user to input information at a particular time or responsive to a particular event, such as prompting the user at a particular time of day or responsive to determining that the user is engaged in a particular activity or at a particular location.
  • the application 105 is configured to automatically record information about media content consumed by the user in real-time based on information about the media content presented by an application on the device 102.
  • the application 105 may be configured to record the title and artist of music played by a streaming application used by the user on the device 102 or an image presented to the user on a photography application on the device 102.
  • the application 105 may automatically record information about the media content based on information from one or more sensors configured for detecting various media content (e.g., microphones capable of detecting various songs played by the device 102 or in the ambient environment around the user).
  • the device 102 includes one or more physical and/or chemical sensors 104 for monitoring physiological responses indicative of autonomic nervous system activity of the wearer.
  • the device 102 may include one or more sensors 104 configured for determining various physiological parameters of the user that are indicative of a user’s mood or the level of a particular neurotransmitter of the user.
  • the sensor(s) 104 may be configured to automatically receive sensor data indicating an association between the media content currently presented by the device 102 and the neurotransmitter activity of the user.
  • the measured sensor data may be stored in local data storage 107 provided as a part of the device 102 for local analysis by one or more processors 106. Additionally or alternatively, one or more sensors may be configured to receive sensor data about the location of media consumption, the time of media consumption, or some other information. The measured sensor data may be transmitted via a network communication device 108 to a remote system 110, for example for remote storage, remote data processing, and/or remote display.
  • the device 102 may use one or more machine-learning algorithms to process information about the user’s consumption of media content and their resultant emotional state and/or neurotransmitter activity.
  • one or more sensors 104 may be configured to receive physiological data from the user before, during, and/or after consuming a media content object. The sensor data received from the sensors 104 may be used to update and refine information about the user’s emotional and physiological reaction to a media content object to improve future media content selections.
  • the device 102 may receive a plurality of information indicating associations between one or more media content objects and the emotional state or neurotransmitter activity of the user.
  • the user may use the application 105 to record one or more past experiences (e.g., memories or recent experiences) involving a media content object and the emotional state or neurotransmitter activity of the user onto the device 102 to develop a dataset including a plurality of associations between various media content and the emotional states of the user.
  • the user may also enter information about one or more concurrent experiences with media content objects (e.g., by entering information about their emotional state while consuming the media content concurrently or shortly after consuming the media content).
  • a user may input information about a number of media content objects and associated emotional states.
  • the number of media content objects and associated emotional states may be sufficient to include a full range of emotional states envisioned by the application 105.
  • the dataset may include at least one media content entry associated with each of the positive emotional states described above.
  • the user may enter a number of media content objects and associated emotional states sufficient to include the full range of neurotransmitters monitored by the system 100.
  • at least one media content entry can be associated with the activity of each of the neurotransmitters identified above.
  • a training or learning phase may request that the user inputs a particular number of media content objects and associated emotional states, such as at least 10, at least 20, at least 50, or at least 100 media content objects and associated emotional states.
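The completeness condition for such a training or learning phase (a minimum number of entries, with every emotion category covered) can be sketched as follows. The category names follow the disclosure; the function and data layout are hypothetical:

```python
# Sketch of the learning-phase completeness check described above: require a
# minimum number of entries and at least one entry per emotion category.
# Category names follow the disclosure; everything else is illustrative.

CATEGORIES = [
    "enthusiasm", "sexual desire", "recognition/pride", "nurturant/family love",
    "contentment", "friendship love", "amusement", "pleasure", "gratitude",
]

def training_complete(entries, minimum=10):
    """entries: list of (media_id, emotion_category) pairs."""
    covered = {emotion for _, emotion in entries}
    return len(entries) >= minimum and covered >= set(CATEGORIES)

base = [(f"media-{i}", cat) for i, cat in enumerate(CATEGORIES)]
print(training_complete(base))                            # → False (9 < 10)
print(training_complete(base + [("media-9", "pleasure")]))  # → True
```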
  • the system 100 can then determine one or more reward molecules associated with the emotional state and media content and store that data for later use by the application 105.
  • the system 100 may be configured to determine, based on one or more user inputs indicating an emotional state of the user associated with a media content object, one or more neurotransmitters associated with the emotional state and media content.
  • the application 105 may receive information from the user indicating that a particular song is associated with the emotional state of enthusiasm, and may determine, based on the emotional state of enthusiasm identified by the user, that elevated levels of dopamine are associated with that song. That song and the associated neurotransmitter activity may then be stored by the system 100 in local storage 107 on the device 102 or in remote storage 117 on remote system 110.
  • the determination of neurotransmitter activity associated with the user’s emotional states takes place at a processor 106 of the device 102.
  • the processor 106 may be configured to determine, based on a user input indicating a particular emotional state associated with a media content object, a level of one or more neurotransmitters associated with the emotional state and media content object.
  • the information provided as input by the user may be transmitted via a network communication device 108 to the remote system 110, and the determination of the neurotransmitter activity associated with the emotional state and media content object takes place at the remote system 110 (e.g., on a processor 116 of the remote system 110).
  • the determination may be made collaboratively at processors on both the device 102 and the remote system 110.
  • the system 100 (e.g., via a processor 106 of the device 102 or a remote processor 116 included in a remote system 110) is configured to calculate and/or estimate the level of the neurotransmitter activity (such as levels of dopamine, testosterone, serotonin, oxytocin, cannabinoids, and opioids described above) based on the emotional state identified by the user and/or the intensity of the emotional state.
  • the system 100 may use one or more machine-learning algorithms to associate the emotional state of the user with the activity or level of one or more neurotransmitters.
  • the machine-learning algorithm may be based on brain scans, e.g., fMRI brain scans, of the user.
  • the system 100 may utilize a positive emotion to neurotransmitter (PE-NT) matrix that can translate the emotional state (and, optionally, the intensity of the emotion) into an amount of each of the neurotransmitters associated with the emotional state.
  • FIG. 2 illustrates an exemplary PE-NT matrix 202 that can be used by an application to calculate the amount or activity of one or more neurotransmitters associated with a particular emotional state experienced by the user.
  • the rows of the PE-NT matrix 202 can represent the positive emotion categories such as enthusiasm, sexual desire, pride/recognition, nurturant love, contentment, amusement, pleasure, and gratitude, which correspond to the emotional states input by a user and/or identified by the system.
  • the columns of the PE-NT matrix 202 can represent the neurotransmitters (dopamine, testosterone, serotonin, oxytocin, cannabinoids, and opioids) associated with the positive emotions.
  • a “1” in the matrix can indicate that a particular neurotransmitter is associated with that particular emotion.
  • a “0” in the matrix can indicate that a particular neurotransmitter is not associated with that particular emotion.
  • the PE-NT matrix 202 can be used to calculate the amount of a neurotransmitter associated with a media content object.
  • the calculation can include a positive emotions (“PE”) ratings column that shows the emotional state(s) provided by the user with respect to a particular media content object.
  • the calculation may also include an indicator of the intensity of the emotional state experienced by the user. For instance, in one example, the user may have indicated that their enthusiasm while consuming a particular media content object is mild (indicating that it is lower than average but still present) and that PE rating can be quantified as a 3.
  • the user may rate their contentment while consuming the media content object as a 5, which is average (e.g., on a scale of 0-10).
  • the calculation can multiply the PE rating by the numbers in the PE-NT matrix to generate a number associated with the activity of a particular neurotransmitter.
  • enthusiasm can be associated with the release of dopamine.
  • the PE rating for enthusiasm (provided by the user as associated with a particular media content object) is 3. That value is multiplied by 1 under the dopamine column to arrive at a value of 3.
  • the dopamine level associated with that enthusiasm for the media content object is quantified at 3.
  • the remaining columns are left at 0 because those neurotransmitters are not associated with enthusiasm.
  • the identified emotional states may correspond to the activity of two or more neurotransmitters, and the PE-NT matrix values of each of the neurotransmitters can be multiplied by the PE rating to arrive at a value for each neurotransmitter. For instance, if contentment was rated a 5 by the user and is associated with dopamine, oxytocin, and cannabinoids, each of those neurotransmitters can be multiplied by 5 (multiplying 5x1) to determine the level of neurotransmitter activity corresponding to that positive emotion. Once the calculation is made for each of the emotional states experienced by the user for each neurotransmitter, the calculation can add up the totals for each neurotransmitter and associate the neurotransmitter totals with the media content object. Using a PE-NT matrix to identify and/or quantify neurotransmitters based on emotional states is described in greater detail in U.S. Patent Application No. 17/389,023, the contents of which are incorporated herein by reference in its entirety.
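The PE-NT calculation described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function name is an assumption, only the two matrix rows used in the worked examples above (enthusiasm and contentment) are shown, and a full implementation would include a row for each positive emotion category in FIG. 2.

```python
# Illustrative sketch of the PE-NT calculation described above.
# Only two emotion rows are shown; a complete matrix (see FIG. 2)
# would include a row for each positive emotion category.
NEUROTRANSMITTERS = ["dopamine", "testosterone", "serotonin",
                     "oxytocin", "cannabinoids", "opioids"]

PE_NT_MATRIX = {
    # A 1 marks a neurotransmitter associated with the emotion.
    "enthusiasm":  [1, 0, 0, 0, 0, 0],  # enthusiasm -> dopamine
    "contentment": [1, 0, 0, 1, 1, 0],  # -> dopamine, oxytocin, cannabinoids
}

def neurotransmitter_totals(pe_ratings):
    """Multiply each PE rating by its matrix row and sum per column."""
    totals = dict.fromkeys(NEUROTRANSMITTERS, 0)
    for emotion, rating in pe_ratings.items():
        for nt, flag in zip(NEUROTRANSMITTERS, PE_NT_MATRIX[emotion]):
            totals[nt] += rating * flag
    return totals

# The worked example above: enthusiasm rated 3, contentment rated 5.
totals = neurotransmitter_totals({"enthusiasm": 3, "contentment": 5})
# totals["dopamine"] is 8 (3 + 5); oxytocin and cannabinoids are each 5.
```

The per-neurotransmitter totals returned here are the values that would then be stored with the media content object.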
  • the system 100 is further configured to store the data indicating the association between the various media content objects and the user’s neurotransmitter activity.
  • the media content objects and associated neurotransmitter data can be stored in a database that is accessible to the application 105.
  • the database may be included in local data storage 107 on the device 102 and/or in the storage 117 of a remote system 110 in communication with the device 102.
  • media content objects may be stored in the local data storage 107 or remote data storage 117 and associated with the information about the user’s neurotransmitter activity.
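As a concrete sketch of such a database, the association between a media content object and the user's neurotransmitter totals could be kept in a small relational store. The table name and schema below are hypothetical; local storage 107 or remote storage 117 could equally be a file-backed or cloud database.

```python
import sqlite3

# In-memory store standing in for local storage 107 or remote storage 117;
# the table name and schema are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE media_reward (
    media_id TEXT,
    neurotransmitter TEXT,
    level REAL,
    PRIMARY KEY (media_id, neurotransmitter))""")

def store_association(media_id, nt_totals):
    """Persist one media object's per-neurotransmitter totals."""
    conn.executemany(
        "INSERT OR REPLACE INTO media_reward VALUES (?, ?, ?)",
        [(media_id, nt, level) for nt, level in nt_totals.items()])
    conn.commit()

# Store the totals computed for a song, then query them back.
store_association("song:123", {"dopamine": 8, "oxytocin": 5})
rows = dict(conn.execute(
    "SELECT neurotransmitter, level FROM media_reward WHERE media_id = ?",
    ("song:123",)).fetchall())
```

The application 105 can then look up a media object's stored neurotransmitter profile when selecting content for the user.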
  • a user may then query the application 105 to create a personalized media content output by inputting additional information about their current state. For instance, a user may input information into the device 102 about their current emotional state to trigger the application 105 to select and display media content to address that emotional state.
  • the current emotional state entered by the user may be associated with a particular deficit in neurotransmitter activity or a reduced level of one or more neurotransmitters, and the media content presented by the application may be selected to resolve the deficiency or otherwise boost the activity of certain neurotransmitters.
  • a user could enter information associated with any of the nine positive emotions or the six neurotransmitters mentioned above.
  • the user may enter a query that includes information about an event or location, such as the event or location at which media content was originally consumed, or a time or date associated with the media content.
  • a user may also enter information about a group of individuals that were present when the media was consumed, or a group of people currently present so the media content output can be tailored to the group’s emotions, experiences, and memories.
  • a user may input information into the application 105 using a user interface 103 of the device 102.
  • a user may input a current emotional state into the application 105 by selecting various prompts on a user interface 103 of the device 102 (e.g., prompts representing predefined categories associated with various emotional states), or the user may input a current emotional state as a natural language query.
  • Natural language queries may be translated into the predefined categories using a natural language module in the processor 106 or remote processor 116.
  • a user may input a natural language query “I am not motivated to exercise” and the application 105 may be configured to determine, based on the natural language input, the current emotional state of the user and/or an associated neurotransmitter deficiency (e.g., by determining the user is lacking enthusiasm and/or needs dopamine).
  • a user may also input additional information as natural language queries, such as a location, time, or event associated with the music, an identity of one or more individuals or listeners, or some other information (e.g., “play me music for a family reunion with relatives X, Y, and Z,” “I am going to a HIIT workout with Justine, please output an energizing 30-minute playlist of our favorite club music,” “create a playlist of ’80s music to increase my dopamine levels”).
  • natural language queries may be processed by a natural language module to determine various aspects about the desired media content and to output content on the basis of the information contained in the query.
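A minimal stand-in for such a natural language module is a keyword lookup that maps query terms to the predefined emotion categories. A production module would use a trained language model and handle negation; the toy version below, with hypothetical names and keyword lists, only illustrates the translation step.

```python
# Toy keyword lookup standing in for the natural-language module.
# A real module would use a trained language model. Note that for a
# query like "I am not motivated", the emotion it mentions (enthusiasm)
# is exactly the one the system should target, so a plain keyword
# match still points at the right category in this example.
EMOTION_KEYWORDS = {
    "enthusiasm": ("motivated", "energized", "excited"),
    "contentment": ("calm", "relaxed", "at peace"),
}

def infer_target_emotion(query):
    """Map a free-text query to a predefined emotion category."""
    q = query.lower()
    for emotion, words in EMOTION_KEYWORDS.items():
        if any(word in q for word in words):
            return emotion
    return None

emotion = infer_target_emotion("I am not motivated to exercise")
# "motivated" matches, so enthusiasm (and, via the PE-NT matrix,
# dopamine) becomes the target.
```

The returned category can then be fed into the same PE-NT translation used for explicit emotional-state inputs.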
  • more advanced users of the system 100 may input a current emotional state by explicitly indicating one or more neurotransmitters that the user believes is in a deficit. For instance, a user may select “dopamine” and “oxytocin” on a user interface 103 of the device 102 if the user believes or has identified that their current emotional state is associated with reduced levels of dopamine and oxytocin.
  • the device 102 includes one or more sensors 104 for measuring physiological parameters indicative of the emotional state of the user.
  • receiving information about the user’s current emotional state may include receiving data from one or more sensors 104 of the device 102.
  • such a device may use the information received from the sensors to determine and store information about the user’s response to media content and anticipated need for media content.
  • a sensor 104 could receive information about the location of the user and use the user’s location to predict a user’s emotional state or otherwise determine desired media content pertaining to that location.
  • the system 100 determines, based on the information received about the user’s current emotional state, a deficiency in one or more neurotransmitters associated with that emotional state.
  • the neurotransmitter deficiency may be determined, for instance, by correlating the reported emotional state of the user with the activity of one or more neurotransmitters using a PE-NT matrix or some other data providing correlations between a user’s current emotional state and neurotransmitter activity.
  • the neurotransmitter deficiency may be determined using a machine-learning algorithm trained on brain scans of the user.
  • the system 100 is configured to predict the current emotional state of the user. For instance, the system 100 may predict, based on information regarding the past emotional states of the user (e.g., past information input by the user regarding their emotional state), one or more predicted current emotional states of the user. The predicted current emotional state of the user may then be used to estimate a current deficit of one or more neurotransmitters. Similarly, the system 100 may determine, based on information regarding the past neurotransmitter activity of the user, one or more neurotransmitters that are predicted to be in a current deficit.
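One simple form such a prediction could take is averaging the user's recent per-neurotransmitter levels and flagging any that fall below a threshold. The threshold value and the averaging rule below are illustrative assumptions, not a required implementation.

```python
def predict_deficits(history, threshold=4.0):
    """Flag neurotransmitters whose average recent level is below
    the threshold as predicted current deficits."""
    sums, counts = {}, {}
    for entry in history:
        for nt, level in entry.items():
            sums[nt] = sums.get(nt, 0) + level
            counts[nt] = counts.get(nt, 0) + 1
    return [nt for nt in sums if sums[nt] / counts[nt] < threshold]

# Two recent sessions: dopamine averages 2.5, oxytocin averages 6.5,
# so only dopamine is flagged as a predicted deficit.
history = [
    {"dopamine": 3, "oxytocin": 6},
    {"dopamine": 2, "oxytocin": 7},
]
deficits = predict_deficits(history)
```

A richer model could weight recent sessions more heavily or condition on time of day and location.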
  • the system 100 proceeds to select media content to address the user’s need for one or more neurotransmitters.
  • the selection of media content objects may be based on the previous association between the media content objects and a level of neurotransmitter activity in the user, determined as described above. For instance, the selection may be based on information associating the media content with any one or more of the nine positive emotions or six neurotransmitters described above with respect to a user’s emotional state.
  • the media content objects may also be selected based on additional information, such as the time, location, or event at which the media was originally consumed (or the current time, location, or event of the user), the individuals who participated in the memory or the individuals currently present, or some other information input by the user. For instance, media content could be selected to increase dopamine activity in the user, and then could be further narrowed by selecting media content associated with a particular place, time, event, or group of individuals.
  • the media content may be selected from the database created by the user and stored in the system (e.g., in local data storage 107 or remote data storage 117), which includes one or more media content objects uploaded by the user and associated with one or more emotional states, neurotransmitter levels, and other information.
  • the media content may be selected from a larger library of media content, such as a library of a music streaming application or a database of music, art, or other media content. In such cases, the media selections from the library may be limited to only the content that has been previously associated by the user with one or more emotional states on the application 105.
  • the application 105 may be configured to provide the user with suggestions for media content for increasing neurotransmitter levels that may be reduced in the user’s current emotional state, or for boosting one or more neurotransmitters that are in a healthy range. For instance, if the system 100 determines that the user is experiencing an emotional state associated with low levels of dopamine or oxytocin (or any other neurotransmitter), the system may select media content that is associated with elevated levels of dopamine or oxytocin (or any other neurotransmitter) and suggest that media content in a prompt to the user.
  • the system 100 could determine that the user is at the gym, and may select media that is associated with elevated levels of dopamine and other neurotransmitters associated with enthusiasm and energy (irrespective of any current deficiency in dopamine).
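One way to realize this selection step is to score each stored media content object by its recorded levels of the deficient neurotransmitters and return the top matches. The scoring rule and data layout below are illustrative assumptions, not the system's required implementation.

```python
def select_media(library, deficient_nts, top_n=3):
    """Rank stored media by their recorded levels of the deficient
    neurotransmitters and return the best matches."""
    def score(entry):
        return sum(entry["levels"].get(nt, 0) for nt in deficient_nts)
    return sorted(library, key=score, reverse=True)[:top_n]

library = [
    {"media_id": "song:A", "levels": {"dopamine": 8, "oxytocin": 5}},
    {"media_id": "song:B", "levels": {"serotonin": 6}},
    {"media_id": "song:C", "levels": {"dopamine": 3}},
]

# A reported dopamine deficit ranks song:A (level 8) above song:C (3).
picks = select_media(library, ["dopamine"], top_n=2)
```

Additional criteria such as location, event, or the individuals present could be applied as further filters on the ranked results.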
  • the application 105 may present the selected media content as suggestions on a user interface 103 of the device 102. For instance, the application 105 may present one or more songs, images, or other media content objects for a user on a user interface 103 of the device, and the user may select one or more of the media content objects by interacting with the user interface 103.
  • the application may provide a user with one or more graphical user interfaces or other visual cues configured to allow the user to perceive deficits in neurotransmitter activity that can be used to select or suggest particular content to the user.
  • the user may be presented with one or more prompts on the user interface 103 that allow the user to view determined or predicted neurotransmitter levels and respond by choosing which of the selected media content to present via the application.
  • an exemplary user interface 103 may be configured to provide the user with suggestions for media content for increasing neurotransmitter levels associated with a particular emotional state. The user may then proceed to select one or more media content objects on the user interface 103 to initiate the provision of media content on the device 102.
  • the system 100 can be configured to provide the media content to the user, for instance, by presenting the media content on the device 102.
  • the media content may be presented to the user responsive to a user input (e.g., a touch, a button press, or a click) indicating that the user has selected the media content.
  • the system 100 is configured to provide media content (e.g., visual media content) by causing a display of the system 100 to present visual content of the first media content object.
  • the system 100 is configured to provide media content (e.g., auditory media content) by causing one or more speakers of the system 100 to output audio content of the first media content object.
  • FIG. 3 illustrates an exemplary method 300 for selecting and presenting media content to a user based on an identified neurotransmitter deficiency in the user.
  • Method 300 may be performed by a system for providing media content to a user, such as system 100 described with respect to FIG. 1.
  • Processor 106 of the system 100 of FIG. 1 may be configured to execute instructions to perform various steps of the method 300 described below in reference to FIG. 3.
  • the method includes receiving first information indicating an association between a first reward molecule of a user and a first media content object.
  • the reward molecule may include a neurotransmitter in the user’s nervous system or some other biomolecule associated with the health and wellbeing of the user.
  • the media content object can include auditory media, such as a song or playlist, and/or visual media, such as one or more images or photographs.
  • receiving the first information could include receiving an input from a user indicating an association between the reward molecule and a media content object.
  • the information indicating an association between the reward molecule and the media object may include information about the emotional state of a user.
  • the information may include information about an emotional state experienced by the user when presented with the media content object or the activity of one or more reward molecules during the consumption of the media content.
  • the emotional state may include one or more of the following: enthusiasm, sexual desire, recognition, nurturant/family love, contentment, friendship/attachment love, amusement, pleasure, and gratitude.
  • the information may further include information about the intensity of the emotional state experienced by the user.
  • the activity of one or more reward molecules can be determined based on the emotional state of the user. For instance, in some examples, receiving the first information includes receiving an input from the user indicating an association between the first media content object and a first emotional state of the user and determining, based at least in part on the emotional state of the user, one or more neurotransmitters associated with the emotional state. In some examples, determining the neurotransmitters includes applying the first emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix, such as the PE-NT matrix shown in FIG. 2.
  • the neurotransmitter activity of a user may be determined based on information from a brain scan of the user. For instance, in some examples, determining one or more neurotransmitters associated with the user’s emotional state includes using a machine-learning algorithm trained using brain scan data from the user and receiving, from the machine-learning algorithm, output data comprising an indication of a first emotional state of the user and determining, based at least in part on the first emotional state of the user, the reward molecule.
  • the information provided as input by a user could include explicit information about one or more neurotransmitters believed to be associated with the media content.
  • receiving the first information may include receiving an input from the user explicitly indicating the association between the first media content object and the first reward molecule.
  • the information provided as input by a user includes a string of words or natural language indicating an emotional state of the user.
  • the information includes a categorical representation of the user’s emotional state based on one or more predefined categories corresponding to various emotional states, such as the positive emotions described above.
  • the information may include physiological information about the user that is received from one or more sensors configured for sensing information about the user’s emotional state.
  • the user may input the information via an application, such as an application for a mobile device or some other computing device.
  • the information is received when the user inputs the information using a user interface of a device, such as by entering information about their media consumption and/or emotional state via a keyboard or touchscreen of the mobile device.
  • receiving the information may include receiving the information from one or more sensors incorporated in a device, such as a wearable device configured to be worn on or near the user’s wrist, head, face, abdomen, or some other part of the user’s body.
  • a user may input the information concurrently with media content consumption (e.g., by recording their emotions in real-time while listening to a song or viewing a photograph). Additionally or alternatively, a user may input information about past associations between media content and the user’s emotional state while consuming the content (e.g., by recording past or recent memories about the media content). Receiving multiple pieces of information may enable the method to compile a database of media content which the method can associate with various emotional states of the user and physiological states of the user.
  • the information may be received by prompting the user to input the information.
  • the method could include prompting the user to input information about media consumption responsive to a determination that the user is consuming media (e.g., responsive to determining that a song is playing on a music streaming app on a device, or responsive to determining that music is playing in the ambient environment surrounding the user).
  • a user may be prompted to input information responsive to determining that the user is in a particular location, such as at home, at work, or at a location where the user is known to consume media content.
  • the user may be prompted to input media at a particular time of the day, week, month, or year, and the method could include prompting the user at that particular time.
  • certain information may be received automatically (e.g., without active user involvement) by a system performing the method.
  • information about the media content object may be automatically received by communicating with an application on a mobile device (e.g., a music or television streaming service) and determining that the user is consuming particular media content on the application (e.g., listening to a song or watching a movie).
  • Information about an emotional state of the user may also be received automatically, such as by communicating with one or more sensors for receiving information about the physiological state about a user that is indicative of their emotional state.
  • receiving the first information may include receiving information from a sensor indicating a first physiological state of the user, receiving information indicating that the user was exposed to the media content object during the time at which the user experienced the first physiological state; and determining, based at least in part on the first physiological state of the user, one or more neurotransmitters associated with the emotional state of the user.
  • the physiological state of the user is processed by a machine-learning algorithm trained using brain scan data for the user, and the method 300 includes receiving, from the machine-learning algorithm, output data comprising an indication of one or more neurotransmitters associated with the emotional state.
  • the method includes generating and storing first data indicating an association between the first media content object and the first reward molecule for the user.
  • the data may include the identity of one or more neurotransmitters whose activity is modulated by the media content object, the expected level of one or more neurotransmitters, the difference in neurotransmitter activity effectuated by the media content, or some other data about the association between the media content and a reward molecule that has a physiological effect on the user.
  • information about an emotional state of a user during consumption of media content may be used to determine the level or activity of one or more neurotransmitters or other reward molecules associated with the media content.
  • generating data indicating an association between a media content object and a reward molecule could include calculating expected or predicted neurotransmitter activity using a PE-NT matrix.
  • a PE-NT matrix may enable the method to associate an emotional state of the user with one or more physiological effects on the user, such as the modulation of neurotransmitter activity.
  • the method may use information from, e.g., fMRI, to determine the association between various media content, emotional states, and the activity of one or more neurotransmitters of the user.
  • the method may use a machine-learning algorithm trained on brain scans of the user to determine the one or more neurotransmitters.
  • storing the data may include storing the data in storage on a device (e.g., a mobile device including an application for inputting information from a user) or in a remote system including remote storage (e.g., a remote computing system or a cloud in communication with the device).
  • the data may be stored in a database.
  • various media content may be stored in the database and information about the media content’s association with various emotional states or neurotransmitter levels of the user may also be stored in the database.
  • the method includes receiving second information indicating a current level of the reward molecule for the user.
  • the second information could include information about the user’s current emotional state or other physiological information about the user associated with a reward molecule deficiency, such as a deficiency of one or more neurotransmitters.
  • the second information could include an indication that the user is currently in a negative emotional state, or that the user is experiencing low or non-optimal levels of particular neurotransmitters.
  • the user may input information using a user interface of a device (e.g., a user interface associated with an application for collecting and tracking information about the user’s media consumption and physiological health).
  • receiving the second information includes receiving an input from the user including information about the user’s current emotional state.
  • the method may include determining, based at least in part on the emotional state of the user, the current deficiency of one or more neurotransmitters.
  • determining the current level of one or more neurotransmitters includes applying the emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
  • the user may input information explicitly indicating one or more neurotransmitters that are in a deficiency in the user’s current emotional state.
  • receiving the second information includes receiving a user input explicitly indicating the current deficiency of one or more neurotransmitters.
  • information from a user may be received automatically by way of one or more sensors configured to measure physiological information about the user.
  • receiving the second information may include receiving information from a sensor indicating a second physiological state of the user and determining, based at least in part on the second physiological state of the user, the current deficiency of one or more neurotransmitters.
  • the neurotransmitter activity of a user may be determined based on information from a brain scan of the user.
  • determining a current level of one or more neurotransmitters includes providing the received information from a sensor indicating the second physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; and receiving, from the machine-learning algorithm, output data comprising an indication of the current level of the reward molecule.
  • receiving information indicating a current deficiency of one or more neurotransmitters could include determining the information based on previously reported emotional states of the user. For instance, the method may include analyzing information about the user’s emotional state over a period of time and predicting a deficiency of reward molecules in the user. For instance, the method could include determining, based on past user inputs indicative of a user’s emotional state, a predicted deficiency of reward molecules in the user.
  • the method 300 includes selecting, based at least in part on the current level of the reward molecule and the first stored data, the first media content object.
  • the method 300 includes selecting a number of media content objects based on a current deficiency in reward molecules, where each of the media content objects or the media content in combination may address the current deficiency for the reward molecules.
  • the method 300 includes selecting media content to boost a level of one or more reward molecules.
  • the method 300 could include selecting a series of songs for a user formatted as a playlist.
  • the media content may be selected by determining that the media content is likely to remedy the current deficiency in reward molecules or otherwise modulate the activity or level of the user’s reward molecules. For instance, the method could include selecting media content that has been previously associated with elevated levels of neurotransmitters that have been identified as deficient in the user’s current emotional state. In some examples, the media content may be selected based on one or more additional criteria, such as information about the media content (e.g., a genre of the media content or availability of the media content), information about the user (e.g., the user’s location or a preference input by the user), or information about one or more additional reward molecules (e.g., the level of one or more additional neurotransmitters of the user).
  • the media content may be selected from a collection of media content that has been previously associated with various emotional states of the user.
  • the media content may be selected from a database in which media content has been stored by a user (e.g., a database including past media content uploaded to an application for tracking the user’s media content consumption and emotional states associated with the media content).
  • the media content may be selected from a database compiled by a different application, such as a music streaming application or another media application.
  • the method further includes suggesting the selected media content to the user.
  • the selected media content may be displayed to a user on a user interface of a mobile device. The user may then accept or reject the selected media content, e.g., by using a user interface of the device.
  • the method 300 includes providing the first media content object to the user.
  • Providing the media content object to the user may include initiating the media content on a device associated with a system for performing the method 300.
  • providing the media content could include displaying visual media content on a display of a device, or causing speakers of a device to output audio content of an auditory media content object.
  • the method 300 includes providing a plurality of media content objects to the user (e.g., a plurality of songs formatted as a playlist).
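The steps of method 300 can be tied together in a short end-to-end sketch: record an association (the first two steps), take a reported deficit (the third), then select and provide content (the last two). All names, the in-memory structures, and the scoring rule are illustrative assumptions.

```python
# End-to-end sketch of method 300 with in-memory structures.
database = {}  # media_id -> {neurotransmitter: level}

def record_association(media_id, nt_levels):
    """Steps one and two: receive and store the first information/data."""
    database[media_id] = dict(nt_levels)

def select_for_deficit(deficient_nts):
    """Step four: pick the stored object best matching the deficit."""
    return max(database, key=lambda m: sum(
        database[m].get(nt, 0) for nt in deficient_nts))

record_association("song:sunrise", {"dopamine": 8, "oxytocin": 5})
record_association("song:rainy", {"serotonin": 6})

choice = select_for_deficit(["dopamine"])  # step three: reported deficit
print(f"Now playing: {choice}")            # step five: provide the content
```

In a deployed system the provision step would instead drive the device's display or speakers, as described above.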
  • FIG. 4 illustrates an example of a computing device in accordance with one embodiment.
  • Device 400 can be a host computer connected to a network.
  • Device 400 can be a client computer or a server.
  • device 400 can be any suitable type of microprocessor-based device, such as a personal computer, workstation, server, or handheld computing device (portable electronic device) such as a phone or tablet.
  • the device 400 can include, for example, one or more of processor 410, input device 420, output device 430, storage 440, and communication device 460.
  • Input device 420 and output device 430 can generally correspond to those described above and can either be connectable or integrated with the computer.
  • Input device 420 can be any suitable device that provides input, such as a touch screen, keyboard or keypad, mouse, or voice-recognition device.
  • Output device 430 can be any suitable device that provides output, such as a touch screen, haptics device, or speaker.
  • Storage 440 can be any suitable device that provides storage, such as an electrical, magnetic, or optical memory, including a RAM, cache, hard drive, or removable storage disk.
  • Communication device 460 can include any suitable device capable of transmitting and receiving signals over a network, such as a network interface chip or device.
  • the components of the computing device 400 can be connected in any suitable manner, such as via a physical bus or wirelessly.
  • Software 450, which can be stored in storage 440 and executed by processor 410, can include, for example, the programming that embodies the functionality of the present disclosure (e.g., as embodied in the devices, systems, and methods as described above).
  • Software 450 can also be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions.
  • a computer-readable storage medium can be any medium, such as storage 440, that can contain or store programming for use by or in connection with an instruction execution system, apparatus, or device.
  • Software 450 can also be propagated within any transport medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions.
  • a transport medium can be any medium that can communicate, propagate, or transport programming for use by or in connection with an instruction execution system, apparatus, or device.
  • the transport medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, or infrared wired or wireless propagation medium.
  • Device 400 may be connected to a network, which can be any suitable type of interconnected communication system.
  • the network can implement any suitable communications protocol and can be secured by any suitable security protocol.
  • the network can comprise network links of any suitable arrangement that can implement the transmission and reception of network signals, such as wireless network connections, T1 or T3 lines, cable networks, DSL, or telephone lines.
  • Device 400 can implement any operating system suitable for operating on the network.
  • Software 450 can be written in any suitable programming language, such as C, C++, Java, or Python.
  • application software embodying the functionality of the present disclosure can be deployed in different configurations, such as in a client/server arrangement or through a Web browser as a Web-based application or Web service, for example.
  • an application for a mobile device allows a user to compile information about their media consumption habits and their emotional responses to media.
  • the user uses the application to enter information relating to the media content they consume on a regular basis, such as songs that the user plays on the mobile device or hears in the ambient environment, such as at concerts, in clubs, or other public places.
  • the user additionally enters information about his/her emotional state while listening to the media content.
  • the user provides as input into the application one or more emotions he/she felt when listening to a song by selecting one or more predetermined prompts associated with various emotions (e.g., prompts indicating the user felt enthusiasm, sexual desire, recognition/pride, nurturant/family love, contentment, friendship love, amusement, pleasure, and/or gratitude) or by entering words associated with their emotional state (e.g., “I feel happy”, “I feel energized”).
  • the user uploads the media content itself to the application, such as a portion of a song heard by the user in the emotional state or an image viewed by the user in the emotional state.
  • the application determines, based on the user’s reported emotional state, one or more neurotransmitters that are active or at elevated levels during that emotional state. For instance, when a user indicates they felt amusement, pleasure, and gratitude while listening to the media content, the application determines that the media content elicits a physiological response in the user that is associated with elevated levels of the neurotransmitters dopamine, cannabinoids, and opioids. The application then stores data about the media content that associates the media content with increased dopamine, cannabinoid, and opioid activity.
  • By repeatedly using the application to record their emotional reactions to media content, the user enables the application to build a database of the media content input by the user and the associated neurotransmitters that may be affected by the media content.
  • the user then enters information into the application about a current emotional state to cause the application to select media content from the database that is known to elicit a particular emotional response that would be beneficial in the user’s current emotional state. For instance, when the user’s current emotional state is associated with low levels of one or more neurotransmitters, the application selects and suggests media content from the database that is known, based on the user’s prior entries in the application, to cause elevated levels of those neurotransmitters in the user. The user enters his/her current emotional state as a natural language query and/or by selecting one or more prompts on a user interface.
  • the media content includes various songs or audio recordings
  • the application is configured for constructing a playlist (i.e., a collection of songs) that would be beneficial to the user in the user’s current emotional state.
  • the query “I feel lonely” causes selection and curation of music originally associated with positive memories of the user involving family (e.g., to trigger oxytocin) and friends (e.g., to trigger cannabinoids).
  • the songs selected by the application activate brain areas in the user that are associated with oxytocin and cannabinoids, thus filling the gap for these neurotransmitters and the associated emotional state of the user. The experience of “feeling lonely” in that person is thereby reduced.
  • the query “I am not motivated to exercise” causes selection and curation of music originally associated with positive memories involving excitement (dopamine) and pride (serotonin).
  • the songs selected by the application activate brain areas in the user that are associated with dopamine and serotonin, thus filling the gap for these neurotransmitters and driving motivation in that person.
  • a user inputs information about media content he/she consumes in a manner similar to the example described above. However, in this example, the user enters information about predicted or determined neurotransmitter activity while consuming media content (i.e., instead of entering information about the user’s emotional state).
  • the user is educated on the physiological relationship between certain emotional states and the activity of neurotransmitters in the nervous system, or has some other means for detecting the activity of neurotransmitters in their body.
  • the same user provides as input to the application one or more neurotransmitters that he/she has identified as being deficient in the negative emotional state to prompt the application to select and provide media content tailored to addressing that negative emotional state. For instance, when the user is lonely, the user indicates “oxytocin” and/or “cannabinoids” by entering the desired neurotransmitters into the application (e.g., by selecting the neurotransmitters on a prompt or entering the neurotransmitters on a user interface of the mobile device). The application then determines, based on the neurotransmitters input by the user, media content or a collection of media content that is identified as increasing the levels of the specified neurotransmitters in the user.
  • entering “oxytocin” and “cannabinoids” causes the application to select media content from a database that is associated with higher levels of oxytocin and cannabinoids in the user.
  • the query “I need dopamine” causes the application to select and curate music associated with increased dopamine levels in the user and to present that media content to the user.
  • an application is used by multiple users to select and present media according to information from the group of users.
  • the application is used similarly to the applications described with respect to Prophetic Examples 1 and 2.
  • a group of users each utilize the application for recording their respective media consumption habits and emotional states associated with various media content objects.
  • Each of the users provides as input individualized information into their respective application about the media content they consume and emotional states associated with the media content.
  • two or more of the users provide as input information about the same media content object. For instance, two or more of the users have attended the same concert, watched the same television show, or otherwise have similar media consumption habits.
  • the information from the group of users is compiled using the application and used to recommend media content that is likely to improve the emotional or physiological state of the group of users.
  • the resulting database includes information from multiple users regarding various media content and the emotional state that media content is likely to elicit in one or more users of the group.
  • One or more users of the group then input a query (e.g., a query representing the overall emotional state of the group or an emotional state of one or more members of the group), and the application determines, based on the compiled data from the multiple users, media content that is likely to address the emotional state of the group.
  • a group of friends have attended the same concert and have input information about a song played at the concert and their emotional state while hearing the song.
  • the group of friends then chooses to combine the data from their application (e.g., in a “group play” mode of the application).
  • One or more of the friends then inputs a query, such as “we want to celebrate”, and the application selects media content based on the combined information from the group of friends that is likely to trigger celebration, such as media content associated with pleasure (e.g., elevated opioid levels) and/or excitement (e.g., elevated dopamine levels) for each user of the group.
  • an application automatically selects and suggests media content without involvement of a user.
  • the application is used similarly to the applications described with respect to Prophetic Examples 1, 2 and 3.
  • a user provides as input a series of entries to the application about his/her media content consumption and emotional states over a period of days, weeks, months, or years.
  • the application uses an algorithm to monitor the user’s emotional state and neurotransmitters over the period of time and develop trends in the user’s neurotransmitter levels to predict a current or future neurotransmitter deficiency.
  • the application selects one or more media content objects likely to address the underlying needs of the user before the user experiences such an emotional need or without a query from the user identifying that particular emotional need.
  • a user is heavily involved with work during the week and fails to input information about his/her emotional state and/or media consumption habits.
  • the application determines that the user is likely to experience a deficit in cannabinoids (associated with friend love) and opioids (associated with pleasure) by Thursday or Friday of that week.
  • This deficit is predicted by the application based on past information indicating that the user is likely to experience the deficit at this time of the week, or is predicted based on the user’s failure to input entries into the application.
  • the application selects various media content (e.g., a playlist) that is associated with the deficient neurotransmitters in order to remedy the deficit in the neurotransmitters.
  • the application also selects media content in order to motivate the user to do and experience activities to increase the deficient neurotransmitters on Friday night or the weekend (i.e., reactively) or to motivate the user in the coming weeks and months (i.e., proactively).
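The “group play” selection of Prophetic Example 3 above can be sketched as an intersection over each member’s personal media-to-molecule associations. The data shapes and names below (`group_data`, `group_picks`) are illustrative assumptions, not part of the disclosed application:

```python
# Sketch of group selection: choose media that every member has
# personally associated with at least one of the target reward molecules.
group_data = {
    "user1": {"song_A": {"opioids", "dopamine"}, "song_B": {"oxytocin"}},
    "user2": {"song_A": {"dopamine"}, "song_C": {"opioids"}},
}

def group_picks(group_data, targets):
    """Media that each member associates with at least one target molecule."""
    per_user = [
        {media for media, mols in library.items() if mols & targets}
        for library in group_data.values()
    ]
    return set.intersection(*per_user) if per_user else set()

# "We want to celebrate" -> pleasure (opioids) and excitement (dopamine).
print(group_picks(group_data, {"opioids", "dopamine"}))  # {'song_A'}
```

Requiring the intersection (rather than the union) reflects the example’s goal of media likely to trigger the target molecules for each user of the group.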


Abstract

Systems and methods for evaluating and improving the overall mental health of an individual are presented herein. The systems and methods are configured for selecting and providing user-specific media content objects to a user based on data indicating an association between the media content objects and reward molecules of the user, such as neurotransmitters in the nervous system. The system comprises one or more processors configured to: receive information indicating an association between a reward molecule of a user and a first media content object; generate and store, for the user, data associating the first media content object with the reward molecule; receive second information indicating a current level of the reward molecule for the user; select, based at least in part on the current level of the reward molecule and the stored data, the first media content object; and provide the first media content object to the user.

Description

SYSTEMS AND METHODS FOR SELECTING AND PROVIDING MEDIA CONTENT TO IMPROVE NEUROTRANSMITTER LEVELS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of and priority to U.S. Provisional Patent Application No. 63/507,647, filed June 12, 2023, the contents of which are incorporated herein by reference in their entirety.
FIELD
[0002] This disclosure relates generally to systems and methods for selecting and providing personalized media content to a user to modulate the neurotransmitter activity of the user.
BACKGROUND
[0003] Research indicates that the human landscape for positive emotions comprises nine distinct emotion types: enthusiasm, sexual desire, recognition/pride, nurturant/family love, contentment, friendship love, amusement, pleasure, and gratitude. These emotions are created from various combinations of one or more of six neurotransmitters that act as the molecular reward system of the nervous system: dopamine, testosterone, serotonin, oxytocin, cannabinoids, and opioids. Research has further investigated the manner in which media consumption can influence the activity of neurotransmitters in a media consumer. For instance, certain media content can trigger emotional responses in users, such as a startle response following a frightful scene in a horror movie or joy when playing a song or presenting a photograph that the consumer associates with a positive memory.
[0004] Commercially available media services, such as streaming services for music and television, may curate collections of media content that are associated with a particular mood or emotion (e.g., an energizing playlist for the gym including upbeat music, or a date night playlist including songs typically associated with romance). However, these media collections are generally compiled based on presumed associations between the media content and certain emotions or by algorithms that use generalized information to categorize the content into moods, rather than user-specific information that correlates the emotional response of a particular user to a particular piece of media and certain personal, positive memories associated with the piece of media. Thus, currently available media services and applications may not account for user-specific experiences about various media content that causes the media content to elicit a particular emotional state and/or neurotransmitter activity in a user.
SUMMARY
[0005] Despite research into the correlation between media content and autonomic nervous system activity, systems have not yet been developed for selecting and presenting media content to a user based on user-specific correlations between the media content and the user’s neurotransmitter activity, and in particular neurotransmitters involved in positive emotional significance and the creation of personal, positive memories.
[0006] Accordingly, there is a need for improved systems for selecting and presenting media content to a user based on user-specific information about that media content and its effect on the emotional and physiological state of a user. Such systems may be useful in the medical space for prevention and mitigation of mental or psychiatric disease and/or in the consumer space for providing a user more control of how media content affects their mental and physical wellbeing.
[0007] Disclosed herein are systems and methods that may address such needs by allowing users to compile user-specific information about media consumption behavior and associated emotional and/or physiological responses to the media, such as the activity or level of neurotransmitters in the nervous system. This user-specific information may be provided as input into an application, such as an application for a mobile device, which compiles the media content and creates associations between the media content and a physiological state of a user at the time at which the user consumed the media content. The user may then recall specific media content from the application, such as a song or playlist, by querying the application to present media associated with a particular emotional or physiological state of the user. By presenting media that is known or predicted to have a particular physiological effect on the user (e.g., based on past correlation with certain physiological states of the user), the application can enable users to address deficiencies in their current mood and to promote overall emotional and physiological wellbeing, such as by modulating the level of various neurotransmitters in the nervous system. Such media content applications can be deployed consciously by a user as a highly personalized positive emotion booster. These applications may also generate media content sua sponte (i.e., without user input) by predicting or detecting a user’s current emotional state and automatically presenting media content to meet the user’s needs.
[0008] In some examples, a system for selecting and providing media content is provided. The system comprises one or more processors configured to: receive first information indicating an association between a first reward molecule of a user and a first media content object; generate and store first data indicating, for the user, an association between the first media content object and the first reward molecule; receive second information indicating a current level of the reward molecule for the user; select, based at least in part on the current level of the reward molecule and the first stored data, the first media content object; and provide the first media content object to the user.
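The receive/store/select/provide flow recited in this paragraph can be illustrated with a minimal sketch. The class and method names (`MediaLibrary`, `associate`, `recommend`) and the 0-to-1 level scale are illustrative assumptions, not part of the disclosed system:

```python
# Minimal sketch of the flow: store user-specific media-to-molecule
# associations, then select media tied to molecules reported as deficient.
class MediaLibrary:
    def __init__(self):
        # Maps each reward molecule to media objects associated with it.
        self.by_molecule = {}

    def associate(self, media_id, molecules):
        # First information: record the user-specific association.
        for m in molecules:
            self.by_molecule.setdefault(m, []).append(media_id)

    def recommend(self, current_levels, threshold=0.5):
        # Second information: pick media tied to molecules whose
        # reported level falls below a deficiency threshold.
        picks = []
        for molecule, level in current_levels.items():
            if level < threshold:
                picks.extend(self.by_molecule.get(molecule, []))
        return picks

lib = MediaLibrary()
lib.associate("song_A", ["oxytocin", "cannabinoids"])
lib.associate("song_B", ["dopamine"])
print(lib.recommend({"oxytocin": 0.2, "dopamine": 0.9}))  # ['song_A']
```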
[0009] In some examples, receiving the first information comprises receiving an input from the user explicitly indicating the association between the first media content object and the first reward molecule.
[0010] In some examples, receiving the first information comprises receiving an input from the user indicating an association between the first media content object and a location, time, date, event, individual, or group of individuals.
[0011] In some examples, receiving the first information comprises receiving an input from the user indicating an association between the first media content object and a first emotional state of the user and determining, based at least in part on the first emotional state of the user, the reward molecule.
[0012] In some examples, determining the reward molecule comprises applying the first emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
[0013] In some examples, the first emotional state of the user comprises an emotional state from the group comprising: enthusiasm, sexual desire, recognition, nurturant/family love, contentment, friendship/attachment love, amusement, pleasure, and gratitude.
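The PE-NT matrix referenced in these examples can be pictured as a lookup from reported positive emotions to reward molecules. The specific pairings below are illustrative assumptions for demonstration only (loosely informed by the Background discussion), not the actual matrix of the disclosure:

```python
# Illustrative PE-NT lookup: maps a reported positive emotion to the
# reward molecules elevated in that state. Pairings are assumptions.
PE_NT = {
    "enthusiasm": {"dopamine"},
    "sexual desire": {"testosterone"},
    "recognition/pride": {"serotonin"},
    "nurturant/family love": {"oxytocin"},
    "contentment": {"serotonin"},
    "friendship love": {"cannabinoids"},
    "amusement": {"dopamine"},
    "pleasure": {"opioids"},
    "gratitude": {"opioids"},
}

def molecules_for(emotions):
    """Union of reward molecules implied by the reported emotions."""
    found = set()
    for emotion in emotions:
        found |= PE_NT.get(emotion, set())
    return found

print(sorted(molecules_for(["amusement", "pleasure"])))  # ['dopamine', 'opioids']
```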
[0014] In some examples, receiving the first information comprises receiving information from a first sensor indicating a first physiological state of the user; receiving information indicating that the user was exposed to the first media content object during the time at which the user experienced the first physiological state; and determining, based at least in part on the first physiological state of the user, the reward molecule.
[0015] In some examples, determining the reward molecule comprises: providing the received information from the first sensor indicating the first physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; and receiving, from the machine-learning algorithm, output data comprising an indication of the reward molecule.
[0016] In some examples, determining the reward molecule comprises: providing the received information from the first sensor indicating the first physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; receiving, from the machine-learning algorithm, output data comprising an indication of a first emotional state of the user; and determining, based at least in part on the first emotional state of the user, the reward molecule.
[0017] In some examples, determining the reward molecule comprises applying the first emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
[0018] In some examples, receiving the second information comprises: receiving an input from the user explicitly indicating the current level of the reward molecule.
[0019] In some examples, receiving the second information comprises: receiving an input from the user indicating a second emotional state of the user; and determining, based at least in part on the second emotional state of the user, the current level of the reward molecule.
[0020] In some examples, receiving the second information comprises receiving an input from the user indicating a location, time, date, event, individual, or group of individuals and determining, based at least in part on the input, the current level of the reward molecule.
[0021] In some examples, receiving the second information comprises receiving the second information from a prediction model configured to predict the current level of the reward molecule for the user.
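A prediction model of the kind described in this paragraph could, in a minimal sketch, extrapolate a molecule’s current level from a logged time series of past levels. The linear-trend approach and the 0-to-1 level scale are illustrative assumptions, not the disclosed model:

```python
# Sketch of a trend-based predictor for a reward-molecule level,
# assuming the application logs past levels as values in [0, 1].
def predict_next_level(history):
    """Extrapolate the next level from the average step between samples."""
    if not history:
        return None
    if len(history) < 2:
        return history[-1]
    steps = [b - a for a, b in zip(history, history[1:])]
    trend = sum(steps) / len(steps)
    # Clamp to the valid range.
    return max(0.0, min(1.0, history[-1] + trend))

weekly_dopamine = [0.8, 0.7, 0.6, 0.5]  # declining across the week
print(round(predict_next_level(weekly_dopamine), 2))  # 0.4
```

A predicted level below a chosen threshold could then stand in for the user-reported “second information” in the selection step.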
[0022] In some examples, determining the current level of the reward molecule comprises applying the second emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
[0023] In some examples, receiving the second information comprises: receiving information from a second sensor indicating a second physiological state of the user; and determining, based at least in part on the second physiological state of the user, the current level of the reward molecule.
[0024] In some examples, determining the current level of the reward molecule comprises: providing the received information from the second sensor indicating the second physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; and receiving, from the machine-learning algorithm, output data comprising an indication of the current level of the reward molecule.
[0025] In some examples, determining the current level of the reward molecule comprises: providing the received information from the second sensor indicating the second physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; receiving, from the machine-learning algorithm, output data comprising an indication of a second emotional state of the user; and determining, based at least in part on the second emotional state of the user, the current level of the reward molecule.
[0026] In some examples, determining the current level of the reward molecule comprises applying the second emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
[0027] In some examples, providing the first media content object comprises causing one or more speakers of the system to output audio content of the first media content object.
[0028] In some examples, providing the first media content object comprises causing one or more displays of the system to display an interactive affordance to the user prompting the user to play audio content of the first media content object.
[0029] In some examples, an identity of the reward molecule is selected from a group consisting of dopamine, serotonin, testosterone, oxytocin, cannabinoids, and opioids.
[0030] In some examples, a method for selecting and providing media content is provided. The method is performed by a system comprising one or more processors, and comprises: receiving first information indicating an association between a first reward molecule of a user and a first media content object; generating and storing first data indicating, for the user, an association between the first media content object and the first reward molecule; receiving second information indicating a current level of the reward molecule for the user; selecting, based at least in part on the current level of the reward molecule and the first stored data, the first media content object; and providing the first media content object to the user.
[0031] In some examples, a non-transitory computer-readable storage medium storing instructions for selecting and providing media content is provided. The instructions are configured to be executed by one or more processors of a system to cause the system to: receive first information indicating an association between a first reward molecule of a user and a first media content object; generate and store first data indicating, for the user, an association between the first media content object and the first reward molecule; receive second information indicating a current level of the reward molecule for the user; select, based at least in part on the current level of the reward molecule and the first stored data, the first media content object; and provide the first media content object to the user.
BRIEF DESCRIPTION OF THE FIGURES
[0032] FIG. 1 illustrates an exemplary system for providing media content to a user based on the emotional state of the user, according to some embodiments of the present disclosure.
[0033] FIG. 2 illustrates an exemplary positive-emotion-to-neurotransmitter (PE-NT) matrix, according to some examples of the present disclosure.
[0034] FIG. 3 illustrates an exemplary method for selecting and providing media content to a user, according to some examples of the present disclosure.
[0035] FIG. 4 illustrates an exemplary computing device, according to examples of the present disclosure.
DETAILED DESCRIPTION
[0036] In the following description of the disclosure and embodiments, reference is made to the accompanying drawings in which are shown, by way of illustration, specific embodiments that can be practiced. It is to be understood that other embodiments and examples can be practiced, and changes can be made, without departing from the scope of the disclosure.
[0037] In addition, it is also to be understood that the singular forms “a,” “an,” and “the” used in the following description are intended to include the plural forms as well unless the context clearly indicates otherwise. It is also to be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It is further to be understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used herein, specify the presence of stated features, integers, steps, operations, elements, components, and/or units but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, units, and/or groups thereof.
[0039] Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that, throughout the description, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” or the like refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission, or display devices.
[0040] Certain aspects of the present disclosure include process steps and instructions that may be described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present disclosure could be embodied in software, firmware, or hardware, and, when embodied in software, they could be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
[0041] The present disclosure also relates to a system and devices for performing the operations herein. This system and/or devices may be specially constructed for the required purposes, or they may comprise general-purpose computers selectively activated or reconfigured by a computer program stored in the computer(s). Such a computer program may be stored in a non-transitory, computer-readable storage medium such as, but not limited to, any type of disk, including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application-specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions and each coupled to a computer system bus. Furthermore, the computing devices referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
[0042] The methods, devices, and systems described herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.
[0043] As used herein, the term “real time” or “real-time,” as used interchangeably herein, generally refers to an event (e.g., an operation, a process, a method, a technique, a computation, a calculation, an analysis, a visualization, an optimization, etc.) that is performed using recently obtained (e.g., collected or received) data. In some cases, a real time event may be performed almost immediately or within a short enough time span, such as within at least 1 millisecond (ms), 5 ms, 0.01 seconds, 0.05 seconds, 0.1 seconds, 0.5 seconds, 1 second, 0.1 minute, 0.5 minutes, 1 minute, or more. In some cases, a real time event may be performed almost immediately or within a short enough time span, such as within at most 1 second, 0.5 seconds, 0.1 seconds, 0.05 seconds, 0.01 seconds, 5 ms, 1 ms, or less.
[0044] In some examples, a system is provided for selecting and presenting media content to a user to improve a deficiency in reward molecules of a user, such as neurotransmitters of the nervous system or some other biomolecule known to affect a user’s mental and physical wellbeing. In some examples, the identity of a reward molecule is selected from a group consisting of dopamine, serotonin, testosterone, oxytocin, cannabinoids, and opioids. The media content may be any form of media, such as auditory media (e.g., songs, podcasts, speeches, audio recordings, and/or compilations thereof), visual media (e.g., visual artwork or photographs), or mixed media (e.g., media that includes both auditory and visual stimuli).
[0045] FIG. 1 illustrates an exemplary system 100, according to examples of the disclosure. In one or more examples of the disclosure, the system 100 includes a user-controlled device 102, such as a phone or personal computer, and, optionally, a remote system 110.
[0046] The device 102 may be configured to provide a user interface 103 (e.g., a graphical user interface presented on a touchscreen display, a keyboard or keypad, a voice-recognition device, etc.), one or more applications 105 (e.g., a mobile device application operable on device 102), one or more processors 106, local data storage 107, and a network communication device 108. The device 102 (e.g., user interface 103 of device 102) may further include one or more media output components, such as speakers for playing auditory media or visual displays for displaying visual media.
[0047] In some examples, the device 102 is a handheld electronic device such as a phone or a tablet, and the user can engage with an application 105 of the device to input information about media content and/or an emotional state, e.g., by using a user interface 103 of the device 102 to provide the information to an application 105 stored on the device 102. Additionally or alternatively, the device 102 may include a wearable device such as a watch, glasses, a head-mounted device, or another device configured to be worn by a user of the system 100. Optionally, a wearable device may be configured to receive information about the emotional state or neurotransmitter activity of the user using one or more sensors 104 configured to measure physiological information about the user indicative of an emotional state or the activity of one or more neurotransmitters. For example, the wearable device may include one or more sensors to measure one or more parameters including, but not limited to: [0048] Cardiac Interbeat Interval (CBI or IBI), ms, as measured, for example, by sensors based on (1) electrical activity (such as Electrocardiogram based on wet electrodes, dry electrodes, or capacitive electrodes), (2) sensors detecting arterial pulse using photoplethysmography (PPG), or sensors such as PhysioCam (PhyC), a non-contact system capable of measuring arterial pulse with sufficient precision to derive HRV during different challenges, (3) sensors based on mechanical activity (ballistocardiogram (BCG) using e.g. Hydraulic sensors, EMFi film sensors, Accelerometer), radio frequency or seismocardiogram (SCG) using e.g. Accelerometer, Laser Doppler Vibrometer, Laser Speckle
Vibrometry, Airborne Ultrasound or gyrocardiogram (GCG) using gyroscope or Laser speckle vibrometer, and/or (4) Forcecardiography;
[0049] Cardiac Pre-Ejection Period (PEP), ms, as measured, for example, by one or more of the same or similar sensor types as described above with reference to IBI (optionally, with a preference for Forcecardiography and/or Seismocardiography). PEP may be measured by simultaneously collecting both ECG, as described earlier, and impedance cardiography; [0050] Number of valid Skin Conductance Responses (SCRs), as measured, for example, by sensors detecting galvanic skin response such as Ag/AgCl, stainless steel, silver, brass, and gold electrodes, Flexcomp Infiniti physiological monitoring and data acquisition unit, Empatica E4 and Refa System, Microsoft Band 2, Empatica E4, Health Sensor Platform, BITalino, Polar H6, Wearable Zephyr BioHarness 3, and/or Obimon EDA;
[0051] Respiratory Sinus Arrhythmia (RSA), ms², as measured, for example, by electrocardiogram sensors such as any one or more of those described above; and/or [0052] Mean Arterial Pressure (MAP), mmHg, as measured, for example, by (1) pressure-based methods (e.g., vascular unloading technique, arterial tonometry), (2) ultrasound-based methods, and/or (3) deep-learning-based methods using data from PPG or ECG. [0053] Measured parameters may be stored in local data storage 107 provided as a part of device 102 for local analysis by one or more processors 106 provided as a part of device 102. Additionally or alternatively, measured parameters may be transmitted via network communication device 108 to remote system 110, for example for remote storage, remote display, and/or remote data processing.
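For illustration, the interbeat intervals described in paragraph [0048] can be used to derive time-domain HRV metrics such as RMSSD; the following sketch (function name and sample values are hypothetical and not part of the disclosure) shows one such computation:

```python
import math

def rmssd(ibi_ms):
    """Root mean square of successive differences between interbeat
    intervals (in ms), a common time-domain HRV metric."""
    if len(ibi_ms) < 2:
        raise ValueError("need at least two interbeat intervals")
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# A short hypothetical series of IBIs in milliseconds
print(round(rmssd([800, 810, 790, 805, 795]), 2))
```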
[0054] As shown in FIG. 1, the device 102 is configured for communicating with a remote system 110 via a network communication device 108. The remote system 110 may include a network communication device 118 configured to receive data from and/or send data to one or more devices, such as device 102. In the example of system 100, remote system 110 may receive various information indicating an association between one or more neurotransmitters of the user and media content consumed by the user from the device 102 and/or communicate various information about the media content and/or association with neurotransmitters to the device 102. The remote system may further include one or more processors 116 and data storage 117.
Receiving first input associating media (e.g., a song) with positive experiences or positive emotions
[0055] A user of the system 100 may use the device 102 to play various forms of media content throughout the day, such as songs from a streaming application or images from a photo application. The user may also experience media content in the ambient environment (e.g., outside of the device 102), such as a song played at a concert or in a public space, a television series viewed on a television, or a piece of artwork viewed at a museum. The device 102 may be configured to monitor the environment to detect media content in the ambient environment (e.g., songs played in an environment may be detected by microphones of device 102) and/or to detect media content via electronic monitoring of one or more other devices or systems (e.g., the device may monitor, via a network, and detect when media is played using another network-connected device). The device 102 may be used to record various information about media content consumed by a user and their emotional state while consuming the media content on the application 105 using the user interface 103. For instance, a user may record the form of the media content, the genre of the media content, the artist or title of the media content, and/or the viewing location of the media content, as well as the emotional state of the user, the intensity of the emotion experienced, the neurotransmitter activity of the user, and/or other physiological information about the user while the user consumed the media content. The user may also input information about other persons involved in the memory or who were present while consuming the media content. In some examples, the user may upload the media content to the application 105, such as an image or a snippet of a song associated with the particular emotional state or the activity of one or more neurotransmitters.
[0056] The device 102 is configured to accept an input from a user, e.g., on a user interface 103, regarding the media content and/or emotional state experienced by the user. In one or more examples, the user interface 103 of the device 102 can include one or more user selectable buttons, a touchscreen display, a keypad, a voice-control device, or some other means for inputting information from a user.
[0057] In one or more examples, the user interface 103 is configured to display various categories associated with emotional states, and the user is prompted to categorize their emotional state based on the one or more predefined categories. For instance, in one or more examples, the user may be prompted to input information by selecting one or more categories representing various positive emotional states, such as: enthusiasm; sexual desire; recognition/pride; nurturant/family love; contentment; friendship love; amusement; pleasure; and gratitude. The categories provided above are meant as examples only and should not be seen as limiting to the disclosure. Alternative or additional categories could be included.
[0058] Additionally or alternatively, a user of the device 102 may record the information about the media content and/or their associated emotional state in natural language. For example, a user may input information into application 105 by typing words or speaking into a user interface 103 of the device 102. In some examples, the device 102 may be configured to determine, based on the natural language input, one or more emotional states of the user. For instance, the processor 106 may include a natural language processing module configured to determine, based on the natural language input, one or more categories representing various emotional states of the user. In some examples, the various emotional states of the user may be defined by the categories above.
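As a simplified illustration of the natural language processing module described above, a keyword-lexicon approach might map free-text input to the predefined emotion categories; the lexicon, keywords, and function name below are hypothetical placeholders standing in for a trained language model:

```python
# Hypothetical keyword lexicon standing in for a trained
# natural-language model; categories match those listed above.
EMOTION_KEYWORDS = {
    "enthusiasm": {"excited", "motivated", "energized"},
    "contentment": {"calm", "content", "peaceful", "relaxed"},
    "gratitude": {"grateful", "thankful"},
    "amusement": {"funny", "laugh", "hilarious"},
}

def categorize(text):
    """Return the emotion categories whose keywords appear in the text."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    return sorted(cat for cat, kws in EMOTION_KEYWORDS.items() if words & kws)

print(categorize("I felt so relaxed and grateful listening to this"))
```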
[0059] More advanced users may be capable of identifying, based on their own perceived emotional state, one or more neurotransmitters associated with the emotional state. In some examples, a user may input one or more neurotransmitters that they believe are elevated or active while consuming a particular media content object. In such examples, the user may forego entering an emotional state associated with the media content object.
[0060] In some examples, the application 105 may prompt the user to input information about media content and their associated emotional state or neurotransmitter activity, e.g., by displaying a prompt or notification on a user interface 103 of the device 102. In some examples, the user interface 103 may display a prompt if the device determines that the user is consuming media content. In one or more examples, the user may choose to ignore the prompt and the information will not be used to generate data associating the user’s emotional state and/or neurotransmitter levels with the media content. In some examples, the application 105 may prompt the user to input information at a particular time or responsive to a particular event, such as prompting the user at a particular time of day or responsive to determining that the user is engaged in a particular activity or at a particular location.
[0061] In some examples, the application 105 is configured to automatically record information about media content consumed by the user in real-time based on information about the media content presented by an application on the device 102. For instance, the application 105 may be configured to record the title and artist of music played by a streaming application used by the user on the device 102 or an image presented to the user on a photography application on the device 102. In some examples, the application 105 may automatically record information about the media content based on information from one or more sensors configured for detecting various media content (e.g., microphones capable of detecting various songs played by the device 102 or in the ambient environment around the user).
[0062] In some examples, the device 102 includes one or more physical and/or chemical sensors 104 for monitoring physiological responses indicative of autonomic nervous system activity of the wearer. For instance, the device 102 may include one or more sensors 104 configured for determining various physiological parameters of the user that are indicative of a user’s mood or the level of a particular neurotransmitter of the user. While the user is consuming media content on the device 102, such as songs on a streaming application or images in a photo application of the device 102, the sensor(s) 104 may be configured to automatically receive sensor data indicating an association between the media content currently presented by the device 102 and the neurotransmitter activity of the user. The measured sensor data may be stored in local data storage 107 provided as a part of the device 102 for local analysis by one or more processors 106. Additionally or alternatively, one or more sensors may be configured to receive sensor data about the location of media consumption, the time of media consumption, or some other information. The measured sensor data may be transmitted via a network communication device 108 to a remote system 110, for example for remote storage, remote data processing, and/or remote display. [0063] In some examples, the device 102 may use one or more machine-learning algorithms to process information about the user’s consumption of media content and their resultant emotional state and/or neurotransmitter activity. For instance, one or more sensors 104 may be configured to receive physiological data from the user before, during, and/or after consuming a media content object. The sensor data received from the sensors 104 may be used to update and refine information about the user’s emotional and physiological reaction to a media content object to improve future media content selections.
[0064] In some examples, during a training or learning phase, the device 102 may receive a plurality of information indicating associations between one or more media content objects and the emotional state or neurotransmitter activity of the user. For instance, the user may use the application 105 to record one or more past experiences (e.g., memories or recent experiences) involving a media content object and the emotional state or neurotransmitter activity of the user onto the device 102 to develop a dataset including a plurality of associations between various media content and the emotional states of the user. The user may also enter information about one or more concurrent experiences with media content objects (e.g., by entering information about their emotional state while consuming the media content concurrently or shortly after consuming the media content). During the training or learning phase, a user may input information about a number of media content objects and associated emotional states. The number of media content objects and associated emotional states may be sufficient to include a full range of emotional states envisioned by the application 105. For example, the dataset may include at least one media content entry associated with each of the positive emotional states described above. In another example, the user may enter a number of media content objects and associated emotional states sufficient to include the full range of neurotransmitters monitored by the system 100. In this example, at least one media content entry can be associated with the activity of each of the neurotransmitters identified above. In yet further examples, a training or learning phase may request that the user input a particular number of media content objects and associated emotional states, such as at least 10, at least 20, at least 50, or at least 100 media content objects and associated emotional states.
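The training-phase completeness criteria described above might be checked with logic along the following lines (the entry format, function name, and minimum-entry threshold are illustrative assumptions, not a required implementation):

```python
# The nine positive-emotion categories listed in paragraph [0057].
POSITIVE_EMOTIONS = [
    "enthusiasm", "sexual desire", "recognition/pride",
    "nurturant/family love", "contentment", "friendship love",
    "amusement", "pleasure", "gratitude",
]

def training_complete(entries, min_entries=10):
    """entries: (media_id, emotion) pairs recorded during the learning
    phase. Complete when every emotion category is covered and an
    illustrative minimum entry count is reached."""
    covered = {emotion for _, emotion in entries}
    return len(entries) >= min_entries and covered >= set(POSITIVE_EMOTIONS)

# One entry per category plus a tenth entry to reach the minimum
entries = [(f"media{i}", e) for i, e in enumerate(POSITIVE_EMOTIONS)]
entries.append(("media9", "gratitude"))
print(training_complete(entries))
```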
Generating NT-media content association data based on the media/emotional state user inputs and storing the NT data
[0065] Once the application 105 has received information about a media content object and the user’s emotional state while consuming the media content, the system 100 can then determine one or more reward molecules associated with the emotional state and media content and store that data for later use by the application 105. For instance, the system 100 may be configured to determine, based on one or more user inputs indicating an emotional state of the user associated with a media content object, one or more neurotransmitters associated with the emotional state and media content. In a particular example, the application 105 may receive information from the user indicating that a particular song is associated with the emotional state of enthusiasm, and may determine, based on the emotional state of enthusiasm identified by the user, that elevated levels of dopamine are associated with that song. That song and the associated neurotransmitter activity may then be stored by the system 100 in local storage 107 on the device 102 or in remote storage 117 on remote system 110.
[0066] In some examples, the determination of neurotransmitter activity associated with the user’s emotional states takes place at a processor 106 of the device 102. The processor 106 may be configured to determine, based on a user input indicating a particular emotional state associated with a media content object, a level of one or more neurotransmitters associated with the emotional state and media content object. In some examples, the information provided as input by the user may be transmitted via a network communication device 108 to the remote system 110, and the determination of the neurotransmitter activity associated with the emotional state and media content object takes place at the remote system 110 (e.g., on a processor 116 of the remote system 110). In some examples, the determination may be made collaboratively at processors on both the device 102 and the remote system 110.
[0067] In one or more examples, the system 100 (e.g., via a processor 106 of the device 102 or a remote processor 116 included in a remote system 110) is configured to calculate and/or estimate the level of the neurotransmitter activity (such as levels of dopamine, testosterone, serotonin, oxytocin, cannabinoids, and opioids described above) based on the emotional state identified by the user and/or the intensity of the emotional state. Optionally, the system 100 may use one or more machine-learning algorithms to associate the emotional state of the user with the activity or level of one or more neurotransmitters. In some examples, the machine-learning algorithm may be based on brain scans, e.g., fMRI brain scans, of the user. Using fMRI brain scan data to associate the emotional state of the user with a neurotransmitter, such as the indication and/or a level thereof, is described in greater detail in Patent Cooperation Treaty (PCT) Application No. PCT/US2024/031840, the contents of which are incorporated herein by reference in its entirety. [0068] Additionally or alternatively, the system 100 may utilize a positive emotion to neurotransmitter (PE-NT) matrix that can translate the emotional state (and, optionally, the intensity of the emotion) into an amount of each of the neurotransmitters associated with the emotional state. FIG. 2 illustrates an exemplary PE-NT matrix 202 that can be used by an application to calculate the amount or activity of one or more neurotransmitters associated with a particular emotional state experienced by the user. The rows of the PE-NT matrix 202 can represent the positive emotion categories such as enthusiasm, sexual desire, pride/recognition, nurturant love, contentment, amusement, pleasure, and gratitude, which correspond to the emotional states input by a user and/or identified by the system.
The columns of the PE-NT matrix 202 can represent the neurotransmitters (dopamine, testosterone, serotonin, oxytocin, cannabinoids, and opioids) associated with the positive emotions. A “1” in the matrix can indicate that a particular neurotransmitter is associated with that particular emotion. A “0” in the matrix can indicate that a particular neurotransmitter is not associated with that particular emotion.
[0069] The PE-NT matrix 202 can be used to calculate the amount of a neurotransmitter associated with a media content object. In one or more examples, the calculation can include a positive emotions (“PE”) ratings column that shows the emotional state(s) provided by the user with respect to a particular media content object. Optionally, the calculation may also include an indicator of the intensity of the emotional state experienced by the user. For instance, in one example, the user may have indicated that their enthusiasm while consuming a particular media content object is mild (indicating that it is lower than average but still present) and that PE rating can be quantified as a 3. In the same example, in the PE ratings column the user may rate their contentment while consuming the media content object as a 5, which is average (e.g., on a scale of 0-10). In order to calculate the amount of neurotransmitter activity associated with the media content object, the calculation can multiply the PE rating by the numbers in the PE-NT matrix to generate a number associated with the activity of a particular neurotransmitter. For instance, as shown in PE-NT matrix 202, enthusiasm can be associated with the release of dopamine. The PE rating for enthusiasm (provided by the user as associated with a particular media content object) is 3. That value is multiplied by 1 under the dopamine column to arrive at a value of 3. Thus, with respect to the enthusiasm felt by the user as expressed in the PE rating, the dopamine level associated with that enthusiasm for the media content object is quantified at 3. In one or more examples, the remaining columns are left at 0 because those neurotransmitters are not associated with enthusiasm. 
[0070] In some examples, the identified emotional states may correspond to the activity of two or more neurotransmitters, and the PE-NT matrix values of each of the neurotransmitters can be multiplied by the PE rating to arrive at a value for each neurotransmitter. For instance, if contentment was rated a 5 by the user and is associated with dopamine, oxytocin, and cannabinoids, each of those neurotransmitters can be multiplied by 5 (multiplying 5x1) to determine the level of neurotransmitter activity corresponding to that positive emotion. Once the calculation is made for each of the emotional states experienced by the user for each neurotransmitter, the calculation can add up the totals for each neurotransmitter and associate the neurotransmitter totals with the media content object. Using a PE-NT matrix to identify and/or quantify neurotransmitters based on emotional states is described in greater detail in U.S. Patent Application No. 17/389,023, the contents of which are incorporated herein by reference in its entirety.
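The PE-NT calculation described in paragraphs [0068]-[0070] can be sketched as follows; the 0/1 matrix rows shown are illustrative placeholders rather than the actual matrix of FIG. 2:

```python
# Illustrative PE-NT calculation; the 0/1 values below are placeholders,
# not the actual matrix of FIG. 2.
NEUROTRANSMITTERS = ["dopamine", "testosterone", "serotonin",
                     "oxytocin", "cannabinoids", "opioids"]

PE_NT_MATRIX = {
    # one 0/1 flag per neurotransmitter, in the order above
    "enthusiasm":  [1, 0, 0, 0, 0, 0],
    "contentment": [1, 0, 0, 1, 1, 0],
}

def nt_levels(pe_ratings):
    """Multiply each PE rating by its matrix row and sum per column."""
    totals = dict.fromkeys(NEUROTRANSMITTERS, 0)
    for emotion, rating in pe_ratings.items():
        for nt, flag in zip(NEUROTRANSMITTERS, PE_NT_MATRIX[emotion]):
            totals[nt] += rating * flag
    return totals

# Enthusiasm rated 3 and contentment rated 5, as in the example above
print(nt_levels({"enthusiasm": 3, "contentment": 5}))
```

With these placeholder rows, enthusiasm (3) and contentment (5) together yield a dopamine total of 8, matching the worked example in the text.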
[0071] Returning to FIG. 1, the system 100 is further configured to store the data indicating the association between the various media content objects and the user’s neurotransmitter activity. The media content objects and associated neurotransmitter data can be stored in a database that is accessible to the application 105. The database may be included in local data storage 107 on the device 102 and/or in the storage 117 of a remote system 110 in communication with the device 102. In some examples, media content objects may be stored in the local data storage 107 or remote data storage 117 and associated with the information about the user’s neurotransmitter activity.
Receiving second input indicating user's current emotional/neurological need
[0072] After the user has entered one or more media content objects and associated emotional states into the application 105 and the application has determined the neurotransmitter levels associated with the emotional state and media content object, a user may then query the application 105 to create a personalized media content output by inputting additional information about their current state. For instance, a user may input information into the device 102 about their current emotional state to trigger the application 105 to select and display media content to address that emotional state. The current emotional state entered by the user may be associated with a particular deficit in neurotransmitter activity or a reduced level of one or more neurotransmitters, and the media content presented by the application may be selected to resolve the deficiency or otherwise boost the activity of certain neurotransmitters. For instance, a user could enter information associated with any of the nine positive emotions or the six neurotransmitters mentioned above. In other examples, the user may enter a query that includes information about an event or location, such as the event or location at which media content was originally consumed, or a time or date associated with the media content. A user may also enter information about a group of individuals that were present when the media was consumed, or a group of people currently present so the media content output can be tailored to the group’s emotions, experiences, and memories.
[0073] For instance, as described above, a user may input information into the application 105 using a user interface 103 of the device 102. For example, a user may input a current emotional state into the application 105 by selecting various prompts on a user interface 103 of the device 102 (e.g., prompts representing predefined categories associated with various emotional states), or the user may input a current emotional state as a natural language query. Natural language queries may be translated into the predefined categories using a natural language module in the processor 106 or remote processor 116. For instance, a user may input a natural language query “I am not motivated to exercise” and the application 105 may be configured to determine, based on the natural language input, the current emotional state of the user and/or an associated neurotransmitter deficiency (e.g., by determining the user is lacking enthusiasm and/or needs dopamine). A user may also input additional information as natural language queries, such as a location, time, or event associated with the music, an identity of one or more individuals or listeners, or some other information (e.g., “play me music for a family reunion with relatives X, Y, and Z,” “I am going to a HIIT workout with Justine, please output an energizing 30-minute playlist of our favorite club music,” “create a playlist of ’80s music to increase my dopamine levels”). Such natural language queries may be processed by a natural language module to determine various aspects about the desired media content and to output content on the basis of the information contained in the query.
[0074] In some examples, more advanced users of the system 100 may input a current emotional state by explicitly indicating one or more neurotransmitters that the user believes is in a deficit. For instance, a user may select “dopamine” and “oxytocin” on a user interface 103 of the device 102 if the user believes or has identified that their current emotional state is associated with reduced levels of dopamine and oxytocin.
[0075] As described above, in some examples the device 102 includes one or more sensors 104 for measuring physiological parameters indicative of the emotional state of the user. In such examples, receiving information about the user’s current emotional state may include receiving data from one or more sensors 104 of the device 102. In some examples, such a device may use the information received from the sensors to determine and store information about the user’s response to media content and anticipated need for media content. For instance, a sensor 104 could receive information about the location of the user and use the user’s location to predict a user’s emotional state or otherwise determine desired media content pertaining to that location.
[0076] The system 100 then determines, based on the information received about the user’s current emotional state, a deficiency in one or more neurotransmitters associated with that emotional state. The neurotransmitter deficiency may be determined, for instance, by correlating the reported emotional state of the user with the activity of one or more neurotransmitters using a PE-NT matrix or some other data providing correlations between a user’s current emotional state and neurotransmitter activity. In some examples, the neurotransmitter deficiency may be determined using a machine-learning algorithm trained on brain scans of the user.
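One possible realization of this correlation step, using illustrative emotion-to-neurotransmitter associations (placeholders, not the actual PE-NT matrix of FIG. 2), is:

```python
# Illustrative emotion-to-neurotransmitter associations (placeholders,
# not the actual PE-NT matrix of FIG. 2).
PE_NT = {
    "enthusiasm": ["dopamine"],
    "contentment": ["dopamine", "oxytocin", "cannabinoids"],
}

def deficient_nts(low_emotions):
    """Union of neurotransmitters linked to the emotions the user
    reports as lacking."""
    out = set()
    for emotion in low_emotions:
        out.update(PE_NT.get(emotion, []))
    return sorted(out)

print(deficient_nts(["enthusiasm", "contentment"]))
```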
[0077] In some examples, the system 100 is configured to predict the current emotional state of the user. For instance, the system 100 may predict, based on information regarding the past emotional states of the user (e.g., past information input by the user regarding their emotional state), one or more predicted current emotional states of the user. The predicted current emotional state of the user may then be used to estimate a current deficit of one or more neurotransmitters. Similarly, the system 100 may determine, based on information regarding the past neurotransmitter activity of the user, one or more neurotransmitters that are predicted to be in a current deficit.
Selecting media content to address the user's NT deficit
[0078] In one or more examples, after determining a neurotransmitter deficit of the user, the system 100 proceeds to select media content to address the user’s need for one or more neurotransmitters. The selection of media content objects may be based on the previous association between the media content objects and a level of neurotransmitter activity in the user, determined as described above. For instance, the selection may be based on information associating the media content with any one or more of the nine positive emotions or six neurotransmitters described above with respect to a user’s emotional state. In some examples, the media content objects may also be selected based on additional information, such as the time, location, or event at which the media was originally consumed (or the current time, location, or event of the user), the individuals who participated in the memory or the individuals currently present, or some other information input by the user. For instance, media content could be selected to increase dopamine activity in the user, and then could be further narrowed by selecting media content associated with a particular place, time, event, or group of individuals.
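A minimal sketch of such a selection step, assuming each stored media content object carries per-neurotransmitter activity scores (all titles, scores, and field names are hypothetical):

```python
# Hypothetical stored associations between media content objects and
# per-neurotransmitter activity scores (titles and values invented).
MEDIA_DB = [
    {"title": "Song A", "nt": {"dopamine": 8, "oxytocin": 2}},
    {"title": "Song B", "nt": {"dopamine": 3, "oxytocin": 6}},
    {"title": "Song C", "nt": {"serotonin": 6}},
]

def select_media(deficits, db=MEDIA_DB, n=2):
    """Return the n media objects with the highest combined score
    across the neurotransmitters identified as deficient."""
    def score(item):
        return sum(item["nt"].get(nt, 0) for nt in deficits)
    return [item["title"] for item in sorted(db, key=score, reverse=True)[:n]]

print(select_media({"dopamine", "oxytocin"}))
```

Additional filters (time, location, event, or persons present) could be applied to the ranked candidates before presentation, as the paragraph above describes.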
[0079] The media content may be selected from the database created by the user and stored in the system (e.g., in local data storage 107 or remote data storage 117), which includes one or more media content objects uploaded by the user and associated with one or more emotional states, neurotransmitter levels, and other information. In other examples, the media content may be selected from a larger library of media content, such as a library of a music streaming application or a database of music, art, or other media content. In such cases, the media selections from the library may be limited to only the content that has been previously associated by the user with one or more emotional states on the application 105. [0080] In some examples, the application 105 may be configured to provide the user with suggestions for media content for increasing neurotransmitter levels that may be reduced in the user’s current emotional state, or for boosting one or more neurotransmitters that are in a healthy range. For instance, if the system 100 determines that the user is experiencing an emotional state associated with low levels of dopamine or oxytocin (or any other neurotransmitter), the system may select media content that is associated with elevated levels of dopamine or oxytocin (or any other neurotransmitter) and suggest that media content in a prompt to the user. In another example, the system 100 could determine that the user is at the gym, and may select media that is associated with elevated levels of dopamine and other neurotransmitters associated with enthusiasm and energy (irrespective of any current deficiency in dopamine). The application 105 may present the selected media content as suggestions on a user interface 103 of the device 102.
For instance, the application 105 may present one or more songs, images, or other media content objects for a user on a user interface 103 of the device, and the user may select one or more of the media content objects by interacting with the user interface 103.
[0081] In some examples, the application may provide a user with one or more graphical user interfaces or other visual cues configured to allow the user to perceive deficits in neurotransmitter activity that can be used to select or suggest particular content to the user. For instance, the user may be presented with one or more prompts on the user interface 103 that allow the user to identify determined or predicted neurotransmitter levels and respond by selecting media content to present via the application. For instance, an exemplary user interface 103 may be configured to provide the user with suggestions for media content for increasing neurotransmitter levels associated with a particular emotional state. The user may then proceed to select one or more media content objects on the user interface 103 to initiate the provision of media content on the device 102.
Providing the media content to the user
[0082] After selecting and/or suggesting media content that is determined to address the emotional and physiological needs of the user, the system 100 can be configured to provide the media content to the user, for instance, by presenting the media content on the device 102. The media content may be presented to the user responsive to a user input (e.g., a touch, a button press, or a click) indicating that the user has selected the media content. Media content (e.g., visual media content) may be presented on a user interface 103 of the device 102, such as a display. In some examples, the system 100 is configured to provide media content (e.g., auditory media content) by causing one or more speakers of the system 100 to output audio content of the first media content object.
Methods
[0083] FIG. 3 illustrates an exemplary method 300 for selecting and presenting media content to a user based on an identified neurotransmitter deficiency in the user. Method 300 may be performed by a system for providing media content to a user, such as system 100 described with respect to FIG. 1. Processor 106 of the system 100 of FIG. 1 may be configured to execute instructions to perform various steps of the method 300 described below in reference to FIG. 3.
[0084] At step 302, the method includes receiving first information indicating an association between a first reward molecule of a user and a first media content object. As described herein, the reward molecule may include a neurotransmitter in the user’s nervous system or some other biomolecule associated with the health and wellbeing of the user. The media content object can include auditory media, such as a song or playlist, and/or visual media, such as one or more images or photographs.
[0085] In some examples, receiving the first information could include receiving an input from a user indicating an association between the reward molecule and a media content object. The information indicating an association between the reward molecule and the media object may include information about the emotional state of a user. For instance, the information may include information about an emotional state experienced by the user when presented with the media content object or the activity of one or more reward molecules during the consumption of the media content. For instance, the emotional state may include one or more of the following: enthusiasm, sexual desire, recognition, nurturant/family love, contentment, friendship/attachment love, amusement, pleasure, and gratitude. In some examples, the information may further include information about the intensity of the emotional state experienced by the user.
[0086] After receiving information about the emotional state of the user, the activity of one or more reward molecules (e.g., neurotransmitters) can be determined based on the emotional state of the user. For instance, in some examples, receiving the first information includes receiving an input from the user indicating an association between the first media content object and a first emotional state of the user and determining, based at least in part on the emotional state of the user, one or more neurotransmitters associated with the emotional state. In some examples, determining the neurotransmitters includes applying the first emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix, such as the PE-NT matrix shown in FIG. 2. Additionally or alternatively, the neurotransmitter activity of a user may be determined based on information from a brain scan of the user. For instance, in some examples, determining one or more neurotransmitters associated with the user’s emotional state includes using a machine-learning algorithm trained using brain scan data from the user and receiving, from the machine-learning algorithm, output data comprising an indication of a first emotional state of the user and determining, based at least in part on the first emotional state of the user, the reward molecule.
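The PE-NT matrix lookup described above can be illustrated with a short sketch. In the Python example below, the `PE_NT_MATRIX` and `neurotransmitters_for` names and the emotion-to-molecule weights are hypothetical placeholders for illustration only; an actual implementation would use the matrix shown in FIG. 2.

```python
# Illustrative positive-emotion-to-neurotransmitter (PE-NT) matrix.
# The weights below are hypothetical placeholders, not values taken
# from the disclosure or from FIG. 2.
PE_NT_MATRIX = {
    "enthusiasm": {"dopamine": 1.0},
    "recognition": {"serotonin": 1.0},
    "nurturant/family love": {"oxytocin": 1.0},
    "friendship/attachment love": {"cannabinoids": 1.0},
    "amusement": {"dopamine": 0.5, "cannabinoids": 0.5},
    "pleasure": {"opioids": 1.0},
    "gratitude": {"opioids": 1.0},
}

def neurotransmitters_for(emotions):
    """Aggregate the neurotransmitter activity implied by one or more
    reported emotions, returned in descending order of weight."""
    totals = {}
    for emotion in emotions:
        for molecule, weight in PE_NT_MATRIX.get(emotion, {}).items():
            totals[molecule] = totals.get(molecule, 0.0) + weight
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))
```

For example, applying the reported emotions amusement, pleasure, and gratitude yields elevated dopamine, cannabinoid, and opioid activity, consistent with the mapping described in Prophetic Example 1.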
[0087] Additionally or alternatively, the information provided as input by a user could include explicit information about one or more neurotransmitters believed to be associated with the media content. In such examples, receiving the first information may include receiving an input from the user explicitly indicating the association between the first media content object and the first reward molecule.
[0088] In some examples, the information provided as input by a user includes a string of words or natural language indicating an emotional state of the user. In some examples, the information includes a categorical representation of the user’s emotional state based on one or more predefined categories corresponding to various emotional states, such as the positive emotions described above. In some examples, the information may include physiological information about the user that is received from one or more sensors configured for sensing information about the user’s emotional state.
[0089] In some examples, the user may input the information via an application, such as an application for a mobile device or some other computing device. In some examples, the information is received when the user inputs the information using a user interface of a device, such as by entering information about their media consumption and/or emotional state via a keyboard or touchscreen of the mobile device. Additionally or alternatively, receiving the information may include receiving the information from one or more sensors incorporated in a device, such as a wearable device configured to be worn on or near the user’s wrist, head, face, abdomen, or some other part of the user’s body.
[0090] In some examples, a user may input the information concurrently with media content consumption (e.g., by recording their emotions in real-time while listening to a song or viewing a photograph). Additionally or alternatively, a user may input information about past associations between media content and the user’s emotional state while consuming the content (e.g., by recording past or recent memories about the media content). Receiving multiple pieces of information may enable the method to compile a database of media content which the method can associate with various emotional states of the user and physiological states of the user.
[0091] In some examples, the information may be received by prompting the user to input the information. For instance, the method could include prompting the user to input information about media consumption responsive to a determination that the user is consuming media (e.g., responsive to determining that a song is playing on a music streaming app on a device, or responsive to determining that music is playing in the ambient environment surrounding the user). In some examples, a user may be prompted to input information responsive to determining that the user is in a particular location, such as at home, at work, or at a location where the user is known to consume media content. In some examples, the user may be prompted to input information at a particular time of the day, week, month, or year, and the method could include prompting the user at that particular time.

[0092] Additionally or alternatively, certain information may be received automatically (e.g., without active user involvement) by a system performing the method. For instance, information about the media content object may be automatically received by communicating with an application on a mobile device (e.g., a music or television streaming service) and determining that the user is consuming particular media content on the application (e.g., listening to a song or watching a movie). Information about an emotional state of the user may also be received automatically, such as by communicating with one or more sensors for receiving information about the physiological state of a user that is indicative of their emotional state.
In such examples, receiving the first information may include receiving information from a sensor indicating a first physiological state of the user, receiving information indicating that the user was exposed to the media content object during the time at which the user experienced the first physiological state; and determining, based at least in part on the first physiological state of the user, one or more neurotransmitters associated with the emotional state of the user. In some examples, the physiological state of the user is processed by a machine-learning algorithm trained using brain scan data for the user, and the method 300 includes receiving, from the machine-learning algorithm, output data comprising an indication of one or more neurotransmitters associated with the emotional state.
[0093] At step 304, the method includes generating and storing first data indicating an association between the first media content object and the first reward molecule associated with the first media content object for the user. The data may include the identity of one or more neurotransmitters whose activity is modulated by the media content object, the expected level of one or more neurotransmitters, the difference in neurotransmitter activity effectuated by the media content, or some other data about the association between the media content and a reward molecule that has a physiological effect on the user.
[0094] For instance, as described above, information about an emotional state of a user during consumption of media content may be used to determine the level or activity of one or more neurotransmitters or other reward molecules associated with the media content. In some examples, generating data indicating an association between a media content object and a reward molecule could include calculating expected or predicted neurotransmitter activity using a PE-NT matrix. A PE-NT matrix may enable the method to associate an emotional state of the user with one or more physiological effects on the user, such as the modulation of neurotransmitter activity. Additionally or alternatively, the method may use information from, e.g., fMRI, to determine the association between various media content, emotional states, and the activity of one or more neurotransmitters of the user. In some examples, the method may use a machine-learning algorithm trained on brain scans of the user to determine the one or more neurotransmitters.
[0095] In some examples, storing the data may include storing the data in storage on a device (e.g., a mobile device including an application for inputting information from a user) or in a remote system including remote storage (e.g., a remote computing system or a cloud in communication with the device). The data may be stored in a database. For instance, various media content may be stored in the database and information about the media content’s association with various emotional states or neurotransmitter levels of the user may also be stored in the database.
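The stored association data of step 304 might be represented, in one minimal sketch, as records in a simple store. The `MediaAssociation` and `AssociationStore` names and fields below are illustrative assumptions standing in for a database in local data storage 107 or remote data storage 117, not a definitive schema:

```python
from dataclasses import dataclass, field

@dataclass
class MediaAssociation:
    """One record linking a media content object to the reward molecules
    it has been associated with for the user (step 304). Field names are
    illustrative, not taken from the specification."""
    media_id: str
    reward_molecules: dict                          # molecule name -> associated activity
    emotions: list = field(default_factory=list)    # reported emotional states
    context: dict = field(default_factory=dict)     # e.g. time, place, people present

class AssociationStore:
    """Minimal in-memory stand-in for local or remote data storage;
    keyed by media id, so the latest record for a media object wins."""
    def __init__(self):
        self._records = {}

    def save(self, record):
        self._records[record.media_id] = record

    def all_records(self):
        return list(self._records.values())
```
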
[0096] At step 306, the method includes receiving second information indicating a current level of the reward molecule for the user. The second information could include information about the user’s current emotional state or other physiological information about the user associated with a reward molecule deficiency, such as a deficiency of one or more neurotransmitters. For instance, the second information could include an indication that the user is currently in a negative emotional state, or that the user is experiencing low or non-optimal levels of particular neurotransmitters.
[0097] As described previously with respect to step 302 of method 300, the user may input information using a user interface of a device (e.g., a user interface associated with an application for collecting and tracking information about the user’s media consumption and physiological health). In such examples, receiving the second information includes receiving an input from the user including information about the user’s current emotional state. The method may then include determining, based at least in part on the emotional state of the user, the current deficiency of one or more neurotransmitters. In some examples, determining the current level of one or more neurotransmitters includes applying the emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
[0098] Additionally or alternatively, the user may input information explicitly indicating one or more neurotransmitters that are deficient in the user’s current emotional state. In such examples, receiving the second information includes receiving a user input explicitly indicating the current deficiency of one or more neurotransmitters.
[0099] As described above with respect to step 302, in some examples information from a user may be received automatically by way of one or more sensors configured to measure physiological information about the user. In such examples, receiving the second information may include receiving information from a sensor indicating a second physiological state of the user and determining, based at least in part on the second physiological state of the user, the current deficiency of one or more neurotransmitters. Additionally or alternatively, the neurotransmitter activity of a user may be determined based on information from a brain scan of the user. For instance, in some examples, determining a current level of one or more neurotransmitters includes providing the received information from a sensor indicating the second physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; and receiving, from the machine-learning algorithm, output data comprising an indication of the current level of the reward molecule.
[0100] In some examples, receiving information indicating a current deficiency of one or more neurotransmitters could include determining the information based on previously reported emotional states of the user. For instance, the method may include analyzing information about the user’s emotional state over a period of time and predicting a deficiency of reward molecules in the user. For instance, the method could include determining, based on past user inputs indicative of a user’s emotional state, a predicted deficiency of reward molecules in the user.
[0101] At step 308, the method 300 includes selecting, based at least in part on the current level of the reward molecule and the first stored data, the first media content object. In some examples, the method 300 includes selecting a number of media content objects based on a current deficiency in reward molecules, where each media content object individually, or the media content in combination, may address the current deficiency in the reward molecules. In some examples, the method 300 includes selecting media content to boost a level of one or more reward molecules. In a specific example, the method 300 could include selecting a series of songs for a user formatted as a playlist.
[0102] The media content may be selected by determining that the media content is likely to remedy the current deficiency in reward molecules or otherwise modulate the activity or level of the user’s reward molecules. For instance, the method could include selecting media content that has been previously associated with elevated levels of neurotransmitters that have been identified as deficient in the user’s current emotional state. In some examples, the media content may be selected based on one or more additional criteria, such as information about the media content (e.g., a genre of the media content or availability of the media content), information about the user (e.g., the user’s location or a preference input by the user), or information about one or more additional reward molecules (e.g., the level of one or more additional neurotransmitters of the user).
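The selection logic of step 308 can be sketched as a scoring function over the stored associations. The dot-product heuristic and the dictionary record format below are illustrative assumptions, one of many possible scoring rules rather than a definitive implementation of the disclosure:

```python
def select_media(deficit, records):
    """Rank stored media by how well they cover a reward-molecule deficit.

    deficit: {molecule: shortfall to remedy}
    records: list of dicts like {"media_id": ..., "molecules": {...}},
    where "molecules" holds the activity previously associated with the
    media for this user. Returns media ids, best match first."""
    def score(rec):
        # Simple dot product of the current deficit against the stored
        # association; additional criteria (location, genre, people
        # present) could be folded in as further weighting terms.
        return sum(rec["molecules"].get(m, 0.0) * need
                   for m, need in deficit.items())
    return [rec["media_id"]
            for rec in sorted(records, key=score, reverse=True)
            if score(rec) > 0]
```
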
[0103] The media content may be selected from a collection of media content that has been previously associated with various emotional states of the user. For instance, the media content may be selected from a database in which media content has been stored by a user (e.g., a database including past media content uploaded to an application for tracking the user’s media content consumption and emotional states associated with the media content). In some examples, the media content may be selected from a database compiled by a different application, such as a music streaming application or another media application.
[0104] In some examples, the method further includes suggesting the selected media content to the user. For instance, the selected media content may be displayed to a user on a user interface of a mobile device. The user may then accept or reject the selected media content, e.g., by using a user interface of the device.
[0105] At step 310, the method 300 includes providing the first media content object to the user. Providing the media content object to the user may include initiating the media content on a device associated with a system for performing the method 300. For instance, providing the media content could include displaying visual media content on a display of a device, or causing speakers of a device to output audio content of an auditory media content object. In some examples, the method 300 includes providing a plurality of media content objects to the user (e.g., a plurality of songs formatted as a playlist).
Computing Device
[0106] FIG. 4 illustrates an example of a computing device in accordance with one embodiment. Device 400 can be a host computer connected to a network. Device 400 can be a client computer or a server. As shown in FIG. 4, device 400 can be any suitable type of microprocessor-based device, such as a personal computer, workstation, server, or handheld computing device (portable electronic device) such as a phone or tablet. The device 400 can include, for example, one or more of processor 410, input device 420, output device 430, storage 440, and communication device 460. Input device 420 and output device 430 can generally correspond to those described above and can either be connectable or integrated with the computer.
[0107] Input device 420 can be any suitable device that provides input, such as a touch screen, keyboard or keypad, mouse, or voice-recognition device. Output device 430 can be any suitable device that provides output, such as a touch screen, haptics device, or speaker.

[0108] Storage 440 can be any suitable device that provides storage, such as an electrical, magnetic, or optical memory, including a RAM, cache, hard drive, or removable storage disk. Communication device 460 can include any suitable device capable of transmitting and receiving signals over a network, such as a network interface chip or device. The components of the computing device 400 can be connected in any suitable manner, such as via a physical bus or wirelessly.
[0109] Software 450, which can be stored in storage 440 and executed by processor 410, can include, for example, the programming that embodies the functionality of the present disclosure (e.g., as embodied in the devices, systems, and methods as described above).
[0110] Software 450 can also be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a computer-readable storage medium can be any medium, such as storage 440, that can contain or store programming for use by or in connection with an instruction execution system, apparatus, or device.

[0111] Software 450 can also be propagated within any transport medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a transport medium can be any medium that can communicate, propagate, or transport programming for use by or in connection with an instruction execution system, apparatus, or device. The transport medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, or infrared wired or wireless propagation medium.
[0112] Device 400 may be connected to a network, which can be any suitable type of interconnected communication system. The network can implement any suitable communications protocol and can be secured by any suitable security protocol. The network can comprise network links of any suitable arrangement that can implement the transmission and reception of network signals, such as wireless network connections, T1 or T3 lines, cable networks, DSL, or telephone lines.
[0113] Device 400 can implement any operating system suitable for operating on the network. Software 450 can be written in any suitable programming language, such as C, C++, Java, or Python. In various embodiments, application software embodying the functionality of the present disclosure can be deployed in different configurations, such as in a client/server arrangement or through a Web browser as a Web-based application or Web service, for example.
EXAMPLES
Prophetic Example 1:
[0114] In a prophetic example, an application for a mobile device allows a user to compile information about their media consumption habits and their emotional responses to media. The user uses the application to enter information relating to the media content they consume on a regular basis, such as songs that the user plays on the mobile device or hears in the ambient environment, such as at concerts, in clubs, or other public places. The user additionally enters information about his/her emotional state while listening to the media content. For instance, the user provides as input into the application one or more emotions he/she felt when listening to a song by selecting one or more predetermined prompts associated with various emotions (e.g., prompts indicating the user felt enthusiasm, sexual desire, recognition/pride, nurturant/family love, contentment, friendship love, amusement, pleasure, and/or gratitude) or by entering words associated with their emotional state (e.g., “I feel happy”, “I feel energized”). In some examples, the user uploads the media content itself to the application, such as a portion of a song heard by the user in the emotional state or an image viewed by the user in the emotional state.
[0115] The application then determines, based on the user’s reported emotional state, one or more neurotransmitters that are active or at elevated levels during that emotional state. For instance, when a user indicates they felt amusement, pleasure, and gratitude while listening to the media content, the application determines that the media content elicits a physiological response in the user that is associated with elevated levels of the neurotransmitters dopamine, cannabinoids, and opioids. The application then stores data about the media content that associates the media content with increased dopamine, cannabinoid, and opioid activity.
[0116] By repeatedly using the application to record their emotional reaction to media content, the application builds a database of the media content input by the user and the associated neurotransmitters that may be affected by the media content.
[0117] The user then enters information into the application about a current emotional state to cause the application to select media content from the database that is known to elicit a particular emotional response that would be beneficial in the user’s current emotional state. For instance, when the user’s current emotional state is associated with low levels of one or more neurotransmitters, the application selects and suggests media content from the database that is known, based on the user’s prior entries in the application, to cause elevated levels of those neurotransmitters in the user. The user enters his/her current emotional state as a natural language query and/or by selecting one or more prompts on a user interface.
[0118] In a specific example, the media content includes various songs or audio recordings, and the application is configured for constructing a playlist (i.e., a collection of songs) that would be beneficial to the user in the user’s current emotional state. In various particular examples:
• The query “I feel lonely” causes selection and curation of music originally associated with positive memories of the user involving family (e.g., to trigger oxytocin) and friends (e.g., to trigger cannabinoids). The songs selected by the application activate brain areas in the user that are associated with oxytocin and cannabinoids, thus filling the gap for these neurotransmitters and the associated emotional state of the user. The experience of “feeling lonely” in that person is thereby reduced.
• The query “I am not motivated to exercise” causes selection and curation of music originally associated with positive memories involving excitement (e.g., to trigger dopamine) and pride (e.g., to trigger serotonin). The songs selected by the application activate brain areas in the user that are associated with dopamine and serotonin, thus filling the gap for these neurotransmitters and driving motivation in that person.
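The query handling in the examples above can be sketched as a keyword-to-molecule lookup that feeds the media selection step. The `QUERY_KEYWORDS` table below is a hypothetical illustration; a production system would likely apply richer natural-language processing to the user's free-text input:

```python
# Hypothetical keyword table mapping natural-language queries to target
# reward molecules; the keywords and mappings are illustrative only.
QUERY_KEYWORDS = {
    "lonely": ["oxytocin", "cannabinoids"],
    "motivated": ["dopamine", "serotonin"],
    "celebrate": ["opioids", "dopamine"],
}

def molecules_for_query(query):
    """Return the target reward molecules implied by a free-text query,
    preserving the order in which keywords are matched."""
    targets = []
    for keyword, molecules in QUERY_KEYWORDS.items():
        if keyword in query.lower():
            targets.extend(m for m in molecules if m not in targets)
    return targets
```

For instance, the query “I feel lonely” would resolve to oxytocin and cannabinoids, which could then be passed to the selection step as the deficit to remedy.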
Prophetic Example 2:
[0119] In a prophetic example, more advanced users of the application described with respect to Prophetic Example 1 directly enter information about their predicted neurotransmitter levels in order to store information in and query the application. In this example, a user inputs information about media content he/she consumes in a manner similar to the example described above. However, the user enters information about predicted or determined neurotransmitter activity while consuming media content (i.e., instead of entering information about the user’s emotional state). Such examples are possible because the user is educated on the physiological relationship between certain emotional states and the activity of neurotransmitters in the nervous system, or has some other means for detecting the activity of neurotransmitters in their body.
[0120] At a later time, to address a negative emotional state, the same user provides as input to the application one or more neurotransmitters that he/she has identified as being deficient in the negative emotional state to prompt the application to select and provide media content tailored to addressing that negative emotional state. For instance, when the user is lonely, the user indicates “oxytocin” and/or “cannabinoids” by entering the desired neurotransmitters into the application (e.g., by selecting the neurotransmitters on a prompt or entering the neurotransmitters on a user interface of the mobile device). The application then determines, based on the neurotransmitters input by the user, media content or a collection of media content that is identified as increasing the levels of the specified neurotransmitters in the user. For instance, in the example above, entering “oxytocin” and “cannabinoids” causes the application to select media content from a database that is associated with higher levels of oxytocin and cannabinoids in the user. In another particular example, the query “I need dopamine” causes the application to select and curate music associated with increased dopamine levels in the user and present that media content to the user.
Prophetic Example 3:
[0121] In another prophetic example, an application is used by multiple users to select and present media according to information from the group of users. The application is used similarly to the application described with respect to Prophetic Examples 1 and 2. In this example, a group of users each utilize the application for recording their respective media consumption habits and emotional states associated with various media content objects. Each of the users provides as input individualized information into their respective application about the media content they consume and emotional states associated with the media content. In some examples, two or more of the users provide as input information about the same media content object. For instance, two or more of the users have attended the same concert, watched the same television show, or otherwise have similar media consumption habits.
[0122] The information from the group of users is compiled using the application and used to recommend media content that is likely to improve the emotional or physiological state of the group of users. The resulting database includes information from multiple users regarding various media content and the emotional state that media content is likely to elicit in one or more users of the group. One or more users of the group then input a query (e.g., a query representing the overall emotional state of the group or an emotional state of one or more members of the group), and the application determines, based on the compiled data from the multiple users, media content that is likely to address the emotional state of the group.
[0123] In a particular example, a group of friends have attended the same concert and have input information about a song played at the concert and their emotional state while hearing the song. The group of friends then chooses to combine the data from their application (e.g., in a “group play” mode of the application). One or more of the friends then inputs a query, such as “we want to celebrate”, and the application selects media content based on the combined information from the group of friends that is likely to trigger celebration, such as media content associated with pleasure (e.g., elevated opioid levels) and/or excitement (e.g., elevated dopamine levels) for each user of the group.
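The “group play” selection described in this example can be sketched by aggregating per-user scores with a minimum, so that selected content is expected to benefit every member of the group. The record format and the minimum-over-users aggregation are illustrative assumptions; other aggregation rules (e.g., averaging) would also be consistent with the example:

```python
def group_select(deficit, user_records):
    """Select media for a group ("group play" mode).

    deficit: {molecule: shortfall to remedy} for the group's query.
    user_records: {user: [{"media_id": ..., "molecules": {...}}, ...]}.
    Each media object is scored per user by how well its associated
    molecules cover the deficit, then the *minimum* score across users
    is taken so selected content is likely to benefit every member."""
    def user_score(records, media_id):
        for rec in records:
            if rec["media_id"] == media_id:
                return sum(rec["molecules"].get(m, 0.0) * need
                           for m, need in deficit.items())
        return 0.0  # this user has no association with the media

    media_ids = {rec["media_id"]
                 for records in user_records.values() for rec in records}
    scored = {mid: min(user_score(records, mid)
                       for records in user_records.values())
              for mid in media_ids}
    return [mid for mid, s in sorted(scored.items(), key=lambda kv: -kv[1])
            if s > 0]
```
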
Prophetic Example 4:
[0124] In another prophetic example, an application automatically selects and suggests media content without involvement of a user. The application is used similarly to the applications described with respect to Prophetic Examples 1, 2 and 3. In this example, a user provides as input a series of entries to the application about his/her media content consumption and emotional states over a period of days, weeks, months, or years. Based on the series of entries, the application uses an algorithm to monitor the user’s emotional state and neurotransmitters over the period of time and identify trends in the user’s neurotransmitter levels to predict a current or future neurotransmitter deficiency. Without user involvement, the application then selects one or more media content objects likely to address the underlying needs of the user before the user experiences such an emotional need or without a query from the user identifying that particular emotional need.
[0125] In a particular example, a user is heavily involved with work during the week and fails to input information about his/her emotional state and/or media consumption habits. The application determines that the user is likely to experience a deficit in cannabinoids (associated with friend love) and opioids (associated with pleasure) by Thursday or Friday of that week. This deficit is predicted by the application based on past information indicating that the user is likely to experience the deficit at this time of the week, or is predicted based on the user’s failure to input entries into the application. The application then selects various media content (e.g., a playlist) associated with the deficient neurotransmitters in order to remedy the deficit. In some examples, the application also selects media content in order to motivate the user to engage in activities that increase the deficient neurotransmitters on Friday night or over the weekend (i.e., reactively) or to motivate the user in the coming weeks and months (i.e., proactively).
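The trend-based prediction described in this example can be sketched as follows. This is an illustrative sketch under stated assumptions, not code from the specification: the per-weekday averaging, the 0.4 threshold, the normalized level scale, and the treatment of missing entries as a risk signal are all hypothetical choices made for the example.

```python
# Hypothetical sketch of deficit prediction from a user's entry
# history: logged neurotransmitter levels are averaged per weekday,
# and a below-threshold average flags a likely deficit. If the user
# has logged nothing for that weekday, every tracked molecule is
# flagged, reflecting that missing input itself signals risk.
from statistics import mean


def predict_deficits(entries, weekday, threshold=0.4):
    """entries: list of (weekday, {molecule: level in [0, 1]}).
    Returns molecules whose average level on the given weekday falls
    below the threshold, or all tracked molecules if no entries exist
    for that weekday."""
    per_molecule = {}
    for day, levels in entries:
        if day != weekday:
            continue
        for molecule, level in levels.items():
            per_molecule.setdefault(molecule, []).append(level)
    if not per_molecule:
        tracked = {m for _, levels in entries for m in levels}
        return sorted(tracked)
    return sorted(m for m, vals in per_molecule.items() if mean(vals) < threshold)


# Example history: low cannabinoid and opioid levels logged on Fridays.
history = [
    ("fri", {"cannabinoids": 0.2, "opioids": 0.3, "dopamine": 0.7}),
    ("fri", {"cannabinoids": 0.3, "opioids": 0.2, "dopamine": 0.8}),
    ("mon", {"cannabinoids": 0.6, "opioids": 0.7, "dopamine": 0.5}),
]
deficits = predict_deficits(history, "fri")
# deficits -> ["cannabinoids", "opioids"]
```

The application could then select a playlist associated with the flagged molecules; more elaborate models (e.g., the machine-learning approaches recited in the claims) would replace the simple averaging used here.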
Conclusion
[0126] The foregoing description, for purposes of explanation, has referred to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated.
[0127] Although the disclosure and examples have been fully described with reference to the accompanying figures, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims.
[0128] For any numerical ranges disclosed in the text and figures, the numerical ranges disclosed inherently support any range or value within the disclosed numerical ranges, including the endpoints, even though a precise range limitation is not stated verbatim in the specification, because this disclosure can be practiced throughout the disclosed numerical ranges.

[0129] The above description is presented to enable a person skilled in the art to make and use the disclosure, and it is provided in the context of a particular application and its requirements. Various modifications to the preferred embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the disclosure. Thus, this disclosure is not intended to be limited to the embodiments shown but is to be accorded the widest scope consistent with the principles and features disclosed herein. Finally, the entire disclosures of the patents and publications referred to in this application are hereby incorporated herein by reference.

Claims

CLAIMS

What is claimed is:
1. A system for selecting and providing media content, the system comprising one or more processors configured to: receive first information indicating an association between a first reward molecule of a user and a first media content object; generate and store first data indicating an association between the first media content object and the first reward molecule associated for the user with the first media content object; receive second information indicating a current level of the reward molecule for the user; select, based at least in part on the current level of the reward molecule and the first stored data, the first media content object; and provide the first media content object to the user.
2. The system of claim 1, wherein receiving the first information comprises: receiving an input from the user explicitly indicating the association between the first media content object and the first reward molecule.
3. The system of any one of claims 1-2, wherein receiving the first information comprises: receiving an input from the user indicating an association between the first media content object and a location, time, date, event, individual, or group of individuals.
4. The system of any one of claims 1-3, wherein receiving the first information comprises: receiving an input from the user indicating an association between the first media content object and a first emotional state of the user; and determining, based at least in part on the first emotional state of the user, the reward molecule.
5. The system of any one of claims 1-4, wherein determining the reward molecule comprises applying the first emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
6. The system of any one of claims 1-5, wherein the first emotional state of the user comprises an emotional state from the group comprising: enthusiasm, sexual desire, recognition, nurturant/family love, contentment, friendship/attachment love, amusement, pleasure, and gratitude.
7. The system of any one of claims 1-6, wherein receiving the first information comprises: receiving information from a first sensor indicating a first physiological state of the user; receiving information indicating that the user was exposed to the first media content object during the time at which the user experienced the first physiological state; and determining, based at least in part on the first physiological state of the user, the reward molecule.
8. The system of any one of claims 1-7, wherein determining the reward molecule comprises: providing the received information from the first sensor indicating the first physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; and receiving, from the machine-learning algorithm, output data comprising an indication of the reward molecule.
9. The system of any one of claims 1-8, wherein determining the reward molecule comprises: providing the received information from the first sensor indicating the first physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; receiving, from the machine-learning algorithm, output data comprising an indication of a first emotional state of the user; and determining, based at least in part on the first emotional state of the user, the reward molecule.
10. The system of any one of claims 1-9, wherein determining the reward molecule comprises applying the first emotional state to a positive-emotion-to-neurotransmitter (PE-NT) matrix.
11. The system of any one of claims 1-10, wherein receiving the second information comprises: receiving an input from the user explicitly indicating the current level of the reward molecule.
12. The system of any one of claims 1-11, wherein receiving the second information comprises: receiving an input from the user indicating a second emotional state of the user; and determining, based at least in part on the second emotional state of the user, the current level of the reward molecule.
13. The system of any one of claims 1-12, wherein receiving the second information comprises: receiving an input from the user indicating a location, time, date, event, individual, or group of individuals; and determining, based at least in part on the input, the current level of the reward molecule.
14. The system of any one of claims 1-13, wherein receiving the second information comprises: receiving the second information from a prediction model configured to predict the current level of the reward molecule for the user.
15. The system of any one of claims 1-14, wherein determining the current level of the reward molecule comprises applying the second emotional state to a positive-emotion-to- neurotransmitter (PE-NT) matrix.
16. The system of any one of claims 1-15, wherein receiving the second information comprises: receiving information from a second sensor indicating a second physiological state of the user; and determining, based at least in part on the second physiological state of the user, the current level of the reward molecule.
17. The system of any one of claims 1-16, wherein determining the current level of the reward molecule comprises: providing the received information from the second sensor indicating the second physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; and receiving, from the machine-learning algorithm, output data comprising an indication of the current level of the reward molecule.
18. The system of any one of claims 1-17, wherein determining the current level of the reward molecule comprises: providing the received information from the second sensor indicating the second physiological state of the user to a machine-learning algorithm trained using brain scan data for the user; receiving, from the machine-learning algorithm, output data comprising an indication of a second emotional state of the user; and determining, based at least in part on the second emotional state of the user, the current level of the reward molecule.
19. The system of any one of claims 1-18, wherein determining the current level of the reward molecule comprises applying the second emotional state to a positive-emotion-to- neurotransmitter (PE-NT) matrix.
20. The system of any one of claims 1-19, wherein providing the first media content object comprises causing one or more speakers of the system to output audio content of the first media content object.
21. The system of any one of claims 1-20, wherein providing the first media content object comprises causing one or more displays of the system to display an interactive affordance to the user prompting the user to play audio content of the first media content object.
22. The system of any one of claims 1-21, wherein an identity of the reward molecule is selected from a group consisting of dopamine, serotonin, testosterone, oxytocin, cannabinoids, and opioids.
23. A method for selecting and providing media content, the method performed by a system comprising one or more processors, the method comprising: receiving first information indicating an association between a first reward molecule of a user and a first media content object; generating and storing first data indicating an association between the first media content object and the first reward molecule associated for the user with the first media content object; receiving second information indicating a current level of the reward molecule for the user; selecting, based at least in part on the current level of the reward molecule and the first stored data, the first media content object; and providing the first media content object to the user.
24. A non-transitory computer-readable storage medium storing instructions for selecting and providing media content, the instructions configured to be executed by one or more processors of a system to cause the system to: receive first information indicating an association between a first reward molecule of a user and a first media content object; generate and store first data indicating an association between the first media content object and the first reward molecule associated for the user with the first media content object; receive second information indicating a current level of the reward molecule for the user; select, based at least in part on the current level of the reward molecule and the first stored data, the first media content object; and provide the first media content object to the user.
PCT/US2024/033243 2023-06-12 2024-06-10 Systems and methods for selecting and providing media content to improve neurotransmitter levels Pending WO2024258782A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363507647P 2023-06-12 2023-06-12
US63/507,647 2023-06-12

Publications (1)

Publication Number Publication Date
WO2024258782A1 (en) 2024-12-19

Family

ID=91853297

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/033243 Pending WO2024258782A1 (en) 2023-06-12 2024-06-10 Systems and methods for selecting and providing media content to improve neurotransmitter levels

Country Status (1)

Country Link
WO (1) WO2024258782A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180027347A1 (en) * 2011-06-10 2018-01-25 X-System Limited Method and system for analysing sound
US20190387998A1 (en) * 2014-04-22 2019-12-26 Interaxon Inc System and method for associating music with brain-state data
US20210149941A1 (en) * 2017-09-12 2021-05-20 AebeZe Labs System and Method for Autonomously Generating a Mood-Filtered Slideshow
US20220031212A1 (en) * 2020-07-31 2022-02-03 Brain Games Corporation Systems and methods for evaluating and improving neurotransmitter levels based on mobile device application data


Similar Documents

Publication Publication Date Title
US20200012959A1 (en) Systems and techniques for identifying and exploiting relationships between media consumption and health
US8612363B2 (en) Avatar individualized by physical characteristic
US20210098110A1 (en) Digital Health Wellbeing
US10248195B2 (en) Short imagery task (SIT) research method
US20210113149A1 (en) Cognitive state alteration system integrating multiple feedback technologies
US9173567B2 (en) Triggering user queries based on sensor inputs
Picard et al. Relative subjective count and assessment of interruptive technologies applied to mobile monitoring of stress
US20180122509A1 (en) Multilevel Intelligent Interactive Mobile Health System for Behavioral Physiology Self-Regulation in Real-Time
US20100004977A1 (en) Method and System For Measuring User Experience For Interactive Activities
US20120289791A1 (en) Calculating and Monitoring the Efficacy of Stress-Related Therapies
US20140095189A1 (en) Systems and methods for response calibration
JP7712275B2 (en) Systems and methods for assisting individuals in behavior change programs - Patents.com
US11783723B1 (en) Method and system for music and dance recommendations
WO2020232296A1 (en) Retreat platforms and methods
CA2884305A1 (en) Systems and methods for response calibration
Lete et al. Survey on virtual coaching for older adults
US20200001134A1 (en) Workout recommendation engine
JPWO2018116703A1 (en) Display control apparatus, display control method, and computer program
KR20170004479A (en) Method for providing on-line Quit smoking clinic service and System there-of
KR20150132135A (en) Hearing test provision system, and hearing test provision method
WO2020230589A1 (en) Information processing device, information processing method, and information processing program
WO2024258782A1 (en) Systems and methods for selecting and providing media content to improve neurotransmitter levels
JP6990672B2 (en) Support system for care recipients and support methods for care recipients
Boyle et al. Selecting a smartwatch for trials involving older adults with neurodegenerative diseases: A researcher’s framework to avoid hidden pitfalls
WO2020223600A1 (en) Automatic delivery of personalized messages

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24739855

Country of ref document: EP

Kind code of ref document: A1