
WO2023183595A1 - Systems and methods of providing a real-time and interactive engagement data classification for a multimedia content - Google Patents

Systems and methods of providing a real-time and interactive engagement data classification for a multimedia content

Info

Publication number
WO2023183595A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
user
personal computing
viewer
engagement data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2023/016271
Other languages
French (fr)
Inventor
Joshua WELTON
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vurple Inc
Original Assignee
Vurple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vurple Inc filed Critical Vurple Inc
Publication of WO2023183595A1 publication Critical patent/WO2023183595A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/435Filtering based on additional data, e.g. user or group profiles
    • G06F16/437Administration of user profiles, e.g. generation, initialisation, adaptation, distribution
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data
    • G06Q30/0203Market surveys; Market polls
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/475End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4756End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6582Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number

Definitions

  • V-Portal User Dashboard
  • a user dashboard (sometimes referred to as a “V-Portal”) can be displayed on a mobile device, a personal computing device, a web browser, or a similar electronic system.
  • the user dashboard can display summarizations and related analytical data from a mobile application. Chat highlights, trending words, and trending phrases can be collected into a database. Multiple live streams can be analyzed and combined together to get one data set of information from the live streams. Information from the DB can be processed for predictive measures, summarization with parsing capabilities for future solo mode selecting.
  • the user dashboard can display “heat mapping” (e.g., of trending phrases and topics) and real-time sentiment analysis (e.g., from ML/AI), and can be used, for example, to compare current audience emotions or sentiments with related prior speeches or performances.
  • smart analytics across all user features (e.g., chat posts, polls, trending words, trending phrases, trending symbols, etc.) can be collected into a database.
  • Summarizations and “smart analyses” of all analytical data can be conducted using a “solo mode” (e.g., parsing features or filtering data based on a single criteria) for all databases and live broadcasts.
  • the “solo mode” feature can allow specific single data features to be analyzed at the exclusion of other data sets in order to extract certain information based on the entirety of the chat. In effect, other data sets are muted to single out specifics based on the individual data set or demographic (e.g., filtering out data outside of a certain age group, racial group, ethnic group, professional group, geographic region, etc.).
  • an application of the present disclosure can scrape data from social media (e.g., from linked social media accounts) with filter parameters using sourcing AI/ML and analyze all data.
  • Embodiment 1 provides a computer-implemented method, including: transmitting a live stream of an event using a computer application operating on a personal computing device; receiving viewer engagement data from viewers during the live stream; conducting a sentiment analysis of the viewer engagement data in real time; and relaying the sentiment analysis to a user.
  • Embodiment 2 provides the computer-implemented method of embodiment 1, further including: receiving individual demographic information from each viewer of the computer application.
  • Embodiment 3 provides the computer-implemented method of any one of embodiments 1-2, wherein the receiving individual demographic information is completed prior to the live stream.
  • Embodiment 4 provides the computer-implemented method of any one of embodiments 1-3, wherein the viewer engagement data includes one or more selected from the group consisting of: chat data, comment reactions, and polling data.
  • Embodiment 5 provides the computer-implemented method of any one of embodiments 1-4, wherein the sentiment analysis includes determining one or more selected from the group consisting of: moods, sentiments, and perceptions.
  • Embodiment 6 provides the computer-implemented method of any one of embodiments 1-5, wherein the transmitting includes a latency adjustment such that a viewer is viewing a delayed transmission.
  • Embodiment 7 provides the computer-implemented method of any one of embodiments 1-6, wherein the transmitting includes responding to voice commanded functions from the user.
  • Embodiment 8 provides the computer-implemented method of any one of embodiments 1-7, wherein the voice commanded functions provide a displayed result instantaneously from the perspective of the viewer.
  • Embodiment 9 provides the computer-implemented method of any one of embodiments 1-8, wherein the voice commanded functions include accessing a search engine from an internet service.
  • Embodiment 10 provides the computer-implemented method of any one of embodiments 1-9, wherein voice commanded functions from the user are completed instantaneously from the perspective of the viewer.
  • Embodiment 11 provides a computer-implemented method of providing a real-time and interactive engagement data classification for a multimedia content including: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from the users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.
  • Embodiment 12 provides the computer-implemented method of any one of embodiments 1-10, further including: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from the users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.
  • Embodiment 13 provides a system configured to implement the method of any one of embodiments 1-12, including: a personal computing device (e.g., a smart phone, a personal computer, etc.) of the user including a computer application (e.g., a mobile application, a web-based application, etc.); a plurality of servers in electronic communication with the personal computing device, the plurality of servers being configured and adapted to conduct the sentiment analysis in real time; and a plurality of personal computing devices (e.g., smart phones, personal computers, etc.) of the viewers; wherein the personal computing device of the user is configured and adapted to transmit an audio and video stream to the plurality of personal computing devices of the viewers; wherein the personal computing device of the user is configured and adapted to receive viewer engagement data from the plurality of personal computing devices of the viewers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Development Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Accounting & Taxation (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Finance (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • General Engineering & Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Game Theory and Decision Science (AREA)
  • Computing Systems (AREA)
  • Educational Administration (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A computer-implemented method is provided herein. The method includes: transmitting a live stream of an event using a computer application; receiving viewer engagement data from viewers during the live stream; conducting a sentiment analysis of the viewer engagement data in real time; and relaying the sentiment analysis to a user.

Description

SYSTEMS AND METHODS OF PROVIDING A REAL-TIME AND INTERACTIVE ENGAGEMENT DATA CLASSIFICATION FOR A MULTIMEDIA CONTENT
CROSS-REFERENCE TO RELATED APPLICATION
The present application claims priority to U.S. Provisional Patent Application No. 63/323,764, filed March 25, 2022, which is incorporated herein by reference in its entirety.
FIELD OF THE INVENTION
The present invention relates to systems and methods for identifying and analyzing engagement data (reactions such as, for example, comments, moods, sentiments, and perceptions) from users during live streaming of a multimedia content, linking the engagement data to demographic information of the users, and generating reports illustrating classification of the engagement data with respect to users’ demographic information.
BACKGROUND
Live streaming is an important way for politicians to reach their constituents. For a representative democracy to function effectively, a representative or politician should be able to understand the policy issues most important to their constituency. Presently available platforms fail to provide effective constituent feedback to representatives or politicians.
SUMMARY
One aspect of the invention provides a computer-implemented method. The method includes: transmitting a live stream of an event using a computer application operating on a personal computing device; receiving viewer engagement data from viewers during the live stream; conducting a sentiment analysis of the viewer engagement data in real time; and relaying the sentiment analysis to a user.
Another aspect of the invention provides a computer-implemented method of providing a real-time and interactive engagement data classification for a multimedia content. The method includes: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.
Another aspect of the invention provides a system configured to implement the methods described herein. The system includes a personal computing device of the user including a mobile application. The system includes a plurality of servers in electronic communication with the personal computing device, the plurality of servers being configured and adapted to conduct the sentiment analysis in real time. The system includes a plurality of personal computing devices of the viewers. The personal computing device of the user is configured and adapted to transmit an audio and video stream to the plurality of personal computing devices of the viewers. The personal computing device of the user is configured and adapted to receive viewer engagement data from the plurality of personal computing devices of the viewers.
DEFINITIONS
The instant invention is most clearly understood with reference to the following definitions.
As used herein, the singular form "a," "an," and "the" include plural references unless the context clearly dictates otherwise.
Unless specifically stated or obvious from context, as used herein, the term "about" is understood as within a range of normal tolerance in the art, for example within 2 standard deviations of the mean. "About" can be understood as within 10%, 9%, 8%, 7%, 6%, 5%, 4%, 3%, 2%, 1%, 0.5%, 0.1%, 0.05%, or 0.01% of the stated value. Unless otherwise clear from context, all numerical values provided herein are modified by the term about.
As used in the specification and claims, the terms "comprises," "comprising," "containing," "having," and the like can have the meaning ascribed to them in U.S. patent law and can mean "includes," "including," and the like.
Unless specifically stated or obvious from context, the term "or," as used herein, is understood to be inclusive.
Ranges provided herein are understood to be shorthand for all of the values within the range. For example, a range of 1 to 50 is understood to include any number, combination of numbers, or sub-range from the group consisting of 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, or 50 (as well as fractions thereof unless the context clearly dictates otherwise).
BRIEF DESCRIPTION OF THE DRAWINGS
For a fuller understanding of the nature and desired objects of the present invention, reference is made to the following detailed description taken in conjunction with the accompanying drawing figures wherein like reference characters denote corresponding parts throughout the several views.
The following detailed description of specific embodiments of the invention will be better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there are shown in the drawings specific embodiments. It should be understood, however, that the invention is not limited to the precise arrangements and instrumentalities of the embodiments shown in the drawings.
FIG. 1 is a schematic summarizing systems and steps involved in identifying and analyzing engagement data from users during live streaming of a multimedia content, linking the engagement data to demographic information of the users, and generating a report depicting classification of the engagement analytics with respect to users’ demographic information according to embodiments of the present invention.
FIG. 2 illustrates a schematic outlining usefulness of the present invention to citizens, ambassadors and/or government officials according to embodiments of the present invention.
FIGS. 3A-3J illustrate images of screens showing details related to creating an account on the App (called “Vurple”), which is an application for live streaming of multimedia content and comprises an engine (called the “Vurplytics” engine) for identifying, analyzing and classifying the users’ engagement data based on their demographic information according to embodiments of the present invention.
FIGS. 4A-4C illustrate images of screens showing a user’s profile as it appears on the App.
FIGS. 5A-5C illustrate images of screens showing details related to password recovery processes for an exemplary account on the App.
FIGS. 6A-6E illustrate images of screens showing that a user can use the App for live streaming of a multimedia content even without providing their demographic information.
FIGS. 7A-7B illustrate images of screens showing users’ reactions (engagement data in the form of comments, questions, likes, upvotes, hearts, etc.) to the multimedia content being streamed.
FIGS. 8A-8D illustrate images of screens showing that a user can anonymously (in “Vhost” mode) post their reactions during live streaming of multimedia content on the App.
FIGS. 9-48 illustrate images of screens showing additional processes and features related to an application (i.e., an “app”) employing certain embodiments of the present disclosure.
FIG. 49 illustrates an exemplary flow diagram of data flow, in accordance with exemplary embodiments of the present disclosure.
FIG. 50 illustrates an exemplary flow diagram implementing a latency adjustment, in accordance with exemplary embodiments of the present disclosure.
DETAILED DESCRIPTION
The present disclosure provides a computer-implemented method of providing a real-time and interactive engagement data classification for a multimedia content, and systems implementing the same. In one aspect, the method includes: transmitting a live stream of an event using a computer application operating on a personal computing device; receiving viewer engagement data from viewers during the live stream; conducting a sentiment analysis of the viewer engagement data in real time; and relaying the sentiment analysis to a user. In another aspect, the method includes: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.
An application-based software program (an “App”) can be used to live stream an event, and comments made by users (e.g., viewers) during the live stream can be assessed in real time to identify the users’ reactions (such as, for example, moods, sentiments, and perceptions) with respect to the live stream. There can be a calculus/tabulation of the individual demographic information that a user inputs via a profile within the App, and that information (along with other information) can be used to display reports of crowd reactions to the live stream. In certain embodiments, the reports of crowd reaction to the live stream can be further used for developing predictive analytical systems and methods such as, for example, systems and methods that use artificial intelligence (AI) for predicting an outcome of an election.
In certain embodiments, a system for implementing certain embodiments of the present disclosure is provided. The system can include an analytical digital engine. An example of such an analytical digital engine is illustrated in FIG. 49. The engine can be configured to collect certain analytical data extrapolated from user generated info (UGI) during a mobile phone application sign up process as well as database (DB) information, coded by artificial intelligence (AI) and/or machine learning (ML). The cross-referenced UGI and DB smart data is processed to provide a real time, one-button selection of smart info to the user during a live stream. Smart data can include data collected from UGI or DB that is processed with mathematical equations coded to reflect a medium of information rather than just raw data. Smart data can include calculative measurements on data sets or data that is already processed by the analytical engine (e.g., the “Vurplytics engine”). The live stream can consist of a video and/or audio stream. The DB information can be a combination code (AI/ML) of chat summarization, smart count data and UGI.
In certain embodiments, a sentiment analysis can be conducted in real time. The real-time sentiment analysis can be an ML technique that automatically recognizes and extracts sentiment in a “chat” field (e.g., a live text chat), whenever it occurs. The real-time sentiment analysis can be used to analyze mentions, positive/negative comments, and word summarizations. This process can use several ML tasks such as natural language processing (NLP), text analysis, and/or semantic clustering to identify opinions/statements about the live chat and extract intelligence for an analytical process.
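Because the specification later names AWS Comprehend as one example NLP service, the following is a minimal sketch of how incoming chat messages might be scored for sentiment in real time. The function name, stream of messages, and use of boto3/Comprehend are assumptions made for illustration; they are one possible implementation rather than the patent's required design.

```python
import boto3

# Hypothetical helper: score a batch of live-chat messages for sentiment.
# Assumes AWS credentials are configured and that AWS Comprehend is the NLP
# service, as suggested (but not required) by the specification.
comprehend = boto3.client("comprehend", region_name="us-east-1")

def score_chat_sentiment(messages):
    """Return (message, sentiment label, confidence scores) for each chat message."""
    results = []
    for text in messages:
        # Comprehend accepts up to 5000 bytes of text per request.
        resp = comprehend.detect_sentiment(Text=text[:5000], LanguageCode="en")
        results.append((text, resp["Sentiment"], resp["SentimentScore"]))
    return results

if __name__ == "__main__":
    sample = ["Love this proposal!", "This mandate makes no sense.", "When is the next town hall?"]
    for text, sentiment, scores in score_chat_sentiment(sample):
        print(f"{sentiment:>9}  {text}")
```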
Certain embodiments can include a smart process for trending analytics including phrases and words which can be processed through ML giving a “smart” count instead of an average score. A smart count can be characterized using a plurality of different parameters. For example, a time-weighted score can be given, where a particular phrase, word, or symbol is weighted differently during a particular section of a speech. Trending phrases can also be processed using ML and similarity scores cross-referenced with relevancy to the topic. Both words and phrases can share matrix scores. Trending phrases can be targeted to accurately track phrases or words used in the chat. ML and smart filtering can be used to remove repetitive jargon.
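As one illustration of the time-weighted “smart” count described above, the short sketch below weights each occurrence of a phrase by the section of the speech in which it appears. The section boundaries and weights are hypothetical values chosen for the example, not values given in the specification.

```python
from collections import defaultdict

# Hypothetical section weights: emphasize mentions made later in the speech.
SECTION_WEIGHTS = [(0, 600, 1.0), (600, 1500, 1.5), (1500, float("inf"), 2.0)]  # (start_s, end_s, weight)

def smart_count(chat_events):
    """chat_events: iterable of (timestamp_seconds, phrase). Returns phrase -> weighted count."""
    counts = defaultdict(float)
    for ts, phrase in chat_events:
        weight = next(w for lo, hi, w in SECTION_WEIGHTS if lo <= ts < hi)
        counts[phrase.lower()] += weight
    return dict(sorted(counts.items(), key=lambda kv: kv[1], reverse=True))

events = [(120, "mask mandate"), (1700, "mask mandate"), (1800, "school funding")]
print(smart_count(events))  # later mentions count more toward the trending score
```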
The general chat can be analyzed in a plurality of ways. For example, certain rules can be implemented, such as rules related to detection of certain words, phrases, or sentiments. Other rules can be implemented to detect and analyze the frequency of a word, phrase, or other symbol (e.g., an emoji). The general chat can be analyzed to understand the cosine similarity of words, phrases, and symbols (e.g., using a similarity matrix). An NLP chat for any live broadcast can include smart computation similarities. Rules can include removal of repetitive words and ML to understand relevancy of words assigned to the topic. In implementing such rules, accurate summarization and real-time sentiment heat mapping can be provided.
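To make the cosine-similarity idea concrete, here is a minimal sketch that builds a similarity matrix over chat phrases using TF-IDF vectors. scikit-learn and the similarity threshold are assumed purely for illustration; the specification does not prescribe a particular library or scoring scheme.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

phrases = ["repeal the mask mandate", "mask mandate update", "school funding vote", "fund our schools"]

# Vectorize phrases and compute a pairwise cosine-similarity matrix.
vectors = TfidfVectorizer().fit_transform(phrases)
similarity_matrix = cosine_similarity(vectors)

# Group phrases whose similarity exceeds a (hypothetical) threshold, e.g. to merge near-duplicates.
THRESHOLD = 0.4
for i, phrase in enumerate(phrases):
    similar = [phrases[j] for j in range(len(phrases)) if j != i and similarity_matrix[i, j] > THRESHOLD]
    print(phrase, "->", similar)
```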
During or after a live streaming event, a summarization can be provided to a user portal. The summarization can include data from emails and chat from the live stream. An incoming data stream of the entire chat can be: a few paragraphs (a block of characters set by certain parameters) that can be pared down to a few sentences with video time stamping for comment recall. The summarization can include video time stamping, which can include word or audio recognition (e.g., of the broadcasting user). A voice command can be used to find comments and pull video “clips” related to the topic.
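The sketch below illustrates one way comments could be paired with video timestamps so that a summarized remark can be recalled as a clip. The data layout, keyword matching, and clip window are assumptions made for the example rather than details specified in the patent.

```python
from dataclasses import dataclass

@dataclass
class ChatComment:
    stream_offset_s: float  # seconds since the live stream started
    author: str
    text: str

# Hypothetical clip window: 10 seconds before the comment, 20 seconds after.
CLIP_BEFORE_S, CLIP_AFTER_S = 10.0, 20.0

def clip_for_comment(comment: ChatComment):
    """Return (start, end) offsets of a video clip surrounding a recalled comment."""
    start = max(0.0, comment.stream_offset_s - CLIP_BEFORE_S)
    return start, comment.stream_offset_s + CLIP_AFTER_S

def find_comments(comments, keyword):
    """Simple keyword recall over time-stamped comments (e.g., triggered by a voice command)."""
    return [c for c in comments if keyword.lower() in c.text.lower()]

comments = [ChatComment(754.2, "viewer42", "Will the mask mandate be discussed?")]
for c in find_comments(comments, "mask mandate"):
    print(c.author, c.text, "clip:", clip_for_comment(c))
```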
Referring now to the drawings, FIG. 1 illustrates a system 100 in accordance with an exemplary implementation of the present invention. System 100 is illustrated including a device 102 implementing an application 104. FIG. 1 illustrates “Event Stream Usage” 106 (e.g., using AWS Kinesis). Event streams can be used to capture user engagement activities such as users submitting questions, interacting with questions or comments (e.g., “reacting”, “liking”, “loving”, “upvoting”, “hearting”, etc.), commenting, etc. A single interaction or a plurality of interactions can be taken from any person in the application. In one exemplary embodiment, the event stream is a stream of a politician or government official giving a speech on a specific topic. In one example, AWS Kinesis can be used to take different events from different users on different platforms and feed them to the analytics engine 108 (e.g., AWS Kinesis Analytics Engine). In such an example, AWS Kinesis Analytics can separate out the information that is text based (e.g., comments) and pass it on to natural-language processing (NLP) service 110 (e.g., AWS Comprehend) where it is analyzed for “sentiment.” At least part of that dataset can be passed back to analytics engine 108 (e.g., AWS Kinesis Analytics).
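Because the paragraph above gives AWS Kinesis as one example transport for engagement events, the following sketch shows how a client-side event (a comment, upvote, or question) might be put onto a Kinesis stream. The stream name and event schema are hypothetical and used only to illustrate the ingestion step.

```python
import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
STREAM_NAME = "engagement-events"  # hypothetical stream name

def publish_engagement_event(user_id, event_type, payload):
    """Send one engagement event (e.g., 'comment', 'upvote', 'question') to the event stream."""
    record = {
        "user_id": user_id,
        "event_type": event_type,
        "payload": payload,
        "ts": time.time(),
    }
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=user_id,  # keeps each user's events ordered within a shard
    )

publish_engagement_event("user-123", "comment", {"text": "Great point on infrastructure!"})
```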
FIG. 1 also illustrates “Event Analytics” engine 108 (e.g., using AWS Kinesis Analytics). Event analytics engine 108 can be used to compute information in real time based on incoming information. This analysis can be dumped back into the stream to make it available for downstream systems. Event analytics engine 108 (e.g., using AWS Kinesis Analytics Engine) can take raw information about direct user actions and calculate derived information from the raw information. Event analytics engine 108 combines such information with incoming information (e.g., from AWS Comprehend).
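As an informal illustration of the kind of derived information the analytics engine might compute from raw user actions, the sketch below rolls individual events up into per-minute engagement counts combined with the sentiment labels returned by the NLP step. The aggregation scheme is an assumption made for the example.

```python
from collections import Counter, defaultdict

def derive_per_minute_metrics(events):
    """events: iterable of dicts like {"ts": seconds, "event_type": str, "sentiment": str or None}.
    Returns {minute_index: {"counts": Counter of event types, "sentiment": Counter of labels}}."""
    buckets = defaultdict(lambda: {"counts": Counter(), "sentiment": Counter()})
    for e in events:
        minute = int(e["ts"] // 60)
        buckets[minute]["counts"][e["event_type"]] += 1
        if e.get("sentiment"):
            buckets[minute]["sentiment"][e["sentiment"]] += 1
    return dict(buckets)

events = [
    {"ts": 61, "event_type": "comment", "sentiment": "POSITIVE"},
    {"ts": 65, "event_type": "upvote", "sentiment": None},
    {"ts": 130, "event_type": "comment", "sentiment": "NEGATIVE"},
]
print(derive_per_minute_metrics(events))
```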
FIG. 1 also illustrates machine learning 114 (e.g., using AWS SageMaker) can be used to analyze all of the data in the Columnar Datastore to look for additional intelligence or metrics that are useful to the user (e.g., politicians). Some of the information can be reformatted and displayed back into the App (e.g., mobile App), as well as a separate dashboard. Machine learning models can be used to take “sentiments” and turn them into “perceptions.” The perceptions can be used to identify engagement opportunities with citizens that are open to the right kind of conversation. These machine learning models will be regression-based auto-tuning models. This data and/or analysis can be sent to “Columnar Datastore” 112. Columnar Datastore 112 (e.g., using AWS Redshift) can be used by the system 100 to store all the information that comes in from the runtime system. This includes demographic data that is provided by the user or sourced from external systems. New demographic data can be fed directly into columnar datastore 112 (e.g., into AWS Redshift) such that the new demographic data can be used as part of the data models and present the information back to a user (e.g., an elected official) as part of an analytical summarization (illustrated on the rightmost side of FIG. 1). The demographic data, engagement data, sentiment analysis, and perception analysis are combined into a real time interaction data presentation widget that provides the most relevant data to the user (e.g., an elected official) with a simple request (e.g., a single click).
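The specification does not define the regression-based models beyond the description above, so the following is only a hedged sketch of how sentiment and engagement features might be regressed into a “perception” score with scikit-learn. The feature set, target values, and model choice are all assumptions; “auto-tuning” is approximated here by a small hyperparameter grid search.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Hypothetical features per viewer segment: [positive_ratio, negative_ratio, engagement_rate, age_bucket]
X = np.array([
    [0.70, 0.10, 0.45, 2],
    [0.30, 0.50, 0.20, 4],
    [0.55, 0.25, 0.60, 1],
    [0.20, 0.65, 0.15, 3],
])
# Hypothetical target: a "perception" score (e.g., openness to engagement), scaled 0..1.
y = np.array([0.8, 0.3, 0.7, 0.2])

# Approximate "auto-tuning" with a grid search over the regularization strength.
model = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=2)
model.fit(X, y)

new_segment = np.array([[0.6, 0.2, 0.5, 2]])
print("predicted perception score:", model.predict(new_segment)[0])
```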
Referring now to FIG. 49, a system 4900 is illustrated implementing certain methods of the present disclosure. System 4900 includes a personal computing device 4902 (e.g., a smart phone) implementing an application 4904. Application 4904 is illustrated being displayed on a graphical user interface (GUI) 4906. Application 4904 is illustrated displaying a plurality of trending words 4908 and a plurality of trending phrases 4910. Personal computing device 4902 is illustrated in electronic communication with a server 4912 implementing Amazon Elastic Compute Cloud (EC2). Server 4912 is configured to receive information from personal computing device 4902 and provide a data set 4914 including User Generated Content (UGC). Once a stream is generated and sent to a server/system (e.g., AWS Elemental) for additional processing, the data set can be considered UGC by a transcoding server. User interaction data 4916 (e.g., user chat, user comments, user questions, etc.) is illustrated being extracted from data set 4914. System module 4918 is illustrated implementing an analytical process on the data. System module 4918 can implement AI/ML, smart processing NLP analysis, and clustering intelligence code (e.g., clustering Vurple Intelligence code, etc.). The analyzed data can be used to update the display on GUI 4906 of personal computing device 4902 (e.g., updating trending words, updating trending phrases, etc.).
Voice Command
A voice command can be used in connection with an application implementing certain methods of the present disclosure. For example, a phrase like “VIYAH” can be used to activate (e.g., wake up) an application such that the application is prepared and able to receive a voice command. This voice command can be used to request or command automated feature sets of the data or analytics (e.g., processed data accessible through a user dashboard). A voice command can be used in connection with an intelligent assistant (which can have voice activation optionality) and can give suggestions, information, and learned data points to aid in the success of user profiles. During a live stream, the voice command can be used for users (e.g., approved or verified live streaming profiles) to use systems and methods of the present disclosure. For example, a live broadcaster can use a voice command like “HEY VIYAH” followed by a series of commands and requests which can trigger macro code features within the application. For example, a live broadcaster can say: “Thanks everyone for asking that important question, I plan on touching upon that on Friday during my next LIVE (pause/beat) - HEY VIYAH, can you please send a reminder to my Vurple message box, to open Friday’s LIVE with addressing the new mask mandate? THANK YOU VIYAH!” In another example, a live broadcaster can say: “Thanks everyone for asking that important question, I plan on touching upon that on Friday during my next LIVE (pause/beat) - HEY VIYAH, can you please schedule the next Vurple LIVE for Friday October 1st at 10:00am Central time? - Thanks VIYAH.” In the previous examples, the “HEY VIYAH” voice commands can trigger internal macro code features which can work in the background to give real time data to the broadcaster (e.g., on screen), place data or information into folders, and populate the user portal or dashboard with the requested data.
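The wake-phrase behavior above can be pictured with a small dispatcher that scans a transcript of the broadcaster's speech for “HEY VIYAH” and routes the trailing request to a macro handler. The handler names, transcript source, and keyword-based intent matching are hypothetical simplifications, since the specification does not define how commands are parsed.

```python
import re

WAKE_PHRASE = "HEY VIYAH"

def schedule_live(request):      # hypothetical macro: create a scheduled broadcast
    print("scheduling live stream:", request)

def send_reminder(request):      # hypothetical macro: drop a note in the broadcaster's message box
    print("saving reminder:", request)

# Very small intent table (keyword -> handler); an assumption made for this sketch.
INTENT_HANDLERS = [("schedule", schedule_live), ("reminder", send_reminder)]

def dispatch_voice_command(transcript):
    """Find the wake phrase in a speech transcript and route the trailing request to a macro."""
    match = re.search(re.escape(WAKE_PHRASE) + r"[,]?\s*(.+)", transcript, flags=re.IGNORECASE)
    if not match:
        return  # no command in this transcript segment
    request = match.group(1)
    for keyword, handler in INTENT_HANDLERS:
        if keyword in request.lower():
            handler(request)
            return
    print("unrecognized command:", request)

dispatch_voice_command("... my next LIVE - HEY VIYAH, can you please send a reminder to my message box?")
```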
In certain embodiments, a voice command function can be implemented by a user (e.g., a broadcaster of a live stream) to access an internet connection during a live stream. The voice commanded function can include accessing a search engine from an internet service and/or synthesizing results to provide and display a singular answer. The voice command can implement a macro command and/or feature set to search the world wide web. For example, a user can access the internet to collect and/or display information to viewers of the live stream. Such information can be used in connection with additional analytical information (e.g., Vurplytics). For example, a broadcaster can say: “HEY VIYAH - What time does the state of the union air next week?” In another example, the broadcaster can say: “HEY VIYAH - Who was the 23rd President of the USA?” In another example, the broadcaster can say: “HEY VIYAH - what's 70 x 7?” In each of these examples, the question can be answered by information collected from the internet and the requested information can be displayed to the viewers of the live stream (e.g., automatically).
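The sketch below illustrates one way such a question could be routed from the wake phrase to a web lookup and then to the viewers' screens. The search_web() and display_to_viewers() functions are hypothetical stand-ins: no particular search-engine API or overlay mechanism is specified by this disclosure, so a real implementation would substitute its own integrations.

```python
def search_web(query: str) -> str:
    # Hypothetical placeholder: a real implementation would query a search/answer
    # API and synthesize the top results into a single displayable answer.
    return f"[answer synthesized from web results for: {query!r}]"

def display_to_viewers(text: str) -> None:
    # Hypothetical placeholder for pushing an on-screen overlay to every viewer.
    print(f"ON-SCREEN OVERLAY: {text}")

def handle_question(utterance: str) -> None:
    """Strip the wake phrase, look the question up, and show the answer to viewers."""
    question = utterance.lower().split("hey viyah", 1)[-1].strip(" -,")
    display_to_viewers(search_web(question))

handle_question("HEY VIYAH - what's 70 x 7?")
```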
Optimization of Live Stream Communications
Audio recognition capabilities can be implemented during a live broadcast. The audio of the live stream can be recorded, and the actual audio that the listener (e.g., viewing users) hears is the alternate track being played with a slight purposeful latency integrated. This integrated latency can allow the original audio to be analyzed continuously in the audio mixer (i.e., using a superpowered engine software instance mixer, such as a 4-channel virtual mixer). Thus, indicated key words (e.g., voice commands) can have macro features which push code to conduct certain analytical functions. Thus, live analyzation of audio can be conducted during a live stream event. A continuum of live audio analyzation can aid in predictive measures for live speech articulation (e.g., repetitive rhetoric can be sent via live notification to a live streaming host or user). After live streaming, the analyzation can combine summarization, sentiment, and audio of the speech to gauge audience response (e.g., including a visualization graph showing the characteristics of recognized audio) to help correct future engagements. In certain embodiments, a signal (e.g., auditory, visual, etc.) can be provided to the live streaming host or user. For example, jingles of audio can play through an audio channel to the host signifying certain events or various cues (e.g., repetitive words by the host, lack of engagement by viewers, etc.). Certain viewers can be filtered out during live streaming such that the sentiment or response of a certain demographic can be determined. For example, viewer data (e.g., responses) can be filtered based on a specific single data feature, such as viewers aged 18-21.
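A minimal sketch of the repetitive-rhetoric cue is shown below, assuming the live audio has already been transcribed into text chunks. The window size, repetition threshold, and minimum word length are arbitrary assumptions made for the example rather than values specified by this disclosure.

```python
from collections import Counter, deque

WINDOW_WORDS = 200       # how much recent speech to consider
REPEAT_THRESHOLD = 6     # uses of one word within the window that count as "repetitive"

window: deque[str] = deque(maxlen=WINDOW_WORDS)

def on_transcript_chunk(text: str) -> list[str]:
    """Add newly transcribed words to the rolling window and return over-used words."""
    window.extend(text.lower().split())
    counts = Counter(window)
    return [w for w, n in counts.items() if n >= REPEAT_THRESHOLD and len(w) > 3]

# During the stream, repeated calls accumulate; a non-empty result could trigger a
# host-side cue (e.g., an audio jingle on the host's monitoring channel).
overused = on_transcript_chunk("honestly the mandate, honestly, honestly the mandate")
if overused:
    print("cue host: repetitive words ->", overused)
```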
Certain methods of the present disclosure can best be described in connection with FIG. 50. A live audio broadcast automation command can be used in connection with the voice command (e.g., “Hey VIYAH”). The live audio broadcast automation command can be used to allow for simultaneous audio to be processed for specific word recognition. Audio input for a mobile device 5000 can be used to record the audio of the broadcasting user. The audio of a video and audio stream 5002 can be processed into a 4-channel virtual mixer 5004, splitting the input (e.g., device microphone input) audio into “Channel 1” and “Channel 2.” Channel 1 can pass through as an output, stitched to a video, and transfer (e.g., in ALAC 24-bit 48 kHz) to an interactive video service (e.g., Amazon IVS Player) and/or a live video encoding application 5006 (e.g., AWS Elemental). Channel 2 audio can be used as a digital instance (e.g., a copy of the audio from Channel 1). Channel 2 audio can be analyzed continuously by a player built into virtual mixer 5004 and can look for a multi-signal reference audio fingerprint with waveforms matching the voice command (e.g., “HEY VIYAH”). A database of fingerprinted algorithms (e.g., general sine curves) can be stored via a low latency server 5010 (e.g., where data pull has only milliseconds of latency).
The output of Channel 1 audio can have a 3000-millisecond purposeful delay on the output side. Video can be matched for audio synchronization. Such synchronization can be done so that analyzation (e.g., using code within the application) and processing of the voice command (e.g., the words “HEY VIYAH”) have ample time to respond, pull an audio fingerprint from the server (e.g., using sine matching), and respond with a set of given commands. Such synchronization and processing can ensure more of a real-time experience for those watching and those broadcasting. Time spent waiting for the voice command can be cut from the live stream in real time (e.g., from the perspective of the viewer). From the perspective of the viewer, voice commanded functions by the broadcaster can be completed in real time and displayed nearly instantaneously on the viewer's device 5008. An additional two audio channels can be used for additional live audio analyzation enhancements such as repetitive rhetoric recognition, intensity of words, and volume increase during a live broadcast. Such analyzation can ultimately be used as analytics in the user dashboard (e.g., by understanding and comparing speech between live broadcasts).
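The sketch below illustrates the two-channel idea at a very high level, assuming raw audio arrives as NumPy sample buffers: the Channel 1 copy is held back for the purposeful output delay while the Channel 2 copy is compared against a stored reference waveform using a crude cross-correlation score. The sample rate, delay length, threshold, and matching method are assumptions for illustration; a production mixer (such as the 4-channel virtual mixer described above) would perform far more robust fingerprinting.

```python
from collections import deque
import numpy as np

SAMPLE_RATE = 48_000
DELAY_SAMPLES = 3 * SAMPLE_RATE             # ~3000 ms purposeful output delay

output_buffer: deque[np.ndarray] = deque()  # Channel 1 audio awaiting release

def matches_wake_phrase(chunk: np.ndarray, reference: np.ndarray, threshold: float = 0.8) -> bool:
    """Crude similarity: peak cross-correlation against the stored fingerprint."""
    if len(chunk) < len(reference):
        return False
    corr = np.correlate(chunk, reference, mode="valid")
    norm = np.linalg.norm(chunk) * np.linalg.norm(reference) + 1e-9
    return float(corr.max()) / norm > threshold

def process_audio(chunk: np.ndarray, reference: np.ndarray) -> np.ndarray | None:
    """Buffer Channel 1 for the delay; scan the Channel 2 copy immediately."""
    output_buffer.append(chunk)                 # Channel 1: held back for output
    if matches_wake_phrase(chunk, reference):   # Channel 2: analyzed in real time
        print("wake phrase detected - trigger macro and trim the wait from the output")
    buffered = sum(len(c) for c in output_buffer)
    return output_buffer.popleft() if buffered > DELAY_SAMPLES else None
```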
User Dashboard (“V-Portal”)
A user dashboard (sometimes referred to as a “V-Portal”) can be displayed on a mobile device, a personal computing device, a web browser, or a similar electronic system. The user dashboard can display summarizations and related analytical data from a mobile application. Chat highlights, trending words, and trending phrases can be collected into a database. Multiple live streams can be analyzed and combined together to get one data set of information from the live streams. Information from the database can be processed for predictive measures and summarization, with parsing capabilities for future “solo mode” selection. During a live broadcast, “heat mapping” (e.g., of trending phrases and topics) and real-time sentiment analysis (e.g., from ML/AI) can be pushed to the V-Portal and cross-referenced with prior live streams (e.g., to compare current audience emotions or sentiments with related prior speeches or performances). Smart analytics across all user features (e.g., chat posts, polls, trending words, trending phrases, trending symbols, etc.) can be collected into a database and processed through summarization and/or a predictive engine. Summarizations and “smart analyses” of all analytical data can be conducted using a “solo mode” (e.g., parsing features or filtering data based on a single criterion) for all databases and live broadcasts. The “solo mode” feature can allow specific single data features to be analyzed at the exclusion of other data sets in order to extract certain information based on the entirety of the chat. In effect, other data sets are muted to single out specifics based on the individual data set or demographic (e.g., filtering out data outside of a certain age group, racial group, ethnic group, professional group, geographic region, etc.). In certain embodiments, an application of the present disclosure can scrape data from social media (e.g., from linked social media accounts) with filter parameters using sourcing AI/ML and analyze all data.
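A minimal sketch of the “solo mode” idea follows: every record outside the selected demographic is muted before the analytics run. The record fields, the lexicon-based toy sentiment score, and the 18-21 age filter are illustrative assumptions only; a deployed system would apply its own ML/AI sentiment model to the full engagement database.

```python
from statistics import mean

engagement = [
    {"age": 19, "region": "US-TX", "text": "love this stream"},
    {"age": 20, "region": "US-IL", "text": "the mandate is awful"},
    {"age": 45, "region": "US-NY", "text": "great points tonight"},
]

POSITIVE = {"love", "great", "good"}
NEGATIVE = {"awful", "bad", "boring"}

def toy_sentiment(text: str) -> int:
    # Stand-in for a real sentiment model: +1 per positive word, -1 per negative word.
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def solo_mode(records, feature, predicate) -> float:
    """Average sentiment over only the records matching a single data feature."""
    selected = [r for r in records if predicate(r[feature])]
    return mean(toy_sentiment(r["text"]) for r in selected) if selected else 0.0

# e.g., isolate viewers aged 18-21 and mute everything else:
print(solo_mode(engagement, "age", lambda age: 18 <= age <= 21))  # -> 0.0 here
```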
ENUMERATED EMBODIMENTS
The following enumerated embodiments are provided, the numbering of which is not to be construed as designating levels of importance.
Embodiment 1 provides a computer-implemented method, including: transmitting a live stream of an event using a computer application operating on a personal computing device; receiving viewer engagement data from viewers during the live stream; conducting a sentiment analysis of the viewer engagement data in real time; and relaying the sentiment analysis to a user.
Embodiment 2 provides the computer-implemented method of embodiment 1, further including: receiving individual demographic information from each viewer of the computer application.
Embodiment 3 provides the computer-implemented method of any one of embodiments 1-2, wherein the receiving individual demographic information is completed prior to the live stream.
Embodiment 4 provides the computer-implemented method of any one of embodiments 1-3, wherein the viewer engagement data includes one or more selected from the group consisting of: chat data, comment reactions, and polling data.
Embodiment 5 provides the computer-implemented method of any one of embodiments 1-4, wherein the sentiment analysis includes determining one or more selected from the group consisting of: moods, sentiments, and perceptions.
Embodiment 6 provides the computer-implemented method of any one of embodiments 1-5, wherein the transmitting includes a latency adjustment such that a viewer is viewing a delayed transmission.
Embodiment 7 provides the computer-implemented method of any one of embodiments 1-6, wherein the transmitting includes responding to voice commanded functions from the user.
Embodiment 8 provides the computer-implemented method of any one of embodiments 1-7, wherein the voice commanded functions provide a displayed result instantaneously from the perspective of the viewer.

Embodiment 9 provides the computer-implemented method of any one of embodiments 1-8, wherein the voice commanded functions include accessing a search engine from an internet service.
Embodiment 10 provides the computer-implemented method of any one of embodiments 1-9, wherein voice commanded functions from the user are completed instantaneously from the perspective of the viewer.
Embodiment 11 provides a computer-implemented method of providing a real-time and interactive engagement data classification for a multimedia content including: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from the users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.
Embodiment 12 provides the computer-implemented method of any one of embodiments 1-10, further including: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from the users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.
Embodiment 13 provides a system configured to implement the method of any one of embodiments 1-12, including: a personal computing device (e.g., a smart phone, a personal computer, etc.) of the user including a computer application (e.g., a mobile application, a web-based application, etc.); a plurality of servers in electronic communication with the personal computing device, the plurality of servers being configured and adapted to conduct the sentiment analysis in real time; and a plurality of personal computing devices (e.g., smart phones, personal computers, etc.) of the viewers; wherein the personal computing device of the user is configured and adapted to transmit an audio and video stream to the plurality of personal computing devices of the viewers; wherein the personal computing device of the user is configured and adapted to receive viewer engagement data from the plurality of personal computing devices of the viewers.
EQUIVALENTS
Although the invention has been described in terms of exemplary embodiments, it is not limited thereto. Rather, the appended claims should be construed broadly to include other variants and embodiments of the invention which may be made by those skilled in the art without departing from the scope and range of equivalents of the invention. This disclosure is intended to cover any adaptations or variations of the embodiments discussed herein.
INCORPORATION BY REFERENCE
The entire contents of all patents, published patent applications, and other references cited herein are hereby expressly incorporated herein in their entireties by reference.

Claims

1. A computer-implemented method, comprising: transmitting a live stream of an event using a computer application operating on a personal computing device; receiving viewer engagement data from viewers during the live stream; conducting a sentiment analysis of the viewer engagement data in real time; and relaying the sentiment analysis to a user.
2. The method of claim 1, further comprising: receiving individual demographic information from each viewer of the computer application.
3. The method of claim 2, wherein the receiving individual demographic information is completed prior to the live stream.
4. The method of claim 1, wherein the viewer engagement data includes one or more selected from the group consisting of: chat data, comment reactions, and polling data.
5. The method of claim 1, wherein the sentiment analysis includes determining one or more selected from the group consisting of: moods, sentiments, and perceptions.
6. The method of claim 1, wherein the transmitting includes a latency adjustment such that a viewer is viewing a delayed transmission.
7. The method of claim 1, wherein the transmitting includes responding to voice commanded functions from the user.
8. The method of claim 7, wherein the voice commanded functions provide a displayed result instantaneously from the perspective of the viewer.
9. The method of claim 7, wherein the voice commanded functions include accessing a search engine from an internet service.
10. The method of claim 1, wherein voice commanded functions from the user are completed instantaneously from the perspective of the viewer.
11. A computer-implemented method of providing a real-time and interactive engagement data classification for a multimedia content comprising: identifying engagement data from users during live streaming of a multimedia content; analyzing the engagement data from the users during live streaming of the multimedia content; linking the engagement data to demographic information of the users; and generating a report depicting classification of the engagement analytics with respect to the demographic information.
12. A system configured to implement the method of claim 1, comprising: a personal computing device of the user including a mobile application; a plurality of servers in electronic communication with the personal computing device, the plurality of servers being configured and adapted to conduct the sentiment analysis in real time; and a plurality of personal computing devices of the viewers; wherein the personal computing device of the user is configured and adapted to transmit an audio and video stream to the plurality of personal computing devices of the viewers; wherein the personal computing device of the user is configured and adapted to receive viewer engagement data from the plurality of personal computing devices of the viewers.
PCT/US2023/016271 2022-03-25 2023-03-24 Systems and methods of providing a real-time and interactive engagement data classification for a multimedia content Ceased WO2023183595A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263323764P 2022-03-25 2022-03-25
US63/323,764 2022-03-25

Publications (1)

Publication Number Publication Date
WO2023183595A1 (en) 2023-09-28

Family

ID=88101959

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/016271 Ceased WO2023183595A1 (en) 2022-03-25 2023-03-24 Systems and methods of providing a real-time and interactive engagement data classification for a multimedia content

Country Status (1)

Country Link
WO (1) WO2023183595A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117880566A (en) * 2024-03-12 2024-04-12 广州久零区块链技术有限公司 Digital live broadcast interaction method and system based on artificial intelligence

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050237378A1 (en) * 2004-04-27 2005-10-27 Rodman Jeffrey C Method and apparatus for inserting variable audio delay to minimize latency in video conferencing
US20140289226A1 (en) * 2009-09-04 2014-09-25 Tanya English System and Method For Search and Display of Content in the Form of Audio, Video or Audio-Video
US20150350730A1 (en) * 2010-06-07 2015-12-03 Affectiva, Inc. Video recommendation using affect
US20160180361A1 (en) * 2013-05-07 2016-06-23 Nasdaq, Inc. Webcast systems and methods with audience sentiment feedback and analysis


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 23775737
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 23775737
Country of ref document: EP
Kind code of ref document: A1