WO2022125964A1 - Methods, systems, apparatuses, and devices for facilitating sharing of virtual experience between users - Google Patents
- Publication number
- WO2022125964A1 (PCT/US2021/062916)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual
- audience
- data
- performer
- devices
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
- H04L12/1822—Conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
- H04L12/1827—Network arrangements for conference optimisation or adaptation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/52—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
Definitions
- the present disclosure relates to the field of data processing. More specifically, the present disclosure relates to methods, systems, apparatuses, and devices for facilitating sharing of virtual experience between users.
- the field of data processing is technologically important to several industries, business organizations, and/or individuals.
- fan-less stadiums and arenas may impact the players or music performers and the viewing experience as concerts and sports, including hockey, football, basketball, soccer, baseball, and more, make their way back.
- the fans are the crowds who have provided the emotional context to major sporting events and concerts. Now, fans are being told to stay away from our stadiums, arenas, and ballparks. Further, professional athletes and musicians are used to playing and performing in front of fans. At home, the cheers provide adrenaline. The anticipation of a game-altering moment felt seat to seat in the stands carries over onto the field.
- Sports fandom is comparable to a learned behavior, like writing. Further, the live event industry is trying. Be it virtual concerts, drive-in concerts, or attempts at "socially distanced" concerts, some promoters are willing to try anything in these problematic economic and pandemic times. There are predictions that regular touring and concerts will not return in 2022. As the pandemic has stretched on, and it has become clear that concerts full of tightly packed fans will not be returning in a significant way until 2021, there is new pressure on live streams and new questions about them. The experience for performers can be disorienting. The goal is crowd participation. Finding the sweet spot between what fans are willing to pay and what artists need to charge to make it profitable continues to be tricky.
- Performers, including bands and athletes in these shared events, call out for a solution to enable their economic recovery, while fans are itching to connect and interact live. Historically, however, performers and fans coupled their participation with their physical attendance.
- current technologies do not replicate the in-person audio-visual experience in which people can see other people, and be seen by them, at varying visual perspectives and audible levels associated with their locations within the event venue. Further, current technologies do not enable members to select and purchase virtual merchandise (e.g., clothes, accessories, tattoos, etc.) for their corresponding human images in the virtual group experience. Further, current technologies do not enable members to select and purchase a ticket by providing them a view of the virtual group experience corresponding to a particular seating place. Moreover, current technologies do not facilitate interaction between the people attending the event venue and people who have been following the event venue using social media platforms.
- the method may include a step of receiving, using a communication device, one or more audience data associated with two or more audience members from two or more audience devices associated with the two or more audience members. Further, the two or more audience members watch and participate in one or more virtual events. Further, the method may include a step of receiving, using the communication device, one or more performer data associated with one or more performers from one or more performer devices associated with the one or more performers. Further, the one or more performers perform and participate in the one or more virtual events. Further, the method may include a step of analyzing, using a processing device, the one or more performer data and the one or more audience data.
- the method may include a step of extracting, using the processing device, one or more human forms corresponding to one or more of the plurality of audience members and the one or more performers based on the analyzing. Further, the method may include a step of generating, using the processing device, one or more human images of one or more of the two or more audience members and the one or more performers based on the one or more human forms. Further, the method may include a step of receiving, using the communication device, one or more background data of one or more virtual events from the one or more performer devices. Further, the one or more background data may include one or more virtual backgrounds for the one or more virtual events.
- the method may include a step of combining, using the processing device, the one or more human images with the one or more virtual backgrounds based on the generating. Further, the method may include a step of creating, using the processing device, a virtual interactive space based on the combining. Further, the method may include a step of receiving, using the communication device, one or more interaction data of one or more interactions of one or more of the plurality of audience members and the one or more performers from one or more of the two or more audience devices and the one or more performer devices. Further, the method may include a step of generating, using the processing device, a modified virtual interactive space data based on each of the one or more interaction data and the virtual interactive space.
- the method may include a step of transmitting, using the communication device, the modified virtual interactive space data to the one or more of the two or more audience devices and the one or more performer devices. Further, the method may include a step of storing, using a storage device, one or more of the one or more audience data, the one or more performer data, and the one or more background data.
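- The following is a minimal, non-limiting Python sketch of the data flow summarized above (receiving audience and performer data, extracting human forms, combining human images with a virtual background into a virtual interactive space, and folding interaction data into a modified virtual interactive space). The class and function names are hypothetical and do not appear in the disclosure; the person-segmentation step is stubbed out.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class HumanImage:
    user_id: str
    pixels: bytes                      # human form with the real background removed

@dataclass
class VirtualInteractiveSpace:
    background_id: str
    human_images: Dict[str, HumanImage] = field(default_factory=dict)
    interactions: List[dict] = field(default_factory=list)

def extract_human_form(frame: bytes) -> bytes:
    """Placeholder for person segmentation/matting of a camera frame."""
    return frame

def build_space(audience_data: Dict[str, bytes],
                performer_data: Dict[str, bytes],
                background_id: str) -> VirtualInteractiveSpace:
    """Combine extracted human images with a virtual background into the virtual interactive space."""
    space = VirtualInteractiveSpace(background_id=background_id)
    for user_id, frame in {**audience_data, **performer_data}.items():
        space.human_images[user_id] = HumanImage(user_id, extract_human_form(frame))
    return space

def apply_interactions(space: VirtualInteractiveSpace,
                       interaction_data: List[dict]) -> VirtualInteractiveSpace:
    """Fold interaction data into the modified virtual interactive space data."""
    space.interactions.extend(interaction_data)
    return space
```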
- the system may include a communication device, a processing device, and a storage device. Further, the communication device may be configured for performing a step of receiving one or more audience data associated with two or more audience members from two or more audience devices associated with the two or more audience members. Further, the two or more audience members watch and participate in one or more virtual events. Further, the communication device may be configured for performing a step of receiving one or more performer data associated with one or more performers from one or more performer devices associated with the one or more performers. Further, the one or more performers perform and participate in the one or more virtual events.
- the communication device may be configured for performing a step of receiving one or more background data of one or more virtual events from the one or more performer devices.
- the one or more background data may include one or more virtual backgrounds for the one or more virtual events.
- the communication device may be configured for performing a step of receiving one or more interaction data of one or more interactions of one or more of the plurality of audience members and the one or more performers from one or more of the two or more audience devices and the one or more performer devices.
- the communication device may be configured for performing a step of transmitting a modified virtual interactive space data to the one or more of the two or more audience devices and the one or more performer devices.
- the processing device may be communicatively coupled with the communication device.
- the processing device may be configured for performing a step of analyzing the one or more performer data and the one or more audience data. Further, the processing device may be configured for performing a step of extracting one or more human forms corresponding to one or more of the plurality of audience members and the one or more performers based on the analyzing. Further, the processing device may be configured for performing a step of generating one or more human images of one or more of the two or more audience members and the one or more performers based on the one or more human forms. Further, the processing device may be configured for performing a step of combining the one or more human images with the one or more virtual backgrounds based on the generating. Further, the processing device may be configured for performing a step of creating a virtual interactive space based on the combining.
- the processing device may be configured for performing a step of generating the modified virtual interactive space data based on each of the one or more interaction data and the virtual interactive space.
- the storage device may be communicatively coupled with the processing device. Further, the storage device may be configured for performing a step of storing one or more of the one or more audience data, the one or more performer data, and the one or more background data.
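- As a further illustration, the hedged Python sketch below wires the three components described above (the storage device coupled to the processing device, and the processing device coupled to the communication device); the class names and method signatures are assumptions made for this sketch only.

```python
class StorageDevice:
    """Stores audience data, performer data, and background data."""
    def __init__(self):
        self.records = {}

    def store(self, key, value):
        self.records[key] = value

class ProcessingDevice:
    """Communicatively coupled with the storage device."""
    def __init__(self, storage: StorageDevice):
        self.storage = storage

    def create_space(self, human_images, background):
        # Combining the human images with the virtual background yields the space.
        return {"background": background, "human_images": human_images}

class CommunicationDevice:
    """Communicatively coupled with the processing device; receives and transmits data."""
    def __init__(self, processing: ProcessingDevice):
        self.processing = processing

    def handle_event(self, audience_data, performer_data, background):
        space = self.processing.create_space({**audience_data, **performer_data}, background)
        self.processing.storage.store("virtual_interactive_space", space)
        return space  # transmitted back to the audience and performer devices
```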
- drawings may contain text or captions that may explain certain embodiments of the present disclosure. This text is included for illustrative, non-limiting, explanatory purposes of certain embodiments detailed in the present disclosure.
- FIG. 1 is an illustration of an online platform consistent with various embodiments of the present disclosure.
- FIG. 2 is a block diagram of a computing device for implementing the methods disclosed herein, in accordance with some embodiments.
- FIG. 3 is a flowchart of a method for facilitating sharing of virtual experience between users, in accordance with some embodiments.
- FIG. 4 is a continuation flowchart of FIG. 3.
- FIG. 5 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include transmitting the one or more virtual event interest data to the two or more audience devices, in accordance with some embodiments.
- FIG. 6 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include transmitting the one or more tickets to the one or more audience devices, in accordance with some embodiments.
- FIG. 7 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include rendering the one or more human forms with the one or more selected virtual merchandises based on the processing, in accordance with some embodiments.
- FIG. 8 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include modifying the virtual interactive space based on the one or more actions, in accordance with some embodiments.
- FIG. 9 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include establishing one or more interaction sessions between one or more of the one or more audience members and the one or more performers and one or more of the one or more first audience members and the one or more first performers in real-time based on the identifying, in accordance with some embodiments.
- FIG. 10 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include transmitting the one or more virtual experiences corresponding to one or more of the plurality of audience members and the one or more performers to one or more of the plurality of audience member devices and the one or more performer devices, in accordance with some embodiments.
- FIG. 11 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include determining one or more attending parameters for attending the one or more virtual events at the one or more event venues by one or more of the two or more audience members and the one or more performers based on the analyzing of the one or more event venue data, in accordance with some embodiments.
- FIG. 12 is a block diagram of a system for facilitating sharing of virtual experience between users, in accordance with some embodiments.
- FIG. 13 is a flowchart of a method to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- FIG. 14 is a flowchart of a method to link social media accounts of the plurality of users to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- FIG. 15 is a flowchart of a method to create a virtual audience for live performers at a virtual event, in accordance with some embodiments.
- FIG. 16 is a flowchart of a method for providing a preview at an instance of purchasing tickets for the one or more virtual events, in accordance with some embodiments.
- FIG. 17 is a flowchart of a method for purchasing merchandise and rendering the one or more human forms accordingly, in accordance with some embodiments.
- FIG. 18 is an illustration of a screen associated with events navigation tab of a software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- FIG. 19 is an illustration of a screen associated with the events navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- FIG. 20 is an illustration of a screen associated with my tickets navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- FIG. 21 is an illustration of a screen associated with my tickets navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- FIG. 22 is an illustration of a screen associated with social navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- FIG. 23 is an illustration of a screen associated with shop navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- FIG. 24 is an illustration of a screen of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- FIG. 25 is an illustration of a screen of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- FIG. 26 is an illustration of a screen of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- any embodiment may incorporate only one or a plurality of the above-disclosed aspects of the disclosure and may further incorporate only one or a plurality of the above-disclosed features.
- any embodiment discussed and identified as being “preferred” is considered to be part of a best mode contemplated for carrying out the embodiments of the present disclosure.
- Other embodiments also may be discussed for additional illustrative purposes in providing a full and enabling disclosure.
- many embodiments, such as adaptations, variations, modifications, and equivalent arrangements, will be implicitly disclosed by the embodiments described herein and fall within the scope of the present disclosure.
- any sequence(s) and/or temporal order of steps of various processes or methods that are described herein are illustrative and not restrictive. Accordingly, it should be understood that, although steps of various processes or methods may be shown and described as being in a sequence or temporal order, the steps of any such processes or methods are not limited to being carried out in any particular sequence or order, absent an indication otherwise. Indeed, the steps in such processes or methods generally may be carried out in various different sequences and orders while still falling within the scope of the present disclosure. Accordingly, it is intended that the scope of patent protection is to be defined by the issued claim(s) rather than the description set forth herein.
- the present disclosure includes many aspects and features. Moreover, while many aspects and features relate to, and are described in the context of methods, systems, apparatuses, and devices for facilitating sharing of virtual experience between users, embodiments of the present disclosure are not limited to use only in this context.
- the method disclosed herein may be performed by one or more computing devices.
- the method may be performed by a server computer in communication with one or more client devices over a communication network such as, for example, the Internet.
- the method may be performed by one or more of at least one server computer, at least one client device, at least one network device, at least one sensor, and at least one actuator.
- Examples of the one or more client devices and/or the server computer may include a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a portable electronic device, a wearable computer, a smartphone, an Internet of Things (IoT) device, a smart electrical appliance, a video game console, a rack server, a super-computer, a mainframe computer, a minicomputer, a micro-computer, a storage server, an application server (e.g., a mail server, a web server, a real-time communication server, an FTP server, a virtual server, a proxy server, a DNS server, etc.), a quantum computer, and so on.
- one or more client devices and/or the server computer may be configured for executing a software application such as, for example, but not limited to, an operating system (e.g., Windows, Mac OS, Unix, Linux, Android, etc.) in order to provide a user interface (e.g., GUI, touchscreen based interface, voice based interface, gesture based interface, etc.) for use by the one or more users and/or a network interface for communicating with other devices over a communication network.
- the server computer may include a processing device configured for performing data processing tasks such as, for example, but not limited to, analyzing, identifying, determining, generating, transforming, calculating, computing, compressing, decompressing, encrypting, decrypting, scrambling, splitting, merging, interpolating, extrapolating, redacting, anonymizing, encoding and decoding.
- the server computer may include a communication device configured for communicating with one or more external devices.
- the one or more external devices may include, for example, but are not limited to, a client device, a third-party database, a public database, a private database, and so on.
- the communication device may be configured for communicating with the one or more external devices over one or more communication channels.
- the one or more communication channels may include a wireless communication channel and/or a wired communication channel.
- the communication device may be configured for performing one or more of transmitting and receiving of information in electronic form.
- the server computer may include a storage device configured for performing data storage and/or data retrieval operations.
- the storage device may be configured for providing reliable storage of digital information. Accordingly, in some embodiments, the storage device may be based on technologies such as, but not limited to, data compression, data backup, data redundancy, deduplication, error correction, data finger-printing, role based access control, and so on.
- one or more steps of the method disclosed herein may be initiated, maintained, controlled, and/or terminated based on a control input received from one or more devices operated by one or more users such as, for example, but not limited to, an end user, an admin, a service provider, a service consumer, an agent, a broker and a representative thereof.
- the user as defined herein may refer to a human, an animal, or an artificially intelligent being in any state of existence, unless stated otherwise, elsewhere in the present disclosure.
- the one or more users may be required to successfully perform authentication in order for the control input to be effective.
- a user of the one or more users may perform authentication based on the possession of secret human-readable data (e.g., username, password, passphrase, PIN, secret question, secret answer, etc.) and/or possession of machine-readable secret data (e.g., encryption key, decryption key, bar codes, etc.) and/or possession of one or more embodied characteristics unique to the user (e.g., biometric variables such as, but not limited to, fingerprint, palm-print, voice characteristics, behavioral characteristics, facial features, iris pattern, heart rate variability, evoked potentials, brain waves, and so on) and/or possession of a unique device (e.g., a device with a unique physical and/or chemical and/or biological characteristic, a hardware device with a unique serial number, a network device with a unique IP/MAC address, a telephone with a unique phone number, a smartcard with an authentication token stored thereupon, etc.).
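- By way of a hedged example only, the following Python sketch combines a knowledge factor (a secret), a possession factor (a registered device), and an inherence factor (a biometric score) before a control input is accepted; the factor names, threshold, and key-derivation parameters are illustrative assumptions.

```python
import hashlib
import hmac

def verify_secret(password: str, salt: bytes, stored_hash: bytes) -> bool:
    """Knowledge factor: compare a derived key against the stored hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def verify_device(device_mac: str, registered_macs: set) -> bool:
    """Possession factor: the control input must come from a registered device."""
    return device_mac in registered_macs

def control_input_allowed(secret_ok: bool, device_ok: bool,
                          biometric_score: float, threshold: float = 0.9) -> bool:
    """Inherence factor plus the other two factors gate the control input."""
    return secret_ok and device_ok and biometric_score >= threshold
```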
- the one or more steps of the method may include communicating (e.g., transmitting and/or receiving) with one or more sensor devices and/or one or more actuators in order to perform authentication.
- the one or more steps may include receiving, using the communication device, the secret human readable data from an input device such as, for example, a keyboard, a keypad, a touch-screen, a microphone, a camera, and so on.
- the one or more steps may include receiving, using the communication device, the one or more embodied characteristics from one or more biometric sensors.
- one or more steps of the method may be automatically initiated, maintained, and/or terminated based on one or more predefined conditions.
- the one or more predefined conditions may be based on one or more contextual variables.
- the one or more contextual variables may represent a condition relevant to the performance of the one or more steps of the method.
- the one or more contextual variables may include, for example, but are not limited to, location, time, identity of a user associated with a device (e.g., the server computer, a client device, etc.) corresponding to the performance of the one or more steps, environmental variables (e.g., temperature, humidity, pressure, wind speed, lighting, sound, etc.) associated with a device corresponding to the performance of the one or more steps, physical state and/or physiological state and/or psychological state of the user, physical state (e.g., motion, direction of motion, orientation, speed, velocity, acceleration, trajectory, etc.) of the device corresponding to the performance of the one or more steps and/or semantic content of data associated with the one or more users.
- the one or more steps may include communicating with one or more sensors and/or one or more actuators associated with the one or more contextual variables.
- the one or more sensors may include, but are not limited to, a timing device (e.g., a real-time clock), a location sensor (e.g., a GPS receiver, a GLONASS receiver, an indoor location sensor, etc.), a biometric sensor (e.g., a fingerprint sensor), an environmental variable sensor (e.g., temperature sensor, humidity sensor, pressure sensor, etc.) and a device state sensor (e.g., a power sensor, a voltage/current sensor, a switch-state sensor, a usage sensor, etc. associated with the device corresponding to performance of the one or more steps).
- the one or more steps of the method may be performed one or more times. Additionally, the one or more steps may be performed in any order other than as exemplarily disclosed herein, unless explicitly stated otherwise, elsewhere in the present disclosure. Further, two or more steps of the one or more steps may, in some embodiments, be simultaneously performed, at least in part. Further, in some embodiments, there may be one or more time gaps between performance of any two steps of the one or more steps.
- the one or more predefined conditions may be specified by the one or more users. Accordingly, the one or more steps may include receiving, using the communication device, the one or more predefined conditions from one or more devices operated by the one or more users. Further, the one or more predefined conditions may be stored in the storage device. Alternatively, and/or additionally, in some embodiments, the one or more predefined conditions may be automatically determined, using the processing device, based on historical data corresponding to performance of the one or more steps. For example, the historical data may be collected, using the storage device, from a plurality of instances of performance of the method.
- Such historical data may include performance actions (e.g., initiating, maintaining, interrupting, terminating, etc.) of the one or more steps and/or the one or more contextual variables associated therewith.
- machine learning may be performed on the historical data in order to determine the one or more predefined conditions. For instance, machine learning on the historical data may determine a correlation between one or more contextual variables and performance of the one or more steps of the method. Accordingly, the one or more predefined conditions may be generated, using the processing device, based on the correlation.
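- A minimal sketch of this idea in Python, assuming a single contextual variable (hour of day) and a record of whether a step was performed, is given below; the correlation threshold and the generated condition are illustrative assumptions, not part of the disclosure.

```python
def pearson(xs, ys):
    """Correlation between a contextual variable and performance of a step."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Historical records: (hour_of_day, 1 if the step was performed else 0).
history = [(19, 1), (20, 1), (21, 1), (9, 0), (11, 0), (22, 1)]
hours = [h for h, _ in history]
performed = [p for _, p in history]

if pearson(hours, performed) > 0.7:
    # A correlation was found, so generate a predefined condition on the variable.
    earliest = min(h for h, p in history if p)

    def predefined_condition(hour: int) -> bool:
        return hour >= earliest
```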
- one or more steps of the method may be performed at one or more spatial locations.
- the method may be performed by a plurality of devices interconnected through a communication network.
- one or more steps of the method may be performed by a server computer.
- one or more steps of the method may be performed by a client computer.
- one or more steps of the method may be performed by an intermediate entity such as, for example, a proxy server.
- one or more steps of the method may be performed in a distributed fashion across the plurality of devices in order to meet one or more objectives.
- one objective may be to provide load balancing between two or more devices.
- Another objective may be to restrict a location of one or more of an input data, an output data and any intermediate data therebetween corresponding to one or more steps of the method. For example, in a client-server environment, sensitive data corresponding to a user may not be allowed to be transmitted to the server computer. Accordingly, one or more steps of the method operating on the sensitive data and/or a derivative thereof may be performed at the client device.
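- A small illustrative sketch of this placement rule, under the assumption that each step is tagged with whether it touches sensitive data, is shown below; the function and device labels are hypothetical.

```python
def place_step(step_name: str, uses_sensitive_data: bool) -> str:
    """Sensitive inputs (and their derivatives) are processed on the client device."""
    return "client_device" if uses_sensitive_data else "server_computer"

# Example: human-form extraction from a user's camera feed stays on the client,
# while combining images with the shared virtual background may run on the server.
assert place_step("extract_human_form", uses_sensitive_data=True) == "client_device"
assert place_step("combine_with_background", uses_sensitive_data=False) == "server_computer"
```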
- the present disclosure describes methods, systems, apparatuses, and devices for facilitating sharing of virtual experience between users.
- the disclosed system may facilitate a virtual experience between a plurality of users.
- the disclosed system may aim to isolate a visual representation of a human form, transmit the visual representation digitally, and then project the specific representation into a digital virtual event background with a large number of other similarly transmitted visual representations of other human forms.
- the visual representations of the human form may be such that the real physical or geographical background associated with locations of the plurality of users may be uncoupled.
- the disclosed system may couple a virtual background to facilitate the reality-based visual and auditory connection between the plurality of users.
- the disclosed system may equip the plurality of users with an ability to expand or zoom into a visual appearance of large numbers of people within an event venue, thereby eliminating limitations of current telecommunication modalities. Further, a user may observe a single and/or multiple, simultaneously projected images of human forms of other persons and may directly communicate with those persons by text, direct auditory messages, or similar means, allowing for real-time human-to-human interaction within the event venue, irrespective of whether the event is live or pre-recorded.
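- One way to picture the uncoupling of a person from their real background and their projection into a shared venue is the following Python sketch, which assumes NumPy is available and stubs the segmentation mask (in practice produced by a person-segmentation model); the array shapes and seat position are illustrative.

```python
import numpy as np

def composite(venue: np.ndarray, person: np.ndarray, mask: np.ndarray,
              top: int, left: int) -> np.ndarray:
    """Paste the masked human form onto the virtual background at a seat position."""
    out = venue.copy()
    h, w = person.shape[:2]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = np.where(mask[..., None] > 0, person, region)
    return out

# Example: a 720p virtual venue, a 100x60 camera crop, and a binary person mask.
venue = np.zeros((720, 1280, 3), dtype=np.uint8)
person = np.full((100, 60, 3), 200, dtype=np.uint8)
mask = np.ones((100, 60), dtype=np.uint8)    # 1 where the human form is present
frame = composite(venue, person, mask, top=400, left=640)
```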
- the plurality of users may be categorized as performers and audience members. Further, the performers, in an instance, may be a user that may perform in the event venue.
- the audience members in an instance, may be a user that may be interested in attending a performance of the performers in the event venue.
- the methods and systems disclosed herein may be embodied in the form of a software application (executable on the online platform, and/or one or more other devices such as, but not limited to, one or more user devices). Further, the one or more user devices may be categorized as performer devices and audience devices.
- the Fan may be a human being (in contrast to an avatar), even if their appearance is somewhat altered (how much alteration is too much?), who dedicates an amount of time to the enjoyment of a live (or recorded) Performance, whether by listening or viewing. Further, the experience of interacting with another human in this way may be considered "real" because it is reality (merely bridging distance gaps), as opposed to virtual reality (a game, an invention of the imagination).
- the psychological implications are significant. Further, widespread technology in cameras, speakers, and microphones can replicate an experience even when people are at great physical distances. Further, the transportation of physical objects over the Internet, in 3D-print form, is already happening.
- genuine interaction may be the experience that humans obtain from interacting with other humans in close physical proximity; such proximity may be sufficient but is not necessary, and there may be other ways that are superior (e.g., by avoiding the impact of travel, including the virology impact of travel).
- a hologram may be an alternative to a screen, which is a projection of light on a flat surface. Most of what we see is on screens, which are flat, whereas holograms project light in three dimensions (e.g., the difference between a picture of a wax figure and a hologram of a wax figure in 3D). Further, a hologram may represent a Fan; holograms of Performers already exist, but those Performers cannot see the Fans, which makes the experience less real, since Fans and Performers need to see or hear each other.
- Live refers to a Performance at which Fans enjoy Performance(s) contemporaneously with Performers and/or other Fans.
- the Performance may be any concert, sports event, or other event in any place in the world, involving one or more Performers (but not more than fifty Performers) and Fans.
- the Performer may be an individual who is doing something, the listening to or viewing of which is for the enjoyment of Fans.
- Present refers to a Fan who is attending a Performance regardless of whether geographically close to or far from the Venue.
- the “Venue” may be any indoor or outdoor location where a performance occurs that can accommodate in- person or virtual fans.
- the event venue may correspond to, for example, music shows and/or concerts, sporting events such as, but not limited to, baseball, basketball, football, golf, hockey, racing, soccer, etc., large gathering events such as but not limited to, circus, tutorial courses, exercise sessions, festivals, museum visits, night clubs, protests, shopping, theater, theme parks, tours, etc., and so on.
- the disclosed system may facilitate interaction between the plurality of users based on a plurality of social media platforms such as Facebook, Twitter, Instagram, etc.
- the software application may facilitate the linking of social media accounts associated with the plurality of users, such that the plurality of users may choose to interact amongst each other on a basis of mutual interests in the event venues.
- the plurality of users may select one or more other users based on the mutual interests to recommend event venues, past preferences for the event venues of the one or more other users, future event tickets of the one or more other users, etc.
- the disclosed system may aim to facilitate the sharing of virtual experience between the plurality of users using two modes, namely a land gate and a cloud gate.
- the land gate may facilitate the plurality of users to access the event venue by attending the event venue in person.
- the software application may enable the sale of tickets for the physical attendance of the plurality of users.
- the software application may support retail sales with a first-class e-commerce experience accessible using the software application at the event venue, and a user may pick up merchandise of choice at a designated kiosk with a QR code, such that the software-application-based retail sales, with QR-based kiosk pickup, may reduce contact with the other one or more users.
- the software application may connect the plurality of users to providers of local and long-distance travel, to help facilitate attendance. Further, the software application may use QR-based tickets rather than paper tickets, which may reduce the risk of infection spreading between the plurality of users.
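- As a hedged illustration of such QR-based tickets and kiosk pickup, the sketch below builds a signed, machine-readable ticket payload that could be rendered as a QR code by any QR library; the fields, secret key, and token format are assumptions for the example.

```python
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-server-side-secret"   # assumption for the sketch

def make_ticket(user_id: str, event_id: str, gate: str) -> str:
    """Return a signed token; encoding it as a QR image is left to a QR library."""
    payload = json.dumps({"user": user_id, "event": event_id, "gate": gate},
                         sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + signature

def verify_ticket(token: str) -> bool:
    """Check the signature at the kiosk or venue gate before releasing merchandise."""
    encoded, signature = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded.encode())
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

ticket = make_ticket("fan-42", "concert-2021-12", "land_gate")
assert verify_ticket(ticket)
```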
- the disclosed system may enable the simultaneous computational engagement of large numbers of persons to digitally transmit a visual image of their human form and to permit each person the ability to positionally view, and communicate with, other persons in the event venue. Further, the disclosed system may be configured for holding events that people cannot attend in person, whether because of restrictions due to a pandemic or reasons where a person is unable, or unwilling, to travel to an event venue whether the event is a concert, theatre production, sporting event, political assemblies, life cycle events, or other large gatherings of persons.
- the cloud gate may facilitate the plurality of users to access the event venue by attending the event venue virtually.
- the software application may enable the sale of tickets for remote attendance of the plurality of users, thereby eliminating the risk of spreading infection between the plurality of users.
- the software application may facilitate virtual attendance by the plurality of users, such that the plurality of users may have an immersive visual and audio experience similar to attending the event in person, with the convenience of being present geographically anywhere. Further, remote attendance drastically reduces cost by reducing the need for event staff, utility costs, and facility use.
- the software application may enable a user to see the other one or more users attending the event, irrespective of the mode.
- the software application may facilitate interaction with the plurality of users inclusive of both modes. Further, the software application may facilitate purchasing the merchandise online that may reduce transaction time, and encourage purchasing, thereby increasing overall retail sales.
- the disclosed system may be used for music (concerts, shows); Sports (baseball, basketball, football, golf, hockey, racing, soccer); and gatherings (circus, classes, exercise, festivals, museums, night clubs, protests, shopping, theater, theme parks, tours).
- the disclosed system may be configured for isolating the human forms and inserting/projecting them into a digital event. Further, the disclosed system may be configured for expanding past the capabilities/limitations of other teleconferencing by allowing users to see and friend large numbers of people (in the thousands), not in little boxes. Further, the disclosed system may be configured for interacting with anyone in the crowd, identifying a specific person, and communicating with the person(s), as determined by the number of pixels, via audio (if that person permits), social media, etc. Further, the disclosed system may be configured for finding ways to interact through new social relationships (human behavior is changing worldwide irrevocably). Further, the disclosed system may be configured for interacting with performers, which requires high-end cameras at the venue (others are focused on the interaction between performers and fans but miss the social aspects among fans).
- the disclosed system may be configured for enabling stronger connections among more humans. Further, the disclosed system may be configured for fostering the human need for togetherness. Further, the disclosed system may be configured for reducing the carbon footprint associated with travel. Further, the disclosed system may be configured for reducing the incidence of infection.
- the disclosed system may put everyone together; using cloud computing, currently available virtual reality technology, and virtual 3D spaces, it is possible to put a large number of people in one visual field. Further, the disclosed system may replicate the in-person audiovisual experience. At an in-person event, fans can see thousands of other fans, but at varying visual sizes (perspective) and audible levels.
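- The varying visual sizes and audible levels mentioned above can be approximated by scaling each projected human image and its audio gain with virtual distance from the viewer, as in the hedged sketch below; the reference distance and fall-off laws are illustrative assumptions.

```python
def apparent_scale(distance_m: float, reference_m: float = 10.0) -> float:
    """On-screen size falls off roughly as 1/distance beyond the reference distance."""
    return min(1.0, reference_m / max(distance_m, reference_m))

def audio_gain(distance_m: float, reference_m: float = 10.0) -> float:
    """Audible level falls off roughly as 1/distance^2 (inverse-square law)."""
    return min(1.0, (reference_m / max(distance_m, reference_m)) ** 2)

# A fan seated 40 m away in the virtual venue appears at 25% size and ~6% gain.
print(apparent_scale(40.0), audio_gain(40.0))
```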
- FANtech, an exemplary embodiment of the disclosed system herein, enables the fan to see the event as a "real" event, with a full venue in view. Further, FANtech enables the fan to meet other fans, "like" other fans, chat with other fans (both text and voice), and connect on social media. Other industry leaders have limited text chat functionality (a chat room style, circa the 1990s) within a "meeting". FANtech recreates the in-person venue experience for performers by filling a screen with fans as they would appear in a physical venue. Most conventional live-streaming in 2020 is "one-way," where large event performances only broadcast the performers to the fans, not the fans to the performers. FANtech enables performers to sell physical and virtual goods with a seamless eCommerce app.
- FANtech encourages fans to connect their social media presence to FANtech, so fans can share their experiences on social media, find social friends on FANtech, and find FANtech likes on social media.
- Most conventional online event platforms do not connect with social media.
- FANtech includes a recommendation engine that considers past preferences, friends’ past preferences, and friends’ future event tickets, to suggest new events.
- Most conventional online event platforms do not have a recommendation engine.
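- A minimal sketch of such a recommendation engine, assuming simple event records with a genre field and hypothetical weights for the user's history, friends' histories, and friends' future tickets, might look like the following.

```python
from collections import Counter

def recommend(upcoming, user_history, friend_histories, friend_tickets, top_n=3):
    """Score upcoming events by past preferences, friends' preferences, and friends' tickets."""
    liked = Counter(e["genre"] for e in user_history)
    friends_like = Counter(e["genre"] for h in friend_histories for e in h)
    ticketed = {t["event_id"] for t in friend_tickets}

    def score(event):
        return (2.0 * liked[event["genre"]]               # the user's own past preferences
                + 1.0 * friends_like[event["genre"]]      # friends' past preferences
                + 3.0 * (event["event_id"] in ticketed))  # friends already holding tickets

    return sorted(upcoming, key=score, reverse=True)[:top_n]
```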
- FANtech integrates in-person tickets and in-person venues to bridge physical and remote attendance.
- Most conventional live-streaming in 2020 is a self-contained platform, which does not consider alternative media of presentation.
- FANtech enables the sale of tickets for remote attendance.
- fans can have an authentic visual and audio experience, comparable to attending an event in person.
- the fans who attend through the Cloud Gate can see when friends and likes attend, whether through the Land Gate or the Cloud Gate.
- the Cloud Gate fans can "like" both Land Gate Fans and Cloud Gate Fans, expanding their circles.
- Cloud Gate Fans can purchase physical and virtual merchandise from the FANtech app. Sale losses resulting from limited inventories can be stemmed by central warehousing and just-in-time production. Further, the ease of purchasing through the Cloud Gate reduces transaction time and encourages purchasing, which will increase overall retail sales. Travel to an event through the Cloud Gate is only limited by Internet bandwidth, which is freely available worldwide at a minimal cost.
- the disclosed system may be configured for facilitating traveling to an event through the Cloud Gate, whereby the carbon footprint of event attendance is significantly reduced. Further, remote attendance associated with the disclosed system eliminates the risk of infection from event attendance. Remote attendance drastically reduces costs by reducing needs for event staff, utility costs, and facility use.
- the disclosed system may insert participants from remote locations into a live location, to optimize social interaction. Further, the disclosed system may fill the void that people feel by not interacting with others as if they were present at the event. Further, the disclosed system may keep participants physically alone during the pandemic, yet together.
- the disclosed system may utilize holograms to separate from screens. Further, the disclosed system may transmit other senses over the Internet: touch, taste, and smell. Further, the touch may be associated with haptic feedback. Further, a tap on the shoulder over the Internet may be felt. Further, taste and smell may be associated with remote cooking demonstrations.
- the disclosed system may use smart devices and the Internet to transmit sight and sound across physical barriers. Further, the disclosed system may allow users to experience togetherness through screens, cameras, speakers, and microphones. Fans in different places across the globe may unite at any show and have that interaction using the disclosed system.
- FIG. 1 is an illustration of an online platform 100 consistent with various embodiments of the present disclosure.
- the online platform 100 for facilitating sharing of virtual experience between users may be hosted on a centralized server 102, such as, for example, a cloud computing service.
- the centralized server 102 may communicate with other network entities, such as, for example, a mobile device 106 (such as a smartphone, a laptop, a tablet computer, etc.), other electronic devices 110 (such as desktop computers, server computers, etc.), databases 114, and sensors 116 over a communication network 104, such as, but not limited to, the Internet.
- users of the online platform 100 may include relevant parties such as, but not limited to, end-users, administrators, service providers, service consumers, and so on. Accordingly, in some instances, electronic devices operated by the one or more relevant parties may be in communication with the platform.
- a user 112 may access online platform 100 through a web based software application or browser.
- the web based software application may be embodied as, for example, but not be limited to, a website, a web application, a desktop application, and a mobile application compatible with a computing device X00.
- a system consistent with an embodiment of the disclosure may include a computing device or cloud service, such as computing device 200.
- computing device 200 may include at least one processing unit 202 and a system memory 204.
- system memory 204 may comprise, but is not limited to, volatile (e.g., random-access memory (RAM)), non-volatile (e.g., read-only memory (ROM)), flash memory, or any combination.
- System memory 204 may include operating system 205, one or more programming modules 206, and may include a program data 207. Operating system 205, for example, may be suitable for controlling computing device 200's operation.
- programming modules 206 may include an image-processing module and a machine learning module.
- embodiments of the disclosure may be practiced in conjunction with a graphics library, other operating systems, or any other application program and is not limited to any particular application or system. This basic configuration is illustrated in FIG. 2 by those components within a dashed line 208.
- Computing device 200 may have additional features or functionality.
- computing device 200 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape.
- additional storage is illustrated in FIG. 2 by a removable storage 209 and a non-removable storage 210.
- Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data.
- System memory 204, removable storage 209, and non-removable storage 210 are all computer storage media examples (i.e., memory storage.)
- Computer storage media may include, but is not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store information and which can be accessed by computing device 200. Any such computer storage media may be part of device 200.
- Computing device 200 may also have input device(s) 212 such as a keyboard, a mouse, a pen, a sound input device, a touch input device, a location sensor, a camera, a biometric sensor, etc.
- Output device(s) 214 such as a display, speakers, a printer, etc. may also be included.
- the aforementioned devices are examples and others may be used.
- Computing device 200 may also contain a communication connection 216 that may allow device 200 to communicate with other computing devices 218, such as over a network in a distributed computing environment, for example, an intranet or the Internet.
- Communication connection 216 is one example of communication media.
- Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media.
- the term "modulated data signal" may describe a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal.
- communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media.
- computer readable media may include both storage media and communication media.
- program modules and data files may be stored in system memory 204, including operating system 205. Further, programming modules 206 (e.g., application 220, such as a media player) may perform processes including, for example, one or more stages of the methods, algorithms, systems, applications, servers, and databases as described above. Further, processing unit 202 may perform other processes.
- Other programming modules that may be used in accordance with embodiments of the present disclosure may include machine learning applications.
- program modules may include routines, programs, components, data structures, and other types of structures that may perform particular tasks or that may implement particular abstract data types.
- embodiments of the disclosure may be practiced with other computer system configurations, including hand-held devices, general purpose graphics processor-based systems, multiprocessor systems, microprocessor-based or programmable consumer electronics, application specific integrated circuit-based electronics, minicomputers, mainframe computers, and the like.
- Embodiments of the disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
- program modules may be located in both local and remote memory storage devices.
- embodiments of the disclosure may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors.
- Embodiments of the disclosure may also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to mechanical, optical, fluidic, and quantum technologies.
- embodiments of the disclosure may be practiced within a general-purpose computer or in any other circuits or systems.
- Embodiments of the disclosure may be implemented as a computer process (method), a computing system, or as an article of manufacture, such as a computer program product or computer readable media.
- the computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for executing a computer process.
- the computer program product may also be a propagated signal on a carrier readable by a computing system and encoding a computer program of instructions for executing a computer process.
- the present disclosure may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.).
- embodiments of the present disclosure may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system.
- a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- the computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. As more specific computer-readable medium examples (a non-exhaustive list), the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, and a portable compact disc read-only memory (CD-ROM).
- the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
- Embodiments of the present disclosure are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to embodiments of the disclosure.
- the functions/acts noted in the blocks may occur out of the order as shown in any flowchart.
- two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
- FIG. 3 is a flowchart of a method 300 for facilitating sharing of virtual experience between users, in accordance with some embodiments.
- the method 300 may include a step 302 of receiving, using a communication device, one or more audience data associated with two or more audience members from two or more audience devices associated with the two or more audience members. Further, the two or more audience members watch and participate in one or more virtual events. Further, the method 300 may include a step 304 of receiving, using the communication device, one or more performer data associated with one or more performers from one or more performer devices associated with the one or more performers. Further, the one or more performers perform and participate in the one or more virtual events.
- the method 300 may include a step 306 of analyzing, using a processing device, the one or more performer data and the one or more audience data.
- the method 300 may include a step 308 of extracting, using the processing device, one or more human forms corresponding to one or more of the two or more audience members and the one or more performers based on the analyzing.
- the one or more human forms may include one or more human form data.
- the method 300 may include a step 310 of generating, using the processing device, one or more human images of one or more of the two or more audience members and the one or more performers based on the one or more human forms.
- the one or more human images may include one or more virtual representations.
- the method 300 may include a step 312 of receiving, using the communication device, one or more background data of the one or more virtual events from the one or more performer devices.
- the one or more background data may include one or more virtual backgrounds for the one or more virtual events.
- FIG. 4 is a continuation flowchart of FIG. 3.
- the method 300 may include a step 314 of combining, using the processing device, the one or more human images with the one or more virtual backgrounds based on the generating.
- the method 300 may include a step 316 of creating, using the processing device, a virtual interactive space based on the combining.
- the virtual interactive space may include the one or more human images of one or more of the two or more audience members and the one or more performers in the one or more virtual backgrounds.
- the method 300 may include a step 318 of receiving, using the communication device, one or more interaction data of one or more interactions of one or more of the two or more audience members and the one or more performers from one or more of the two or more audience devices and the one or more performer devices.
- the method 300 may include a step 320 of generating, using the processing device, a modified virtual interactive space data based on each of the one or more interaction data and the virtual interactive space.
- the modified virtual interactive space data may include the one or more human images and the one or more interactions of one or more of the two or more audience members and the one or more performers within the one or more virtual backgrounds.
- the method 300 may include a step 322 of transmitting, using the communication device, the modified virtual interactive space data to the one or more of the two or more audience devices and the one or more performer devices.
- the method 300 may include a step 324 of storing, using a storage device, one or more of the one or more audience data, the one or more performer data, and the one or more background data.
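- By way of a non-limiting illustration only, the end-to-end flow of method 300 may be sketched in Python. The class and field names below (VirtualEventServer, HumanImage, extract_human, and so on) are hypothetical stand-ins, not part of the disclosure; the sketch merely mirrors the ordering of steps 302 through 324.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class HumanImage:
    user_id: str
    pixels: bytes          # segmented human form (steps 308/310)

@dataclass
class VirtualInteractiveSpace:
    background: bytes                                      # virtual background (step 312)
    human_images: Dict[str, HumanImage] = field(default_factory=dict)
    interactions: List[dict] = field(default_factory=list)

class VirtualEventServer:
    """Hypothetical model of one pass through steps 302-324 of method 300."""

    def __init__(self, background: bytes):
        self.space = VirtualInteractiveSpace(background=background)  # step 316
        self.store: List[dict] = []                                   # step 324

    def ingest(self, user_id: str, raw_frame: bytes) -> None:
        # Steps 302/304: receive audience or performer data.
        self.store.append({"user": user_id, "raw": raw_frame})
        # Steps 306-310: analysis and human-form extraction stand-ins.
        human = HumanImage(user_id=user_id, pixels=self.extract_human(raw_frame))
        # Step 314: combine the human image with the virtual background.
        self.space.human_images[user_id] = human

    def extract_human(self, raw_frame: bytes) -> bytes:
        # Placeholder for a real segmentation step (step 308).
        return raw_frame

    def apply_interaction(self, interaction: dict) -> VirtualInteractiveSpace:
        # Steps 318-320: fold interaction data into the space.
        self.space.interactions.append(interaction)
        return self.space          # step 322 would serialize and transmit this

# Usage: one audience frame, one performer frame, one interaction.
server = VirtualEventServer(background=b"<stadium-background>")
server.ingest("audience-1", b"<webcam-frame>")
server.ingest("performer-1", b"<stage-camera-frame>")
modified = server.apply_interaction({"user": "audience-1", "type": "wave"})
print(len(modified.human_images), "human images in the virtual interactive space")
```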
- the one or more performer data may include one or more of a performer’s appearance, a performer’s gesture, a performer’s verbal expression, a performer’s nonverbal expression, and a performer’s movement.
- the one or more performer devices may include one or more of a performer image sensor, a performer microphone, and a performer motion sensor.
- one or more of the performer image sensor, the performer microphone, and the performer motion sensor may be configured for generating the one or more performer data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the one or more performers.
- the one or more audience data may include one or more of an audience member’s appearance, an audience member’s gesture, an audience member’s verbal expression, an audience member’s nonverbal expression, and an audience member’s movement.
- the two or more audience devices may include one or more of an audience image sensor, an audience microphone, and an audience motion sensor. Further, one or more of the audience image sensor, the audience microphone, and the audience motion sensor may be configured for generating the one or more audience data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the two or more audience members.
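- As a hedged illustration of the capture described above, the three sensor modalities may be grouped into a single record per capture; the field names below (image_frame, audio_chunk, motion_vector) are assumptions made for illustration only.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AudienceSample:
    """One capture from an audience device (hypothetical field names)."""
    image_frame: Optional[bytes] = None              # image sensor: appearance, gestures
    audio_chunk: Optional[bytes] = None              # microphone: verbal expression
    motion_vector: Optional[Tuple[float, ...]] = None  # motion sensor: movement

def build_audience_data(frame: bytes, audio: bytes, motion: Tuple[float, ...]) -> AudienceSample:
    # Each sensor contributes one modality; any subset may be absent on a given device.
    return AudienceSample(image_frame=frame, audio_chunk=audio, motion_vector=motion)

sample = build_audience_data(b"<jpeg>", b"<pcm>", (0.1, 0.0, -0.2))
print(sample.motion_vector)
```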
- FIG. 5 is a flowchart of a method 500 for facilitating sharing of virtual experience between users in which the method 500 further may include transmitting the one or more virtual event interest data to the two or more audience devices, in accordance with some embodiments.
- the method 500 may include receiving, using the communication device, one or more audience member data associated with the two or more audience members from one or more social media platforms associated with the two or more audience members. Further, the one or more social media platforms may be hosted by one or more social media servers.
- the method 500 may include analyzing, using the processing device, the one or more audience member data. Further, at 506, the method 500 may include generating, using the processing device, one or more virtual event interest data based on the analyzing of the one or more audience member data.
- the one or more virtual event interest data may include one or more similar interests shown by one or more first audience members of the two or more audience members and one or more second audience members of the two or more audience members in the one or more virtual events.
- the method 500 may include transmitting, using the communication device, the one or more virtual event interest data to the two or more audience devices.
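- A minimal sketch of the interest matching in method 500 is given below, assuming the audience member data has already been reduced to sets of interest tags per member; the function and variable names are hypothetical.

```python
from itertools import combinations
from typing import Dict, List, Set, Tuple

def virtual_event_interest(profiles: Dict[str, Set[str]]) -> List[Tuple[str, str, Set[str]]]:
    """Return (member_a, member_b, shared_interests) for every pair with overlap."""
    matches = []
    for a, b in combinations(sorted(profiles), 2):
        shared = profiles[a] & profiles[b]
        if shared:
            matches.append((a, b, shared))
    return matches

profiles = {
    "audience-1": {"jazz", "basketball"},
    "audience-2": {"jazz", "theatre"},
    "audience-3": {"racing"},
}
# audience-1 and audience-2 share "jazz"; this result would be transmitted
# to the audience devices as virtual event interest data.
print(virtual_event_interest(profiles))
```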
- FIG. 6 is a flowchart of a method 600 for facilitating sharing of virtual experience between users in which the method 600 further may include transmitting the one or more tickets to the one or more audience devices, in accordance with some embodiments.
- the one or more background data may include one or more locations of one or more virtual seats in the one or more virtual backgrounds for the two or more audience members.
- the method 600 may include a step 602 of analyzing, using the processing device, the one or more background data and the virtual interactive space.
- the method 600 may include a step 604 of generating, using the processing device, one or more virtual interactive space views of the virtual interactive space corresponding to the one or more virtual seats based on the analyzing of the one or more background data and the virtual interactive space.
- the method 600 may include a step 606 of transmitting, using the communication device, the one or more virtual interactive space views of the virtual interactive space corresponding to the one or more virtual seats to the two or more audience devices. Further, the method 600 may include a step 608 of receiving, using the communication device, one or more seat indications of one or more selected virtual seats of the one or more virtual seats from one or more audience devices associated with one or more audience members. Further, the method 600 may include a step 610 of issuing, using the processing device, one or more tickets for the one or more selected virtual seats to the one or more audience members based on the one or more seat indications of the one or more selected virtual seats for the one or more virtual events. Further, the method 600 may include a step 612 of transmitting, using the communication device, the one or more tickets to the one or more audience devices.
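- The seat-view generation and ticket issuance of method 600 may be sketched as follows, assuming each virtual seat is described by a position inside the virtual background from which a preview view can be derived; all names below are illustrative.

```python
import uuid
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class SeatView:
    seat_id: str
    camera_position: Tuple[float, float, float]   # vantage point for the seat preview

def generate_seat_views(seats: Dict[str, Tuple[float, float, float]]) -> Dict[str, SeatView]:
    # Step 604: one virtual interactive space view per virtual seat.
    return {sid: SeatView(seat_id=sid, camera_position=pos) for sid, pos in seats.items()}

def issue_ticket(member_id: str, seat_id: str, views: Dict[str, SeatView]) -> dict:
    # Steps 608-610: validate the seat indication, then issue the ticket.
    if seat_id not in views:
        raise ValueError(f"unknown seat {seat_id}")
    return {"ticket_id": str(uuid.uuid4()), "member": member_id, "seat": seat_id}

views = generate_seat_views({"A1": (0.0, 1.5, -3.0), "B7": (2.0, 1.5, -8.0)})
print(issue_ticket("audience-1", "A1", views))   # step 612 would transmit this ticket
```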
- FIG. 7 is a flowchart of a method 700 for facilitating sharing of virtual experience between users in which the method 700 further may include rendering the one or more human forms with the one or more selected virtual merchandises based on the processing, in accordance with some embodiments. Further, at 702, the method 700 may include transmitting, using the communication device, one or more virtual merchandises for the one or more human forms to the two or more audience devices. Further, at 704, the method 700 may include receiving, using the communication device, one or more merchandise indications for purchasing of one or more selected virtual merchandises of the one or more virtual merchandises from one or more audience devices associated with one or more audience members.
- the method 700 may include processing, using the processing device, one or more transactions associated with the purchasing of the one or more selected virtual merchandises based on the one or more merchandise indications. Further, at 708, the method 700 may include rendering, using the processing device, the one or more human forms with the one or more selected virtual merchandises based on the processing. Further, the generating of the one or more human images may be based on the rendering.
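- A hedged sketch of the merchandise flow in method 700 is shown below: once a transaction is processed, the selected virtual merchandise is attached to the buyer's human form so that subsequent human image generation renders it. The flat attachment model and the names used are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HumanForm:
    member_id: str
    attachments: List[str] = field(default_factory=list)   # rendered virtual merchandise

def process_purchase(form: HumanForm, item: str, payment_ok: bool) -> HumanForm:
    # Step 706: process the transaction; step 708: render the form with the item.
    if not payment_ok:
        raise RuntimeError("transaction declined")
    form.attachments.append(item)
    return form

form = process_purchase(HumanForm("audience-1"), "virtual team jersey", payment_ok=True)
print(form.attachments)   # the human image would now be generated with the jersey
```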
- FIG. 8 is a flowchart of a method 800 for facilitating sharing of virtual experience between users in which the method 800 further may include modifying the virtual interactive space based on the one or more actions, in accordance with some embodiments.
- the method 800 may include analyzing, using the processing device, the one or more interaction data using one or more machine learning models. Further, the one or more machine learning models may be trained for detecting actions of one or more of the two or more audience members and the one or more performers.
- the method 800 may include determining, using the processing device, one or more actions corresponding to one or more of the two or more audience members and the one or more performers based on the analyzing of the one or more interaction data.
- the method 800 may include modifying, using the processing device, the virtual interactive space based on the one or more actions. Further, the generating of the modified virtual interactive space data may be based on the modifying.
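- The action-detection loop of method 800 may be sketched as below, with a trivial stand-in classifier in place of the trained machine learning model referred to above; the disclosure does not prescribe any particular model, so the logic here is purely illustrative.

```python
from typing import Callable, Dict, List

def detect_actions(interaction_data: List[dict],
                   model: Callable[[dict], str]) -> Dict[str, str]:
    """Map each participant to the action the model infers from their interaction."""
    return {item["user"]: model(item) for item in interaction_data}

def toy_model(item: dict) -> str:
    # Stand-in for a trained model: inspects a keyword in the interaction payload.
    return "cheer" if "cheer" in item.get("payload", "") else "idle"

actions = detect_actions(
    [{"user": "audience-1", "payload": "cheer loudly"},
     {"user": "performer-1", "payload": "bow"}],
    model=toy_model,
)
print(actions)   # the virtual interactive space would then be modified per action
```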
- FIG. 9 is a flowchart of a method 900 for facilitating sharing of virtual experience between users in which the method 900 further may include establishing one or more interaction sessions between one or more of the one or more audience members and the one or more performers and one or more of the one or more first audience members and the one or more first performers in real-time based on the identifying, in accordance with some embodiments. Further, at 902, the method 900 may include identifying, using the processing device, one or more of one or more first audience members and one or more first performers based on the determining of the one or more actions.
- the method 900 may include establishing, using the processing device, one or more interaction sessions between one or more of the one or more audience members and the one or more performers and one or more of the one or more first audience members and the one or more first performers in realtime based on the identifying. Further, the modifying of the virtual interactive space may be based on the establishing. Further, the establishing of the one or more interaction sessions allows one or more of the one or more audience members and the one or more performers to interact with one or more of the one or more first audience members and the one or more first performers in the real-time.
- FIG. 10 is a flowchart of a method 1000 for facilitating sharing of virtual experience between users in which the method 1000 further may include transmitting the one or more virtual experiences corresponding to one or more of the plurality of audience members and the one or more performers to one or more of the plurality of audience member devices and the one or more performer devices, in accordance with some embodiments. Further, at 1002, the method 1000 may include generating, using the processing device, one or more virtual experiences of the virtual interactive space for one or more of the plurality of audience members and the one or more performers based on the virtual interactive space, the one or more audience member data, and the one or more performer data.
- the method 1000 may include transmitting, using the communication device, the one or more virtual experiences corresponding to one or more of the plurality of audience members and the one or more performers to one or more of the plurality of audience member devices and the one or more performer devices.
- FIG. 11 is a flowchart of a method 1100 for facilitating sharing of virtual experience between users in which the method 1100 further may include determining one or more attending parameters for attending the one or more virtual events at the one or more event venues by one or more of the two or more audience members and the one or more performers based on the analyzing of the one or more event venue data, in accordance with some embodiments. Further, at 1102, the method 1100 may include receiving, using the communication device, one or more event venue data associated with one or more event venues of the one or more virtual events from the one or more performer devices. Further, at 1104, the method 1100 may include analyzing, using the processing device, the one or more event venue data using one or more first machine learning models.
- the one or more first machine learning models may be trained for detecting attending parameters for attending the one or more virtual events at the one or more event venues.
- the method 1100 may include determining, using the processing device, one or more attending parameters for attending the one or more virtual events at the one or more event venues by one or more of the two or more audience members and the one or more performers based on the analyzing of the one or more event venue data. Further, the creating of the virtual interactive space may be further based on the one or more attending parameters.
- the one or more attending parameters may include one or more seating areas in the one or more event venues, one or more performing areas in the one or more event venues, etc. Further, the one or more seating areas may include a virtual area for two or more human images of the two or more audience members. Further, the one or more performing areas may include a virtual area for one or more human images of the one or more performers.
- FIG. 12 is a block diagram of a system 1200 for facilitating sharing of virtual experience between users, in accordance with some embodiments.
- the system 1200 may include a communication device 1202, a processing device 1204, and a storage device 1206.
- the communication device 1202 may be configured for performing a step of receiving one or more audience data associated with two or more audience members from two or more audience devices associated with the two or more audience members. Further, the two or more audience members watch and participate in one or more virtual events. Further, the communication device 1202 may be configured for performing a step of receiving one or more performer data associated with one or more performers from one or more performer devices associated with the one or more performers. Further, the one or more performers perform and participate in the one or more virtual events.
- the communication device 1202 may be configured for performing a step of receiving one or more background data of one or more virtual events from the one or more performer devices.
- the one or more background data may include one or more virtual backgrounds for the one or more virtual events.
- the communication device 1202 may be configured for performing a step of receiving one or more interaction data of one or more interactions of one or more of the two or more audience members and the one or more performers from one or more of the two or more audience devices and the one or more performer devices.
- the communication device 1202 may be configured for performing a step of transmitting a modified virtual interactive space data to the one or more of the two or more audience devices and the one or more performer devices.
- the modified virtual interactive space data may include the one or more human images and the one or more interactions of one or more of the two or more audience members and the one or more performers within the one or more virtual backgrounds.
- the processing device 1204 may be communicatively coupled with the communication device 1202.
- processing device 1204 may be configured for performing a step of analyzing the one or more performer data and the one or more audience data.
- processing device 1204 may be configured for performing a step of extracting one or more human forms corresponding to one or more of the two or more audience members and the one or more performers based on the analyzing.
- the processing device 1204 may be configured for performing a step of generating one or more human images of one or more of the two or more audience members and the one or more performers based on the one or more human forms. Further, the processing device 1204 may be configured for performing a step of combining the one or more human images with the one or more virtual backgrounds based on the generating.
- the processing device 1204 may be configured for performing a step of creating a virtual interactive space based on the combining.
- the virtual interactive space may include the one or more human images of one or more of the two or more audience members and the one or more performers in the one or more virtual backgrounds.
- processing device 1204 may be configured for performing a step of generating the modified virtual interactive space data based on each of the one or more interaction data and the virtual interactive space.
- the storage device 1206 may be communicatively coupled with the processing device 1204.
- the storage device 1206 may be configured for performing a step of storing one or more of the one or more audience data, the one or more performer data, and the one or more background data.
- the communication device 1202 may be configured for receiving one or more audience member data associated with the plurality of audience members from one or more social media platforms associated with the plurality of audience members. Further, the communication device 1202 may be configured for transmitting at least one virtual event interest data to the plurality of audience devices. Further, the processing device 1204 may be configured for analyzing the one or more audience member data. Further, the processing device 1204 may be configured for generating the at least one virtual event interest data based on the analyzing of the one or more audience member data. Further, the at least one virtual event interest data may include one or more similar interests shown by one or more first audience members of the plurality of audience members and one or more second audience members of the plurality of audience members in the at least one virtual event.
- the at least one background data may include one or more locations of one or more virtual seats in the at least one virtual background for the plurality of audience members.
- the processing device 1204 may be configured for analyzing the at least one background data and the virtual interactive space. Further, the processing device 1204 may be configured for generating one or more virtual interactive space views of the virtual interactive space corresponding to the one or more virtual seats based on the analyzing of the at least one background data and the virtual interactive space. Further, the processing device 1204 may be configured for issuing one or more tickets for one or more selected virtual seats to the one or more audience members based on one or more seat indications of the one or more selected virtual seats for the at least one virtual event.
- the communication device 1202 may be configured for transmitting the one or more virtual interactive space views of the virtual interactive space corresponding to the one or more virtual seats to the plurality of audience devices. Further, the communication device 1202 may be configured for receiving the one or more seat indications of the one or more selected virtual seats of the one or more virtual seats from one or more audience devices associated with one or more audience members. Further, the communication device 1202 may be configured for transmitting the one or more tickets to the one or more audience devices.
- the communication device 1202 may be configured for transmitting one or more virtual merchandises for the one or more human forms to the plurality of audience devices. Further, the communication device 1202 may be configured for receiving one or more merchandise indications for purchasing of one or more selected virtual merchandises of the one or more virtual merchandises from one or more audience devices associated with one or more audience members. Further, the processing device 1204 may be configured for processing one or more transactions associated with the purchasing of the one or more selected virtual merchandises based on the one or more merchandise indications. Further, the processing device 1204 may be configured for rendering the one or more human forms with the one or more selected virtual merchandises based on the processing. Further, the generating of the one or more human images may be based on the rendering.
- the one or more audience data may include one or more of an audience member’s appearance, an audience member’s gesture, an audience member’s verbal expression, an audience member’s nonverbal expression, and an audience member’s movement.
- the plurality of audience devices may include one or more of an audience image sensor, an audience microphone, and an audience motion sensor. Further, one or more of the audience image sensor, the audience microphone, and the audience motion sensor may be configured for generating the one or more audience data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the plurality of audience members.
- the one or more performer data may include one or more of a performer’s appearance, a performer’s gesture, a performer’s verbal expression, a performer’s nonverbal expression, and a performer’s movement.
- the one or more performer devices may include one or more of a performer image sensor, a performer microphone, and a performer motion sensor.
- one or more of the performer image sensor, the performer microphone, and the performer motion sensor may be configured for generating the one or more performer data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the one or more performers.
- the processing device 1204 may be configured for analyzing the at least one interaction data using one or more machine learning models. Further, the one or more machine learning models may be trained for detecting actions of one or more of the plurality of audience members and the one or more performers. Further, the processing device 1204 may be configured for determining one or more actions corresponding to one or more of the plurality of audience members and the one or more performers based on the analyzing of the at least one interaction data. Further, the processing device 1204 may be configured for modifying the virtual interactive space based on the one or more actions. Further, the generating of the modified virtual interactive space data may be based on the modifying.
- the processing device 1204 may be configured for identifying one or more of one or more first audience members and one or more first performers based on the determining of the one or more actions. Further, the processing device 1204 may be configured for establishing one or more interaction sessions between one or more of the one or more audience members and the one or more performers and one or more of the one or more first audience members and the one or more first performers in real-time based on the identifying. Further, the modifying of the virtual interactive space may be based on the establishing.
- the processing device 1204 may be configured for generating one or more virtual experiences of the virtual interactive space for one or more of the plurality of audience members and the one or more performers based on the virtual interactive space, the one or more audience member data, and the one or more performer data.
- the communication device 1202 may be configured for transmitting the one or more virtual experiences corresponding to one or more of the plurality of audience members and the one or more performers to one or more of the plurality of audience member devices and the one or more performer devices.
- the communication device 1202 may be configured for receiving one or more event venue data associated with one or more event venues of the at least one virtual event from the one or more performer devices.
- the processing device 1204 may be configured for analyzing the one or more event venue data using one or more first machine learning models. Further, the one or more first machine learning models may be trained for detecting attending parameters for attending the at least one virtual event at the one or more event venues. Further, the processing device 1204 may be configured for determining one or more attending parameters for attending the at least one virtual event at the one or more event venues by one or more of the plurality of audience members and the one or more performers based on the analyzing of the one or more event venue data. Further, the creating of the virtual interactive space may be further based on the one or more attending parameters.
- FIG. 13 is a flowchart of a method 1300 to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- the method 1300 may include a step of receiving, using the communication device, one or more audience data from a plurality of audience devices associated with a plurality of audience members.
- the plurality of audience members in an instance, may include a group of people attending one or more virtual events associated with one or more performers.
- the one or more virtual events in an instance, may be organized by the one or more performers performing in the one or more virtual events.
- the one or more virtual events may include music shows and/or concerts.
- the one or more virtual events may include sporting events such as, but not limited to, baseball, basketball, football, golf, hockey, racing, soccer, and so on. Further, in some embodiments, the one or more virtual events may include large gathering events such as, but not limited to, circus, teaching courses, exercise sessions, festivals, museum visits, night clubs, protests, drama, theme parks, etc. Further, in some embodiments, the one or more virtual events may correspond to a live stream of a corresponding virtual event. Further, in some embodiments, the one or more virtual events may correspond to a pre-recorded virtual event. Further, the plurality of audience devices may include devices that may facilitate attending of one or more virtual events by the plurality of audience members.
- the plurality of audience devices may be configured to capture one or more variables, such as, but not limited to, a physical variable, a biological variable, a physiological variable, a psychological variable, etc.
- the plurality of audience devices may include one or more sensors configured to capture the one or more variables.
- the plurality of audience devices may include at least one image capturing device and a microphone.
- an audience member of the plurality of audience members may choose to switch between the one or more virtual events based on an interaction with corresponding audience device of the plurality of audience devices.
- examples of the plurality of audience devices may include devices such as, but not limited to, a smartphone, a laptop, a PC, and so on.
- a software application disclosed herein may include a mobile application that may be installed on the plurality of audience devices.
- the one or more audience data may be any data that may be indicative of identities associated with the plurality of audience members watching the one or more virtual events.
- the one or more audience data in an instance, may include a plurality of at least one of an audience image and an audience sound corresponding to the plurality of audience members.
- the audience image may include a live audience video feed that may characterize presence of a corresponding audience member in the one or more virtual events.
- the live audience video feed may include one or more gestures performed by the corresponding audience member that may convey communicative information to the one or more performers and/or other audience members of the plurality of audience members in real-time. Further, the one or more gestures, in an instance, may distract, confuse, impact, instruct, command, or otherwise positively and/or negatively affect the one or more performers and/or the other audience members in the one or more virtual events. Further, in some embodiments, the audience sound may include at least one audience speech that may characterize the presence of the corresponding audience member in the one or more virtual events. Further, the at least one audience speech, in an instance, may facilitate communication between the plurality of audience members.
- the at least one audience speech may facilitate communication between the plurality of audience members and the one or more performers in the one or more virtual events.
- the plurality of audience devices may be configured to capture the at least one of the audience images and the audience sound corresponding to the plurality of audience members. Further, the capturing, in an instance, may be based on a user interface of the software application.
- the method 1300 may include a step of receiving, using the communication device, one or more performer data from one or more performer devices associated with one or more performers.
- the one or more performer data in an instance, may include audio-visual footage of the at least one performance of the one or more performers in the one or more virtual events.
- the one or more performer data in an instance, may include a plurality of at least one of performer images and performer sound corresponding to the one or more performers.
- the performer image may include a live performer video feed that may characterize presence of a corresponding performer performing in the one or more virtual events.
- the live performer video feed may include one or more gestures performed by the corresponding performer that may convey communicative information to other performers of the one or more performers and/or the plurality of audience members in real-time. Further, the one or more gestures, in an instance, may distract, confuse, impact, instruct, command, or otherwise positively and/or negatively affect the other performers and/or the plurality of audience members in the one or more virtual events.
- the performer sound may include at least one performer speech that may characterize the presence of the corresponding performer in the one or more virtual events. Further, the at least one performer speech may facilitate communication between the other performers and/or the plurality of audience members.
- a performer of the one or more performers may choose to share a pre-recorded performance and/or a live performance of the one or more virtual events.
- the one or more performer devices may be configured to capture the at least one of the performer images and a performer sound corresponding to the one or more performers.
- the one or more performer devices may be configured to capture one or more variables such as, but not limited to, a biological variable, a physiological variable, a psychological variable, etc.
- the one or more performer devices may include one or more sensors configured to capture the one or more variables.
- the one or more performer devices may include at least one image capturing device and a microphone.
- examples of the plurality of audience devices may include devices such as, but not limited to, a smartphone, a laptop, a PC, and so on.
- the software application disclosed herein may include a mobile application that may be installed on the one or more performer devices.
- the one or more performer devices may include a plurality of high-definition cameras that may capture the at least one performance such that the plurality of high-definition cameras may be installed at one or more locations in a space corresponding to the one or more virtual events.
- the installing, in an instance may facilitate capturing of one or more virtual event places such that the plurality of audience members may navigate through the one or more virtual event places during the at least one performance in the one or more virtual events.
- the plurality of high-definition cameras may establish a link with one or more other performer devices such that the at least one performance from the plurality of high-definition cameras may be received on the one or more other performer devices. Further, the receiving, in an instance, may facilitate broadcasting of the at least one performance to the plurality of audience devices over a communication network (such as the Internet).
- the method 1300 may include a step of analyzing, using a processing device, the one or more performer data and the one or more audience data.
- the method 1300 may include a step of extracting, using the processing device, one or more human form data corresponding to one or more of the plurality of audience members and the one or more performers based on the analyzing.
- the one or more human form data may include visual characteristics and/or auditory characteristics of the plurality of audience members and the one or more performers.
- the visual characteristics in an instance, may be associated with the at least one of the performer images and the audience images.
- the auditory characteristics in an instance, may be associated with the at least one of the performer sound and the audience sound.
- the one or more human form data may include a portion of the one or more performer data and the one or more audience data that may be indicative of the presence of the corresponding performer and the corresponding audience member. Further, the portion may include at least one full real representation of the corresponding performer and the corresponding audience member in similitude with the captured one or more performer data and the one or more audience data by the respective performer device and audience device. Further, in some embodiments, the one or more human form data may include one or more virtual representations of the plurality of audience members and/or the one or more performers.
- the one or more virtual representations may be in accordance with a plurality of real characteristics of the one or more performers and/or the plurality of audience members captured by corresponding one or more performer devices and/or a corresponding plurality of audience devices. Further, in some embodiments, the one or more virtual representations may include three-dimensional holograms (or, 3D holograms) of each of the plurality of audience members and the one or more performers. Further, in some embodiments, the one or more virtual representations may include avatars of the each of the plurality of audience members and the one or more performers.
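- As an illustration only, a very simple human form extraction may be approximated by background differencing, as sketched below with NumPy; a practical system could instead use a segmentation or matting model and map the result onto an avatar or 3D hologram. The threshold and function names are assumptions.

```python
import numpy as np

def extract_human_form(frame: np.ndarray, empty_background: np.ndarray,
                       threshold: int = 30) -> np.ndarray:
    """Return the frame with unchanged (non-human) pixels zeroed out."""
    diff = np.abs(frame.astype(np.int16) - empty_background.astype(np.int16))
    mask = diff.max(axis=-1) > threshold            # pixels that changed = the person
    return frame * mask[..., None].astype(frame.dtype)

background = np.zeros((4, 4, 3), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200                               # a toy "person" in the middle
print(extract_human_form(frame, background)[1, 1])  # person pixels survive the mask
```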
- the method 1300 may include a step of combining, using the processing device, the one or more human form data with at least one background data corresponding to a virtual background.
- the at least one background data may correspond to data that may facilitate simulating of the one or more virtual events such that the one or more performers may create a virtual reality environment associated with the one or more virtual events.
- the virtual reality environment in an instance, may facilitate an immersive one or more virtual events that may include the plurality of audience members and the one or more performers in a form of one or more human forms based on the one or more human form data.
- the method 1300 may include a step of creating, using the processing device, a virtual interactive space based on the combining.
- the virtual interactive space may be based on the virtual reality environment.
- at least one virtual interactive space data may be generated by the processing device based on the creating of the virtual interactive space.
- at least one virtual interactive space data in an instance, may facilitate the interaction between the plurality of audience members and/or between the plurality of audience members and the one or more performers similar to a real-world interaction using the plurality of audience devices and the one or more performer devices.
- the method 1300 may include a step of receiving, using the communication device, at least one interaction data from one or more of the plurality of audience devices and the one or more performer devices. Accordingly, the at least one interaction data may facilitate communication between the one or more human forms in the virtual interactive space. Further, in some embodiments, the at least one interaction data may include, but is not limited to, one or more of textual content, audio content, visual content, audio-visual content, and so on. Further, the textual content, in an instance, may include real-time text messaging between the one or more human forms in the virtual interactive space.
- the audio content in an instance, may include real-time communication between the one or more human forms in the virtual interactive space using vocal gestures (for example, speaking, shouting, whispering, etc.).
- the visual content and/or the audio-visual content in an instance, may include real-time communication between the one or more human forms in the virtual interactive space using one or more multimedia content (such as, one or more captured footage of the virtual reality environment and/or the real environment associated with the one or more virtual events, etc.).
- the at least one interaction data may facilitate performing of one or more actions in the virtual interactive space, based on the generated at least one virtual interactive space data and on at least one interaction received from one or more of the plurality of audience devices and the one or more performer devices, for navigating around in the virtual interactive space.
- the at least one interaction may include the one or more actions such as, but not limited to, pinching for zooming in/out to interact with the one or more human forms, walking around by the one or more human forms in the virtual interactive space, etc.
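- A hypothetical handling of such navigation interactions is sketched below: a pinch interaction scales the zoom of a viewpoint and a walk interaction translates it within the virtual interactive space; the field names and bounds are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    x: float = 0.0
    y: float = 0.0
    zoom: float = 1.0

def apply_navigation(view: Viewpoint, interaction: dict) -> Viewpoint:
    if interaction["type"] == "pinch":
        # Clamp the zoom so repeated pinches stay in a sensible range.
        view.zoom = max(0.25, min(4.0, view.zoom * interaction["scale"]))
    elif interaction["type"] == "walk":
        view.x += interaction["dx"]
        view.y += interaction["dy"]
    return view

view = apply_navigation(Viewpoint(), {"type": "pinch", "scale": 2.0})
view = apply_navigation(view, {"type": "walk", "dx": 1.5, "dy": 0.0})
print(view)   # the updated view feeds the modified virtual interactive space data
```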
- the method 1300 may include a step of generating, using the processing device, a modified virtual interactive space data based on each of the at least one interaction data and the virtual interactive space.
- the modified virtual interactive space may facilitate interaction between the plurality of audience members and/or the plurality of audience members and the one or more performers based on the at least one interaction data.
- the modified virtual interactive space may include a zoomed view of the virtual interactive space.
- the zoomed view in an instance, may include a vantage point of at least one of one or more objects in the virtual interactive space.
- the one or more objects in an instance, may correspond to the one or more human forms in the virtual interactive space.
- the virtual interactive space data may include an indication such as, but not limited to, a friend request, a message, and so on. Further, the indication may facilitate social interaction between the plurality of audience members and/or the one or more performers and the plurality of audience members.
- the method 1300 may include a step of transmitting, using the communication device, the modified virtual interactive space data to the one or more of the plurality of audience devices and the one or more performer devices.
- At least one first audience member of the plurality of audience members may attend the one or more virtual events in-person such that the at least one first audience member may watch the performance of the one or more performers at the one or more virtual event places in a real environment. Further, at least one second audience member of the plurality of audience members may watch the performance of the one or more performers in the virtual interactive space. Further, the at least one first audience member of the plurality of audience members and the at least one second audience member of the plurality of audience members may interact with each other in real-time.
- the interaction may include a real-time conversation between the at least one first audience member of the plurality of audience members and the at least one second audience member of the plurality of audience members that may include, such as, but not limited to, sharing of the one or more of the textual content, audio content, visual content, audio-visual content, and so on.
- the one or more performers may interact with the at least one second audience member of the plurality of audience members at an instance of the performance.
- FIG. 14 is a flowchart of a method 1400 to link social media accounts of the plurality of users to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- the at least one interaction data received from a first user of the plurality of audience members may include an indication of at least one second user of the plurality of audience members and an invitation to establish a social interaction with the at least one second user.
- the modified virtual interactive space data transmitted to the at least one second user may include the invitation.
- the at least one interaction data may also include at least one response to the invitation corresponding to the at least one second user. Further, the response may include at least one of acceptance and rejection.
- the method 1400 may include a step of forming at least one social media connection between the first user and the at least one second user in the audience. Accordingly, at 1402, the method 1400 may include a step of receiving, using the communication device, one or more authentication results from one or more social media servers. Accordingly, the one or more authentication results, in an instance, may include data that may reflect an authenticity associated with an identity of a corresponding audience member on one or more social media platforms. Additionally, and/or alternatively, the one or more authentication results may correspond to the identification of the plurality of audience members on the one or more social media platforms based on entered one or more authentication data on a plurality of audience devices.
- examples of the plurality of audience devices may include devices such as, but not limited to, a smartphone, a laptop, a PC, and so on.
- the entered one or more authentication data may be any data that may reflect the identity of the corresponding audience member that may wish to share the virtual experience between the plurality of users (such as, the plurality of audience members and/or the one or more performers) on the one or more social media platforms.
- the one or more authentication data in an instance, may include but are not limited to, passwords, PINs, OTPs, biometric variables, etc. associated with the corresponding audience member.
- the one or more social media servers may include servers that may store the one or more authentication results of the plurality of audience members associated with the one or more social media platforms.
- the one or more social media platforms may include platforms such as, but not limited to, Facebook™, Twitter™, Instagram™, Snapchat™, Whatsapp™, WeChat™, Beebo™, IMOapp™, Reddit™, and so on.
- the method 1400 may include a step of establishing, using the communication device, one or more links between the one or more social media servers and the plurality of audience devices based on the received one or more authentication results.
- the software application disclosed herein may include a mobile application that may be installed on the plurality of audience devices. Further, a user interface of the software application may facilitate establishing the one or more links between the one or more social media servers and the plurality of audience devices.
- the method 1400 may include a step of receiving, using the communication device, one or more audience member data associated with the plurality of audience members based on the one or more social media platforms. Further, the one or more audience member data may be received over the one or more links established between the one or more social media servers and the plurality of audience devices. Further, the one or more audience member data may be any data that may be based on an interest of the plurality of audience members in one or more virtual events. Further, in some embodiments, the one or more audience member data may correspond to the at least one second user that may be associated with the first user on the one or more social media platforms. Further, the at least one second user, in an instance, may share the interest similar to the first user in the one or more virtual events. Further, the at least one second user may include followers, friends, fans, etc., of the first user on the one or more social media platforms.
- the method 1400 may include a step of analyzing, using the processing device, the one or more audience member data. Further, at 1410, the method 1400 may include a step of determining, using the processing device, at least one virtual event interest data based on the analyzing. Further, the at least one virtual event interest data corresponds to data based on similar interest shown by the at least one second user and the first user in the one or more virtual events. Further, the at least one virtual event interest data may include information associated with the at least one second user based on choices corresponding to attending the one or more virtual events.
- the information may include at least one preferred choice of the at least one second user relating to the one or more virtual events, such as, but not limited to, one or more future virtual events, preferences (for example, preferences based on interest in the one or more performers, dates of the one or more virtual events, places associated with the one or more virtual events, etc.) associated with the one or more virtual events, tickets purchased for the one or more virtual events, etc.
- the at least one virtual event interest data may be automatically determined based on the at least one preferred choice associated with attending the one or more virtual events by the first user and the at least one second user. Further, the automatically determining, in an instance, may be based on one or more machine learning algorithms.
- the online platform may process the at least one preferred choice using the processing device based on the one or more machine learning algorithms to suggest the at least one second user to the first user. Further, in some embodiments, sharing of the at least one virtual event interest data with the first user may be based on a consent of the at least one second user.
- the method 1400 may include a step of transmitting, using the communication device, the at least one virtual event interest data to the plurality of audience devices.
- the first user may establish the social interaction with the at least one second user based on the user interface of the software application. Further, the interaction may be based on the at least one social media connection. Further, the interaction may include a real-time conversation between the first user and the at least one second user that may include sharing of, but not limited to, one or more of textual content, audio content, visual content, audiovisual content, and so on.
- the textual content may include real-time text messaging between the plurality of audience members at an instance of the one or more virtual events and/or before attending the one or more virtual events.
- the audio content in an instance, may include real-time communication between the plurality of audience members at the instance of the one or more virtual events and/or before attending the one or more virtual events.
- the visual content and/or the audio-visual content in an instance, may include real-time communication between the plurality of audience members using one or more multimedia content at the instance of the one or more virtual events and/or before attending the one or more virtual events.
- the first user may create a room (e.g., a group that may include fans of the one or more virtual events) that may include one or more of the at least one second user based on the at least one virtual event interest data using the user interface of the software application.
- FIG. 15 is a flowchart of a method 1500 to create a virtual audience for live performers at a virtual event, in accordance with some embodiments.
- the live performers may include the one or more performers that may perform in a real environment, such as, for example, in the one or more virtual events that may include sporting events such as, but not limited to, baseball, basketball, football, golf, hockey, racing, soccer, and so on.
- the method 1500 may include a step of receiving, using the communication device, the one or more audience data from the plurality of audience devices.
- the one or more audience data may be any data that may be indicative of identities associated with the plurality of audience members watching the one or more virtual events.
- the one or more audience data may include a plurality of at least one of an audience image and an audience sound corresponding to the plurality of audience members.
- the method 1500 may include a step of analyzing, using the processing device, the one or more audience data.
- the method 1500 may include a step of extracting, using the processing device, the one or more human form data corresponding to the plurality of audience members based on the analyzing.
- the one or more human form data may include visual characteristics and/or auditory characteristics of the plurality of audience members.
- the method 1500 may include a step of generating, using the processing device, a virtual audience data based on the extracting.
- the generating of the virtual audience data may include combining the one or more human forms with at least one virtual background such that the combining may imitate a real audience watching the one or more virtual events.
- the at least one virtual background may include, but is not limited to, virtual seats in the one or more virtual events such as an arena, a stadium, a court, and so on.
- the method 1500 may include a step of transmitting, using the communication device, the virtual audience data to one or more display devices in the one or more virtual events.
- the one or more display devices may include devices, such as but not limited to, electroluminescent (ELD) displays, liquid crystal displays (LCD), light-emitting diode (LED) backlit LCDs, thin-film transistor (TFT) LCDs, light-emitting diode (LED) displays, plasma display panel (PDP) displays, and so on.
- the one or more display devices may include the one or more performer devices.
- the one or more virtual events may include, in an instance, one or more directional audio devices. Further, the one or more directional audio devices may facilitate generating of a spatial audio effect in the one or more virtual events.
- the spatial audio effect in an instance, may include sound from the one or more human forms displayed on the one or more display devices at a varying audible level. Further, the varying audible level may be based on a proximity of the one or more performers to the one or more human forms displayed on the one or more display devices. Further, the directional audio devices may include, but are not limited to, one or more directional microphones, one or more directional speakers, and so on. Further, in some embodiments, the virtual interactive space based on the generated modified virtual interactive space data may be displayed on the one or more display devices that may facilitate interaction between the one or more human forms during the one or more virtual events.
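- The spatial audio effect may be approximated, purely for illustration, by attenuating each displayed human form's audio with its distance from the performer; the inverse-distance falloff below is an assumption and not part of the disclosure.

```python
import math

def audible_level(base_gain: float, performer_pos: tuple, form_pos: tuple,
                  min_distance: float = 1.0) -> float:
    """Gain for a human form's audio, falling off with distance from the performer."""
    distance = math.dist(performer_pos, form_pos)
    return base_gain / max(distance, min_distance)

print(audible_level(1.0, (0, 0), (3, 4)))    # form 5 units away -> gain 0.2
print(audible_level(1.0, (0, 0), (0.5, 0)))  # closer than min_distance -> full gain
```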
- FIG. 16 is a flowchart of a method 1600 for providing a preview at an instance of purchasing tickets for the one or more virtual events, in accordance with some embodiments.
- the method 1600 may include a step of receiving, using the communication device, at least one virtual event data associated with the one or more virtual events from the one or more performer devices.
- the one or more performer devices may be associated with the one or more performers performing in the one or more virtual events.
- the at least one virtual event data may include but is not limited to, a preview of the virtual interactive space, price of each ticket, venue, name, facilities, attendees, and so on, associated with the one or more virtual events.
- the method 1600 may include a step of transmitting, using the communication device, the at least one virtual event data to the plurality of audience devices.
- one or more kiosks may be located in a vicinity of the one or more virtual events. Further, the one or more kiosks may receive the transmitted at least one virtual event data for the one or more virtual events.
- the method 1600 may include a step of determining, using the processing device, an instance of the purchasing of the tickets for the one or more virtual events on the plurality of audience devices. Further, a ticket window associated with the user interface of the software application may facilitate the purchasing of the tickets by the plurality of audience members.
- the ticket window may include one or more transaction options for facilitating transactions associated with the purchasing of the tickets.
- the one or more transaction options may include but are not limited to, credit cards, debit cards, payment wallets, and so on.
- one or more kiosks may be located in a vicinity of the one or more virtual events. Further, the one or more kiosks may facilitate purchasing tickets for the one or more virtual events.
- the method 1600 may include a step of displaying, using the processing device, the at least one virtual event data on the plurality of audience devices. Further, at the instance of purchasing the tickets, the preview of the virtual interactive space may be displayed on the plurality of audience devices at the ticket window.
- FIG. 17 is a flowchart of a method 1700 for purchasing merchandise and rendering the one or more human forms accordingly, in accordance with some embodiments.
- the method 1700 may include a step of receiving, using the communication device, one or more indications associated with the purchasing of the merchandise from the plurality of audience devices.
- the user interface of the software application in an instance, may facilitate an e-commerce platform for selling the merchandise.
- the merchandise in an instance, may correspond to customized goods and/or products associated with the one or more virtual events. Further, the one or more performers may wish to sell the merchandise on the e-commerce platform.
- the method 1700 may include a step of processing, using the processing device, one or more transactions associated with the purchasing of the merchandise. Further, at 1706, the method 1700 may include a step of rendering, using the processing device, the one or more human forms with virtual merchandise. Further, in some embodiments, the merchandise may include physical goods and/or products. Further, the plurality of audience members may place an order for purchasing the physical goods and/or products using the e-commerce platform. Further, at 1708, the method 1700 may include a step of transmitting, using the communication device, the one or more human forms subsequent to the rendering to the one or more display devices.
- FIG. 18 is an illustration of a screen 1800 associated with events navigation tab of a software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- the illustration may be associated with a screenshot of the software application.
- the events navigation tab may display information based on one or more upcoming virtual events. Further, the displaying may be based on the user interface of the software application.
- each of the information corresponding to the one or more upcoming virtual events may include, but is not limited to, the price of a ticket for attending a corresponding upcoming virtual event, name, facilities provided, venue, date, attendees based on the social interaction (explained further in conjunction with FIG. 14), and so on.
- a user may wish to watch a preview of the one or more upcoming virtual events that may be displayed on a corresponding user device.
- the preview may include a sneak peek of a corresponding upcoming virtual event that may display the information in a graphical context (such as, a sequence of images and/or videos) about the corresponding upcoming virtual event.
- the events navigation tab may include one or more navigation tabs such as, but not limited to, calendar, hosting, and so on.
- the screen 1800 may display a directory of available events for both land gate and cloud gate tickets. Further, the screen 1800 may facilitate remembering events that are liked and recommending events based on past preferences and friends’ attendance.
- FIG. 19 is an illustration of a screen 1900 associated with the events navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, in some embodiments, the user may wish to select one or more filter preferences associated with the one or more upcoming virtual events such that the user interface may display specific one or more upcoming virtual events on the corresponding user device based on a choice of the user. Further, the one or more filter preferences may include preference options such as, but not limited to, selecting dates, selecting venues, selecting one or more options corresponding to attending the upcoming virtual event as in-person or virtually, genre and/or type based on the one or more upcoming virtual events, and so on.
- the screen 1900 may facilitate dynamic searching to see all available events.
- the one or more upcoming virtual events may include music shows and/or concerts.
- the one or more upcoming virtual events may include sporting events such as but not limited to, baseball, basketball, football, golf, hockey, racing, soccer, and so on.
- the one or more upcoming virtual events may include large gathering events such as, but not limited to, circus, teaching courses, exercise sessions, festivals, museum visits, night clubs, protests, drama, theme parks, etc.
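- For illustration, a minimal sketch of applying such filter preferences to a directory of upcoming virtual events; the field names and event records are assumptions for this example.

```python
# Hypothetical event filtering by date, venue, attendance mode, and genre.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class UpcomingEvent:
    name: str
    venue: str
    event_date: date
    genre: str
    attendance_mode: str  # "in_person" (land gate) or "virtual" (cloud gate)


def filter_events(events: List[UpcomingEvent],
                  venue: Optional[str] = None,
                  genre: Optional[str] = None,
                  attendance_mode: Optional[str] = None,
                  on_or_after: Optional[date] = None) -> List[UpcomingEvent]:
    result = []
    for e in events:
        if venue and e.venue != venue:
            continue
        if genre and e.genre != genre:
            continue
        if attendance_mode and e.attendance_mode != attendance_mode:
            continue
        if on_or_after and e.event_date < on_or_after:
            continue
        result.append(e)
    return result


catalog = [
    UpcomingEvent("Indie Night", "Cloud Gate Venue", date(2021, 6, 5), "music", "virtual"),
    UpcomingEvent("City Derby", "Main Stadium", date(2021, 6, 12), "soccer", "in_person"),
]
virtual_music = filter_events(catalog, genre="music", attendance_mode="virtual")
```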
- FIG. 20 is an illustration of a screen 2000 associated with the ‘my tickets’ navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, in some embodiments, the ‘my tickets’ navigation tab may display one or more tickets for the one or more virtual events. Further, at least one future ticket of the one or more tickets may be displayed under the future navigation tab that may correspond to attending the one or more upcoming virtual events based on the at least one future ticket. Further, at least one past ticket of the one or more tickets may be displayed under the past navigation tab that may correspond to attending one or more virtual events in the past based on the at least one past ticket.
- FIG. 21 is an illustration of a screen 2100 associated with the ‘my tickets’ navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, in some embodiments, at least one quick response (QR) code may be generated corresponding to each of the one or more tickets under the ‘my tickets’ navigation tab. Further, in some embodiments, the at least one future ticket may display the at least one QR code subsequent to receiving an interaction from the user (such as tapping, swiping, etc.) on the corresponding user device. Further, each of the at least one future ticket may include the at least one QR code.
- each QR code of the at least one QR code may correspond to at least one attendee associated with the user for the one or more upcoming virtual events. Further, each QR code may be configured to include at least one attendee information associated with attending the corresponding upcoming virtual event. Further, the at least one attendee information may include but is not limited to, seat number, row number, section number, name of the venue, and so on. Further, in some embodiments, the one or more kiosks present at the one or more virtual events may facilitate scanning of one or more QR codes such that the scanning may be equivalent to a gate pass for attending the one or more virtual events.
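- For illustration, a minimal sketch of generating a per-ticket QR code that encodes the attendee information described above, assuming the third-party Python `qrcode` package is available; the field names are illustrative. Scanning and decoding this payload at a kiosk could then serve as the gate pass.

```python
# Hypothetical per-ticket QR code generation with embedded attendee information.
import json

import qrcode  # pip install qrcode[pil]


def make_ticket_qr(attendee: str, venue: str, section: str, row: str, seat: str,
                   out_path: str) -> None:
    payload = json.dumps({
        "attendee": attendee,
        "venue": venue,
        "section": section,
        "row": row,
        "seat": seat,
    })
    img = qrcode.make(payload)   # encode the attendee information into a QR image
    img.save(out_path)           # the app would display this image under 'my tickets'


make_ticket_qr("Alex", "Main Stadium", "C", "14", "7", "ticket_alex.png")
```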
- FIG. 22 is an illustration of a screen 2200 associated with social navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, in some embodiments, the social navigation tab may display followers, fans, friends, linked social media platforms, groups, the one or more upcoming virtual events to be attended, etc. associated with the user (explained in conjunction with FIG. 14).
- FIG. 23 is an illustration of a screen 2300 associated with shop navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- the illustration may be associated with a screenshot of the software application.
- the shop navigation tab may facilitate an e-commerce platform for selling the merchandise.
- the merchandise, in an instance, may correspond to customized goods and/or products associated with the one or more virtual events.
- the one or more performers may wish to sell the merchandise on the e-commerce platform.
- the merchandise may include physical goods and/or products.
- the user may place an order for purchasing the physical goods and/or products using the e-commerce platform.
- FIG. 24 is an illustration of a screen 2400 of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- the illustration may be associated with a screenshot of the software application.
- the screen 2400 may be associated with a performance of the one or more performers in the one or more virtual events.
- the screen 2400 may display the one or more human forms of the one or more performers (explained further in conjunction with FIG. 13).
- the user interface of the software application may facilitate switching between one or more screens based on the interaction received from the user on the corresponding user device.
- the one or more screens may correspond to one or more navigation tabs under the performance screen that may include, but are not limited to, selfie view, fan view, social, shop, exit, and so on.
- FIG. 25 is an illustration of a screen 2500 of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
- the illustration may be associated with a screenshot of the software application.
- the screen 2500 may be associated with the virtual interactive space based on the virtual interactive space data that may display the plurality of audience members in the one or more virtual events. Further, in some embodiments, the screen 2500 may display the one or more human forms of the plurality of audience members (explained further in conjunction with FIG. 13).
- FIG. 26 is an illustration of a screen 2600 of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, in some embodiments, the screen 2600 may represent interaction with the at least one second user based on the modified virtual interactive space data (explained further in conjunction with FIG. 13). Further, in some embodiments, the interaction may be based on the social interaction established based on the one or more social media platforms (explained further in conjunction with FIG. 14). Further, the screen 2600 may be associated with the social navigational tab in the one or more navigational tabs.
- the one or more social media platforms may include, but are not limited to, Facebook™, Twitter™, Instagram™, Facebook™ Messenger, and so on.
- the user may choose to communicate with one or more random audience members in the one or more virtual events based on an interaction received on the corresponding user device using the modified virtual interactive space data.
- the user may wish to save information corresponding to the one or more random audience members such that the saving may facilitate future communication with the one or more random audience members.
- the communicating, in an instance, may include calling, texting, FaceTime, and so on.
Abstract
A method and system for facilitating sharing of virtual experience between users are provided. Further, the method comprises receiving audience data from audience devices, receiving performer data from performer devices, analyzing the performer data and the audience data, extracting human forms corresponding to audience members and performers based on the analyzing, generating human images of the audience members and the performers based on the human forms, receiving background data of virtual event from the performer devices, combining the human images with the virtual background, creating a virtual interactive space based on the combining, receiving interaction data from the audience devices and the performer devices, generating a modified virtual interactive space data based on each of the interaction data and the virtual interactive space, transmitting the modified virtual interactive space data to the audience devices and the performer devices, and storing the audience data, the performer data, and the background data.
Description
METHODS, SYSTEMS, APPARATUSES, AND DEVICES FOR FACILITATING SHARING OF VIRTUAL EXPERIENCE BETWEEN USERS
FIELD OF THE INVENTION
Generally, the present disclosure relates to the field of data processing. More specifically, the present disclosure relates to methods, systems, apparatuses, and devices for facilitating sharing of virtual experience between users.
BACKGROUND OF THE INVENTION
The field of data processing is technologically important to several industries, business organizations, and/or individuals.
For millennia, humans have sought the ability to convene and share experiences in large gatherings, ranging from hundreds of persons to thousands of persons, which include concerts, theatre productions, sporting events, political assemblies, life cycle events, community celebrations, and others. The challenges humans have faced in achieving the goal of successful large shared event experiences are many. These include limitations on the size and safety of event venues, the economic cost of physically traveling from a person's location to an event venue, the time involved with such travel, the effect of different time zones and scheduling to maximize large group attendance, and the adverse social effects of the expenditure of fossil fuels for transport and event production due to negative climate effects.
With no return dates for shows in sight, fans and artists are adapting to a new way of experiencing music together. Silence will be the new soundtrack as sports resume during the pandemic. Further, the fans are the background in every iconic moment. Extras add breadth to our most memorable scenes. They are the crowds who have provided the emotional context to major sporting events.
What’s lost when audiences can’t be close to musicians, applause is virtual, and over-the-top artists are crammed into screens? So much. So many good intentions, so little joy. Social distancing tears apart the closeness that performers and listeners
had always taken for granted at concerts: closeness onstage, in the crowd, and the shared moment. Livestreaming has become practically a rite of passage for musicians living through the global coronavirus pandemic. But it’s hard to feel like 10,000 people are watching and participating when you’re alone in your house and there isn’t a crowd response. It feels weird. It’s a weirdness that artists and fans have become intimately familiar with. Further, fan-less stadiums and areas may impact the players or music performers and the viewing experience as concerts and sports including hockey, football, basketball, soccer, baseball, and more make their way back. The fans are the crowds who have provided the emotional context to major sporting events and concerts. Now, fans are being told to stay away from our stadiums, arenas, and ballparks. Further, professional athletes and musicians are used to playing and performing in front of fans. At home, the cheers provide adrenaline. The anticipation of a game-altering moment felt seat to seat in the stands carries over onto the field.
Sports fandom compares to learned behavior, like writing. Further, the live event industry is trying. Be it virtual concerts, drive-in concerts, or some attempts at “socially distanced” concerts, some promoters are willing to try anything in these problematic economic and pandemic times. There are predictions that regular touring and concerts will not return in 2022. As the pandemic has stretched on, and it’s become clear that concerts full of tightly packed fans won’t be returning in a significant way until 2021, there’s new pressure on live streams and new questions about them. The experience for performers can be disorienting. The goal is crowd participation. Finding the sweet spot between what fans are willing to pay and what artists need to charge to make it profitable continues to be tricky.
Since the onset of the COVID-19 pandemic around the world, thousands of concerts, shows, sports games, and gatherings of several genres abruptly stopped, canceled, or rescheduled. Live Nation Entertainment Inc., which dominates 70% of the U.S. concert marketplace and 30% worldwide, reported that by the end of Q1 2020, it stopped or rescheduled approximately 10,000 concerts involving 34 million tickets, resulting in a drop of $2.9 Billion in revenue for Q2 2020 and $3.3 Billion for six months ending June 30, 2020. The pandemic has adversely impacted many others in the concert and show music industry. Concert-related losses for 2020 have reached $30 Billion.
In the second quarter of 2020, Live Nation Entertainment had 67 million fans view over 18 thousand streaming concerts and festivals globally. [1] In a PYMNTS
consumer survey, 54.2 percent of respondents indicated that they missed attending events like concerts. [2] A May 2020 poll found that 72 percent of respondents would not attend a sporting event before a coronavirus vaccine was available. [3] A February 2020 study conducted by Verizon Media found that 63% of consumers would be willing to pay more to stream live sports “if platforms were to provide a more personalized user experience.” [4]
Sports have suffered. Estimates show that the pandemic caused by COVID-19 has negatively affected the $160bn sports industry due to missed games, broadcast revenue, gate revenues, and salary obligations (Futterman et al., 2020). The MLB faces major losses since it estimates that 40% of its revenues come from game-time experience. The National Basketball Association (NBA) and National Hockey League (NHL) paused their seasons, while Major League Baseball (MLB) pushed back opening day.
The tens of millions of fans to these and other events suffer. Fans enjoy group experiences and shared events. Yet, in these uncertain times of social distancing and crowd avoidance for public health reasons, the loss of these experiences compounds feelings of isolation, anxiety, and depression.
Performers, including bands and athletes in these shared events, call out for a solution to enable their economic recovery, while fans are itching to connect and interact live. But, historically, performers and fans coupled their participation with their physical attendance.
Importantly, there have recently arisen new and profound challenges to large event gatherings as a result of the novel coronavirus-2019 (COVID-19) pandemic and health-related risks associated with such events. Whether these challenges are the result of governmentally mandated rules on social distancing, limitations on the number of persons permitted in a single event venue, or other commercial, social, and societal factors, the effect has been the same: namely, that despite the great desire of humans to socially connect in large groups, they are being restricted from doing so. These restrictions have significant economic costs to the producers of events, artists/performers, and their related industries. However, beyond the economic costs, there are significant psychological and mental health costs to the general public on a global scale. Recently, there have been attempts to address this challenge, but each has failed to create an authentic, real-time, human-to-human large group experience that is multi-sensory and immersive.
Existing techniques for facilitating sharing of virtual experience between users are deficient with regard to several aspects. For example, in some existing techniques, each user in the virtual crowd perceives the same view of the arena or stage, which is unintuitive and/or unrealistic. Further, while some existing solutions may provide a virtual experience that varies depending on the position of a user within the virtual space, such a vantage point is fixed for a given position of the user, thus restricting the view of the virtual space. As a result, current technologies do not facilitate expanding and/or zooming into the visual appearance of large numbers of people within an event venue, to see or be seen by single or multiple simultaneously projected images of human forms of other persons, and eventually to facilitate direct communication between persons by text, social media, or direct auditory messages to allow for authentic human-to-human interaction, on a real-time basis within the event venue, whether the group event is live or pre-recorded. Furthermore, current technologies do not replicate the in-person audio-visual experience in which people can see other people, or be seen by other people, at varying visual perspectives and audible levels associated with their locations within the event venue. Further, current technologies do not enable members to select and purchase virtual merchandise (e.g., clothes, accessories, tattoos, etc.) for their corresponding human images in the virtual group experience. Further, current technologies do not enable members to select and purchase a ticket by providing them a view of the virtual group experience corresponding to a particular seating place. Moreover, current technologies do not facilitate interaction between the people attending the event venue and people who have been following the event venue using social media platforms.
Therefore, there is a need for improved methods, systems, apparatuses, and devices for facilitating sharing of virtual experience between users that may overcome one or more of the above-mentioned problems and/or limitations.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts in a simplified form, that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject
matter. Nor is this summary intended to be used to limit the claimed subject matter’s scope.
Disclosed herein is a method for facilitating sharing of virtual experience between users, in accordance with some embodiments. The method may include a step of receiving, using a communication device, one or more audience data associated with two or more audience members from two or more audience devices associated with the two or more audience members. Further, the two or more audience members watch and participate in one or more virtual events. Further, the method may include a step of receiving, using the communication device, one or more performer data associated with one or more performers from one or more performer devices associated with the one or more performers. Further, the one or more performers perform and participate in the one or more virtual events. Further, the method may include a step of analyzing, using a processing device, the one or more performer data and the one or more audience data. Further, the method may include a step of extracting, using the processing device, one or more human forms corresponding to one or more of the pluralities of audience members and the one or more performers based on the analyzing. Further, the method may include a step of generating, using the processing device, one or more human images of one or more of the two or more audience members and the one or more performers based on the one or more human forms. Further, the method may include a step of receiving, using the communication device, one or more background data of one or more virtual events from the one or more performer devices. Further, the one or more background data may include one or more virtual backgrounds for the one or more virtual events. Further, the method may include a step of combining, using the processing device, the one or more human images with the one or more virtual backgrounds based on the generating. Further, the method may include a step of creating, using the processing device, a virtual interactive space based on the combining. Further, the method may include a step of receiving, using the communication device, one or more interaction data of one or more interactions of one or more of the plurality of audience members and the one or more performers from one or more of the two or more audience devices and the one or more performer devices. Further, the method may include a step of generating, using the processing device, a modified virtual interactive space data based on each of the one or more interaction data and the virtual interactive space. Further, the method may include a step of transmitting, using the
communication device, the modified virtual interactive space data to the one or more of the two or more audience devices and the one or more performer devices. Further, the method may include a step of storing, using a storage device, one or more of the one or more audience data, the one or more performer data, and the one or more background data.
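For illustration only, the following is a minimal, heavily simplified sketch of the data flow summarized above, with all names assumed: audience and performer data are received, human forms are extracted and converted into human images, the images are combined with a virtual background into a virtual interactive space, and interaction data yields a modified virtual interactive space that would be transmitted back to the audience and performer devices.

```python
# Hypothetical end-to-end data flow for building and updating a virtual interactive space.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class VirtualInteractiveSpace:
    background: str
    human_images: Dict[str, str] = field(default_factory=dict)
    interactions: List[dict] = field(default_factory=list)


def extract_human_form(raw_frame: bytes) -> bytes:
    # Placeholder for person segmentation that removes the real physical background.
    return raw_frame


def generate_human_image(human_form: bytes) -> str:
    # Placeholder: encode the segmented form as a displayable image reference.
    return f"image:{len(human_form)}"


def build_space(audience_data: Dict[str, bytes], performer_data: Dict[str, bytes],
                background: str) -> VirtualInteractiveSpace:
    space = VirtualInteractiveSpace(background=background)
    for user_id, frame in {**audience_data, **performer_data}.items():
        form = extract_human_form(frame)
        space.human_images[user_id] = generate_human_image(form)
    return space


def apply_interactions(space: VirtualInteractiveSpace,
                       interaction_data: List[dict]) -> VirtualInteractiveSpace:
    # e.g. zooming, cheering, or messaging another attendee; the returned space would
    # be transmitted to the audience devices and performer devices.
    space.interactions.extend(interaction_data)
    return space


space = build_space({"fan-1": b"...", "fan-2": b"..."}, {"performer-1": b"..."},
                    background="stadium_stage")
space = apply_interactions(space, [{"from": "fan-1", "type": "zoom", "target": "performer-1"}])
```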
Further disclosed herein is a system for facilitating sharing of virtual experience between users, in accordance with some embodiments. The system may include a communication device, a processing device, and a storage device. Further, the communication device may be configured for performing a step of receiving one or more audience data associated with two or more audience members from two or more audience devices associated with the two or more audience members. Further, the two or more audience members watch and participate in one or more virtual events. Further, the communication device may be configured for performing a step of receiving one or more performer data associated with one or more performers from one or more performer devices associated with the one or more performers. Further, the one or more performers perform and participate in the one or more virtual events. Further, the communication device may be configured for performing a step of receiving one or more background data of one or more virtual events from the one or more performer devices. Further, the one or more background data may include one or more virtual backgrounds for the one or more virtual events. Further, the communication device may be configured for performing a step of receiving one or more interaction data of one or more interactions of one or more of the plurality of audience members and the one or more performers from one or more of the two or more audience devices and the one or more performer devices. Further, the communication device may be configured for performing a step of transmitting a modified virtual interactive space data to the one or more of the two or more audience devices and the one or more performer devices. The processing device may be communicatively coupled with the communication device. Further, the processing device may be configured for performing a step of analyzing the one or more performer data and the one or more audience data. Further, the processing device may be configured for performing a step of extracting one or more human forms corresponding to one or more of the pluralities of audience members and the one or more performers based on the analyzing. Further, the processing device may be configured for performing a step of generating one or more human images of one or
more of the two or more audience members and the one or more performers based on the one or more human forms. Further, the processing device may be configured for performing a step of combining the one or more human images with the one or more virtual backgrounds based on the generating. Further, the processing device may be configured for performing a step of creating a virtual interactive space based on the combining. Further, the processing device may be configured for performing a step of generating the modified virtual interactive space data based on each of the one or more interaction data and the virtual interactive space. The storage device may be communicatively coupled with the processing device. Further, the storage device may be configured for performing a step of storing one or more of the one or more audience data, the one or more performer data, and the one or more background data.
Both the foregoing summary and the following detailed description provide examples and are explanatory only. Accordingly, the foregoing summary and the following detailed description should not be considered to be restrictive. Further, features or variations may be provided in addition to those set forth herein. For example, embodiments may be directed to various feature combinations and subcombinations described in the detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate various embodiments of the present disclosure. The drawings contain representations of various trademarks and copyrights owned by the Applicants. In addition, the drawings may contain other marks owned by third parties and are being used for illustrative purposes only. All rights to various trademarks and copyrights represented herein, except those belonging to their respective owners, are vested in and the property of the applicants. The applicants retain and reserve all rights in their trademarks and copyrights included herein, and grant permission to reproduce the material only in connection with reproduction of the granted patent and for no other purpose.
Furthermore, the drawings may contain text or captions that may explain certain embodiments of the present disclosure. This text is included for illustrative,
non-limiting, explanatory purposes of certain embodiments detailed in the present disclosure.
FIG. 1 is an illustration of an online platform consistent with various embodiments of the present disclosure.
FIG. 2 is a block diagram of a computing device for implementing the methods disclosed herein, in accordance with some embodiments.
FIG. 3 is a flowchart of a method for facilitating sharing of virtual experience between users, in accordance with some embodiments.
FIG. 4 is a continuation flowchart of FIG. 3.
FIG. 5 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include transmitting, the one or more virtual event interest data to the two or more audience devices, in accordance with some embodiments.
FIG. 6 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include transmitting, the one or more tickets to the one or more audience devices, in accordance with some embodiments.
FIG. 7 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include rendering, the one or more human forms with the one or more selected virtual merchandises based on the processing, in accordance with some embodiments.
FIG. 8 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include modifying, the virtual interactive space based on the one or more actions, in accordance with some embodiments.
FIG. 9 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include establishing one or more interaction sessions between one or more of the one or more audience members and the one or more performers and one or more of the one or more first audience members and the one or more first performers in real-time based on the identifying, in accordance with some embodiments.
FIG. 10 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include transmitting the one or more virtual experiences corresponding to one or more of the plurality of audience members and the one or more performers to one or more of the plurality of audience
member devices and the one or more performer devices, in accordance with some embodiments.
FIG. 11 is a flowchart of a method for facilitating sharing of virtual experience between users in which the method further may include determining one or more attending parameters for attending the one or more virtual events at the one or more event venues by one or more of the two or more audience members and the one or more performers based on the analyzing of the one or more event venue data, in accordance with some embodiments.
FIG. 12 is a block diagram of a system for facilitating sharing of virtual experience between users, in accordance with some embodiments.
FIG. 13 is a flowchart of a method to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
FIG. 14 is a flowchart of a method to link social media accounts of the plurality of users to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
FIG. 15 is a flowchart of a method to create a virtual audience for live performers at a virtual event, in accordance with some embodiments.
FIG. 16 is a flowchart of a method for providing a preview at an instance of purchasing tickets for the one or more virtual events, in accordance with some embodiments.
FIG. 17 is a flowchart of a method for purchasing merchandise and rendering the one or more human forms accordingly, in accordance with some embodiments.
FIG. 18 is an illustration of a screen associated with events navigation tab of a software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
FIG. 19 is an illustration of a screen associated with the events navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
FIG. 20 is an illustration of a screen associated with the ‘my tickets’ navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
FIG. 21 is an illustration of a screen associated with the ‘my tickets’ navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
FIG. 22 is an illustration of a screen associated with social navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
FIG. 23 is an illustration of a screen associated with shop navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
FIG. 24 is an illustration of a screen of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
FIG. 25 is an illustration of a screen of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
FIG. 26 is an illustration of a screen of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments.
DETAILED DESCRIPTION OF THE INVENTION
As a preliminary matter, it will readily be understood by one having ordinary skill in the relevant art that the present disclosure has broad utility and application. As should be understood, any embodiment may incorporate only one or a plurality of the above-disclosed aspects of the disclosure and may further incorporate only one or a plurality of the above-disclosed features. Furthermore, any embodiment discussed and identified as being “preferred” is considered to be part of a best mode contemplated for carrying out the embodiments of the present disclosure. Other embodiments also may be discussed for additional illustrative purposes in providing a full and enabling disclosure. Moreover, many embodiments, such as adaptations, variations, modifications, and equivalent arrangements, will be implicitly disclosed by the embodiments described herein and fall within the scope of the present disclosure.
Accordingly, while embodiments are described herein in detail in relation to one or more embodiments, it is to be understood that this disclosure is illustrative and exemplary of the present disclosure, and is made merely for the purposes of providing a full and enabling disclosure. The detailed disclosure herein of one or more embodiments is not intended, nor is it to be construed, to limit the scope of patent protection afforded in any claim of a patent issuing herefrom, which scope is to be defined by the claims and the equivalents thereof. It is not intended that the scope of patent protection be defined by reading into any claim a limitation found herein and/or issuing herefrom that does not explicitly appear in the claim itself.
Thus, for example, any sequence(s) and/or temporal order of steps of various processes or methods that are described herein are illustrative and not restrictive. Accordingly, it should be understood that, although steps of various processes or methods may be shown and described as being in a sequence or temporal order, the steps of any such processes or methods are not limited to being carried out in any particular sequence or order, absent an indication otherwise. Indeed, the steps in such processes or methods generally may be carried out in various different sequences and orders while still falling within the scope of the present disclosure. Accordingly, it is intended that the scope of patent protection is to be defined by the issued claim(s) rather than the description set forth herein.
Additionally, it is important to note that each term used herein refers to that which an ordinary artisan would understand such term to mean based on the contextual use of such term herein. To the extent that the meaning of a term used herein — as understood by the ordinary artisan based on the contextual use of such term — differs in any way from any particular dictionary definition of such term, it is intended that the meaning of the term as understood by the ordinary artisan should prevail.
Furthermore, it is important to note that, as used herein, “a” and “an” each generally denotes “at least one,” but does not exclude a plurality unless the contextual use dictates otherwise. When used herein to join a list of items, “or” denotes “at least one of the items,” but does not exclude a plurality of items of the list. Finally, when used herein to join a list of items, “and” denotes “all of the items of the list.”
The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar elements. While many embodiments of the disclosure may be described, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the methods described herein may be modified by substituting, reordering, or adding
stages to the disclosed methods. Accordingly, the following detailed description does not limit the disclosure. Instead, the proper scope of the disclosure is defined by the claims found herein and/or issuing herefrom. The present disclosure contains headers. It should be understood that these headers are used as references and are not to be construed as limiting upon the subject matter disclosed under the header.
The present disclosure includes many aspects and features. Moreover, while many aspects and features relate to, and are described in the context of methods, systems, apparatuses, and devices for facilitating sharing of virtual experience between users, embodiments of the present disclosure are not limited to use only in this context.
In general, the method disclosed herein may be performed by one or more computing devices. For example, in some embodiments, the method may be performed by a server computer in communication with one or more client devices over a communication network such as, for example, the Internet. In some other embodiments, the method may be performed by one or more of at least one server computer, at least one client device, at least one network device, at least one sensor, and at least one actuator. Examples of the one or more client devices and/or the server computer may include a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a portable electronic device, a wearable computer, a smartphone, an Internet of Things (IoT) device, a smart electrical appliance, a video game console, a rack server, a super-computer, a mainframe computer, minicomputer, micro-computer, a storage server, an application server (e.g., a mail server, a web server, a real-time communication server, an FTP server, a virtual server, a proxy server, a DNS server, etc.), a quantum computer, and so on. Further, one or more client devices and/or the server computer may be configured for executing a software application such as, for example, but not limited to, an operating system (e.g., Windows, Mac OS, Unix, Linux, Android, etc.) in order to provide a user interface (e.g., GUI, touchscreen based interface, voice based interface, gesture based interface, etc.) for use by the one or more users and/or a network interface for communicating with other devices over a communication network. Accordingly, the server computer may include a processing device configured for performing data processing tasks such as, for example, but not limited to, analyzing, identifying, determining, generating, transforming, calculating, computing, compressing, decompressing, encrypting, decrypting, scrambling, splitting, merging, interpolating,
extrapolating, redacting, anonymizing, encoding and decoding. Further, the server computer may include a communication device configured for communicating with one or more external devices. The one or more external devices may include, for example, but are not limited to, a client device, a third-party database, a public database, a private database, and so on. Further, the communication device may be configured for communicating with the one or more external devices over one or more communication channels. Further, the one or more communication channels may include a wireless communication channel and/or a wired communication channel. Accordingly, the communication device may be configured for performing one or more of transmitting and receiving of information in electronic form. Further, the server computer may include a storage device configured for performing data storage and/or data retrieval operations. In general, the storage device may be configured for providing reliable storage of digital information. Accordingly, in some embodiments, the storage device may be based on technologies such as, but not limited to, data compression, data backup, data redundancy, deduplication, error correction, data finger-printing, role based access control, and so on.
Further, one or more steps of the method disclosed herein may be initiated, maintained, controlled, and/or terminated based on a control input received from one or more devices operated by one or more users such as, for example, but not limited to, an end user, an admin, a service provider, a service consumer, an agent, a broker and a representative thereof. Further, the user as defined herein may refer to a human, an animal, or an artificially intelligent being in any state of existence, unless stated otherwise, elsewhere in the present disclosure. Further, in some embodiments, the one or more users may be required to successfully perform authentication in order for the control input to be effective. In general, a user of the one or more users may perform authentication based on the possession of human readable secret data (e.g., username, password, passphrase, PIN, secret question, secret answer, etc.) and/or possession of a machine readable secret data (e.g., encryption key, decryption key, bar codes, etc.) and/or possession of one or more embodied characteristics unique to the user (e.g., biometric variables such as, but not limited to, fingerprint, palm-print, voice characteristics, behavioral characteristics, facial features, iris pattern, heart rate variability, evoked potentials, brain waves, and so on) and/or possession of a unique device (e.g., a device with a unique physical and/or chemical and/or biological characteristic, a hardware device with a unique serial number, a network device with a
unique IP/MAC address, a telephone with a unique phone number, a smartcard with an authentication token stored thereupon, etc.). Accordingly, the one or more steps of the method may include communicating (e.g., transmitting and/or receiving) with one or more sensor devices and/or one or more actuators in order to perform authentication. For example, the one or more steps may include receiving, using the communication device, the secret human readable data from an input device such as, for example, a keyboard, a keypad, a touch-screen, a microphone, a camera, and so on. Likewise, the one or more steps may include receiving, using the communication device, the one or more embodied characteristics from one or more biometric sensors.
Further, one or more steps of the method may be automatically initiated, maintained, and/or terminated based on one or more predefined conditions. In an instance, the one or more predefined conditions may be based on one or more contextual variables. In general, the one or more contextual variables may represent a condition relevant to the performance of the one or more steps of the method. The one or more contextual variables may include, for example, but are not limited to, location, time, identity of a user associated with a device (e.g., the server computer, a client device, etc.) corresponding to the performance of the one or more steps, environmental variables (e.g., temperature, humidity, pressure, wind speed, lighting, sound, etc.) associated with a device corresponding to the performance of the one or more steps, physical state and/or physiological state and/or psychological state of the user, physical state (e.g., motion, direction of motion, orientation, speed, velocity, acceleration, trajectory, etc.) of the device corresponding to the performance of the one or more steps and/or semantic content of data associated with the one or more users. Accordingly, the one or more steps may include communicating with one or more sensors and/or one or more actuators associated with the one or more contextual variables. For example, the one or more sensors may include, but are not limited to, a timing device (e.g., a real-time clock), a location sensor (e.g., a GPS receiver, a GLONASS receiver, an indoor location sensor, etc.), a biometric sensor (e.g., a fingerprint sensor), an environmental variable sensor (e.g., temperature sensor, humidity sensor, pressure sensor, etc.) and a device state sensor (e.g., a power sensor, a voltage/current sensor, a switch-state sensor, a usage sensor, etc. associated with the device corresponding to performance of the one or more steps).
Further, the one or more steps of the method may be performed one or more times. Additionally, the one or more steps may be performed in any order
other than as exemplarily disclosed herein, unless explicitly stated otherwise, elsewhere in the present disclosure. Further, two or more steps of the one or more steps may, in some embodiments, be simultaneously performed, at least in part. Further, in some embodiments, there may be one or more time gaps between performance of any two steps of the one or more steps.
Further, in some embodiments, the one or more predefined conditions may be specified by the one or more users. Accordingly, the one or more steps may include receiving, using the communication device, the one or more predefined conditions from one or more devices operated by the one or more users. Further, the one or more predefined conditions may be stored in the storage device. Alternatively, and/or additionally, in some embodiments, the one or more predefined conditions may be automatically determined, using the processing device, based on historical data corresponding to performance of the one or more steps. For example, the historical data may be collected, using the storage device, from a plurality of instances of performance of the method. Such historical data may include performance actions (e.g., initiating, maintaining, interrupting, terminating, etc.) of the one or more steps and/or the one or more contextual variables associated therewith. Further, machine learning may be performed on the historical data in order to determine the one or more predefined conditions. For instance, machine learning on the historical data may determine a correlation between one or more contextual variables and performance of the one or more steps of the method. Accordingly, the one or more predefined conditions may be generated, using the processing device, based on the correlation.
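For illustration, a minimal sketch of the machine-learning step described above, under assumed contextual variables: a simple classifier is fit to historical records relating contextual variables to whether a step was performed, and the learned correlation then acts as the predefined condition for automatically initiating the step.

```python
# Hypothetical example: learn when to auto-initiate a step from historical data.
from sklearn.linear_model import LogisticRegression

# Historical records: [hour_of_day, user_at_venue (0/1)] -> step_performed (0/1)
X = [[20, 1], [21, 1], [9, 0], [10, 0], [19, 1], [11, 0]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)


def should_initiate_step(hour_of_day: int, at_venue: bool) -> bool:
    # The learned correlation acts as the predefined condition.
    return bool(model.predict([[hour_of_day, int(at_venue)]])[0])


print(should_initiate_step(20, True))   # likely True for an evening at the venue
```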
Further, one or more steps of the method may be performed at one or more spatial locations. For instance, the method may be performed by a plurality of devices interconnected through a communication network. Accordingly, in an example, one or more steps of the method may be performed by a server computer. Similarly, one or more steps of the method may be performed by a client computer. Likewise, one or more steps of the method may be performed by an intermediate entity such as, for example, a proxy server. For instance, one or more steps of the method may be performed in a distributed fashion across the plurality of devices in order to meet one or more objectives. For example, one objective may be to provide load balancing between two or more devices. Another objective may be to restrict a location of one or more of an input data, an output data and any intermediate data therebetween corresponding to one or more steps of the method. For example, in a client-server
environment, sensitive data corresponding to a user may not be allowed to be transmitted to the server computer. Accordingly, one or more steps of the method operating on the sensitive data and/or a derivative thereof may be performed at the client device.
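For illustration, a minimal sketch (with assumed names) of such a client-server split: the raw camera frame, treated as sensitive, is processed on the client device and only a derivative (the extracted human form) is transmitted to the server computer.

```python
# Hypothetical split of processing between an audience device and the server.
from typing import Dict


def extract_human_form_locally(raw_frame: bytes) -> bytes:
    # Runs on the client device; placeholder for on-device person segmentation.
    return raw_frame[:16]  # stand-in for the segmented, background-free image


def client_prepare_upload(raw_frame: bytes, user_id: str) -> Dict[str, bytes]:
    # Only the derivative leaves the client; the raw frame is never transmitted.
    return {"user_id": user_id.encode(), "human_form": extract_human_form_locally(raw_frame)}


def server_compose(upload: Dict[str, bytes], background: str) -> str:
    # The server only ever sees the derivative, which it composites into the background.
    return f"{background}+{upload['user_id'].decode()}"


packet = client_prepare_upload(b"raw-camera-frame-bytes", "fan-7")
scene = server_compose(packet, "stadium_stage")
```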
Overview
The present disclosure describes methods, systems, apparatuses, and devices for facilitating sharing of virtual experience between users. Further, the disclosed system may facilitate a virtual experience between a plurality of users. Further, the disclosed system may aim to isolate a visual representation of a human form, transmit the visual representation digitally, and then project the specific representation into a digital virtual event background with a large number of other similarly transmitted visual representations of other human forms. Further, the visual representations of the human form may be such that the real physical or geographical background associated with locations of the plurality of users may be uncoupled. Further, the disclosed system may couple a virtual background to facilitate the reality-based visual and auditory connection between the plurality of users. Further, the disclosed system may equip the plurality of users with an ability to expand or zoom into a visual appearance of large numbers of people within an event venue, subsequently eliminating limitations of current telecommunication modalities, to observe single and/or multiple simultaneously projected images of human forms of other persons and to directly communicate between persons by text, direct auditory messages, or similar means to allow for human-to-human interaction on a real-time basis within the event venue, irrespective of the event being live or pre-recorded. Further, the plurality of users may be categorized as performers and audience members. Further, a performer, in an instance, may be a user who performs in the event venue. Further, an audience member, in an instance, may be a user who is interested in attending a performance of the performers in the event venue. Further, the methods and systems disclosed herein may be embodied in the form of a software application (executable on the online platform, and/or one or more other devices such as, but not limited to, one or more user devices). Further, the one or more user devices may be categorized as performer devices and audience devices.
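For illustration, a minimal sketch of the core compositing idea, using the Pillow imaging library as an assumed dependency and synthetic stand-in images: a segmented human form with an alpha channel is pasted onto a shared virtual event background, scaled according to its position so that visual perspective varies across the venue.

```python
# Hypothetical compositing of a segmented human form into a virtual venue background.
from PIL import Image

# Stand-ins: a virtual venue background and one segmented human form with an alpha channel.
venue = Image.new("RGBA", (1920, 1080), (20, 20, 60, 255))        # virtual background
human_form = Image.new("RGBA", (200, 400), (220, 180, 150, 255))  # segmented person


def place_in_venue(background: Image.Image, form: Image.Image,
                   seat_xy: tuple, scale: float) -> Image.Image:
    # Scale with distance so nearer attendees appear larger (varying visual perspective).
    resized = form.resize((int(form.width * scale), int(form.height * scale)))
    composed = background.copy()
    composed.paste(resized, seat_xy, resized)  # alpha channel keeps only the human form
    return composed


scene = place_in_venue(venue, human_form, seat_xy=(900, 500), scale=0.5)
scene.save("virtual_scene.png")
```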
Further, the fan may be a human being (in contrast to an avatar), even if their appearance is somewhat altered (how much alteration is too much?), who dedicates an amount of time to the enjoyment of a live (or recorded) performance, whether by listening or viewing. The experience of interacting with another human in this way is “real” because it is reality (merely bridging distance gaps), as opposed to virtual reality (a game, an invention of the imagination). The psychological implications are significant. Further, widespread technology in cameras, speakers, and microphones can replicate an experience when people are at great physical distances. Further, transportation of physical objects over the Internet in 3D print form is already happening.
Further, the genuine interaction may be the experience that humans obtain from interacting with other humans in close physical proximity (which may be sufficient but not necessary; that is, there may be other ways that are superior, for example by avoiding the cost and virology impact of travel). Further, the hologram may be an alternative to screens; a screen is a projection of light on a flat surface, and most of what we see is on screens, which are flat, whereas holograms project light in three dimensions (e.g., the difference between a picture of a wax figure and a hologram of a wax figure in 3D). Further, the hologram may be of a Fan (holograms of Performers already exist, but those Performers cannot see you, which makes the experience not real; Fans and Performers need to see or hear you). Further, “Live” refers to a Performance at which Fans enjoy Performance(s) contemporaneously with Performers and/or other Fans. Further, the performance may be any concert, sports event, or another event in any place in the world, where one or more performers, but not more than fifty Performers, perform for fans. Further, the performer may be an individual who is doing something, the listening to or viewing of which is for the enjoyment of Fans. Further, “Present” refers to a Fan who is attending a Performance regardless of whether geographically close to or far from the Venue. Further, the “Venue” may be any indoor or outdoor location where a performance occurs that can accommodate in-person or virtual fans.
Further, the event venue may correspond to, for example, music shows and/or concerts, sporting events such as, but not limited to, baseball, basketball, football, golf, hockey, racing, soccer, etc., large gathering events such as but not limited to, circus, tutorial courses, exercise sessions, festivals, museum visits, night clubs, protests, shopping, theater, theme parks, tours, etc., and so on. Further, the disclosed system may facilitate interaction between the plurality of users based on a plurality of social media platforms such as Facebook, Twitter, Instagram, etc. Further, the software application may facilitate the linking of social media accounts associated with the plurality of users, such that the plurality of users may choose to interact
amongst each other on a basis of mutual interests in the event venues. Additionally and/or alternatively, the plurality of users may select one or more other users based on the mutual interests to recommend event venues, past preferences for the event venues of the one or more other users, future event tickets of the one or more other users, etc. Further, the disclosed system may aim to facilitate the sharing of virtual experience between the plurality of users using two modes, namely a land gate and a cloud gate. Accordingly, the land gate may facilitate the plurality of users to access the event venue by attending the event venue in person. Further, in such an instance, the software application may enable the sale of tickets for the physical attendance of the plurality of users. Further, the software application may help retail sales with a first-class e-commerce experience accessible using the software application at the event venue, and a user may pick up merchandise of choice at a designated kiosk with a QR code, such that the software application-based retail sales, with QR-based kiosk pickup, may reduce contact with the other one or more users. Further, the software application may connect the plurality of users to providers of local and long-distance travel, to help facilitate attendance. Further, the software application may use QR-based tickets rather than paper tickets, which may reduce the risk of infection spreading between the plurality of users.
Further, the disclosed system may enable the simultaneous computational engagement of large numbers of persons to digitally transmit a visual image of their human form and to permit each person the ability to positionally view, and communicate with, other persons in the event venue. Further, the disclosed system may be configured for holding events that people cannot attend in person, whether because of restrictions due to a pandemic or reasons where a person is unable, or unwilling, to travel to an event venue, whether the event is a concert, theatre production, sporting event, political assembly, life cycle event, or other large gathering of persons.
Accordingly, the cloud gate may facilitate the plurality of users to access the event venue by attending the event venue virtually. Further, in such an instance, the software application may enable the sale of tickets for remote attendance of the plurality of users, thereby eliminating the risk of spreading infection between the plurality of users. Further, the software application may facilitate attending of the plurality of users virtually, such that the plurality of users may experience an immersive visual and audio experience similar to attending the event in-person at the
convenience of being present geographically anywhere, while drastically reducing costs by reducing the need for event staff, utility costs, and facility use. Further, the software application may enable seeing the other one or more users attending the event, irrespective of the mode. Further, the software application may facilitate interaction among the plurality of users across both modes. Further, the software application may facilitate purchasing the merchandise online, which may reduce transaction time and encourage purchasing, thereby increasing overall retail sales.
Further, the disclosed system may be used for music (concerts, shows); Sports (baseball, basketball, football, golf, hockey, racing, soccer); and gatherings (circus, classes, exercise, festivals, museums, night clubs, protests, shopping, theater, theme parks, tours).
Further, the disclosed system may be configured for isolating the human forms and inserting/projecting them into a digital event. Further, the disclosed system may be configured for expanding past the capabilities and limitations of other teleconferencing solutions by letting a user see and friend large numbers of people (in the thousands), rather than viewing them in little boxes. Further, the disclosed system may be configured for interacting with anyone in the crowd, identifying a specific person, and communicating with that person (via audio if that person permits it, via social media, etc.), with the level of visual detail determined by the number of pixels available. Further, the disclosed system may be configured for finding ways to support new social relationships, as human behavior is changing worldwide and irrevocably. Further, the disclosed system may be configured for interaction with performers using high-end cameras at the venue, whereas other approaches focus on the interaction between performers and fans but miss the social aspects among fans.
Further, the disclosed system may be configured for enabling stronger connections among more humans. Further, the disclosed system may be configured for fostering the human need for togetherness. Further, the disclosed system may be configured for reducing the carbon footprint associated with travel. Further, the disclosed system may be configured for reducing the incidence of infection.
Further, using audiovisual technology, cloud computing, and currently-available Internet bandwidth, large groups of people can experience togetherness while physically distant. Further, the disclosed system may put everyone together; using cloud computing, currently-available virtual reality technology, and virtual 3D spaces, it is possible to place a large number of people in one visual field. Further, the disclosed system may replicate the in-person audiovisual experience. At an in-person
event, fans can see thousands of other fans, but at varying visual sizes (perspective) and audible levels.
Further, FANtech, an exemplary embodiment of the disclosed system herein, enables the fan to see the event as a “real” event - with a full venue in view. Further, FANtech enables the fan to meet other fans, “like” other fans, chat with other fans (both text and voice), and connect on social media. Other industry leaders have limited text chat functionality (a chat room style, circa the 1990s) within a “meeting”. FANtech recreates the in-person venue experience for performers by filling a screen with fans as they would appear in a physical venue. Most conventional live-streaming in 2020 is “one-way,” where large event performances only broadcast the performers to the fans, not the fans to the performers. FANtech enables performers to sell physical and virtual goods with a seamless eCommerce app. Most conventional online event platforms do not have eCommerce capabilities. FANtech encourages fans to connect their social media presence to FANtech, so fans can share their experiences on social media, find social friends on FANtech, and find FANtech likes on social media. Most conventional online event platforms do not connect with social media. FANtech includes a recommendation engine that considers past preferences, friends’ past preferences, and friends’ future event tickets, to suggest new events. Most conventional online event platforms do not have a recommendation engine. FANtech integrates in-person tickets and in-person venues to bridge physical and remote attendance. Most conventional live-streaming in 2020 is a self-contained platform, which does not consider alternative media of presentation.
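By way of a non-limiting illustration, the following sketch shows one way such a recommendation engine could weigh a fan's own past preferences, friends' past preferences, and friends' future event tickets; the weights, genres, and data shapes are assumptions introduced for the example only.

```python
# Illustrative recommendation scoring; weights, genres, and data shapes are
# assumptions for this sketch, not taken from the disclosure.
from collections import Counter


def recommend_events(own_genres, friends_genres, friends_future_events, candidates, top_n=3):
    """Rank candidate events (dicts with 'event_id' and 'genre') by interest signals."""
    scores = Counter()
    for event in candidates:
        event_id, genre = event["event_id"], event["genre"]
        scores[event_id] += 3.0 * own_genres.count(genre)                # own past preferences
        scores[event_id] += 1.5 * friends_genres.count(genre)            # friends' past preferences
        scores[event_id] += 2.0 * friends_future_events.count(event_id)  # friends already ticketed
    return [event_id for event_id, _ in scores.most_common(top_n)]


if __name__ == "__main__":
    candidates = [
        {"event_id": "rock-fest-2021", "genre": "rock"},
        {"event_id": "jazz-night-12", "genre": "jazz"},
    ]
    print(recommend_events(["rock"], ["rock", "jazz"], ["jazz-night-12"], candidates))
```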
Further, FANtech enables the sale of tickets for remote attendance. By attending through the Cloud Gate, fans can have an authentic visual and audio experience, comparable to attending an event in person. Further, the fans who attend through the Cloud Gate can see when friends and likes attend, whether through the Land Gate or the Cloud Gate. Further, the Cloud Gate Fans can “like” both Land Gate Fans and other Cloud Gate Fans, expanding their circles. Cloud Gate Fans can purchase physical and virtual merchandise from the FANtech app. Sale losses resulting from limited inventories can be stemmed by central warehousing and just-in-time production. Further, the ease of purchasing through the Cloud Gate reduces transaction time and encourages purchasing, which will increase overall retail sales. Travel to an event through the Cloud Gate is limited only by Internet bandwidth, which is widely available worldwide at minimal cost. Further, the disclosed system
may be configured for facilitating traveling to an event through the Cloud Gate, such that the carbon footprint of event attendance is significantly reduced. Further, remote attendance associated with the disclosed system eliminates the risk of infection from event attendance. Remote attendance also drastically reduces costs by reducing the need for event staff, utility costs, and facility use.
Further, humans are social beings who connect emotionally, using body language and verbal cues to build feelings. In uncertain times during a pandemic, people are restricted from attending concerts, shows, sports, or other events. Just watching a broadcasted event, without social interaction, does not satisfy people's need and craving to interact. When someone's face looms large in your visual sphere in real life, it generally triggers either fight or mate reactions. Video calls generally just show participants' faces filling multiple boxes. Seeing multiple faces in up to 25 boxes can overwhelm the body's nervous system. People are constantly interpreting non-verbal cues from body movement, posture, etc. Some conventional calls do not satisfy people's need for a deeper level of interaction than just face-to-face viewing.
Further, the disclosed system may insert participants from remote locations into a live location, to optimize social interaction. Further, the disclosed system may fill the void that people feel by not interacting with others as if they were present at the event. Further, the disclosed system may keep participants physically apart during the pandemic, yet together.
Further, the disclosed system may utilize holograms to move beyond flat screens. Further, the disclosed system may transmit other senses over the Internet: touch, taste, and smell. Further, the touch may be associated with haptic feedback, such that a tap on the shoulder sent over the Internet may be felt. Further, taste and smell may be associated with remote cooking demonstrations.
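By way of a non-limiting illustration, the following sketch shows how a haptic cue such as a tap on the shoulder might be encoded as a message for transmission over the Internet; the message fields are assumptions introduced for the example and do not define a protocol of the disclosure.

```python
# Illustrative haptic event message; the schema is an assumption for this
# sketch and not a protocol defined by the disclosure.
import json
import time
from dataclasses import asdict, dataclass


@dataclass
class HapticEvent:
    sender_id: str
    recipient_id: str
    body_location: str      # e.g., "left_shoulder"
    intensity: float        # 0.0 (imperceptible) to 1.0 (strong)
    duration_ms: int
    timestamp: float


def encode_shoulder_tap(sender_id: str, recipient_id: str) -> str:
    """Build a 'tap on the shoulder' event and serialize it for transmission."""
    event = HapticEvent(
        sender_id=sender_id,
        recipient_id=recipient_id,
        body_location="left_shoulder",
        intensity=0.4,
        duration_ms=150,
        timestamp=time.time(),
    )
    return json.dumps(asdict(event))


if __name__ == "__main__":
    print(encode_shoulder_tap("fan-7", "fan-42"))
```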
Further, the disclosed system may use smart devices and the Internet to transmit sight and sound across physical barriers. Further, the disclosed system may allow users to experience togetherness through screens, cameras, speakers, and microphones. Fans in different places across the globe may unite at any show and have that interaction using the disclosed system.
FIG. 1 is an illustration of an online platform 100 consistent with various embodiments of the present disclosure. By way of non-limiting example, the online platform 100 for facilitating sharing of virtual experience between users may be hosted on a centralized server 102, such as, for example, a cloud computing service.
The centralized server 102 may communicate with other network entities, such as, for example, a mobile device 106 (such as a smartphone, a laptop, a tablet computer, etc.), other electronic devices 110 (such as desktop computers, server computers, etc.), databases 114, and sensors 116 over a communication network 104, such as, but not limited to, the Internet. Further, users of the online platform 100 may include relevant parties such as, but not limited to, end-users, administrators, service providers, service consumers, and so on. Accordingly, in some instances, electronic devices operated by the one or more relevant parties may be in communication with the platform.
A user 112, such as the one or more relevant parties, may access online platform 100 through a web-based software application or browser. The web-based software application may be embodied as, for example, but not be limited to, a website, a web application, a desktop application, and a mobile application compatible with a computing device X00.
With reference to FIG. 2, a system consistent with an embodiment of the disclosure may include a computing device or cloud service, such as computing device 200. In a basic configuration, computing device 200 may include at least one processing unit 202 and a system memory 204. Depending on the configuration and type of computing device, system memory 204 may comprise, but is not limited to, volatile (e.g., random-access memory (RAM)), non-volatile (e.g., read-only memory (ROM)), flash memory, or any combination thereof. System memory 204 may include operating system 205, one or more programming modules 206, and may include program data 207. Operating system 205, for example, may be suitable for controlling computing device 200's operation. In one embodiment, programming modules 206 may include an image-processing module and a machine learning module. Furthermore, embodiments of the disclosure may be practiced in conjunction with a graphics library, other operating systems, or any other application program, and are not limited to any particular application or system. This basic configuration is illustrated in FIG. 2 by those components within a dashed line 208.
Computing device 200 may have additional features or functionality. For example, computing device 200 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 2 by a removable storage 209 and a non-removable storage 210. Computer storage media may include volatile
and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. System memory 204, removable storage 209, and non-removable storage 210 are all computer storage media examples (i.e., memory storage.) Computer storage media may include, but is not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store information and which can be accessed by computing device 200. Any such computer storage media may be part of device 200. Computing device 200 may also have input device(s) 212 such as a keyboard, a mouse, a pen, a sound input device, a touch input device, a location sensor, a camera, a biometric sensor, etc. Output device(s) 214 such as a display, speakers, a printer, etc. may also be included. The aforementioned devices are examples and others may be used.
Computing device 200 may also contain a communication connection 216 that may allow device 200 to communicate with other computing devices 218, such as over a network in a distributed computing environment, for example, an intranet or the Internet. Communication connection 216 is one example of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” may describe a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media. The term computer readable media as used herein may include both storage media and communication media.
As stated above, a number of program modules and data files may be stored in system memory 204, including operating system 205. While executing on processing unit 202, programming modules 206 (e.g., application 220 such as a media player) may perform processes including, for example, one or more stages of methods, algorithms, systems, applications, servers, databases as described above. The
aforementioned process is an example, and processing unit 202 may perform other processes. Other programming modules that may be used in accordance with embodiments of the present disclosure may include machine learning applications.
Generally, consistent with embodiments of the disclosure, program modules may include routines, programs, components, data structures, and other types of structures that may perform particular tasks or that may implement particular abstract data types. Moreover, embodiments of the disclosure may be practiced with other computer system configurations, including hand-held devices, general purpose graphics processor-based systems, multiprocessor systems, microprocessor-based or programmable consumer electronics, application specific integrated circuit-based electronics, minicomputers, mainframe computers, and the like. Embodiments of the disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Furthermore, embodiments of the disclosure may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. Embodiments of the disclosure may also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to mechanical, optical, fluidic, and quantum technologies. In addition, embodiments of the disclosure may be practiced within a general-purpose computer or in any other circuits or systems.
Embodiments of the disclosure, for example, may be implemented as a computer process (method), a computing system, or as an article of manufacture, such as a computer program product or computer readable media. The computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for executing a computer process. The computer program product may also be a propagated signal on a carrier readable by a computing system and encoding a computer program of instructions for executing a computer process. Accordingly, the present disclosure may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). In other words, embodiments of the present disclosure may take the form of a computer
program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. A computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. As more specific computer-readable medium examples (a non-exhaustive list), the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, and a portable compact disc read-only memory (CD-ROM). Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
Embodiments of the present disclosure, for example, are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to embodiments of the disclosure. The functions/acts noted in the blocks may occur out of the order as shown in any flowchart. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
While certain embodiments of the disclosure have been described, other embodiments may exist. Furthermore, although embodiments of the present disclosure have been described as being associated with data stored in memory and other storage mediums, data can also be stored on or read from other types of computer-readable media, such as secondary storage devices, like hard disks, solid state storage (e.g., USB drive), or a CD-ROM, a carrier wave from the Internet, or other forms of RAM or ROM. Further, the disclosed methods’ stages may be
modified in any manner, including by reordering stages and/or inserting or deleting stages, without departing from the disclosure.
FIG. 3 is a flowchart of a method 300 for facilitating sharing of virtual experience between users, in accordance with some embodiments.
Further, the method 300 may include a step 302 of receiving, using a communication device, one or more audience data associated with two or more audience members from two or more audience devices associated with the two or more audience members. Further, the two or more audience members watch and participate in one or more virtual events. Further, the method 300 may include a step 304 of receiving, using the communication device, one or more performer data associated with one or more performers from one or more performer devices associated with the one or more performers. Further, the one or more performers perform and participate in the one or more virtual events.
Further, the method 300 may include a step 306 of analyzing, using a processing device, the one or more performer data and the one or more audience data.
Further, the method 300 may include a step 308 of extracting, using the processing device, one or more human forms corresponding to one or more of the two or more audience members and the one or more performers based on the analyzing. Further, the one or more human forms may include one or more human form data.
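By way of a non-limiting illustration, the following sketch shows one possible stand-in for the extraction of step 308, using OpenCV background subtraction to obtain a foreground mask of a person captured by an audience device; the disclosure does not mandate this particular technique.

```python
# Illustrative foreground (human form) extraction using OpenCV background
# subtraction; an assumed stand-in for the disclosure's extraction step.
import cv2


def extract_human_form(frames):
    """Yield (frame, foreground_mask) pairs for a sequence of BGR frames."""
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    for frame in frames:
        mask = subtractor.apply(frame)                     # 0 = background, 255 = foreground
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove small noise blobs
        yield frame, mask


def frames_from_camera(device_index=0, max_frames=100):
    """Read frames from a local camera (standing in for an audience device)."""
    capture = cv2.VideoCapture(device_index)
    try:
        for _ in range(max_frames):
            ok, frame = capture.read()
            if not ok:
                break
            yield frame
    finally:
        capture.release()


if __name__ == "__main__":
    for frame, mask in extract_human_form(frames_from_camera()):
        cv2.imshow("human form mask", mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cv2.destroyAllWindows()
```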
Further, the method 300 may include a step 310 of generating, using the processing device, one or more human images of one or more of the two or more audience members and the one or more performers based on the one or more human forms. Further, the one or more human images may include one or more virtual representations.
Further, the method 300 may include a step 312 of receiving, using the communication device, one or more background data of the one or more virtual events from the one or more performer devices. Further, the one or more background data may include one or more virtual backgrounds for the one or more virtual events.
FIG. 4 is a continuation flowchart of FIG. 3.
Further, the method 300 may include a step 314 of combining, using the processing device, the one or more human images with the one or more virtual backgrounds based on the generating.
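By way of a non-limiting illustration, the combining of step 314 may be pictured as alpha compositing of the extracted human images onto a virtual background; the following NumPy sketch assumes per-pixel masks such as those produced in the earlier extraction sketch and is not the claimed implementation.

```python
# Illustrative compositing of extracted human images onto a virtual background
# using NumPy alpha blending; an assumed illustration, not the claimed method.
import numpy as np


def composite(human_image, mask, background, top_left=(0, 0)):
    """Paste a masked human image onto a background at the given position.

    human_image: HxWx3 uint8, mask: HxW uint8 (0..255), background: BHxBWx3 uint8.
    """
    out = background.astype(np.float32).copy()
    h, w = mask.shape
    y, x = top_left
    alpha = (mask.astype(np.float32) / 255.0)[..., None]   # HxWx1 per-pixel blend weights
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = alpha * human_image + (1.0 - alpha) * region
    return out.astype(np.uint8)


if __name__ == "__main__":
    stage = np.zeros((720, 1280, 3), dtype=np.uint8)        # placeholder virtual background
    fan = np.full((200, 100, 3), 200, dtype=np.uint8)       # placeholder human image
    fan_mask = np.full((200, 100), 255, dtype=np.uint8)     # fully opaque mask
    scene = composite(fan, fan_mask, stage, top_left=(400, 600))
    print(scene.shape)
```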
Further, the method 300 may include a step 316 of creating, using the
processing device, a virtual interactive space based on the combining. Further, the virtual interactive space may include the one or more human images of one or more of the two or more audience members and the one or more performers in the one or more virtual backgrounds.
Further, the method 300 may include a step 318 of receiving, using the communication device, one or more interaction data of one or more interactions of one or more of the two or more audience members and the one or more performers from one or more of the two or more audience devices and the one or more performer devices.
Further, the method 300 may include a step 320 of generating, using the processing device, a modified virtual interactive space data based on each of the one or more interaction data and the virtual interactive space. Further, the modified virtual interactive space data may include the one or more human images and the one or more interactions of one or more of the two or more audience members and the one or more performers within the one or more virtual backgrounds.
Further, the method 300 may include a step 322 of transmitting, using the communication device, the modified virtual interactive space data to the one or more of the two or more audience devices and the one or more performer devices.
Further, the method 300 may include a step 324 of storing, using a storage device, one or more of the one or more audience data, the one or more performer data, and the one or more background data.
In some embodiments, the one or more performer data may include one or more of a performer’s appearance, a performer’s gesture, a performer’s verbal expression, a performer’s nonverbal expression, and a performer’s movement. Further, the one or more performer devices may include one or more of a performer image sensor, a performer microphone, and a performer motion sensor. Further, one or more of the performer image sensor, the performer microphone, and the performer motion sensor may be configured for generating the one or more performer data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the one or more performers.
In some embodiments, the one or more audience data may include one or more of an audience member’s appearance, an audience member’s gesture, an audience member’s verbal expression, an audience member’s nonverbal expression, and an audience member’s movement. Further, the two or more audience devices may
include one or more of an audience image sensor, an audience microphone, and an audience motion sensor. Further, one or more of the audience image sensor, the audience microphone, and the audience motion sensor may be configured for generating the one or more audience data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the two or more audience members.
FIG. 5 is a flowchart of a method 500 for facilitating sharing of virtual experience between users in which the method 500 further may include transmitting the one or more virtual event interest data to the two or more audience devices, in accordance with some embodiments. Further, at 502, the method 500 may include receiving, using the communication device, one or more audience member data associated with the two or more audience members from one or more social media platforms associated with the two or more audience members. Further, the one or more social media platforms may be hosted by one or more social media servers. Further, at 504, the method 500 may include analyzing, using the processing device, the one or more audience member data. Further, at 506, the method 500 may include generating, using the processing device, one or more virtual event interest data based on the analyzing of the one or more audience member data. Further, the one or more virtual event interest data may include one or more similar interests shown by one or more first audience members of the two or more audience members and one or more second audience members of the two or more audience members in the one or more virtual events. Further, at 508, the method 500 may include transmitting, using the communication device, the one or more virtual event interest data to the two or more audience devices.
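By way of a non-limiting illustration, the following sketch derives shared virtual event interests from interest tags linked to audience members' social media profiles; the data shapes are assumptions introduced for the example.

```python
# Illustrative derivation of shared virtual-event interests from social media
# profile tags; the data shapes are assumptions for this sketch.
def shared_event_interests(profiles, upcoming_events):
    """Return {(member_a, member_b): [event ids both are likely interested in]}."""
    matches = {}
    members = sorted(profiles)
    for i, member_a in enumerate(members):
        for member_b in members[i + 1:]:
            common_tags = profiles[member_a] & profiles[member_b]
            events = [e["event_id"] for e in upcoming_events
                      if e["genre"] in common_tags]
            if events:
                matches[(member_a, member_b)] = events
    return matches


if __name__ == "__main__":
    profiles = {"fan-1": {"rock", "soccer"}, "fan-2": {"rock", "jazz"}}
    events = [{"event_id": "rock-fest-2021", "genre": "rock"}]
    print(shared_event_interests(profiles, events))
```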
FIG. 6 is a flowchart of a method 600 for facilitating sharing of virtual experience between users in which the method 600 further may include transmitting the one or more tickets to the one or more audience devices, in accordance with some embodiments. Further, the one or more background data may include one or more locations of one or more virtual seats in the one or more virtual backgrounds for the two or more audience members. Further, the method 600 may include a step 602 of analyzing, using the processing device, the one or more background data and the virtual interactive space. Further, the method 600 may include a step 604 of generating, using the processing device, one or more virtual interactive space views of the virtual interactive space corresponding to the one or
more virtual seats based on the analyzing of the one or more background data and the virtual interactive space. Further, the method 600 may include a step 606 of transmitting, using the communication device, the one or more virtual interactive space views of the virtual interactive space corresponding to the one or more virtual seats to the two or more audience devices. Further, the method 600 may include a step 608 of receiving, using the communication device, one or more seat indications of one or more selected virtual seats of the one or more virtual seats from one or more audience devices associated with one or more audience members. Further, the method 600 may include a step 610 of issuing, using the processing device, one or more tickets for the one or more selected virtual seats to the one or more audience members based on the one or more seat indications of the one or more selected virtual seats for the one or more virtual events. Further, the method 600 may include a step 612 of transmitting, using the communication device, the one or more tickets to the one or more audience devices.
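By way of a non-limiting illustration, the following sketch models the virtual seats, per-seat camera poses, and ticket issuance of method 600; the seat layout, pose fields, and helper names are assumptions introduced for the example.

```python
# Illustrative virtual-seat catalogue and ticket issuance for method 600;
# layout, camera-pose fields, and names are assumptions for this sketch.
import uuid
from dataclasses import dataclass


@dataclass
class VirtualSeat:
    seat_id: str
    camera_position: tuple   # (x, y, z) in the virtual venue, assumed units
    camera_yaw_deg: float    # viewing direction toward the stage
    taken: bool = False


def build_seating(rows=3, cols=4, row_depth=2.0, seat_width=1.0):
    """Lay out a small grid of virtual seats facing a stage at the origin."""
    seats = []
    for r in range(rows):
        for c in range(cols):
            seats.append(VirtualSeat(
                seat_id=f"R{r + 1}C{c + 1}",
                camera_position=(c * seat_width, 1.2, -(r + 1) * row_depth),
                camera_yaw_deg=0.0,
            ))
    return seats


def issue_ticket(seats, seat_id, member_id, event_id):
    """Mark a selected seat as taken and return an assumed ticket record."""
    seat = next(s for s in seats if s.seat_id == seat_id and not s.taken)
    seat.taken = True
    return {"ticket_id": str(uuid.uuid4()), "event_id": event_id,
            "member_id": member_id, "seat_id": seat.seat_id,
            "view_pose": {"position": seat.camera_position,
                          "yaw_deg": seat.camera_yaw_deg}}


if __name__ == "__main__":
    seats = build_seating()
    print(issue_ticket(seats, "R1C2", "fan-42", "concert-001"))
```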
FIG. 7 is a flowchart of a method 700 for facilitating sharing of virtual experience between users in which the method 700 further may include rendering the one or more human forms with the one or more selected virtual merchandises based on the processing, in accordance with some embodiments. Further, at 702, the method 700 may include transmitting, using the communication device, one or more virtual merchandises for the one or more human forms to the two or more audience devices. Further, at 704, the method 700 may include receiving, using the communication device, one or more merchandise indications for purchasing of one or more selected virtual merchandises of the one or more virtual merchandises from one or more audience devices associated with one or more audiences. Further, at 706, the method 700 may include processing, using the processing device, one or more transactions associated with the purchasing of the one or more selected virtual merchandises based on the one or more merchandise indications. Further, at 708, the method 700 may include rendering, using the processing device, the one or more human forms with the one or more selected virtual merchandises based on the processing. Further, the generating of the one or more human images may be based on the rendering.
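By way of a non-limiting illustration, the following sketch processes a purchase transaction of method 700 and attaches the selected virtual merchandise to the buyer's human form record so that subsequent image generation can render it; the record shapes, catalogue, and prices are assumptions introduced for the example.

```python
# Illustrative virtual-merchandise purchase and attachment for method 700;
# record shapes, prices, and names are assumptions for this sketch.
import uuid

CATALOGUE = {
    "tour-tshirt": {"price_usd": 25.0, "attach_to": "torso"},
    "glow-stick": {"price_usd": 5.0, "attach_to": "right_hand"},
}


def purchase_merchandise(human_form, merchandise_id, payment_authorized=True):
    """Process a purchase and attach the item to the human form for rendering."""
    if merchandise_id not in CATALOGUE or not payment_authorized:
        raise ValueError("purchase could not be processed")
    item = CATALOGUE[merchandise_id]
    transaction = {"transaction_id": str(uuid.uuid4()),
                   "member_id": human_form["member_id"],
                   "merchandise_id": merchandise_id,
                   "amount_usd": item["price_usd"]}
    human_form.setdefault("attached_merchandise", []).append(
        {"merchandise_id": merchandise_id, "attach_to": item["attach_to"]})
    return transaction


if __name__ == "__main__":
    avatar = {"member_id": "fan-42"}
    print(purchase_merchandise(avatar, "tour-tshirt"))
    print(avatar)
```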
FIG. 8 is a flowchart of a method 800 for facilitating sharing of virtual experience between users in which the method 800 further may include modifying the virtual interactive space based on the one or more actions, in accordance with some embodiments. Further, at 802, the method 800 may include analyzing, using the
processing device, the one or more interaction data using one or more machine learning models. Further, the one or more machine learning models may be trained for detecting actions of one or more of the two or more audience members and the one or more performers. Further, at 804, the method 800 may include determining, using the processing device, one or more actions corresponding to one or more of the two or more audience members and the one or more performers based on the analyzing of the one or more interaction data. Further, at 806, the method 800 may include modifying, using the processing device, the virtual interactive space based on the one or more actions. Further, the generating of the modified virtual interactive space data may be based on the modifying.
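By way of a non-limiting illustration, the following sketch is a simple heuristic stand-in for the trained action-detection models of method 800; it flags a raised hand when a wrist keypoint stays above the shoulder keypoint for several frames, and a deployed system would presumably use a learned model instead.

```python
# Illustrative stand-in for action detection in method 800: a simple keypoint
# heuristic flags a raised hand; thresholds and keypoint layout are assumptions.
def detect_hand_raised(keypoint_frames, min_frames=5):
    """keypoint_frames: list of dicts with 'right_wrist_y' and 'right_shoulder_y'
    in image coordinates (smaller y is higher). Returns True if the wrist stays
    above the shoulder for at least min_frames consecutive frames."""
    streak = 0
    for frame in keypoint_frames:
        if frame["right_wrist_y"] < frame["right_shoulder_y"]:
            streak += 1
            if streak >= min_frames:
                return True
        else:
            streak = 0
    return False


if __name__ == "__main__":
    frames = [{"right_wrist_y": 100, "right_shoulder_y": 200}] * 6
    print(detect_hand_raised(frames))   # True: hand kept above the shoulder
```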
FIG. 9 is a flowchart of a method 900 for facilitating sharing of virtual experience between users in which the method 900 further may include establishing one or more interaction sessions between one or more of the one or more audience members and the one or more performers and one or more of the one or more first audience members and the one or more first performers in real-time based on the identifying, in accordance with some embodiments. Further, at 902, the method 900 may include identifying, using the processing device, one or more of one or more first audience members and one or more first performers based on the determining of the one or more actions. Further, at 904, the method 900 may include establishing, using the processing device, one or more interaction sessions between one or more of the one or more audience members and the one or more performers and one or more of the one or more first audience members and the one or more first performers in realtime based on the identifying. Further, the modifying of the virtual interactive space may be based on the establishing. Further, the establishing of the one or more interaction sessions allows one or more of the one or more audience members and the one or more performers to interact with one or more of the one or more first audience members and the one or more first performers in the real-time.
FIG. 10 is a flowchart of a method 1000 for facilitating sharing of virtual experience between users in which the method 1000 further may include transmitting the one or more virtual experiences corresponding to one or more of the plurality of audience members and the one or more performers to one or more of the plurality of audience member devices and the one or more performer devices, in accordance with some embodiments. Further, at 1002, the method 1000 may include generating, using the processing device, one or more virtual experiences of the virtual interactive space
for one or more of the plurality of audience members and the one or more performers based on the virtual interactive space, the one or more audience member data, and the one or more performer data. Further, at 1004, the method 1000 may include transmitting, using the communication device, the one or more virtual experiences corresponding to one or more of the plurality of audience members and the one or more performers to one or more of the plurality of audience member devices and the one or more performer devices.
FIG. 11 is a flowchart of a method 1100 for facilitating sharing of virtual experience between users in which the method 1100 further may include determining one or more attending parameters for attending the one or more virtual events at the one or more event venues by one or more of the two or more audience members and the one or more performers based on the analyzing of the one or more event venue data, in accordance with some embodiments. Further, at 1102, the method 1100 may include receiving, using the communication device, one or more event venue data associated with one or more event venues of the one or more virtual events from the one or more performer devices. Further, at 1104, the method 1100 may include analyzing, using the processing device, the one or more event venue data using one or more first machine learning models. Further, the one or more first machine learning models may be trained for detecting attending parameters for attending the one or more virtual events at the one or more event venues. Further, at 1106, the method 1100 may include determining, using the processing device, one or more attending parameters for attending the one or more virtual events at the one or more event venues by one or more of the two or more audience members and the one or more performers based on the analyzing of the one or more event venue data. Further, the creating of the virtual interactive space may be further based on the one or more attending parameters. Further, the one or more attending parameters may include one or more seating areas in the one or more event venues, one or more performing areas in the one or more event venues, etc. Further, the one or more seating areas may include a virtual area for two or more human images of the two or more audience members. Further, the one or more performing areas may include a virtual area for one or more human images of the one or more performers.
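By way of a non-limiting illustration, the following sketch derives attending parameters by partitioning a rectangular venue footprint into a performing area and a seating area; the dimensions and split ratio are assumptions introduced for the example.

```python
# Illustrative derivation of attending parameters (performing and seating areas)
# from a venue footprint; dimensions and the split ratio are assumptions.
def attending_parameters(venue_width_m, venue_depth_m, stage_fraction=0.25):
    """Split a rectangular venue footprint into a performing area (near the back
    wall) and a seating area (the remainder), both as axis-aligned rectangles."""
    stage_depth = venue_depth_m * stage_fraction
    return {
        "performing_area": {"x": 0.0, "z": 0.0,
                            "width_m": venue_width_m, "depth_m": stage_depth},
        "seating_area": {"x": 0.0, "z": stage_depth,
                         "width_m": venue_width_m,
                         "depth_m": venue_depth_m - stage_depth},
    }


if __name__ == "__main__":
    print(attending_parameters(40.0, 60.0))
```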
FIG. 12 is a block diagram of a system 1200 for facilitating sharing of virtual experience between users, in accordance with some embodiments. The system 1200 may include a communication device 1202, a processing device 1204, and a storage
device 1206.
Further, the communication device 1202 may be configured for performing a step of receiving one or more audience data associated with two or more audience members from two or more audience devices associated with the two or more audience members. Further, the two or more audience members watch and participate in one or more virtual events. Further, the communication device 1202 may be configured for performing a step of receiving one or more performer data associated with one or more performers from one or more performer devices associated with the one or more performers. Further, the one or more performers perform and participate in the one or more virtual events.
Further, the communication device 1202 may be configured for performing a step of receiving one or more background data of one or more virtual events from the one or more performer devices. Further, the one or more background data may include one or more virtual backgrounds for the one or more virtual events.
Further, the communication device 1202 may be configured for performing a step of receiving one or more interaction data of one or more interactions of one or more of the two or more audience members and the one or more performers from one or more of the two or more audience devices and the one or more performer devices.
Further, the communication device 1202 may be configured for performing a step of transmitting a modified virtual interactive space data to the one or more of the two or more audience devices and the one or more performer devices. Further, the modified virtual interactive space data may include the one or more human images and the one or more interactions of one or more of the two or more audience members and the one or more performers within the one or more virtual backgrounds.
The processing device 1204 may be communicatively coupled with the communication device 1202.
Further, the processing device 1204 may be configured for performing a step of analyzing the one or more performer data and the one or more audience data.
Further, the processing device 1204 may be configured for performing a step of extracting one or more human forms corresponding to one or more of the two or more audience members and the one or more performers based on the analyzing.
Further, the processing device 1204 may be configured for performing a step of generating one or more human images of one or more of the two or more audience members and the one or more performers based on the one or more human forms.
Further, the processing device 1204 may be configured for performing a step of combining the one or more human images with the one or more virtual backgrounds based on the generating.
Further, the processing device 1204 may be configured for performing a step of creating a virtual interactive space based on the combining. Further, the virtual interactive space may include the one or more human images of one or more of the two or more audience members and the one or more performers in the one or more virtual backgrounds.
Further, the processing device 1204 may be configured for performing a step of generating the modified virtual interactive space data based on each of the one or more interaction data and the virtual interactive space.
The storage device 1206 may be communicatively coupled with the processing device 1204.
Further, the storage device 1206 may be configured for performing a step of storing one or more of the one or more audience data, the one or more performer data, and the one or more background data.
Further, in some embodiments, the communication device 1202 may be configured for receiving one or more audience member data associated with the plurality of audience members from one or more social media platforms associated with the plurality of audience members. Further, the communication device 1202 may be configured for transmitting at least one virtual event interest data to the plurality of audience devices. Further, the processing device 1204 may be configured for analyzing the one or more audience member data. Further, the processing device 1204 may be configured for generating the at least one virtual event interest data based on the analyzing of the one or more audience member data. Further, the at least one virtual event interest data may include one or more similar interests shown by one or more first audience members of the plurality of audience members and one or more second audience members of the plurality of audience members in the at least one virtual event.
Further, in some embodiments, the at least one background data may include one or more locations of one or more virtual seats in the at least one virtual background for the plurality of audience members. Further, the processing device 1204 may be configured for analyzing the at least one background data and the virtual interactive space. Further, the processing device 1204 may be configured for generating
one or more virtual interactive space views of the virtual interactive space corresponding to the one or more virtual seats based on the analyzing of the at least one background data and the virtual interactive space. Further, the processing device 1204 may be configured for issuing one or more tickets for one or more selected virtual seats to the one or more audience members based on one or more seat indications of the one or more selected virtual seats for the at least one virtual event. Further, the communication device 1202 may be configured for transmitting the one or more virtual interactive space views of the virtual interactive space corresponding to the one or more virtual seats to the plurality of audience devices. Further, the communication device 1202 may be configured for receiving the one or more seat indications of the one or more selected virtual seats of the one or more virtual seats from one or more audience devices associated with one or more audience members. Further, the communication device 1202 may be configured for transmitting the one or more tickets to the one or more audience devices.
Further, in some embodiments, the communication device 1202 may be configured for transmitting one or more virtual merchandises for the one or more human forms to the plurality of audience devices. Further, the communication device 1202 may be configured for receiving one or more merchandise indications for purchasing of one or more selected virtual merchandises of the one or more virtual merchandises from one or more audience devices associated with one or more audiences. Further, the processing device 1204 may be configured for processing one or more transactions associated with the purchasing of the one or more selected virtual merchandises based on the one or more merchandise indications. Further, the processing device 1204 may be configured for rendering the one or more human forms with the one or more selected virtual merchandises based on the processing. Further, the generating of the one or more human images may be based on the rendering.
Further, in some embodiments, the one or more audience data may include one or more of an audience member’s appearance, an audience member’s gesture, an audience member’s verbal expression, an audience member’s nonverbal expression, and an audience member’s movement. Further, the plurality of audience devices may include one or more of an audience image sensor, an audience microphone, and an audience motion sensor. Further, one or more of the audience image sensor, the audience microphone, and the audience motion sensor may be configured for
generating the one or more audience data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the plurality of audience members.
Further, in some embodiments, the one or more performer data may include one or more of a performer’s appearance, a performer’s gesture, a performer’s verbal expression, a performer’s nonverbal expression, and a performer’s movement. Further, the one or more performer devices may include one or more of a performer image sensor, a performer microphone, and a performer motion sensor. Further, one or more of the performer image sensor, the performer microphone, and the performer motion sensor may be configured for generating the one or more performer data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the one or more performers.
Further, in some embodiments, the processing device 1204 may be configured for analyzing the at least one interaction data using one or more machine learning models. Further, the one or more machine learning models may be trained for detecting actions of one or more of the plurality of audience members and the one or more performers. Further, the processing device 1204 may be configured for determining one or more actions corresponding to one or more of the plurality of audience members and the one or more performers based on the analyzing of the at least one interaction data. Further, the processing device 1204 may be configured for modifying the virtual interactive space based on the one or more actions. Further, the generating of the modified virtual interactive space data may be based on the modifying.
Further, in some embodiments, the processing device 1204 may be configured for identifying one or more of one or more first audience members and one or more first performers based on the determining of the one or more actions. Further, the processing device 1204 may be configured for establishing one or more interaction sessions between one or more of the one or more audience members and the one or more performers and one or more of the one or more first audience members and the one or more first performers in real-time based on the identifying. Further, the modifying of the virtual interactive space may be based on the establishing.
Further, in some embodiments, the processing device 1204 may be configured for generating one or more virtual experiences of the virtual interactive space for one or more of the plurality of audience members and the one or more performers based
on the virtual interactive space, the one or more audience member data, and the one or more performer data. Further, the communication device 1202 may be configured for transmitting the one or more virtual experiences corresponding to one or more of the plurality of audience members and the one or more performers to one or more of the plurality of audience member devices and the one or more performer devices.
Further, in some embodiments, the communication device 1202 may be configured for receiving one or more event venue data associated with one or more event venues of the at least one virtual event from the one or more performer devices. Further, the processing device 1204 may be configured for analyzing the one or more event venue data using one or more first machine learning models. Further, the one or more first machine learning models may be trained for detecting attending parameters for attending the at least one virtual event at the one or more event venues. Further, the processing device 1204 may be configured for determining one or more attending parameters for attending the at least one virtual event at the one or more event venues by one or more of the plurality of audience members and the one or more performers based on the analyzing of the one or more event venue data. Further, the creating of the virtual interactive space may be further based on the one or more attending parameters.
FIG. 13 is a flowchart of a method 1300 to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Accordingly, at 1302, the method 1300 may include a step of receiving, using the communication device, one or more audience data from a plurality of audience devices associated with a plurality of audience members. Further, the plurality of audience members, in an instance, may include a group of people attending one or more virtual events associated with one or more performers. Further, the one or more virtual events, in an instance, may be organized by the one or more performers performing in the one or more virtual events. Further, in some embodiments, the one or more virtual events may include music shows and/or concerts. Further, in some embodiments, the one or more virtual events may include sporting events such as, but not limited to, baseball, basketball, football, golf, hockey, racing, soccer, and so on. Further, in some embodiments, the one or more virtual events may include large gathering events such as, but not limited to, circus, teaching courses, exercise sessions, festivals, museum visits, night clubs, protests, drama, theme parks, etc. Further, in some embodiments, the one or more virtual events may correspond to a
live stream of a corresponding virtual event. Further, in some embodiments, the one or more virtual events may correspond to a pre-recorded virtual event. Further, the plurality of audience devices may include devices that may facilitate attending of one or more virtual events by the plurality of the audience members. Further, the plurality of audience devices may be configured to capture one or more variables, such as, but not limited to, a physical variable, a biological variable, a physiological variable, a psychological variable, etc. Accordingly, the plurality of audience devices may include one or more sensors configured to capture the one or more variables. Further, the plurality of audience devices may include at least one image capturing device and a microphone. Further, in some embodiments, an audience member of the plurality of audience members may choose to switch between the one or more virtual events based on an interaction with corresponding audience device of the plurality of audience devices. Further, examples of the plurality of audience devices may include devices such as, but not limited to, a smartphone, a laptop, a PC, and so on. Further, a software application disclosed herein may include a mobile application that may be installed on the plurality of audience devices. Further, the one or more audience data may be any data that may be indicative of identities associated with the plurality of audience members watching the one or more virtual events. Accordingly, the one or more audience data, in an instance, may include a plurality of at least one of an audience image and an audience sound corresponding to the plurality of audience members. Further, in some embodiments, the audience image may include a live audience video feed that may characterize presence of a corresponding audience member in the one or more virtual events. Further, the live audience video feed, in an instance, may include one or more gestures performed by the corresponding audience member that may convey communicative information to the one or more performers and/or other audience members of the plurality of audience members in real-time. Further, the one or more gestures, in an instance, may distract, confuse, impact, instruct, command, or otherwise positively and/or negatively affect the one or more performers and/or the other audience members in the one or more virtual events. Further, in some embodiments, the audience sound may include at least one audience speech that may characterize the presence of the corresponding audience member in the one or more virtual events. Further, the at least one audience speech, in an instance, may facilitate communication between the plurality of audience members. Further, in some embodiments, the at least one speech may facilitate communication
between the plurality of audience members and the one or more performers in the one or more virtual events. Further, the plurality of audience devices may be configured to capture the at least one of the audience images and the audience sound corresponding to the plurality of audience members. Further, the capturing, in an instance, may be based on a user interface of the software application.
Further, at 1304, the method 1300 may include a step of receiving, using the communication device, one or more performer data from one or more performer devices associated with one or more performers. Further, the one or more performer data, in an instance, may include audio-visual footage of the at least one performance of the one or more performers in the one or more virtual events. Accordingly, the one or more performer data, in an instance, may include a plurality of at least one of performer images and performer sound corresponding to the one or more performers. Further, in some embodiments, the performer image may include a live performer video feed that may characterize presence of a corresponding performer performing in the one or more virtual events. Further, the live performer video feed, in an instance, may include one or more gestures performed by the corresponding performer that may convey communicative information to other performers of the one or more performers and/or the plurality of audience members in real-time. Further, the one or more gestures, in an instance, may distract, confuse, impact, instruct, command, or otherwise positively and/or negatively affect the other performers and/or the plurality of audience members in the one or more virtual events. Further, in some embodiments, the performer sound may include at least one performer speech that may characterize the presence of the corresponding performer in the one or more virtual events. Further, the at least one performer speech may facilitate communication between the other performers and/or the plurality of audience members. Further, in some embodiments, a performer of the one or more performers may choose to share a pre-recorded performance and/or a live performance of the one or more virtual events. Further, the one or more performer devices may be configured to capture the at least one of the performer images and a performer sound corresponding to the one or more performers. Further, the one or more performer devices may be configured to capture one or more variables such as, but not limited to, a biological variable, a physiological variable, a psychological variable, etc. Accordingly, the one or more performer devices may include one or more sensors configured to capture the one or more variables. Further, the one or more performer
devices may include at least one image capturing device and a microphone. Further, examples of the one or more performer devices may include devices such as, but not limited to, a smartphone, a laptop, a PC, and so on. Further, the software application disclosed herein may include a mobile application that may be installed on the one or more performer devices. Further, in some embodiments, the one or more performer devices may include a plurality of high-definition cameras that may capture the at least one performance such that the plurality of high-definition cameras may be installed at one or more locations in a space corresponding to the one or more virtual events. Further, the installing, in an instance, may facilitate capturing of one or more virtual event places such that the plurality of audience members may navigate through the one or more virtual event places during the at least one performance in the one or more virtual events. Further, in some embodiments, the plurality of high-definition cameras may establish a link with one or more other performer devices such that the at least one performance from the plurality of high-definition cameras may be received on the one or more other performer devices. Further, the receiving, in an instance, may facilitate broadcasting of the at least one performance to the plurality of audience devices over a communication network (such as the Internet).
Further, at 1306, the method 1300 may include a step of analyzing, using a processing device, the one or more performer data and the one or more audience data.
Further, at 1308, the method 1300 may include a step of extracting, using the processing device, one or more human form data corresponding to one or more of the plurality of audience members and the one or more performers based on the analyzing. Further, the one or more human form data may include visual characteristics and/or auditory characteristics of the plurality of audience members and the one or more performers. Further, the visual characteristics, in an instance, may be associated with the at least one of the performer images and the audience images. Further, the auditory characteristics, in an instance, may be associated with the at least one of the performer sound and the audience sound. Further, in some embodiments, the one or more human form data may include a portion of the one or more performer data and the one or more audience data that may be indicative of the presence of the corresponding performer and the corresponding audience member. Further, the portion may include at least one full real representation of the corresponding performer and the corresponding audience member in similitude with the captured one or more performer data and the one or more audience data by the
respective performer device and audience device. Further, in some embodiments, the one or more human form data may include one or more virtual representations of the plurality of audience members and/or the one or more performers. Further, the one or more virtual representations, in an instance, may be in accordance with a plurality of real characteristics of the one or more performers and/or the plurality of audience members captured by corresponding one or more performer devices and/or a corresponding plurality of audience devices. Further, in some embodiments, the one or more virtual representations may include three-dimensional holograms (or, 3D holograms) of each of the plurality of audience members and the one or more performers. Further, in some embodiments, the one or more virtual representations may include avatars of the each of the plurality of audience members and the one or more performers.
Further, at 1310, the method 1300 may include a step of combining, using the processing device, the one or more human form data with at least one background data corresponding to a virtual background. Further, in some embodiments, the at least one background data may correspond to data that may facilitate simulating of the one or more virtual events such that the one or more performers may create a virtual reality environment associated with the one or more virtual events. Further, the virtual reality environment, in an instance, may facilitate an immersive one or more virtual events that may include the plurality of audience members and the one or more performers in a form of one or more human forms based on the one or more human form data.
Further, at 1312, the method 1300 may include a step of creating, using the processing device, a virtual interactive space based on the combining. Further, the virtual interactive space may be based on the virtual reality environment. Further, in some embodiments, at least one virtual interactive space data may be generated by the processing device based on the creating of the virtual interactive space. Further, at least one virtual interactive space data, in an instance, may facilitate the interaction between the plurality of audience members and/or between the plurality of audience members and the one or more performers similar to a real-world interaction using the plurality of audience devices and the one or more performer devices.
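One possible, purely illustrative way to represent the virtual interactive space data on the platform side is sketched below; the class and field names are assumptions introduced for illustration and do not appear in the disclosure.

```python
# Minimal sketch of the state an online platform might keep for one virtual
# interactive space. Field names are illustrative, not taken from the disclosure.
from dataclasses import dataclass, field

@dataclass
class HumanForm:
    user_id: str
    role: str                              # "performer" or "audience"
    position: tuple = (0.0, 0.0, 0.0)      # location within the virtual venue

@dataclass
class VirtualInteractiveSpace:
    event_id: str
    background_id: str                     # which virtual background is loaded
    human_forms: dict = field(default_factory=dict)   # user_id -> HumanForm

    def add_human_form(self, form: HumanForm) -> None:
        self.human_forms[form.user_id] = form
```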
Further, at 1314, the method 1300 may include a step of receiving, using the communication device, at least one interaction data from one or more of the pluralities of audience devices and the one or more performer devices. Accordingly, the at least
one interaction data may facilitate communication between the one or more human forms in the virtual interactive space. Further, in some embodiments, the at least one interaction data may include, but are not limited to, one or more of textual content, audio content, visual content, audio-visual content, and so on. Further, the textual content, in an instance, may include real-time text messaging between the one or more human forms in the virtual interactive space. Further, the audio content, in an instance, may include real-time communication between the one or more human forms in the virtual interactive space using vocal gestures (for example, speaking, shouting, whispering, etc.). Further, the visual content and/or the audio-visual content, in an instance, may include real-time communication between the one or more human forms in the virtual interactive space using one or more multimedia content (such as, one or more captured footage of the virtual reality environment and/or the real environment associated with the one or more virtual events, etc.). Further, in some embodiments, the at least one interaction data may facilitate performing of one or more actions in the virtual interactive space based on the generated at least one virtual interactive space data based on at least one interaction received from one or more of the pluralities of audience devices and the one or more performer devices for navigating around in the virtual interactive space. Further, the at least one interaction may include the one or more actions such as, but not limited to, pinching for zooming in/out to interact with the one or more human forms, walking around by the one or more human forms in the virtual interactive space, etc.
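The interaction data and navigation actions described above might be represented as simple messages. The sketch below is a self-contained assumption: walking updates a stored position and pinching records a per-user zoom level, while text and audio interactions would simply be routed to the other participants.

```python
# Minimal sketch of interaction data from audience and performer devices and of
# applying a navigation action to a participant's state. Action names mirror the
# examples in the description; the schema itself is an assumption.
from dataclasses import dataclass

@dataclass
class Interaction:
    user_id: str
    kind: str          # "text" | "audio" | "pinch" | "walk"
    payload: dict

def apply_interaction(positions: dict, zoom_levels: dict, interaction: Interaction) -> None:
    """Update one participant's position or zoom level from a single interaction."""
    if interaction.kind == "walk":
        dx, dy, dz = interaction.payload.get("delta", (0.0, 0.0, 0.0))
        x, y, z = positions.get(interaction.user_id, (0.0, 0.0, 0.0))
        positions[interaction.user_id] = (x + dx, y + dy, z + dz)
    elif interaction.kind == "pinch":
        zoom_levels[interaction.user_id] = float(interaction.payload.get("zoom", 1.0))
    # "text" and "audio" interactions would be routed to other participants instead.
```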
Further, at 1316, the method 1300 may include a step of generating, using the processing device, a modified virtual interactive space data based on each of the at least one interaction data and the virtual interactive space. Accordingly, the modified virtual interactive space may facilitate interaction between the plurality of audience members and/or the plurality of audience members and the one or more performers based on the at least one interaction data. Further, in some embodiments, the modified virtual interactive space may include a zoomed view of the virtual interactive space. Further, the zoomed view, in an instance, may include a vantage point of at least one of one or more objects in the virtual interactive space. Further, the one or more objects, in an instance, may correspond to the one or more human forms in the virtual interactive space. Further, in some embodiments, the virtual interactive space data may include an indication such as, but not limited to, a friend request, a message, and so on. Further, the indication may facilitate social interaction between the plurality of
audience members and/or the one or more performers and the plurality of audience members.
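A minimal sketch of how the modified virtual interactive space data might be assembled from queued interactions, covering the zoomed view and the friend-request indication mentioned above; the payload shape is an assumption, not the claimed format.

```python
# Minimal sketch: build "modified virtual interactive space data" from queued
# interactions, recording social indications and a zoomed view.
def build_modified_space_data(space_state: dict, interactions: list) -> dict:
    modified = dict(space_state)               # start from the current space
    modified["indications"] = []
    for interaction in interactions:
        if interaction["kind"] == "friend_request":
            modified["indications"].append({
                "type": "friend_request",
                "from": interaction["user_id"],
                "to": interaction["payload"]["target_id"],
            })
        elif interaction["kind"] == "pinch":
            # Record a zoomed view anchored on the chosen object or human form.
            modified["zoomed_view"] = {
                "viewer": interaction["user_id"],
                "target": interaction["payload"].get("target_id"),
                "zoom": interaction["payload"].get("zoom", 2.0),
            }
    return modified
```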
Further, at 1318, the method 1300 may include a step of transmitting, using the communication device, the modified virtual interactive space data to the one or more of the pluralities of audience devices and the one or more performer devices.
Further, in some embodiments, at least one first audience member of the plurality of audience members may attend the one or more virtual events in-person such that the at least one first audience member may watch the performance of the one or more performers at the one or more virtual event places in a real environment. Further, at least one second audience member of the plurality of audience members may watch the performance of the one or more performers in the virtual interactive space. Further, the at least one first audience member of the plurality of audience members and the at least one second audience member of the plurality of audience members may interact with each other in real-time. Further, the interaction may include a real-time conversation between the at least one first audience member of the plurality of audience members and the at least one second audience member of the plurality of audience members that may include, but is not limited to, sharing of one or more of the textual content, audio content, visual content, audio-visual content, and so on. Further, in some embodiments, the one or more performers may interact with the at least one second audience member of the plurality of audience members at an instance of the performance.
FIG. 14 is a flowchart of a method 1400 to link social media accounts of the plurality of users to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Accordingly, the at least one interaction data received from a first user of the plurality of audience members may include an indication of at least one second user of the plurality of audience members and an invitation to establish a social interaction with the at least one second user. Further, the modified virtual interactive space data transmitted to the at least one second user may include the invitation. Further, the at least one interaction data may also include at least one response to the invitation corresponding to the at least one second user. Further, the response may include at least one of acceptance and rejection. Further, the method 1400 may include a step of forming at least one social media connection between the first user and the at least one second user in the audience. Accordingly, at 1402, the method 1400 may include a step of receiving, using the
communication device, one or more authentication results from one or more social media servers. Accordingly, the one or more authentication results, in an instance, may include data that may reflect an authenticity associated with an identity of a corresponding audience member on one or more social media platforms. Additionally, and/or alternatively, the one or more authentication results may correspond to the identification of the plurality of audience members on the one or more social media platforms based on entered one or more authentication data on a plurality of audience devices. Further, examples of the plurality of audience devices may include devices such as, but not limited to, a smartphone, a laptop, a PC, and so on. Further, the entered one or more authentication data, in an instance, may be any data that may reflect the identity of the corresponding audience member that may wish to share the virtual experience between the plurality of users (such as, the plurality of audience members and/or the one or more performers) on the one or more social media platforms. Further, the one or more authentication data, in an instance, may include but are not limited to, passwords, PINs, OTPs, biometric variables, etc. associated with the corresponding audience member. Further, the one or more social media servers may include servers that may store the one or more authentication results of the plurality of audience members associated with the one or more social media platforms. Further, the one or more social media platforms may include platforms such as, but not limited to, Facebook™, Twitter™, Instagram™, Snapchat™, Whatsapp™, WeChat™, Beebo™, IMOapp™, Reddit™, and so on.
Further, at 1404, the method 1400 may include a step of establishing, using the communication device, one or more links between the one or more social media servers and the plurality of audience devices based on the received one or more authentication results. Further, the software application disclosed herein may include a mobile application that may be installed on the plurality of audience devices. Further, a user interface of the software application may facilitate establishing the one or more links between the one or more social media servers and the plurality of audience devices.
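The linking step can be illustrated with a small, hedged sketch: once an authentication result arrives from a social media server, the platform records a link between the audience device and the account. The result format and the simple boolean check are assumptions; a real deployment would rely on the platform's own OAuth-style flow rather than this placeholder.

```python
# Minimal sketch: record a link between an audience device and a social media
# account once an authentication result arrives. The result format is assumed.
def establish_link(links: dict, auth_result: dict) -> bool:
    """Store device_id -> (platform, account_id) when the result is authentic."""
    if not auth_result.get("authenticated", False):
        return False
    device_id = auth_result["device_id"]
    links[device_id] = (auth_result["platform"], auth_result["account_id"])
    return True

links = {}
establish_link(links, {"authenticated": True, "device_id": "dev-42",
                       "platform": "ExamplePlatform", "account_id": "user123"})
```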
Further, at 1406, the method 1400 may include a step of receiving, using the communication device, one or more audience member data associated with the plurality of audience members based on the one or more social media platforms. Further, the one or more audience member data may be received over the one or more links established between the one or more social media servers and the plurality of
audience devices. Further, the one or more audience member data may be any data that may be based on an interest of the plurality of audience members in one or more virtual events. Further, in some embodiments, the one or more audience member data may correspond to the at least one second user that may be associated with the first user on the one or more social media platforms. Further, the at least one second user, in an instance, may share the interest similar to the first user in the one or more virtual events. Further, the at least one second user may include followers, friends, fans, etc., of the first user on the one or more social media platforms.
Further, at 1408, the method 1400 may include a step of analyzing, using the processing device, the one or more audience member data. Further, at 1410, the method 1400 may include a step of determining, using the processing device, at least one virtual event interest data based on the analyzing. Further, the at least one virtual event interest data corresponds to data based on similar interest shown by the at least one second user and the first user in the one or more virtual events. Further, the at least one virtual event interest data may include information associated with the at least one second user based on choices corresponding to attending the one or more virtual events. Further, the information, in an instance, may include at least one preferred choice of the at least one second user regarding the one or more virtual events, such as, but not limited to, one or more future virtual events, preferences (for example, preferences based on interest in the one or more performers, dates of the one or more virtual events, places associated with the one or more virtual events, etc.) associated with the one or more virtual events, tickets purchased for the one or more virtual events, etc. Further, in some embodiments, the at least one virtual event interest data may be automatically determined based on the at least one preferred choice associated with attending the one or more virtual events by the first user and the at least one second user. Further, the automatically determining, in an instance, may be based on one or more machine learning algorithms. Accordingly, the online platform may process the at least one preferred choice using the processing device based on the one or more machine learning algorithms to suggest the at least one second user to the first user. Further, in some embodiments, sharing of the at least one virtual event interest data with the first user may be based on a consent of the at least one second user.
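The disclosure only says that the determination may rely on one or more machine learning algorithms; as one assumed possibility, the sketch below scores second users by cosine similarity between simple event-preference vectors and suggests the closest matches to the first user.

```python
# Minimal sketch: suggest second users whose event preferences most resemble
# the first user's. Cosine similarity over preference vectors is an assumption.
import math

def cosine_similarity(a: dict, b: dict) -> float:
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def suggest_users(first_user_prefs: dict, candidates: dict, top_n: int = 3) -> list:
    """candidates maps user_id -> preference vector (e.g. genre or performer weights)."""
    scored = [(cosine_similarity(first_user_prefs, prefs), uid)
              for uid, prefs in candidates.items()]
    scored.sort(reverse=True)
    return [uid for _, uid in scored[:top_n]]
```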
Further, at 1412, the method 1400 may include a step of transmitting, using the communication device, the at least one virtual event interest data to the plurality
of audience devices. Further, in some embodiments, the first user may establish the social interaction with the at least one second user based on the user interface of the software application. Further, the interaction may be based on the at least one social media connection. Further, the interaction may include a real-time conversation between the first user and the at least one second user that may include sharing of, but not limited to, one or more of textual content, audio content, visual content, audio-visual content, and so on. Further, the textual content, in an instance, may include real-time text messaging between the plurality of audience members at an instance of the one or more virtual events and/or before attending the one or more virtual events. Further, the audio content, in an instance, may include real-time communication between the plurality of audience members at the instance of the one or more virtual events and/or before attending the one or more virtual events. Further, the visual content and/or the audio-visual content, in an instance, may include real-time communication between the plurality of audience members using one or more multimedia content at the instance of the one or more virtual events and/or before attending the one or more virtual events. Further, in some embodiments, the first user may create a room (e.g., a group that may include fans of the one or more virtual events) that may include one or more of the at least one second user based on the at least one virtual event interest data using the user interface of the software application.
FIG. 15 is a flowchart of a method 1500 to create a virtual audience for live performers at a virtual event, in accordance with some embodiments. Further, the live performers may include the one or more performers that may perform in a real environment, such as, for example, in the one or more virtual events that may include sporting events such as, but not limited to, baseball, basketball, football, golf, hockey, racing, soccer, and so on. Accordingly, at 1502, the method 1500 may include a step of receiving, using the communication device, the one or more audience data from the plurality of audience devices. Further, the one or more audience data may be any data that may be indicative of identities associated with the plurality of audience members watching the one or more virtual events. Accordingly, the one or more audience data, in an instance, may include a plurality of at least one of an audience image and an audience sound corresponding to the plurality of audience members. Further, at 1504, the method 1500 may include a step of analyzing, using the processing device, the one or more audience data. Further, at 1506, the method 1500 may include a step of
extracting, using the processing device, the one or more human form data corresponding to the plurality of audience members based on the analyzing. Further, the one or more human form data may include visual characteristics and/or auditory characteristics of the plurality of audience members. Further, at 1508, the method 1500 may include a step of generating, using the processing device, a virtual audience data based on the extracting. Further, the generating, in an instance, may include combining the one or more human forms with at least one virtual background such that the combining may imitate a real audience watching the one or more virtual events. Further, the at least one virtual background may include, but is not limited to, virtual seats in the one or more virtual events such as an arena, a stadium, a court, and so on. Further, at 1510, the method 1500 may include a step of transmitting, using the communication device, the virtual audience data to one or more display devices in the one or more virtual events. Further, the one or more display devices may include devices such as, but not limited to, electroluminescent (ELD) displays, liquid crystal displays (LCD), light-emitting diode (LED) backlit LCDs, thin-film transistor (TFT) LCDs, light-emitting diode (LED) displays, plasma display panel (PDP) displays, and so on. Further, in some embodiments, the one or more display devices may include the one or more performer devices. Further, the one or more virtual events may include, in an instance, one or more directional audio devices. Further, the one or more directional audio devices may facilitate generating of a spatial audio effect in the one or more virtual events. Further, the spatial audio effect, in an instance, may include sound from the one or more human forms displayed on the one or more display devices at a varying audible level. Further, the varying audible level may be based on a proximity of the one or more performers from the one or more human forms displayed on the one or more display devices. Further, the directional audio devices may include, but are not limited to, one or more directional microphones, one or more directional speakers, and so on. Further, in some embodiments, the virtual interactive space based on the generated modified virtual interactive space data may be displayed on the one or more display devices that may facilitate interaction between the one or more human forms during the one or more virtual events.
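The varying audible level described above can be illustrated with a one-line attenuation model: gain falls off with the distance between the performer and the displayed human form. The inverse-distance curve and the reference gain of 1.0 are assumptions; the disclosure does not specify a falloff function.

```python
# Minimal sketch of the "varying audible level" behaviour: attenuate each
# displayed audience member's sound by their distance from the performer.
import math

def audible_level(performer_pos, human_form_pos, base_gain: float = 1.0) -> float:
    """Return a gain in (0, base_gain] that falls off with distance."""
    distance = math.dist(performer_pos, human_form_pos)
    return base_gain / (1.0 + distance)

# Example: forms displayed further from the performer are played back quieter.
levels = {name: audible_level((0.0, 0.0), pos)
          for name, pos in {"form_a": (1.0, 0.0), "form_b": (6.0, 2.0)}.items()}
```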
FIG. 16 is a flowchart of a method 1600 for providing a preview at an instance of purchasing tickets for the one or more virtual events, in accordance with some embodiments. Accordingly, at 1602, the method 1600 may include a step of receiving, using the communication device, at least one virtual event data associated with the
one or more virtual events from the one or more performer devices. Further, the one or more performer devices may be associated with the one or more performers performing in the one or more virtual events. Further, the at least one virtual event data may include but is not limited to, a preview of the virtual interactive space, price of each ticket, venue, name, facilities, attendees, and so on, associated with the one or more virtual events. Further, at 1604, the method 1600 may include a step of transmitting, using the communication device, the at least one virtual event data to the plurality of audience devices. Further, in some embodiments, one or more kiosks may be located in a vicinity of the one or more virtual events. Further, the one or more kiosks may receive the transmitted at least one virtual event data for the one or more virtual events. Further, at 1606, the method 1600 may include a step of determining, using the processing device, an instance of the purchasing of the tickets for the one or more virtual events on the plurality of audience devices. Further, a ticket window associated with the user interface of the software application may facilitate the purchasing of the tickets by the plurality of audience members. Further, in some embodiments, the ticket window may include one or more transaction options for facilitating transactions associated with the purchasing of the tickets. Further, the one or more transaction options may include but are not limited to, credit cards, debit cards, payment wallets, and so on. Further, in some embodiments, one or more kiosks may be located in a vicinity of the one or more virtual events. Further, the one or more kiosks may facilitate purchasing tickets for the one or more virtual events. Further, at 1608, the method 1600 may include a step of displaying, using the processing device, the at least one virtual event data on the plurality of audience devices. Further, at the instance of purchasing the tickets, the preview of the virtual interactive space may be displayed on the plurality of audience devices at the ticket window.
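As a purely illustrative sketch, the ticket-window behaviour described above could amount to returning the stored virtual event data, including the preview of the virtual interactive space and the transaction options, to the purchasing audience device; the field names below are assumptions.

```python
# Minimal sketch: what an audience device might display when a ticket purchase
# is initiated. Field names are illustrative only.
def ticket_window_payload(event_data: dict) -> dict:
    return {
        "preview": event_data.get("preview"),   # e.g. a clip of the virtual interactive space
        "price": event_data.get("price"),
        "venue": event_data.get("venue"),
        "transaction_options": ["credit_card", "debit_card", "payment_wallet"],
    }
```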
FIG. 17 is a flowchart of a method 1700 for purchasing merchandise and rendering the one or more human forms accordingly, in accordance with some embodiments. Accordingly, at 1702, the method 1700 may include a step of receiving, using the communication device, one or more indications associated with the purchasing of the merchandise from the plurality of audience devices. Further, in some embodiments, the user interface of the software application, in an instance, may facilitate an e-commerce platform for selling the merchandise. Further, the merchandise, in an instance, may correspond to customized goods and/or products
associated with the one or more virtual events. Further, the one or more performers may wish to sell the merchandise on the e-commerce platform. Further, at 1704, the method 1700 may include a step of processing, using the processing device, one or more transactions associated with the purchasing of the merchandise. Further, at 1706, the method 1700 may include a step of rendering, using the processing device, the one or more human forms with virtual merchandise. Further, in some embodiments, the merchandise may include physical goods and/or products. Further, the plurality of audience members may place an order for purchasing the physical goods and/or products using the e-commerce platform. Further, at 1708, the method 1700 may include a step of transmitting, using the communication device, the one or more human forms subsequent to the rendering to the one or more display devices.
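A minimal sketch, under assumed data shapes, of tagging a purchaser's human form with the purchased virtual merchandise once the transaction has been processed; the actual rendering would happen in the graphics or video pipeline rather than in a dictionary.

```python
# Minimal sketch: after a merchandise transaction clears, tag the buyer's human
# form with the purchased virtual item so a renderer can draw it.
def render_with_merchandise(human_forms: dict, user_id: str, item: str) -> dict:
    form = dict(human_forms[user_id])
    form.setdefault("virtual_merchandise", []).append(item)
    human_forms[user_id] = form
    return form

forms = {"fan-1": {"role": "audience"}}
render_with_merchandise(forms, "fan-1", "tour-hoodie")
```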
FIG. 18 is an illustration of a screen 1800 associated with events navigation tab of a software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, the events navigation tab may display information based on one or more upcoming virtual events. Further, the displaying may be based on the user interface of the software application. Further, in some embodiments, each of the information corresponding to the one or more upcoming virtual events may include, but is not limited to, the price of a ticket for attending a corresponding upcoming virtual event, name, facilities provided, venue, date, attendees based on the social interaction (explained further in conjunction with FIG. 14), and so on. Further, in some embodiments, a user may wish to watch a preview of the one or more upcoming virtual events that may be displayed on a corresponding user device. Further, the preview may include a sneak peek of a corresponding upcoming virtual event that may display the information in a graphical context (such as, a sequence of images and/or videos) about the corresponding upcoming virtual event. Further, in some embodiments, the events navigation tab may include one or more navigation tabs such as, but not limited to, calendar, hosting, and so on. Further, the screen 1800 may display a directory of available events for both land gate and cloud gate tickets. Further, the screen 1800 may facilitate remembering events that are liked and recommending events based on past preferences and friends’ attendance.
FIG. 19 is an illustration of a screen 1900 associated with the events navigation tab of the software application to facilitate the sharing of the virtual
experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, in some embodiments, the user may wish to select one or more filter preferences associated with the one or more upcoming virtual events such that the user interface may display specific one or more upcoming virtual events on the corresponding user device based on a choice of the user. Further, the one or more filter preferences may include preference options such as, but not limited to, selecting dates, selecting venues, selecting one or more options corresponding to attending the upcoming virtual event as in-person or virtually, genre and/or type based on the one or more upcoming virtual events, and so on. Further, the screen 1900 may facilitate dynamic searching to see all available events. Further, in some embodiments, the one or more upcoming virtual events may include music shows and/or concerts. Further, in some embodiments, the one or more upcoming virtual events may include sporting events such as but not limited to, baseball, basketball, football, golf, hockey, racing, soccer, and so on. Further, in some embodiments, the one or more upcoming virtual events may include large gathering events such as, but not limited to, circus, teaching courses, exercise sessions, festivals, museum visits, night clubs, protests, drama, theme parks, etc.
FIG. 20 is an illustration of a screen 2000 associated with my tickets navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, in some embodiments, the my tickets navigation tab may display one or more tickets for the one or more virtual events. Further, at least one future ticket of the one or more tickets may be displayed under the future navigation tab that may correspond to attending of the one or more upcoming virtual events based on the at least one future ticket. Further, at least one past ticket of the one or more tickets may be displayed under the past navigation tab that may correspond to attending one or more virtual events in the past based on the at least one past ticket.
FIG. 21 is an illustration of a screen 2100 associated with my tickets navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, in some embodiments, at least one quick response code (or, QR
code) may be generated corresponding to each of the one or more tickets under my tickets navigation tab. Further, in some embodiments, the at least one future ticket may display the at least one QR code subsequent to receiving an interaction from the user (such as tapping, swiping, etc.) on the corresponding user device. Further, each of the at least one future ticket may include the at least one QR code. Further, each QR code of the at least one QR code may correspond to at least one attendee associated with the user for the one or more upcoming virtual events. Further, each QR code may be configured to include at least one attendee information associated with attending the corresponding upcoming virtual events. Further, the at least one attendee information may include, but is not limited to, seat number, row number, section number, name of the venue, and so on. Further, in some embodiments, the one or more kiosks present at the one or more virtual events may facilitate scanning of one or more QR codes such that the scanning may be equivalent to a gate pass for attending the one or more virtual events.
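One way to produce such a per-ticket QR code is sketched below, under the assumption that the third-party Python "qrcode" package (with Pillow) is available; the attendee fields mirror the examples above and the JSON payload format is an assumption, not the claimed encoding.

```python
# Minimal sketch: encode the attendee information for one ticket into a QR code
# image that a kiosk could later scan as a gate pass. Assumes the third-party
# "qrcode" package (with Pillow) is installed.
import json
import qrcode

def make_ticket_qr(attendee: dict, out_path: str) -> None:
    payload = json.dumps(attendee, sort_keys=True)
    img = qrcode.make(payload)          # returns a PIL image
    img.save(out_path)

make_ticket_qr({"seat": "12", "row": "B", "section": "101", "venue": "Example Arena"},
               "ticket_qr.png")
```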
FIG. 22 is an illustration of a screen 2200 associated with social navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, in some embodiments, the social navigation tab may display followers, fans, friends, linked social media platforms, groups, the one or more upcoming virtual events to be attended, etc. associated with the user (explained in conjunction with FIG. 14).
FIG. 23 is an illustration of a screen 2300 associated with shop navigation tab of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, in some embodiments, the shop navigation tab may facilitate an e-commerce platform for selling the merchandise. Further, the merchandise, in an instance, may correspond to customized goods and/or products associated with the one or more virtual events. Further, the one or more performers may wish to sell the merchandise on the e-commerce platform. Further, in some embodiments, the merchandise may include physical goods and/or products. Further, the user may place an order for purchasing the physical goods and/or products using the e-commerce platform.
FIG. 24 is an illustration of a screen 2400 of the software application to facilitate the sharing of the virtual experience between the plurality of users, in
accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, the screen 2400 may be associated with a performance of the one or more performers in the one or more virtual events. Further, in some embodiments, the screen 2400 may display the one or more human forms of the one or more performers (explained further in conjunction with FIG. 13). Further, in some embodiments, the user interface of the software application may facilitate switching between one or more screens based on the interaction received from the user on the corresponding user device. Further, in some embodiments, the one or more screens may correspond to one or more navigation tabs under the performance screen that may include, but are not limited to, selfie view, fan view, social, shop, exit, and so on.
FIG. 25 is an illustration of a screen 2500 of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, the screen 2500 may be associated with the virtual interactive space based on the virtual interactive space data that may display the plurality of audience members in the one or more virtual events. Further, in some embodiments, the screen 2500 may display the one or more human forms of the plurality of audience members, (explained further in conjunction with FIG. 13).
FIG. 26 is an illustration of a screen 2600 of the software application to facilitate the sharing of the virtual experience between the plurality of users, in accordance with some embodiments. Further, the illustration may be associated with a screenshot of the software application. Further, in some embodiments, the screen 2600 may represent interaction with the at least one second user based on the modified virtual interactive space data (explained further in conjunction with FIG. 13). Further, in some embodiments, the interaction may be based on the social interaction established based on the one or more social media platforms (explained further in conjunction with FIG. 14). Further, the screen 2600 may be associated with the social navigational tab in the one or more navigational tabs. Further, the one or more social media platforms may include, but are not limited to, Facebook™, Twitter™, Instagram™, Facebook™ Messenger, and so on. Further, in some embodiments, the user may choose to communicate with one or more random audience members in the one or more virtual events based on an interaction received on the corresponding user device using the modified virtual interactive space data.
Further, in some embodiments, the user may wish to save information corresponding to the one or more random audience members such that the saving may facilitate future communication with the one or more random audience members. Further, the communicating, in an instance, may include calling, texting, facetime, and so on. Although the present disclosure has been explained in relation to its preferred embodiment, it is to be understood that many other possible modifications and variations can be made without departing from the spirit and scope of the disclosure.
Claims
1. A method for facilitating sharing of virtual experience between users, the method comprising: receiving, using a communication device, one or more audience data associated with a plurality of audience members from a plurality of audience devices associated with the plurality of audience members, wherein the plurality of audience members watch and participate in at least one virtual event; receiving, using the communication device, one or more performer data associated with one or more performers from one or more performer devices associated with the one or more performers, wherein the one or more performers perform and participate in the at least one virtual event; analyzing, using a processing device, the one or more performer data and the one or more audience data; extracting, using the processing device, one or more human forms corresponding to one or more of the pluralities of audience members and the one or more performers based on the analyzing; generating, using the processing device, one or more human images of one or more of the plurality of audience members and the one or more performers based on the one or more human forms; receiving, using the communication device, at least one background data of the at least one virtual event from the one or more performer devices, wherein the at least one background data comprises at least one virtual background for the at least one virtual event; combining, using the processing device, the one or more human images with the at least one virtual background based on the generating; creating, using the processing device, a virtual interactive space based on the combining, wherein the virtual interactive space comprises the one or more human images of one or more of the plurality of audience members and the one or more performers in the at least one virtual background; receiving, using the communication device, at least one interaction data of one or more interactions of one or more of the plurality of audience members and the one or more performers from one or more of the plurality of audience devices and the one or more performer devices;
generating, using the processing device, a modified virtual interactive space data based on each of the at least one interaction data and the virtual interactive space, wherein the modified virtual interactive space data comprises the one or more human images and the one or more interactions of one or more of the plurality of audience members and the one or more performers within the at least one virtual background; transmitting, using the communication device, the modified virtual interactive space data to the one or more of the plurality of audience devices and the one or more performer devices; and storing, using a storage device, one or more of the one or more audience data, the one or more performer data, and the at least one background data.
2. The method of claim 1 further comprising: receiving, using the communication device, one or more audience member data associated with the plurality of audience members from one or more social media platforms associated with the plurality of audience members; analyzing, using the processing device, the one or more audience member data; generating, using the processing device, at least one virtual event interest data based on the analyzing of the one or more audience member data, wherein the at least one virtual event interest data comprises one or more similar interests shown by one or more first audience members of the plurality of audience members and one or more second audience members of the plurality of audience members in the at least one virtual event; and transmitting, using the communication device, the at least one virtual event interest data to the plurality of audience devices.
3. The method of claim 1, wherein the at least one background data comprises one or more virtual background locations of one or more virtual seats in the at least one virtual background for the plurality of audience members, wherein the method further comprises:
analyzing, using the processing device, the at least one background data and the virtual interactive space; generating, using the processing device, one or more virtual interactive space views of the virtual interactive space corresponding to the one or more virtual seats based on the analyzing of the at least one background data and the virtual interactive space; transmitting, using the communication device, the one or more virtual interactive space views of the virtual interactive space corresponding to the one or more virtual seats to the plurality of audience devices; receiving, using the communication device, one or more seat indications of one or more selected virtual seats of the one or more virtual seats from one or more audience devices associated with one or more audience members; issuing, using the processing device, one or more tickets for the one or more selected virtual seats to the one or more audience members based on the one or more seat indications of the one or more selected virtual seats for the at least one virtual event; and transmitting, using the communication device, the one or more tickets to the one or more audience devices.
4. The method of claim 1 further comprising: transmitting, using the communication device, one or more virtual merchandises for the one or more human forms to the plurality of audience devices; receiving, using the communication device, one or more merchandise indications for purchasing of one or more selected virtual merchandises of the one or more virtual merchandises from one or more audience devices associated with one or more audiences; processing, using the processing device, one or more transactions associated with the purchasing of the one or more selected virtual merchandises based on the one or more merchandise indications; and rendering, using the processing device, the one or more human forms with the one or more selected virtual merchandises based on the processing,
wherein the generating of the one or more human images is further based on the rendering.
5. The method of claim 1, wherein the one or more audience data comprises one or more of an audience member’s appearance, an audience member’s gesture, an audience member’s verbal expression, an audience member’s nonverbal expression, and an audience member’s movement, wherein the plurality of audience devices comprises one or more of an audience image sensor, an audience microphone, and an audience motion sensor, wherein one or more of the audience image sensor, the audience microphone, and the audience motion sensor is configured for generating the one or more audience data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the plurality of audience members.
6. The method of claim 1, wherein the one or more performer data comprises one or more of a performer’s appearance, a performer’s gesture, a performer’s verbal expression, a performer’s nonverbal expression, and a performer’s movement, wherein the one or more performer devices comprises one or more of a performer image sensor, a performer microphone, and a performer motion sensor, wherein one or more of the performer image sensor, the performer microphone, and the performer motion sensor is configured for generating the one or more performer data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the one or more performers.
7. The method of claim 1 further comprising: analyzing, using the processing device, the at least one interaction data using one or more machine learning models, wherein the one or more machine learning models is trained for detecting actions of one or more of the plurality of audience members and the one or more performers; determining, using the processing device, one or more actions corresponding to one or more of the plurality of audience members and the one or more performers based on the analyzing of the at least one interaction data; and
modifying, using the processing device, the virtual interactive space based on the one or more actions, wherein the generating of the modified virtual interactive space data is further based on the modifying.
8. The method of claim 7 further comprising: identifying, using the processing device, one or more of one or more first audience members and one or more first performers based on the determining of the one or more actions; and establishing, using the processing device, one or more interaction sessions between one or more of the one or more audience members and the one or more performers and one or more of the one or more first audience members and the one or more first performers in real-time based on the identifying, wherein the modifying of the virtual interactive space is further based on the establishing.
9. The method of claim 1 further comprising: generating, using the processing device, one or more virtual experiences of the virtual interactive space for one or more of the plurality of audience members and the one or more performers based on the virtual interactive space, the one or more audience member data, and the one or more performer data; and transmitting, using the communication device, the one or more virtual experiences corresponding to one or more of the plurality of audience members and the one or more performers to one or more of the plurality of audience member devices and the one or more performer devices.
10. The method of claim 1 further comprising: receiving, using the communication device, one or more event venue data associated with one or more event venues of the at least one virtual event from the one or more performer devices; analyzing, using the processing device, the one or more event venue data using one or more first machine learning models, wherein the one or more first machine learning models is trained for detecting attending parameters for attending the at least one virtual event at the one or more event venues; and
determining, using the processing device, one or more attending parameters for attending the at least one virtual event at the one or more event venues by one or more of the plurality of audience members and the one or more performers based on the analyzing of the one or more event venue data, wherein the creating of the virtual interactive space is further based on the one or more attending parameters.
11. A system for facilitating sharing of virtual experience between users, the system comprising: a communication device configured for: receiving one or more audience data associated with a plurality of audience members from a plurality of audience devices associated with the plurality of audience members, wherein the plurality of audience members watch and participate in at least one virtual event; receiving one or more performer data associated with one or more performers from one or more performer devices associated with the one or more performers, wherein the one or more performers perform and participate in the at least one virtual event; receiving at least one background data of the at least one virtual event from the one or more performer devices, wherein the at least one background data comprises at least one virtual background for the at least one virtual event; receiving at least one interaction data of one or more interactions of one or more of the plurality of audience members and the one or more performers from one or more of the plurality of audience devices and the one or more performer devices; and transmitting a modified virtual interactive space data to the one or more of the plurality of audience devices and the one or more performer devices, wherein the modified virtual interactive space data comprises the one or more human images and the one or more interactions of one or more of the plurality of audience members and the one or more performers within the at least one virtual background; a processing device communicatively coupled with the communication device, wherein the processing device is configured for:
analyzing the one or more performer data and the one or more audience data; extracting one or more human forms corresponding to one or more of the pluralities of audience members and the one or more performers based on the analyzing; generating one or more human images of one or more of the plurality of audience members and the one or more performers based on the one or more human forms; combining the one or more human images with the at least one virtual background based on the generating; creating a virtual interactive space based on the combining, wherein the virtual interactive space comprises the one or more human images of one or more of the plurality of audience members and the one or more performers in the at least one virtual background; and generating the modified virtual interactive space data based on each of the at least one interaction data and the virtual interactive space; and a storage device communicatively coupled with the processing device, wherein the storage device is configured for storing one or more of the one or more audience data, the one or more performer data, and the at least one background data.
12. The system of claim 11, wherein the communication device is further configured for: receiving one or more audience member data associated with the plurality of audience members from one or more social media platforms associated with the plurality of audience members; and transmitting at least one virtual event interest data to the plurality of audience devices, wherein the processing device is further configured for: analyzing the one or more audience member data; and generating the at least one virtual event interest data based on the analyzing of the one or more audience member data, wherein the at least one virtual event interest data comprises one or more similar interests shown by one or more first audience members of the plurality of audience members and
one or more second audience members of the plurality of audience members in the at least one virtual event.
13. The system of claim 11, wherein the at least one background data comprises one or more virtual background locations of one or more virtual seats in the at least one virtual background for the plurality of audience members, wherein the processing device is further configured for: analyzing the at least one background data and the virtual interactive space; generating one or more virtual interactive space views of the virtual interactive space corresponding to the one or more virtual seats based on the analyzing of the at least one background data and the virtual interactive space; and issuing one or more tickets for one or more selected virtual seats to the one or more audience members based on one or more seat indications of the one or more selected virtual seats for the at least one virtual event, wherein the communication device is further configured for: transmitting the one or more virtual interactive space views of the virtual interactive space corresponding to the one or more virtual seats to the plurality of audience devices; receiving the one or more seat indications of the one or more selected virtual seats of the one or more virtual seats from one or more audience devices associated with one or more audience members; and transmitting the one or more tickets to the one or more audience devices.
14. The system of claim 11, wherein the communication device is further configured for: transmitting one or more virtual merchandises for the one or more human forms to the plurality of audience devices; and receiving one or more merchandise indications for purchasing of one or more selected virtual merchandises of the one or more virtual merchandises from one or more audience devices associated with one or more audiences, wherein the processing device is further configured for:
processing one or more transactions associated with the purchasing of the one or more selected virtual merchandises based on the one or more merchandise indications; and rendering the one or more human forms with the one or more selected virtual merchandises based on the processing, wherein the generating of the one or more human images is further based on the rendering.
15. The system of claim 11, wherein the one or more audience data comprises one or more of an audience member’s appearance, an audience member’s gesture, an audience member’s verbal expression, an audience member’s nonverbal expression, and an audience member’s movement, wherein the plurality of audience devices comprises one or more of an audience image sensor, an audience microphone, and an audience motion sensor, wherein one or more of the audience image sensor, the audience microphone, and the audience motion sensor is configured for generating the one or more audience data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the plurality of audience members.
16. The system of claim 11, wherein the one or more performer data comprises one or more of a performer’s appearance, a performer’s gesture, a performer’s verbal expression, a performer’s nonverbal expression, and a performer’s movement, wherein the one or more performer devices comprises one or more of a performer image sensor, a performer microphone, and a performer motion sensor, wherein one or more of the performer image sensor, the performer microphone, and the performer motion sensor is configured for generating the one or more performer data based on capturing one or more of an appearance, a gesture, a verbal expression, a nonverbal expression, and a movement of the one or more performers.
17. The system of claim 11, wherein the processing device is further configured for: analyzing the at least one interaction data using one or more machine learning models, wherein the one or more machine learning models is trained
for detecting actions of one or more of the plurality of audience members and the one or more performers; determining one or more actions corresponding to one or more of the plurality of audience members and the one or more performers based on the analyzing of the at least one interaction data; and modifying the virtual interactive space based on the one or more actions, wherein the generating of the modified virtual interactive space data is further based on the modifying.
18. The system of claim 17, wherein the processing device is further configured for: identifying one or more of one or more first audience members and one or more first performers based on the determining of the one or more actions; and establishing one or more interaction sessions between one or more of the one or more audience members and the one or more performers and one or more of the one or more first audience members and the one or more first performers in real-time based on the identifying, wherein the modifying of the virtual interactive space is further based on the establishing.
19. The system of claim 11, wherein the processing device is further configured for generating one or more virtual experiences of the virtual interactive space for one or more of the plurality of audience members and the one or more performers based on the virtual interactive space, the one or more audience member data, and the one or more performer data, wherein the communication device is further configured for transmitting the one or more virtual experiences corresponding to one or more of the plurality of audience members and the one or more performers to one or more of the plurality of audience member devices and the one or more performer devices.

20. The system of claim 11, wherein the communication device is further configured for receiving one or more event venue data associated with one or more event venues of the at least one virtual event from the one or more performer devices, wherein the processing device is further configured for:
analyzing the one or more event venue data using one or more first machine learning models, wherein the one or more first machine learning models is trained for detecting attending parameters for attending the at least one virtual event at the one or more event venues; and determining one or more attending parameters for attending the at least one virtual event at the one or more event venues by one or more of the plurality of audience members and the one or more performers based on the analyzing of the one or more event venue data, wherein the creating of the virtual interactive space is further based on the one or more attending parameters.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202063123944P | 2020-12-10 | 2020-12-10 | |
| US63/123,944 | 2020-12-10 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022125964A1 (en) | 2022-06-16 |
Family
ID=81974011
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2021/062916 Ceased WO2022125964A1 (en) | 2020-12-10 | 2021-12-10 | Methods, systems, apparatuses, and devices for facilitating sharing of virtual experience between users |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2022125964A1 (en) |
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120060101A1 (en) * | 2010-08-30 | 2012-03-08 | Net Power And Light, Inc. | Method and system for an interactive event experience |
| US20180286460A1 (en) * | 2016-04-08 | 2018-10-04 | DISH Technologies L.L.C. | Systems and methods for generating and presenting virtual experiences |
| US10789764B2 (en) * | 2017-05-31 | 2020-09-29 | Live Cgi, Inc. | Systems and associated methods for creating a viewing experience |
Non-Patent Citations (2)
| Title |
|---|
| DIONISIO ET AL.: "3D Virtual Worlds and the Metaverse: Current Status and Future Possibilities", ACM COMPUTING SURVEYS (CSUR, vol. 45, no. 3, June 2013 (2013-06-01), pages 1 - 38, XP058020920, Retrieved from the Internet <URL:https://dl.acm.org/doi/abs/10.1145/2480741.2480751> [retrieved on 20220210] * |
| VITRUVIAN ENTERTAINMENT: "VIRTUAL DJ CONCERT 1", YOUTUBE VIDEO, XP055944881, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=WmG6tOCees4> [retrieved on 20220210] * |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20250384500A1 (en) * | 2024-06-17 | 2025-12-18 | SQ Technology (Shanghai) Corporation | System of licensing virtual avatar interaction in metaverse digital wax museum and method thereof |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Pavlik et al. | The emergence of augmented reality (AR) as a storytelling medium in journalism | |
| US11216166B2 (en) | Customizing immersive media content with embedded discoverable elements | |
| US8799005B2 (en) | Systems and methods for capturing event feedback | |
| US9292163B2 (en) | Personalized 3D avatars in a virtual social venue | |
| US10020025B2 (en) | Methods and systems for customizing immersive media content | |
| US8667402B2 (en) | Visualizing communications within a social setting | |
| US20110225515A1 (en) | Sharing emotional reactions to social media | |
| US20110239136A1 (en) | Instantiating widgets into a virtual social venue | |
| US20090013263A1 (en) | Method and apparatus for selecting events to be displayed at virtual venues and social networking | |
| US20110225039A1 (en) | Virtual social venue feeding multiple video streams | |
| US20110244954A1 (en) | Online social media game | |
| US20110225519A1 (en) | Social media platform for simulating a live experience | |
| US20120094768A1 (en) | Web-based interactive game utilizing video components | |
| US20110225498A1 (en) | Personalized avatars in a virtual social venue | |
| US20110225516A1 (en) | Instantiating browser media into a virtual social venue | |
| US20110225518A1 (en) | Friends toolbar for a virtual social venue | |
| US20140325540A1 (en) | Media synchronized advertising overlay | |
| WO2016029224A1 (en) | Apparatus, system, and method for providing users with a shared media experience | |
| US20110225517A1 (en) | Pointer tools for a virtual social venue | |
| US11715270B2 (en) | Methods and systems for customizing augmentation of a presentation of primary content | |
| WO2022125964A1 (en) | Methods, systems, apparatuses, and devices for facilitating sharing of virtual experience between users | |
| Carpio et al. | Gala: a case study of accessible design for interactive virtual reality cinema | |
| Ding | (Self-) Representation of Migrant Workers in Chinese Smaller-Screen Visual Practices: From DV-made Documentaries to Short Videos | |
| Zimmer | Commodified Surveillance: First-Person Cameras, the Internet, and Compulsive Documentation | |
| Styliari | Digital identity at the movies: understanding and designing the contemporary cinema-going experience |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21904510; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21904510; Country of ref document: EP; Kind code of ref document: A1 |