
HK40035901A - Interactive toy - Google Patents


Info

Publication number: HK40035901A
Application number: HK42021025997.4A
Authority: HK (Hong Kong)
Prior art keywords: riding device, trigger signal, riding, responses, experience data
Other languages: Chinese (zh)
Other versions: HK40035901B (en)
Inventors: A·K·卡拉马, R·S·特罗布里奇, P·斯蒂夫尼维兹, C·M·景, J·C·汉普顿
Original assignee: 迪士尼企业公司 (Disney Enterprises, Inc.)
Application filed by 迪士尼企业公司 (Disney Enterprises, Inc.)
Publication of HK40035901A
Publication of HK40035901B

Description

Interactive toy
Cross-reference to related application data
This application claims the benefit of U.S. Application No. 16/421,223, filed May 23, 2019, the entire disclosure of which is hereby incorporated by reference for all purposes.
Technical Field
The present disclosure relates generally to toys.
Background
Toys are items, possessed by users, that encourage play and imagination. Toys have long been used in many different types of play, including sports-based play, role-playing, and fantasy play. Toys have incorporated technology to provide different modes of enjoyment and interaction. For example, some early toys included mechanisms driven by pull cords to make noise, "talk", or move, while more recent toys have included processing capabilities.
Toys are manufactured in different forms and types, and may include building sets and character-type toys such as action figures, character dolls, animatronic dolls, and robots. Toys may also include non-character items such as, for example, shopping carts, lawn mowers, baby accessories, or jeans and belts, and toys may include clothing or clothing accessories. While toys provide many different play experiences, it is desirable to further develop play items, including toys and any other items used in play.
Disclosure of Invention
Certain embodiments are directed to interactive play items that may include toys. One aspect of the present disclosure relates to a device for interactive game play. The apparatus comprises: a body portion; a transmitter embedded in the body to transmit a self-describing signal for the interactive apparatus; a receiver that can receive a trigger signal, the trigger signal transmitted in response to a self-describing signal; and a memory that can store the trigger signal and a plurality of preprogrammed responses that are responsive to the trigger signal. In some embodiments, the pre-programmed responses include one or more of an audible response and a physical response. In some embodiments, the physical response may include any of moving an actuator, powering a light, and activating a motor. The apparatus may include: a speaker to transmit an auditory response; and a processor in communication with the memory and the receiver. The processor may process a plurality of instructions for performing the following process: identifying a trigger signal; selecting one of a plurality of preprogrammed responses that are responsive to the trigger signal; and executing one of the plurality of preprogrammed responses. In some embodiments, performing one of the plurality of pre-programmed responses comprises at least one of transmitting an audible response and performing a physical response.
In some embodiments, the memory may be a writable persistent storage that may store experience data and a plurality of preprogrammed responses. In some embodiments, the plurality of pre-programmed responses characterize behavior of the device. In some embodiments, the processor may update the memory with new experience data. In some embodiments, updating the memory with new experience data modifies future selections of one of a plurality of preprogrammed responses in response to the received trigger signal.
In some embodiments, the device comprises at least one sensor for generating experience data. In some embodiments, the experience data is associated with a location and a time. In some embodiments, the receiver may receive new experience data. In some embodiments, the processor may update the memory with new experience data. In some embodiments, updating the memory with new experience data modifies future selections of one of a plurality of preprogrammed responses in response to the received trigger signal.
In some embodiments, one of the plurality of preprogrammed responses is selected based on the trigger signal. In some embodiments, selecting one of the plurality of preprogrammed responses comprises: identifying a set of potential responses based on the trigger signal; and selecting one of a set of potential responses. In some embodiments, one of a set of potential responses is randomly selected.
In some embodiments, performing one of the plurality of pre-programmed responses comprises: determining execution parameters for executing the pre-programmed response; and executing one of the plurality of preprogrammed responses according to the execution parameter. In some embodiments, the performance parameter includes at least one of delay, amplitude, and duration.
One aspect of the present disclosure relates to a system for delivery of attraction experiences. The system comprises: a passenger vehicle having a plurality of passenger positions; a content presentation system that can present a virtual portion of an attraction experience viewable from a plurality of passenger locations; a transceiver that can transmit a trigger signal; and at least one processor. The at least one processor may: control, via the content presentation system, delivery of the virtual portion of the attraction experience; detect the presence of a non-riding device, which may be, for example, a toy; and transmit the trigger signal to the non-riding device via the transceiver. In some embodiments, the trigger signal may be associated with the attraction experience.
In some embodiments, the at least one processor may determine a property of the non-riding device. In some embodiments, the attributes of the non-riding device are based on experience data. In some embodiments, the experience data is associated with a location and a time. In some embodiments, the trigger signal is based at least in part on a property of the non-riding device.
In some embodiments, the at least one processor may modify the attraction experience based on at least one of the presence of the non-riding device and experience data of the non-riding device. In some embodiments, the trigger signal indicates that the non-ride device is reacting to an event in the attraction experience. In some embodiments, the event in the attraction experience includes the presence of a character. In some embodiments, the trigger signal indicates that the non-riding device provides assistance to an owner of the non-riding device. In some embodiments, providing assistance to the owner of the non-riding device comprises: determining at least one desired action of an owner of the non-riding device; and providing information via the non-riding device to facilitate the at least one desired action. In some embodiments, the trigger signal indicates a particular reaction by the non-riding device. In some embodiments, the trigger signal indicates one of a class of reactions by the non-riding device.
One aspect of the present disclosure relates to a method for attraction experience delivery. The method comprises the following steps: controlling, via a content presentation system, delivery of a virtual portion of an attraction experience that is perceivable from a passenger location within a passenger vehicle; detecting a presence of a non-riding device in the passenger vehicle; and transmitting a trigger signal to the non-riding device, the trigger signal indicating an action of the non-riding device. In some embodiments, the trigger signal is associated with the attraction experience.
In some embodiments, the method includes determining a property of the non-riding device. In some embodiments, determining the attributes of the non-riding device includes receiving data from the non-riding device. In some embodiments, the attributes of the non-riding device are based on experience data. In some embodiments, the experience data characterizes the experience of the non-riding device and is location-dependent and time-dependent. In some embodiments, the trigger signal is based at least in part on a property of the non-riding device.
In some embodiments, the method includes modifying the attraction experience based on at least one of the presence of the non-riding device and experience data of the non-riding device. In some embodiments, the trigger signal indicates that the non-ride device is reacting to an event in the attraction experience. In some embodiments, the event in the attraction experience includes the presence of a character. In some embodiments, the character may be a virtual character.
In some embodiments, the trigger signal indicates that the non-riding device provides assistance to an owner of the non-riding device. In some embodiments, providing assistance to the owner of the non-riding device comprises: determining at least one desired action of an owner of the non-riding device; and providing information via the non-riding device to facilitate the at least one desired action. In some embodiments, the trigger signal indicates a particular reaction by the non-riding device. In some embodiments, the trigger signal indicates one of a class of reactions by the non-riding device.
The nature and advantages of embodiments of the present disclosure may be better understood with reference to the following detailed description and accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of one embodiment of an interactive presentation system.
FIG. 2 is a schematic diagram of one embodiment of a simulation environment.
FIG. 3 is an exemplary depiction of one embodiment of a non-riding device.
FIG. 4 is a functional block diagram depicting components of one embodiment of a non-riding device.
FIG. 5 shows a flow chart of an exemplary method.
FIG. 6 is a flow diagram illustrating one embodiment of an attraction experience delivery process.
FIG. 7 is a flow diagram illustrating one embodiment of a process for interactive game play by a non-riding device.
FIG. 8 is a depiction of one embodiment of a computer system.
Detailed Description
The following description provides illustrative embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the illustrative embodiments will provide those skilled in the art with an enabling description for implementing a preferred exemplary embodiment. It is to be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope as set forth in the appended claims.
I. Introduction
The development of simulation techniques provides many opportunities for the future creation of attractions that provide a unique, customized experience for the ride passenger and/or performance audience. As used herein, attractions include amusement rides, interactions or events (such as interactions with one or more characters), and/or performances. In some embodiments, these attractions may be within an attraction park, and in some embodiments, these attractions may be in an amusement park, theme store, theater, and/or restaurant (e.g., a theme restaurant). While these techniques provide opportunities for future attractions, generating content based on the passenger's responses to the attractions also presents challenges. For example, a user may have play items related to attractions, which may include toys, but integrating these items into attractions may be problematic.
These and other problems may be overcome by the use of the systems, devices, and methods disclosed herein. For example, a play item, also referred to herein as a "non-riding device," when presented at an attraction, may include features that enable communication with the attraction, and the attraction may include features that enable detection of one or several non-riding devices in a viewing area, such as, for example, a passenger vehicle, and communication with some or all of the one or several non-riding devices. As used herein, a play item refers to any item that may be used to enhance the participation or realism of an embodiment in a themed attraction or environment. The play item may comprise any item used in play, and may comprise a toy, a garment, a badge, any item worn or carried, clothing, and/or clothing accessories. Such play items may be tagged and/or may include features that allow communication with the play item and computer-based identification of the play item. As used herein, a "non-riding device" refers to a device owned, controlled, and/or possessed by a passenger of an attraction and/or an audience member of the attraction that is present at the attraction and/or is brought to the attraction and moved away from the attraction by the passenger. The non-riding device may include play items such as toys. The toy may include a doll such as an action figure or character doll, a stuffed animal, an animatronic doll, or a robot.
The attraction may be modified based on the detection of one or more non-riding devices in the passenger vehicle and/or the observation area and/or based on one or more attributes of the one or more non-riding devices. In some embodiments, characters in the attraction (such as virtual characters) may interact with one or several of the non-riding devices. The attraction may also generate and transmit one or several trigger signals that may trigger a response of the non-riding device. The response may include interaction with a character in the attraction, a response to an event in the attraction, or assistance to a passenger (such as the owner, controller, and/or possessor of the non-riding device). Such assistance may include providing information about one or several upcoming events and/or about how to provide a desired response to one or several events. In some embodiments, such assistance may include the non-riding device providing and/or appearing to provide one or more inputs to control the attraction and/or taking one or more actions in response to the attraction.
The trigger signal may be customized based on one or several attributes of the non-riding device. In some embodiments, for example, each non-riding device may include memory containing experience data. The experience data may be accumulated by the non-riding device based on the non-riding device's interaction with one or several other devices (including one or several other non-riding devices), the presence of the non-riding device at a certain location and/or at a certain time, or the prior presence of the non-riding device at one or several attractions, etc. In some embodiments, the experience data may be generated by the non-riding device via one or several sensors of the non-riding device, and, in some embodiments, the non-riding device may receive a communication containing the experience data of the non-riding device. For example, if the non-riding device is brought into a store, the experience data may be generated by a sensor of the non-riding device, or the experience data may be generated based on information received by a communication component of the non-riding device from a communication component in the store that transmits information related to one or several events.
In some embodiments, upon receiving the trigger signal, the processor of the non-riding device may control the non-riding device to provide the desired action and/or reaction. In some embodiments, the trigger signal may specify an action and/or reaction, and the processor of the non-riding device may control the non-riding device to provide that action and/or reaction. In some embodiments, the trigger signal may specify a class of actions. Upon receiving a trigger signal specifying a class of actions, the non-riding device may select one action from the class, and the processor of the non-riding device may control the non-riding device in accordance with the selected action.
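The following is a minimal sketch, not the patent's actual firmware, of how a non-riding device might handle a trigger signal that names either a specific action or a class of actions as described above. The message keys and the response names (e.g. "wave", "cheer") are hypothetical.

```python
# Hedged sketch of device-side trigger handling: a trigger either names one
# action or names a class, in which case the device picks one at random.
import random

PREPROGRAMMED_RESPONSES = {
    "wave": lambda: print("actuator: wave arm"),
    "cheer": lambda: print("speaker: play cheer clip"),
    "gasp": lambda: print("speaker: play gasp clip"),
}

RESPONSE_CLASSES = {
    "react_to_character": ["wave", "cheer", "gasp"],
}

def handle_trigger(trigger):
    """trigger is assumed to be a dict such as {"action": "wave"} or
    {"action_class": "react_to_character"}."""
    if "action" in trigger:        # trigger names a specific action
        name = trigger["action"]
    else:                          # trigger names a class; pick one member
        name = random.choice(RESPONSE_CLASSES[trigger["action_class"]])
    PREPROGRAMMED_RESPONSES[name]()

handle_trigger({"action_class": "react_to_character"})
```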
II. Interactive Performance System
Referring now to FIG. 1, a schematic diagram of one embodiment of an interactive performance system 100 is shown. The system 100 may be used to provide an attraction and to deliver an attraction experience. Attractions may include amusement rides, performances, interactive performances or concerts, etc. The system 100 may be used to provide an attraction experience interacting with one or several non-riding devices, or to provide interactive game play. The system 100 may include a processor 102. Processor 102 may be any computing and/or processing device including, for example, one or more laptop computers, personal computers, tablets, smart phones, servers, mainframe computers or processors, and the like. The processor 102 may be configured to receive input from one or more other components of the system 100, process the input according to one or more stored instructions, and provide output to control the operation of one or more of the other components of the system 100.
In some embodiments, the processor 102 may include a game engine that may include a presentation engine. The game engine and presentation engine may together or independently develop and/or advance a simulated narrative and/or generate images corresponding to the narrative. The narrative comprises a series of events and/or actions that can be presented, and in some embodiments can be presented to attraction passengers or performance audiences in sequence. Modifications to the narrative can include changes to all or part of the narrative. Thus, in some embodiments, a modification to the narrative may include adding one, some, or all events, excluding one, some, or all events, or changing some aspect of one, some, or all events. Similarly, modifications to the narrative can include changes to a character or to character actions in the narrative. Thus, changing a character's lines in the narrative is a modification of the narrative.
In some embodiments, the narrative may occur in a virtual world. As used herein, a "virtual world" is a computer-based simulated environment of images, events, videos, storylines, sounds or effects, etc., generated by the game engine and/or presentation engine. In some embodiments, the virtual world may be presented to one or several passengers and/or viewers via a display, such as provided via one or several projectors, screens, monitors, and/or speakers; in some embodiments, the virtual world may be presented via one or several control signals that generate and/or communicate actions controlling one or several animatronic characters, animatronic props, or show action devices; and, in some embodiments, the virtual world may be presented via a combination of animatronics and displays. In some embodiments, the presentation engine may generate one or several events that may be based in part on user input provided to the system 100. These events may include, for example, one or more accelerations, decelerations, changes in direction, or interactions with one or more items or characters, etc.
In some embodiments, the processor 102 may include motion control. Motion control may control the motion of a simulated vehicle 108 (also referred to herein as a passenger vehicle 108, passenger transport 108, or attraction environment 108) via controls connected to the simulated vehicle 108 and/or a motion base 110 on or to which the simulated vehicle is mounted. In embodiments where the interactive system provides all or part of an attraction, the simulated vehicle 108 may accommodate the occupants of the attraction. In embodiments where the interactive system provides all or part of a performance, simulated vehicle 108 may be a viewing area from which the performance may be viewed. Thus, in embodiments where the performance is performed to one or several passers-by on a pedestrian walkway, sidewalk, alley, or street, the viewing area may be part of the pedestrian walkway, sidewalk, alley, or street. Motion control may control the motion of the simulated vehicle based on one or more inputs received from a user and/or one or more game events.
The system 100 may include a memory 104. Memory 104 may represent one or more storage media and/or memories for storing data, including Read Only Memory (ROM), Random Access Memory (RAM), magnetic RAM, core memory, magnetic disk storage media, optical storage media, flash memory devices, and/or other machine-readable media for storing information. The term "machine-readable medium" includes, but is not limited to, portable or fixed storage devices, optical storage devices, and/or various other storage media capable of storing instructions and/or data. The memory 104 may be an integrated part of the processor 102 and/or may be separate from the processor 102. In embodiments where the memory 104 is separate from the processor 102, the memory 104 and the processor 102 may be communicatively coupled via, for example, the communication network 130. In some embodiments, the communication network 130 may include any wired or wireless communication connections between components of the simulation system 100.
The memory 104 may include software code and/or instructions for directing the operation of the processor 102 and/or may include one or several databases 106 containing information used by the processor 102 and/or generated by the processor 102.
Memory 104 may include a narrative/image database 106-A. The narrative/image database 106-A stores narrative and image data. Such narrative and/or image data can define and/or bound a virtual world. Such narrative and image data may include information and/or data related to the narrative and to images generated as part of the narrative. Specifically, narrative and image data are the data and information used to generate the narrative and the images and/or sounds in the narrative. This may include data identifying one or more of the items, characters, effects, or things that are present in the narrative, and data or databases defining those items, characters, effects, or things. The data or database defining an item, character, effect, or thing may identify one or several attributes of that item, character, effect, or thing, which may define its size, speed, sound, movement characteristics, or lighting characteristics, etc. The narrative database 106-A may also include information regarding the events within the narrative and the order of those events.
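Purely as an illustration of the kind of record the narrative/image database could hold, the sketch below models an entity with the attributes listed above and an ordered narrative event. The field names and example values are assumptions, not the patent's schema.

```python
# Illustrative data-structure sketch for narrative/image data.
from dataclasses import dataclass, field
from typing import List

@dataclass
class NarrativeEntity:
    name: str                                        # item, character, effect, or thing
    size: float = 1.0                                # relative scale
    speed: float = 0.0                               # movement speed
    sounds: List[str] = field(default_factory=list)  # associated audio clips
    movement: str = "static"                         # movement characteristics
    lighting: str = "default"                        # lighting characteristics

@dataclass
class NarrativeEvent:
    order: int                                       # position in the event sequence
    description: str
    entities: List[NarrativeEntity] = field(default_factory=list)

pirate = NarrativeEntity("pirate_captain", size=1.8, speed=1.2,
                         sounds=["pirate_laugh.wav"], movement="walk")
event = NarrativeEvent(order=3, description="pirate boards the ship",
                       entities=[pirate])
```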
Memory 104 may include a vehicle database 106-B. The vehicle and/or actuator system database 106-B may include data related to simulated vehicles and/or actuator systems. In some embodiments, the database 106-B may include information relating to features of the simulated vehicle and/or relating to control of the simulated vehicle and/or interaction with user control features located on the simulated vehicle. In some embodiments, for example, the simulated vehicle may move in response to user input to a user-controlled feature and/or according to a simulated narrative or event in the simulated narrative. Vehicle database 106-B may include data identifying one or more characteristics of the simulated vehicle that enable the simulated vehicle to move. These features may include, for example, one or more of the following: motors, servomotors, pneumatic, electric or hydraulic components or any components causing movement, etc.
Memory 104 may include a trigger database 106-C. Trigger database 106-C may include information related to one or several trigger signals. In some embodiments, the information related to one or several trigger signals may include a set of information for each trigger signal that may instruct the recipient non-riding device to take an action and/or to take an action from a class of actions. In some embodiments, the information related to the one or more trigger signals further relates to one or more events in the narrative and/or to experience data of one or more non-riding devices. In some embodiments, for example, when an event occurs in the simulation and/or when an event in the simulation is approaching, the processor 102 may identify one or several potential trigger signals associated with the event. The processor 102 may further identify one or more non-riding devices in the passenger vehicle and/or experience data for each of the one or more non-riding devices. The processor 102 may then select one or more trigger signals based on the current one or more non-riding devices and/or the experience data of the current one or more non-riding devices.
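A hedged sketch of this selection step follows: given an upcoming narrative event and the devices detected in the vehicle, candidate triggers are drawn from a trigger table. The table contents and the "afraid_of_pirates" attribute are illustrative assumptions.

```python
# Sketch of selecting trigger signals for an event based on device attributes.
TRIGGER_DATABASE = {
    "pirate_appears": [
        {"id": "T1", "action_class": "react_to_character", "requires": None},
        {"id": "T2", "action": "gasp", "requires": "afraid_of_pirates"},
    ],
}

def select_triggers(event_name, devices):
    """devices: list of dicts like {"id": "toy-1", "attributes": {...}}."""
    selected = []
    for trigger in TRIGGER_DATABASE.get(event_name, []):
        for device in devices:
            needed = trigger["requires"]
            if needed is None or device["attributes"].get(needed):
                selected.append((device["id"], trigger["id"]))
    return selected

devices = [{"id": "toy-1", "attributes": {"afraid_of_pirates": True}},
           {"id": "toy-2", "attributes": {}}]
print(select_triggers("pirate_appears", devices))
# [('toy-1', 'T1'), ('toy-2', 'T1'), ('toy-1', 'T2')]
```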
The memory 104 may include a non-ride device database 106-D. The non-ride device database 106-D may include information related to one or more non-riding devices detected in the passenger vehicle and/or to experience data for the one or more non-riding devices detected in the passenger vehicle. In some embodiments, this information may be stored until completion of the simulation and/or completion of the attraction. The information in the non-ride device database 106-D may be received from the non-riding devices in the passenger vehicle.
The system 100 may include one or several simulated vehicles 108. The simulated vehicle 108 may provide hardware corresponding to some or all of the features of a virtual vehicle in which the passenger is located during the gaming/simulation portion of the attraction experience. In some embodiments of the attraction, the simulated vehicle 108 may transport the passenger from a starting location, which may be the location where the passenger enters the simulated vehicle 108, to an ending location, which may be the location where the passenger exits the simulated vehicle 108. In some embodiments, the starting location and the ending location may be the same location.
The simulated vehicle 108 may house one or several passengers in one or several passenger locations 111. Each of these passenger locations 111 may include, for example, a location for one or several passengers, a seat, a restraint system, or the like. The simulated vehicle 108 and/or components thereof may be communicatively coupled with the processor 102. The communication connection may allow information to be provided to the simulated vehicle 108 that may control the operation of all or a portion of the simulated vehicle 108, and the communication connection may allow the processor 102 to receive information from the simulated vehicle 108 that may include one or several user inputs at the simulated vehicle 108. In some embodiments, the simulated vehicle 108 may move according to the narrative and/or according to one or several events in the narrative to create a sense of motion for the passenger in conjunction with the generated images. In some embodiments, each of the simulated vehicles 108 may be mounted on, above, and/or over a motion base 110, also referred to herein as an actuator system. Motion base 110 may move the simulated vehicle 108 mounted on, above, and/or over motion base 110. Motion base 110 may include one or more of the following: motors, servomotors, pneumatic components, hydraulic components, electrical components, any motion-inducing components, and the like.
The simulated vehicle 108 may include controls 109 through which one or more user inputs may be received. In some embodiments, these controls may include one or more of the following: wheels, levers, buttons, pedals, switches, sliders, and knobs. In some embodiments, the simulated vehicle 108 may move and/or be configured to move according to control signals received from the processor 102 and/or the user control features.
The system 100 may communicate with one or several non-ride devices 112. In some embodiments, each of the non-ride devices 112 may include a toy, such as a doll (for example, an action figure or character doll), a stuffed animal, an animatronic doll, or a robot. In some embodiments, the system 100 may communicate with one or several non-riding devices 112 while the one or several non-riding devices are in the simulated vehicle 108.
The system 100 may include a non-riding device detection system 114. The non-riding device detection system 114 may detect the presence of a non-riding device in the passenger vehicle 108. In some embodiments, the non-riding device detection system 114 may detect the presence of one or several non-riding devices via one or several communication protocols. In some embodiments, for example, the non-riding device detection system 114 may include one or several software modules or communication protocols that are executed using the non-riding device communication system 118 discussed below.
The system 100 may include a content presentation system 116, which may include an audio presentation system, one or several animatronic presentation systems, and/or a video presentation system. The content presentation system 116 may provide or present content to passengers of the passenger vehicle 108, including passengers of the attraction. The content presentation system 116 may be communicatively coupled to the processor 102 and may include one or several features configured to generate images based on one or several control signals received from the processor 102.
The content presentation system 116 may include hardware components configured to deliver content to the occupants of the simulated vehicle 108. The components of the content presentation system 116 may include, for example, one or more of the following: a speaker, sound generator, display, screen, monitor, projector, illuminator, laser, fan, or heater. In some embodiments, each simulated vehicle 108 may include a unique content presentation system 116, and in some embodiments, the content presentation system 116 may be non-unique to some or all of the simulated vehicles 108. In some embodiments where the simulated vehicle 108 includes a unique content presentation system 116, the content presentation system 116 may be part of the simulated vehicle 108, may be attached to the simulated vehicle 108, and/or may move with the simulated vehicle 108 from a starting point to an ending point.
The content presentation system 116 can provide and/or present audio content via an audio presentation system. The audio presentation system may include, for example: amplifiers, such as one or more of a preamplifier, a power amplifier, a phono preamplifier, a subwoofer amplifier, and an integrated amplifier; one or more speakers, such as one or more of a subwoofer, a tweeter, and a mid-range speaker; a mixing engine; an equalizer; a speaker processor; and/or a scheduler. The content presentation system 116 can provide and/or present video content via a video presentation system. The video presentation system may comprise one or several of the following: a screen, a display, a monitor, a projector, a laser, and/or a light source.
The system 100 may include a non-ride device communication system 118. The non-riding device communication system 118 may include one or several features for communicating with one or several non-riding devices in the passenger vehicle 108. The one or several features may include, for example, an antenna, a transceiver, a transmitter, or a receiver, etc. In some embodiments, the receiver and transceiver may receive information from the non-riding device 112 via electromagnetic communications including, for example, radio frequency communications, infrared communications, or visible light communications. In some embodiments, the transceiver and/or transmitter may send information to the non-riding device 112 via electromagnetic communications including, for example, radio frequency communications, infrared communications, or visible light communications. In some embodiments, the non-ride device communication system 118 may communicate with the non-ride devices 112 via any desired wireless communication regime (including, for example, via electromagnetic waves, such as via radio frequency communication).
Referring to FIG. 2, a schematic diagram of a simulation environment 200 is shown. The simulated environment 200 may comprise all or part of the system 100. Specifically, as shown in FIG. 2, the simulated environment 200 includes the simulated vehicle 108, the motion base 110, and the user controls 109. The simulated vehicle 108 shown in FIG. 2 also includes a body 202, the body 202 containing a window 204 and opaque structural features 206 such as, for example, a roof, pillars, posts, and/or window frames. The simulated vehicle 108 may also include a passenger area 208, which passenger area 208 may include passenger locations 111, which passenger locations 111 contain, for example, one or several seats or restraint devices, etc. The simulated vehicle 108 may include one or several accessory features 210.
The simulation environment 200 may include a content presentation system 116. The content presentation system 116 may include a screen 212 and at least one projector 214. The screen 212 may include various shapes and sizes, and may be made of various materials. In some embodiments, the screen 212 may be flat, and in some embodiments, the screen 212 may be angular, curved, dome-shaped, or the like. In some embodiments, the screen 212 is curved and/or domed to extend around all or part of the simulated vehicle 108, and, in particular, curved and/or domed to extend around part of the simulated vehicle 108 such that passengers looking outward from the simulated vehicle 108 see the screen.
One or more projectors 214 may project images onto screen 212. These projectors 214 may be located on the same side of the screen 212 as the simulated vehicle 108 or on the opposite side of the screen 212 from the simulated vehicle. The projector 214 may be controlled by the processor 102.
The simulated environment 200 may include one or more detection features 220 and/or one or more communication features 222. These detection features 220 and/or communication features 222 may include, for example, one or several antennas. One or more detection features 220 and/or one or more communication features 222 may be in communication with one or more non-riding devices 112 in the passenger vehicle 108.
III. Non-Riding Devices
Referring to FIG. 3, a schematic view of one embodiment of the non-riding device 112 is shown. The non-riding device 112 may include a play item, which may be a toy, such as a doll (for example, an action figure or character doll), a badge (insignia), a stuffed animal, an animatronic doll, or a robot. The non-riding device 112 may include a body that may house one or more components and/or to which one or more components may be connected. In some embodiments, the non-riding device 112 may include one or several communication features that may enable the non-riding device 112 to communicate with other components of the system 100. In some embodiments, the communication features may include, for example, a transceiver, a transmitter 304, and/or a receiver 306. In some embodiments, the functionality of the transmitter 304 and receiver 306 may be combined in a transceiver. The transmitter 304 may transmit information from the non-riding device 112 including, for example, information identifying the non-riding device 112 and/or information related to experience data of the non-riding device 112. In some embodiments, the transmitter 304 may be embedded in the body 302 of the non-riding device 112 and may transmit one or several self-describing signals for the non-riding device 112, also referred to herein as the interactive device 112. The receiver 306 may receive information from other components of the system 100, from other components of a venue such as an amusement park, theater, or stage, and/or from other non-riding devices 112. In particular, the receiver may receive one or several trigger signals from the attraction, and in particular from the non-riding device communication system 118. In some embodiments, the non-riding device communication system 118 may communicate the trigger signal in response to receipt of the self-describing signal. In some embodiments, the receiver 306 may receive new experience data. The new experience data may be received from other non-riding devices 112 and/or from one or several modules configured to communicate with the non-riding devices 112. In some embodiments, the transmitter 304, receiver 306, and/or transceiver may transmit and/or receive information via electromagnetic communications, including, for example, radio frequency communications, infrared communications, or visible light communications.
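A rough sketch of the exchange implied above follows: the device broadcasts a self-describing signal and the attraction answers with a trigger signal. The dictionary keys and the plain function calls standing in for the RF/IR link are assumptions for illustration only.

```python
# Sketch of the self-describe / trigger handshake between device and attraction.
def self_describing_signal(device_id, experience_summary):
    """Payload the non-riding device might broadcast (assumed format)."""
    return {"type": "self_describe", "device_id": device_id,
            "experience": experience_summary}

def attraction_reply(signal):
    """Stand-in for the non-ride device communication system 118."""
    if signal["type"] != "self_describe":
        return None
    # Acknowledge detection and answer with a trigger tailored to the device.
    return {"type": "trigger", "target": signal["device_id"],
            "action_class": "react_to_character"}

msg = self_describing_signal("toy-1", {"visited_pirate_store": True})
print(attraction_reply(msg))
```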
The non-riding device 112 may include output features that the non-riding device 112 may use to output information to the owner of the non-riding device 112. These characteristics may include, for example, one or more speakers 308, one or more lights, one or more displays, and/or one or more screens. The non-riding device 112 may also include one or several sensors 310. The sensors 310 may sense properties of the environment surrounding the non-riding device 112. These sensors may include, for example, one or more cameras, microphones, accelerometers, thermometers, and/or location features such as one or more GPS antennas. In some embodiments, one or several sensors 310 may generate experience data, which may be associated with a location and/or time in some embodiments.
The non-riding device 112 may also include one or more movement features 312. These features 312 may be controlled by the non-riding device 112 to move the non-riding device 112 and/or take some action. In some embodiments, these movement features 312 may include one or several appendages, such as, for example, one or more arms or legs. These arms and/or legs may be controlled by the non-riding device 112 to move the non-riding device 112 and/or may be controlled to perform a desired movement or take a desired action.
Referring now to FIG. 4, one embodiment of a functional block diagram depicting components of the non-riding device 112 is presented. Components of the non-riding device 112 may be communicatively coupled to each other. As shown in fig. 4, the non-riding device 112 may include a transmitter 304, a receiver 306, a speaker 308, and/or a sensor 310. In some embodiments, the non-riding device 112 may also include a device processor 350, which device processor 350 may receive information from one or more components of the non-riding device 112 and provide control signals to one or more components of the non-riding device 112.
The non-riding device 112 may also include a device memory 352. The device memory 352 may include volatile and/or nonvolatile memory. The device memory may comprise one or several databases and may store one or several received trigger signals and/or a plurality of pre-programmed responses to one or several received trigger signals. In some embodiments, device memory 352 may be updated and/or the device 112 may be programmed by a user. In some embodiments, for example, a user may program the device 112 and/or update the device memory 352, and specifically update and/or change the plurality of preprogrammed responses in the device memory 352. In some embodiments, this may be performed, for example, by inserting a programming device, such as a chip, into the device 112. The device 112 may read the chip and retrieve information from the chip to reprogram the device 112 and/or update the device memory 352. In some embodiments, a user may reprogram the device 112 by replacing all or part of the memory, particularly by removing and/or replacing all or part of the memory of the device 112. In some embodiments, for example, a user may program and/or reprogram the device 112 so that the device has one or several desired attributes, such as an association. In some embodiments, for example, the device 112 may be programmed to have an association with one or several evil characters, and in some embodiments, the device 112 may be programmed to have an association with one or several good characters. In some embodiments, such an association may be established programmatically and in response to the device 112 responding to the presence of one or several events or characters, which may be in the attraction.
In some embodiments, some or all of the pre-programmed responses may be organized into one or several response classes. In some embodiments, for example, a trigger signal may be associated with a single pre-programmed response, and in some embodiments, a trigger signal may be associated with a class of pre-programmed responses that may include multiple pre-programmed responses. In some embodiments where multiple non-riding devices 112 are in a single passenger vehicle 108, trigger signals associated with one type of pre-programmed response may prevent a large number or all of the non-riding devices from having the same response to the trigger signal. In some embodiments, for example, when the non-riding device receives a trigger signal associated with a class of pre-programmed responses, one of a plurality of pre-programmed responses from the class of pre-programmed responses may be selected by each of the non-riding devices. In some embodiments, each non-riding device may randomly select one of a plurality of preprogrammed responses, thereby reducing the likelihood that multiple non-riding devices will have the same response. In some embodiments, the non-riding devices 112 in the passenger vehicle 108 may communicate with each other to ensure that each of the non-riding devices 112 selects a unique one of the plurality of preprogrammed responses and/or that less than a maximum number of the non-riding devices 112 select the same one of the plurality of preprogrammed responses.
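The sketch below illustrates, under assumed names and an assumed policy, the coordination idea just described: devices pick from a response class at random, but responses already claimed by nearby devices are avoided so that duplicates stay below a cap. The cap value and class contents are hypothetical.

```python
# Illustrative sketch of coordinated response selection across devices.
import random

def assign_responses(device_ids, response_class, max_same=1):
    counts = {r: 0 for r in response_class}
    assignment = {}
    for device_id in device_ids:
        available = [r for r in response_class if counts[r] < max_same]
        if not available:            # more devices than unique responses
            available = list(response_class)
        choice = random.choice(available)
        counts[choice] += 1
        assignment[device_id] = choice
    return assignment

print(assign_responses(["toy-1", "toy-2", "toy-3"], ["wave", "cheer", "gasp"]))
```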
In some embodiments, device memory 352 may include writable persistent storage that may store experience data and a plurality of preprogrammed responses. These pre-programmed responses characterize the behavior of the device 112. In some embodiments, device processor 350 may update device memory 352 with new experience data. Updating device memory 352 with new experience data may modify future selections of one of the plurality of preprogrammed responses in response to a received trigger signal. In particular, the association between a trigger signal and a response in device memory 352 may be modified based on experience data stored in device memory 352. Thus, in some embodiments, the device 112 may be initially programmed to be friendly toward pirate characters, but after generating experience data and modifying the device memory 352 with that experience data, the device 112 may be afraid of pirate characters.
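A minimal, assumed illustration of how new experience data might rewrite the stored trigger-to-response association, as in the pirate example above, is shown below. The update rule and the dictionary keys are hypothetical.

```python
# Sketch: experience data modifies the stored response to a trigger.
device_memory = {
    "responses": {"pirate_appears": "wave_hello"},   # initially friendly
    "experience": [],
}

def update_memory(memory, new_experience):
    memory["experience"].append(new_experience)
    # Example rule: enough scary pirate encounters flip the stored response.
    scary = sum(1 for e in memory["experience"]
                if e.get("event") == "scared_by_pirate")
    if scary >= 2:
        memory["responses"]["pirate_appears"] = "hide_and_tremble"

update_memory(device_memory, {"event": "scared_by_pirate", "place": "ride A"})
update_memory(device_memory, {"event": "scared_by_pirate", "place": "ride B"})
print(device_memory["responses"]["pirate_appears"])   # hide_and_tremble
```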
In some embodiments, device memory 352 may include one or several execution parameters. In some embodiments, the execution parameters may affect the execution of one or more of the preprogrammed responses. In some embodiments, the execution parameters may affect the speed at which the pre-programmed response is executed, the duration of execution of the pre-programmed response, and/or the magnitude of execution of the pre-programmed response. In some embodiments, the execution parameter may identify at least one of an execution delay, an execution duration, and an execution magnitude, such as an execution quantity.
These pre-programmed responses may include one or more of an audible response and a physical response. In some embodiments, the physical response may include at least one of moving an actuator, powering a light, moving and/or manipulating an appendage, and/or activating a motor. In some embodiments, processor 350 may provide one or more control signals based on responses contained in memory and/or based on one or more received trigger signals. In some embodiments, the one or more control signals may cause, for example, speaker 308 to deliver and/or transmit an audible response, may cause one or more lights, one or more displays, and/or one or more screens to deliver a visual response, and/or may cause one or more actuators 354 and/or motors to deliver a physical response via, for example, one or more movement features 312 of the device 112.
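One possible way, sketched here under assumed names, that the device processor could turn a selected response plus execution parameters (delay, amplitude, duration) into control of the speaker or actuators:

```python
# Sketch of executing a pre-programmed response with execution parameters.
import time

def execute_response(response, params):
    time.sleep(params.get("delay", 0))            # execution delay
    if response["kind"] == "audible":
        print(f"speaker: play {response['clip']} "
              f"at volume {params.get('amplitude', 1.0)}")
    elif response["kind"] == "physical":
        print(f"actuator {response['actuator']}: move for "
              f"{params.get('duration', 1.0)} s")

execute_response({"kind": "audible", "clip": "cheer.wav"},
                 {"delay": 0.5, "amplitude": 0.8})
execute_response({"kind": "physical", "actuator": "left_arm"},
                 {"duration": 2.0})
```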
The device 112 may include one or several actuators 354. The one or more actuators 354 may be, for example, one or more motors, hydraulic or pneumatic cylinders, electroactive polymers, and/or piezoelectric actuators. The one or more actuators 354 may control the movement of one or more movement features 312 of the device 112.
IV. Interactive Experience Delivery
Referring now to FIG. 5, a functional block diagram of a module 500 for interactive experience delivery is shown. These modules 500 may be hardware modules and/or software modules. In some embodiments, these modules 500 may be located, in whole or in part, on the processor 102 or in the processor 102. These modules include a communications module 502. The communication module 502 interacts with the non-riding device communication system 118 and/or the non-riding device detection system 114 to detect the presence of one or more non-riding devices 112 in the passenger vehicle 108, receive data from the non-riding devices 112, and/or send one or more trigger signals or data to the non-riding devices 112.
The game engine 504 (also referred to herein as a "story engine" or "experience engine") may control the generation and/or presentation of the attraction experience narrative to passengers of the simulated vehicle 108 and/or to viewers of the attraction. This generation and/or presentation of the narrative can be based on and/or can include generation of a virtual world of the attraction experience. The generation and/or presentation of the narrative can include identifying a game event that can be associated with a trigger signal. In some embodiments, these events may include, for example, acceleration, deceleration, a change in direction of travel, a collision with an object, an explosion, and/or the presence of a character. The generation and/or presentation of the narrative of the attraction experience may include generating signals to control the content presentation system 116 to generate images and/or sounds corresponding to one or several events in the narrative of the attraction experience.
In some embodiments, the experience engine 504 may receive information indicative of the presence of one or several non-ride devices 112 in the passenger vehicle 108 and/or may receive information indicative of one or several attributes of one or several non-ride devices in the passenger vehicle 108. Experience engine 504 may modify all or a portion of the narration based on the received information. In some embodiments, the modification may include generating and/or selecting one or several trigger signals to cause a desired action of one or several non-ride devices 112 to facilitate the narrative. In some embodiments, for example, the characters in the narrative may interact with non-riding devices, and trigger signals may be generated to cause the non-riding devices to react to these interactions.
The input module 506 may be in communication with the controls 109 of the simulated vehicle 108 to receive electrical signals corresponding to user inputs provided via the controls 109. Input module 506 can output information related to user input to the experience engine 504. In some embodiments, based on the received user input, experience engine 504 may identify the response of the simulated vehicle 108 to the user input and/or the direct or indirect impact of the user input on the simulated vehicle 108. By way of example, a direct impact may occur when the user input indicates a turn of the simulated vehicle 108, while an indirect impact may occur when the user input causes an explosion of an item within the game/simulation portion of the attraction experience, the explosion creating a shockwave that propels the simulated vehicle 108. The experience engine may further track the progress of the passenger and/or the simulated vehicle 108 through the attraction experience.
In some embodiments, and based on input received from input module 506, the skill level of one or several passengers may be determined. In some embodiments, if it is determined that the passenger's skill level is below a threshold level, the experience engine 504 may generate one or several trigger signals associated with the event that cause the non-ride device 112 to provide assistance to the passenger. In some embodiments, for example, the trigger signal may cause the non-ride device to take action to provide one or several inputs to the passenger vehicle 108 to cause a desired effect or action in the virtual world, and in some embodiments, the trigger signal may cause the non-ride device to take action to provide guidance to the passenger to provide one or several inputs to the passenger vehicle 108 to cause a desired effect or action in the virtual world.
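A hedged sketch of this assistance logic follows: a passenger's skill is estimated from recent inputs and, below a threshold, a trigger asking the non-riding device to help is emitted. The scoring rule, threshold, and hint text are assumptions for illustration.

```python
# Sketch of skill-based assistance triggering.
def skill_level(inputs, expected):
    """Fraction of expected inputs the passenger actually provided."""
    hits = sum(1 for i in expected if i in inputs)
    return hits / len(expected) if expected else 1.0

def assistance_trigger(inputs, expected, threshold=0.5):
    if skill_level(inputs, expected) < threshold:
        return {"type": "trigger", "action": "hint",
                "message": "press the blue button now"}
    return None

print(assistance_trigger(["steer_left"], ["steer_left", "fire", "boost"]))
```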
Referring now to FIG. 6, a flow diagram is shown illustrating one embodiment of a process 600 for attraction experience delivery. The process 600 may be performed by all or part of the system 100 and/or the module 500 of fig. 5. The process 600 may begin at block 602, where one or more passengers are loaded into the simulated vehicle 108 and/or enter the simulated vehicle 108. In some embodiments, simulated vehicle 108 may be a location from which a performance, which in some embodiments is not part of an amusement ride, may be viewed. At block 604, the presence of one or several non-riding devices 112 in the simulated vehicle 108 is detected. In some embodiments, the detection may be performed during the loading of block 602 or after the loading of block 602. In some embodiments, step 604 may be performed independently of block 602.
At block 606, one or more attributes of each of the one or more non-riding devices detected in the passenger vehicle 108 may be determined. In some embodiments, this determination may be performed via communication with the non-riding device 112, and in particular by receiving data from the non-riding device characterizing the one or several attributes of the non-riding device. In some embodiments, the data may include experience data of the non-riding device 112 and/or may be derived from experience data of the non-riding device 112. The experience data may characterize one or several experiences of the non-ride device 112, and in some embodiments, the experience data may be location-dependent and/or time-dependent.
At block 608, a narration is generated. In some embodiments, this may include generating and/or retrieving content for presentation to the passengers of the passenger vehicle 108. As shown in block 610, the narration may be modified based on the presence of one or several non-ride devices in the passenger vehicle 108 and/or based on properties of one, some, or all of the non-ride devices 112 in the passenger vehicle 108. In some embodiments, the narration may be modified based on a combination of non-ride devices 112 present in the passenger vehicle 108. In such embodiments, the modification may be specific to the combination of non-ride devices 112 present in the passenger vehicle 108. In some embodiments, the modification of the narrative may include inputting information into the experience engine 504 indicating the presence of one or several non-ride devices 112 and inputting information into the experience engine 504 identifying at least one attribute of each of the one or several non-ride devices 112. Experience engine 504 may generate a modified narrative based at least in part on these received inputs. In some embodiments, modifying the narrative may result in modifying the sight experience of the passenger in the passenger vehicle 108.
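The sketch below gives one purely illustrative way the experience engine could splice extra events into the narrative based on which non-riding devices are present, as described in block 610. The event names and device attributes are hypothetical.

```python
# Sketch of narrative modification based on detected non-riding devices.
def modify_narrative(base_events, devices):
    events = list(base_events)
    if any(d["attributes"].get("pirate_fan") for d in devices):
        events.insert(1, "pirate_captain_greets_toys")   # added scene
    if len(devices) >= 2:
        events.append("toys_take_a_bow")                 # group finale
    return events

base = ["depart_dock", "storm_at_sea", "return_home"]
devices = [{"id": "toy-1", "attributes": {"pirate_fan": True}},
           {"id": "toy-2", "attributes": {}}]
print(modify_narrative(base, devices))
```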
At block 612, the narrative is delivered. In some embodiments, this may include controlling the delivery of the virtual portion of the attraction experience via the content presentation system 116. In some embodiments, the virtual portion of the attraction experience may be perceived from a passenger location 111 within the passenger vehicle 108. At decision step 614, a determination is made whether the narrative is complete. If it is determined that the narrative is not complete, process 600 may return to block 608 and generation, modification, and/or delivery of the narrative may continue. If it is determined that the narrative is complete, which may correspond to the end of the attraction, process 600 proceeds to block 624, where the non-ride device database 106-D is updated. In some embodiments, this may include deleting information related to the presence of one, some, or all of the non-ride devices in the passenger vehicle 108.
At block 616, one or several trigger events may be generated. In some embodiments, step 616 may be performed simultaneously with and/or after any of steps 608-612. In some embodiments, a triggering event is an event having an associated and/or desired action and/or reaction of one or several non-ride devices 112 in the passenger vehicle 108.
At block 618, one or several trigger signals are generated. In some embodiments, these trigger signals may be generated in association with the trigger event identified in block 616. In some embodiments, the step of block 618 may include retrieving one or several trigger signals from the trigger database 106-C. These trigger signals may be generated and/or selected based at least in part on attributes of the non-ride device 112.
In some embodiments, some or all of the identified trigger signals may indicate that one or more of the non-ride devices 112 in the passenger vehicle 108 are reacting to the event in the narrative. In some embodiments, the event may include the presence of a character, which may be a virtual character, or in other words, a character that is in a virtual world and depicted via the content presentation system 116.
In some embodiments, the trigger signal instructs the non-riding device 112 to provide assistance to the owner of the non-riding device 112. In some embodiments, providing assistance to the owner of the non-riding device 112 may include: determining at least one desired action of an owner of the non-riding device 112; and providing information to facilitate at least one desired action via the non-riding device 112. In some embodiments, the trigger signal indicates a particular reaction by the non-riding device 112, and in some embodiments, the trigger signal indicates one of a class of reactions by the non-riding device 112.
At block 620, a trigger signal is communicated. In some embodiments, the trigger signal may be communicated via the non-ride device communication system 118. At block 622, an action acknowledgement may be received, which may acknowledge receipt of the trigger signal and performance of the action associated with the trigger signal. In some embodiments, the delivery of one or several future trigger signals may be modified based on the receipt of the action confirmation. In some embodiments, for example, if no action acknowledgement is received, no future trigger signal is generated and/or communicated. In some embodiments, receipt of one or several action acknowledgements may result in more trigger signals being generated and communicated.
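A small sketch, under an assumed policy, of how receipt (or absence) of action confirmations could throttle future trigger delivery as described above; the cutoff of two unacknowledged triggers is an illustrative choice, not the patent's.

```python
# Sketch of acknowledgement-gated trigger delivery.
class TriggerDispatcher:
    def __init__(self):
        self.unacknowledged = 0

    def send_trigger(self, trigger, transmit):
        if self.unacknowledged >= 2:        # stop after repeated silence
            return False
        transmit(trigger)
        self.unacknowledged += 1
        return True

    def on_action_confirmation(self):
        self.unacknowledged = 0             # device is responding; keep going

dispatcher = TriggerDispatcher()
dispatcher.send_trigger({"action": "wave"}, transmit=print)
dispatcher.on_action_confirmation()
```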
Referring now to FIG. 7, a flow diagram is shown illustrating one embodiment of a process 700 for interactive game play with a non-riding device 112. Process 700 may be performed by the non-riding device 112 and begins at block 702, where experience data is generated and/or received. In some embodiments, the experience data may be generated by sensors of the non-riding device 112, and/or the experience data may be received by the receiver 306 of the non-riding device 112.
At block 704, the device memory is updated based on the experience data newly received and/or generated in block 702. In some embodiments, the update may be performed by device processor 350. In some embodiments, such updating of the device memory with new experience data may modify future selections of one of a plurality of preprogrammed responses in response to a received trigger signal.
At block 706, the non-riding device 112 communicates with the non-riding device communication system 118. In some embodiments, this may facilitate detection of the presence of the non-riding device 112 in the passenger vehicle 108 and/or may provide information from the non-riding device 112 to the non-riding device communication system 118.
At block 708, a trigger signal is received and identified. In some embodiments, the identification and/or receipt of the trigger signal may include extracting information from the trigger signal. At block 710, a preprogrammed response to the received trigger signal is selected. In some embodiments, this may include selecting, based on the trigger signal, one of a plurality of preprogrammed responses that are responsive to the trigger signal. In some embodiments, selecting one of the plurality of preprogrammed responses may include: identifying a set of potential responses based on the trigger signal; and selecting one of the set of potential responses. In some embodiments, the response may be selected randomly, and in some embodiments, the response may be selected via communication with one or several other non-riding devices 112 in the passenger vehicle 108.
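A brief Python sketch of the random-selection variant follows; the RESPONSE_SETS mapping and the specific trigger and response names are hypothetical examples, not values from the disclosure.

```python
import random

RESPONSE_SETS = {
    # hypothetical mapping from a trigger signal to its set of potential responses
    "character_appears": ["cheer", "gasp", "wave"],
    "villain_approaches": ["tremble", "hide"],
}

def select_response(trigger_signal: str, rng=random) -> str:
    """Identify the set of potential responses for the received trigger signal
    (block 708) and select one of them, here at random (block 710)."""
    potential = RESPONSE_SETS.get(trigger_signal, ["idle"])
    return rng.choice(potential)

print(select_response("character_appears", random.Random(0)))
```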
At block 712, the selected pre-programmed response is executed. In some embodiments, performing one of the plurality of pre-programmed responses comprises: determining execution parameters for executing the pre-programmed response; and executing one of the plurality of preprogrammed responses according to the execution parameter. In some embodiments, the execution parameters may identify a delay, amplitude, and/or duration of the response.
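The use of execution parameters can be sketched as below; the parameter keys (delay_s, amplitude, duration_s) and the speaker and actuator callbacks are assumptions for illustration only.

```python
import time

def execute_response(response: str, params: dict, speaker, actuator) -> None:
    """Execute the selected preprogrammed response (block 712) according to
    execution parameters such as delay, amplitude, and duration."""
    time.sleep(params.get("delay_s", 0.0))   # delay before the response starts
    if response in ("cheer", "gasp"):        # audible responses in this toy example
        speaker(response, volume=params.get("amplitude", 1.0))
    else:                                    # otherwise treat it as a physical response
        actuator(response,
                 strength=params.get("amplitude", 1.0),
                 duration_s=params.get("duration_s", 1.0))

execute_response(
    "cheer",
    {"delay_s": 0.0, "amplitude": 0.8, "duration_s": 1.5},
    speaker=lambda r, volume: print(f"play '{r}' at volume {volume}"),
    actuator=lambda r, strength, duration_s: print(f"move '{r}' for {duration_s}s"),
)
```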
At block 714, an action acknowledgement is generated and provided to the non-riding device communication system 118 to indicate completion of execution of the selected response, and at block 716, the experience data is updated.
It should be understood that the specific steps illustrated in the above figures provide particular methods of presenting images using an immersive content system, according to various embodiments of the present disclosure. Other sequences of steps may also be performed according to alternative embodiments. For example, alternative embodiments of the present disclosure may perform the above steps in a different order. Furthermore, the individual steps shown in the above figures may include multiple sub-steps that may be performed in various orders as appropriate to those steps. In addition, steps may be added or removed depending on the particular application. One of ordinary skill in the art would recognize many variations, modifications, and alternatives.
Each of the methods described herein may be implemented using one or several computer systems. The steps of these methods may be performed automatically by one or more computer systems and/or may be provided through input/output involving a user. For example, a user may provide inputs for the steps in the method, and each of these inputs may be in response to a particular output requesting such input, where the output is generated by a computer system. Further, input may be received from a user, received from another computer system as a data stream, retrieved from a memory location, retrieved over a network, requested from a web service, and so forth. Similarly, output can be provided to a user, provided as a data stream to another computer system, saved in a memory location, sent over a network, and/or provided to a web service, and so forth. Briefly, the steps of the methods described herein may be performed by a computer system and may involve any number of inputs, outputs, and/or requests to and from the computer system that may or may not involve a user. Those steps that do not involve the user can be said to be performed automatically by the computer system without human intervention. Thus, it will be appreciated in view of the present disclosure that the steps of the methods described herein may be altered to include input and output to and from a user, or may be automated by a computer system without human intervention, with any decisions made by a processor. Furthermore, some embodiments of each of the methods described herein may be implemented as a set of instructions stored on a tangible, non-transitory storage medium to form a tangible software product.
Computer system
FIG. 8 shows a block diagram of a computer system 1000, which is an exemplary embodiment of the processor 102 and may be used to implement the methods and processes disclosed herein. FIG. 8 is merely illustrative. Computer system 1000 may include familiar computer components such as one or more data processors or Central Processing Units (CPUs) 1005, one or more graphics processors or Graphics Processing Units (GPUs) 1010, a memory subsystem 1015, a storage subsystem 1020, one or more input/output (I/O) interfaces 1025, a communication interface 1030, and the like. The computer system 1000 may include a system bus 1035 that interconnects the above components and provides functionality such as connectivity and inter-device communication.
One or more data processors or Central Processing Units (CPUs) 1005 execute program code to implement the processes described herein. One or more graphics processors or Graphics Processing Units (GPUs) 1010 execute logic or program code associated with graphics or for providing graphics-specific functionality. Memory subsystem 1015 may store information, for example, through the use of a machine-readable article, an information storage device, or a computer-readable storage medium. Storage subsystem 1020 may also store information using a machine-readable article, information storage device, or computer-readable storage medium. Storage subsystem 1020 may store information using storage media 1045, which may be any desired storage media.
One or more input/output (I/O) interfaces 1025 may perform I/O operations. One or more input devices 1050 and/or one or more output devices 1055 may be communicatively coupled to the one or more I/O interfaces 1025. The one or more input devices 1050 may receive information from one or more sources for computer system 1000, and the one or more output devices 1055 may output information to one or more destinations for computer system 1000. The one or more output devices 1055 may allow a user of computer system 1000 to view objects, icons, text, user interface widgets, or other user interface elements.
The communication interface 1030 may perform communication operations including transmitting and receiving data. The communication interface 1030 may be coupled to a communication network/external bus 1060, such as a computer network or a USB hub. The computer system may include multiple identical components or subsystems connected together, for example, through communication interface 1030 or through an internal interface.
The computer system 1000 may also include one or more applications (e.g., software components or functions) that are executed by the processor to perform, run, or otherwise implement the techniques disclosed herein. These applications may be embodied as data and program code 1040. Such applications may also be encoded and transmitted using carrier wave signals adapted for transmission via wired, optical, and/or wireless networks (compliant with various protocols, including the internet).
The foregoing description of the exemplary embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated.

Claims (37)

1. A device for interactive game play, the device comprising:
a body portion;
a transmitter embedded in the body portion to transmit a self-describing signal for the device;
a receiver configured to receive a trigger signal, the trigger signal being transmitted in response to the self-describing signal;
a memory storing the trigger signal and a plurality of preprogrammed responses that are responsive to the trigger signal, wherein the preprogrammed responses include one or more of: an audible response and a physical response, the physical response being any one of moving an actuator, powering a light, and activating a motor;
a speaker to transmit the audible response; and
a processor in communication with the memory and the receiver, the processor configured to process a plurality of instructions to:
identifying the trigger signal;
selecting one of the plurality of preprogrammed responses that are responsive to the trigger signal; and
performing the selected one of the plurality of preprogrammed responses, wherein performing comprises at least one of: transmitting the audible response and performing the physical response.
2. The device of claim 1, wherein the memory comprises writable persistent storage configured to store experience data and the plurality of preprogrammed responses, wherein the plurality of preprogrammed responses characterize behavior of the device.
3. The device of claim 2, wherein the processor is configured to update the memory with new experience data, wherein updating the memory with new experience data modifies future selections of one of a plurality of preprogrammed responses in response to the received trigger signal.
4. The device of claim 2, further comprising at least one sensor configured to generate experience data.
5. The device of claim 4, wherein the experience data is associated with a location and a time.
6. The device of claim 2, wherein the receiver is configured to receive new experience data, and wherein the processor is configured to update the memory with the new experience data, wherein updating the memory with the new experience data modifies a future selection of one of a plurality of preprogrammed responses in response to the received trigger signal.
7. The device of claim 6, wherein one of the plurality of preprogrammed responses is selected based on the trigger signal.
8. The device of claim 7, wherein selecting one of the plurality of preprogrammed responses comprises: identifying a set of potential responses based on the trigger signal; and selecting one of the set of potential responses.
9. The device of claim 8, wherein one of the set of potential responses is selected randomly.
10. The device of claim 8, wherein performing one of the plurality of preprogrammed responses comprises: determining execution parameters for executing the preprogrammed response; and executing one of the plurality of preprogrammed responses according to the execution parameters.
11. The device of claim 10, wherein the execution parameters comprise at least one of delay, amplitude, and duration.
12. A system for delivery of narration, the system comprising:
a passenger vehicle including a plurality of passenger locations;
a content presentation system configured to present a virtual portion of the narration viewable from the plurality of passenger locations;
a transceiver configured to transmit a trigger signal; and
at least one processor configured to:
controlling delivery of the virtual portion of the narration via the content presentation system;
detecting a presence of a non-riding device; and
communicating the trigger signal to the non-riding device via the transceiver, the trigger signal being associated with the narration.
13. The system of claim 12, wherein the at least one processor is further configured to determine an attribute of the non-riding device.
14. The system of claim 13, wherein the attribute of the non-riding device is based on experience data.
15. The system of claim 14, wherein the experience data is associated with a location and a time.
16. The system of claim 15, wherein the trigger signal is based at least in part on the attribute of the non-riding device.
17. The system of claim 16, wherein the at least one processor is further configured to modify the narration based on at least one of a presence of the non-riding device and experience data of the non-riding device.
18. The system of claim 17, wherein the trigger signal indicates that the non-riding device is reacting to an event in the narration.
19. The system of claim 18, wherein the event in the narration comprises the presence of a character.
20. The system of claim 19, wherein the trigger signal instructs the non-riding device to provide assistance to an owner of the non-riding device.
21. The system of claim 20, wherein providing assistance to the owner of the non-riding device comprises: determining at least one desired action of an owner of the non-riding device; and providing information via the non-riding device to facilitate the at least one desired action.
22. The system of claim 21, wherein the trigger signal indicates a particular reaction by the non-riding device.
23. The system of claim 21, wherein the trigger signal is indicative of one of a class of reactions by the non-riding device.
24. A method for communicating a narration, the method comprising:
controlling delivery of a virtual portion of the narration via a content presentation system, the virtual portion of the narration being perceptible from a passenger location within a passenger vehicle;
detecting a presence of a non-riding device in the passenger vehicle; and
communicating a trigger signal to the non-riding device, the trigger signal indicating an action of the non-riding device, wherein the trigger signal is associated with the narration.
25. The method of claim 24, further comprising determining an attribute of the non-riding device.
26. The method of claim 25, wherein determining the attribute of the non-riding device comprises receiving data from the non-riding device.
27. The method of claim 26, wherein the attribute of the non-riding device is based on experience data.
28. The method of claim 27, wherein the experience data characterizes an experience of the non-riding device and is location-dependent and time-dependent.
29. The method of claim 28, wherein the trigger signal is based at least in part on the attribute of the non-riding device.
30. The method of claim 29, further comprising modifying the narration based on at least one of the presence of the non-riding device and experience data of the non-riding device.
31. The method of claim 30, wherein the trigger signal indicates that the non-riding device is reacting to an event in the narration.
32. The method of claim 31, wherein the event in the narration comprises the presence of a character.
33. The method of claim 32, wherein the persona comprises a virtual persona.
34. The method of claim 33, wherein the trigger signal instructs the non-riding device to provide assistance to an owner of the non-riding device.
35. The method of claim 34, wherein providing assistance to the owner of the non-riding device comprises: determining at least one desired action of an owner of the non-riding device; and providing information via the non-riding device to facilitate the at least one desired action.
36. The method of claim 35, wherein the trigger signal indicates a particular reaction by the non-riding device.
37. The method of claim 35, wherein the trigger signal is indicative of one of a class of reactions by the non-riding device.
HK42021025997.4A 2019-05-23 2021-02-20 Interactive toy HK40035901B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16/421,223 2019-05-23

Publications (2)

Publication Number Publication Date
HK40035901A (en) 2021-05-21
HK40035901B (en) 2025-10-10

Similar Documents

Publication Publication Date Title
US9914057B2 (en) Immersive storytelling environment
US10105594B2 (en) Wearable garments recognition and integration with an interactive gaming system
US9753540B2 (en) Systems and methods for haptic remote control gaming
KR102292522B1 (en) Controlling physical toys using a physics engine
US9836984B2 (en) Storytelling environment: story and playgroup creation
US11195339B2 (en) Augmented reality system
EP3697511A1 (en) Interactive play apparatus
CN111973998B (en) Interactive Toys
HK40035901A (en) Interactive toy
JP2022098651A (en) Information processing system and program
CN111246923B (en) play equipment
WO2025003309A1 (en) Interactive play apparatus