WO2018067600A1 - Video tree system for interactive multimedia reproduction, simulation, and playback - Google Patents
Video tree system for interactive multimedia reproduction, simulation, and playback
- Publication number
- WO2018067600A1 (PCT/US2017/054984)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- application
- video
- state machine
- machine controller
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/74—Browsing; Visualisation therefor
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/45—Controlling the progress of the video game
- A63F13/47—Controlling the progress of the video game involving branching, e.g. choosing one of several possible scenarios at a given point in time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8541—Content authoring involving branching, e.g. to different story endings
Definitions
- the replicating entity produces a screen recording of a user interacting with the application.
- the screen recording can be replayed and streamed over the web.
- the entity can augment the video by editing it.
- Application sampling is also used for testing new features of an application before making the application available in an application store such as "Google Play" or the "Apple iOS App Store." Both leading application stores require approval before application feature updates can be made available to app store customers. That is, in order for application developers to test new functionality of an application (by exposure to actual users), that new functionality first has to be approved. As such, the timeline for testing the functionality with users is unduly prolonged by the application store approval process. Many application developers do not feel that the process is sufficient to meet the market demands of producing new application content for users. There are few public-facing alternatives for testing new application features and content against a sample user group.
- the system and method for application replication described herein centers on the need for the rapid recreation of applications for uses in testing, simulation, sampling, marketing, and feature optimization.
- the technology allows for reproduction (such as simulation and playback) by third parties of interactive media, such as an application program or inline video advertisement, with a high degree of accuracy, without the delivery of the original application's source code.
- the underlying method for reproduction centers on a branching approach to recording applications (such as digital video formats), and the stitching of the sampled digital video using a taxonomical branching of user journeys.
- the technology simplifies the process of providing accurate, highly efficacious samples, and reproductions of applications.
- the sample experiences are provided via a scripting and configuration file linked to a plurality of video branches.
- the video branches are stitched together to mimic the look and feel of the original application.
- the technology eliminates the need for developers to re-write a program to create a viewer and user experience identical to the native application.
- the technology further comprises a user-launched application engine that: interacts with the host operating system to present one or more user interface (UI) elements; runs a presentation loop responsible for executing the tasks assigned to it by a state machine controller; and accepts user input from the user interface.
- FIG. 1 shows an exemplary process as described herein.
- FIG. 2 shows a second aspect of an exemplary process as described herein.
- FIG. 3 shows an aspect of a process for creating a video-tree, as further described herein.
- "Application store" and "App Store" refer to an online store for purchasing and downloading software applications and mobile apps for computers and mobile devices.
- Application refers to software that allows a user to perform one or more specific tasks.
- Applications for desktop or laptop computers are sometimes called desktop applications, while those for mobile devices such as mobile phones and tablets are called apps or mobile apps.
- When a user opens an application, it runs inside the computer's operating system until the user closes it. Apps may remain continually active, however, such as running in the background while the device is switched on.
- the term application may be used herein generically, which is to say that - in context - it will be apparent that the term is being used to refer to both applications and apps.
- Streaming refers to the process of delivering media content that is constantly received by and presented to an end-user while being delivered by a provider.
- a client end-user can use their media player to begin to play the data file (such as a digital file of a movie or song) before the entire file has been transmitted.
- Video Sampling refers to the act of appropriating a portion of preexisting digital video and reusing it to recreate a new video.
- “Feedback loop” refers to a process in which outputs of a system are routed back as inputs as part of a chain of cause-and-effect that forms a circuit or loop. The system can then be said to feed back into itself.
- the term feedback loop refers to the process of collecting end-user data as an input, analyzing that data, and making changes to the application to improve the overall user experience. The output is the set of improved user experience features.
- “Native Application” refers to an application program that has been developed for use on a particular platform or device.
- “Creative Concept Script” refers to the written embodiment of a creative concept.
- a creative concept is an overarching theme that captures audience interest, influences their emotional response and inspires them to take action. It is a unifying theme that can be used across all application messages, calls to action, communication channels and audiences.
- Computer Logic refers to the use of logic to perform or reason about computation, or "logic programming.”
- a program written in a logic programming language is a set of sentences in logical form, expressing facts and rules within a specified domain.
- Application Content Branches refer to the experiential content of a software application, organized into a branch-like taxonomy representative of the end-user experience.
- "High Branching Factor" refers to the existence of a high volume of possible branches.
- the branches contain a plurality of variances, each ordered within a defined hierarchical structure.
- Genetic Algorithm refers to an artificial intelligence programming technique wherein computer programs are encoded as a set of genes that are then modified (evolved) using an evolutionary algorithm. It is an automated method for creating a working computer program from a high-level problem statement. Genetic programming starts from a high-level statement of "what needs to be done” and automatically creates a computer program to solve the problem. The result is a computer program able to perform well in a predefined task.
- a “journey” or “user journey” refers to a script, such as for a video game that has a detailed theme.
- the journey tracks potential positions the user can be in, as defined by an environment, as well as particular avatars that the user may be associating with.
- the term can be used to describe which parts of the simulated environment give the most accurate simulation and can thereby produce a simulated script. It can also be used to describe a particular sequence of positions that a particular user has taken.
- A/B testing is a term used for randomized experimentation with a control performing against one or more variants.
- WYSIWYG (What You See Is What You Get) Editor
- WYSIWYG implies the ability to directly manipulate the layout of a document without having to type or remember names of layout commands.
- WYSIWYG refers to an interface that provides video editing capabilities, wherein the video can be played back and viewed with the edits. The video editing with playback occurs within an editor mode.
- An exemplary process is shown in FIGs. 1 and 2, and is suitable for being executed on a computing apparatus having a memory, a processor, and input and output devices, as further described herein.
- FIG. 1 represents an overview of a process for reproducing a segment of interactive media as further described herein.
- the system acquires a configuration file 1000 from a remote server or local source.
- the configuration file contains definitions of an item of interactive media that is to be reproduced.
- the configuration file is parsed 1010 and used as instructions to acquire other video, audio, image or font files (collectively, "assets") that represent the interactive media.
- the configuration file is parsed 1011 and used to configure a state machine controller that involves a method of making a video-tree, as described further herein.
- the state machine drives the user experience, when reproducing the interactive media in question.
- the state machine is responsible for handling changes in the presentation of the interactive media, such as prompting audio or video files to begin playback, showing or hiding user interface elements, enabling and disabling touch responsiveness.
- Parsing the configuration file 1012 is also used to create the various operating- system specific user interface elements (video players, image views, labels, touch detection areas, etc.) for display and interaction.
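The three parsing steps above (1010, 1011, 1012) can be sketched as a single pass over the configuration file. The schema below is purely illustrative; the source does not define a concrete file format, so every field name here is an assumption:

```python
import json

# Hypothetical configuration schema -- the source does not specify one.
CONFIG_TEXT = """
{
  "assets": [{"id": "intro", "type": "video", "url": "https://example.com/intro.mp4"}],
  "states": [{"id": 22, "on_tap": {"view": "swing_area", "goto": 23}}],
  "views":  [{"id": "swing_area", "kind": "tap", "rect": [0, 300, 320, 200]}]
}
"""

def parse_configuration(text):
    """Split the configuration into the three parts described above:
    assets to acquire (1010), state machine setup (1011), and
    operating-system-specific UI elements (1012)."""
    config = json.loads(text)
    assets = {a["id"]: a for a in config["assets"]}   # step 1010
    states = {s["id"]: s for s in config["states"]}   # step 1011
    views = {v["id"]: v for v in config["views"]}     # step 1012
    return assets, states, views

assets, states, views = parse_configuration(CONFIG_TEXT)
print(list(assets), list(states), list(views))
```

In practice the asset entries would drive downloads of the video, audio, image, and font files, while the state and view entries would configure the state machine controller and the native UI elements, respectively.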
- the program is typically run within an operating system, or in the case of playable advertising, the interactive media can be run during use of a host application such as a web-browser, or an app on a mobile device.
- the user takes an action that initiates playback of the segment of interactive media 1020.
- the program may also be driven by an application engine, such as by batch processing, or by utilizing a cloud computing paradigm.
- the program launches the replicated segment of interactive media 1030 and interacts with the host operating system to present the user interface elements created in 1012, based on the state machine controller 1011.
- the program may include a loop 1040 in which the segment of interactive media is presented multiple times in succession or in multiple different ways according to user input.
- the program is responsible for executing various tasks assigned to it by the state machine controller 1011, dependent on accepting user input from the user interface 1012.
- the segment of interactive media is launched 1030, instead of being played once, there are multiple events under a user's direction that occur, including possibly the execution of the interactive media more than once.
- the state machine controller responds to the user.
- the state machine controller is able to transition to a new state 1050 based on its configuration. It may do so in response to user input, presentation events such as a video file completing playback, or internally configured events such as timed actions.
- Each presentation has a defined end state, usually triggered by a user interaction, such as with a 'close' button 1060.
- the presentation loop will allow the state machine to submit its commands to an application engine. On transitioning between states, new video segments may be played, user interface elements may be shown or hidden, touch responsiveness may be activated or deactivated, etc.
- the state runtime loop 1080 controls the playback of an individual node on the video tree. This is a self-contained unit that presents one meaningful part of the experience, e.g., in a video baseball game, "batter runs to first after hitting ball”.
- the user may interact with the presentation 1090. If the user performs a valid interaction, the user interface will capture the event and submit it to the state machine controller for processing.
- the state machine controller changes states based on what the user is doing; for example, it may interpret a user interaction and choose to transition to a new state, simulating the user's interaction with the original product.
- the state machine controller can have its own configured events 1100. Typically these events are timed, such as displaying a "help" window if the user is inactive for a period of time.
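The controller behavior described in steps 1040 through 1100 can be sketched as follows. All state names, events, and the idle timeout are illustrative assumptions, not taken from the source:

```python
import time

class StateMachineController:
    """Minimal sketch of the state machine controller described above.
    Transitions fire on user input, presentation events (e.g. a video
    finishing playback), or internally configured timed events such as
    showing a 'help' window after a period of inactivity."""

    def __init__(self, transitions, idle_timeout=30.0):
        self.transitions = transitions        # (state, event) -> next state
        self.state = "start"
        self.idle_timeout = idle_timeout
        self.last_input_at = time.monotonic()
        self.help_visible = False

    def handle_event(self, event):
        """Process a captured user interaction or presentation event."""
        self.last_input_at = time.monotonic()
        key = (self.state, event)
        if key in self.transitions:           # valid interaction: transition
            self.state = self.transitions[key]
        return self.state

    def tick(self, now=None):
        """Called each pass of the presentation loop to run timed events."""
        now = time.monotonic() if now is None else now
        if now - self.last_input_at > self.idle_timeout:
            self.help_visible = True          # internally configured event

# Illustrative transition table for the baseball example in the text.
transitions = {
    ("start", "tap_play"): "pitch",
    ("pitch", "swipe_up"): "batter_runs",     # "batter runs to first"
    ("batter_runs", "video_done"): "pitch",   # loop back for the next pitch
    ("pitch", "tap_close"): "end",            # defined end state (1060)
}
sm = StateMachineController(transitions)
sm.handle_event("tap_play")
sm.handle_event("swipe_up")
print(sm.state)  # batter_runs
```

On each transition the engine would then play the new state's video segment, show or hide views, and toggle touch responsiveness, as described for the presentation loop.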
- the digital video sampling process that lies behind steps 1010, 1011, and 1012 of FIG. 1 includes a consumer device screen recording process (such as acquiring the configuration file), creative concept scripting, a screen recording footage splitting process, a video tree branching process, computational logic scripting, and distribution.
- An exemplary process for video-tree creation is as set forth in FIG. 3.
- the method includes recording 300 in whole or in part of a segment of an interactive media source such as an application program or playable advertisement.
- the end-to-end recording is performed using screen recording software, such as QuickTime Player, ActivePresenter, CamStudio, Snagit, Webinaria, Ashampoo Snap, and the like.
- a single end-to-end recording of the desired application sample is all that is required for the remaining processing in order to replicate an application experience.
- multiple recordings may be taken, especially if a tree with a high branching factor is being created.
- the recording can occur on any consumer computing device such as a desktop computer, mobile handset, or a tablet.
- a creative concept is scripted 310 that outlines the application features contained in the screen recording.
- the creative concept script provides an outline of the user journey captured in the one or more screen recordings.
- the creative concept outlines core concepts of the application. For instance, if the application is a game, the concept will outline the game's emphasis, player goals, flow, script and animation sequences. Storyboarding techniques such as those using a digital flow diagram are utilized to organize and identify the application's configuration and user journey.
- the user selects a baseball team (e.g. , the New York Yankees);
- the user swipes the device screen to engage the player to swing at a pitch.
- the screen recordings are split into a variety of branches 320, referred to herein as a video tree.
- Each segment of the creative concept represents, and correlates with, a piece of the screen recording, and is a unique branch of the application video tree.
- the video is segmented into a plurality of branches to mirror all possible user interactions.
- Video editing software is used to split the screen recording into micro-segments.
- a game is segmented into a variety of micro-segments, some segments as short as 0.6 seconds. A segment can conceivably be as short as 0.03 seconds, so that the recording becomes a short sequence of effectively still images.
- Micro-segments can range in length from 0.01 s to 3 min, such as 0.1 s to 1 min, or 0.5 s to 30 s, or 1 s to 15 s, where it is understood that any lower or upper endpoint of the foregoing ranges may be matched with any other lower or upper endpoint.
- Each creative concept branch is paired to the video representation (screen recording) 330 of the interactive media that corresponds to that branch.
- a baseball game can contain hundreds of possible branches, each branch representing a portion of a game played by a user captured in the video recording.
- Each branch has the possibility of containing a plurality of sub-branches, each sub-branch organized as a possible portion of a user journey that has not yet been travelled, and associated video file.
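The branch and sub-branch organization described above can be modeled as a simple tree whose nodes pair a creative-concept step with its video micro-segment. The field names are assumptions for illustration, not taken from the source:

```python
from dataclasses import dataclass, field

@dataclass
class VideoTreeNode:
    """One branch of the video tree: a micro-segment of the screen
    recording paired with the creative-concept step it represents."""
    concept: str                                   # creative-concept description
    video_file: str                                # associated micro-segment
    children: list = field(default_factory=list)   # possible sub-branches

    def add_branch(self, child):
        self.children.append(child)
        return child

    def count_branches(self):
        """Total branches in this subtree, including this node."""
        return 1 + sum(c.count_branches() for c in self.children)

# A tiny slice of the baseball example: each sub-branch is a possible
# portion of a user journey that has not yet been travelled.
root = VideoTreeNode("select team", "select_team.mp4")
pitch = root.add_branch(VideoTreeNode("pitch thrown", "pitch.mp4"))
pitch.add_branch(VideoTreeNode("swing and hit", "hit.mp4"))
pitch.add_branch(VideoTreeNode("swing and miss", "miss.mp4"))
print(root.count_branches())  # 4
```

A full game tree would contain hundreds of such nodes, one per possible user interaction captured in the recording.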
- an additional program layer is created to automate the production of the video-tree branches.
- an editor such as a WYSIWYG editor 340 is used to automate the creation of computational logic.
- the editor is instructed to download a file containing the storyboard, such as a document containing a flow chart, and programmatically creates the configuration logic for the video-tree.
- the editor programmatically splits the input video into video-tree branches.
- the WYSIWYG editor program is able to analyze the video segments, and distribute the segments into video-tree branches according to the creative concept provided.
- the program integrates user-interaction detection.
- the various video-tree branches can be stitched together 350 so that they loop autonomously, thereby no longer requiring a developer to manually stitch video segments together using video editing software.
- a rules-based system is implemented to execute operation of the state machine controller. Such an approach simplifies the way that the operation is segmented.
- the rules-based system is used to create the video tree.
- Computational logic can be scripted to mirror and perform actions represented in each video tree branch.
- Logic programming is a programming paradigm based on formal logic. A program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about a specified domain.
- In the context of an application, programmatic logic can be written to process the rules of a video game, perform specified functions based on those rules, and respond to the existence of certain criteria.
- the technology described herein provides an alternative by pairing a generated application engine with the application configuration file.
- the generated engine is created using the video-tree branching method described herein, and paired with a downloadable configuration file of the original application.
- Each branch of the application's video-tree correlates to an associated configuration logic 350.
- the logic references specific branches of the application video-tree.
- the resulting logic-based program is able to play back the application and produce an application with the look and feel of the original application because the configuration file of the original application is paired to the generated video-tree engine.
- logic is written as a configuration file containing sections that define different parts of the behavior of the program.
- the sections include resource controls (videos, sounds, fonts and other images), state controls (execution logic), and interface controls (collecting user input).
- Each individual element under each controller has an identifier that allows the controllers to coordinate interactions between each other and their elements, and a pre-determined set of action items it can execute.
- the configuration file is parsed by the engine to enable or disable those interactions as a subset of its full functionality thereby creating the simulated experience.
- This script instructs the user interaction engine to create a view with a defined set of features, such as size, color, and position.
- This view is a tap-detection view, and when the view is active and the user taps on it, the state machine controller will be instructed to transition to state ID #23 if it is in state ID #22.
- the state machine controller may have further commands that it triggers in the engine to present or hide views, play sounds or movies, increase the user's score, and perform other functions as defined in its controller configuration.
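A minimal sketch of such a configuration, using an assumed schema: a tap-detection view with a defined size, color, and position, which instructs the controller to transition from state ID #22 to state ID #23 when tapped while active:

```python
# Hypothetical configuration fragment. The section names mirror the text
# (resource controls, state controls, interface controls); all field
# names are assumptions for illustration.
config = {
    "resources": {"videos": ["swing.mp4"], "sounds": ["crack.wav"]},
    "interface": {
        "views": [{
            "id": "tap_zone_1",              # identifier for coordination
            "kind": "tap",
            "size": [320, 200],
            "position": [0, 300],
            "color": "#00000000",
            "on_tap": {"from_state": 22, "to_state": 23},
        }]
    },
    "states": {22: {"play": "swing.mp4"}, 23: {"play": "crack.wav"}},
}

def on_tap(view, current_state):
    """Return the next state for a tap on `view`, or the unchanged state
    if the tap rule does not apply in the current state."""
    rule = view.get("on_tap", {})
    if rule.get("from_state") == current_state:
        return rule["to_state"]
    return current_state

view = config["interface"]["views"][0]
print(on_tap(view, 22))  # 23: the tap rule fires
print(on_tap(view, 7))   # 7: no rule for this state, nothing changes
```

Because each element carries an identifier, the resource, state, and interface controllers can coordinate: entering state 23 here would, for example, trigger playback of the referenced sound resource.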
- the logic is machine generated.
- a programmatic approach such as machine learning or a genetic algorithm, is utilized to recognize the existence of certain movements and user functions in the video-tree.
- the machine learning program identifies interactions occurring in the video-tree segments and matches those segments to the relevant portion of the configuration script.
- the paired logic is saved with the referenced video-tree segments.
- a machine learning approach is an appropriate technique where data-driven logic is created by inputting the results of user play-throughs into a machine learning program.
- the machine learning program dynamically optimizes the application experience to match what the statistics indicate has been most enjoyable or most successful.
- This allows the video-tree logic to be more adaptive and customized to individual users at time of execution. This in turn allows for dynamic, real-time application scripting, thereby providing a significant improvement over the current application experience, which is static and pre-scripted to a generic user type.
- the completed video and logic files are then made available for download 360 to first parties (the application developer), and third parties (such as advertising agencies, feature testing platforms).
- the process of making the completed application experience available comprises: uploading the completed video-tree segments to a content distribution system, importing the computational logic to a database on a server, and providing access to these resources to the third parties.
- Any client software integrated with the presentation system can acquire these resources and present the end-user with the application reproduction.
- the resources themselves remain under private control and as such do not have to go through any third party (such as App Store) review or approval.
- Importing the computational logic into a database provides the ability to dynamically create variant presentations using server-side logic to customize an experience to a particular user upon request or to otherwise optimize the presentation using previously mentioned machine learning or genetic techniques.
- the video-tree once completed, is available for a variety of derivative uses. These include, but are not limited to: playable advertising; feature testing; live editing of application features based on user preferences; live data collection and storage of user interaction data correlated to the experience segment (unit); creation of a data array of touch events; playback with analytics and data visualizations; and automated A/B testing for performance evaluation.
- the feature testing presentation is solely video based. In this respect, pure video and video editing techniques are used to create parts of the application, or even, if desired, the entire application. Developing and integrating a feature into a game can take considerable effort in terms of 3D modeling, texturing, engine import, asset placement, scripting, state persistence, etc. The game or application will then have to be recompiled, and possibly resubmitted for application store approval.
- the video-tree technology described herein allows for application presentations that are both lightweight to create (any video is ingested as content) and present (there are no requirements for a distributor intermediary, or end-user authorization).
- the system platform is able to keep all application presentations in the most up-to-date condition, including downloading new or replacement resources as the original application changes.
- the system can integrate market data from a third party which provides information about the user, such as their age, gender, location, and language.
- the system has a library of user characteristics paired with a variety of player preferences, such that, for example, an avatar in a game will change genders, age and language based on who the user of the application is.
- Granular data of application user interactions can be used to optimize an application. Understanding, in aggregate and in detail, how users interface with an application can be helpful in making improvements to the application. Doing so within the environment of an application video-tree allows for ease of access to the exact points at which a user performs a specific interaction. Granular analysis within the video-tree segments provides the developer with an organized, exploratory environment in which they can view how the user interacted, how they reacted to the interaction, and whether there were any negative responses to the interaction, such as the user leaving the game or stalling before moving forward in the game. The underlying method allows developers, for the first time, the ability to replay the data against the user segments without having to record actual video of the user playing.
- the user data and the computational logic interact with the video-tree segments to provide accurate replay. Because the system is built with a deterministic configuration for the presentation and recording of all user interaction and engine state change information, the system is able to precisely replay the user's experience including all video and audio by running the state machine controller with the recorded user interaction as input.
- the architecture allows for a "record once, replay many" construction that allows the developer to recreate many user experiences without requiring those users to individually transmit recorded video.
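The "record once, replay many" property can be illustrated with a deterministic state machine: because there is no hidden randomness, feeding the same recorded event sequence through it always reproduces the same state trajectory. This is a simplified sketch, not the actual implementation:

```python
def run_presentation(transitions, events, start="start"):
    """Drive a deterministic state machine with a sequence of recorded
    events and return the resulting state trajectory. Replaying the same
    recorded input reproduces the same trajectory -- and therefore the
    same video and audio -- exactly."""
    state, trajectory = start, [start]
    for event in events:
        state = transitions.get((state, event), state)
        trajectory.append(state)
    return trajectory

# Illustrative transition table and recorded session.
transitions = {("start", "tap"): "pitch", ("pitch", "swipe"): "hit"}
recorded = ["tap", "swipe"]                  # captured from a live session

live = run_presentation(transitions, recorded)
replayed = run_presentation(transitions, recorded)
print(live == replayed)  # True: the replay matches the original session
```

Only the compact event log needs to be stored per user; no per-user video recording is required to reconstruct the experience.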
- the system described herein collects touch data as a data array.
- the array is created from touch events, including touch parameters that define the nature of the touch. These parameters can include data on, for example, how long the user touched the screen, the direction of a swipe on the screen, how many fingers were used, and the location on the screen of the touch.
- the touch data array is then mapped to the video-tree segments, and related application logic.
- the touch data array can be replayed as video to show how the user touched the screen and what motions the touch produced based on the deployed application logic.
- Playback of real-user interactions can be enhanced with data, such as the number of users engaged in a specific interaction with the application.
- Visualizations can also be generated to show the likelihood of certain interactions based on data of past user behavior, such as the likelihood of certain touch interactions.
- a heat map is utilized to show the likelihood of a user swiping a certain direction on a flat screen when reaching a specific point in the video-tree.
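As an illustration of the aggregation behind such a heat map, the sketch below bins recorded swipe directions at one video-tree node into per-direction likelihoods. The event record format is an assumption:

```python
from collections import Counter

# Hypothetical touch-event records: each carries the video-tree node at
# which it occurred plus a touch parameter (here, swipe direction).
touch_events = [
    {"node": "pitch", "direction": "up"},
    {"node": "pitch", "direction": "up"},
    {"node": "pitch", "direction": "left"},
    {"node": "menu",  "direction": "down"},
]

def swipe_likelihoods(events, node):
    """Aggregate swipe directions at one video-tree node into the
    per-direction likelihoods a heat map would visualize."""
    counts = Counter(e["direction"] for e in events if e["node"] == node)
    total = sum(counts.values())
    return {d: n / total for d, n in counts.items()}

print(swipe_likelihoods(touch_events, "pitch"))
```

The same aggregation could first be filtered by user attributes (age range, region, etc.) to produce the per-segment playback analytics described above.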
- the playback and analytics can be filtered for specified criteria so that playback can be produced to represent a specific user type, such as men of a specific age range living in a specific region.
- A/B testing means that an application developer will randomly allow some users to access the control version of the application, and other users will access variants of the application. Doing so today is complicated by the fact that deploying variants of an application is challenging due to application reproduction costs, as well as the application store approval process (described in greater detail above). With the technology described herein, it is possible to apply a novel approach to A/B testing. This novel approach involves the collection of a plurality of user data, and automating the playing of the artificial user data against variants of video-trees.
- the application developer provides a hypothesis of how players might respond to the proposed application variant.
- the system automatically produces data of how users actually interact with the application variants.
- New artificial user data is generated and compared with the control application user data. This allows the developer to analyze how new application features will play out without having to make the new features available to the public via an application store. It allows for the efficient, robust and thorough exploration of a wide variation of features, and the production of new user-interaction data, which ultimately results in an optimized application evaluation process.
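A simplified sketch of the comparison step: artificial session outcomes are generated for a control tree and a variant, and their completion rates are compared. The completion-rate parameter here stands in for the effect of each variant's tree structure; a real system would instead replay generated user events through each variant's state machine:

```python
import random

def simulate_sessions(completion_rate, n, seed=0):
    """Generate artificial user outcomes (completed or not) for one
    video-tree variant. `completion_rate` is an illustrative stand-in
    for how the variant shapes user journeys."""
    rng = random.Random(seed)                 # seeded for reproducibility
    return [rng.random() < completion_rate for _ in range(n)]

control = simulate_sessions(0.40, 1000, seed=1)   # control video-tree
variant = simulate_sessions(0.46, 1000, seed=2)   # proposed variant

def rate(outcomes):
    return sum(outcomes) / len(outcomes)

print(f"control: {rate(control):.3f}  variant: {rate(variant):.3f}")
```

The resulting artificial interaction data can then be compared against the developer's hypothesis without publishing the variant through an application store.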
- the computer functions for carrying out the methods herein can be developed by a programmer, or a team of programmers, skilled in the art.
- the functions can be implemented in a number and variety of programming languages, including, in some cases, mixed-language implementations.
- the technology herein can be developed to run with any of the well-known computer operating systems in use today, as well as others, not listed herein.
- Those operating systems include, but are not limited to: Windows (including variants such as Windows XP, Windows 95, Windows 2000, Windows Vista, Windows 7, Windows 8, Windows Mobile, and Windows 10, and intermediate updates thereof, available from Microsoft Corporation); Apple iOS (including variants such as iOS3, iOS4, iOS5, iOS6, iOS7, iOS8, and iOS9, and intermediate updates thereof);
- Apple Mac operating systems such as OS9 and OS 10.x (including variants known as "Leopard", "Snow Leopard", "Lion", and "Mountain Lion");
- UNIX operating system e.g., Berkeley Standard version
- Linux operating system e.g., available from numerous distributors of free or "open source” software
- Android OS for mobile phones.
- the executable instructions that cause a suitably-programmed computer to execute the methods described herein can be stored and delivered in any suitable computer-readable format.
- Suitable media include: a portable readable drive, such as a large-capacity "hard-drive" or a "pen-drive" that connects to a computer's USB port; an internal drive of a computer; and a CD-ROM or other optical disk.
- While the executable instructions can be stored on a portable computer-readable medium and delivered in such tangible form to a purchaser or user, they can also be downloaded from a remote location to the user's computer, such as via an Internet connection, which itself may rely in part on a wireless technology such as WiFi. Such an aspect of the technology does not imply that the executable instructions take the form of a signal or other non-tangible embodiment.
- the executable instructions may also be executed as part of a "virtual machine" implementation.
- the technology herein is not limited to a particular web browser version or type; it can be envisaged that the technology can be practiced with one or more of: Safari, Internet Explorer, Edge, FireFox, Chrome, or Opera, and any version thereof.
- the methods herein can be carried out on a general-purpose computing apparatus that comprises: at least one data processing unit (CPU); a memory, which will typically include both high-speed random access memory and non-volatile memory (such as one or more magnetic disk drives); a user interface; one or more disks; and at least one network or other communication interface for communicating with other computers over a network, including the Internet, as well as with other devices, such as via a high-speed networking cable or a wireless connection. There may optionally be a firewall between the computer and the Internet. At least the CPU, memory, user interface, disk, and network interface communicate with one another via at least one communication bus.
- Computer memory stores procedures and data, typically including some or all of: an operating system for providing basic system services; one or more application programs, such as a parser routine and a compiler; a file system; one or more databases if desired; and optionally a floating-point coprocessor where necessary for carrying out high-level computations.
- Computer memory is encoded with instructions for receiving input from one or more users and for replicating application programs for playback. Instructions further include programmed instructions for implementing one or more of video tree representations, an internal state machine, and running a presentation. In some embodiments, the various aspects are not carried out on a single computer but are performed on different computers and, e.g., transferred via a network interface from one computer to another.
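The video tree representation and internal state machine mentioned above can be illustrated with a minimal sketch. All names here (`VideoNode`, `StateMachineController`, the clip file names, and the action labels) are hypothetical illustrations, not identifiers from the patent: the idea is that each recorded video segment is a tree node, and each user input selects the matching child branch.

```python
# Hypothetical sketch of a video tree walked by a state-machine controller.
# Names and file paths are illustrative only.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class VideoNode:
    """One recorded video segment, with child branches keyed by user action."""
    clip: str                                   # path to the recorded clip
    branches: Dict[str, "VideoNode"] = field(default_factory=dict)

class StateMachineController:
    """Walks the video tree: each user input advances to the matching branch,
    mimicking the look and feel of the original application."""
    def __init__(self, root: VideoNode):
        self.current = root

    def play(self) -> str:
        return self.current.clip                # clip handed to a video player

    def on_input(self, action: str) -> bool:
        nxt = self.current.branches.get(action)
        if nxt is None:
            return False                        # input with no recorded branch
        self.current = nxt
        return True

# Example: a two-level tree recorded from an application walkthrough
root = VideoNode("home.mp4", {
    "tap_play":     VideoNode("level1.mp4"),
    "tap_settings": VideoNode("settings.mp4"),
})
ctrl = StateMachineController(root)
ctrl.on_input("tap_play")
print(ctrl.play())  # prints level1.mp4
```

Unrecognized inputs simply leave the controller at the current node, which is one plausible way such a replica could stay robust to user actions that were never recorded.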
- Various implementations of the technology herein (FIG. 1) can be contemplated, particularly as performed on computing apparatuses of varying complexity, including, without limitation, workstations, PCs, laptops, notebooks, tablets, netbooks, and other mobile computing devices, including cell phones, mobile phones, wearable devices, and personal digital assistants.
- the computing devices can have suitably configured processors, including, without limitation, graphics processors, vector processors, and math coprocessors, for running software that carries out the methods herein.
- certain computing functions are typically distributed across more than one computer so that, for example, one computer accepts input and instructions, and a second or additional computers receive the instructions via a network connection and carry out the processing at a remote location, and optionally communicate results or output back to the first computer.
- Control of the computing apparatuses can be via a user interface, which may comprise a display, mouse, keyboard, and/or other items, such as a track-pad, track-ball, touch-screen, stylus, speech-recognition, gesture-recognition technology, or other input such as based on a user's eye-movement, or any subcombination or combination of inputs thereof.
- implementations are configured that permit a replicator of an application program to access a computer remotely, over a network connection, and to view the replicated program via an interface.
- the computing apparatus can be configured to restrict user access, such as by requiring a QR-code scan, gesture recognition, biometric data input, or password input.
- the manner of operation of the technology, when reduced to an embodiment as one or more software modules, functions, or subroutines, can be in a batch mode, operating on a stored database of application source code processed in batches, or by interaction with a user who inputs specific instructions for a single application program.
- the results of application simulation can be displayed in tangible form, such as on one or more computer displays, such as a monitor, laptop display, or the screen of a tablet, notebook, netbook, or cellular phone.
- the results can further be printed to paper form, stored as electronic files in a format for saving on a computer-readable medium or for transferring or sharing between computers, or projected onto a screen of an auditorium such as during a presentation.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
A system and method for application recreation, centered on the need for rapid recreation of applications and playable media for use in testing, simulation, sampling, marketing, and feature optimization. The use of a video-tree system enables the reproduction, simulation, and playback of applications by first and third parties with a high degree of accuracy. The underlying reproduction method is based on a branching approach to recording applications (i.e., into digital video formats) and on meshing the sampled digital video using a taxonomic branching of user paths. The technology simplifies the process of delivering accurate and highly efficient reproductions and samples of applications. Sample experiences are delivered via a script and configuration file linked to a plurality of video branches. The video branches are assembled in a method to mimic the look and feel of the original application.
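The abstract describes sample experiences delivered via a script and configuration file linked to a plurality of video branches. The patent does not publish such a file's schema, but a hypothetical configuration of this kind might map node identifiers to clips and user-path branches, for example:

```json
{
  "entry": "home",
  "nodes": {
    "home": {
      "clip": "home.mp4",
      "branches": { "tap_play": "level1", "tap_settings": "settings" }
    },
    "level1":   { "clip": "level1.mp4",   "branches": {} },
    "settings": { "clip": "settings.mp4", "branches": {} }
  }
}
```

All keys and file names above are illustrative assumptions; a player would start at the `entry` node and follow `branches` entries as the user interacts, reproducing the original application's navigation paths.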
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201662403638P | 2016-10-03 | 2016-10-03 | |
| US62/403,638 | 2016-10-03 | ||
| US15/614,425 US20180097974A1 (en) | 2016-10-03 | 2017-06-05 | Video-tree system for interactive media reproduction, simulation, and playback |
| US15/614,425 | 2017-06-05 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018067600A1 (fr) | 2018-04-12 |
Family
ID=61831537
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2017/054984 Ceased WO2018067600A1 (fr) | 2016-10-03 | 2017-10-03 | Système d'arborescence vidéo pour reproduction, simulation et lecture multimédia interactives |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2018067600A1 (fr) |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080091721A1 (en) * | 2006-10-13 | 2008-04-17 | Motorola, Inc. | Method and system for generating a play tree for selecting and playing media content |
| US20080220854A1 (en) * | 2007-03-08 | 2008-09-11 | Timothy Michael Midgley | Method and apparatus for collecting user game play data and crediting users in an online gaming environment |
| US20090258708A1 (en) * | 2008-04-11 | 2009-10-15 | Figueroa Dan O | Game movie maker |
| US20130190096A1 (en) * | 2012-01-19 | 2013-07-25 | Eyal Ronen | Usable Ghosting Features in a Team-Based Game |
| US20140082532A1 (en) * | 2012-09-14 | 2014-03-20 | Tangentix Limited | Method and Apparatus for Delivery of Interactive Multimedia Content Over a Network |
| US20140179427A1 (en) * | 2012-12-21 | 2014-06-26 | Sony Computer Entertainment America Llc | Generation of a mult-part mini-game for cloud-gaming based on recorded gameplay |
| US20140256420A1 (en) * | 2013-03-11 | 2014-09-11 | Microsoft Corporation | Univied game preview |
| US20160220909A1 (en) * | 2013-02-13 | 2016-08-04 | Unity Technologies Finland Oy | System and method for managing game-playing experiences |
- 2017-10-03 WO PCT/US2017/054984 patent/WO2018067600A1/fr not_active Ceased
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20190087081A1 (en) | Interactive media reproduction, simulation, and playback | |
| US20190034213A1 (en) | Application reproduction in an application store environment | |
| Webb et al. | Beginning kinect programming with the microsoft kinect SDK | |
| Pongnumkul et al. | Pause-and-play: automatically linking screencast video tutorials with applications | |
| US11030796B2 (en) | Interfaces and techniques to retarget 2D screencast videos into 3D tutorials in virtual reality | |
| US9933924B2 (en) | Method, system and user interface for creating and displaying of presentations | |
| US20090083710A1 (en) | Systems and methods for creating, collaborating, and presenting software demonstrations, and methods of marketing of the same | |
| Rahman | Beginning Microsoft Kinect for Windows SDK 2.0: Motion and Depth Sensing for Natural User Interfaces | |
| US20180124453A1 (en) | Dynamic graphic visualizer for application metrics | |
| WO2022057722A1 (fr) | Procédé, système et appareil d'essai de programme, dispositif et support | |
| Adão et al. | A rapid prototyping tool to produce 360 video-based immersive experiences enhanced with virtual/multimedia elements | |
| US20200162798A1 (en) | Video integration using video indexing | |
| US11179644B2 (en) | Videogame telemetry data and game asset tracker for session recordings | |
| Rau et al. | Pattern-based augmented reality authoring using different degrees of immersion: A learning nugget approach | |
| Márquez et al. | Libgdx Cross-platform Game Development Cookbook | |
| JP7176806B1 (ja) | プログラム学習装置 | |
| DiGiano et al. | Integrating learning supports into the design of visual programming systems | |
| WO2018067600A1 (fr) | Système d'arborescence vidéo pour reproduction, simulation et lecture multimédia interactives | |
| CN119278479A (zh) | 将来自发布视频的编辑模型应用于第二视频的计算系统 | |
| Doran | Unity 2020 Mobile Game Development: Discover practical techniques and examples to create and deliver engaging games for Android and iOS | |
| Welinske | Developing user assistance for mobile apps | |
| WO2018085455A1 (fr) | Dispositif de visualisation graphique dynamique pour des métriques d'application | |
| US20070226706A1 (en) | Method and system for generating multiple path application simulations | |
| Badger | Scratch 1.4 | |
| WO2021084101A1 (fr) | Systèmes et procédés de remplacement d'un thème dans un environnement virtuel |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17859041 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC, EPO FORM 1205A DATED 10.07.19 |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 17859041 Country of ref document: EP Kind code of ref document: A1 |