
AU2012203179A1 - Presentation content management and creation systems and methods - Google Patents

Presentation content management and creation systems and methods

Info

Publication number
AU2012203179A1
Authority
AU
Australia
Prior art keywords
media
components
controller
presentation
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2012203179A
Inventor
David Chodyra
William James Horton
Giles Kingsley Newton
Richard Frank Skelly
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Direct TV Pty Ltd
Original Assignee
Direct TV Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2006272454A
Application filed by Direct TV Pty Ltd
Priority to AU2012203179A
Publication of AU2012203179A1
Legal status: Abandoned (current)


Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

PRESENTATION CONTENT MANAGEMENT SYSTEM AND METHOD

A presentation content management and creation system (10) comprises a database (30) of sorted media components coupled to be in communication with a controller (50) for scheduling and rendering media components selected from the database into a real time media presentation. At least one output device (14) is coupled to be in communication with the controller for outputting the real time media presentation, and the controller renders the selected media components as the real time presentation is being communicated to the at least one output device.

Description

P/00/011 Regulation 3.2
AUSTRALIA
Patents Act 1990
ORIGINAL COMPLETE SPECIFICATION STANDARD PATENT
Invention Title: "PRESENTATION CONTENT MANAGEMENT AND CREATION SYSTEMS AND METHODS"
The following statement is a full description of this invention, including the best method of performing it known to me/us:

TITLE
PRESENTATION CONTENT MANAGEMENT AND CREATION SYSTEMS AND METHODS

FIELD OF THE INVENTION
The present invention relates to presentation content management and creation systems and methods.

BACKGROUND TO THE INVENTION
Presentations such as advertising are a ubiquitous feature of modern life, and efforts are continually being made to devise improved methods of effective presentation and in particular advertising. One commonplace form of advertising found in, for example, retail outlets, trade shows and the like comprises a display, such as a CRT or LCD screen, coupled to a computer terminal or playback device, such as a VCR or DVD player, which displays images and plays audio, typically to promote products and/or services.

A more sophisticated system is disclosed in United States Patent Application Publication No. US 2003/0191688 in the name of Prince et al. The disclosed system, method and storage device comprise a commercial display services application having a user interface that allows users to select and program advertising content from databases of diverse media formats such as audio-video advertising content, static advertising content and audio-clip content. This system therefore allows users to tailor the content of the advertising to particular customers.

However, one drawback of both of the aforementioned advertising systems is that the advertisements are pre-produced, they are presented in a fixed series or sequence and are continuously repeated, for example throughout the day in a looped arrangement. Research has demonstrated that repeated exposure to the same advertisements can result in potential customers "tuning out" the advertisements. Additionally, employees are exposed to the repeated advertisements for hours, days and even weeks, which provides for an undesirable work environment. Although employees can look away from the display, the audio is usually unavoidable, which can result in the volume being reduced by employees, thus deteriorating the effectiveness of the advertising on the potential customers. The negative effect on the employees can also be transferred to the potential customers, which can impact negatively on sales.

Another system for delivering advertising content and other information is disclosed in WO 00/057308 assigned to Frankel and Company. Template multimedia presentations are assembled at a central location for a plurality of remote sites. The template multimedia presentations are transmitted to the remote sites over a wide area network, the internet or the like, and are stored on players at their respective sites. The players automatically access an enterprise database to retrieve data useful for modification of the template multimedia presentation into a site-specific multimedia presentation, preferably at predetermined intervals. The result is a site-specific multimedia presentation incorporating changing enterprise data. Whilst this system provides improved efficiency in the distribution and presentation of advertisements, flexibility is limited because the site-specific multimedia presentations can only be modifications of the template multimedia presentation.
US 6,526,411 in the name of Ward discloses a system and method for creating dynamic play lists that allow for the dynamic addition and subtraction of play list items. The system and method take into consideration user preferences, user behaviour and the availability of new content. The system maintains a database of linkages between elements associated with content items as well as weighted linkages between elements and respective properties. When a new item is inserted into the database, the new item shares preference weights and a number of preferences associated with items pre-existing in the database. Whilst this system and method enable users to experience new items that correlate with the specified user preferences, or other bases for framing an initial input list, that otherwise might not have been considered, the system and method only deal with such factors when a player of the system is presenting pre-produced and deployed content. Consequently, the play lists disclosed in this patent are only dynamic in the sense that new, discrete items of pre-produced content can be inserted in the play list.

US 2002/0138641 also discloses the concept of the dynamic play list and has the objective of a system for a media producer to dynamically string media clips together while reducing or eliminating delays between media clips. A system and method are disclosed in which a dummy play list is created that causes a media player to request media clips from a proxy server. The proxy server dynamically determines where to redirect the requests, resulting in the dynamic arrangement of the sequence of media clips to be played. Therefore, the benefits of this system and method are also limited because they can only deal with how such choices could be made dynamically when the player is presenting pre-produced and deployed content. Furthermore, this system and method are directed exclusively to streamed media content and a variety of streaming media players.

Similarly, a system for electronically distributing, displaying and controlling advertising and other communicative media disclosed in WO 01/078273 is also limited to only varying a schedule of discrete, pre-produced items of content. WO 01/078273 discloses a need to vary the content and its sequencing after it has been deployed. Media content to be displayed according to a schedule, together with dynamic data to be displayed according to another overlying schedule, are mixed in a scheduler according to logs of user preferences and monitored, formatted and loaded for display in a scene renderer.

WO 01/050401 discloses a system and method for distributing and controlling the output of media in public spaces and discloses the concept of the dynamic play list, the introduction of local content and the addition of further content relevant to the consumer. It defines the output of related media to multiple devices as synchronization or synchronized delivery. A transient state variable interface module is disclosed that receives data reflecting transient conditions relevant to the public space. A logic controller module then dynamically selects between available media based at least in part on the state of the transient state variables. This document also has the disadvantage of being limited to varying pre-produced content.
Hence, current presentation solutions rely heavily on pre-produced media components, which limits the flexibility and control of such solutions because significant changes to the media components are not possible. In the case of, for example, a video file, all of the control over the components typically exists when the media is being created in a program such as Adobe® Premiere®. Therefore, when this media is later played back, the control over each component that made up the video file is gone. Another disadvantage is that pre-produced media components, such as video files, tend to be large and therefore take longer to distribute. The large file size does not allow distribution of the media to be prompt if such distribution needs to be done across a network, such as the Internet.

Hence, there is a need for a system, method and/or apparatus to address or at least ameliorate one or more of the aforementioned problems of the prior art or provide a useful commercial alternative.

In this specification, the terms "comprises", "comprising", "including" or similar terms are intended to mean a non-exclusive inclusion, such that a method, system or apparatus that comprises a list of elements does not include those elements solely, but may well include other elements not listed.

SUMMARY OF THE INVENTION
In one form, although it need not be the only or indeed the broadest form, the invention resides in a presentation content management and creation system comprising:
a database of sorted media components;
a controller coupled to be in communication with the database for scheduling and rendering media components selected from the database into a real time media presentation;
at least one output device coupled to be in communication with the controller for outputting the real time media presentation;
wherein the controller renders the selected media components as the real time presentation is being communicated to the at least one output device.

Suitably, the system further comprises an administrator module coupled to be in communication with the database and the controller.

The database, the controller and the administrator module may be coupled to be in communication in a store control unit.
Preferably, the media components selected from the database include at least one static media component and/or at least one dynamic media component. The dynamic media component may be selected when a change in the real time presentation is required.

Preferably, at least one attribute of at least one of the dynamic media components is determined by the controller. Examples of attributes include, but are not limited to: colour, opacity, position, size, duration, volume, layer order, text size, text style, blend level transparency or combinations thereof.

The system may further comprise a customer demographic database coupled to be in communication with a user interface and the database of sorted media components. The user interface may also function as the at least one output device. In response to one or more selections made by a user via the user interface, the real time media presentation is communicated to the at least one output device.

The one or more selections made by the user may include selecting whether or not advertisements are to be included in the real time media presentation. If advertisements are to be included in the real time media presentation, the advertisements are selected by the controller on the basis of data relating to the user stored in the customer demographic database. Suitably, the advertisements are selected from an advertisement database coupled to be in communication with the controller.

In one embodiment, the media components scheduled and/or rendered by the controller are determined at least partially in response to signals detected by one or more of the following devices coupled to be in communication with the controller: an image capturing device, a motion sensor, a sensitive/voice activated screen.

In another form, the invention resides in a controller for a presentation content management and creation system, said controller comprising:
a scheduler module for selecting media components from a database of sorted media components and creating a play-list of scheduled media components; and
a renderer module for rendering the scheduled media components into a real time media presentation as the real time presentation is being communicated to at least one output device coupled to be in communication with the controller.

The scheduler module may randomly select media components from the database of sorted media components via a list of media components stored in the controller. Suitably, the media components are sorted at least by a media category or subcategory required in the presentation.

Preferably, the scheduler and the renderer module separate the scheduled media components into dynamic components and static components, and the renderer module combines the static components and the dynamic components in the real-time presentation. Suitably, the dynamic components, if required, are selected according to one or more identifying parameters specified for the dynamic components.

Preferably, the renderer module reselects at least one of the dynamic components when a change in the real-time presentation is required. The renderer module may change the presentation of a media component due to an internal input and/or an external input.
In a further form, the invention resides in a method of creating a presentation including:
selecting media components from a database of sorted media components;
creating a play-list of scheduled media components; and
rendering the scheduled media components into a real time media presentation as the real time presentation is being communicated to at least one output device.

The method may further include separating the media components constituting the scheduled media into dynamic components and static components. The method may further include changing at least one of the dynamic components when a change in the real time media presentation is required. Changing at least one of the dynamic components may include:
determining a type and at least one parameter of the at least one dynamic component that requires changing; and
selecting a replacement component from at least one component list according to the parameters.

Preferably, the method further includes combining the static components and the dynamic components in the real-time media presentation. The method may further include recording details of the media components for auditing purposes once displayed in the real time media presentation.

Further features of the present invention will become apparent from the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS
By way of example only, preferred embodiments of the invention will be described more fully hereinafter with reference to the accompanying drawings, wherein:
FIG 1 is a schematic representation of a presentation content management system according to an embodiment of the present invention;
FIG 2 is a schematic representation of operations of a controller for the presentation content management system shown in FIG 1;
FIG 3 is a flowchart showing the steps performed by a scheduler module of the controller;
FIG 4 is a flowchart showing the steps performed by a renderer module of the controller;
FIG 5 is a schematic representation of a system for a first application of the present invention;
FIG 6 is a schematic representation of the first application of the present invention; and
FIG 7 is a schematic representation of a second application of the present invention.

DETAILED DESCRIPTION OF THE INVENTION
Referring to FIG 1, there is provided a presentation content management system 10 according to an embodiment of the present invention comprising a store control unit (SCU) 12 coupled to be in communication with one or more visual and audio output devices 14. The output devices 14 can be, for example, a plasma screen 16, a projector 18 and screen 20, a CRT 22, a plurality of CRTs 24 coupled to an RF unit 26 and/or an LCD screen 28, or other forms of visual and audio displays 29.

The SCU 12 comprises a database 30 of media components 32, such as audio 34, video 36, images 38 and text data 40, as well as surface data 42, schedules 44 and administrative data 46. Database 30 is coupled to be in communication with an administrator module 48, which is coupled to be in communication with a controller 50. A user 52 can interact with the SCU 12 via the administrator module 48 via a user interface device which is linked to the administrator module 48 via the remote control module 54 and/or a point-of-sale (POS) terminal 56. The SCU 12 provides audio content 55 and video content 57 to the output devices 14. The audio content 55 can utilise third generation audio coding (AC3) from Dolby Laboratories delivered via 5.1 channel or stereo.
The video content 57 can be presented in anamorphic resolution using DVI, VGA, COMP, HDMI or RF communications.

With reference to FIG 2, the controller 50 comprises a scheduler module 58 coupled to be in communication with a renderer module 60. The scheduler module 58 generates a play list 62 of media to be presented over a predetermined time period and the renderer module 60 presents the media from the play list.

Media is defined herein as a collection of one or more components that can be static (predetermined) 61 or dynamic (selected during run-time) 63. Media can be the actual media to be presented, such as an audio video interleave (.avi) file, or media can be a description of one or more components 32 to be presented. Each media description contains a category, a subcategory and a time duration/length. A component can be anything that is applied or presented by the system 10. Examples of components 32 are audio, graphics, video, text and two- and/or three-dimensional objects. A dynamic component 63 has a list of parameters, each of which contains one or more criteria that allow it to be, or prevent it from being, selected at run-time by the scheduler module 58. Such parameters can be a time/date range, a genre, an audience classification and so on.

The controller 50 maintains a list of media in a media pool 64. Media listed in the media pool can be filtered by category and/or subcategory, which is integral to the scheduling process. The controller 50 also maintains one or more lists of components 65, grouped by component type. For example, there may be an audio component list 66 and a video component list 68, each of which can be filtered by genre, audience classification, appropriate time of day or night to run and so on.

As the scheduler module 58 generates the play-list 62, the dynamic components 63 of each media are selected. The dynamic components 63 are varied according to a set of required parameters that the media describes. Such required parameters may be, for example, the location of the system, the time the media is scheduled to play and so on. The required parameters allow the scheduler module 58 to select an appropriate component from the component lists 65 for the dynamic component 63 in the media. An example of this can be media which contains a dynamic component 63 that is a piece of audio to be played during the media. This piece of audio could vary according to when the media was scheduled to play; the audio desired during the day, for example, can differ from the audio desired at night. As well as scheduling dynamic components 63, the scheduler module 58 can vary the presentation of media caused by an input. An example of this is varying the volume of audio and video media components during a busier part of the day when the ambient volume is typically higher.

Once media has been scheduled it is known as scheduled media 70. Once the scheduler module 58 has generated a final play-list 62, the renderer module 60 takes over and begins presenting the scheduled media 70. As the scheduled media is played, it is known as real-time media 72. Once presented 73, details of the media presented are recorded for auditing and billing purposes 75.
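By way of illustration only, the media and component model just described might be represented as in the following minimal Python sketch. The class names, field names and the example component attributes are the author's assumptions for clarity and are not taken from the specification.

```python
# Minimal, illustrative sketch of the media/component model described above.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class StaticComponent:
    """A predetermined component, e.g. a fixed image, text field or audio clip."""
    name: str
    kind: str                      # "audio", "video", "image", "text", "3d"


@dataclass
class DynamicComponent:
    """A run-time slot; 'required' holds the criteria used to pick a concrete
    component from the component lists (time/date range, genre, audience, ...)."""
    kind: str
    required: dict = field(default_factory=dict)


@dataclass
class Media:
    """A media description: category, subcategory, duration, plus its static
    and dynamic components."""
    category: str
    subcategory: str
    duration_s: float
    static: List[StaticComponent] = field(default_factory=list)
    dynamic: List[DynamicComponent] = field(default_factory=list)


def pick_component(component_list: List[dict], required: dict) -> Optional[dict]:
    """Return the first listed component satisfying every required parameter."""
    for candidate in component_list:
        if all(candidate.get(k) == v for k, v in required.items()):
            return candidate
    return None


# Example: the day/night audio selection mentioned above (illustrative data).
audio_list = [
    {"name": "upbeat.mp3", "kind": "audio", "daypart": "day"},
    {"name": "lounge.mp3", "kind": "audio", "daypart": "night"},
]
slot = DynamicComponent(kind="audio", required={"daypart": "night"})
print(pick_component(audio_list, slot.required))    # -> the "lounge.mp3" entry
```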
Once scheduled media is taken from the final play-list, it can be dynamically adjusted or modified by the renderer module 60 in response to an internal input 74 and/or an external input 76. Internal inputs 74 are within the system 10, such as time and date inputs. For example, if media is played later than expected, the media can be adjusted to suit the new parameters. External inputs are external to the system 10, such as the user interface device, examples of which include a touch screen, an audio/visual sensor or an RFID scanner. Such internal and/or external inputs can also affect how media or the schedule is presented. An example of this may be a user triggering a sensor that increases the volume of the media or causes different media to be loaded and presented. The scheduled media can be dynamically adjusted up to 30 times per second within a time line of the presentation to provide an unprecedented level of flexibility in media presentation.
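A compressed sketch of how such input-driven adjustment might be applied on a fixed tick follows. The 30 updates per second figure comes from the passage above; the input names, the 70 dB threshold and the volume levels are illustrative assumptions only.

```python
# Illustrative only: apply input-driven adjustments up to 30 times per second.
import time


def adjustment_loop(media_state, read_inputs, stop):
    """read_inputs() returns a dict such as {'ambient_db': 72,
    'sensor_triggered': False}; media_state is mutated in place."""
    while not stop():
        inputs = read_inputs()
        if inputs.get("ambient_db", 0) > 70:            # busier part of the day:
            media_state["volume"] = 90                  # raise audio above the noise
        else:
            media_state["volume"] = 60
        if inputs.get("sensor_triggered"):              # e.g. a customer trips a sensor:
            media_state["requested_media"] = "promo"    # load and present different media
        time.sleep(1 / 30)                              # up to 30 adjustments per second
```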
The scheduling process performed by the scheduler module 58 will now be described in more detail with reference to the flowchart in FIG 3. In step 100, the scheduling process determines the total amount of time available. The scheduler module 58 takes the difference between any predetermined time/date and the current time/date as the total run time.

In step 110, the details of a category are read and the available schedule time is divided into a user-specified number of categories. Each category is given a weighting (a percentage, with the weightings totalling 100%), which determines how much of the total time that category receives in the presentation. A category run time is calculated by applying the percentage weight to the total run time, as represented by step 120. The category weight is added to a running total to ensure the total does not exceed 100%.

Within each category, one or more user-specified subcategories are chosen to distribute the time share of each category. Each subcategory is read in step 130 and a run time for each subcategory is calculated in step 140. The rules of each subcategory, applied to the category run time, determine the amount of time allocated to each subcategory.

In step 150, the media pool 64, which is the list of all media in the system 10, is sorted or filtered by category and subcategory to generate a subcategory list for the relevant subcategory, as represented by step 160. A new subcategory list will be generated for each subcategory.

In step 170, media is randomly selected from the subcategory list. The media within the subcategory list is randomly selected to fulfil the time share of each subcategory as evenly as possible, to ensure one piece of media is not played a disproportionate amount of the time or the majority of the time. The randomly selected media from the subcategory list are added to a subcategory media list, as represented by step 180.

With reference to step 190, if more time is available to be filled for that subcategory, further media are picked from the subcategory list. No more time is available for further media of a particular category when the subcategory media list has reached its subcategory run-time. According to one embodiment, the subcategory run-time is reached when the total length of all media in the subcategory media list is greater than the subcategory run time less 30 seconds and less than the subcategory run time plus 120 seconds. Rules are applied to the randomly chosen media to ensure one piece of media is not chosen predominantly over any other.

If no more time is available, with reference to step 200, and further subcategories are required, steps 130-190 are repeated. If no more subcategories are required, an enquiry is made whether further categories are required in step 210. If so, steps 110-200 are repeated. If not, the subcategory media lists are combined into an initial media list, as represented by step 220.

With reference to step 230, an empty final play-list is created to store all the final media clips. The final play-list is a schedule of the media that must be played and the times at which it must be played. Therefore, the first media to be inserted into the final play-list will have a time-to-play (TTP) that equals the time the scheduler module 58 began scheduling. The second media will have a TTP of when the scheduler began to schedule plus the length of the first media, and so on. As each media is inserted into the final play-list, the TTP of the next media to be played is determined by adding the TTP and length of the current media.

With reference to step 240, the final play-list is filled by randomly picking media from the initial list, which contains the appropriate amount of media for each subcategory. Various repeat rules can be applied at this time.
One such rule can be that, as media is randomly chosen from the initial list for the final play-list, a check is made to ensure this media has not already been scheduled among the previous three media scheduled to play, as represented by step 250. If the media appears among any of the previous three media, with reference to step 260, the media is reinserted into the initial list and in step 240 media is randomly chosen again from the initial list.

With reference to step 270, once media has been selected for insertion into the final play-list at a proposed time to play, the media is checked for dynamic components. A dynamic component is a part of the media that is variable and determined at run time. It is defined by one or more required parameters. These parameters give criteria for selecting a component to insert into the media. Such parameters may include, but are not limited to, the proposed time to play, the location of the system 10 or the output devices 14 thereof, the date a schedule is being generated and a genre. With reference to step 280, if the media contains one or more dynamic components, the scheduler module 58 will determine the type and the required parameters of each dynamic component and, with reference to step 290, using the component lists 65 shown in FIG 2, the scheduler module 58 will pick one or more appropriate components to insert.

In addition to selecting components dynamically at run-time, the scheduler module 58 can also control the application/presentation of components based on different parameters. Therefore, the presentation of media can differ due to, for example, being presented at different times of the day. Such parameters can include, but are not limited to, the proposed time to play (TTP), the location of the system 10 or the output devices 14 thereof, and the date a schedule is being generated. This is performed dynamically at run-time and can be applied to all media within the system 10.

Once media has been inserted into the final play-list, as represented by step 300 in FIG 3, a check is made against a forced play-list, as represented by step 310. The forced play-list contains a list of media which is scheduled to run at an exact time. A check is made against the forced play-list after media is inserted into the final play-list to ensure that the media in the forced play-list are played as close to the specified time as possible. If media in the forced play-list is due to be played at the current time, the media is removed from the forced play-list, as represented by step 320, and is inserted in the final play-list, as represented by step 330. If media in the forced play-list is not due to be played at the current time, the method of the scheduler module 58 proceeds to step 340.

With reference to step 340, if more media remains in the initial list, more media is randomly picked from the initial list in step 240. If not, once all checks have been made and all required media are inserted into the final play-list, the final play-list is complete, as represented by step 350, and, with reference to FIG 2, it becomes known as scheduled media 70. This simply means that this media has passed the scheduler module 58 and has been given a time-to-play.
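By way of illustration only, the scheduling pass of FIG 3 might be sketched as follows. The category weighting, the -30 s/+120 s window, the TTP arithmetic and the "not within the previous three" repeat rule follow the description above; the function names, the resolve_dynamic hook and the data layout are assumptions made for the sketch, not part of the specification.

```python
# Illustrative sketch of the FIG 3 scheduling pass (steps 100-350).
import random


def allocate(total_run_time_s, categories, media_pool):
    """Steps 110-220: split the total run time by category weight and
    subcategory share, then randomly fill each share from the media pool.
    categories: {name: {"weight": pct, "subcategories": {sub: pct}}}
    media_pool: list of dicts with 'name', 'category', 'subcategory', 'duration_s'."""
    initial_list = []
    for cat, spec in categories.items():
        cat_time = total_run_time_s * spec["weight"] / 100.0               # step 120
        for sub, share in spec["subcategories"].items():
            sub_time = cat_time * share / 100.0                            # step 140
            candidates = [m for m in media_pool
                          if m["category"] == cat and m["subcategory"] == sub]  # steps 150-160
            filled, picks = 0.0, []
            while candidates and filled < sub_time - 30:                   # steps 170-190
                m = random.choice(candidates)
                if filled + m["duration_s"] > sub_time + 120:              # keep inside the
                    break                                                  # -30 s/+120 s window
                picks.append(m)
                filled += m["duration_s"]
            initial_list.extend(picks)                                     # step 220
    return initial_list


def build_final_playlist(initial_list, forced_playlist, start_time_s, resolve_dynamic):
    """Steps 230-350: give each media a time-to-play (TTP), apply the
    'not within the previous three' repeat rule, resolve dynamic components
    and honour the forced play-list (assumed sorted by its fixed 'ttp')."""
    final, recent, ttp = [], [], start_time_s
    pool = list(initial_list)
    while pool:
        media = random.choice(pool)                                        # step 240
        if media["name"] in recent[-3:] and any(
                m["name"] not in recent[-3:] for m in pool):
            continue                                                       # steps 250-260: re-pick
        pool.remove(media)
        components = resolve_dynamic(media, ttp)                           # steps 270-290
        final.append({**media, "components": components, "ttp": ttp})      # step 300
        recent.append(media["name"])
        ttp += media["duration_s"]                                         # next TTP = TTP + length
        while forced_playlist and forced_playlist[0]["ttp"] <= ttp:        # steps 310-330
            forced = forced_playlist.pop(0)
            final.append({**forced, "ttp": ttp})
            ttp += forced["duration_s"]
    return final                                                           # step 350: scheduled media
```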
With reference to FIG 2, the rendering process presents scheduled media 70 from the final play-list 62. Once scheduled media 70 is taken from the final play-list 62, it is known as real-time media 72. To present the media, the renderer module 60 first separates all the individual components and each component is prepared individually for presentation. As scheduled media is presented, the renderer module 60 has the opportunity to alter the presentation of components due to one or more internal inputs 74 and/or one or more external inputs 76, as described above. After each real-time media 72 is presented, the next is taken from the final play-list 62.

The rendering process will now be described in more detail with reference to the flowchart in FIG 4. With reference to step 400, the first step in the rendering process is to begin a timer. This timer allows the renderer module 60 to keep track of the effects and components that must be processed. Once a timer is in place, the media can be split up into its individual components, as represented by step 410. Referring to step 430, to determine if there are any changes necessary to any dynamic components in the media, a check is made against all the internal inputs, such as date and time, as represented by step 420. If the current time is significantly different from the time-to-play (TTP) of the scheduled media, the renderer module 60 can make the necessary modifications. This is done by first identifying the dynamic components within the media, as represented by step 440, and the type and required parameters of the dynamic components, step 450. Once the type and required parameters are determined, an appropriate replacement component can be selected from the component lists 65, as represented by step 460. Once this step is complete, the media becomes known as real-time media 72.

At this stage, with reference to step 470, a check is made to determine if any input has been made that would modify the media that is currently playing. This input could be in the form of a button being pressed by a user on a panel to play a particular media. If this occurs, the current real-time media is paused and the selected media is located, as represented by step 480. The selected media is loaded and begins to play, as represented by step 490. Once this media has run completely (unless interrupted by an internal or external input), the scheduled real-time media is resumed.

Effects, transitions and modifications to components are applied individually. With reference to step 500, if there are components to be presented, the first step in presenting a component is to apply the scheduled or default appearance to the component, as represented by step 510. Next, with reference to step 520, all external inputs are checked to determine if any modification to the appearance of the component is necessary, step 530. An example where this may be the case is when a noise cancelling audio sensor determines that the noise level in a location has risen to a certain level and amplification of a particular component is necessary. If necessary, the changes to the presentation are applied, step 540. Finally, any required transitions are applied to the component before it is presented, as represented by step 550. Such a transition may be a fade between two components.

With reference to step 560, each component is presented one after another and, with reference to step 570, the timer is updated to reflect the new time until no more components are left to be presented. A check is made at step 580 to ensure the media has not played through its pre-determined duration. If the duration of the media has not been reached, step 470 is revisited to check for any input that would provoke a change to the media currently playing, and the process continues until the duration of the media is reached. When the duration of the media is reached, with reference to step 590, the next scheduled media is selected from the final play-list 62 and the process begins again.
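Again by way of illustration only, the render loop of FIG 4 might be sketched as below. The presentation, input-sensing and replacement-selection machinery is outside the text of the specification, so those appear as callables supplied by the caller, and the 60-second TTP tolerance is an assumption; the pause-and-substitute branch of steps 470-490 is omitted for brevity.

```python
# Illustrative sketch of the FIG 4 render loop (steps 400-590).
import time


def render(media, now_s, component_lists, check_inputs, present, select_replacement):
    """media: one scheduled-media entry from the final play-list;
    now_s: current time on the same clock as the TTPs."""
    start = time.monotonic()                                   # step 400: begin the timer
    components = list(media["components"])                     # step 410: split the media
    if abs(now_s - media["ttp"]) > 60:                         # steps 420-430: internal inputs
        components = [select_replacement(c, component_lists)   # steps 440-460: re-resolve the
                      if c.get("dynamic") else c               # dynamic components
                      for c in components]
    elapsed = 0.0
    while elapsed < media["duration_s"]:                       # step 580: duration check
        inputs = check_inputs()                                # steps 470/520: button, noise
        for c in components:                                   # sensor, RFID scanner, ...
            appearance = dict(c.get("appearance", {}))         # step 510: scheduled/default look
            appearance.update(inputs.get(c["kind"], {}))       # steps 530-540: e.g. raise volume
            present(c, appearance)                             # steps 550-560: transition + output
        elapsed = time.monotonic() - start                     # step 570: update the timer
        time.sleep(1 / 30)                                     # up to 30 adjustments per second
    # Step 590: the caller then takes the next scheduled media from the play-list.
```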
The truly dynamic nature of the combined scheduling and rendering system of the present invention is evident in its application as a powerful training aid. Training material can be driven at will by a presenter/operator, bringing to the screen at any time the required content. Functions available include pause, rewind, replay, skip, fast forward, etc. Clearly, the present invention provides a highly flexible system and method of advertising content management and presentation that enables a wide range of organisations to promote advertising material in a large variety of ways in many different environments and scenarios.

Another application of the present invention is referred to as a Virtual Sales Person application, which enables targeted advertising and messaging as a direct result of the application of dynamic control being applied to the components of the media during the scheduling process and the rendering process described above.

With reference to FIG 5, in addition to the store control unit (SCU) 12 and the visual and audio output devices 14, the system comprises a customer interface which, in one embodiment, includes an image capture device such as a video camera 80, and/or a motion sensor, such as a passive infra-red (PIR) motion detector 82, and/or a sensitive/voice activated screen 84, coupled to be in communication with the SCU 12 and, according to one embodiment, coupled to be in communication with the controller 50.

The media and the components to be used in the media are selected and controlled dynamically by various events including, but not limited to, motion detection, sound detection, sound level via noise cancelling, any user interface, time of day, run time, date and location. All attributes of components are controlled dynamically including, but not limited to, size, position, transparency level, colour, volume and opacity. The components are accessed from the store control unit 12 when instructed by the scheduler module 58 and/or the renderer module 60. The instructions can be in part or wholly a result of the play list 62 or any dynamically generated request at run-time.

An example of the virtual sales person is shown in FIG 6, in which the sequence of events progresses along the time line 90 from left to right. With reference to the "no events" section, when there are no customers in the vicinity of the video camera 80, motion detector 82 and/or sensitive/voice activated screen 84, in one embodiment the images are visible and the audio is at 100%. In this embodiment, an audio video interleave (.avi) file is employed, but alternatives can be used. In another embodiment, in the absence of customers being detected, neither the images nor the audio will be active, or one or the other can be active if desired.

When a customer enters, for example, a store ("customer enters"), the .avi images are visible and the audio is at 100%. The live feed relays images captured by the video camera 80, for example of the customer, and includes the images of the customer in the presentation.
There is a video cross fade for a period of, for example, 5 seconds and the live feed is visible, but the audio for the live feed is not audible. Next, as depicted further along the time line 90 in the section "avatar appears", the avatar (an animated 3D component) is made visible and its associated audio level is set at 70%. The live feed settings remain the same, but the .avi images are no longer visible and the associated audio is cross faded over 5 seconds to the 30% level in this embodiment.

Where the customer interacts with the system ("customer interacts"), via any of the customer interface elements, such as the motion sensor 82 or video camera 80, the live feed and .avi settings remain the same, but the audio associated with the avatar is dropped to 0% and the product being advertised is made visible with its associated audio level elevated to 70% to attract and engage the customer. Where the customer remains ("customer remains"), as detected by the motion sensor 82 and/or video camera 80, the product logo is made visible along with associated text, such as a ticker displaying the price, product features, a discount, bonuses, freebies or the like. Where the customer leaves the store or moves on to another part of the store ("customer leaves"), the logo, ticker, product images and audio, avatar data and live feed data are no longer visible or audible and the original images and audio are displayed.

Another application of the present invention is "entertainment on demand", such as "video on demand". The purpose of this application of the controller 50, scheduler module 58 and renderer module 60 of the system 10 is to download and view and/or listen to entertainment content. With reference to FIG 7, the system 600 comprises entertainment content 605 sourced from entertainment content providers, an entertainment content data list 610, a customer demographic database 620, a web based user interface 630 coupled to be in communication with the database 620, and communication and delivery via a cable/high speed internet connection 640 from a cable provider or internet service provider (ISP) coupled to be in communication with a user (audience) interface device 650. In FIG 7, user interface device 650 is depicted as a personal computer (PC) including a visual/audio display. However, it should be appreciated that in other embodiments, user interface device 650 can also be a laptop computer, personal digital assistant (PDA) or other communication device, such as a mobile telephone. In other embodiments, user interface device 650 can also be one or more of the aforementioned output devices 14, such as a screen coupled to a set top box, hard drive or the like that enables a user to make selections and view content.

The user (audience) selects from the entertainment content data list 610 via the user interface 650. The selection of entertainment is combined with demographic data from database 620 and matched to components from an advertisement database 660, depending on an advertising option selected by the user. If the 'No' option 670 is selected by the user, only non-revenue components can be selected, such as movie trailers, further download offers, etc. If the 'Yes' option 690 is selected, components are selected from all available advertising components and matched using demographic information, movie choice and preferences if indicated by the user.
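By way of illustration only, the advertising-option branch just described might be sketched as follows; the scoring heuristic and the field names are assumptions made for the sketch, not part of the specification.

```python
# Illustrative sketch of the 'No' (670) / 'Yes' (690) advertising option.
def select_advertisements(accept_ads, ad_database, demographics, movie_choice, preferences=()):
    """'No' option: only non-revenue components (trailers, download offers).
    'Yes' option: match against demographic data, movie choice and preferences."""
    if not accept_ads:
        return [ad for ad in ad_database if not ad.get("revenue", True)]

    def match_score(ad):
        target = ad.get("target", {})
        score = sum(1 for k, v in demographics.items() if target.get(k) == v)
        score += 1 if movie_choice in ad.get("genres", ()) else 0
        score += sum(1 for p in preferences if p in ad.get("tags", ()))
        return score

    return sorted(ad_database, key=match_score, reverse=True)
```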
Permission 700 allowing download of entertainment content is subject to conditions, such as prior payment, acceptance of advertising content, membership or any other defined condition such as user age, and is provided to the download site. The selected media and any components that may be required based on run-time instructions are uploaded and assembled by the controller 50 and scheduler module 58 and rendered by the renderer module 60 to the customer device 650. The entertainment content may be distributed from one of many entertainment content mirror sites.

The number of times or number of days that the entertainment can be accessed is controlled by the controller 50. Each time the entertainment is viewed, components are reselected dynamically according to rules. For example, an advertisement run at 9.00 a.m. during entertainment may be a coffee advertisement and the advertisement run at 8.00 p.m. may be an alcohol advertisement. Dynamic components selected can be subject to predetermined parameters such as audience classification, actual run-time or any input during run-time.

The viewing rule can vary from once to an unlimited number of viewings. Under the unlimited viewing model, new media and components would automatically download whenever the customer logged on to the web interface 630 and seamlessly upload ready for the next viewing. All transactions are logged as proof of purchase to the advertiser.

This process can be further automated to download particular content whenever it becomes available, always with fresh and relevant advertising which has already been pre-approved for delivery. This model of entertainment would therefore rival free-to-air television as an advertising medium and, in its purest business application, be free to customers who choose to accept advertising. According to one embodiment, customers can also choose the advertisement format. For example, all advertisements could be grouped to run at the start of a programme. Advertisers could also choose to advertise in conjunction with symbiotic or complementary products from other advertisers, which could be interleaved as desired by the controller 50, scheduler 58 and renderer 60.

Another application of the present invention is in situations where it is imperative that changes in conditions or parameters are brought to the attention of an observer as soon as possible. Examples of such situations include, but are not limited to, medical and emergency environments, such as hospitals, plant monitoring, mining environments, and aircraft and air traffic control environments. For example, the present invention could be utilised for presenting patient critical information, such as heart rate, blood pressure, temperature and the like. Under normal patient conditions, or within acceptable tolerances according to the patient's condition, age, gender, etc., the patient critical information could be displayed in a particular font and colour, with or without associated audio. In a critical or emergency condition, such as the patient experiencing cardiac arrest, one or more elements of the patient critical information could be displayed in a much larger font and more eye-catching colour to attract the observer's attention as soon as possible. This change could be accompanied by a very audible change in, or the introduction of, associated audio. Multiple patients could be monitored simultaneously via a live feed, each patient having associated parameters determining how their patient critical information is displayed.
For example, acceptable tolerances of patient critical information for a toddler are unlikely to be acceptable for an 80 year-old. Similar display varying capabilities would also be of great value in monitoring conditions of plant machinery and mine sites and in air traffic control situations, for example, to display aircraft on safe courses differently from those on a collision course.
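A minimal sketch of the tolerance-driven display change described for the patient monitoring example is given below. The vital-sign ranges and display attributes are illustrative per-patient parameters invented for the sketch, not clinical values or settings from the specification.

```python
# Illustrative only: vary display attributes when a monitored value leaves
# the acceptable tolerance band defined for that patient.
def display_attributes(vital, value, tolerances):
    """tolerances: {vital_name: (low, high)} for this patient (age, condition, ...)."""
    low, high = tolerances[vital]
    if low <= value <= high:
        return {"font_size": 18, "colour": "white", "audio": None}
    # Out of tolerance: larger font, eye-catching colour, audible alert.
    return {"font_size": 48, "colour": "red", "audio": "alarm.wav", "volume": 100}


adult = {"heart_rate": (50, 100)}
toddler = {"heart_rate": (80, 140)}       # different acceptable tolerances per patient
print(display_attributes("heart_rate", 120, adult))     # alarmed presentation
print(display_attributes("heart_rate", 120, toddler))   # normal presentation
```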
Hence, the systems and methods of the present invention provide a solution to the aforementioned problems of the prior art by virtue of the controller 50, scheduler module 58 and renderer module 60 of the presentation content management and creation systems and methods. The disadvantages of the prior art looped systems are avoided because the present invention dynamically controls the selection, scheduling and rendering of the media components to avoid the repetition of the prior art. The present invention can produce a continually varying presentation where desired and can vary the content according to the required effects, the environment, such as background noise, interaction from customers/users, and both internal and external interrupts and inputs, such as those derived from patients/machinery, as described above. Changes up to 30 times per second within the time line of the presentation can be performed to modify the presentation to include, for example, forced play-list content, as described above. Because all of the production or rendering is done as the media is being displayed, this allows complete control and modification of all of the attributes, such as, but not limited to, the colour, opacity, position, size, volume, layer order, font size and style and blend level transparency of each media component, whether that be an image, a text field or any other component, at any time.

The Video On Demand delivery methods enable targeted advertising and associated revenue streams as a direct result of the application of the dynamic control of the components of the media during the scheduling and rendering processes. The Virtual Sales Person methods enable targeted advertising and messaging as a direct result of the application of the dynamic control of the components of the media during the scheduling and rendering processes.
The system and methods described with respect to the scheduler module 58 and renderer module 60 are designed to allow control of any available attribute of any available component by way of sensing an input command from any source. Such input can then be made to vary the resultant presented media dynamically as it is displayed on the visual and audio output devices 14 of the system. The extent of control extends to, but is not limited by, component selection and presentation, with presentation comprising one or more of size, position, colour, font, duration, opacity, visibility and volume. Determinations regarding these component attributes are continually made by the renderer module 60 and are limited only by the processor in the SCU 12.

The level of control afforded by the invention gives rise to a presentation, and in particular advertising, creation and delivery system which can be accessed by simple web based interfaces. The resultant dynamic content can not only be tailored to have a unique look and feel, but can also deliver a unique result each time it is viewed. The system is 100% scalable and high video production costs are eliminated. Furthermore, the file sizes associated with this method of content production and presentation are reduced to a fraction of the size of a traditionally produced video file, yet deliver the high definition content required by today's modern screens. The present invention allows media to be a composition of many smaller components, such as images, text fields, audio files, etc., which significantly reduces the overall size of the media. This file size compared to play time is completely disproportionate by current standards; for example, a 30 s advertisement can occupy a mere 1 MB in the present invention. This brings another clear advantage when, for example, a presentation is delivered by broadband to a consumer's home. Downloaded content begins playing immediately and, because further content can be downloaded during this play time, the resultant delivery can be seamless.

The control of the rendering process via timelines that interact dynamically with the schedule allows the same level of control available from current DVD players. Skip, skip to, repeat, fast forward, rewind, pause, freeze and picture-in-picture (PIP) are all functions of control of display attributes of content components and as such can be made available at all times to the viewer. This level of functionality further allows the user to drill down and request further information as a result of an onscreen prompt in the form of a message, offer or the like.

The auditing and reporting available allow advertisers to be billed only after the content has been viewed and their advertisements to be offered only to their desired demographic. The advertiser can be billed at differing rates based on, for example, the degree of demographic match achieved or the varying levels of interactivity. Alternatively, viewers can choose to accept advertising only from categories and companies of their choice. Advertising can be democratised and made affordable to the point that the local trader may compete equally with multinational companies for the viewer's attention, while still ensuring a revenue stream appropriate to the content which is at least equal to, but may under this system, due to market demands, be greater than that currently developed by free-to-air television.
The invention can also be applied across many differing platforms, such as IP telephony networks and mobile 3G networks, and viewed on desktop video phones, handheld devices and the like.

Throughout the specification the aim has been to describe the invention without limiting the invention to any one embodiment or specific collection of features. Persons skilled in the relevant art may realise variations from the specific embodiments that will nonetheless fall within the scope of the invention.

Claims (67)

1. A presentation content management and creation system comprising: a database of sorted media components including audio components, visual components and dynamic components, the dynamic components allowing changes to be made to the media components of a scheduled real time media presentation; a controller coupled to be in communication with the database, the controller comprising a scheduler module for scheduling audio, visual and dynamic components selected from the database into the scheduled real time media presentation; and at least one output device coupled to be in communication with the controller for outputting a real time media presentation; wherein the real time media presentation is rendered by a renderer module of the controller as it is displayed by the at least one output device; and wherein rendering of the real time media presentation comprises controlling and modifying, in response to one or more inputs to the controller and one or more associated parameters, one or more attributes of the one or more selected dynamic components of the scheduled real time media presentation to control and modify the appearance of the real time media presentation displayed by the at least one output device.
2. The system of claim 1, further comprising an administrator module coupled to be in communication with the database and the controller.
3. The system of claim 2, wherein the database, the controller and the administrator module are coupled to be in communication in a store control unit.
4. The system of any preceding claim, wherein the media components selected from the database include at least one static media component.
5. The system of any preceding claim, wherein at least one attribute of at least one of the dynamic components is determined by the controller.
6. The system of any preceding claim, wherein attributes of the dynamic components include: colour, opacity, position, size, duration, volume, layer order, text size, text style, blend level transparency or combinations thereof.
7. The system of any preceding claim, further comprising a customer demographic database coupled to be in communication with a user interface device and the database of sorted media components.
8. The system of claim 7, wherein the user interface device also functions as the at least one output device.
9. The system of any preceding claim, wherein at least some of the media components in the database of sorted media components are provided by entertainment media content providers.
10. The system of claim 9, wherein in response to one or more selections made by a user via the user interface, the real time media presentation is communicated to the at least one output device.
11. The system of claim 10, wherein the one or more selections made by the user include selecting whether or not advertisements are to be included in the real time media presentation.
12. The system of claim 11, wherein if advertisements are to be included in the real time media presentation, the advertisements are selected by the controller on the basis of data relating to the user stored in the customer demographic database.
13. The system of claim 12, wherein said advertisements are selected from an advertisement database coupled to be in communication with the controller.
14. The system of claim 1, wherein the media components scheduled and/or rendered by the controller are determined at least partially in response to signals detected by one or more of the following devices coupled to be in communication with the controller: an image capturing device, a motion sensor, a sensitive/voice activated screen.
15. A controller for a presentation content management and creation system, said controller comprising: a scheduler module for: selecting media components from a database of sorted media components including audio components, visual components and dynamic components, the dynamic components allowing changes to be made to the media components of a scheduled real time media presentation; and creating a play-list of scheduled audio, visual and dynamic components; and a renderer module for rendering the scheduled audio, visual and dynamic components into the real time media presentation as it is displayed by at least one output device coupled to be in communication with the controller; wherein rendering of the real time media presentation comprises controlling and modifying, in response to one or more inputs to the controller and one or more associated parameters, one or more attributes of the one or more selected dynamic components of the scheduled real time media presentation to control and modify the appearance of the real time media presentation displayed by the at least one output device.
16. The controller of claim 15, wherein the scheduler module randomly selects media components from the database of sorted media components via a list of media components stored in the controller.
17. The controller of claim 15 or 16, wherein the media components are sorted at least by a media category required in the presentation.
18. The controller of claim 15, 16 or 17, wherein the scheduler separates the scheduled dynamic components from static components.
19. The controller of claim 18, wherein the dynamic components, if required, are selected according to one or more identifying parameters specified for said dynamic components.
20. The controller of any of claims 16 to 19, wherein the renderer module separates the components constituting the scheduled media into dynamic components and static components.
21. The controller of any of claims 16 to 20, wherein the renderer module reselects at least one of the dynamic components when a change in the real-time presentation is required.
22. The controller of claim 20, wherein the renderer module combines the static components and the dynamic components in the real-time presentation.
23. The controller of any of claims 15 to 22, wherein the sorted media components are sorted by a media subcategory required in the presentation.
24. The controller of any of claims 15 to 23, wherein the renderer module 20 changes the presentation of a media component due to one or more of the following: an internal input, an external input.
25. The controller of any of claims 15 to 24, wherein attributes of the media components include: colour, opacity, position, size, duration, volume, layer 25 order, text size, text style, blend level transparency or combinations thereof. 33
26. A method of creating and presenting a presentation including: selecting media components from a database of sorted media components, the database including audio components, visual components and dynamic components, the dynamic components allowing changes to be 5 made to the media components of a scheduled real time media presentation; creating a play-list of scheduled audio, visual and dynamic components; rendering the scheduled audio, visual and dynamic components into 10 a real time media presentation as the real time media presentation is displayed by at least one output device; and controlling and modifying, in response to one or more inputs to a controller, and in accordance with one or more associated parameters, one or more attributes of the one or more selected dynamic components of the 15 scheduled real time media presentation to control and modify the appearance of the real time media presentation displayed by the at least one output device.
27. The method of claim 26, further including separating the media components constituting the scheduled media into dynamic components and static components.
28. The method of claim 26 or 27, wherein changing at least one of the dynamic components includes:
determining a type and at least one parameter of the at least one dynamic component that requires changing; and
selecting a replacement component from at least one component list according to the parameters.
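A hypothetical Python rendering of the replacement step in claim 28 could look as follows; the "type" and "parameters" keys are assumed field names.

import random

def replace_dynamic_component(component, component_lists):
    # Determine the type and identifying parameters of the component that
    # needs changing, then choose a replacement of the same type whose
    # parameters match; fall back to the original if nothing matches.
    component_type = component.get("type")
    required = component.get("parameters", {})
    candidates = [c for c in component_lists.get(component_type, [])
                  if all(c.get("parameters", {}).get(k) == v
                         for k, v in required.items())]
    return random.choice(candidates) if candidates else component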
29. The method of claim 27 or 28, further including combining the static components and the dynamic components in the real-time media presentation.
30. The method of any of claims 26 to 29, further including recording details of the media components for auditing purposes once displayed in the real time media presentation.
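The auditing step of claim 30 amounts to logging each component as it is actually displayed. A minimal sketch, assuming a CSV audit file and dictionary-style component records with assumed "name" and "category" keys:

import csv
from datetime import datetime, timezone

def record_play_out(audit_path, component):
    # Append one audit row per displayed component so that, for example,
    # advertisers can later verify when their material was presented.
    with open(audit_path, "a", newline="") as handle:
        csv.writer(handle).writerow([
            datetime.now(timezone.utc).isoformat(),
            component.get("name"),
            component.get("category"),
        ])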
31. The method of any of claims 26 to 30, wherein modifying one or more attributes includes modifying: colour, opacity, position, size, duration, volume, layer order, text size, text style, blend level transparency or combinations thereof.
32. A presentation content management and creation system comprising the controller of claim 15.
33. The system of claim 1, wherein the one or more inputs are external inputs or internal inputs.
34. A presentation content management and creation system comprising:
a database of sorted media components;
a controller coupled to be in communication with the database for scheduling and rendering media components selected from the database into a real time media presentation;
at least one output device coupled to be in communication with the controller for outputting the real time media presentation;
wherein in response to one or more inputs to the controller, one or more attributes of the one or more selected media components scheduled for presentation is modified as the real time media presentation is displayed by the at least one output device.
35. The system of claim 34, further comprising an administrator module coupled to be in communication with the database and the controller.
36. The system of claim 35, wherein the database, the controller and the administrator module are coupled to be in communication in a store control unit.
37. The system of claim 34, wherein the media components selected from the database include at least one of the following: a static media component, a dynamic media component.
38. The system of claim 37, wherein a dynamic media component is selected when a change in the real time presentation is required.
39. The system of claim 34, wherein at least one attribute of at least one of the dynamic media components is determined by the controller.
40. The system of claim 39, wherein attributes of the media components include: colour, opacity, position, size, duration, volume, layer order, text size, text style, blend level transparency or combinations thereof.
41. The system of claim 34, further comprising a customer demographic database coupled to be in communication with a user interface device and the database of sorted media components.
42. The system of claim 41, wherein the user interface device also functions as the at least one output device.
43. The system of claim 43, wherein at least some of the media components in the database of sorted media components are provided by entertainment media content providers.
44. The system of claim 41, wherein in response to one or more selections made by a user via the user interface, the real time media presentation is communicated to the at least one output device.
45. The system of claim 41, wherein the one or more selections made by the user include selecting whether or not advertisements are to be included in the real time media presentation.
46. The system of claim 45, wherein if advertisements are to be included in the real time media presentation, the advertisements are selected by the controller on the basis of data relating to the user stored in the customer demographic database.
47. The system of claim 46, wherein said advertisements are selected from an advertisement database coupled to be in communication with the controller.
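Claims 45 to 47 describe choosing advertisements on the basis of stored customer demographic data. One possible, purely illustrative selection routine, with assumed "segment" and "target_segment" fields, is:

def select_advertisements(user_id, demographic_db, advertisement_db, include_ads):
    # If the user has opted to include advertisements, return those whose
    # target segment matches the demographic profile stored for that user.
    if not include_ads:
        return []
    profile = demographic_db.get(user_id, {})
    return [ad for ad in advertisement_db
            if ad.get("target_segment") == profile.get("segment")]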
48. The system of claim 34, wherein the media components scheduled and/or rendered by the controller are determined at least partially in response to signals detected by one or more of the following devices coupled to be in communication with the controller: an image capturing device, a motion sensor, a sensitive/voice activated screen.
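Claim 48 ties scheduling to signals from devices such as cameras and motion sensors. The sketch below shows how such events might be mapped to controller actions; the event and action names are entirely hypothetical.

def on_sensor_event(event, controller_queue):
    # Map detected signals (motion near a display, a face seen by a camera,
    # a touch or voice interaction) to actions the controller can act on.
    actions = {
        "motion_detected": "start_presentation",
        "face_detected": "insert_targeted_component",
        "touch_or_voice": "switch_to_interactive_mode",
    }
    action = actions.get(event.get("type"))
    if action is not None:
        controller_queue.append({"action": action, "source": event})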
49. A controller for a presentation content management and creation system, said controller comprising:
a scheduler module for selecting media components from a database of sorted media components and creating a play-list of scheduled media components; and
a renderer module for rendering the scheduled media components into a real time media presentation displayed by at least one output device coupled to be in communication with the controller;
wherein, in response to one or more inputs to the controller, one or more attributes of the one or more selected media components scheduled for presentation is modified as the real time media presentation is displayed by the at least one output device.
50. The controller of claim 49, wherein the scheduler module randomly selects media components from the database of sorted media components via a list of media components stored in the controller.
51. The controller of claim 49, wherein the media components are sorted at least by a media category required in the presentation.
52. The controller of claim 49, wherein the scheduler separates the scheduled media components into dynamic components and static components.
53. The controller of claim 52, wherein the dynamic components, if required, are selected according to one or more identifying parameters specified for said dynamic components.
54. The controller of claim 16, wherein the renderer module separates the components constituting the scheduled media into dynamic components and static components.
55. The controller of claim 54, wherein the renderer module reselects at least one of the dynamic components when a change in the real-time presentation is required.
56. The controller of claim 54, wherein the renderer module combines the static components and the dynamic components in the real-time presentation.
57. The controller of claim 49, wherein the sorted media components are sorted by a media subcategory required in the presentation.
58. The controller of claim 49, wherein the renderer module changes the presentation of a media component due to one or more of the following: an internal input, an external input.
59. The controller of claim 49, wherein attributes of the media components include: colour, opacity, position, size, duration, volume, layer order, text size, text style, blend level transparency or combinations thereof.
60. A method of creating and presenting a presentation including:
selecting media components from a database of sorted media components;
creating a play-list of scheduled media components;
rendering the scheduled media components into a real time media presentation as the real time presentation is displayed by at least one output device; and
modifying, in response to one or more inputs to a controller, one or more attributes of the one or more selected media components scheduled for presentation as the real time media presentation is displayed by the at least one output device.
61. The method of claim 60, further including separating the media components constituting the scheduled media into dynamic components and static components.
62. The method of claim 61, further including changing at least one of the dynamic components when a change in the real time media presentation is required.
63. The method of claim 62, wherein changing at least one of the dynamic components includes:
determining a type and at least one parameter of the at least one dynamic component that requires changing; and
selecting a replacement component from at least one component list according to the parameters.
64. The method of claim 61, further including combining the static components and the dynamic components in the real-time media presentation.
65. The method of claim 60, further including recording details of the media components for auditing purposes once displayed in the real time media presentation.
66. The method of claim 60, wherein modifying one or more attributes includes modifying: colour, opacity, position, size, duration, volume, layer order, text size, text style, blend level transparency or combinations thereof.
67. A presentation content management and creation system comprising the controller of claim 49.
AU2012203179A 2005-07-19 2012-05-30 Presentation content management and creation systems and methods Abandoned AU2012203179A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2012203179A AU2012203179A1 (en) 2005-07-19 2012-05-30 Presentation content management and creation systems and methods

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AU2005903805 2005-07-19
AU2006272454A AU2006272454A1 (en) 2005-07-19 2006-07-19 Presentation content management and creation systems and methods
AU2012203179A AU2012203179A1 (en) 2005-07-19 2012-05-30 Presentation content management and creation systems and methods

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU2006272454A Division AU2006272454A1 (en) 2005-07-19 2006-07-19 Presentation content management and creation systems and methods

Publications (1)

Publication Number Publication Date
AU2012203179A1 true AU2012203179A1 (en) 2012-06-21

Family

ID=46489065

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2012203179A Abandoned AU2012203179A1 (en) 2005-07-19 2012-05-30 Presentation content management and creation systems and methods

Country Status (1)

Country Link
AU (1) AU2012203179A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580611A (en) * 2018-06-11 2019-12-17 钉钉控股(开曼)有限公司 Resume information management method, and recruitment information management method and device
CN110580611B (en) * 2018-06-11 2024-03-19 钉钉控股(开曼)有限公司 Resume information management method, recruitment information management method and recruitment information management device

Similar Documents

Publication Publication Date Title
US20110173521A1 (en) Presentation content management and creation systems and methods
US6188398B1 (en) Targeting advertising using web pages with video
AU2008247579B2 (en) User interfaces for web-based video player
KR101478275B1 (en) System and/or method for distributing media content
JP5337146B2 (en) Video overlay
BE1021661B1 (en) VIDEOPRESENTATION INTERFACE WITH IMPROVED NAVIGATION FUNCTIONS
US8949882B2 (en) System and method for enabling content providers to identify advertising opportunities
JP6179866B2 (en) How to set frequency limit for addressable content
US20130031593A1 (en) System and method for presenting creatives
US20030037332A1 (en) System and method for storyboard interactive television advertisements
CA2870050C (en) Systems and methods for providing electronic cues for time-based media
US20090307092A1 (en) System and method for providing media content
US20080276270A1 (en) System, method, and apparatus for implementing targeted advertising in communication networks
US20080163283A1 (en) Broadband video with synchronized highlight signals
US20090012880A1 (en) User Interface For Creating and Displaying Digital Signage
US9767463B2 (en) On demand product placement
CN101317191A (en) Selective ad display for multimedia content
JP2010507351A (en) Targeted video advertising
KR20120096065A (en) Systems and methods for determining proximity of media objects in a 3d media environment
US20120179968A1 (en) Digital signage system and method
JP2020129375A (en) Advertisement delivery method and advertisement delivery system using the method
Lekakos et al. An integrated approach to interactive and personalized TV advertising
CN102027501A (en) Media selection and personalization system
KR20000054044A (en) advertizing method using a moving picture window for an Internet broadcasting
AU2012203179A1 (en) Presentation content management and creation systems and methods

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application