System and method for media management
Cross-Reference To Related Applications
This application claims priority to U.S. Provisional Application Serial No.
60/460,649, filed on April 4, 2003. Priority to the prior application is expressly claimed, and the disclosure of the application is hereby incorporated by reference in its entirety.
Field Of The Invention
This invention relates to media management and editing, and particularly, but not exclusively to the control of media and related metadata in a non-linear media editing system.
Background Of The Invention
Non-linear editing is used to refer to editing of digital media in which direct access to media portions is possible, as opposed to linear editing which operates on sequentially stored formats such as tape. Non-linear editing provides many advantages over linear editing such as speed of access to media segments or portions, and the ability to preview and correct each edit decision without having to go to tape or disk first. It also enables more than one editor to access a common media segment simultaneously. This allows the editor or editors a greater degree of creative freedom in the editing process.
A typical editing sequence will start with one or more input media items being input into an editing system. These may be ingested from a digital capture or storage device, or may be digitised analogue material, and are sometimes referred to as Non-Production Media Items. One or more editors can then use segments of the input items and, by performing various operations such as cuts, fades and colour correction, create a new media item, which can be referred to as a Production Media Item.
In order to facilitate management and editing of digital media, auxiliary information, or metadata, is included with media items. In this way the media essence is effectively tagged for use in database-like operations such as searching, storage tracking and retrieval.
In an editing system a large amount of metadata can be included with (and may indeed be required for) media items and the segments which make up those items. As has been explained, while this information is helpful in the editing process, the present inventors have recognised that there is a need to manage, and to allow a user to access this information in an efficient manner.
It is an object of one aspect of the present invention to provide an improved method of controlling metadata in a media editing environment.
In a similar manner editing systems have to deal with a large amount of media essence. The volume of media passing through media organisations is ever increasing, with a large news organisation typically receiving over 300 hours of new footage every day. Some of this footage will be discarded, after being used or without being used at all; however, it is desirable that certain footage, deemed to be of historical interest in some way, is archived for possible use in the future.
In video broadcasting applications it is therefore necessary to maintain archived footage capable of being replayed as broadcast quality video. Storage constraints are currently such that the quantity of broadcast video which it is desired to archive is too great to be kept on an online system, and tape archiving is typically used. In this way, large tape archives of various formats have been built up over the years.
In a media editing application, such as a news production facility, there is a conflict between the requirement to move items from online storage to archive, and the resources required to select and review material to be
archived. The present inventors have recognised that there is a need to manage, review and archive media essence in an efficient manner.
It is an object of one aspect of the present invention to provide an improved media management system and method.
It is an object of one aspect of the present invention to provide an improved system and method for video archiving and retrieval.
Brief Summary Of The Invention
Accordingly the present invention consists in one aspect of:
A method for controlling metadata associated with media items in a media management system in which a plurality of media items are related in parent-child relationships, comprising the steps of: defining at least one metadata attribute; defining a first and a second propagation property; assigning either said first or second propagation property to said at least one metadata attribute; assigning or modifying a metadata attribute component of a first media item; and, responsive to the assignment or modification, identifying media items in the system related to said first media item and, based on the propagation property of the selected attribute, selectively assigning or modifying a metadata attribute component of the identified related media items accordingly.
By exploiting the relationships between media it is possible, with this selective control, to reduce the amount of annotation which would otherwise be necessary while at the same time ensuring that metadata which is only relevant to a certain group of items is not propagated outside of that group, avoiding a proliferation of unhelpful annotation.
Selective control according to propagation property may occur at either the identification stage or at the assignment and modification stage according to the particular implementation of the invention.
Preferably the first propagation property is a parent to child inheritance property, whereby a metadata attribute component is assigned or modified for identified related media items which are derived from said first item if the propagation property of said attribute is said first propagation property. More preferably the second propagation property is a bi-directional property, whereby a metadata attribute component is assigned or modified for identified related media items which are derived from said first item and for identified related media items from which said first item is derived if the propagation property of said attribute is said second propagation property.
Preferably a metadata component attribute is assigned to a specified media segment of a media item, and related media items are defined as those media items within the system which include media corresponding to at least a portion of said media segment. It is then advantageous for the modification or assignment which occurs due to propagation to apply to a metadata attribute component of a related media item at a segment corresponding at least partially to said specified media segment. Using this method, propagated metadata can be assigned to particular segments or sequences of media, in a frame accurate fashion if required.
In order to achieve this, attribute components are assigned start and end times corresponding to the specified media segment which they accompany. It should be understood that components need not necessarily be defined by start and end times, but should be defined by information which allows start and end times to be derived, for example a start time and a duration.
Using the second bi-directional property it is possible for a metadata component attribute of an item to be propagated to all related media items within the system. In this way metadata which will be of relevance to all corresponding media instances can be propagated to all related items accordingly, while other metadata which may not be of such widespread relevance is propagated only to a subset of items.
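By way of illustration only, the two propagation properties described above may be sketched as follows. The class and function names, and the graph traversal, are not part of the specification; they are one possible realisation of parent-to-child inheritance versus bi-directional propagation over a parent-child item graph:

```python
# Illustrative sketch (names are assumptions, not part of the specification):
# two propagation properties applied over a parent-child graph of media items.
INHERIT = "parent_to_child"       # first propagation property
BIDIRECTIONAL = "bidirectional"   # second propagation property

class MediaItem:
    def __init__(self, name):
        self.name = name
        self.parents = []
        self.children = []
        self.attributes = {}  # attribute name -> value

def propagate(item, attribute, value, prop):
    """Assign an attribute to `item` and spread it to related items
    according to the attribute's propagation property."""
    item.attributes[attribute] = value
    seen = {item}
    frontier = list(item.children)       # children always inherit
    if prop == BIDIRECTIONAL:
        frontier += item.parents         # and parents, if bi-directional
    while frontier:
        related = frontier.pop()
        if related in seen:
            continue
        seen.add(related)
        related.attributes[attribute] = value
        frontier += related.children
        if prop == BIDIRECTIONAL:
            frontier += related.parents

# A three-generation family: raw footage -> rough cut -> final edit.
raw = MediaItem("raw footage")
rough = MediaItem("rough cut")
final = MediaItem("final edit")
raw.children.append(rough); rough.parents.append(raw)
rough.children.append(final); final.parents.append(rough)

# Bi-directional propagation from the middle item reaches the whole family.
propagate(rough, "rights", "agency-restricted", BIDIRECTIONAL)
assert raw.attributes["rights"] == "agency-restricted"
assert final.attributes["rights"] == "agency-restricted"
```

In this sketch an inheritance-only attribute assigned to a child item would not reach its parents, reflecting the selective control discussed above.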
Values of existing components, if any, are preferably overwritten by propagated components. More preferably propagated metadata carries with it a time value indicating when that metadata was assigned or modified, and for any given item the most relevant metadata can be derived using these time values.
A further aspect of the invention provides:
A media editing system in which a plurality of media items are related in parent-child relationships, said system comprising a tree information database adapted to store information determining the relationships between media items; and a user interface to allow a user to assign a metadata attribute component to a media segment; wherein one or more attributes are governed by a bi-directional propagation rule, such that on allocation or modification of such an attribute component to a first media segment, the database is searched to identify both parent-related and child-related media items having at least a portion of media corresponding to that media segment, and an attribute component at those identified portions in those related media items is allocated or modified accordingly.
Propagation rules are preferably stored and maintained in a dedicated rules engine. Searching of the database can advantageously be performed by the rules engine.
The user interface preferably includes a graphical timeline representative of a media item or a segment of a media item to allow attribute components to be assigned simply and intuitively.
A still further aspect of the invention comprises:
Metadata for accompanying media items in a media management system in which a plurality of media items are related in parent-child relationships by including corresponding media segments, which metadata is propagated between parent and child media items; said metadata comprising a metadata attribute component associated with a segment of a media item; wherein said
metadata attribute component has one of at least two propagation properties indicative of how that component propagates to related media items.
By defining multiple different types of propagation between related items in this way selective control can be exercised over the automatic spread of metadata between related items.
Preferably one propagation property is a parent to child inheritance property, indicating that such a metadata component associated with an item is automatically associated also with child items derived from that item. Also preferably one propagation property is a bi-directional propagation property, indicating that such a metadata component associated with an item is automatically associated also with child items derived from that item, and parent items from which that item is derived.
This caters for types of metadata which are immediately of importance to all related items once assigned, for example rights restrictions.
Preferably the metadata accompanies a specified segment of a media item, and association is only to related items including at least a portion of that specified media segment. This allows more accurate control of the propagation to identifiable segments of media.
More preferably, bi-directional metadata propagation continues throughout the system so that bi-directional metadata associates with all related items within the media system which include at least a portion of that media segment.
Advantageously, metadata is classified into attributes, and all metadata for a given attribute has the same propagation property. In this way, once attributes are defined, values assigned or updates for those attributes will automatically be governed by the appropriate propagation property. For a given attribute for a given segment of a media item, it may be desirable that the most recently updated value of that attribute is propagated through the system, optionally overriding any existing value, such that the system is
always automatically updated with the latest data. Alternatively the metadata may include an indication of the time of amendment or assignment of that metadata. From this time it is possible to derive the most recently updated value.
Another aspect of the invention provides:
A method for managing metadata associated with a media item comprising maintaining a timeline associated with said media item, wherein attribute components are assignable to said media item at defined times; maintaining a database of attribute components for each media item, each component having an attribute value, a duration along said timeline, and a value representative of the time at which the component was assigned to said media item; processing the attribute components in said database to derive, for a given attribute, the most recently assigned value at each time along the attribute timeline; and displaying at each time on an attribute timeline the most recently assigned attribute value.
This allows a user to view the current attribute values of an item quickly and easily in a graphical format, without having to consider the various components and values which may be assigned to a media item, and which may include components which have been replaced or superseded.
Preferably an initial attribute value is assignable to an item and, for any time on said attribute timeline not having an assigned component, that initial value is displayed. This has a similar effect to assigning an initial component along the length of the timeline, which may be used as an alternative method.
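The derivation of the displayed value at each point on the timeline may be sketched as follows. The component representation (value, span, assignment time) is an assumption for illustration; the rule shown is simply that the most recently assigned component covering a position wins, with the initial value used where no component applies:

```python
# Sketch (assumed representation): each component has a value, a
# [start, end) span on the item's timeline, and the time at which it
# was assigned or last amended.
from dataclasses import dataclass

@dataclass
class Component:
    value: str
    start: int        # e.g. frame number on the item timeline
    end: int          # exclusive
    assigned_at: int  # ordinal of assignment/amendment

def value_at(components, t, initial=None):
    """Most recently assigned value covering timeline position t,
    falling back to the initial value where nothing is assigned."""
    covering = [c for c in components if c.start <= t < c.end]
    if not covering:
        return initial
    return max(covering, key=lambda c: c.assigned_at).value

comps = [
    Component("interview", 0, 100, assigned_at=1),
    Component("embargoed", 40, 60, assigned_at=2),  # later annotation wins
]
assert value_at(comps, 50) == "embargoed"
assert value_at(comps, 10) == "interview"
assert value_at(comps, 150, initial="raw") == "raw"
```

Amending a component would, per the method above, simply bump its `assigned_at` value so that the amended value is the one derived for display.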
Advantageously components can be amended, and the value representative of the time at which the component was assigned is updated to the time of the amendment each time a component is amended. This results in the most up to date information always being displayed, without a new component necessarily having to be created to effect a change.
Preferably the method is operated in a media management system, so that multiple users can assign or update attribute values to items within the system via multiple user interfaces and, more preferably, each time an attribute value is assigned or amended, all user interfaces within the system are updated to reflect the assigned or amended value. This allows users to benefit from one another's annotation in a timely fashion.
Preferably components of a media item in the system may be assigned or updated automatically by propagation from another related media item.
Yet another aspect of the invention comprises:
A method for monitoring media usage in a media editing system comprising: maintaining a database of media items; defining a timeline associated with a media item, and a usage value representing the usage of said media item which varies along said timeline; searching said database and identifying portions of said media item occurring in derived media items; processing said identified portions and, for each identified occurrence, updating the usage value for the corresponding portion of the timeline to reflect the usage of said segment; and displaying said usage value on said timeline.
The resulting display will, in many cases, provide an indication of the usefulness of the different segments of the media item. This is ingeniously achieved without requiring any dedicated annotation, but by measuring the number of times each segment has been selected and used in other items. By logging the results of editing processes a value added media item including a usage density measure is created.
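A minimal sketch of such a usage density measure follows. The function name and the per-frame counting scheme are assumptions; the optional weights stand in for the status-based tailoring described below (e.g. discounting 'raw' instances):

```python
# Sketch: build a per-frame usage count for a source item from the
# portions of it that occur in derived items (names are illustrative).
def usage_profile(item_length, used_portions, weights=None):
    """used_portions: list of (start, end) spans of the source item
    found in derived items; returns a usage count per position."""
    profile = [0] * item_length
    for i, (start, end) in enumerate(used_portions):
        w = 1 if weights is None else weights[i]
        for t in range(start, min(end, item_length)):
            profile[t] += w
    return profile

# Three derived items reuse overlapping segments of a 10-frame source;
# the overlap region accumulates the highest usage density.
profile = usage_profile(10, [(0, 5), (3, 8), (4, 6)])
assert profile == [1, 1, 1, 2, 3, 2, 1, 1, 0, 0]
```

Displaying this profile along the item timeline gives the usage view discussed above without any manual annotation.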
Preferably the status of the media items in which portions of said media item occur is used in deriving the usage value. This allows the usage to be tailored to meet a specific requirement, or to exclude data which will not accurately reflect usage of the item. For a given production item for example, there may well still be corresponding raw footage within the system. It may be desirable that any remaining instances of 'raw' or 'rough cut' items are not included in deriving the usage value. This could leave only those instances which have
been selected for inclusion in an edit, or in a final edit to be included in the usage value.
Still another aspect of the invention comprises: A method for managing a restriction status of a media item in an editing system comprising setting an initial restriction status for the media item from one of a set of predetermined values; assigning one or more components of a restriction attribute to segments of said media item, which components may take one of the set of predetermined values; and deriving an overall restriction status for the media item based on the default status and the one or more usage restriction components.
This provides an 'at a glance' indication of the restriction status of an item which is preferably displayed to a user, advantageously whenever the media item is being viewed, edited, or worked on in some other way. While a more detailed review will be necessary to determine which segments of the item have which restrictions, the summary is invaluable for making an on the spot decision. Also, since the summary is a single value it can usefully be included on many GUI screens where a more comprehensive rights indication would not fit, or would be considered intrusive. In embodiments where the traffic light may be automatically updated after annotation by another system user, this provides a constant watch on the latest restriction status of an item using minimum screen real estate.
Preferably the overall restriction status is the result of a Boolean function of the default value and the component values. Also, preferably the overall restriction status varies according to the production status of the media item.
In an advantageously simple embodiment the possible values are colour values. Thus the overall restriction status can be displayed extremely easily with a mark of a certain colour, or by altering the colour of an existing feature of a display. Preferably there are three possible values - red, amber and green.
A user may optionally manually override the derived status, but preferably only to a more restrictive status, thus ensuring a minimum restriction threshold which is automatically derived.
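One possible derivation of the overall traffic light status is sketched below. A worst-case combination over the default and component values stands in here for the Boolean function mentioned above; the severity ordering and the more-restrictive-only override rule follow the preferences just described, but the function names are illustrative:

```python
# Sketch: derive an overall red/amber/green restriction status from a
# default value and segment-level restriction components. The severity
# ordering and combination rule are assumptions for illustration.
SEVERITY = {"green": 0, "amber": 1, "red": 2}

def overall_status(default, component_values, manual_override=None):
    # Worst case wins: any red segment makes the whole item red, etc.
    status = max([default] + list(component_values), key=SEVERITY.get)
    # A manual override is honoured only if it is MORE restrictive,
    # preserving the automatically derived minimum threshold.
    if manual_override and SEVERITY[manual_override] > SEVERITY[status]:
        status = manual_override
    return status

assert overall_status("green", ["green", "amber"]) == "amber"
assert overall_status("green", ["amber"], manual_override="red") == "red"
assert overall_status("red", ["green"], manual_override="green") == "red"
```

The single resulting colour is what would be shown as the compact traffic light on GUI screens.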
A further aspect of the invention provides:
A method for managing a media system comprising the steps of storing in a non-linear media system one or more media instances of a first resolution; storing in a database in said non-linear media system, metadata associated with said media instances; storing in a rules engine in said non-linear media system, one or more archive rules; automatically identifying one or more media instances of said first resolution to be archived by querying metadata stored in said database according to said one or more rules, and returning metadata identifying one or more media instances; and copying said identified media instances to a media archive.
Because much of the media that should be archived can be identified using automated business rules, it is anticipated that most archived media may be automatically recommended. This will be advantageous, for example during major news events where manual identification of all appropriate media for archiving would require considerable resources.
Preferably archive rules comprise one or more archive terms, each term specifying a metadata field name, an operator, and a value to be satisfied by that operator. It is desirable that archive rules are executed periodically to identify media instances to be archived, each rule having a predefined periodic interval. In this way rules designed to identify regular media items eg. the One O'clock News, can be scripted and executed each day at, say, half past two in the afternoon, when it is known that the item will have been completed and not yet deleted.
It is also desirable that each archive rule has an assigned priority, and that archive rules are executed according to priority status, to handle situations where a resource conflict arises.
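The archive rule structure just described may be sketched as follows. The field names, operators and the priority-ordered execution are illustrative; a real rules engine would also schedule each rule at its periodic interval:

```python
# Sketch: archive rules as (field, operator, value) terms evaluated
# against item metadata, executed in priority order. Field names and
# the operator set are assumptions for illustration.
import operator

OPS = {"==": operator.eq, ">=": operator.ge,
       "contains": lambda a, b: b in (a or "")}

def matches(rule_terms, metadata):
    """A rule matches when every one of its terms is satisfied."""
    return all(OPS[op](metadata.get(field), value)
               for field, op, value in rule_terms)

rules = [
    # (priority, periodic interval in hours, terms)
    (1, 24, [("programme", "==", "One O'clock News"),
             ("status", "==", "transmitted")]),
    (2, 24, [("description", "contains", "election")]),
]

item = {"programme": "One O'clock News", "status": "transmitted",
        "description": "election special"}

# Evaluate rules in priority order; recommend archiving on any match.
to_archive = any(matches(terms, item)
                 for _, _, terms in sorted(rules, key=lambda r: r[0]))
assert to_archive
```

A daily rule for a regular programme would simply be one entry in such a table, executed at its scheduled time.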
After copying identified instances to archive, the instances are preferably deleted from said non-linear media store, to maintain available on-line storage space. Advantage can be derived if the non-linear media system additionally includes a plurality of corresponding media instances at a second, lower resolution, such that a user of the non-linear media system can view the second, lower resolution version of archived media instances after the higher resolution instances have been deleted.
The first resolution is preferably broadcast quality resolution and the second resolution is preferably web quality resolution.
A still further aspect of the invention provides:
A media archive system comprising a non-linear media store adapted to store a plurality of media segments, a linear media archive; a database adapted to store metadata associated with said plurality of media segments; and a rules engine adapted to store one or more archiving rules, wherein the database is adapted to be queried periodically using said one or more archiving rules to identify media segments meeting archive rule criteria, those identified segments being copied to said linear media archive.
The invention also provides a computer program and a computer program product for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein, and a computer readable medium having stored thereon a program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein.
The invention also provides a signal embodying a computer program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein, a method of transmitting such a signal, and a computer product having an operating system which supports a computer program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein.
The invention extends to methods and/or apparatus substantially as herein described with reference to the accompanying drawings.
Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. In particular, method aspects may be applied to apparatus aspects, and vice versa.
Furthermore, features implemented in hardware may generally be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly.
The methods and apparatus described herein may be implemented in conjunction with media input, editing and transmission systems, aspects of which are described in the applicant's co-pending patent applications. In particular, aspects of a system for managing data for transmission are described in the applicant's co-pending patent application entitled "System and Method for Media Management", Attorney Reference No. IK/26522WO, filed on 5 April 2004, the disclosure of which is hereby incorporated by reference in its entirety. Aspects of a system and method for media data storage and retrieval are described in the applicant's co-pending patent application entitled "Data Storage and Retrieval System and Method", Attorney Reference No. IK/26523WO, filed on 5 April 2004, the disclosure of which is hereby incorporated by reference in its entirety. Aspects of a further system for the storage of data, in particular controlling media storage devices remotely, are described in the applicant's co-pending patent application entitled "Media Storage Control", Attorney Reference No. IK/26520WO, filed on 5 April 2004, the disclosure of which is hereby incorporated by reference in its entirety. A resource allocation system, which may be implemented as part of a media editing system is described in the applicant's co-pending patent application entitled "A Method and Apparatus for Dynamically Controlling a Broadcast Media Production System", Attorney Reference No. IK/26271 WO, the disclosure of which is hereby incorporated by reference in its entirety. Further aspects of a media processing system, are described in the applicant's co-pending patent application entitled "Media Processor", Attorney
Reference No. IK/26519WO, filed on 5 April 2004, the disclosure of which is hereby incorporated by reference in its entirety. A production management system, which may also be implemented in conjunction with the system described herein is described in the applicant's co-pending patent application entitled "System and Method for Processing Multimedia Content", Attorney Reference No. 13214.4001 , filed on 5 April 2004.
Brief Description Of The Drawings
Preferred features of the present invention will now be described, purely by way of example, with reference to the accompanying drawings, in which:
Figure 1 illustrates a media item structure;
Figure 2 is a schematic representation of a media management system;
Figure 3 shows the form of component metadata;
Figures 4 to 6 show metadata inheritance according to an embodiment of the invention;
Figure 7 is an example of summary traffic light metadata according to an embodiment of the invention;
Figures 8a to 8d illustrate component annotation and inheritance according to an embodiment of the invention;
Figures 9a and 9b also illustrate component annotation and inheritance;
Figure 10 is an example of application server structure for a media management system;
Figures 11 and 12 illustrate component metadata displays;
Figures 13a to 13d show component timeline views according to an embodiment of the present invention;
Figure 14 is an item usage view according to an embodiment of the present invention;
Figures 15a and 15b are exemplary display screens;
Figure 16 illustrates archive recommendation and deletion processes in a media management system;
Figure 17 illustrates the implementation of a media management system;
Figure 18 illustrates a server-client relationship in an exemplary media management system;
Figure 19 shows the structure of a rule engine in an exemplary media management system.
Detailed Description
Figure 1 illustrates an exemplary structure for a media item which might be handled in a media management system in accordance with an embodiment of the present invention. The media item 100 is represented along a time axis extending horizontally across the page. The media item comprises three separate media objects or tracks, 102, 104 and 106. In this example track 102 is video, and tracks 104 and 106 are audio. The tracks can be referred to as media essence.
Each media object or track can be divided in time into segments or portions. It should be noted that segments may have boundaries aligned across all tracks, such as segment 108, however a single segment in one track may span two or more segments in another track, such as segments 110 and 112.
Media items additionally comprise metadata, which describes attributes associated with a media item, and which is used to assist in processing of the media essence within the system eg. storing, tracking, editing, archiving etc. These attributes may apply to the whole media item eg. item duration, or may be specific to segments within the media item eg. copyright holder (this will be discussed in greater detail below). A media item can therefore be said to be made up of media essence (tracks), and associated metadata (attributes).
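The structure just described may be sketched as a simple data structure; the field names and values are illustrative only, loosely following the tracks of Figure 1:

```python
# Sketch of a media item (field names assumed): tracks of media essence
# plus metadata attributes, some applying to the whole item and some
# scoped to a segment of its timeline.
media_item = {
    "tracks": {
        "video": "v1",      # cf. track 102 in Figure 1
        "audio_1": "a1",    # cf. track 104
        "audio_2": "a2",    # cf. track 106
    },
    "attributes": {
        "duration": 3600,                      # whole-item attribute
        "copyright_holder": [                  # segment-scoped attribute
            {"start": 0, "end": 900, "value": "Agency A"},
            {"start": 900, "end": 3600, "value": "In-house"},
        ],
    },
}
assert media_item["attributes"]["copyright_holder"][0]["value"] == "Agency A"
```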
Table 1 describes a number of metadata attributes which can be associated with a media item:
Table 1
While media and metadata are associated they can be used and stored separately and independently in the system of the present invention to advantageous effect.
The composition of media items and metadata will be explained in greater detail below.
A generalised media management system will now be described at a high level with reference to Figure 2 in order to illustrate various aspects of the invention. The various features shown will be described in more detail below, in relation to specific examples.
Turning to Figure 2, a metacore 200 is at the centre of the system, and comprises a metadata store 201 and a media store 202. Media intake, for example from video feeds, agencies, newsgathering teams etc. can be received via an edit matrix 206 which is controlled by a network control system 208. In order to effectively manage the incoming media, it is assigned metadata values which are stored in the metadata store. Media intake can also be received from viewing and editing services 210 and Archive service 212. The metadata values may be imported with the incoming media, may be assigned by a system operator or may be assigned default values. The associated media is then stored in the media store 202.
Users of the system can use viewing and editing services 210 to view and edit media managed by the system, and can search the system by metadata attributes to find relevant media. Once the relevant metadata describing the desired media has been found, the system can retrieve the associated media from the media store (if it exists there) for use by the user. Users can create new media items from existing essence, but with new metadata (which may be derived from existing metadata as will be explained below) to be input into the metacore.
The media store is an online store, and media held within it can be accessed and manipulated directly via devices networked to the metacore. As explained above, the practical constraints of media storage dictate that only a certain volume of media can be maintained online in this way, and as new media is constantly fed into the system, existing media must be removed. This is particularly true of the media essence, and less so of the metadata. If it is determined that the media is important and cannot simply be deleted, it must be stored offline, or archived. Both the process of selecting material to be archived, and the process of archiving it require considerable resources.
An archive service 212 is therefore linked to the metacore. The archive service is in turn linked to one or more VTRs 214. The archive service identifies media, via its metadata, to be taken from the media store and recorded to tape (offline). The archive service can also act to re-ingest into the (online) media store tape based media.
The metacore is connected to transmission servers 216. These transmission servers can accept media items which are ready to be broadcast on transmission system 218.
The system also supports web based output, and the metacore is further linked to a post processor 220 which in turn feeds a web hosting service 222.
The routing of video, audio, and communication signals between various media systems and facilities, both internal and external, can be referred to in terms of 'Bookings'. A Booking may simply specify a media feed from one location that is to be routed to another location, internal or external to the media management system. Bookings may also include recording of the media being routed. For example, a booking may be made to enable an on-air news presenter to interview someone live at a remote location. Bookings can also be communications only bookings, enabling staff from various locations to communicate via a dedicated communication link.
Bookings may be divided into Arrival bookings that are scheduled recordings, Departure bookings for media items that are to be played out from the system to another destination, and Archive bookings that represent requests for media to be moved to or from offline storage. Any tasks that require recording or playout of media can be tracked as a booking.
Bookings and resource management are described in greater detail in our co-pending application filed on 5 April 2004 and bearing attorney reference No. IK/26271 WO, to which reference is directed.
The process of routing audio video and communication signals between various elements in a media management system, and in particular the ingest of media into the system, will now be described with reference to a specific embodiment of a media management system.
In an exemplary media system according to Figure 2, all media items and media essence contained in the system are either the result of recording from the edit matrix, or from the archive service, or imported from the Editing services. A variety of recording methods are supported by the system, including methods for media sources that must be recorded live from within the news facility, from a tape source, or an Agency feed. These are managed by a centralised facility (an organisation unit) which is referred to as the
'Mediaport'. The Mediaport is also responsible for managing the subsequent accessing of recordings.
All of the media items recorded into the system are assigned metadata. The different types of metadata assigned to media items may vary according to the particular item. This metadata is assigned by the Mediaport or by Mediaport staff. The metadata may be automatically imported from an external source, assigned by the Mediaport, or automatically assigned a default value.
Metadata added automatically when a media item is created as a part of recording includes:
• Arrival Date/Time
• Item Identifier
• Start Timecode
• Recorded/Created by
The system may usefully define 3 levels of restriction for metadata fields:
• Mandatory - a value must be entered
• Recommended - A value does not have to be entered, but if a value is not entered, the system displays a recommended value
• Optional - no restriction is performed; any value entered is accepted
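By way of illustration only, the three restriction levels described above might be sketched as follows. The field names, the level assignments and the recommended-default table are illustrative assumptions, not part of the described system:

```python
# Sketch of the three metadata restriction levels: mandatory, recommended, optional.
# Field names and defaults below are illustrative assumptions only.
MANDATORY, RECOMMENDED, OPTIONAL = "mandatory", "recommended", "optional"

FIELD_LEVELS = {
    "Item Identifier": MANDATORY,
    "Story Name": RECOMMENDED,
    "Description": OPTIONAL,
}

RECOMMENDED_DEFAULTS = {"Story Name": "Crash Record"}

def validate_field(field, value):
    """Return (accepted, message) for a single metadata entry."""
    level = FIELD_LEVELS.get(field, OPTIONAL)
    if value:
        return True, "accepted"                  # any entered value is accepted
    if level == MANDATORY:
        return False, "a value must be entered"  # mandatory: entry required
    if level == RECOMMENDED:
        # no value entered: the system displays a recommended value
        return True, "recommended: %s" % RECOMMENDED_DEFAULTS.get(field, "")
    return True, "accepted"                      # optional: no restriction performed
```

A mandatory field with no value is rejected, while a recommended field with no value is accepted but prompts the user with the suggested value.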
For each data value entered or altered, the system records the following information:
• User name of person performing the change
• Valid user role performing the change
• Date and time of the change
• Metadata fields changed
• Old values and new values for each metadata field
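The audit information recorded for each metadata change, as listed above, might be collected into a single record as in the following sketch. The dictionary layout is an illustrative assumption:

```python
from datetime import datetime, timezone

def audit_metadata_change(user, role, old, new):
    """Build the record the system keeps for a metadata change: user name,
    valid user role, date/time, fields changed, and old/new values.
    The record layout is an illustrative assumption, not a specified format."""
    changed = sorted(f for f in new if new[f] != old.get(f))
    return {
        "user": user,
        "role": role,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "fields_changed": changed,
        "old_values": {f: old.get(f) for f in changed},
        "new_values": {f: new[f] for f in changed},
    }
```

Only fields whose value actually differs appear in the change lists, so an unchanged field generates no audit noise.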
Many recordings, such as news events, are unanticipated and require very rapid response by the news staff. In such instances a recording method known as Crash Recording is employed. The goal of Crash Recording is to provide a very rapid method for recording important breaking news, so the steps required to initiate the Crash Recording must be kept minimal. All required metadata fields will be filled with default values. The clip name will be simplified to a default value of <CRASH RECORD/User Name/Date and Time>, where 'user name' is derived from the user login, and 'date and time' represents the time recording is started. Media Status will be set as 'Raw'. 'Item Identifier' will be set by default using a counter algorithm, with PR (production item) or MP (non-production item) as a suffix. The start timecode for the media item shall begin at 00:00:00:00. Recorded/Created By will be derived from the user's login, and the outlet will default to the last outlet the user had selected. The default usage traffic light value is green, default usage restrictions are set to 'restricted to 24 hours without additional modification', and the story name and description are defaulted to 'Crash Record'.
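The default values described above might be generated as in the following sketch. The dictionary keys, the date format and the six-digit counter scheme are illustrative assumptions; only the default values themselves are taken from the description above:

```python
from datetime import datetime

_counter = {"n": 0}  # stand-in for the system's item-identifier counter

def crash_record_defaults(user_name, last_outlet, production=False, now=None):
    """Default metadata for a Crash Recording, per the rules described above.
    Key names and the counter format are illustrative assumptions."""
    now = now or datetime.now()
    _counter["n"] += 1
    suffix = "PR" if production else "MP"  # production vs non-production item
    return {
        "Clip Name": "CRASH RECORD/%s/%s" % (user_name, now.strftime("%Y-%m-%d %H:%M")),
        "Media Status": "Raw",
        "Item Identifier": "%06d%s" % (_counter["n"], suffix),
        "Start Timecode": "00:00:00:00",
        "Recorded/Created By": user_name,
        "Outlet": last_outlet,
        "Usage Traffic Light": "green",
        "Usage Restrictions": "restricted to 24 hours without additional modification",
        "Story Name": "Crash Record",
        "Description": "Crash Record",
    }
```

Because everything is defaulted, initiating the recording requires no data entry at all; the values can be corrected later as described below.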
It will be apparent that different applications will use different metadata values and that even in a given application, default item values may vary. All values set at the initiation of recording can, and most should, be modified later. In this way a crash recorded media item can be recorded and utilized short term with minimal effort, and the metadata details that are required for medium and long term utilization can be assigned as soon as practical.
The crash record scenario can also be used where a system user has a tape that needs to be ingested into the system. This creates a fast and simple way to get media into the Mediaport, while minimizing the time required on the recording station. All required metadata can be updated afterwards at a viewing/editing/logging station.
In addition to restricting values of metadata, the system will have restrictions on who may add or view specific metadata fields. A metadata field may have restricted access or not be visible to certain users or user groups. Thus, a
specific metadata field may be 'Mandatory' for a Media Coordinator, but 'Not Visible' for a general BBC user.
Not all items are ingested into the system via a recording process. Some media items are metadata only, and these can be ingested via a Data Transfer. Data transfers can be received via a push process from a variety of external sources. Anticipated data transfers may be represented by Mediaport bookings; such bookings should be used so that users may track data transfers, and so that collection and dissemination of related metadata can begin prior to arrival.
The metacore is intended to serve as a short-term repository of currently relevant media. As time goes on, more and more of the media items contained in the metacore will represent archived assets. When users need access to archived media items, they will request that they be recorded back into online storage. This initiates a chain of events that results in the simple recording of tape-based media from the archive into the Mediaport.
In a news application many recordings will be from agency feeds and will be recorded at regular times. The system therefore supports automatic recording. There are three types of automated recordings:
• Un-chunked - a single media item is created for the booking
• Regular Chunked - multiple media items, starting at regular intervals
• Irregular Chunked - multiple media items are created at irregular, user-specified intervals
The type of automated recording is specified when the booking is created. The complexities of automated bookings lie primarily in the automated booking creation functionality. The recording itself will simply proceed in an automated fashion, without user interaction or intervention. Mediaport staff will monitor the recordings for errors or failures as a part of their normal duties.
In certain applications the media system will be integrated into an existing or legacy system. It may be desired to use two systems cooperatively, or for one system to replace another.
An example of a media system adapted for use in conjunction with a legacy bookings system (CBIS) will now be described. CBIS, the Central Bookings and Information Service, is an example of a legacy organisation and software application that manages and coordinates resources and bookings.
The Mediaport is adapted to import bookings from CBIS. When the Mediaport imports a booking, much of the Mediaport metadata may be automatically derived from the CBIS booking information. Imported information should include:
• ClipName
• TrafficLight
• StoryName
• PictureFormat
• Media Status
• Outlet
• CopyrightHolder
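The derivation of Mediaport metadata from a CBIS booking might be sketched as a simple field mapping. The CBIS-side field names on the left are illustrative assumptions; the Mediaport fields on the right are those listed above:

```python
# CBIS source field (assumed names) -> Mediaport metadata field (as listed above)
CBIS_TO_MEDIAPORT = {
    "clip_name": "ClipName",
    "traffic_light": "TrafficLight",
    "story_name": "StoryName",
    "picture_format": "PictureFormat",
    "media_status": "Media Status",
    "outlet": "Outlet",
    "copyright_holder": "CopyrightHolder",
}

def import_cbis_booking(cbis_booking):
    """Derive Mediaport metadata from a CBIS booking record (a dict here)."""
    metadata = {dest: cbis_booking[src]
                for src, dest in CBIS_TO_MEDIAPORT.items()
                if src in cbis_booking}
    # An external booking is any booking carrying a CBIS reference number.
    metadata["External"] = "cbis_ref" in cbis_booking
    return metadata
```

The presence of a CBIS reference number is what differentiates an external booking from an internal one, as described below.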
In this example the system can differentiate between Internal and External bookings. External bookings are any bookings with a CBIS reference number, indicating that the bookings have been or will be imported to the system from CBIS. Internal bookings are not imported from CBIS and will not have a CBIS reference number.
In this application, the typical Mediaport booking scenario is that an item is automatically imported from CBIS, and then reviewed by a Mediaport Manager who assigns the booking to a Media Coordinator for recording.
In a preferred embodiment, transcoding capabilities are provided within the system to enable the system to create additional media essence instances at different resolutions. This supports multiformat editing, and makes efficient use of available storage facilities and archiving capabilities, whilst maintaining searching and viewing functionality. The system will be able to create new media essence instances when these media resolutions are needed. It is most preferable to encode and store multiple resolutions concurrently while recording. Alternatively, multiple resolution instances can be encoded from the primary media input, as an automated process, during ingest. It is also desirable to be able to re-encode media for long-term media maintenance reasons. While encoding of media is in progress, a list of items to be transcoded can be viewed by Mediaport staff to monitor progress. An example of the different resolutions used in an embodiment of the system is shown in Table 2:
Broadcast Quality: Information rate of approximately 25 Mbps. Format used is DVCAM media, BetaSP or equivalent.
Desktop Quality: Frame-accurate and timecode-accurate. Information rate of approximately 1.5 Mbps. Format used is MPEG-1 or equivalent, to support the following requirements:
• Open standard
• Widely adopted
• Frame accuracy
• Bandwidth efficient
• Sufficient quality to edit
• Able to support multiple streams
• Able to satisfy concurrent streams (i.e. split-audio capability)
Web Quality: Information rate of approximately 56 Kbps to 300 Kbps depending on user requirements and network capacity. Not provided to frame-level accuracy. Format to be RealMedia or equivalent.
Table 2
It should be noted that audio tracks will also often need to be supported. It is preferable that two audio tracks be provided for each video track.
In a preferred embodiment the media core will have the following on-line storage capabilities:
• 500 hours of broadcast quality
• 1500 hours of desktop quality (500 hours of current storage & 1000 hours for near-line media)
• 2400 hours of web quality (500 hours of current storage, 1000 hours for near-line media and 900 hours of archived media)
It is further preferable that there should be near-line storage capabilities for 1000 hours of broadcast quality media.
A particularly suitable application of the present invention is news production, and the development of news stories for broadcast. News stories are the reporting of details and background for regional and world events. News stories are very dynamic. News staff always have a large number of stories that are actively being reported, covering a range of categories. The relative importance of a story changes continuously. Depending on other current events, a single story can rise or fall in relative importance. This is reflected in the amount of broadcast coverage the story receives.
In an example of the present system in a news production application, the story is an item of metadata; an attribute of bookings, media items, and grouped stories. The story also has attributes including Story Description, Story Valid From Date/Time, Story Expiry Date/Time, Created By, Creation Date, Story Status Code, Top Story Indicator, Dominant Story Indicator, Story Group, and Outlet. The story represents a news event that is being covered. Generally speaking, the story name is a shorthand name or handle for the broadcast coverage that a news event is receiving. Bookings, media items and production media items are all assigned a story name so that they can easily be identified as relating to a specific news event. Stories can be grouped into collections of story names that are related. Story groups are a collection of stories and have a set of stories as attributes.
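The story attributes listed above might be modelled as in the following sketch. The field types and defaults are illustrative assumptions; only the attribute names come from the description above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Story:
    """Story metadata attributes as listed above.
    Types and defaults are illustrative assumptions."""
    name: str                              # the shorthand handle for the event
    description: str = ""                  # Story Description
    valid_from: Optional[str] = None       # Story Valid From Date/Time
    expiry: Optional[str] = None           # Story Expiry Date/Time
    created_by: str = ""                   # Created By
    creation_date: Optional[str] = None    # Creation Date
    status_code: str = ""                  # Story Status Code
    top_story: bool = False                # Top Story Indicator ('no' by default)
    dominant: bool = False                 # Dominant Story Indicator
    story_group: Optional[str] = None      # Story Group
    outlet: str = ""                       # Outlet
```

Bookings and media items would then carry a story name referring to an instance such as this, so that all material for one news event can be identified together.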
Story names are intended to be short-term identifiers. Media items that are to be archived should have more substantive descriptive metadata to facilitate better search results and improved utilization. Story names provide a simple way to reference media items that have short-term value, may not receive much additional descriptive metadata, and will not be archived. Stories may have a variety of production items related to them.
Bookings can exist without a story association; however, users should be encouraged to add story associations whenever possible to maximise the usefulness of the system. Similarly, media items and component media items can also exist without a story association, but again they will not be as recognizable to users. Story associations can be added and changed for both bookings and media items at any time after a media item is created.
Conversely, stories can exist without any associated bookings or media items. Stories can be created within Jupiter simply to serve as the dominant story in a story group to organize the story picklist. A complex event may be covered through a series of related stories, with no production media items ever being created for the dominant story itself.
Preferably stories remain in the system until they are manually deleted. Story deletion and house cleaning will be a Mediaport user responsibility. Dominant stories must have their dominant status removed before they can be deleted.
Preferably, all stories will have 'Valid From Date' and 'Expiry Date' metadata attributes. This allows recognition of stories that are current. From this information Mediaport users can maintain current 'picklists' containing only stories that are current. This functionality also enables users to create and identify stories that will receive coverage at some time in the future. The system will also store stories that received coverage in the past. All story metadata, including description and the 'Top Story' and 'Dominant Story' indicators, will be retained for future, current and expired stories. This facility will enable a variety of options that will be discussed further in the description of search facilities and capabilities below.
Story names play a very important role in enabling users to identify relevant bookings and media items. They can also form a valuable bridge between the system and legacy systems.
In an example where bookings are imported from a legacy system, the first characters of the legacy booking identify the story name, and this will automatically be imported into the Mediaport. The clip naming convention will enable users to recognize the same names across new and old systems.
An example of a clip naming convention is:
Story Name / Details / Arrival or Transmission Date and Time
It should be appreciated however that rules governing the clip naming convention will vary between outlets depending on equipment in use. In a preferred example, a story name must be selected in the following cases:
• When a non-legacy generated booking is created
• When a production media item is created or modified
• When media is recorded
The user interface will assist the user in selecting appropriate clip names by providing a selection of possible entries. A default value can be selected for the story name, so that even crash recording can be simply accomplished.
Journalists are encouraged to view the day's pick list of stories and use an existing story name whenever they:
• Create or modify bookings in the system
• Record media
• Otherwise create a media item
• Make a local recording without a booking
The story picklist would quickly become unmanageable without the ability to group and associate stories. As the nature of a story changes over time, and issues become more complex, related stories can be grouped. An evolving story can be associated with other related or sub-stories and new story names. A new story name may eclipse or complement the original. Associating stories creates a story hierarchy. However, the hierarchy may only have one sub level. Thus, a story name can act as parent to a group of related stories. When a group of stories are associated because they are related, one story name of the group can be selected as dominant representing the parent of the group. It is also possible to ungroup or disassociate stories as necessary. It is preferred that only one story can be the dominant story for a group.
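The grouping rules described above, a single dominant story acting as parent to a flat set of related stories, with only one sub-level, might be sketched as follows. The class and method names are illustrative assumptions:

```python
class StoryGroup:
    """Illustrative sketch of a story group: one dominant (parent) story
    and a flat set of related stories, i.e. a hierarchy of one sub-level.
    Class and method names are assumptions, not part of the described system."""

    def __init__(self, dominant):
        self.dominant = dominant
        self.members = set()

    def associate(self, story):
        """Add a related story to the group."""
        if story == self.dominant:
            raise ValueError("dominant story is already the parent of the group")
        self.members.add(story)        # flat set: groups cannot be nested

    def disassociate(self, story):
        """Ungroup a story as necessary."""
        self.members.discard(story)

    def set_dominant(self, story):
        """Only one story can be dominant; the old parent rejoins the members."""
        self.members.discard(story)
        self.members.add(self.dominant)
        self.dominant = story
```

When a new story name eclipses the original, `set_dominant` models the change of parent without allowing a second level of nesting.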
More metadata is needed for each story in addition to the story name. Each story should have a 'description' to help prevent stories that are very different from being accidentally selected or grouped. The story description is optional, and can be updated by other Jupiter users.
Table 3 shows examples of metadata attributes of a story that can be added:
Table 3
Some metadata will be specified as mandatory, some as optional. Default values can be assigned so that some of these fields can be automatically populated. Also users may be restricted from adding, modifying or even viewing some metadata.
Stories must have an 'Expiry Date'. This defines the duration of time for which the story will be valid within the system. Only current stories appear in the story picklist. Stories that have passed their 'Expiry Date' are considered expired. If an 'Expiry Date' is not assigned at story creation, a default value will be assigned. Expiring a story, by changing the 'Expiry Date' metadata, is a way to remove the story from the picklist without losing the related metadata from the system. The most common reasons for modifying a story will be to extend or expire story coverage. It may be useful to add user interface functions such as 'expire story' or 'extend 24 hours' to simplify these common tasks. Deleting a story not only removes it from the picklist, it totally removes it from the system. Before a story can be deleted, any associated media items must be associated with a different story.
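The expiry behaviour and the suggested 'expire story' / 'extend 24 hours' shortcuts described above might be sketched as follows, assuming each story carries an expiry timestamp:

```python
from datetime import datetime, timedelta

def is_current(story, now):
    """A story appears in the picklist only while it has not passed its
    'Expiry Date'. Here `story` is simply a dict with an 'expiry' datetime
    (an assumed representation)."""
    return story["expiry"] > now

def extend_24_hours(story):
    """Sketch of the 'extend 24 hours' user interface shortcut."""
    story["expiry"] = story["expiry"] + timedelta(hours=24)

def expire_story(story, now):
    """Sketch of the 'expire story' shortcut: the story leaves the picklist
    but its metadata remains in the system (nothing is deleted)."""
    story["expiry"] = now
```

Note that expiring only changes metadata; deletion, which removes the story entirely, is a separate and more destructive operation.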
Another metadata item attributed to stories is the 'Top Story Indicator'. This enables all users to easily identify the day's major news. The top story indicator is either set to 'yes' or 'no', with 'no' the default value.
Users of the system can perform various different types of searches by querying metadata stored online in the metacore, for various types of user activity. Different search facilities will be appropriate for different user groups.
Users will be able to search the system both for online media items, and also offline media items which have associated metadata in the metacore. This will preferably be items which were online in the system at some point in time. If desired, users can request that offline media items be moved online (as will be explained below in greater detail).
Users will also be able to use system search facilities to search for Stories. Story search is geared to the needs of Mediaport users and will facilitate Mediaport workflow and interoperability with other Mediaport applications and tools. Typically users will be concerned with searching/viewing stories by expiry date (so as to extend as necessary), searching/viewing stories by status (so as to check 'unchecked' stories), and searching/viewing in the current picklist order (so as to review and if necessary change story associations).
Users will additionally be able to use the system's search facilities to search for Bookings. This enables users to create and save views for all bookings, internal and external, departures and arrivals, using a simple search interface. The search spans a predefined set of metadata fields.
It is desirable that different levels of search are provided to system users. Simple search enables users to search for media (via on-line metadata) using a simple user interface similar to common Web search engines such as Google™. The user specifies a single word or phrase but does not specify the metadata fields to search across. The system searches a predefined set of metadata fields.
Intermediate search is similar to the Simple Search but has additional control over the scope and filtering of the search criteria. The user may select to search for online media or to perform an archive search, which would present
a different set of search criteria depending on the search type. For example the search for current media items might include:
• Word or phrase search
• Ability to include or exclude fields from the search
• Choice of item, component, keyframe, bookmark
• Ability to constrain the search to a particular:
o Category
o Status
o Outlet
o Date/time range
o Best media
The Intermediate Search capabilities will enable the user to search for expired, current and future stories by all story metadata.
Advanced Search enables a user to search for media by all available metadata. Media items, media segments, keyframes and bookmarks that match the search criteria will be returned. Each search may consist of one or more search terms. Each term consists of a metadata field name (e.g. Story Name), an operator (e.g. equal to) and a search value (e.g. "War"). Search terms are joined together by Boolean operators (e.g. AND), in turn forming complex search criteria capable of spanning multiple metadata fields.
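The term structure described above (field name, operator, value, joined by Boolean operators) might be evaluated as in the following sketch. The operator vocabulary and the in-memory matching are illustrative assumptions:

```python
import operator

# Operators available in a search term; this mapping is an illustrative assumption.
OPERATORS = {
    "equal to": operator.eq,
    "not equal to": operator.ne,
    "contains": lambda value, term: term in (value or ""),
}

def match_term(item, term):
    """A search term is (metadata field name, operator, search value),
    e.g. ('Story Name', 'equal to', 'War')."""
    field, op, value = term
    return OPERATORS[op](item.get(field), value)

def match_query(item, terms, joiner="AND"):
    """Terms are joined by a Boolean operator to form complex criteria
    spanning multiple metadata fields."""
    results = [match_term(item, t) for t in terms]
    return all(results) if joiner == "AND" else any(results)
```

An item is returned when the joined terms evaluate true against its metadata; in a real system the same criteria would be compiled into a database query rather than evaluated per item.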
Predefined searches can be accessed using either shortcuts or selections from a longer preset list of searches. Mediaport users will typically have specific search requirements, including searching/viewing bookings for CBIS clean-up, record/playout decisions, traffic light/copyright/usage restriction settings, and search by status. The results of these predefined searches will be geared towards the logged-in user's role, workflow and interoperability with other applications and tools.
Searches and search results lists are persistent and roving. That is to say they travel with the user from machine to machine. Users can save searches and share saved searches, allowing team members to share collections of media items.
Users will also be able to define and subscribe to future searches. The results of a future search may not yet have entered the system; the user is notified when bookings or media items that meet the user's criteria are found (enter the system). The future search may be for bookings, media items or both. The user must enter an expiration date when saving a future search, after which results are no longer returned.
As explained above, the system advantageously allows users to view media items on-line from any enabled system terminal. Using the example of a news production application, this will preferably be any PC configured to use a news production system, for example ENPS (Electronic News Production Service) sold by Associated Press Inc. The viewing and logging tools may be available integrated within ENPS or as a stand alone tool.
Media can be played out and viewed from the system at any time, even as it is being captured. Users will also be able to view the media item's essence data file as it is being written. This differs from watching the feed or package because playback is 'on demand' from the system. Whenever the user selects the item for playback, they can start play from the beginning of the clip, regardless of what time the recording began. The system provides users with the ability to use the full set of media viewing functions such as fast forward, reverse, jog, shuttle and scrub.
Viewing will preferably use desktop resolution media instances, however in certain applications it may be desirable or even essential to use instances of higher or lower resolution. It is therefore not relevant for viewing purposes whether or where a broadcast quality equivalent of the viewed media essence is also available on-line. In this way, archive media can be viewed quickly and
easily to allow a user to make decisions based on a desktop representation of that media.
Where resource constraints occur it will be necessary to treat higher quality media preferentially. It is recognized for example that the timely serving of Web Media is of a lower priority than the timely serving of Desktop media. If the network is found to be impacted under heavy load, then services such as Web Hosting will be throttled back to ensure priority for desktop media.
There are two main editing facilities provided by the media system. The first is a desktop editor. The majority of editorial work on media will be done using the desktop editor application, running on any ENPS-capable desktop PC. Whether as a tool integrated within ENPS, or as a stand-alone application, users will be able to complete editorial tasks from simple 'Tops and Tails' editing - the process of selecting the beginning and end of a single shot for broadcast - to the creation of EDLs that will be automatically conformed by the system for broadcast.
The desktop editing work will begin by searching and selecting media items in the system as described above. Selected media items are moved into the Desktop Editor application from the system by adding the items to a Desktop Edit Clip List. Each user has an individual Desktop Edit Clip List, which makes the media item essence and metadata available to the user in the Desktop Editor. Likewise, the user is able to delete media from the Desktop Edit Clip List. Deleting from the Desktop Edit Clip List does not delete the media item from the system. In addition to the Desktop Edit Clip List, users can also view the last 10 media items that they have logged or viewed.
Once a Desktop edit has been completed, the package can be shared in two ways. To enable very flexible viewing and sharing of the edit information, the system enables the viewing of the edit as an EDL, a simple tab-delimited list of events representing the edit. The EDL can also be shared with other applications or systems as a plain text document.
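A tab-delimited event list of the kind described above might look like the following sketch. The column set (event number, clip, source in/out timecodes) is an illustrative assumption about what each event row carries, not a specified EDL format:

```python
def edl_to_text(events):
    """Render an edit as a simple tab-delimited list of events.
    `events` is a sequence of (clip, source_in, source_out) tuples;
    the column layout is an illustrative assumption."""
    header = "\t".join(["Event", "Clip", "Source In", "Source Out"])
    rows = ["\t".join([str(i + 1), clip, src_in, src_out])
            for i, (clip, src_in, src_out) in enumerate(events)]
    return "\n".join([header] + rows)
```

Because the result is ordinary delimited text, it can be viewed directly or handed to other applications and systems without any special tooling.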
Depending on the native media format of the Desktop Editor application and the capabilities of the media storage subsystem, media may need to be moved to the local workstation.
Production media items that are produced using the Desktop Editor are published to the system. The system does not have any version control capabilities, so each revision of a production media item published to Jupiter will be new and unique. There will be no association between revisions other than that manually implied when choosing the media item name.
The Desktop Editor only publishes media item EDLs and, at this stage, no media essence. The EDL is automatically conformed by the system. The publish process initiates the creation of a new media item and an automatic EDL conform process, which creates a broadcast quality media instance as well as desktop and Web resolution instances. When the production media item is published, the user will be notified of the estimated time of completion. The queue list of expected conforms can also be viewed. Once the automatic conform is complete, the media item will work just as any other media item in the system. All searching/logging/viewing capabilities apply.
In a news application, a journalist often begins work on a news story, focusing on story content, producing a draft edit. This draft edit will be further refined for broadcast by an editor who addresses the more stylistic elements creating a polished production item. This process requires the decisions made by the journalist, referencing desktop resolution media, to be communicated to the Craft Editor, and media references redirected to broadcast resolution media instances within the system. The user will then use the Craft Editor application to take the draft edit and complete the refinement process.
As explained, the process of publishing from the Desktop Editor will create new media items within the system. These new media items can be used as source media for a Craft Edit. In this case, the Desktop Editor can be used to do a first-pass edit of source materials selected from longer feeds and clips, creating very usable new media items that may be very appropriate for Best Media and Archive Recommendation designation. Such items may be given a status of 'Rough Cut'. No EDL metadata is necessary for the sharing of these assets. They will be available through normal search and view capabilities.
The Craft Editor uses broadcast quality media created within the system. However it will be necessary to export media items to the Craft Editor for working. The export process will enable the Craft Editor to access media items available from the system. All the viewing / logging / searching capabilities are available to users while using the Craft Editor. The user will also be able to use the History Clip-List to simplify the moving of media items to the Craft Editor.
Projects in process on a Craft Editor are not trackable within Jupiter unless a Placeholder media item is manually created. The Placeholder can be annotated to enable news users to track the progress of the Craft Edit.
Once a news package is completed on the Craft Editor, it must be moved to the system so that it is available for review and distribution. The media item metadata and essence must be published to the metacore. The edit must first be conformed by the Craft Editor application, creating a new media item with new media essence. The system's automatic conform capabilities are available for Desktop Editor created media items. Media items cannot have a status promotion to 'Finished' until they are published to the system.
Media items published from the Desktop Editor application will automatically be conformed by the system in an asynchronous manner. Production media item EDLs are published to the system and then conformed. The queue of Production Media Items to be conformed can be viewed to monitor status. Users are notified of the expected duration of the conform process.
Production media items will be exported by the Craft Editor using the Advanced Authoring Format (AAF).
Working with a news production system and the media system, users may view online media items, identify and request that offline media items be moved online, view media items, annotate and add to metadata, modify or delete information for media items, copy annotation between media items, create component media items, add bookmarks within media, create and add simple EDLs and production media items into the system, and develop scripts and write captions, all in a very flexible, easy-to-use environment.
System application views (screens) are presented as non-overlapping, sizeable windows as selected by the user. It will provide the necessary menus to select and display views. Shortcuts will feature heavily so users can quickly and easily access the views they need for their day-to-day work - their role will dictate how each view is presented. Where necessary to aid workflow, combinations of views will be arranged suitably. The definition of the view presentation and combination will be configurable and stored as scenes.
Each view will have a required set of permissions that have to be satisfied in order for the view to be displayed. Each view in itself will have various menus/buttons etc. that are also subject to the user's permissions and will be disabled and enabled as appropriate.
Where required each of the views will support cut, paste, drag and drop between themselves, other views and ENPS.
The implementation details of an exemplary media management system providing the functionality described above will now be discussed with reference to Figure 17.
The metacore 1702 includes a client side applications group 1704, a media service 1706 including a media store, an application server 1708 and a metadata store 1710. Client applications are written in C++ and communicate using J2EE (Java 2 Platform Enterprise Edition) component level communications (JNI-RMI invocations) and J2EE messaging and queuing (JMS via Active JMS). Server applications, including system management applications, could be written in either Java or C++. Media storage, transfer and editing will typically be provided by a third party media system and associated protocol running on a gigabit Ethernet. The components of the metacore all run off a media gigabit Ethernet.
The metacore is linked to the transmissions domain 1712 by a transmission gateway 1714. The transmission gateway will communicate with the Transmission domains using the appropriate MOS protocol. The transmission gateway and domain is discussed in greater detail in our co-pending application filed on 5 April 2004 and bearing attorney reference No. IK/26522WO to which reference is directed.
Input feeds are routed through the Spur Central Apparatus Room (SCAR) matrix 1714 to the edit matrix 1716, for ingest into the metacore. The edit matrix features a filter comprising a dual redundant pair of PCs managing, filtering and auditing control requests from the system and transmission domains. Both the SCAR matrix and edit matrix are controlled by Broadcast Network Control System (BNCS) routers, which are in turn controlled by the metacore using Fabian.
The CBIS will be configured to replicate to the Metadata Core on a regular basis (typically < 1min). The Metadata Core will then update any application screens reliant on data that has changed.
Considering now client-server communication, with reference to Figure 18, system clients will be implemented as Win32 native clients 1802. As such, a mechanism must be provided to allow the clients to communicate with the J2EE application server 1804. The client-server communications will be facilitated via use of a Java-C++ bridge 1806.
The C++-Java Bridge allows C++ proxy stubs to be generated from Java classes. This allows any C++ client to behave exactly as a standard Java client.
A thin C++ wrapper will be provided (generated) around the required J2EE client APIs (Application Program Interfaces) to allow the client to access components on the application server. The C++-Java Bridge will be used to generate C++ proxy stubs for the EJB (Enterprise Java Bean) remote and home interfaces, thereby allowing the client to interact with the application server in the same way as any Java client would.
Certain client views are required to receive events from the application server (e.g. notification on booking status changes). These will be sent to the clients in the form of JMS messages via JMS Service 1808 from the application server. The C++-Java Bridge will convert the message into an event and the appropriate action can then be taken by the client application. The client may register interest with any number of event topics. This will allow the client to receive events that represent actions performed by various metacore services. The payload of the message will vary depending on the type of event fired by the system service and will include all the information required by the client to perform the required action.
As explained above, using the system editing facilities, it is possible to create new media items to be stored in the system, based on media items ingested through the Mediaport. These media items, which have been created from other media items, are known as production media items. The ingested items, from which these may be made, are known as non-production media items. A production media item can be considered to be the 'child' of one or more 'parent' media items from which it is derived.
Production media items exist in the system in their own right, and have associated with them their own metadata. While certain of this metadata will need to be entered into the system at or after the time of creation (eg. creation time and duration), other metadata fields can be, and indeed are desirably, made consistent with the parent media items from which the production item is made.
It should be noted that in certain embodiments, production media items that are in process (being edited) are not identifiable within the system. Placeholder media items can be created so that such items in process can be tracked within the system.
As explained earlier, metadata typically comprises a number of attributes or fields. In one embodiment of the invention each attribute or field associated with a media item can take varying forms.
A metadata field can comprise only a single value associated with a media item eg. duration.
A metadata field can take the form of 'components'. Figure 3 shows a schematic representation of a media item 302 with the time axis extending horizontally across the page. Component metadata has an associated timeline illustrated as 304. In other words it is information which relates to a particular time segment of a media item. The metadata value for a particular attribute for a particular media item can therefore vary in time.
In addition to the timeline components themselves, component fields may also have a default value 310, which is used to provide a value for a component field at all time instances along a media item for which no other value has been assigned. In Figure 3, component values have been assigned for time segments 306 and 308, and the remainder of the component timeline takes the default value.
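The relationship between assigned components and the default value can be sketched in code. The following is an illustrative Python sketch only (the system itself is described as a Java/C++ implementation); the class and method names are assumptions made for explanation, not the system's actual API.

```python
# Hypothetical sketch of a component metadata field: values are assigned
# to time segments, and a default value covers every instant along the
# media item for which no component has been assigned (cf. Figure 3).

class ComponentField:
    def __init__(self, default):
        self.default = default
        self.components = []  # list of (start, end, value) segments

    def assign(self, start, end, value):
        self.components.append((start, end, value))

    def value_at(self, t):
        # The most recently assigned component covering t wins;
        # otherwise the field's default value applies.
        for start, end, value in reversed(self.components):
            if start <= t < end:
                return value
        return self.default

# Mirroring Figure 3: two assigned segments, default elsewhere.
rights = ComponentField(default="amber")
rights.assign(10, 20, "green")  # cf. segment 306
rights.assign(40, 50, "red")    # cf. segment 308
```

At any instant outside the two assigned segments, value_at returns the default, corresponding to the remainder of the component timeline in Figure 3.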
Certain component fields also have a summary value 312 and optionally an override value 314. The summary value, like the default value is an overall value for the whole media item, and is derived from the component values along the media item. This can for example be performed using an arithmetic or Boolean operation on the individual component values. The summary value may optionally be replaced by a user input override value. For certain metadata fields it will be advantageous to impose restrictions on users who can override a summary value, and rules defining the manner in which the
summary value may be overridden (eg. the summary value may only be made greater, or more restrictive). If the override is removed, or if the summary value changes such that the override no longer satisfies the rules, then the summary will be shown again.
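The override rule suggested above (that a summary may only be made more restrictive, and that the summary is shown again if the override ceases to satisfy the rule) could be sketched as follows. This is an illustrative Python sketch; the severity ordering and all names are assumptions.

```python
# Hypothetical sketch of a summary value with a rule-checked override:
# the override may only make the value more restrictive, and if the
# derived summary later changes so that the override no longer
# satisfies the rule, the derived summary is shown again.

SEVERITY = {"green": 0, "amber": 1, "red": 2}  # assumed ordering

class SummarisedField:
    def __init__(self, summary):
        self.summary = summary   # derived from the component values
        self.override = None

    def set_override(self, value):
        # Rule: the override may only be made more restrictive.
        if SEVERITY[value] < SEVERITY[self.summary]:
            raise ValueError("override may only be more restrictive")
        self.override = value

    def displayed(self):
        if (self.override is not None
                and SEVERITY[self.override] >= SEVERITY[self.summary]):
            return self.override
        return self.summary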
Metadata fields may exhibit propagation properties. In a media system according to one embodiment of the present invention metadata propagation is supported both from parent to child media items and also from child to parent media items.
Inheritance propagation refers to metadata from an item being automatically associated also with one or more child items. Bi-directional propagation refers to metadata from an item being automatically associated with both parent item and child item(s).
Figure 4 shows an example of inheritance propagation of a particular metadata field from parent to child media items. A child media item 402 is created from two parent media items 404 and 406. The child item is defined by selecting a segment of item 404 between times t1 and t2, by selecting a segment of item 406 from time t3 to t4, and by cutting these segments together. This editing operation can be performed using the media system editing facilities described above.
The inherited metadata field in question is a component field. Child item components are automatically inherited from the appropriate parent metadata, ie. child metadata component 414 between T1 and T2 is derived from parent item 404 between times t1 and t2, and child metadata component 416 between T2 and T3 is derived from parent item 406 between times t3 and t4. In the case of component 416, no components were assigned to the parent media item and so the component value taken is the default value of the parent component. In component 414 however, the parent item has had components 420 and 422 assigned at particular times. Since these assigned components overlap with the segment used for creating the child item (t1 to t2), child component 414 inherits components 426 and 428 having corresponding
values at the corresponding times. It should be noted that these components are only inherited in part, since only part of the relevant media is used in the child item. Components of metadata fields with inherited propagation will continue down a chain of inheritance (in part or as a whole) in a similar fashion.
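The partial inheritance of components described above can be sketched as a mapping of the parent's components onto the child timeline. The following is an illustrative Python sketch with hypothetical names, not the system's actual implementation.

```python
# Sketch of inheritance propagation: a child is cut from the parent
# segment [src_in, src_out), and the overlapping parts of the parent's
# components are carried across, shifted onto the child's timeline.

def inherit(parent_components, src_in, src_out, dst_offset):
    """parent_components: (start, end, value) tuples on the parent
    timeline. Returns the components inherited by the child."""
    child = []
    for start, end, value in parent_components:
        lo, hi = max(start, src_in), min(end, src_out)
        if lo < hi:  # component overlaps the segment used in the edit
            child.append((lo - src_in + dst_offset,
                          hi - src_in + dst_offset,
                          value))
    return child
```

A component straddling the in-point of the edit is inherited only in part, as with components 420 and 426 in Figure 4.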
It can be seen from Figure 5 that propagation within the media system is dynamic. Figure 5 shows the same media items as in Figure 4, but at a later time a new component 502 (of a metadata field having an inheritance propagation type) has been added to parent item 406. This new component has been added to the portion of the parent which is included in the child media item, and therefore results in a corresponding new component 504 being added to the corresponding time portion of child item 402.
Figure 6 illustrates a metadata component having bi-directional propagation. Figure 6 again shows the media items of Figure 4, but at a later stage a new component 602 having bi-directional propagation is assigned to child item 402. This new component has been added to a portion of the child item which was derived from parent item 406, and therefore results in a corresponding new component 604 being added to the corresponding time portion of parent item 406.
Bi-directional propagation additionally functions in the same way as inherited propagation and therefore component 602 will propagate downwards to any media items which include any portion of child item 402 to which component 602 has been assigned (ie a 'grandchild' item for the purposes of Fig 6). In a similar fashion it should also be noted that component 604 will therefore be propagated to any other child items which include any portion of parent item 406 to which component 604 has been assigned. It will be understood that by creating a metadata field with a bi-directional propagation property, a component of that field will, when assigned to a media item, be propagated (in part or as a whole) to all other media items in the system which include any of the media portion to which that component was assigned.
In this example the concept of components has been used to carry metadata, and a component history built up of component layers has been shown to propagate throughout the system. This may have advantage in a system where more sophisticated metadata control is supported. It will be understood however that the basic functionality explained in this example, of allowing certain metadata to propagate bi-directionally to both parent and child related items, may be achieved by alternative methods with varying levels of sophistication.
The media items themselves have been described and illustrated in the above explanation in order to assist understanding. It should be understood however, that while in most cases the edit decisions which define which portions of parent items are used to create child items, and hence the time relationship between parent and child items, will be based on the essence of a media item, once the relationship has been established, metadata propagation may occur independently of media essence.
Edits can be performed on a frame accurate basis, and components can be propagated in part or as a whole.
The relationships between parent and child items within the system are managed via metadata in the metadata store of the system. This is a database which stores metadata associated with each media item as XML object models. The database supports a relational data model (without necessarily decomposing XML into its relational equivalent) allowing the metadata for each item to exist in a tree structure indicating parent-child relationships. The database can be queried at an item level to return parent and child related items.
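Such item-level queries over the parent-child tree can be sketched as follows. This is an illustrative Python sketch only; the metadata store itself is described as holding XML object models, which is not reproduced here, and all names are assumptions.

```python
# Minimal sketch of item-level queries over parent-child relationships
# of the kind held in the metadata store (hypothetical structure).

parents = {}   # item id -> list of parent item ids
children = {}  # item id -> list of child item ids

def link(parent, child):
    parents.setdefault(child, []).append(parent)
    children.setdefault(parent, []).append(child)

def derived_from(item):
    """All ancestors of an item (parents, grandparents, ...)."""
    result = []
    for p in parents.get(item, []):
        result.append(p)
        result.extend(derived_from(p))
    return result

def used_in(item):
    """All descendants of an item (children, grandchildren, ...)."""
    result = []
    for c in children.get(item, []):
        result.append(c)
        result.extend(used_in(c))
    return result

# e.g. a feed cut into two edits, one of which is used in a bulletin.
link("feed", "edit1")
link("feed", "edit2")
link("edit1", "bulletin")
```

Querying "bulletin" upwards returns its parent edit and grandparent feed; querying "feed" downwards returns everything derived from it.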
The compositional structure of the object models used is extensible and able to support the Advanced Authoring Format (AAF). As explained above however, some media items may be composed in a craft editor and be output as EDLs. The object model structure is therefore also capable of reading and writing such items as text files (CXM) via an adaptor.
By maintaining a record, via the metadata, of relationships between media items, a system user (eg performing an edit function) can navigate through the system to find related media items using 'derived from' and 'used in' query functionality. This provides the ability to:
• Navigate up through the parents to find any annotation which has not been propagated onwards
• Search for other usages of the same material
• Allow the convenient copy-and-pasting of specified annotation
As explained previously, the metadata for a media item is stored independently of the associated media essence. Media item metadata includes one or more pointers to the relevant segment or segments of essence, which may be stored in the media core or offline. Where two or more media items are related by a common media essence segment, they will each include a pointer to the same essence segment.
The metadata management will be provided by interfaces that allow users and the system to add, update and remove metadata. All of these operations will be performed via the application server. The service will also validate all metadata entered to ensure that it adheres to the metadata rules as defined by system users. The service will also provide the ability to export metadata information to and from the Jupiter system in an XML format.
Figure 10 shows the components of the metadata management service. Metadata requested by the clients will be extracted from the database and maintained as model objects in the application server 1002. The model objects will be responsible for extracting the data from the database and populating their values correctly. The model objects will also be responsible for implementing the persistence mechanisms used to store to the database.
All access from the client will be performed though workflow objects 1004. The workflow objects will use a business object 1006 to perform the relevant
metadata changes for the business method called. The business object will make use of services provided by the metadata service. This will allow it to perform any get, update or delete operations.
If the operation is a get operation then the server will be returning data. This data will be returned in the form of value objects 1008. These value objects are simple objects that encapsulate the business data and provide methods to access that data. The contents of the value object will vary depending on the current user role.
The value object factory is responsible for creating the value objects that are returned to the client. It will also contain methods that allow it to convert model objects to and from value objects. It will also make use of the user management service to obtain details about which fields the user is allowed to see in their current role. It will use this information to only populate the correct fields in the value objects created.
Updates from clients will also only contain fields that have been altered by the user. The metadata service will ensure that only the relevant fields are updated.
A method of controlling metadata propagation will now be explained using a BBC news production application as an example.
In this example, metadata attributes can be defined as one of 4 types:
a) Single value, e.g. item duration
b) Component (no propagation), e.g. picture format, best media
c) Component (inheritance propagation only), e.g. archive flag, agency flag
d) Component (inheritance and bi-directional propagation), e.g. traffic light, usage restriction, copyright holder
Type (b) component attributes therefore do not inherit values from their parent item timelines. For all sections which are not otherwise marked-up with locally
applied components, the timeline will be set with the default value. Type (c) & (d) component attributes inherit from their parent items' corresponding portion of the timeline (taking into account any components applied on the parent). Locally applied components are also taken into account. If the item has no parents (i.e. it is not a production item), then it takes a default value from a default attribute as if it were a type (b).
Subject to permissions, a production item may be broken away from its parent items, severing the relationships and the inheritance of values. In this case it will be as though the item ceased to be a production item and becomes treated as if it were a non-production original item.
Type (c) attributes are inherited down to child items only, whereas Type (d) attribute annotation is required to appear consistent for all items using the same portion of media. We use type (d) attributes where annotation is required to propagate to all occurrences, from parent to child and from child to parent, wherever there is a relationship which identifies that the same portions of media are being used.
Type (c) annotation is simple to picture, with annotation on any item being inherited into further edits made from that item. Type (d) is more technically complex. In order to achieve our design requirement of annotation propagating to all related media, it is necessary to ensure that, no matter where the annotation is added by the user, the system will add the information at the corresponding point(s) from which all items that share the same media inherit their information. We take advantage of the same inheritance described for type (c) to distribute the information, but type (d) modifies the process of adding the data in the first instance. This will be explained further in the following description of the Traffic Light attribute. The 'Traffic Light' is an attribute which can take one of three values; red, amber or green. It carries with it both elements of usage-control over the images and elements of usage-control over editorial content e.g. Embargoed news (committee reports, crime figures etc.). Green signifies less restrictive or unrestricted use and Red signifies more restrictive or prohibited use of the
associated media. Different parts of BBC News operate with differing degrees of freedom to use other organisations' pictures, which results in a traffic light which has to be interpreted with local knowledge. Where there is doubt over the suitability of material it will be Amber. Where the material is very likely to be unusable it will be Red: probability is a factor in choosing the colour.
The derivation and initial setting of an item's summary Traffic Light will depend on the Item's status. Items which are "Finished" or "Rough-Cut" are possibly for use on-air, and will be shown in their entirety, therefore the light must reflect the colour of the most restricted segment. Items with status of "Raw" are not destined for transmission so their rule can show amber even if there are segments which are red.
Figure 7 shows rules for rough cut and raw traffic light summary derivation. It can be seen that for finished and rough cut items, the summary 702 is red if any part of the item has a traffic light of red. The summary is green only when all of the item has a traffic light of green, and for all other cases the summary traffic light is amber. For Raw items, the summary 704 is green when all of the item has a traffic light of green, amber if some but not all of the item has a traffic light of green, and red (not shown) only if all of the item has a traffic light of red.
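The Figure 7 rules can be expressed compactly as a function of item status and the traffic lights along the timeline. The following is an illustrative Python sketch only; in particular, the handling of a Raw item whose segments are neither all green nor all red (treated as amber here) is an assumption, since the text does not state that case explicitly.

```python
# Sketch of the summary traffic light derivation rules of Figure 7.

def traffic_light_summary(status, lights):
    """status: item status ("Finished", "Rough-Cut" or "Raw");
    lights: the traffic light of every segment along the timeline."""
    if status in ("Finished", "Rough-Cut"):
        # Possibly for use on-air in its entirety: the summary must
        # reflect the colour of the most restricted segment.
        if "red" in lights:
            return "red"
        if all(l == "green" for l in lights):
            return "green"
        return "amber"
    # "Raw" items are not destined for transmission: red only when the
    # whole item is red; green only when the whole item is green.
    if all(l == "green" for l in lights):
        return "green"
    if all(l == "red" for l in lights):
        return "red"
    return "amber"
```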
The Summary Item Traffic Light provides an at-a-glance summary view of the usability of an item, in the context of it either being for air or for further editing. The summary does not affect the timeline view (which is described later) for production edits - this is derived from the related media, taking account of components on this item - neither does it affect edits derived from this Item (the relevant timeline value is inherited). The summary may be manually changed to be more restrictive for editorial reasons and may be returned to the original derived value.
A simple example of traffic light usage is shown in Figure 8. In Figure 8a, a feed 802, containing a desired package (and other packages) is received. The recording of the feed is, in effect, a master - it does not inherit from any
parent media. These original master recordings are required to take their default timeline value from a user's default setting 803 (in the absence of a parent). This avoids it being necessary to mark a full-length component with an amber traffic light (although this has a very similar effect for propagation purposes). Since it is a pool feed the default traffic light is set to amber.
Figure 8b shows Mediaport starting to "top and tail" the desired package within this recorded feed to produce a first edit 804. Simultaneously, a news production team are doing a quick turnaround headline edit or OOV, 806, derived from the feed.
The two edits display a derived rights timeline, in this case built from the matching periods in the parent. The parent is all amber, so the two edits appear amber (as are the item summary values 808 and 810).
In Figure 8c it can be seen that Mediaport use components to annotate characteristics of the timeline which vary in time. In this example most of the item is BBC shot pictures, but the opening 15 seconds are from CNN and have only been agreed for use by the Six O'Clock News, and once only. Because this is very restrictive, Mediaport uses a red traffic light component 812 for the opening and several green components 814 describing different scenes, all having BBC copyright.
The system derives a new item-level traffic light summary 815 for item 804 based on the mix of the individual segments of the timeline. In this example the timeline has been fully annotated with components: this need not be the case. Items may have only been given components against some portions of the timeline. Where there is no overriding component traffic light, that portion of the timeline will take on the value (or values) of the corresponding portions referred to in the parent. The value in the parent will itself have to be derived by applying the same rules. Two things are being derived: one is a time-varying view of the traffic light, the other is a single overall value for the Item traffic light summary.
Figure 8d shows the effect of annotation of item 804. The traffic light component is propagated upwards, as shown by arrow 816 and will locate and mark against the master items at the corresponding timecodes. The
appropriate parts of these traffic light components are then propagated down to News24 headline 806 as shown by arrow 818. After Mediaport have topped and tailed the package, the media essence for the master recording may be deleted to save storage space without affecting the propagation of components to headline 806.
A more complex traffic light example illustrated in Figure 9 will be briefly explained. A number of media items are shown having parent child relationships indicated by thin solid arrows as in previous figures. During or after editing, components 902 have been added to two items 904 and 906 specifying green from start to finish. The system locates the master in each instance and places a green traffic light component against the appropriate time portions so that all other items using the same media also have appropriate components added. It should be noted that although Figure 9a shows components 902 being applied directly to the master, in some implementations upwards propagation of components may be performed step by step, updating a single relationship at a time.
At a later time, referring now to Figure 9b, some of the pictures in item 908 are discovered to be private amateur footage with no access. Placing a red traffic light component against item 908 will override the green which was originally propagated from annotation at the lower level at a previous time. It is the most recently modified component which propagates throughout the system, overriding previously modified components.
While for the traffic light the most recent information is propagated, this may not be the case for all other attributes. For example, a copyright description may operate cumulatively. That is to say that each time a new component is added to an item it does not overwrite the old attribute information, but rather adds to it, since a media segment may attract a number of different types of copyright.
Another attribute closely related to the traffic light attribute is the usage restriction attribute. This is the textual reason for the traffic light eg "only for use on six o'clock news".
Usage restriction operates in a very similar fashion to the traffic light described above in that an item will by default take the default usage restriction, but component usage restrictions can also be added. Usage restrictions propagate up and down in the inheritance structure as described above.
One difference between the traffic light and the usage restriction is in the summary value. When a new item is created through the editing process from various parent items, it is possible to derive a summary Traffic Light, but it is less easy to derive a single piece of text which could adequately cover the result. "Various" has been suggested. In other words the rule for deriving the summary value simply assesses whether the value is the same for the whole item; if so that value is displayed, otherwise "various" is displayed.
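The summary rule just described for textual attributes could be sketched as follows (an illustrative Python sketch with hypothetical names):

```python
# Sketch of the usage restriction summary rule: if the restriction is
# the same for the whole item that value is displayed; otherwise the
# summary falls back to "Various".

def usage_restriction_summary(values):
    """values: the usage restriction text of each portion of the item."""
    unique = set(values)
    return unique.pop() if len(unique) == 1 else "Various"
```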
Methods of inheritance of attributes from a parent item to a child item have been described above. The same possibility has been considered for descriptive information, but on analysis has been found not to be very useful. The item-level description of a 1 hour recording will be very general. A component description of, say, 30 seconds will be more specific, but may not bear any resemblance to the 5 seconds being used in an edit. A Keyframe is a descriptive attribute attached to a single frame or a small number of frames and so the information carried should still be useful and accurate in the context of the newly edited item. Keyframes are therefore inherited from parent to child items as described.
As explained earlier, a wide variety of metadata having different types of properties can exist with media items. The system can present this data to a user in a number of ways. Taking the example of Fig 11, there is shown on an annotated timeline a media item 1102 with a default traffic light of amber and two annotated components, the first 1104 with a green traffic light, and the
second 1106 with a red traffic light. To display this information to a user a text view of the component data 1108 can be displayed.
Of more use to the user, however is to derive a timeline for the media item as shown in Figure 12. In this case, rights information is being viewed. The system derives and displays, for the whole length of the item the relevant traffic light status 1202, whether it be from an assigned component, an inherited component or a default value. This can be thought of as a flattened or 'end on' timeline with the most recent components shown on top and obscuring previous components or items behind. Also shown is a textual representation 1204 of the flattened timeline. Although the textual representation does not show as much information as the component data view 1108, by moving the cursor over a particular point on timeline 1202, full component information for that point can be displayed.
In order for the system to derive and present summary or flattened timeline information when it is required, it needs to know which annotations are the most recent for the relevant attribute or group of attributes. Again using rights as an example, modifying any attribute in the rights group of attributes on an existing component will cause it to be moved down the 'annotation timeline' for the 'Rights View' of a Media Item. The Media Item itself never moves down the annotation timeline, regardless of any modifications made to any of its attributes. Deleting a component would remove it from the annotation timeline, therefore allowing previous annotations to take precedence.
In this example it can be seen that the annotation timeline is inherently linked with the example of metadata inheritance using components as described above, since when a segment of a media item is used in another item, and metadata from a component field is to be propagated, the values taken for that segment are the annotation timeline values which would be displayed for that segment.
For example, Figure 13a shows an annotation timeline of a media item 1302 and components 1304, 1306 and 1308, along with a flattened timeline view
1310. It will be seen that since item 1302 and component 1304 have the same traffic light value, no distinction is made between them in the flattened timeline view, and they are effectively merged. In Figure 13b, component 1304 is modified to change the traffic light from green to amber. This causes it to move down the timeline to position 1312, and the timeline is modified accordingly with the traffic light value of component 1312 now obscuring or overriding the value of component 1306.
In Figure 13c, at a still later time the default traffic light value of item 1302 has been changed. It can be seen from the resulting flattened timeline rights view 1318 that this does not move the media item down the timeline, but rather the change is 'filtered through' the components chronologically further down the annotation timeline.
Finally in Figure 13d, the resulting timeline 1320 is shown if component 1312 were deleted. Here it can be seen that component 1306 now takes precedence and its value is displayed in the timeline.
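The flattened rights view and the Figure 13 behaviour can be sketched by ordering annotations on a last-modified basis. The following is an illustrative Python sketch; the (modified_at, start, end, value) representation is an assumption made for explanation.

```python
# Sketch of the flattened ('end on') annotation timeline: at each
# instant, the most recently modified annotation covering that instant
# obscures those behind it. The media item itself is modelled as an
# annotation spanning its full length with the earliest modified time,
# since the item never moves down the annotation timeline.

def flattened_value(annotations, t):
    """annotations: (modified_at, start, end, value) tuples."""
    covering = [a for a in annotations if a[1] <= t < a[2]]
    if not covering:
        return None
    return max(covering, key=lambda a: a[0])[3]  # most recent on top

timeline = [(0, 0, 100, "amber"),   # the media item's default value
            (1, 10, 40, "green"),   # an assigned component
            (2, 20, 30, "red")]     # a component modified more recently
```

Deleting the most recently modified annotation (removing the last tuple) lets the previous annotation take precedence again, as in Figure 13d.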
Views of components can apply either to single attributes or to groups of attributes. For single attributes, the system needs to know the last date/time the attribute was modified. For groups of attributes, the system needs to know the last date/time when any of the attributes in the group was modified (it does not need to know which attribute in the group it was). For example, the Rights group includes the attributes:
• Copyright
• Copyright Description
• Usage
• Usage Restriction
The timeline display as described can be embedded in third party editing systems. This will show compositional events, their type and duration. It provides not only event information from the system but also the ability to display metadata properties from the metacore. This will take the form of an area of screen real estate that the system user will control directly. This control will be self contained in that it will have its own keyboard and mouse events that will obtain property information from the system such as copyright information.
Another property which can be derived for a given media item is a usage value. This value is indicative of how many times any particular portion of a media item occurs in another, related media item. This value can then be displayed in a timeline view of the media item in a fashion similar to the rights timeline described above. An example of a usage timeline is shown in Figure 14. Portions of the media item which are not shared with any other items in the system are indicated as a light colour or clear as shown at timeline portion 1400. Portions 1402 and 1404 have a first usage value indicating that these portions are included in one other media item in the system, and are shown darker. Portions 1406 and 1408 have a second usage value indicating that these portions are included in two other items within the system. It can be seen that the usage timeline provides a usage density display with darker areas being more heavily used.
It should be noted that (for at least a certain period) any production media item will automatically have at least one instance of corresponding media for its entire duration; namely the parent or parent items it was derived from, assuming these have not been deleted. In certain cases therefore where production items are being considered, it may be desirable to assign the lowest usage value to media portions having a single occurrence of corresponding media. Alternatively 'raw' items could be excluded from the derivation of the usage value.
All media items related to a given media item can be determined since relationship information is stored in the metadata store. The usage value for a particular portion of a media item is derived by searching the metadata store to determine the number of related media items which have a pointer to the same media essence as that particular portion.
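The usage value derivation can be sketched as a count of related items whose essence pointers cover a given instant. Illustrative Python only; the (start, end) pointer representation is an assumption.

```python
# Sketch of the usage value: for a given instant of a media item, count
# how many related media items point at the same portion of essence.

def usage_at(essence_uses, t):
    """essence_uses: (start, end) essence ranges referenced by related
    media items, found by searching the metadata store."""
    return sum(1 for start, end in essence_uses if start <= t < end)

# Two related items share parts of this item's essence.
uses = [(0, 30), (20, 50)]
```

Evaluating usage_at along the timeline yields the darker-is-busier density display of Figure 14.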
Figures 15a and 15b show user displays of an exemplary media management system. Figure 15a includes a traffic light summary indicator 1502 and a timeline view 1504 for a media item being worked on. There are also a number of media viewing windows 1505. Figure 15b includes two traffic light summary indicators and also a timeline view 1508.
As mentioned previously, media item essence exists at different qualities within the system (e.g. Desktop, broadcast etc). Media Items will be ingested into the system at Broadcast Quality. When a Media Item is ingested at broadcast quality, further Media Item Instances will automatically be created. These are web quality, desktop quality and keyframe quality, and therefore, normally, four Media Item Instances will exist on the system for a Media Item. The non-broadcast quality Media Item Instances are for viewing purposes within the system only.
Archiving a Media Item means that a broadcast quality copy of the Media Item essence is made on an offline storage item (e.g. tape). All essence is then deleted from the online and offline archive stores with the exception of the web quality essence and keyframe essence. The metadata for the Media Item, and the metadata for the associated keyframes, Components and Bookmarks are kept online. All operations from the system to Offline Archive are copy actions. A move action is performed by first copying the item to the relevant store and then later by the system deleting the essence from the current online store.
By keeping both the metadata and low quality essence online, users can continue to search and view archived items in the same way as online items are searched and viewed, as described above. These items will have the archive flag set to indicate that the broadcast quality material is archived. The metadata will additionally include a tape ID to facilitate a user requesting that items on tape be brought back online.
A web hosting service caters for users on low bandwidth networks who require access to media. For example, bureau users have 56Kb or ISDN connections. Such low bandwidth availability precludes working with 1.5 Mbps desktop media. However, these users will still be able to access and update metadata, whilst also being able to view web media.
Any system user with the appropriate access privileges can recommend a media item for archiving from a terminal networked to the metacore by manually setting the archive flag. This causes the media item to be reviewed at the archive service to determine whether it should be recorded offline to tape, or deleted. An item may be deleted at this stage for a variety of reasons including resource constraints. This review is typically performed manually by a member of archive staff.
The review at the archive service will result in one or more of the following actions:
• Early Deletion
• Keep Online (indefinitely or review again at a specified date)
• Copy/Move Offline (archive)
The system also has the facility automatically to make archive recommendations based on Archive/Keep Rules. The automatic rules are configurable, adaptable and periodic. The periodic rate of the search can be configured as well.
At periodic intervals the system will check all current media items for matches with a number of archive rules. These rules define various metadata fields, and values of those fields, that the media item must match in order to be selected by the rules. Examples of metadata fields used in archive rules are:
• Outlet Name
• Media Category
• Programme Name (when media category is "Program")
• Sequence Type (when media category is "Sequence")
• Creation date/time start
• Creation date/time end
• Age (age of media item in hours, minutes, seconds)
Each rule will consist of one or more archive terms. Each archive term consists of a metadata field name, an operator, and a value. The operators allowed by the system will vary depending on the type of the metadata field (i.e. in some cases only an equivalence will be allowed). The archive terms will be linked together using Boolean operators (AND/OR). This will allow archivists to create complex archive criteria that will match certain sets of media items (provided they have been marked up correctly).
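An archive term of the kind described above can be sketched as a (field, operator, value) triple, with terms linked by a Boolean connective. The operator set and field names below are illustrative only:

```python
# Illustrative operators; the system restricts these per field type
# (in some cases only equivalence is allowed).
OPERATORS = {
    "=": lambda a, b: a == b,
    ">": lambda a, b: a > b,
    "<": lambda a, b: a < b,
}

def eval_term(item, term):
    """Evaluate one archive term against a media item's metadata."""
    field, op, value = term
    return OPERATORS[op](item.get(field), value)

def eval_rule(item, terms, connective="AND"):
    """Link the term results with AND or OR, as the archivist specified."""
    results = [eval_term(item, t) for t in terms]
    return all(results) if connective == "AND" else any(results)

pkg = {"Category": "PACKAGE", "Status": "FINISHED", "AgeHours": 12}
terms = [("Category", "=", "PACKAGE"),
         ("Status", "=", "FINISHED"),
         ("AgeHours", ">", 6)]

print(eval_rule(pkg, terms))  # True: all three terms match
```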
Automatic archive rules (and other services requiring rules matching) are supported by a rules matching service as illustrated in Figure 19. The rules matching service will provide the ability to identify business objects within the system whose metadata matches certain criteria. The criteria for matching will be complex and may involve interaction with other system business objects. The outcome of a rule match will also vary depending on the rule being evaluated.
Due to the potential complexity of matching rules it is proposed that a scripting environment be provided to allow users to create complex rules. It is assumed that only technically able and responsible users will be able to create rules. The service will provide the ability for rules to be defined using a defined syntax. For some metadata fields the comparison value may consist of wildcard characters.
The system will also allow system-wide pre-defined rules to be implemented to enable users with lesser privileges to make use of the rule matching service.
Referring to Figure 19, on start-up the RuleFactory 1902 will obtain all rules that have previously been persisted to the rule store 1904. Note that the rule factory will be able to read rules from file and from a defined JDBC connection. Once extracted from the persistent store, each rule is maintained within one of a number of RuleSets 1906 that identifies it as pertaining to a type of rule, e.g. an archive rule. This allows the RuleEngine to identify all rules required to perform evaluation on a specific group of rules.
Rules are evaluated by a call to the rules evaluation engine 1910. This call passes a collection of business objects 1912 to be evaluated and the name of a RuleSet containing the rules to be used.
The rule engine is able to perform the action defined by the rule immediately or defer it to a later date/time. The decision to defer is made through the use of a priority defined by the rule when it is created. If the action is deferred until a later date/time a match object is constructed and placed in a match queue 1916 until processed by the rule action processor 1914. This processor is run as a low priority thread and will only action the rules when system resources allow.
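The immediate-versus-deferred handling of rule actions might be sketched as follows; the priority threshold and queue representation are assumptions for illustration, not the system's actual implementation:

```python
from collections import deque

match_queue = deque()  # match objects awaiting the rule action processor
IMMEDIATE = 1          # rules at this priority are actioned at once

def handle_match(rule, business_object):
    """Act immediately for high-priority rules; otherwise queue a match
    object for deferred processing."""
    if rule["priority"] == IMMEDIATE:
        rule["action"](business_object)
    else:
        match_queue.append((rule, business_object))

def process_deferred():
    """Run by the low-priority rule action processor when resources allow."""
    while match_queue:
        rule, obj = match_queue.popleft()
        rule["action"](obj)

actioned = []
rule_hi = {"priority": 1, "action": actioned.append}
rule_lo = {"priority": 2, "action": actioned.append}
handle_match(rule_hi, "item-a")   # actioned immediately
handle_match(rule_lo, "item-b")   # deferred to the match queue
process_deferred()
```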
Each rule contains:
• Name - a unique name used to identify the rule.
• Priority - used by the rule engine to decide when an action should be taken i.e. now or when resources allow.
• Active - A simple switch used to turn the rule on and off.
• Condition script - This script is used to declare how matching is to be performed and returns true if a match occurs.
• Action script - This script is used to perform any operations that are required as a result of a match. This script may simply pass handling of the match over to another system service for processing.
• Group action script (optional) - This script is used if the rule is set up to allow group matches. See below for details on grouping of matches.
• Item object type - This is the business object type that will be exposed to the script in order to evaluate the match.
• Match object type (optional) - This is the object that may be used if the script passes on handling of the match to a pre-defined component.
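The rule fields listed above might be gathered into a record as sketched below; the types and defaults are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str                   # unique name identifying the rule
    priority: int               # act now, or when resources allow
    active: bool                # simple on/off switch
    condition_script: Callable  # returns True if a match occurs
    action_script: Callable     # operations performed on a match
    group_action_script: Optional[Callable] = None  # only for group matches
    item_object_type: str = "MediaItem"      # object exposed to the script
    match_object_type: Optional[str] = None  # for pre-defined match handling

r = Rule(name="archive-recommend",
         priority=2,
         active=True,
         condition_script=lambda item: item.get("Status") == "FINISHED",
         action_script=lambda item: item.update(decision="recommend"))
```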
The rules can be set up to perform a number of different tasks on the media item when a match occurs. For instance, when a rule matches a media item the system may update the media item archive decision code to "recommend for archive". Alternatively, a rule may be set up to automatically create a new departure booking and assign the matching media item(s) to it. For instance, this will allow rules to be set up that collect all media items that were used in the one o'clock news and automatically create an archive departure booking to record them to tape. When rules match a media item the system will update the policy match code to indicate that the media item has been matched by a rule. This is required as some rules may wish to filter out media items that have already been matched by a rule. This also implies that the order in which rules are checked is very important, especially where media items are likely to be matched by more than one rule.
The automated rules will therefore result in one or more of the following recommendations:
• Early Deletion
• Manual Review
• Keep Online (indefinitely or review again at a specified date)
• Copy/Move Offline
For example, an archive rule might be defined as:
Outlet=TEN
Category=PACKAGE
Status=FINISHED
Age=12 hrs
if NOT already held off-line
If true, outcome would be:
SET Archive flag when 24 hrs old, Copy OFFLINE and KEEP ONLINE FOR 7 DAYS Then DELETE ONLINE ESSENCE.
This would have the effect of extending the on-line life of a 12-hr old finished Ten O'clock news package from 48 hours (default) to 7 days and putting it straight away into the queue for Offline copying. It would also be flagged "Archive" when it reached 24 hours. (The condition 'NOT already held off-line' prevents the accidental re-archiving of reingested archive items).
The following example would automatically mark all day-old raw recordings more than 5 minutes long for review by archivists before deletion:
Category=RECORDING Status=RAW Age = 24 hrs, if NOT already held off-line, Duration>5'00".
If true the outcome would be
MARK FOR MANUAL REVIEW
The following example would delete a chunk of a News 24 off-air automated recording called "News 24 1430-1500 [Date]" earlier than the default 48 hours, if desired:
Category=ROT Status=RAW Age = 24 hrs
CLIPNAME=News 24 1430-1500 $date
If true the outcome would be:
DELETE NOW
The following example would ensure that anything created as a Presentation category item (trails, stings etc.) is immediately marked to be held in the system for 4 weeks and then comes up for review again (at which time it could be marked to keep for another 4 weeks):
Category=PRESENTATION
If true the outcome would be
DO NOT SET Archive flag
KEEP ONLINE FOR 4 WEEKS Then REVIEW
Via a user interface a user can view, modify, add, delete and review archive rules. The user will be able to prioritise how the list is viewed. Media Items will be viewed in a list prioritised by 'archive decision'. A subscription option will display a list of all automatic archive recommendations the user is subscribed to. Storage Items will also be displayed in a list with a right-click menu for options such as Add, Maintain, Details and Associate storage with media item. The interface will also allow users to adjust the priority or order in which the rules are exercised against the media items.
The system will search all available bookings, media items, components, and keyframes for content that may be appropriate for archiving.
The actual rule evaluation will be executed either as a one off operation or as a set of 'batched' rules.
The archive recommendations list can contain recommendations for media items previously archived offline. In the automated case, the system can
eliminate previously archived media items from being entered into the list. In the manual case, the user is able to make an archive recommendation for media items already archived but the system will warn the user when they attempt to do so.
When Media Items (either production or recorded) first come into existence on the system they are automatically assigned a deletion date, or PlannedDeletionTime metadata attribute. This is the time at which that media item (essence and metadata) will, in the absence of any intervention, be deleted from the media core. This attribute will have been assigned a default value, which for news applications might be 48 hours, or may have a value manually assigned when the media item was input to the metacore. The user-defined lifetime is a global setting, which applies to all Media Items, irrespective of their status, category, etc.
The system will attempt to remove all media items for which the Planned Deletion Date/Time has expired. When the deletion date is reached, therefore, the system first checks whether the media item has been previously archived. If it has been archived, only the broadcast and desktop qualities are deleted. This leaves the web quality and keyframe qualities online to allow a user to perform search and viewing functions. If it has not been previously archived, the system will delete all essence and metadata for the Media Item (this includes component metadata, bookmark metadata, keyframe essence and metadata). The system will not, however, remove items that are currently on a list of items recommended for archive awaiting review, or which form part of an archive departure booking awaiting copying to tape (archive queue), even if their Planned Deletion Date/Time has passed. This ensures that media items that have been marked for archive are not deleted by the system until the playout to offline archive has been completed successfully.
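The deletion sweep described above might be sketched as follows; the field names and essence layout are illustrative, not the actual data model:

```python
def sweep_item(item, now, in_archive_queue):
    """Apply the planned-deletion check to one media item."""
    if now < item["planned_deletion_time"] or in_archive_queue:
        return  # not yet due, or awaiting archive review / copy to tape
    if item["archived"]:
        # Previously archived: delete only broadcast and desktop essence,
        # leaving web and keyframe qualities online for search and viewing.
        for quality in ("broadcast", "desktop"):
            item["essence"].pop(quality, None)
    else:
        # Never archived: delete all essence and metadata (including
        # component, bookmark and keyframe metadata).
        item["essence"].clear()
        item["metadata"].clear()

it = {"planned_deletion_time": 10, "archived": True,
      "essence": {"broadcast": 1, "desktop": 1, "web": 1, "keyframe": 1},
      "metadata": {"title": "pkg"}}
sweep_item(it, now=20, in_archive_queue=False)
```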
There are therefore essentially two ways to prevent the deletion of a Media Item: archiving it (recommending an item for archive may extend its lifetime within the system but does not necessarily guarantee that the item is archived) or extending its deletion date/time.
The system will prevent a deletion from occurring if the Media Item is being used in an unconformed Production Media Item or if there is a planned departure booking for the Media Item. If the Media Item is being used in a conformed Production Media Item, the system will allow deletion of all the Media Item's essence but will prevent deletion of the Media Item's metadata; this metadata will only be available by reference from the Production Media item - i.e. it will never appear in search results.
Where there is a conflict between rules or actions which are amending the deletion date, a manual amendment takes precedence over an automated amendment, and where more than one automated amendment takes place to a media item, the most recent amendment takes precedence.
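This precedence can be sketched as follows, assuming each amendment is recorded as a (timestamp, source, new date/time) tuple; the representation is illustrative only:

```python
def effective_deletion_time(amendments):
    """Resolve conflicting deletion-date amendments: any manual
    amendment beats automated ones; otherwise the most recent wins.

    amendments: list of (timestamp, source, new_time) tuples,
    where source is 'manual' or 'auto'.
    """
    manual = [a for a in amendments if a[1] == "manual"]
    pool = manual or amendments
    return max(pool, key=lambda a: a[0])[2] if pool else None

history = [(1, "auto", "t1"), (2, "manual", "t2"), (3, "auto", "t3")]
print(effective_deletion_time(history))  # 't2': manual beats later auto
```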
The process of archive recommendation and deletion is illustrated in Figure 16. There are four basic possible archive states for online media items (which could be a recently created ('current') item or a reingested archive item already held offline).
The first type 1602, is for those items which are not recommended for archive either manually or as the result of an automated rule. These are deleted when their deletion date is reached (subject to the item already being held offline in which case only the essence is deleted as explained above) as indicated at 1610.
The second type 1604, consists of items which do not satisfy an automated rule, but which have been manually recommended for archive. These are passed for archive review at 1612. This results in one of three actions. The item may be deleted as shown at 1610. The item may alternatively be kept online until a later date as shown at 1614 (which may involve changing the planned deletion date) for further review or for deletion. If deletion is selected after the selected date, but a new recommendation is made in the meantime, then the item is again reviewed. Lastly, the item may be archived as shown at 1616. It should be noted that actions 1614 and 1616
may both take place for a particular item. If the item is to be archived and not also maintained online, the online essence is deleted at the planned deletion time (excluding metadata, web resolution and bookmarks).
The third type 1606, comprises items which have not been manually recommended for archive, but which satisfy an archive rule. These are treated according to the rule outcome as shown at 1618. These outcomes are early deletion as shown at 1620 (subject to checking that item is not held offline), or manual review 1612, keep online 1614, and/or archive 1616 as shown.
The last type of item 1608 is where a manual recommendation has been made and an archive rule has been satisfied. This is passed for review at 1622, where it is decided whether to proceed with the archive rule outcome or amend it according to the journalist's recommendation.

All media is stored using a storage item. A storage item may be a videotape, a data format tape, or a logical device like a partition on a hard drive or an optical WORM drive. The system keeps track of where media items are by tracking which storage item they are stored on. When a videotape of media is recorded for the archive, a new storage item must be created and selected for the recording. This creates a media item instance on the new storage device. Users can view the details or attributes of a storage item such as:
• Storage Item ID
• Storage Item Name
• Physical Location
• Storage Type (Tape, Hard Disk)
• Contents (all media items in the storage item)
• Archive Tape ID (the Archive catalog number)
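These attributes might be represented as a record along the following lines; the field types are assumptions for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StorageItem:
    storage_item_id: str
    name: str
    physical_location: str
    storage_type: str                   # e.g. "Tape" or "Hard Disk"
    contents: List[str] = field(default_factory=list)  # media item IDs held
    archive_tape_id: Optional[str] = None  # the Archive catalog number

s = StorageItem("S-0042", "News archive tape 42", "Vault A", "Tape")
s.contents.append("MI-1001")  # a media item instance recorded to this tape
```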
Storage items are modified whenever they are moved, more media is added to the storage item, or the name, ID or tape ID must be changed. If practical, a new storage item should be created if a tape storage item is duplicated, creating duplicate media item instances of everything on the tape. Storage items can also be deleted.
Requests for items from the offline archive are handled using the booking processes. A user will submit a request from archive when they require access to archived media items. Archivists will locate the relevant tape(s) and insert them into a system-enabled VTR. The user will then enter the tape ID(s) and the VTR location into the system.
They will then be able to pick the relevant media items located on the tape(s) and begin the recording. The system will automatically cue the tape to the correct timecode and begin ingest of the media items. This ingest will be performed using record tools. The system will prompt the user when/if other tapes are required.
The system also allows 'legacy' archive tapes to be ingested into the system, that is, archive tapes which are not formatted according to the present system. In this case, the tape ID will not be a recognized system archive tape ID. The system will create a new media item with default values (similar to crash recording). This may be exactly the same as crash recording except that the archivist is allowed to manually set the archive flag. The archivist will be responsible for cueing the tape to the correct location and beginning the recording. Once recorded, the archivist can then use the standard modify media item tools to update any metadata from the existing archive systems. If multiple media items are required from a single tape then the archivist must perform separate ingests. The system will continue to track all details about this tape in the same way as for system archive tapes.
It will be understood that the present invention has been described above purely by way of example, and modification of detail can be made within the scope of the invention.
Each feature disclosed in the description, and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination.