
GB2569179A - Method for editing digital image sequences - Google Patents

Method for editing digital image sequences

Info

Publication number
GB2569179A
Authority
GB
United Kingdom
Prior art keywords
digital image
image sequence
digital
layer
media element
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1720539.4A
Other versions
GB201720539D0 (en)
Inventor
A'court Christopher Osman John
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
John Acourt Christopher Osman
Original Assignee
John Acourt Christopher Osman
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by John Acourt Christopher Osman filed Critical John Acourt Christopher Osman
Priority to GB1720539.4A
Publication of GB201720539D0
Publication of GB2569179A
Legal status: Withdrawn (current)

Classifications

    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02 - Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 - Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/01 - Social networking

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Editing a digital image sequence 28 comprises the steps of selecting a digital image sequence 34, 35, 36 to be accommodated by a digital image layer 35 having a digital image layer perimeter, a start time and an end time. An interaction layer 40 separate from the digital image layer is generated and has an interaction layer perimeter. The interaction layer which can be overlaid on the image sequence is edited comprising selecting a digital media element 30, selecting a digital media element location inside the interaction layer perimeter, selecting a digital media element start time, selecting a digital media element end time, publishing the image sequence comprising the edited interaction layer and editing the interaction layer of the published image sequence. The invention aims to facilitate the incorporation of video editing capabilities into, for instance, social media platforms and to provide social video/image editing as a collaborative or competitive experience. Also live zooming and temporal navigation of the digital image sequence using an invisible user interface may be used.

Description

At least one drawing originally filed was informal and the print reproduced here is taken from a later filed formal copy.
Application No. GB1720539.4
RTM
Date : 15 May 2018
Intellectual Property Office
The following terms are registered trade marks and should be read as such wherever they occur in this document:
INSTAGRAM
FACEBOOK
TWITTER
EMOJI
Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
Method for editing digital image sequences
Field of the Invention
The present invention relates to video editing and viewing systems and methods for video editing and viewing, in particular to video editing systems and methods for enabling video editing as a social media platform experience.
Background to the Invention
The ‘video experience’ has remained strictly passive since its inception, and despite technological innovations and new markets for video such as the internet, the smartphone and social media, video remains a predominantly passive experience.
The social media phenomenon has driven a great number of technological advances over recent decades, resulting in a wide range of features being available to facilitate social interaction in a variety of ways. Current social media platforms permit users to interact socially through the communication of text-based comments, sound recordings, images and videos.
While these forms of communication generally suffice, currently available social media platforms lack the capability to permit interactive viewing and social editing of media uploaded by a user.
One of the challenges facing video editing for use in social media platforms is to allow users to contribute without having any prior technical knowledge or skills in editing video content.
Aside from social media applications, interactive and social video editing has additional applicability to, for example, creative social imagery applications and platforms; professional training and coaching (sports, scientific, medical, military, etc); performance analysis and feedback, education and learning experiences (public and professional online lessons, training and workshops); interactive simulations and tours (scientific & technological, historical sites, cultural experiences like art gallery and museum exhibitions); interactive product and brand marketing (product tours, brand lifestyle experiences, property market, holiday market, etc); interactive visual evidencing and audits (footage of crime scenes and criminal acts for police and legal systems, housing rental market, vehicle rental market, etc); real time in-media socialising, collaborating, sharing and commenting (for social media platforms); and interactive moment highlights (sports, news events etc for social media platforms like Twitter™ and Facebook™).
It is therefore desirable to provide a system and method for enabling integration of video editing into, for instance, a social media platform experience, which may lower the barrier to entry to video editing and simplify the integration of such functionality into, for instance, a platform for social video editing.
Summary of the invention
In accordance with a first aspect of the present invention there is provided a method for editing a digital image sequence, the method comprising the steps of:
a) selecting a digital image sequence to be accommodated by a digital image layer, such that the digital image layer comprises said digital image sequence, the digital image layer having a digital image layer perimeter, the digital image sequence having a digital image sequence start time and a digital image sequence end time;
b) generating an interaction layer separate from the digital image layer, the interaction layer having an interaction layer perimeter;
c) editing the interaction layer, the editing comprising the steps of:
selecting a digital media element;
selecting a digital media element location inside the interaction layer perimeter;
selecting a digital media element start time;
selecting a digital media element end time;
d) publishing an image sequence, the published image sequence comprising the edited interaction layer.
The present invention aims to make video/image editing a social experience by providing innovative and easy to use viewing and editing techniques and social features, to provide a simple and streamlined social video/image editing experience and platform. As such the method further aims to provide video/image editing as a collaborative and/or a competitive experience. The method in accordance with the first aspect of the present invention preferably provides a user with the ability to publish an image sequence, which preferably takes the form of a video, having additional digital media elements overlaid and present within an interaction layer. The interaction layer preferably comprises an interactive user metadata layer defining user-specific and/or interaction layer-specific metadata, such as descriptive metadata, structural metadata and/or administrative metadata. This preferably provides a temporal scene augmentation feature, wherein the user is preferably provided with the ability to add digital media elements, which may be external digital assets such as, for example, images, animations, and digital 3-D objects, into the interaction layer, at a desired temporal position within the image sequence. The user preferably selects a digital image sequence to be accommodated by the digital image layer, and further selects a digital media element to be accommodated by the interaction layer, which is separate to, and preferably temporally and spatially synchronised to, the digital image layer. As such, the user is preferably enabled to provide input to the interaction layer in order to select the location of the digital media element, preferably within the interaction layer perimeter, as well as the start time and end time of the digital media element relative to the start time and end time of the digital image sequence. Preferably such digital media elements or assets can be selected from a library located on a non-transitory memory, and added at a specific time code within, for example, a video, or at a certain time within an image sequence. Preferably it is also then possible to configure the digital media element or asset to be removed at a specific time.
The “adding” and “removing” of digital media elements or assets is preferably determined by the digital media element start time and the digital media element end time respectively. Preferably the start time of the digital media element is at or after the start time of the digital image sequence, and the end time of the digital media element is preferably at or before the end time of the digital image sequence. In such a way, digital media elements may be placed within the interaction layer perimeter, the interaction layer preferably appearing as overlaid over the digital image layer, such that said digital media elements may be experienced at times during the digital image sequence, said times being selected by the user. Preferably, the digital media elements or assets may have an “enter” and/or “exit” effect, which may include, for example, an animation or sound. The “enter” and/or “exit” effect may preferably occur for a period following or preceding the digital media element start time or the digital media element end time. More preferably the “enter” and/or “exit” effects are defined by digital media element data associated with said digital media elements or assets.
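By way of illustration only (the method does not prescribe any particular data model or programming language), a minimal TypeScript sketch of how an interaction layer might gate the visibility of its digital media elements against the digital image sequence timeline; all names and the shape of the structure are assumptions:

```typescript
// Hypothetical sketch; field names and structure are illustrative, not part of the claimed method.
interface DigitalMediaElement {
  id: string;         // unique identifier for the element
  startTime: number;  // seconds, at or after the digital image sequence start time
  endTime: number;    // seconds, at or before the digital image sequence end time
  x: number;          // location inside the interaction layer perimeter
  y: number;
}

// Constrain an element's window to the digital image sequence bounds, as described above.
function clampToSequence(el: DigitalMediaElement, seqStart: number, seqEnd: number): DigitalMediaElement {
  return {
    ...el,
    startTime: Math.max(el.startTime, seqStart),
    endTime: Math.min(el.endTime, seqEnd),
  };
}

// Decide which elements of an interaction layer are "added" (visible) at playback time t.
function visibleAt(layer: DigitalMediaElement[], t: number): DigitalMediaElement[] {
  return layer.filter(el => t >= el.startTime && t <= el.endTime);
}
```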
In the method of the present invention, the placing of a digital media element on the interaction layer, within the interaction layer perimeter, preferably does not alter the digital image sequence of the digital image layer. In such preferable embodiments, the same digital image sequence may be used multiple times, preferably with multiple interaction layers comprising different digital media elements. Preferably the publishing of the image sequence is by way of a social media platform. As such the published image sequence can preferably be shared with other users.
The method further comprises the step of:
e) editing the interaction layer of the published image sequence.
In accordance with the first aspect, the interaction layer of a published image sequence may be further accessed and edited by a user. Preferably the further edited interaction layer may be comprised within a published image sequence. In such embodiments, a published sequence may be further accessed and edited by a second user, wherein the editing of the published sequence comprises step c) of the method according to the first aspect of the present invention such that only the interaction layer of the published sequence is edited by the second user.
Preferably therefore the method provides for multiple users to add synchronised transparent interaction layers over the frames of a video/digital image sequence or a digital still image to augment the underlying imagery in the image, frame or across an image sequence by placing interactive and/or non-interactive digital media elements (which may, for example include additional imagery, film, audio, graphics, animations and effects) onto the interaction layer.
The method preferably further comprises the step of:
f) viewing the published image sequence.
A published image sequence may preferably be viewed by a user, which may be a user with whom the published image sequence was shared. Preferably the viewing of the published image sequence is performed using a display screen comprised in one selected from the group: a mobile device; a virtual reality device; an augmented reality device; a mixed reality device; a hybrid reality device.
Preferably the viewing of the published image sequence may be accompanied by experiencing the published image sequence. Experiencing the published image sequence in the context of the present invention refers to the perceiving of the published image sequence and may optionally include the provision of input instructions. Experiencing the published image sequence preferably occurs by way of the interaction layer of the published image sequence. Experiencing the interaction layer of the published image sequence may preferably involve viewing and perceiving the content of the interaction layer and providing input instructions on or local to a digital media element located on the interaction layer of the published image sequence. Experiencing the interaction layer of the published image sequence may preferably involve locating a desired time point within the published image sequence, or activating a function of an interactive digital media element located on the interaction layer of the published image sequence. In such a way, experiencing the interaction layer of the published image sequence while viewing the published image sequence preferably provides a different user experience to editing the interaction layer as set out in steps c) and preferably e) of the method of the first aspect.
Input instructions while experiencing the published image sequence may preferably come by way of touching a touch sensitive screen or by providing input instructions to another device accepting input such as a peripheral accessory; a motion detection device; a virtual reality device; an augmented reality device; a hybrid reality device; a mixed reality device; and a sensor.
The sensor can preferably be hardware attached to the user’s body, hardware held by the user, or hardware in the physical space surrounding the user. Detected actions can preferably be registered either through a physical interaction with the hardware controller (for example a horizontal swipe motion on a touch-sensitive surface, or physically pressing a button on a hardware device) or via the hardware detecting physical movement and the software interpreting it as an action (for example a horizontal swipe motion of an arm/hand/finger in 3D space, or a tap gesture in 3D space).
Preferably the published image sequence further comprises the digital image layer, the digital image layer comprising the digital image sequence. More preferably the published image sequence comprises the interaction layer overlaid on top of the digital image layer. More preferably the published sequence comprises a plurality of interaction layers each overlaid over one another and over the digital image layer. More preferably the plurality of interaction layers may be sorted and/or ordered as desired by a user.
In accordance with preferable embodiments of the present invention, the published image sequence comprises the interaction layer overlaid on top of the digital image layer. The interaction layer of the published image sequence preferably comprises the digital media element and the digital image layer of the published image sequence preferably comprises the digital image sequence. As such, a user may preferably view the digital image sequence of the digital image layer, wherein the digital media element of the interaction layer appears over the digital image sequence for the selected time, duration and location as defined by the digital media element start time, end time and location. In most preferable embodiments, the interaction layer of the published image sequence may comprise a plurality of digital media elements.
Preferably the interaction layer is synchronised to the digital image layer.
The frames of the digital image sequence comprised within the digital image layer are preferably synchronised with the interaction layer. The interaction layer, which may be considered as an interactive user metadata layer, preferably comprises digital media elements, which may themselves be interactive, and are arranged to be present at user specified times and for specified durations. Multiple interaction layers (user metadata layers) can preferably be synchronised to the same digital image sequence by multiple users. An interaction layer (user metadata layer) can preferably be synchronised to a digital image sequence multiple times by multiple users on a digital social platform. Users can preferably view other users’ interaction layers (user metadata layers) synchronised to, and overlaid over, the original digital image sequence, one at a time.
Preferably the digital image layer is protected from editing by a user.
In accordance with most preferable embodiments of the present invention, the digital image layer may not be edited by a user, such that only the interaction layer may be edited by a user. In such a way, the digital image sequence comprised within the digital image layer may be used multiple times by multiple users, each able to edit their own interaction layer and publish an image sequence comprising said interaction layer to accompany the digital image sequence.
Preferably the digital image sequence comprises one selected from the following: a single digital image; a plurality of digital images; digital video.
Preferably the digital image sequence comprises a single digital image. In other preferable embodiments, the digital image sequence comprises a plurality of digital images. In further preferable embodiments, the digital image sequence comprises a digital video. In such embodiments, the digital media element and the digital image sequence may be experienced by a user simultaneously.
Preferably the digital media element start time is equal to, or after the digital image sequence start time. Preferably the digital media element end time is equal to, or before the digital image sequence end time. Preferably the digital media element location is inside the interaction layer perimeter.
In preferable embodiments, a digital media element within an interaction layer of a published image sequence may be viewed at the same time as a digital image sequence comprised within the digital image layer. As such, the digital media element start time, digital media element end time, the digital media element duration and the digital media element location are preferably confined within the temporal and spatial limits determined by the digital image sequence comprised within the digital image layer.
Preferably the digital media element comprises at least one selected from the range: a digital image; a digital image sequence; a digital video; a digital audio sequence; a digital text object; a graphic object; a sprite; a vector; an icon; a digital shortcut; a hyperlink.
Preferably the interaction layer perimeter is of equal size or smaller than the digital image layer perimeter.
Preferably the interaction layer of the published image sequence is interactive. More preferably the digital media element comprised within the interaction layer is interactive. In the context of the present invention, the term interactive refers to the function of being responsive to input. In accordance with preferable embodiments of the present invention, the interaction layer and the digital media element are responsive to input. Preferably the input is provided by an input means, which preferably includes a touch-sensitive screen arranged to output the digital image sequence, the digital image layer, the interaction layer, the digital media element and the published image sequence of the present invention to a user.
In accordance with preferable embodiments, when a user is viewing the published image sequence, the user may provide input to interact with a digital media element while viewing the digital image sequence.
Preferably the digital media element comprises a unique identifier.
Preferably the digital media element comprises data characteristic of at least one selected from the range: an adjustable start time; an adjustable end time; an adjustable location.
Preferably the image sequence is published on a social media platform.
Preferably the method forms part of a virtual reality, augmented reality, mixed reality, or hybrid reality experience. Preferably the method of the first aspect is used in one application selected from the following: creative social imagery applications and platforms; professional training and coaching (sports, scientific, medical, military, etc); performance analysis and feedback, education and learning experiences (public and professional online lessons, training and workshops); interactive simulations and tours (scientific & technological, historical sites, cultural experiences like art gallery and museum exhibitions); interactive product and brand marketing (product tours, brand lifestyle experiences, property market, holiday market, etc); interactive visual evidencing and audits (footage of crime scenes and criminal acts for police and legal systems, housing rental market, vehicle rental market, etc); real time in-media socialising, collaborating, sharing and commenting (for social media platforms); and interactive moment highlights (sports, news events etc for social media platforms like Twitter™ and Facebook™).
Preferably editing the interaction layer is performed using one selected from the group: a touch-sensitive screen; at least one motion-sensitive controller.
In accordance with a second aspect of the present invention, there is provided a method of temporal navigation of a digital image sequence having a length defining a timeline, the method comprising the steps of:
a) receiving an input from a user, the input with respect to a digital viewport defining a closed border of the digital image sequence; wherein the input from a user is effected via an interface that is invisible to the user;
b) determining a current time on the timeline of the digital image sequence;
c) determining start coordinates of the input within the viewport;
d) determining end coordinates of the input within the viewport;
e) using the start coordinates and the end coordinates and end time to determine a distance vector;
f) using the distance vector and the length of the digital image sequence to determine a desired time on the timeline of the digital image sequence; and
g) navigating the digital image sequence to the desired time on the timeline of the digital image sequence.
Preferably the receipt of an input in step a) does not occur via interaction with any visible user interface elements. Preferably the method according to the second aspect of the present invention permits the temporal navigation of a digital image sequence without the use of visible user interface elements and therefore provides for an uninterrupted and uncluttered temporal navigation experience. The lack of a visible user interface preferably also provides the user with the ability to much more easily and accurately navigate a digital image sequence than more traditional video navigation user interfaces, which may include a visible timeline bar.
Preferably, in step c) a start time of the input is determined. More preferably in step d) an end time of the input is determined. Preferably the start time and the end time define the duration of the input. More preferably, in step e), the start time and end time of the input are used in combination with the start coordinates and the end coordinates to determine a velocity of the input. Most preferably, in step f), the velocity is also used to determine a desired time on the timeline of the digital image sequence.
Preferably the distance vector is substantially aligned with a horizontal axis with respect to an orientation of the viewport.
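As a hedged illustration of how the distance vector, the optional velocity and the sequence length described above might combine into a seek time (the scaling constant and the velocity weighting are assumptions, not taken from the application), in TypeScript:

```typescript
// Illustrative only: pixelsPerSecond and the velocity weighting are assumed tuning constants.
function desiredTime(
  currentTime: number,
  startX: number, endX: number,   // start and end coordinates of the input along the horizontal axis
  startT: number, endT: number,   // start and end times of the input, in seconds
  sequenceLength: number,
  pixelsPerSecond = 100
): number {
  const distance = endX - startX;                            // signed distance vector
  const duration = Math.max(endT - startT, 0.001);
  const velocity = distance / duration;                      // optional velocity of the input
  const boost = 1 + Math.min(Math.abs(velocity) / 1000, 1);  // faster swipes seek proportionally further
  const desired = currentTime + (distance / pixelsPerSecond) * boost;
  return Math.min(Math.max(desired, 0), sequenceLength);     // clamp to the sequence timeline
}
```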
In accordance with a third aspect of the present invention, there is provided a method of live-zooming, capturing and storing a zoomed digital image of a digital image sequence having a length defining a timeline, the method comprising the steps of:
a) playing a digital image sequence having a length defining a timeline, the digital image sequence visible through a viewport defining a closed border for the digital image sequence;
b) receiving an input from a user, the input defining a desired region of the digital image sequence within the viewport; wherein the input from a user is effected via an interface that is invisible to the user; and
c) scaling the digital image sequence relative to the viewport to provide a zoomed digital image defined by the desired region within the viewport.
Preferably the zoomed digital image may be manipulated using further input instructions.
Preferably, the method further comprises step of: d) capturing and storing the zoomed digital image. Preferably the zoomed digital image is stored on a non-transitory memory.
Preferably the receipt of an input in step b) does not occur via interaction with any visible user interface elements. Preferably the method according to the third aspect of the present invention permits the live zooming and capturing of a desired region of a digital image sequence without the use of visible user interface elements and therefore provides for an uninterrupted and uncluttered live zooming and capturing experience. The lack of a visible user interface preferably also provides the user with the ability to much more easily and accurately zoom and capture a desired region within a digital image sequence than more traditional zoom and capture user interfaces. Preferably the capture of a desired region comprises the capture of a digital image of the desired region. Preferably the digital image comprises one selected from a static image; an array of static images.
In accordance with a fourth aspect of the present invention there is provided a computer program product including a program for a processing device, comprising software code portions for performing the steps of the method according to the first, second and/or third aspect of the present invention, when the program is run on the processing device.
Preferably the computer program product comprises a computer-readable medium on which the software code portions are stored, wherein the program is directly loadable into an internal memory of the processing device.
Preferably the computer program product is adapted to function on a portable device. Preferably the portable device is a smart phone.
Detailed Description
Specific embodiments will now be described by way of example only, and with reference to the accompanying drawings, in which:
FIG. 1 provides a flow chart of steps comprised within a method for editing a digital image sequence in accordance with the first aspect of the present invention;
FIG. 2 provides a conceptual view of a computer program product in accordance with the fourth aspect of the present invention;
FIG. 3 provides a conceptual view of a digital image sequence according to the first aspect of the present invention;
FIG. 4 provides a conceptual view of a digital image sequence according to the present invention having a temporal navigation function according to the second aspect of the present invention;
FIG. 5 provides a flow chart of a method according to the present invention comprising a temporal navigation method according to the second aspect of the present invention;
FIG. 6-1 to 6-4 provides conceptual views of a method according to the present invention comprising a temporal zooming, capturing and storing function according to the third aspect of the present invention;
FIG. 7-1 to 7-2 provides conceptual views of a method according to the present invention comprising a temporal zooming, capturing and storing function of FIG. 6-1 to FIG 6-4;
FIG. 8-1 and FIG. 8-2 provides conceptual views of a method according to the present invention further comprising a temporal zooming, capturing and storing function of FIG. 6-1 to FIG 6-4 and further comprising a boundary collision function; and
FIG. 9 provides a flow chart of a method according to the present invention further comprising the temporal zooming, capturing and storing function of FIG. 6-1 to FIG. 6-4.
Referring to FIG. 1, a flow chart representing the steps comprised within a method 10 according to the first aspect of the present invention is shown, the steps comprising: selecting a previously-captured and stored digital video to be loaded into a digital image layer having a digital image layer perimeter, the video having a start time and an end time 12; generating an interaction layer separate from the digital image layer, the interaction layer having an interaction layer perimeter 14; and editing the interaction layer, the editing comprising the steps detailed below. The first step comprises selecting a previously stored digital image of a smiling face from a non-transitory memory. Other options for a digital media element are available and could, for instance, include an image, photograph, illustration, video, animation, tune, song, comment and/or text object. The digital image of the smiling face in the example described might have an interactive function, such that it is responsive to input from a user to provide, for example, a visual and/or audible effect and/or a link and/or hyperlink. These examples are given to aid exemplification and additional embodiments will be appreciated wherein the digital media element comprises any digital media suitable for experiencing within a digital image sequence. The second step for editing the interaction layer in the example embodiment shown in FIG. 1 comprises selecting a location for the digital image of the smiling face, the location being inside the interaction layer perimeter, and selecting a start time for the digital image of the smiling face which is after the start time of the video. The third step comprises selecting an end time for the digital image of the smiling face which is before the end time of the video 16. The method shown also comprises the steps of: publishing a video on social media, the published video comprising the edited interaction layer overlaid over the digital image layer 18; viewing the published video and experiencing the interaction layer of the published video 20; and editing the interaction layer of the published video by adding a text comment having a start time immediately after the end time of the digital image of the smiling face 21.
Referring to FIG. 2, a conceptual view of a processing device 22 arranged to run a computer program product 24 of the fourth aspect of the present invention is shown, the computer program product arranged to perform the steps of the method of FIG. 1. The processing device 22 comprises a non-transitory memory 26 arranged to accommodate a digital image sequence 28 taking the form of a previously captured and stored digital video and a digital media element taking the form of a stored digital image of a smiling face 30. The processing device 22 further comprises a processor 32 in digital communication with the non-transitory memory 26 and in further digital communication with an input portion 42 arranged to accept input from a user.
The processor 32 is arranged to process the input from the user, received from the input portion 42. The processor 32 is further arranged to process the computer program product 24 comprising a digital image layer 35 and an interaction layer 40, the interaction layer 40 overlaid over the digital image layer 35 to form a viewport 38. The interaction layer 40 provides a means by which a user may interact with, and experience the digital image sequence of the digital image layer 35 without directly altering said digital image sequence. Interaction in the present context refers to the provision of input from a user.
The digital image layer 35 is arranged to accommodate a digital image sequence 34, 35, 36, comprising a digital image sequence start time, a digital image sequence end time and a digital image sequence aspect ratio defining a perimeter of said digital image sequence 34, 35, 36. In the embodiment shown, the digital image sequence comprises a first digital image 34 arranged to appear in the viewport 38 at the digital image sequence start time t0 (not shown), a third digital image 36 arranged to appear in the viewport 38 at the digital image sequence end time t2 (not shown) and a second digital image 35 arranged to appear in the viewport 38 at a time t1 (not shown) occurring between the digital image start time t0 and the digital image end time t2. The interaction layer 40 is arranged to accommodate a chosen digital media element 30, which in the embodiment shown comprises a static digital image of a smiling face 30.
The digital media element 30 is configured to appear within the viewport 38 at time t1 (the digital media element start time) and disappear from the viewport 38 at time t2 (the digital media element end time). The start and end times can be selected within a user interface and the times manipulated using a temporal navigation system.
A plurality of digital media elements or asset surfaces can have digital media element start times and digital media element end times that may cause multiple digital media elements to coincide for a period during the digital image sequence.
As such, a method and processing device is shown providing an editing mode and a viewing mode, wherein the editing mode permits interaction with the interaction layer to add or remove content, whereas the viewing mode permits interaction with the interaction layer by experiencing the interaction layer, and thus also the digital image layer which is preferably located thereunder. The viewing mode 20 permits experiencing the interaction layer which includes temporal navigation through the published image sequence and interacting with the digital media element located on the interaction layer of the published image sequence. The interaction layer of FIG. 2 shows that interaction with the published image sequence comprises interaction with the interaction layer in the absence of a user interface, wherein the viewport comprises solely the digital image sequence overlaid with the interaction layer, which may comprise digital media elements. The method according to the first aspect of the present invention therefore preferably provides a social editing and viewing feature, wherein a user may create and publish an edited video to be viewed, interacted with, and edited by other users. The features of the present invention preferably permit the absence of a cluttered user interface providing viewing, interaction with and editing of a digital image sequence via the presence of the interaction layer, wherein the interaction layer is preferably overlaid over the digital image sequence comprised within a separate, but temporally and spatially synchronised, digital image layer.
Each digital media element shown comprises associated digital media element data, which in the embodiment shown comprises a bundle of metadata comprising at least one of the following:
1. a static descriptor of what the digital media element is (which may for example be a path to said digital media element located on a non-transitory memory, and may comprise a unique identifier);
2. a time at which the digital media element is to appear;
3. a time at which the digital media element is to disappear;
4. a digital media element location (which may for example be relative to the interaction layer or the viewport, and may indicate for example the centre point of the digital media element); and
5. any extended metadata, which may, for example, include supported customisations.
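Purely as a sketch of the five fields listed above (field names, types and values are assumptions made for illustration, not drawn from the application), such a metadata bundle could be represented as follows:

```typescript
// Hypothetical serialised form of the metadata bundle described in items 1 to 5 above.
interface MediaElementMetadata {
  assetPath: string;                    // 1. static descriptor: path to the asset on non-transitory memory
  assetId: string;                      //    with a unique identifier
  appearAt: number;                     // 2. time at which the element is to appear (seconds)
  disappearAt: number;                  // 3. time at which the element is to disappear (seconds)
  position: { x: number; y: number };   // 4. location, e.g. the centre point relative to the interaction layer
  extended?: Record<string, unknown>;   // 5. extended metadata, e.g. supported customisations
}

// Example bundle for the smiling-face element of FIG. 1 (values are illustrative only).
const smileyMetadata: MediaElementMetadata = {
  assetPath: "assets/smiling-face.png",
  assetId: "smiley-01",
  appearAt: 2.0,
  disappearAt: 7.5,
  position: { x: 0.5, y: 0.25 },
  extended: { enterEffect: "fade-in" },
};
```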
In the embodiment shown, digital media element data comprising a metadata bundle is stored in the non-transitory memory along with the digital image sequence. When the sequence is browsed using a temporal navigation system, the digital media element is added, triggered and removed based on said digital media element data.
The embodiment shown therefore comprises two primary modes, an editing mode and a viewing mode - interaction with the interaction layer being permitted across both the editing mode and the viewing mode.
The digital media element is described as being a digital image of a smiling face. Additional embodiments will be appreciated wherein the digital media element 30 comprises any other form of digital media including a digital image (for example a photograph, an emoticon, an emoji, a text image, an illustration and/or a graphic design), a digital image sequence (for example, a video, an animation and/or a sequence of images), a digital audio sequence (for example, a song, a soundtrack, a tune and/or a musical note) or a digital text object. The digital media element may further take the form of an interactive digital element such as a button arranged to perform a visual and/or audible effect, or a hyperlink arranged to direct a user to, for example, one selected from: a brand page on a first social platform on which the first aspect of the present invention is used; a second social platform external to the first social platform (such as, for example, Instagram™, Facebook™, Twitter™ etc); a web page; a chatbot; a popup applet/extension; a map; a geo location service; a messaging platform; an online learning platform. Interactive is used in this context to mean responsive to an input. The input might come from the user in the form of a touch of a touch-sensitive screen. Additional embodiments will be appreciated wherein input may come from any acceptable form of input for use in methods of editing digital image sequences or experiencing the published image sequences. Such embodiments may for example include a peripheral accessory; a motion detection device; a virtual reality device; an augmented reality device; a hybrid reality device; a mixed reality device; and a sensor. The sensor may, for example, include hardware attached to a user’s body, hardware held by the user, or hardware in the physical space surrounding the user. Detected actions can, for example, be either through a physical interaction with the hardware controller (for example a horizontal swipe motion on a touch sensitive surface, or by physically pressing a button on a hardware device) or via the hardware detecting physical movement and the software interpreting it as an action (for example a horizontal swipe motion of an arm/hand/finger in 3D space, or a tap gesture in 3D space).
The processor of the embodiment shown is preferably further enabled to process a temporal navigation system, enabling traversal through the digital image sequence, which is dependent upon the type of input portion present in the system, but can be generalised as:
1. direct manipulation of a surface, as shown in FIG. 3, such as swipe gestures on a control surface (which might be, for example, a 2-dimensional touch sensitive screen); and/or
2. indirect manipulation of a 2 dimensional projection, as shown in FIG. 4, for example using handheld controllers (such as the motion controls shown) in a virtual reality setting;
each of which results in a motion capture event which is detected by the processor.
The flowchart of FIG. 5 gives an overview of events which can occur once a motion capture event is detected by the processor:
1. cycle begins
2. motion capture begins (input is received, for example, when a finger touches down on a touch-sensitive surface; a physical controller action button is actuated);
3. a current time of the digital image sequence is determined;
4. current coordinates of the input on the interaction layer and/or viewport are registered;
5. based on the previous coordinates, the processor determines if an appropriate linear distance along an axis (for example a horizontal axis) has been travelled;
a. if an appropriate linear distance along an axis has not been travelled, the processor causes retention of the current settings and waits for the next cycle of events detailed in FIG. 5;
b. if an appropriate linear distance along an axis has been travelled, the processor moves to step 6;
6. a distance vector and optionally a velocity, defined by the linear distance along the axis and the velocity of the motion respectively, is determined and saved;
7. a seek-to (desired) time for the digital image sequence is calculated based on: a) the distance vector; b) the length of time of the digital image sequence; c) (optionally) the velocity of the motion, d) any other constant factors. As a worked example, a 100 pixel motion in the positive horizontal direction could lead to an addition of 1 second to the current time. Note: the determination of the seek-to time will also factor in whether the ‘next’ seek-to time would be before the digital image sequence start time, or after the digital image sequence end time - in which case the minimum and maximum seek-to times are set respectively;
8. the processor causes the dispatch of a seek message to an appropriate software/hardware component, for example, setting a point within a video to display according to the seek-to time, or which image in a series of images to display according to the seek-to time;
9. a new current time is recorded according to the seek-to time and the current horizontal position is recorded; and
10. cycle ends.
Note: at any point a software product performing the method of at least one aspect of the present invention can respond to an asynchronous cancel from an input portion (for example, when a user’s finger is removed from the surface of a touch-sensitive screen or a physical controller button is released). At that point, any buffered values for quantities such as accumulated distance travelled or current velocity are reinitialised to earlier values.
The effect of the above process is to be able to scrub smoothly, either forwards or backwards through a visual medium, such as a digital image sequence, using a motion across a surface, physical movement of a controller or other input instructions to an input device, without requiring any on-screen visual cues such as buttons or icons. This represents a direct interaction with the visual medium. Additional embodiments will be appreciated wherein the motion capture begins following input from any acceptable means of motion detection equipment or input devices such as controllers or sensors, which may detect physical bodily movement (facial/limb/hand/finger movement and physical gestures) as an input method.
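A minimal sketch, under assumed threshold and scaling constants, of how one cycle of the flow above might be handled for a touch surface; the class, its parameters and the 100-pixels-per-second mapping from the worked example in step 7 are all illustrative assumptions:

```typescript
// Illustrative sketch of steps 2 to 9 above; threshold and pixelsPerSecond are assumed constants.
class TemporalNavigator {
  private lastX: number | null = null;

  constructor(
    private currentTime: number,
    private readonly sequenceLength: number,
    private readonly threshold = 10,        // minimum horizontal distance in pixels (step 5)
    private readonly pixelsPerSecond = 100  // 100 px maps to 1 s, as in the worked example (step 7)
  ) {}

  // Called on each motion-capture event with the current horizontal input coordinate (step 4).
  onMotion(x: number): number {
    if (this.lastX === null) {              // step 2: motion capture begins
      this.lastX = x;
      return this.currentTime;
    }
    const distance = x - this.lastX;        // step 6: distance vector along the horizontal axis
    if (Math.abs(distance) < this.threshold) {
      return this.currentTime;              // step 5a: retain current settings, wait for next cycle
    }
    let seekTo = this.currentTime + distance / this.pixelsPerSecond;  // step 7
    seekTo = Math.min(Math.max(seekTo, 0), this.sequenceLength);      // clamp to sequence start/end
    // Step 8 would dispatch a seek message to the player component here.
    this.currentTime = seekTo;              // step 9: record the new current time
    this.lastX = x;                         // and the current horizontal position
    return seekTo;
  }

  // Asynchronous cancel (e.g. the finger is lifted): reinitialise buffered values.
  cancel(): void {
    this.lastX = null;
  }
}
```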
Temporal Zoom and Image Capture
The embodiment shown in FIG. 2 further comprises a zoom managing function, which provides the ability for a user to manipulate the scale and position of visible digital media elements, or a portion of the digital image sequence, with reference to a ‘viewport’ as shown in FIG. 6.
The viewport is a fixed-size, 2-dimensional plane that is overlaid over a digital image layer and as such projects onto the digital image behind it (in this instance ‘image’ refers to a single frame in a paused video or a single image in a digital image sequence, provided for the purpose of exemplification only).
As with the horizontal control motion defined for FIG. 4 and FIG. 5, the scale of an image can be manipulated using specific functions defined for an input portion, for example a surface control gesture on a touch-sensitive screen such as a ‘pinch’ (as depicted in FIG. 6-1) or by moving physical controllers together or apart in 3-D space (as depicted in FIG. 6-3). (Note, the physical controller action would require activation using a different set of control buttons to that of the horizontal control motion defined for FIG. 3).
A scaled image that extends past the viewport could then be moved around in 2-D space by using further functions defined for an input portion such as a surface control gesture on a touch-sensitive screen, for example a two-finger pan (FIG. 6-2), or in the case of 3-D space, by moving two controllers in the same direction (FIG. 6-4), again ensuring that the motion is activated by using the appropriate controller buttons.
In the embodiment shown, the zoom and move actions defined above do not impact the dimensions of the viewport; rather, the image behind the viewport is scaled and its positional offset from the viewport is adjusted (FIG. 7).
In the case where the viewport zoom scale is unity (FIG. 7-1), the bounds of the image are set such that it is sized to fill the viewport, whilst maintaining the aspect ratio, with at least one of the sides being the same as its corresponding viewport side length.
In other words, if the image is the same aspect ratio as the viewport, you would expect it to be scaled to exactly fit the viewport.
If the zoom ratio is increased, the image is scaled to that ratio (whilst maintaining the aspect ratio) and the image displayed to the user is a projection of the viewport onto a portion of the scaled image.
If part of the image is outside the viewport, the controller motion in the horizontal and vertical directions would have the effect of adjusting the offsets of the image from their origin (wherein the coordinates 0, 0 coincide with the top left). For example, a positional offset of ‘0, 0’ means that the top-left of the image is coincident with the top left of the viewport.
The image displayed to the user is a function of both the zoom and the positional offsets (determined by the various frames of reference within a software product arranged to carry out the method of the present invention).
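The projection described above can be sketched as follows; this is a hedged illustration only, in which the cover-style base scale, the coordinate conventions and all names are assumptions consistent with the description rather than a prescribed implementation:

```typescript
// Illustrative: compute which rectangle of the image is projected through a fixed-size viewport.
interface Rect { x: number; y: number; width: number; height: number; }

function visibleRegion(
  imageWidth: number, imageHeight: number,
  viewportWidth: number, viewportHeight: number,
  zoom: number,                     // >= 1; unity means the image is sized to fill the viewport
  offsetX: number, offsetY: number  // positional offset of the image from the viewport origin (top left)
): Rect {
  // Base scale fills the viewport while maintaining the aspect ratio, so that at zoom = 1
  // at least one image side matches its corresponding viewport side length.
  const baseScale = Math.max(viewportWidth / imageWidth, viewportHeight / imageHeight);
  const scale = baseScale * zoom;
  // Map the viewport rectangle back into image coordinates: the displayed image is a
  // function of both the zoom and the positional offsets.
  return {
    x: -offsetX / scale,
    y: -offsetY / scale,
    width: viewportWidth / scale,
    height: viewportHeight / scale,
  };
}
```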
Boundary Collision
There are two instances where the viewport described above may contain empty areas with no image:
1. if the image is scaled to a ratio that means one or more of its dimensions is smaller than that of the corresponding viewport dimension (FIG 8-1); and/or
2. if the positional offsets are modified such that at least one of either the horizontal or vertical edges of the image are inside the viewport (FIG 8-2).
This is undesirable, and to overcome this:
1. the zoom managing function of the embodiment shown sets a minimum image scale of unity. This means that the physical dimensions of the image can never be smaller than the viewport; and
2. when the movement of the image is complete (for example when a user’s fingers are removed from a touch-sensitive screen or controller buttons are released), the zoom managing function tests whether the image boundaries are inside the viewport. If so the positional offset is adjusted to ensure that the top left of the image coincides with the top left of the viewport (a ‘snap’ effect).
This ensures that the viewport is always filled with content from the media behind it.
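A hedged sketch of the two rules above, with the snap resetting the positional offset so that the top left of the image coincides with the top left of the viewport; names and the exact boundary test are illustrative assumptions:

```typescript
// Rule 1: the zoom scale has a minimum of unity, so the image can never be smaller than the viewport.
function enforceMinimumScale(zoom: number): number {
  return Math.max(zoom, 1);
}

// Rule 2: once movement is complete, if any image boundary has moved inside the viewport,
// snap the image's top left back onto the viewport's top left (offset 0, 0).
function snapIfInsideViewport(
  scaledImageWidth: number, scaledImageHeight: number,
  viewportWidth: number, viewportHeight: number,
  offsetX: number, offsetY: number
): { offsetX: number; offsetY: number } {
  const leftOrTopInside = offsetX > 0 || offsetY > 0;
  const rightOrBottomInside =
    offsetX + scaledImageWidth < viewportWidth ||
    offsetY + scaledImageHeight < viewportHeight;
  return leftOrTopInside || rightOrBottomInside
    ? { offsetX: 0, offsetY: 0 }
    : { offsetX, offsetY };
}
```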
Snapshot
Once the scale and position of the media displayed in the viewport discussed above are set, it may be desirable in the embodiment shown in FIG. 2 to take a snapshot of the contents of the viewport.
An example embodiment of the snapshot function of the present invention is shown in FIG. 9, outlining the steps of:
1. the snapshot process is activated (for example upon receipt of input instructions by the input portion, which may be from a user’s fingers touching a button or icon on a touch-sensitive screen);
2. the current zoom scale and positional offset is determined. Note that the positional offset is a function of the scale, for example a 10 pixel motion in the horizontal and vertical direction in the frame of reference of the viewport is actually a > 10 pixel movement in the frame of reference of the image if the zoom scale is > 1;
3. generate a full size representation of the digital image sequence asset from the various downstream players (for example, in the case of a video a user may request an image representation of the frame at the current time of the video). In this case, to “generate” means to create and save within a memory;
4. scale the digital image sequence asset (for example, the image representation of the desired frame of the video discussed above), maintaining the aspect ratio of the asset;
5. crop a rectangle from the scaled asset based on the positional offset and size of viewport;
6. return the scaled and cropped image and save it to the transitory memory or non-transitory internal storage;
7. clean up any intermediate files and release transitory memory resource;
8. end the snapshot process.
Note: in some embodiments, depending on the software used, steps 3, 4 and 5 may be executed in a single action.
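A minimal sketch of steps 2 to 6 above, using an HTML canvas purely for illustration; the application does not tie the snapshot process to any particular graphics API, and the scale and offset conventions follow the assumptions of the earlier sketches:

```typescript
// Illustrative snapshot of the viewport contents: scale the full-size frame and crop the
// rectangle that the viewport projects onto (steps 3 to 5 combined, as noted in the text).
function snapshotViewport(
  frame: HTMLImageElement,            // full-size representation of the current frame (step 3)
  viewportWidth: number, viewportHeight: number,
  zoom: number, offsetX: number, offsetY: number  // step 2: current zoom scale and positional offset
): HTMLCanvasElement {
  const baseScale = Math.max(viewportWidth / frame.width, viewportHeight / frame.height);
  const scale = baseScale * zoom;
  const canvas = document.createElement("canvas");
  canvas.width = viewportWidth;
  canvas.height = viewportHeight;
  const ctx = canvas.getContext("2d")!;
  // Steps 4 and 5 in a single drawImage call: crop the source rectangle from the frame,
  // maintaining the aspect ratio, and scale it into the viewport-sized canvas.
  ctx.drawImage(
    frame,
    -offsetX / scale, -offsetY / scale,             // source x, y in frame coordinates
    viewportWidth / scale, viewportHeight / scale,  // source width, height
    0, 0, viewportWidth, viewportHeight             // destination: the whole canvas
  );
  return canvas;                                    // step 6: the caller saves this to storage
}
```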
It will be appreciated that the above described embodiments are given by way of example only and that various modifications may be made to the described embodiments without departing from the scope of the invention as defined in the appended claims. The structure and orientation of any of the elements of the present invention shown may be of an alternative design and shaping, and various modifications may be made thereto whilst remaining within the scope of the present disclosure. For example, embodiments have been described having relevance to social media applications. Additional embodiments will be appreciated having applicability to creative social imagery applications and platforms; professional training and coaching (sports, scientific, medical, military, etc); performance analysis and feedback, education and learning experiences (public and professional online lessons, training and workshops); interactive simulations and tours (scientific & technological, historical sites, cultural experiences like art gallery and museum exhibitions); interactive product and brand marketing (product tours, brand lifestyle experiences, property market, holiday market, etc); interactive visual evidencing and audits (footage of crime scenes and criminal acts for police and legal systems, housing rental market, vehicle rental market, etc); real time in-media socialising, collaborating, sharing and commenting (for social media platforms); and interactive moment highlights (sports, news events etc for social media platforms like Twitter™ and Facebook™).

Claims (19)

1. A method for editing a digital image sequence, the method comprising the steps of:
a) selecting a digital image sequence to be accommodated by a digital image layer having a digital image layer perimeter, the digital image sequence having a digital image sequence start time and a digital image sequence end time;
b) generating an interaction layer separate from the digital image layer, the interaction layer having an interaction layer perimeter;
c) editing the interaction layer, the editing comprising the steps of:
selecting a digital media element;
selecting a digital media element location inside the interaction layer perimeter;
selecting a digital media element start time;
selecting a digital media element end time;
d) publishing an image sequence, the published image sequence comprising the edited interaction layer; and
e) editing the interaction layer of the published image sequence.
2. A method as claimed in claim 1, wherein the published image sequence further comprises the digital image layer, the digital image layer comprising the digital image sequence.
3. A method as claimed in claim 2, wherein the published image sequence comprises the interaction layer overlaid on top of the digital image layer.
4. A method as claimed in any one of the preceding claims, wherein the interaction layer is synchronised to the digital image layer.
5. A method as claimed in any one of the preceding claims, wherein the digital image layer is protected from editing by a user.
6. A method as claimed in any one of the preceding claims, wherein the digital image sequence comprises one selected from the following: a single digital image; a plurality of digital images; digital video.
7. A method as claimed in any one of the preceding claims, wherein the digital media element start time is equal to, or after the digital image sequence start time.
8. A method as claimed in any one of the preceding claims, wherein the digital media element end time is equal to, or before the digital image sequence end time.
9. A method as claimed in any one of the preceding claims, wherein the digital media element location is inside the interaction layer perimeter.
10. A method as claimed in any one of the preceding claims, wherein the digital media element comprises at least one selected from the range: a digital image; a digital image sequence; a digital video; a digital audio sequence; a digital text object; a graphic object; a sprite; a vector; an icon; a digital shortcut; a hyperlink.
11. A method as claimed in any one of the preceding claims, wherein the interaction layer perimeter is of equal size or smaller than the digital image layer perimeter.
12. A method as claimed in any one of the preceding claims, wherein the interaction layer of the published image sequence is interactive.
13. A method as claimed in any preceding claim, wherein the digital media element comprises a unique identifier.
14. A method as claimed in any preceding claim, wherein the digital media element comprises data characteristic of at least one selected from the range: an adjustable start time; an adjustable end time; an adjustable location.
15. A method for temporal navigation of a digital image sequence having a length defining a timeline, the method comprising the steps of:
a) receiving an input from a user, the input with respect to a digital viewport defining a closed border of the digital image sequence; wherein the input from a user is effected via an interface that is invisible to the user;
b) determining a current time on the timeline of the digital image sequence;
c) determining start coordinates of the input within the viewport;
d) determining end coordinates of the input within the viewport;
e) using the start coordinates and the end coordinates to determine a distance vector;
f) using the distance vector and the length of the digital image sequence to determine a desired time on the timeline of the digital image sequence; and
g) navigating the digital image sequence to the desired time on the timeline of the digital image sequence.
16. A method for live-zooming, capturing and storing a zoomed digital image of a digital image sequence having a length defining a timeline, the method comprising the steps of:
a) playing a digital image sequence having a length defining a timeline, the digital image sequence visible through a viewport defining a closed border for the digital image sequence;
b) receiving an input from a user, the input defining a desired region of the digital image sequence within the viewport; wherein the input from a user is effected via an interface that is invisible to the user; and
c) scaling the digital image sequence relative to the viewport to provide a zoomed digital image defined by the desired region within the viewport.
17. A method as claimed in any one of claims 1 to 14, wherein the method further comprises the steps of at least one of the methods of claim 15 and claim 16.
18. A computer program product including a program for a processing device, comprising software code portions for performing the steps of any one of claims 1 to 17 when the program is run on the processing device.
19. The computer program product according to claim 18, wherein the computer program product comprises a computer-readable medium on which the software code portions are stored, wherein the program is directly loadable into an internal memory of the processing device.
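
To make the claimed arrangements easier to picture, the following editor's sketches are offered purely as illustrations; they are not part of the application, and every identifier in them is an assumption introduced for the example. The first is a minimal TypeScript sketch of the layered data model of claims 1 to 14: an addMediaElement helper (a hypothetical name) clamps a media element's start and end times to the digital image sequence, consistent with claims 7 and 8, and rejects locations outside the interaction layer perimeter, consistent with claim 9.

// Editor's illustrative sketch only; names and structure are assumptions,
// not part of the claimed method.

interface Rect { x: number; y: number; width: number; height: number; }

interface DigitalImageLayer {
  perimeter: Rect;        // digital image layer perimeter
  startTime: number;      // digital image sequence start time (seconds)
  endTime: number;        // digital image sequence end time (seconds)
  sourceUri: string;      // the selected digital image sequence
}

interface DigitalMediaElement {
  id: string;                          // unique identifier (cf. claim 13)
  kind: "image" | "video" | "audio" | "text" | "icon" | "hyperlink";
  location: { x: number; y: number };  // location inside the interaction layer perimeter
  startTime: number;
  endTime: number;
}

interface InteractionLayer {
  perimeter: Rect;                     // equal to or smaller than the image layer perimeter
  elements: DigitalMediaElement[];
}

// Add a media element: clamp its times to the sequence (cf. claims 7 and 8)
// and reject locations outside the interaction layer perimeter (cf. claim 9).
function addMediaElement(
  layer: InteractionLayer,
  image: DigitalImageLayer,
  element: DigitalMediaElement
): InteractionLayer {
  const p = layer.perimeter;
  const { x, y } = element.location;
  if (x < p.x || y < p.y || x > p.x + p.width || y > p.y + p.height) {
    throw new Error("location must lie inside the interaction layer perimeter");
  }
  const clamped: DigitalMediaElement = {
    ...element,
    startTime: Math.max(element.startTime, image.startTime),
    endTime: Math.min(element.endTime, image.endTime),
  };
  return { ...layer, elements: [...layer.elements, clamped] };
}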
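
Claim 15 can be read as a drag-to-seek gesture in which the distance of the input within the viewport is scaled against the sequence length to find a new position on the timeline. The sketch below follows that reading; using only the horizontal component of the distance vector, and the function name seekTime, are the editor's assumptions rather than anything stated in the claim.

// Editor's illustrative sketch of the temporal-navigation steps, assuming the
// horizontal drag distance maps linearly onto the timeline.

interface Point { x: number; y: number; }

function seekTime(
  currentTime: number,      // step b): current time on the timeline (seconds)
  start: Point,             // step c): start coordinates of the input
  end: Point,               // step d): end coordinates of the input
  viewportWidth: number,    // viewport width, in the same units as the coordinates
  sequenceLength: number    // length of the digital image sequence (seconds)
): number {
  // step e): distance vector of the drag; only the x component is used here.
  const dx = end.x - start.x;
  // step f): scale the drag against the sequence length to obtain a desired time.
  const desired = currentTime + (dx / viewportWidth) * sequenceLength;
  // step g): clamp so navigation stays within the timeline.
  return Math.min(Math.max(desired, 0), sequenceLength);
}

// Example: a drag of a quarter of the viewport width over a 60 s sequence
// moves the playhead forward by 15 s.
console.log(seekTime(10, { x: 100, y: 50 }, { x: 200, y: 55 }, 400, 60)); // 25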
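
Claim 16's scaling step can similarly be pictured as mapping a user-selected region of the viewport back onto the full viewport. The sketch below uses a uniform scale factor and the hypothetical name zoomToRegion; both choices are assumptions of the editor, and the sketch is self-contained rather than shared with the previous one.

// Editor's illustrative sketch: scale the sequence relative to the viewport so
// that the user-selected region fills the viewport (uniform scale assumed).

interface Rect { x: number; y: number; width: number; height: number; }

interface ZoomTransform {
  scale: number;     // factor by which the digital image sequence is scaled
  offsetX: number;   // translation applied after scaling
  offsetY: number;
}

function zoomToRegion(viewport: Rect, region: Rect): ZoomTransform {
  // Uniform scale chosen so the whole desired region remains visible.
  const scale = Math.min(viewport.width / region.width, viewport.height / region.height);
  // Translate so the region's top-left corner maps to the viewport's top-left corner.
  return {
    scale,
    offsetX: viewport.x - region.x * scale,
    offsetY: viewport.y - region.y * scale,
  };
}

// Example: zooming into the right half of a 400x300 viewport doubles the scale.
console.log(zoomToRegion(
  { x: 0, y: 0, width: 400, height: 300 },
  { x: 200, y: 75, width: 200, height: 150 }
)); // { scale: 2, offsetX: -400, offsetY: -150 }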
GB1720539.4A 2017-12-08 2017-12-08 Method for editing digital image sequences Withdrawn GB2569179A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1720539.4A GB2569179A (en) 2017-12-08 2017-12-08 Method for editing digital image sequences

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1720539.4A GB2569179A (en) 2017-12-08 2017-12-08 Method for editing digital image sequences

Publications (2)

Publication Number Publication Date
GB201720539D0 GB201720539D0 (en) 2018-01-24
GB2569179A true GB2569179A (en) 2019-06-12

Family

ID=61007317

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1720539.4A Withdrawn GB2569179A (en) 2017-12-08 2017-12-08 Method for editing digital image sequences

Country Status (1)

Country Link
GB (1) GB2569179A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040039934A1 (en) * 2000-12-19 2004-02-26 Land Michael Z. System and method for multimedia authoring and playback
US20090297118A1 (en) * 2008-06-03 2009-12-03 Google Inc. Web-based system for generation of interactive games based on digital videos
US20130145269A1 (en) * 2011-09-26 2013-06-06 University Of North Carolina At Charlotte Multi-modal collaborative web-based video annotation system
US20130311561A1 (en) * 2012-05-21 2013-11-21 DWA Investments, Inc Authoring, archiving, and delivering interactive social media videos
US20170039867A1 (en) * 2013-03-15 2017-02-09 Study Social, Inc. Mobile video presentation, digital compositing, and streaming techniques implemented via a computer network
US20170131855A1 (en) * 2011-03-29 2017-05-11 Wevideo, Inc. Multi-source journal content integration systems and methods and systems and methods for collaborative online content editing

Also Published As

Publication number Publication date
GB201720539D0 (en) 2018-01-24

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)