GB2575975A - Computer implemented methods for generating and selecting media data
- Publication number
- GB2575975A (application GB1812312.5)
- Authority
- GB
- United Kingdom
- Prior art keywords
- media data
- audio file
- computer implemented
- recording
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06F3/165 — Management of the audio stream, e.g. setting of volume, audio stream path
- G11B27/02 — Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G06F3/0488 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
- G06F3/162 — Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
- G11B27/031 — Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/34 — Indicating arrangements
Abstract
Selecting data of a digital audio file comprises receiving a first signal associated with a first touch input of a first timing position marker 210 on a touch sensitive GUI 200 displaying a graphical representation 204 of at least a portion of the digital audio file. The first touch input corresponds to a spatial position of a first predefined set of positions 206 on the touch sensitive interface. A second signal associated with a second touch input of a second timing position marker 212 on the touch sensitive interface is received. The second touch input corresponds to a spatial position of a second pre-defined set of positions 208. The first predefined set of positions is spatially separate from the second pre-defined set of positions. A portion 216 of the digital audio file is selected based at least on the second timing position and includes the first timing position. The separation of the markers on the GUI prevents accidental engagement of an unintended marker. A method for generating media on a mobile device includes recording first and second media data upon first to fourth inputs from a touch screen and merging the first and second media data.
Description
Computer Implemented Methods for Generating and Selecting Media Data
Field of the invention
The present invention is in the field of generating or selecting media data, in particular, but not exclusively, generating or selecting audio and visual media files recorded using a mobile computing device.
Background
Computer devices are used for recording and playing back media. This may be audio and/or visual media for example Podcasts. Mobile computing devices are increasingly being used to generate media data. Different mobile computing devices exist, including different versions of software systems. Different software may be used to generate the media files including a bespoke media recording/playback application (or App) that runs on an operating system (OS). Mobile computing devices and their associated operating systems typically evolve through time becoming more complex and having new features.
A software developer typically needs to ensure that a new App is compatible with the appropriate mobile computing devices and their associated operating systems. Conversely, a user may also need to make sure that the App that they are going to download or use is compatible with the mobile device and/or the operating system running upon the mobile device. A user may need to get a new version of media recording software if they get a new mobile computing device with a different operating system. This may be because the new operating system is an updated version or simply an alternative version of the original operating system they had previously been using. This symptom of alternative operating system versions is often called fragmentation. Software incompatibility issues therefore arise when users or developers look to implement new media recording/playback software, which is undesirable from both the developer's and the user's perspective.
Summary
According to a first aspect of the present invention there is presented a computer implemented method, for use on a mobile computing device, for selecting data of a digital audio file, the method comprising: receiving a first signal associated with a first touch input on a touch sensitive interface of the mobile computing device, the touch sensitive interface displaying a graphical representation of at least a portion of the digital audio file; the first touch input corresponding to a spatial position of a first predefined set of positions on the touch sensitive interface; and in any order, A) determining a first timing position in the digital audio file based on the first signal; B) receiving a second signal associated with a second touch input on the touch sensitive interface; the second touch input corresponding to a spatial position of a second pre-defined set of positions on the touch sensitive interface; the first pre-defined set of positions being spatially separate, on the touch sensitive interface, from the second pre-defined set of positions; and, determining a second timing position in the digital audio file based on the second signal; selecting at least a portion of the digital audio file based at least on the second timing position, the said selected portion comprising a plurality of timing positions including the first timing position.
The first aspect may be adapted in any way described herein, including, but not limited to any one or more of the following.
The computer implemented method may be configured such that the graphical representation of at least a portion of the digital audio file may comprise spatially distributed data values along a direction across the interface; the spatial distribution corresponding to the timing of the playback positions.
The computer implemented method may be configured such that the first and second predefined set of positions run parallel to the said direction across the interface.
The computer implemented method may be configured such that the first and/or second set of pre-defined positions may respectively comprise a region on the interface wherein each position is adjacent to another position within the respective set.
The computer implemented method may comprise: receiving a third signal associated with a third touch input on the touch sensitive interface; the third touch input corresponding to a further position of the second pre-defined set of positions on the touch sensitive interface; determining a third timing position in the digital audio file based on the third signal; the second timing position being different from the third timing position; selecting the said portion based further on the third timing position.
The computer implemented method may comprise: displaying an interactive image object about the first touch input position; and, receiving a signal corresponding to a further touch input on the said interactive image object; performing an operation on the selected portion based on the said received signal.
The computer implemented method may be configured such that the selected portion of the digital audio file is a first selected portion, the method comprising: performing an operation on the first selected portion of the digital audio file and a second selected portion of the audio file that is different to the first selected portion; the operation being performed when the second timing position or third timing position of the first selected portion is the same as a respective second or third timing position of the second selected portion.
The computer implemented method may be configured such that the operation is performed upon an input signal associated with the user moving any of: A) the first selected portion; B) an image object associated with any of the second or third timing positions of the first selected portion.
The computer implemented method may comprise: generating a further digital audio file by any of: removing the unselected portions of the said digital audio file; and/or choosing the selected portion of the digital audio file; and, outputting the further digital audio file.
According to a second aspect of the present invention there is provided: a mobile computing device comprising a processor, a touch sensitive interface and a memory; the memory comprising a digital audio file, the processor configured to: receive a first signal associated with a first touch input on a touch sensitive interface of the mobile computing device, the touch sensitive interface displaying a graphical representation of at least a portion of the digital audio file; the first touch input corresponding to a spatial position of a first predefined set of positions on the touch sensitive interface; in any order, A) determine a first timing position in the digital audio file based on the first signal; B) receive a second signal associated with a second touch input on the touch sensitive interface; the second touch input corresponding to a spatial position of a second pre-defined set of positions on the touch sensitive interface; the first pre-defined set of positions being spatially separate, on the touch sensitive interface, from the second pre-defined set of positions; and, determine a second timing position in the digital audio file based on the second signal; select at least a portion of the digital audio file based at least on the second timing position, the said selected portion comprising a plurality of timing positions including the first timing position.
The second aspect may be adapted in any way described herein, including, but not limited to any one or more of the optional features described above for the first aspect.
According to a third aspect of the present invention, there is presented a computer implemented method for generating media data using a mobile computing device comprising a processor and a memory; the method comprises, using the processor for: upon receiving a first input, starting recording first media data from one or more input media signals; upon receiving a second input, stopping recording the first media data and storing the first media data in a memory; upon receiving a third input, starting recording second media data from the one or more input media signals; upon receiving a fourth input, stopping recording the second media data; storing the second media data in the memory; generating the media data at least by merging the first media data with the second media data.
The third aspect may be adapted in any way described herein, including, but not limited to any one or more of the following.
The computer implemented method may be configured such that: the first and second media data are respectively, first and second instances of running a media recording module; the said stopping recording of the first media data comprises releasing the recording module.
The computer implemented method may be configured such that: stopping of recording the first media data is dependent upon a first interaction with one or more image objects on a touchscreen of the mobile computing device.
The computer implemented method may be configured such that: the starting recording of the second media data is dependent upon a second interaction with said one or more image objects on the touchscreen of the mobile computing device.
The computer implemented method may be configured such that: stopping recording the first media data is upon a first interaction with a first state of a first image object on the touch sensitive interface; starting recording the second media data is upon a second interaction with a second state of the first image object.
The computer implemented method may be configured such that the start of the second media data follows the end of the first media data in the media file.
The computer implemented method may be configured such that the media data is audio data.
The computer implemented method may be configured such that the input media signals are generated using a microphone of the mobile computing device.
The computer implemented method may be configured such that upon stopping recording the first media data, the method disables the microphone.
The computer implemented method may be configured such that: the steps of the method are implemented by an audio recording module stored on the memory; the audio recording module relinquishes control of the microphone upon stopping recording the first media data.
The computer implemented method may comprise: generating respective filepaths associated with the first and second media data; storing the filepaths in an array; generating the media data by accessing the array.
According to a fourth aspect of the present invention, there is provided a mobile computing device comprising a processor and a memory; the processor configured to: upon receiving a first input, start recording first media data from one or more input media signals; upon receiving a second input, stop recording the first media data and store the first media data in a memory; upon receiving a third input, start recording second media data from the one or more input media signals; upon receiving a fourth input, stop recording the second media data; store the second media data in the memory; generate the media data at least by merging the first media data with the second media data.
The fourth aspect may be adapted in any way described herein, including, but not limited to any one or more of the optional features described above for the third aspect.
Embodiments of the present invention will now be described in detail with reference to the accompanying drawings, in which:
Brief list of figures
Figure 1 shows a flow diagram of a method for generating media data described herein;
Figure 2 shows a block diagram of the components of a mobile computing device as described herein;
Figure 3 shows a block diagram of software module components of a software program running a method as described herein;
Figure 4 shows a flow diagram of a method for selecting digital audio data described herein;
Figures 5a-5c show schematic screen shots of a UI during the process of selecting audio data;
Figure 6a shows a mobile device having the UI of the SplashActivity of an illustrated example of an app of the present disclosure;
Figure 6b shows a mobile device having the UI of the AuthenticationActivity of the illustrated example;
Figure 6c shows a mobile device having the UI of the MainActivity of the illustrated example;
Figure 7 shows a mobile device having a UI of the illustrated example allowing the user to navigate the app;
Figure 8a shows a mobile device having a UI of the illustrated example showing an expandable bottom bar;
Figure 8b shows a mobile device having a UI of the illustrated example showing an expanded view;
Figure 8c shows a mobile device having a UI of the illustrated example showing a notification;
Figure 9 shows a UI with a record button that overlays another UI of the illustrated example;
Figure 10a shows a UI of the illustrated example having the recording button;
Figure 10b shows a UI of the illustrated example having a list of drafts;
Figure 10c shows a UI of the illustrated example that displays a waveform based on MediaRecorder's current maximum amplitude;
Figure 11a shows a UI of the illustrated example that shows an editing screen;
Figure 11b shows a UI of the illustrated example that shows an editing screen;
Figure 11c shows a UI of the illustrated example that shows an editing screen;
Figure 12a shows a UI of the illustrated example that shows a publish screen;
Figure 12b shows a UI of the illustrated example that searches a location;
Figure 12c shows a UI of the illustrated example that shows options to publish.
Detailed description
There is presented a computer implemented method for generating media data using a computing device. The computing device may be a mobile computing device and, hereinafter, examples referencing a mobile computing device may equally apply to any computing device where technically appropriate. Figure 1 shows an example of the method 2 described herein.
The media data may be a computer readable media file for playing on a media player.
The method comprises:
At step S1, upon receiving a first input, starting recording first media data from one or more input media signals.
At step S2, upon receiving a second input, the method comprises stopping recording the first media data and storing the first media data in a memory.
At step S3, upon receiving a third input, the method comprises starting recording second media data from the one or more input media signals.
At step S4, upon receiving a fourth input, the method comprises stopping recording the second media data and storing the second media data in the memory.
At step S5, the method comprises generating the media data at least by merging the first media data with the second media data.
This method may be adapted according to any feature described herein. The media data may be audio data or other media data detailed herein. The adaptation may include any one or more of: the removal or modification of any one or more of the abovementioned steps S1-S5; the addition of one or more further steps at the start, end or in between any of the steps S1 to S5. For purposes of describing this method, the generated media data described at step S5 may be referred to as a Merged Media File (MMF).
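Purely by way of illustration, steps S1 to S5 might be embodied as sketched below in Java. The class and method names (SegmentedRecorder, beginSegment, endSegment) are hypothetical and stand in for whichever recording module an implementation uses; the actual capture and merge calls are discussed later in this description.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of steps S1-S5: each start/stop input pair records one
 * piece of media data, and the stored pieces are merged into the MMF at the end.
 */
public class SegmentedRecorder {
    private final List<String> segmentPaths = new ArrayList<>();
    private int segmentCount = 0;

    // Steps S1 and S3: a start input begins recording a new piece of media data.
    public void beginSegment() {
        segmentCount++;
        // ... acquire the platform's recording module and start capturing ...
    }

    // Steps S2 and S4: a stop input ends the recording and stores the data.
    public void endSegment() {
        String path = "segment_" + segmentCount + ".m4a"; // illustrative file path
        // ... stop the recorder, release its resources, write the file ...
        segmentPaths.add(path);
    }

    // Step S5: the stored pieces are merged, in order, into the MMF.
    public List<String> segmentsToMerge() {
        return segmentPaths;
    }
}
```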
Figure 2 shows an example of an apparatus for executing the computer implemented method. The apparatus in this example shown in figure 2 is a mobile computing device 2 comprising a processor 4, a memory 6, a touch screen 8 and one or more communication components 10 for receiving and transmitting data signals from and to the mobile computing device 2. The memory 6 contains one or more software modules 12a, 12b, 12c. As described elsewhere herein, the computing device may be another type of computing device than a mobile computing device and the elements shown in figure 2 may be removed or modified according to any feature or example described herein. Other elements of the computing device 2 may also be included as described elsewhere herein. The computing device may also be referred to herein as a 'computer' or 'computer system'.
There is also presented a computer readable medium storing computer readable program code which, when executed by a computer processor, gives effect to any of the methods described herein.
An example of an advantage of the method described above is now presented, wherein it should be appreciated that other advantages may also come about by executing the method described herein. A person using software implementing the method described above may generate a MMF through a User Interface (UI) that is generated from the running of the method. The UI may have a pause button where the user temporarily stops the recording with an inherent intent to continue the recording of the MMF at a later time, wherein the MMF is intended to be played back as a whole. Unlike other media software recorders that operate a 'pause' functionality by stopping the gathering of data into a data store but otherwise keeping media recording resources occupied and resuming under the same instance of the media recording module, the present method saves multiple instances of the execution of the recording module and then merges them upon an indication that the media recording session has ended. The MMF may therefore contain a merged sum of a plurality of instances of the same media recorder. Performing the recording and generating the MMF using a merged concatenation of separate instances allows software of the present method to be utilised by a wider variety of computer devices and operating systems (OS). This is because the commands and software routines used to start and stop recordings are more prevalent than more recent features such as routines that implement a 'pause' functionality without releasing computation and device resources.
For example, if the present method were embodied in an App that utilised other pre-existing software modules on the background OS, then it is more likely that the OS provides the code to start and stop recordings than the code needed to pause a recording. The present method therefore improves the OS-interoperability of an App embodying the method.
Figure 3 shows an example of a software App 100 which, when executed, is configured to give rise to the methods described herein. The App 100 comprises a plurality of primary modules 102, 104, 106, each of which, when executed, gives rise to a UI on a display of the mobile computing device. Each of the primary modules comprises one or more secondary modules. These secondary modules 107 may initiate a separate UI that can overlay the UI of its associated primary module. Although figure 3 shows three primary modules, each having two secondary modules 107, it is to be understood that the app 100 may have more or fewer primary and secondary modules.
The App may be configured such that each of the primary modules 102,104,106 may be individually addressed and utilised by other software applications.
The App may be an Android ® based App with the primary modules being Activities and the secondary modules being Fragments.
The example shown in figure 3 may have a splash activity 102 with an introduction screen, an authentication activity 104 with editable image objects for a user to insert data for registering and/or logging into the system, and a main activity 106 which has a UI allowing the user to utilise the media recording features in the app. In one example, the main activity has a fragment container for the recording/editing/publishing fragments and a second fragment container for the remaining fragments.
Outputting
The computer implemented method may further be configured to output the MMF or a selected portion (see the editing section below) by transmitting it over a network. This may distribute the MMF over the network to a plurality of recipients. For example, the MMF may be a Podcast.
The MMF may be uploaded to an external device. The external device may be a server that is accessible via a network such as the internet or other network as described elsewhere herein.
Recording
The method may use a recording module. This may be a module of the App itself or may be a module residing elsewhere, for example on the OS and being called via an Application Program Interface (API).
The input media signals are received from a media recording apparatus of the mobile computing device. Starting recording of any of the first or second media data may comprise enabling the media recording apparatus of the mobile device. Stopping recording of any of the first or second media data may comprise disabling the media recording apparatus of the mobile computing device.
Media recording apparatus includes any one or more of, but not limited to: a microphone, a camera.
Media data
The media data is preferably digital media. The first media data and second media data are in the same format. The first and second media data may comprise audio and/or video. The first media data and second media data may each be individually playable files in the same format.
Any audio format may be used, including, but not limited to, any of the following: 3GPP (.3gp); MPEG-4 (.mp4, .m4a); ADTS raw AAC (.aac); MPEG-TS (.ts); FLAC (.flac); MIDI Type 0 and 1 (.mid, .xmf, .mxmf); RTTTL/RTX (.rtttl, .rtx); OTA (.ota); iMelody (.imy); MP3 (.mp3); Matroska (.mkv); WAVE (.wav); Ogg (.ogg).
An example of a preferred audio format is m4a.
Any video format may be used including, but not limited to, any of the following: 3GPP (.3gp); MPEG-4 (.mp4); MPEG-TS (.ts; AAC audio only, not seekable); WebM (.webm); Matroska (.mkv). Video resolution may be any of: SD (low quality); SD (high quality); HD (720p or 1080p).
In some examples, the media is audio only.
The media data may be used as part of a podcast. The media may be a series of audio and/or video digital media files which are distributed over the Internet through Web feeds to portable media players. The method may output the media file to a remote server, which in turn stores the media files and allows them to be distributed. The upload to the remote server can be initiated once media recording has stopped. The upload may also be conditionally initiated by requesting an input from the user and upon receiving the requested input, uploading the media file to the remote server. This input may be an indication to publish. The input may be via a touch input on an interactive image object on a GUI. The merged MMF may be edited by the user using the app. Methods of editing may include any editing including but not limited to the method of selecting data presented elsewhere herein.
UI pause and restart recording
The different media data recorded prior to being merged may be different instances of the recording module, such that the above-said first and second media data are, respectively, first and second instances of running a media recording module. The said stopping recording the first media data may comprise releasing the recording module.
The stopping of recording the first media data may be dependent upon a first interaction with one or more image objects on a touchscreen of the mobile device. This image object may be presented as a button, such as a pause button.
The starting recording of the second media data may be dependent upon a second interaction with said one or more image objects on the touchscreen of the mobile device. The interactive image object used by the user to stop the recording of the first media data may be a different state of the same image object used to start the recording of the second media data. For example, the interactive image object may be displayed as a pause button (one state) whilst the first media data is being recorded. When the user touches the pause button, the first media data recording is stopped as described. This may stop the recording process of the first media data, disable (for example) the microphone used to record the first media data and release resources associated with a current recorder module. The first media data may then be stored on a memory on the mobile device. A file path of the first media data may be stored in an array. The interactive image object then reverts to another state showing a record button. When the user interacts with the record button, the recording of the second media data starts. Therefore, the method may be configured such that stopping recording the first media data is upon the first interaction with a first state of a first image object on the touchscreen of the mobile device, and starting recording the second media data is upon a second interaction with a second state of the first image object on the touchscreen of the mobile device.
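A minimal sketch of this two-state button behaviour is given below, assuming the Android MediaRecorder API; the createConfiguredRecorder() helper and the file naming are illustrative rather than taken from the disclosure.

```java
import android.media.MediaRecorder;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class PauseByStopRecorder {
    private final List<String> segmentPaths = new ArrayList<>();
    private MediaRecorder recorder;
    private String currentPath;

    // "Record" state tapped: start a fresh MediaRecorder instance for the
    // next piece of media data.
    public void onRecordTapped(String outputPath) throws IOException {
        currentPath = outputPath;
        recorder = createConfiguredRecorder(outputPath);
        recorder.start();
    }

    // "Pause" state tapped: stop this instance, release the microphone and
    // recorder resources, and remember the segment's file path for merging.
    public void onPauseTapped() {
        if (recorder == null) return;
        recorder.stop();     // ends recording and frees the audio input
        recorder.release();  // releases resources held by this instance
        recorder = null;
        segmentPaths.add(currentPath);
    }

    public List<String> getSegmentPaths() {
        return segmentPaths;
    }

    // Hypothetical helper; format and encoder settings as described elsewhere herein.
    private static MediaRecorder createConfiguredRecorder(String outputPath)
            throws IOException {
        MediaRecorder r = new MediaRecorder();
        r.setAudioSource(MediaRecorder.AudioSource.MIC);
        r.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        r.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        r.setOutputFile(outputPath);
        r.prepare();
        return r;
    }
}
```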
Merging the first and second media data may be achieved by a parsing process that utilises the filepaths of the stored data in the array.
The method may be configured such that the start of the second media data follows, for example immediately follows, the end of the first media data in the media file.
Editing
When the above method creates a media file it may then provide user editing functionality. This functionality allows a user to edit the file before it is published and output. The editing functionality may also be provided for audio files that have not been created by the methods described herein. Furthermore, the editing functionality now described may be embodied on any computing system.
There is further presented, as another aspect of the present disclosure, a computer implemented method for selecting data of a digital audio file. Figure 4 shows an example of this method. The method comprises:
(at step S101) receiving a first signal associated with a first touch input on a touch sensitive interface, the touch sensitive interface displaying a graphical representation of at least a portion of the digital audio file; the first touch input corresponding to a spatial position of a first predefined set of positions on the touch sensitive interface;
and in any order,
A) (at step S102) determining a first timing position in the digital audio file based on the first signal;
B) (at step S103) receiving a second signal associated with a second touch input on the touch sensitive interface; the second touch input corresponding to a spatial position of a second pre-defined set of positions on the touch sensitive interface; the first pre-defined set of positions being spatially separate, on the touch sensitive interface, to the second pre-defined set of positions; and, (at step S104) determining a second timing position in the digital audio file based on the second signal;
(at step S105) selecting at least a portion of the digital audio file based at least on the second timing position, the said selected portion comprising a plurality of timing positions including the first timing position.
This method may be adapted according to any feature described herein. The adaptation may include any one or more of: the removal or modification of any one or more of the abovementioned steps S101-S105; the addition of one or more further steps at the start, end or in between any of the steps S101 to S105. For purposes of further discussion, the term 'timing position' may also be referred to as 'playback position'.
The predefined set of positions relates to the method only recognising and accepting inputs on the touch interface in a certain area or areas of the interface. If a touch input is provided on a position that does not correspond to one of the predefined positions then that input is not used by the method as described above, but can of course be an input for another function. The positions may be pre-stored in memory or generated by the method upon displaying the image of the waveform of the audio file. The position set may be defined based upon a property of the digital media, for example its size and shape when displayed as a GUI image.
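As a hedged illustration of such predefined position sets, the sketch below places the first set in a strip along the top edge of the waveform image and the second set along the bottom edge, and maps a touch's horizontal coordinate linearly to a timing position; the strip layout, field names and linear mapping are assumptions for illustration only.

```java
/** Hypothetical hit-testing sketch for two spatially separate position sets. */
public class WaveformTouchMapper {
    private final int viewWidthPx;   // width of the waveform image
    private final int viewHeightPx;  // height of the waveform image
    private final long durationMs;   // playback length of the audio file
    private final int stripHeightPx; // height of each predefined touch strip

    public WaveformTouchMapper(int widthPx, int heightPx,
                               long durationMs, int stripHeightPx) {
        this.viewWidthPx = widthPx;
        this.viewHeightPx = heightPx;
        this.durationMs = durationMs;
        this.stripHeightPx = stripHeightPx;
    }

    // First predefined set: a strip along the top edge of the waveform.
    public boolean inFirstSet(float x, float y) {
        return y >= 0 && y < stripHeightPx && x >= 0 && x < viewWidthPx;
    }

    // Second predefined set: a spatially separate strip along the bottom edge.
    public boolean inSecondSet(float x, float y) {
        return y >= viewHeightPx - stripHeightPx && y < viewHeightPx
                && x >= 0 && x < viewWidthPx;
    }

    // The horizontal position maps linearly to a timing (playback) position.
    public long toTimingPositionMs(float x) {
        float clamped = Math.max(0, Math.min(x, viewWidthPx));
        return (long) (clamped / viewWidthPx * durationMs);
    }
}
```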
This method therefore allows a user interacting with a touchscreen to edit an audio waveform easily, because the initial touch point can be used to denote a marker indicating where on the waveform the user is interested in selecting data, whilst the second touch can be used to indicate the extent of the portion of the data to select. Because the user has to position his/her finger or pointer at different positions on the touch interface for selecting the two playback positions, the computing device does not confuse the selection of the second playback position with a re-selection of the first playback position. The user is therefore able to make the selection quickly.
Figures 5a and 5b show an example of this method. A touch enabled Graphical User Interface (GUI) 200 shows an image 202 showing a graphical representation 204 of the digital audio file. A first region 206 is shown that contains the first set of predefined positions. A second region 208 is shown that contains the second set of predefined positions. Figure 5b shows the same GUI of figure 5a where the user has touched: a first position 210 on the GUI 200 corresponding to a position of the first pre-defined set of positions (in the first region 206); a second position 212 on the GUI 200 corresponding to a position of the second pre-defined set of positions (in the second region 208). In this example dotted line image objects 214 are displayed extending over the image 202 from the touch position so the user can identify where in the audio file they have selected. The double arrow 216 indicates the selected portion of the waveform, hence the portion of the audio file selected.
Further optional features for this method are now discussed, wherein the method can be adapted to have any one or more of the described features.
Any of the received signals may be electronic signals received from one or more touch sensors operatively connected to the touch sensitive interface. The touch input can be a touch or contact by a user's finger or any other implement such as a stylus or other physical pointer.
The graphical representation of at least a portion of the digital audio file may comprise spatially distributed data values along a direction across the GUI; the spatial distribution corresponding to the timing of the playback positions. The first and second pre-defined sets of positions may run parallel to the said direction. An example of this is shown in figures 5a and 5b.
The predefined position sets may be located anywhere on the GUI as long as they are not overlapping. They may be spatially separated such that regions of the GUI exist between the two predefined regions, such that a touch area is present on the interface that does not form part of a predefined set.
The first and/or second set of pre-defined positions may respectively comprise a region wherein each position is adjacent to another position within the respective set.
The regions may comprise a spatial strip of the GUI extending substantially along the length of the waveform (i.e. parallel to the above-said 'direction'). The image of the waveform may be a rectangular image showing data inside the rectangle. The data may be displayed about an axis running parallel to the said direction. The audio data may be displayed as amplitude values that vary in time along the said axis, where the length of the axis represents time in the audio file.
A region containing the predefined positions may overlap the GUI image and/or may be adjacent to an edge of the image, preferably an edge running parallel to the axis or said 'direction', for example bordering the edge and extending inwardly and/or outwardly from the said image edge.
The second timing position on the media file may be different from, or the same as, the first timing position on the media file.
The method may further comprise: receiving a third signal associated with a third touch input on the touch sensitive interface; the third touch input corresponding to a further position of the second pre-defined set of positions on the touch sensitive interface; determining a third timing position in the digital audio file based on the third signal; the second timing position being different from the third timing position; selecting the said portion based further on the third timing position.
Figure 5c shows an example similar to figures 5a and 5b where the second 212 and third 215 timing positions, respectively relating to second and third positions (shown as 212 and 215) on the GUI, are the boundaries of the selected portion 216.
The user may initially touch one position and then move the touch contact, for example swiping and releasing in a gesture. The said spatial position on the GUI may be the touch position anywhere along the gesture, for example at the end point where the touch is removed from the touch screen. The second and third touches may be performed simultaneously, for example by a user touching the screen with two fingers and moving them apart.
The method may display an image object, such as a line, that extends over the waveform image to show the user where on the waveform one of the positions is. The actual positions may be represented by object markers on the GUI that may be present at any of: the start of a user gesture, during a user gesture, at the end of a user gesture, after a user has removed the touch from the screen. Interactive image objects may be used on the GUI such that a person touches and moves an image object in a gesture. The image object may reside about one of the predefined set of positions and be movable (and releasable) about the GUI along the plurality of positions in the set.
Upon receiving the first signal, the method may output a graphical object overlaying at least one of: A) the first timing position on the graphical representation of the digital audio file; B) a portion of the second set of predefined positions. An example of this is shown in figure 5c where a dotted line extends across the image of the waveform and into the region having the predefined positions.
An interactive image object may be used as a marker for indicating the first position. This marker may be called the first marker or the first interactive image object, and may be displayed upon the user touching the GUI with the first touch input. The marker may reside at least partially over, or fully over the location of the first touch input. The marker is spatially separated on the screen from other markers associated with the second and third (or further) touch positions. This marker can then be used by the user to edit or otherwise manipulate the selected portion of the audio file. Examples of editing or manipulating the selection include any of: cutting, copying, pasting, moving (for example via a dragging and moving the selected area via a movement gesture) or providing the user with a list on the GUI of optional actions to take.
The method may therefore display an interactive image object about the first touch input position; receive a signal corresponding to a further touch input on the interactive image object; and perform an operation on the selected portion based on the said signal.
This operation could be, for example, to move the selected portion and insert it into a different part of the waveform. Having the user interact with the image object (marker) about the first timing position (on the GUI) means that the manipulation can be achieved without the user accidentally changing the selection area, which may be dictated by the position on the GUI of one or more (for example two) other user interactive markers. Thus, in the example shown in figure 5c, the user can select the portion of the media file using the bottom two markers 212, 215 (either by moving them and/or creating them in the first instance by touching a point on the GUI), but control or manipulate the selected area using the top marker 210. The physical separation of these markers on the GUI allows the user to easily perform different functions without accidentally engaging with an unintended marker that is used to set another function.
Where the selected portion of the digital audio file is a first selected portion, the method may perform an operation on the first selected portion of the digital audio file and a second selected portion of the audio file that is different to the first selected portion; the operation being performed when the second timing position or third timing position of the first selected portion is the same as a respective second or third timing position of the second selected portion.
This operation may be an automatic merging of the two selected areas or a request for a user to input whether they would like to merge the selected areas. Other operations are also possible. For example, an initial section of the audio file may have been selected; it may have its own first marker used by a user to control/manipulate the initial section and two spatially separate boundary selection markers indicating the start and end timing positions in the overall audio file where that selection starts and ends. Any one or more of these markers may have lines extending outwardly and away from the marker, going at least partially across the waveform image in a direction perpendicular to the direction of time in the waveform image in the GUI. This selection may be made using the methods described herein. Another similar selection of the overall audio file may be made using the methods described herein. The two selections may be initially spatially separate from each other on the GUI such that all of the portions of the audio file in the one selection are different to the portions of the other selection. Figure 11b shows an example of a GUI where two such selections have been made. The user may then touch the GUI to engage with the first marker of one of the selections and drag it using a touch gesture (or otherwise control the selection, e.g. by cutting and pasting) to move it adjacent to the other selection. When moved adjacent to the other selected area, the outer selection border of one selection is either immediately adjacent to, or overlapping (or within a predefined positional tolerance on the GUI of), an outer selection border of the other selection, with one selected area extending outwardly (in GUI position) from the boundary edge of the other selected area.
Therefore, the method may be configured such that the operation is performed upon an input signal associated with the user moving any of: A) the first selected portion; B) an image object associated with any of the second or third timing positions of the first selected portion.
For example, a user may have selected a plurality of regions of the media file (see for example figure 11b) and may want to change the start or stop time position of the selection by interacting with a marker on the GUI representing and controlling an end timing position and moving it. When the marker is moved so that it is adjacent to, coincident with, or moves over a boundary timing point (on the GUI) of another selection, the operation may then be performed.
The method may be configured to output the selected portion of the digital audio file. This process may include generating a new audio file containing the one or more selected portions.
This output may include sending the selected portion or selected portions (if a plurality of selected portions has been made by the user) to a remote device, such as a server. This action of sending out the selected portion may be automatic upon the user indicating that the editing process is finished. The action of selecting only the one or more selected portions to output (and disregarding other non-selected portions of the audio file) may also be automatic upon the user indicating that the editing process is finished.
The method may merge two or more selected portions of the media file together upon the user indicating the editing process is finished so that the user does not have to provide further input to create the merged file. This may also reduce the output file size. Merging may be accomplished using any method described herein including abutting the selections together in time so that the end timing point of one selection coincides with the starting timing point of another selection.
The method may also provide for, upon receiving a user input, the playback, on the mobile device, of the selected portions of the media data. This may occur prior to, or after, a user providing an input to indicate that the editing process has finished and/or that the selected portion of the MMF is to be output (for example published by being uploaded to a remote server).
The computing device
Further to the features described above for the computing device, the following features may also form part of the computing device of any of the methods described herein.
The computing device can be an electronic device and comprise an operating system. The computing device may be a mobile device such as, but not limited to, a phone, tablet or laptop. The operating system can be real-time, multi-user, single-user, multi-tasking, single-tasking, distributed, or embedded. The operating system (OS) can be any of, but not limited to, Android ®, iOS ®, Linux ®, a Mac operating system, a version of Microsoft Windows ®. The OS may have inbuilt software modules that may be called by a method of the present application, for example by using an API. The systems and methods described herein can be implemented in or upon computer systems.
The computing device may include various combinations of a central processor or other electronic processing component, an internal communication bus, various types of memory or storage media (RAM, ROM, EEPROM, cache memory, disk drives, etc.) for code and data storage, and one or more network interface cards or ports for communication purposes.
The devices, systems, and methods described herein may include or be implemented in software code, which may run on such computer systems or other systems. For example, the software code can be executable by a computer system, for example, that functions as the storage server or proxy server, and/or that functions as a user's terminal device. During operation the code can be stored within the computer system. At other times, the code can be stored at other locations and/or transmitted for loading into the appropriate computer system. Execution of the code by a processor of the computer system can enable the computer system to implement the methods and systems described herein.
The computing devices may comprise various communication capabilities to facilitate communications between different devices. These may include wired communications (such as electronic communication lines or optical fibre) and/or wireless communications. Examples of wireless communications include, but are not limited to, radio frequency transmission, infrared transmission, or other communication technology. The hardware described herein can include transmitters and receivers for radio and/or other communication technology and/or interfaces to couple to and communicate with communication networks.
The computing device may be able to communicate with other electronic devices, for example, over a network. The computing device may be able to communicate with an external device using a variety of communication protocols. A set of standardized rules, referred to as a protocol, may be utilized to enable electronic devices to communicate. A network may be a small system that is physically connected by cables or via wireless communication (a local area network or LAN). The computing device may be a part of several separate networks that are connected together to form a larger network (a wide area network or WAN). Other types of networks of which the computing device can be a part include the internet, telecom networks, intranets, extranets, wireless networks, and other networks over which electronic, digital and/or analog data can be communicated.
The computing device may comprise a visual interface and a touch interface. These interfaces may be separate or may be part of the same component, for example a touch enabled graphical display.
Further example of operation
The following is an illustrated example of an app embodying the methods described herein.
The features described in this example can be modified according to any of the other features and steps described elsewhere herein. Furthermore, the methods described elsewhere herein may be modified according to any feature described in this illustrative example. The illustrated example describes certain software modules, for example 'PhoneStateListener'. It is understood that other functionally equivalent modules on different OS platforms may be used.
The App is a social audio/podcast mobile app with enhanced audio editing functionality compatible with Android handsets running Android Jelly Bean 4.3.1 (SDK ver. 18) and above. The main functionality consists of:
- Sign up
- Sign in
- Reset password
- Record podcast
- Edit podcast
- Publish podcast
- Save/Load podcast draft
- Listen to podcast
- Like podcast
- Add comment (including audio comment)
- Mention users and add hashtags
- Share podcast (and its bookmark with a timestamp)
- Search for people/tags/places
- Follow/Unfollow
- Receive notifications
- View user's profile
- Edit profile
- Find Friends on Facebook/Twitter
App structure
Activities and Fragments
The app consists of 3 Activities (Figures 6a-6c). All of them extend AppCompatActivity in order to guarantee broad compatibility, especially when it comes to the User Interface (UI):
Figure 6a shows a mobile device having the UI of the SplashActivity. This activity uses a no-layout approach that loads the activity with a style containing the splash screen image, so it loads faster and doesn't suffer from blank screens.
Figure 6b shows a mobile device having the UI of the AuthenticationActivity. This activity contains all the Authentication Fragments (landing/login, sign up, reset password) presented in one FrameLayout container. This allows the App to present all the fragments smoothly with a horizontal navigation and also maintain the back stack.
Figure 6c shows a mobile device having the UI of the MainActivity. This UI holds the BottomBar, BottomPlayer and two fragment containers: one container for the recording/editing/publishing fragments and a second for all the rest. This approach provides easy access to all the main screens as well as keeping the recording/editing/publishing flow separate, so it prevents accidental flow disruptions/screen dismissals.
Navigation
The app's navigation consists of a bottom navigation bar with 5 tabs - Homefeed, Discover, Record, Notifications and Profile (Figure 7). Access to the Settings is available from the Profile Fragment by tapping the icon in the top right corner.
Bottom Player + Audio service
For podcast playback the app uses a custom BottomPlayer that extends LinearLayout, along with BottomSheetBehaviour that is responsible for dragging it up and down (Figures 8a and 8b). When the player is being expanded the bottom bar hides. When the app goes to the background the audio continues to play in the service and provides media control (play, pause, etc.) via a notification (Figure 8c). The software modules PhoneStateListener and TelephonyManager are used for pausing the playback when receiving a phone call. Android's AudioManager API (the OnAudioFocusChangeListener interface) handles audio behaviour when a user connects/disconnects headphones.
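A rough sketch of the call-handling wiring, assuming the standard TelephonyManager/PhoneStateListener API, is shown below; the MediaController interface is a hypothetical stand-in for the app's playback service, and resuming on idle is an assumption rather than stated behaviour.

```java
import android.content.Context;
import android.telephony.PhoneStateListener;
import android.telephony.TelephonyManager;

public class CallAwarePlayback {

    /** Hypothetical interface standing in for the app's player service. */
    public interface MediaController {
        void pause();
        void resume();
    }

    // Pauses playback while a call is ringing or in progress.
    public static void register(Context context, final MediaController player) {
        TelephonyManager tm =
                (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
        tm.listen(new PhoneStateListener() {
            @Override
            public void onCallStateChanged(int state, String incomingNumber) {
                if (state == TelephonyManager.CALL_STATE_RINGING
                        || state == TelephonyManager.CALL_STATE_OFFHOOK) {
                    player.pause();          // incoming or active call
                } else if (state == TelephonyManager.CALL_STATE_IDLE) {
                    player.resume();         // assumed behaviour once the call ends
                }
            }
        }, PhoneStateListener.LISTEN_CALL_STATE);
    }
}
```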
The app provides easy access to the record button that triggers the recording/editing/publishing flow. A separate container is used that sits on top of the bottom bar. This prevents accidental recording disruptions (Figure 9). Figure 10a shows the UI having the recording button.
The recording/editing/publishing flow was the most challenging to implement as we've stumbled upon a few issues that we had to overcome.
Record Screen
For the recording we've used the native MediaRecorder API that works well with built-in/external microphones, as this solution would be the most stable, clean to use and compatible with the target Android versions. However, it turned out that some methods were not available to us. For example, we couldn't use MediaRecorder.pause() and MediaRecorder.resume(): the app is usable from API level 18 onwards, but 'pause' and 'resume' type methods aren't available for API levels below 24.
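For context, MediaRecorder.pause() and MediaRecorder.resume() only exist from API level 24 (Android N); a compatibility guard such as the hedged sketch below would make that boundary explicit. The app described here deliberately avoids the native pause altogether and always uses the stop-and-merge fallback, as explained next.

```java
import android.media.MediaRecorder;
import android.os.Build;

// Sketch only: illustrates the API-level boundary for MediaRecorder.pause().
public final class RecorderPauseCompat {
    private RecorderPauseCompat() {}

    public static void pause(MediaRecorder recorder, Runnable stopSegmentFallback) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
            recorder.pause(); // native pause available from API 24
        } else {
            // pre-24: stop and release this instance, record the next
            // segment with a new instance, and merge the files later
            stopSegmentFallback.run();
        }
    }
}
```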
For the illustrated examples disclosed herein, every time a user pauses the recording the app calls MediaRecorder.stop() and MediaRecorder.release(). This stops the recording process, disables the microphone and releases resources associated with the current MediaRecorder object. When that's done, the app adds the file path of the last recording to an array. When the user resumes the recording, the whole process starts again. Finally, when the user finishes the process and decides to proceed to either the 'publish' or 'edit' screen, the app merges all the files from the array using the open source Mp4Parser library (licensed under the Apache License, Version 2.0). This is possible because the audio files are of the same type and the decoder settings are the same.
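A minimal sketch of that merge step, assuming the Mp4Parser 1.x package layout, might look as follows; error handling and the ordering of paths in the array are simplified.

```java
import com.coremedia.iso.boxes.Container;
import com.googlecode.mp4parser.authoring.Movie;
import com.googlecode.mp4parser.authoring.Track;
import com.googlecode.mp4parser.authoring.builder.DefaultMp4Builder;
import com.googlecode.mp4parser.authoring.container.mp4.MovieCreator;
import com.googlecode.mp4parser.authoring.tracks.AppendTrack;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;

public class SegmentMerger {
    // Concatenates same-format audio segments into one file, in array order.
    public static void merge(List<String> segmentPaths, String outputPath)
            throws IOException {
        List<Track> audioTracks = new ArrayList<>();
        for (String path : segmentPaths) {
            Movie movie = MovieCreator.build(path);
            for (Track track : movie.getTracks()) {
                if ("soun".equals(track.getHandler())) { // audio tracks only
                    audioTracks.add(track);
                }
            }
        }
        Movie merged = new Movie();
        merged.addTrack(new AppendTrack(audioTracks.toArray(new Track[0])));
        Container out = new DefaultMp4Builder().build(merged);
        try (FileChannel channel = new FileOutputStream(outputPath).getChannel()) {
            out.writeContainer(channel);
        }
    }
}
```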
The app is adapted to visualize the recording process in a way that's clear for the user. Since MediaRecorder doesn't give direct access to the audio buffer, an FFT algorithm was not used for the waveform. Instead the app uses a custom RecorderVisualizerView that displays a waveform based on MediaRecorder's current maximum amplitude (Figure 10c). This solution is beneficial as it indicates when recording is on, and also shows the sound dynamics. The use of this simple view, refreshed/animated using Handler.postDelayed() every 20 milliseconds, makes the process lightweight and smooth.
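The amplitude polling could be sketched roughly as below; AmplitudeSink is a hypothetical stand-in for the custom RecorderVisualizerView, and the 20 ms interval matches the refresh described above.

```java
import android.media.MediaRecorder;
import android.os.Handler;

// Polls the recorder's current maximum amplitude every 20 ms and hands it
// to the waveform view for drawing.
public class AmplitudePoller {
    private static final long INTERVAL_MS = 20;

    /** Hypothetical stand-in for RecorderVisualizerView. */
    public interface AmplitudeSink {
        void addAmplitude(int maxAmplitude);
    }

    // Must be created on a thread with a Looper (e.g. the main thread).
    private final Handler handler = new Handler();
    private final MediaRecorder recorder;
    private final AmplitudeSink view;
    private boolean running;

    public AmplitudePoller(MediaRecorder recorder, AmplitudeSink view) {
        this.recorder = recorder;
        this.view = view;
    }

    private final Runnable tick = new Runnable() {
        @Override
        public void run() {
            if (!running) return;
            // getMaxAmplitude() returns the peak since the previous call
            view.addAmplitude(recorder.getMaxAmplitude());
            handler.postDelayed(this, INTERVAL_MS);
        }
    };

    public void start() { running = true; handler.post(tick); }
    public void stop()  { running = false; handler.removeCallbacks(tick); }
}
```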
Drafts Screen
From the Record screen, a user can open the Drafts Fragment (Figure 10b), where all previously saved but not yet published podcasts are displayed within a RecyclerView's cells. The Android Room Persistence Library manages the database that holds the file paths and basic information about each podcast (title, date and time).
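A sketch of how such a draft could be modelled with Room; the entity and DAO names and the exact fields are assumptions (the source only states that a file path, title, date and time are stored), and the annotation package depends on the Room version in use:

```java
import androidx.room.Dao;
import androidx.room.Entity;
import androidx.room.Insert;
import androidx.room.PrimaryKey;
import androidx.room.Query;

import java.util.List;

// Hypothetical Room entity holding the file path and basic podcast info.
@Entity(tableName = "drafts")
class Draft {
    @PrimaryKey(autoGenerate = true) public long id;
    public String title;
    public String filePath;
    public long recordedAtMillis; // date and time of the recording
}

// Hypothetical DAO backing the Drafts Fragment's RecyclerView.
@Dao
interface DraftDao {
    @Query("SELECT * FROM drafts ORDER BY recordedAtMillis DESC")
    List<Draft> getAll();

    @Insert
    long insert(Draft draft);

    @Query("DELETE FROM drafts WHERE id = :id")
    void delete(long id);
}
```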
Edit Screen
The App comprises an Edit Screen. This is where users can manipulate their podcast files before they are published. The App may use a method as previously described to select the portion of the podcast file to edit.
The app creates a waveform of the audio file that can be scrolled and scaled, is accurate, and is fast to generate.
The following recording settings were found to give the best results across various Android devices in terms of speed, quality and compatibility (a minimal configuration sketch follows the list):
- Output Format = MPEG4
- Audio Encoder = AAC
- Audio Sampling Rate = 44100
- Audio Encoding Bit Rate = 96000
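As referenced above, a minimal sketch of configuring MediaRecorder with these settings; the output path is a placeholder, and the setter order shown is the one MediaRecorder requires:

```java
import android.media.MediaRecorder;
import java.io.IOException;

public class RecorderFactory {
    // Configure MediaRecorder with the settings listed above.
    public static MediaRecorder create(String outputPath) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC); // before format
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC); // after format
        recorder.setAudioSamplingRate(44100);
        recorder.setAudioEncodingBitRate(96000);
        recorder.setOutputFile(outputPath);
        recorder.prepare(); // must be called after all setters, before start()
        return recorder;
    }
}
```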
Instead of keeping cached audio values for all zoom levels, the app dynamically draws the waveform based on the input values and a scaling factor (see Figure 11a).
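One way such on-the-fly scaling can work, as a sketch; the peak-per-pixel bucketing strategy is an assumption, since the drawing code is not detailed here:

```java
// Sketch: downsample per-frame amplitudes into one value per on-screen
// pixel column for the current zoom level, instead of caching a separate
// array per zoom level.
public class WaveformScaler {
    public static float[] scale(float[] amplitudes, int widthPx, float zoom) {
        int visibleFrames = Math.max(1, (int) (amplitudes.length / zoom));
        float framesPerPixel = (float) visibleFrames / widthPx;
        float[] columns = new float[widthPx];
        for (int x = 0; x < widthPx; x++) {
            int start = (int) (x * framesPerPixel);
            int end = Math.min(amplitudes.length,
                    (int) ((x + 1) * framesPerPixel) + 1);
            float max = 0f;
            for (int i = start; i < end; i++) max = Math.max(max, amplitudes[i]);
            columns[x] = max; // peak per pixel keeps transients visible
        }
        return columns;
    }
}
```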
The above modifications let us cut the processing time by almost 50% compared to Ringdroid.
Example:
Using a low spec device: LG Nexus 5 (Chipset: Qualcomm MSM8974 Snapdragon 800, CPU: Quad-core 2.3GHz Krait 400, GPU: Adreno 330, 2 GB RAM)
Ringdroid: it took 3.51 s to generate a waveform of a 30 s sample audio recorded in the app and display it in the fragment.
Present app: it took 1.86 s to generate a waveform of the same 30 s sample audio and display it in the fragment.
The App gives users additional functionality by having two layers of manipulation:
- Markers
- Audio file editing
Markers are lightweight, adjustable and easy-to-use section indicators that can easily be added, removed, moved horizontally, merged and re-sized. Each marker has a random unique colour, so markers are easy to distinguish, and has timestamps attached to its edges (Figure 11b). If the user places markers, then only the selected areas may be published. A user can preview the final outcome by tapping the Preview button, which enables the MediaPlayer API with speaker or headphones and plays only what is within the markers.
The app has marker views that are triggered when the user touches the WaveForm. There is typically a massive amount of user interaction on this Fragment. The app therefore limits the WaveFormView touch-active areas responsible for markers to the top and the bottom of the view only, so they do not interfere with scaling and scrolling. The top part recognizes (via GestureDetector plus custom calculations of time, offset, etc.) a short tap, which adds a new marker; a long tap, which removes a marker; and left/right movements, which relocate the selected section. The bottom of the view recognizes movement and relocates the edges of a marker independently. When two markers of different selection areas meet and the user confirms the merge, both timestamps (the start of the first marker and the end of the second) are stored, and a new marker based on those timestamps is added, replacing the two previous selections.
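A sketch of splitting the touch-active areas by vertical position; the band fractions and the callback names are assumptions, as the source only describes which gestures each band recognizes:

```java
import android.content.Context;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;

// Sketch: route taps in the top band of the waveform view to marker
// creation/removal, and drags in the bottom band to marker-edge moves,
// leaving the middle band free for scaling and scrolling.
public class MarkerTouchRouter implements View.OnTouchListener {
    private static final float TOP_BAND = 0.25f;    // top 25% of the view
    private static final float BOTTOM_BAND = 0.75f; // bottom 25% of the view

    private final GestureDetector topDetector;
    private final MarkerCallbacks callbacks; // hypothetical callbacks

    public MarkerTouchRouter(Context context, MarkerCallbacks cb) {
        this.callbacks = cb;
        this.topDetector = new GestureDetector(context,
                new GestureDetector.SimpleOnGestureListener() {
            @Override public boolean onSingleTapUp(MotionEvent e) {
                callbacks.addMarkerAt(e.getX());    // short tap: new marker
                return true;
            }
            @Override public void onLongPress(MotionEvent e) {
                callbacks.removeMarkerAt(e.getX()); // long tap: remove marker
            }
            @Override public boolean onScroll(MotionEvent e1, MotionEvent e2,
                                              float dx, float dy) {
                callbacks.moveSelection(-dx);       // drag: relocate section
                return true;
            }
        });
    }

    @Override public boolean onTouch(View v, MotionEvent event) {
        float fraction = event.getY() / v.getHeight();
        if (fraction < TOP_BAND) {
            return topDetector.onTouchEvent(event);
        } else if (fraction > BOTTOM_BAND) {
            callbacks.dragMarkerEdge(event.getX()); // move one marker edge
            return true;
        }
        return false; // middle band: fall through to scaling/scrolling
    }

    public interface MarkerCallbacks {
        void addMarkerAt(float x);
        void removeMarkerAt(float x);
        void moveSelection(float dxPixels);
        void dragMarkerEdge(float x);
    }
}
```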
The above solution covers most basic audio editing scenarios. It lets a user select the areas to be published and skip the unwanted ones - such as long pauses, noises, etc.
Sometimes, however, a user may want something more advanced - for example, to change the order of the podcast so that the last few minutes play at the beginning. This is also possible thanks to the copy/paste/delete functionality supported by the app.
When a marker is selected and the Copy button is tapped, both timestamps (start/end) of the selection are stored and a white Edit Marker appears (Figure 11c). This marker can then be moved freely on the waveform. When the user taps the Paste button, the parts before and after the white Edit Marker (on the right hand side) are cut using the writeFile() method of Google's open source SoundFile.java class (licensed under the Apache License, Version 2.0). This method generates separate audio files in the correct format using the MediaCodec API and a FileOutputStream, based on all the timestamps (the source marker's start/end and the Edit Marker's exact position). After that, everything is re-merged using the open source Mp4Parser library (licensed under the Apache License, Version 2.0).
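A sketch of the paste operation expressed with a stand-in for that writeFile() method (the exact signature of the app's copy of SoundFile.java may differ) and the RecordingMerger sketch from earlier:

```java
import java.io.File;
import java.io.IOException;
import java.util.Arrays;

public class PasteOperation {
    // Rebuild the recording so a copy of the selection [selStart, selEnd]
    // is inserted at editPos: head + selection + tail, then re-merge.
    public static void paste(SoundFile source, double selStart, double selEnd,
                             double editPos, double totalSeconds, File dir)
            throws IOException {
        File head = new File(dir, "head.m4a");
        File selection = new File(dir, "selection.m4a");
        File tail = new File(dir, "tail.m4a");

        source.writeFile(head, 0, editPos);            // before the Edit Marker
        source.writeFile(selection, selStart, selEnd); // the copied section
        source.writeFile(tail, editPos, totalSeconds); // after the Edit Marker

        File output = new File(dir, "edited.m4a");
        RecordingMerger.merge(
                Arrays.asList(head.getPath(), selection.getPath(), tail.getPath()),
                output.getPath());
    }

    // Hedged stand-in for the writeFile() method of SoundFile.java.
    public interface SoundFile {
        void writeFile(File out, double startSeconds, double endSeconds)
                throws IOException;
    }
}
```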
When the user finishes editing and taps 'Next' with no markers left on the screen, the whole file is passed to the 'Publish Screen'; otherwise, a file made of the selected areas only is passed to the 'Publish Screen'.
Publish Screen
The Publish screen (Figure 12a) lets a user set a Title and a Caption within EditText views. A keyboard is presented when a view is tapped. An 'Insert Photo' TextView triggers an image picker/camera and lets the user include an image.
'Location' opens a fragment where the user can search for and select a location and pass it back to the Publish Screen. This screen uses a SearchView that takes the user's input and a RecyclerView for the list of results (Figure 12b).
A simple waveform view has also been added, along with a progress indicator and a media control section, so users can replay their podcast and make sure everything is as it should be. At the bottom of the screen two buttons are available: 'Publish Now' and 'Publish Later' (Figure 12c).
'Publish Now' uploads the audio file to the server: the app creates a request body with all the podcast information and sends it via a REST call to our server. When that is done, the record/edit/publish flow finishes and all fragments in that container are removed.
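The source does not name the HTTP client or the endpoint; a sketch using OkHttp, with a placeholder URL and illustrative form-field names:

```java
import okhttp3.MediaType;
import okhttp3.MultipartBody;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

import java.io.File;
import java.io.IOException;

public class Publisher {
    private static final MediaType AUDIO = MediaType.parse("audio/mp4");

    // Hedged sketch: endpoint and field names are illustrative only.
    public static void publishNow(File audioFile, String title, String caption)
            throws IOException {
        RequestBody body = new MultipartBody.Builder()
                .setType(MultipartBody.FORM)
                .addFormDataPart("title", title)
                .addFormDataPart("caption", caption)
                .addFormDataPart("audio", audioFile.getName(),
                        RequestBody.create(AUDIO, audioFile))
                .build();
        Request request = new Request.Builder()
                .url("https://example.com/api/podcasts") // placeholder URL
                .post(body)
                .build();
        try (Response response = new OkHttpClient().newCall(request).execute()) {
            if (!response.isSuccessful()) {
                throw new IOException("Upload failed: " + response.code());
            }
        }
    }
}
```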
'Publish Later' saves the file path and all the information using the Android Room Persistence Library, which manages the database. The draft can then be accessed via the 'Drafts Screen'.
In order to address various issues and advance the art, the entirety of this disclosure shows by way of illustration various embodiments in which the claimed invention(s) may be practiced and provide for a superior Computer Implemented Method for Generating Media Data. The advantages and features of the disclosure are a representative sample of embodiments only, and are not exhaustive and/or exclusive. They are presented only to assist in understanding and to teach the claimed features. It is to be understood that advantages, embodiments, examples, functions, features, structures, and/or other aspects of the disclosure are not to be considered limitations on the disclosure as defined by the claims or limitations on equivalents to the claims, and that other embodiments may be utilised and modifications may be made without departing from the scope and/or spirit of the disclosure. Various embodiments may suitably comprise, consist of, or consist essentially of, various combinations of the disclosed elements, components, features, parts, steps, means, etc. In addition, the disclosure includes other inventions not presently claimed, but which may be claimed in future.
Claims (22)
1. A computer implemented method for:
use on a mobile computing device;
selecting data of a digital audio file, the method comprises:
receiving a first signal associated with a first touch input on a touch sensitive interface of the mobile computing device, the touch sensitive interface displaying a graphical representation of at least a portion of the digital audio file; the first touch input corresponding to a spatial position of a first predefined set of positions on the touch sensitive interface;
and in any order,
A) determining a first timing position in the digital audio file based on the first signal;
B) receiving a second signal associated with a second touch input on the touch sensitive interface; the second touch input corresponding to a spatial position of a second pre-defined set of positions on the touch sensitive interface; the first predefined set of positions being spatially separate, on the touch sensitive interface, to the second pre-defined set of positions; and, determining a second timing position in the digital audio file based on the second signal;
selecting at least a portion of the digital audio file based at least on the second timing position, the said selected portion comprising a plurality of timing positions including the first timing position.
2. The computer implemented method as claimed in claim 1 wherein the graphical representation of at least a portion of the digital audio file may comprise spatially distributed data values along a direction across the interface; the spatial distribution corresponding to the timing of the playback positions.
3. The computer implemented method as claimed in claim 2 wherein the first and second pre-defined set of positions run parallel to the said direction across the interface.
4. The computer implemented method as claimed in any preceding claim wherein the first and/or second set of pre-defined positions may respectively comprise a region on the interface wherein each position is adjacent to another position within the respective set.
5. The computer implemented method as claimed in any preceding claim comprising: receiving a third signal associated with a third touch input on the touch sensitive interface; the third touch input corresponding to a further position of the second pre-defined set of positions on the touch sensitive interface;
determining a third timing position in the digital audio file based on the third signal; the second timing position being different from the third timing position;
selecting the said portion based further on the third timing position.
6. The computer implemented method as claimed in any preceding claim; the method comprising:
displaying an interactive image object about the first touch input position; and, receiving a signal corresponding to a further touch input on the said interactive image object;
performing an operation on the selected portion based on the said received signal.
7. The computer implemented method as claimed in any preceding claim; wherein the selected portion of the digital audio file is a first selected portion, the method comprising: performing an operation on the first selected portion of the digital audio file and a second selected portion of the audio file that is different to the first selected portion; the operation being performed when the second timing position or third timing position of the first selected portion is the same as a respective second or third timing position of the second selected portion.
8. The computer implemented method as claimed in claim 7, wherein the operation is performed upon an input signal associated with the user moving any of:
A) the first selected portion;
B) an image object associated with any of the second or third timing positions of the first selected portion.
9. The computer implemented method as claimed in any preceding claim, the method comprising:
generating a further digital audio file by any of:
removing the unselected portions of the said digital audio file; and/or; choosing the selected portion of the digital audio file; and outputting the further digital audio file.
10. A mobile computing device comprising a processor, a touch sensitive interface and a memory; the memory comprising a digital audio file, the processor configured to:
receive a first signal associated with a first touch input on a touch sensitive interface of the mobile computing device, the touch sensitive interface displaying a graphical representation of at least a portion of the digital audio file; the first touch input corresponding to a spatial position of a first predefined set of positions on the touch sensitive interface;
in any order,
A) determine a first timing position in the digital audio file based on the first signal;
B) receive a second signal associated with a second touch input on the touch sensitive interface; the second touch input corresponding to a spatial position of a second pre-defined set of positions on the touch sensitive interface; the first predefined set of positions being spatially separate, on the touch sensitive interface, to the second pre-defined set of positions; and, determine a second timing position in the digital audio file based on the second signal;
select at least a portion of the digital audio file based at least on the second timing position, the said selected portion comprising a plurality of timing positions including the first timing position.
11. A computer implemented method for generating media data using a mobile computing device comprising a processor and a memory;
the method comprises, using the processor for:
upon receiving a first input, starting recording first media data from one or more input media signals;
upon receiving a second input, stopping recording the first media data; and storing the first media data in a memory;
upon receiving a third input, starting recording second media data from the one or more input media signals;
upon receiving a fourth input, stopping recording the second media data;
storing the second set of media data in the memory;
generating the media data at least by merging the first media data with the second media data.
12. The computer implemented method as claimed in claim 11 wherein:
the first and second media data are respectively, first and second instances of running a media recording module;
the said stopping recording of the first media data comprises releasing the recording module.
13. The computer implemented method as claimed in any of claims 11-12 wherein: stopping of recording the first media data is dependent upon a first interaction with one or more image objects on a touchscreen of the mobile computing device.
14. The computer implemented method as claimed in claim 13 wherein: the starting recording of the second media data is dependent upon a second interaction with said one or more image objects on the touchscreen of the mobile computing device.
15. The computer implemented method as claimed in any of claims 11-14, the method configured such that:
stopping recording the first media data is upon a first interaction with a first state of a first image object on the touch sensitive interface;
starting recording the second media data is upon a second interaction with second state of the first image object.
16. The computer implemented method as claimed in any of claims 11-15, the method configured such that the start of the second media data follows the end of the first media data in the media file.
17. The computer implemented method as claimed in any of claims 11-16 wherein the media data is audio data.
18. The computer implemented method as claimed in claim 17 wherein the input media signals are generated using a microphone of the mobile computing device.
19. The computer implemented method as claimed in claim 18 wherein upon stopping recording the first media data, the method disables the microphone.
20. The computer implemented method as claimed in any of claims 18 or 19 wherein: the steps of claim 11 are implemented by an audio recording module stored on the memory;
the audio recording module relinquishes control of the microphone upon stopping recording the first media data.
21. The computer implemented method as claimed in any of claims 11-20 wherein the method comprises:
generating respective filepaths associated with the first and second media data;
storing the filepaths in an array;
generating the media data by accessing the array.
22. A mobile computing device comprising a processor and a memory; the processor configured to:
upon receiving a first input, start recording first media data from one or more input media signals;
upon receiving a second input, stop recording the first media data; and storing the first media data in a memory;
upon receiving a third input, start recording second media data from the one or more input media signals;
upon receiving a fourth input, stop recording the second media data;
store the second set of media data in the memory;
generate the media data at least by merging the first media data with the second media data.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1812312.5A GB2575975A (en) | 2018-07-27 | 2018-07-27 | Computer implemented methods for generating and selecting media data |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB201812312D0 GB201812312D0 (en) | 2018-09-12 |
| GB2575975A true GB2575975A (en) | 2020-02-05 |
Family
ID=63518039
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB1812312.5A Withdrawn GB2575975A (en) | 2018-07-27 | 2018-07-27 | Computer implemented methods for generating and selecting media data |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2575975A (en) |
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100324709A1 (en) * | 2009-06-22 | 2010-12-23 | Tree Of Life Publishing | E-book reader with voice annotation |
| WO2011083962A2 (en) * | 2010-01-06 | 2011-07-14 | Samsung Electronics Co., Ltd. | Method and apparatus for setting section of a multimedia file in mobile device |
| EP2672686A2 (en) * | 2012-06-05 | 2013-12-11 | LG Electronics, Inc. | Mobile terminal and method for controlling the same |
| US20130332836A1 (en) * | 2012-06-08 | 2013-12-12 | Eunhyung Cho | Video editing method and digital device therefor |
| JP2014044600A (en) * | 2012-08-28 | 2014-03-13 | Yasuaki Iwai | Voice storage management apparatus and program |
| WO2015184423A2 (en) * | 2014-05-30 | 2015-12-03 | Apple Inc. | Audio editing and re-recording |
| US20150370474A1 (en) * | 2014-06-19 | 2015-12-24 | BrightSky Labs, Inc. | Multiple view interface for video editing system |
| WO2016044269A1 (en) * | 2014-09-16 | 2016-03-24 | Citrix Systems, Inc. | Capturing noteworthy portions of audio recordings |
Non-Patent Citations (4)
| Title |
|---|
| Gary Symons, VeriCorder Technology, 27 January 2009, 'How to Use Poddio Sound Editor on iPhone', YouTube, [online], Available from: https://www.youtube.com/watch?v=sQ1ZmJMIO2E, accessed on 24/07/19 * |
| Less-Is-Mor Ltd, 1 February 2017, LIMOR, Apple iTunes Store, [online], Available from: https://itunes.apple.com/gb/app/limor/id1151545350?mt=8 * |
| Nick Garnett, 1 July 2013, Voddio 4, YouTube, [online], Available from: https://www.youtube.com/watch?v=BsN7ElJ6UMY, accessed on 21/11/18 * |
| Studio 1 on 1, 3 January 2018, DTouch Tips & Tricks Episode 4 - Audio editing with touch, YouTube, [online], Available from: https://www.youtube.com/watch?v=kMQeWPTlfEM, accessed on 21/11/18 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) | |