
WO2014088917A1 - Systems and methods for music creation - Google Patents

Systems and methods for music creation

Info

Publication number
WO2014088917A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
music
continuous
user
logic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2013/072481
Other languages
English (en)
Other versions
WO2014088917A8 (fr)
Inventor
Thomas P. ROBERTSON
Kyle J. JOHNSEN
Adam Brown
Brian RUGGIERI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Georgia Research Foundation Inc UGARF
Original Assignee
University of Georgia Research Foundation Inc UGARF
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Georgia Research Foundation Inc (UGARF)
Priority to US14/648,040 (published as US20150309703A1)
Publication of WO2014088917A1
Publication of WO2014088917A8
Anticipated expiration
Legal status: Ceased (current)

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/0008Associated control or indicating means
    • G10H1/0025Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/101Music Composition or musical creation; Tools or processes therefor
    • G10H2210/141Riff, i.e. improvisation, e.g. repeated motif or phrase, automatically added to a piece, e.g. in real time
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/375Tempo or beat alterations; Music timing control
    • G10H2210/381Manual tempo setting or adjustment
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/091Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith
    • G10H2220/096Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith using a touch screen
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/091Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith
    • G10H2220/101Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith for graphical creation, edition or control of musical data or parameters
    • G10H2220/106Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith for graphical creation, edition or control of musical data or parameters using icons, e.g. selecting, moving or linking icons, on-screen symbols, screen regions or segments representing musical elements or parameters

Definitions

  • the disclosure herein relates to music creation systems (e.g., tablet or pad-based computing devices), methods, and graphical user interfaces.
  • the present disclosure relates to music creation systems and methods including graphical user interfaces configured for user interaction to create music.
  • the graphical user interface may define one or more regions or spaces used to create music that may relate to one or more characteristics of the music being created. For example, a music portion space may be depicted in the graphical user interface and one or more continuous sound structures may be added to the space. If a sound structure is moved up or down vertically within the space, the volume of the sound structure may be adjusted up or down, respectively. Likewise, if a sound structure is moved left or right horizontally within the space, the spatial location of the origin of the sounds represented by the sound structure may be adjusted left or right, respectively (e.g., between left and right speakers, within a multi-channel spatial arrangement of speakers, etc.).
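  • A minimal sketch of this position-to-mix mapping is shown below, assuming a normalized space and linear scaling; the function and parameter names are illustrative and not taken from the disclosure.

```python
# Hypothetical mapping from a sound structure's position in the music portion
# space to its volume (vertical) and stereo pan (horizontal).
def position_to_mix(x: float, y: float,
                    space_width: float, space_height: float) -> tuple[float, float]:
    """Return (volume, pan): volume in 0.0..1.0, pan in -1.0 (left)..+1.0 (right)."""
    volume = max(0.0, min(1.0, y / space_height))            # higher placement -> louder
    pan = max(-1.0, min(1.0, 2.0 * x / space_width - 1.0))   # left/right placement
    return volume, pan

# Example: a structure dropped slightly left of center, near the top.
print(position_to_mix(x=400.0, y=900.0, space_width=1024.0, space_height=1000.0))
# -> (0.9, -0.21875)
```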
  • the exemplary systems and methods described herein may be described as being able to provide users with the ability to record, arrange, and mix an entire song via an intuitive interface, which may be accomplished through touches, swipes, and fractal patterning to drive the majority of music design. Further, the exemplary embodiments may capture the essence of music creation and may project the music creation as a visual representation in a three-dimensional music space. Alongside the intuitiveness of the exemplary systems and methods, the touch-based design of one or more exemplary systems and methods may create an efficient music production application.
  • the present disclosure relates to an intuitive touch- based digital audio workstation (DAW) that streamlines music-making and recording processes for its users.
  • the DAW maintains an innovative fractal design (e.g., based on a "sound orb" template) that allows a user to visualize the music creation, arrangement, and mixing process in a three- dimensional space.
  • the exemplary DAW may provide greater precision in songwriting, decreased time for production, greater visual understanding of the "wall of sound," and a complete manipulation of a spatiotemporal music space.
  • One exemplary system for allowing a user to create music may include computing apparatus configured to generate music, sound output apparatus operatively coupled to the computing apparatus and configured to output sound generated by the computing apparatus, an input interface operatively coupled to the computing apparatus and configured to allow a user to create a portion of music, and a display apparatus operatively coupled to the computing apparatus and configured to display a graphical user interface.
  • the computing apparatus may be configured to depict a music portion space in the graphical user interface of the display apparatus for creating the portion of music, and allow a user, using the input apparatus, to add one or more continuous sound structures to the music portion space.
  • Each of the one or more continuous sound structures may include a plurality of sound elements arranged around a continuous loop representing a period of time.
  • One exemplary method for allowing a user to create music may include depicting a music portion space in a graphical user interface of a display apparatus for creating a portion of music and allowing a user, using an input apparatus, to add one or more continuous sound structures to the music portion space.
  • Each of the one or more continuous sound structures may include a plurality of sound elements arranged around a continuous loop representing a period of time.
  • Exemplary logic encoded in one or more non-transitory media that includes code for execution and when executed by a processor operable to perform operations may include depicting a music portion space in a graphical user interface of a display apparatus for creating a portion of music and allowing a user, using an input apparatus, to add one or more continuous sound structures to the music portion space.
  • Each of the one or more continuous sound structures may include a plurality of sound elements arranged around a continuous loop representing a period of time.
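  • As a concrete illustration of the structure described in the preceding items, a minimal sketch follows; the 16-step count matches the example given later in this disclosure, and the class and field names are assumptions for illustration only.

```python
# A hypothetical "continuous sound structure" (sound orb): sound elements
# arranged around a loop representing one period of time, each of which can
# be toggled between the enabled and disabled configurations.
from dataclasses import dataclass, field

@dataclass
class SoundElement:
    enabled: bool = False   # enabled -> a sound is output at this moment in the loop
    pitch: int = 0          # semitone offset (see the Edit Mode pitch discussion)

@dataclass
class ContinuousSoundStructure:
    name: str
    steps: list[SoundElement] = field(
        default_factory=lambda: [SoundElement() for _ in range(16)])

    def toggle(self, index: int) -> None:
        """Flip one sound element between the enabled and disabled configurations."""
        self.steps[index].enabled = not self.steps[index].enabled

orb = ContinuousSoundStructure("Drums")
for i in (0, 4, 8, 12):          # enable a simple four-on-the-floor pattern
    orb.toggle(i)
print([i for i, s in enumerate(orb.steps) if s.enabled])   # -> [0, 4, 8, 12]
```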
  • the computing apparatus may be further configured to execute or the method or logic may further include allowing a user, using the input apparatus, to move each continuous sound structure of the one or more continuous sound structures vertically within the music portion space to adjust the volume of the continuous sound structure.
  • the computing apparatus may be further configured to execute or the method or logic may further include depicting a sound structure addition area on the graphical user interface for displaying a plurality of continuous sound structures to be used in the music portion space to create the portion of music and allowing a user, using the input apparatus, to add one or more continuous sound structures to the music portion space using the sound structure addition area.
  • the apparatus may be further configured to execute or the method or logic may further include allowing a user, using the input apparatus, to move each continuous sound structure of the one or more continuous sound structures horizontally within the music portion space to adjust the spatial orientation of the continuous sound structure.
  • the computing apparatus may be further configured to execute or the method or logic may further include depicting a tempo adjustment area on the graphical user interface for displaying a tempo of the portion of music and allowing a user, using the input apparatus, to adjust the tempo of the portion of music using the tempo adjustment area of the graphical user interface.
  • the computing apparatus may be further configured to execute or the method or logic may further include depicting a music portion movement area on the graphical user interface for displaying additional music portions and allowing, using the input apparatus, a user to switch to another music portion and to add another music portion using the music portion movement area of the graphical user interface.
  • the computing apparatus may be further configured to execute or the method or logic may further include allowing, using the input apparatus, a user to select a continuous sound structure from the music portion space to edit the continuous sound structure.
  • One exemplary system for allowing a user to create music may include computing apparatus configured to generate music, sound output apparatus operatively coupled to the computing apparatus and configured to output sound generated by the computing apparatus, an input interface operatively coupled to the computing apparatus and configured to allow a user to edit a continuous sound structure, and a display apparatus operatively coupled to the computing apparatus and configured to display a graphical user interface.
  • the computing apparatus may be configured to depict the continuous sound structure on the graphical user interface, wherein the continuous sound structure may include a plurality of sound elements arranged around a continuous loop representing a period of time. Each of the plurality of sound elements may be configurable using the input apparatus between an enabled configuration and a disabled configuration.
  • When a sound element is in the enabled configuration, the enabled sound element may represent a sound to be output at the moment of time within the period of time where the enabled sound element is located in the continuous loop.
  • the computing apparatus may be further configured to allow, using the input apparatus, a user to select one or more of the plurality of sound elements to configure the one or more sound elements in the enabled or disabled configurations.
  • One exemplary method for allowing a user to create music may include depicting a continuous sound structure on the graphical user interface.
  • the continuous sound structure may include a plurality of sound elements arranged around a continuous loop representing a period of time.
  • Each of the plurality of sound elements may be configurable using the input apparatus between an enabled configuration and a disabled configuration.
  • When a sound element is in the enabled configuration, the enabled sound element may represent a sound to be output at the moment of time within the period of time where the enabled sound element is located in the continuous loop.
  • the exemplary method may further include allowing, using an input apparatus, a user to select one or more of the plurality of sound elements to configure the one or more sound elements in the enabled or disabled configurations.
  • Exemplary logic encoded in one or more non-transitory media that includes code for execution and when executed by a processor operable to perform operations may include depicting a continuous sound structure on the graphical user interface.
  • the continuous sound structure may include a plurality of sound elements arranged around a continuous loop representing a period of time.
  • Each of the plurality of sound elements may be configurable using the input apparatus between an enabled configuration and a disabled configuration.
  • When a sound element is in the enabled configuration, the enabled sound element may represent a sound to be output at the moment of time within the period of time where the enabled sound element is located in the continuous loop.
  • the exemplary logic encoded in one or more non-transitory media that includes code for execution and when executed by a processor operable to perform operations may further include allowing, using an input apparatus, a user to select one or more of the plurality of sound elements to configure the one or more sound elements in the enabled or disabled configurations.
  • the computing apparatus may be further configured to execute or the method or logic may further include allowing, using the input apparatus, a user to change the pitch of one or more sound elements of the plurality of sound elements. In one or more exemplary systems, methods, or logics, the computing apparatus may be further configured to execute or the method or logic may further include, when a user changes the pitch of a sound element, the depth of the sound element may be changed in the graphical user interface (e.g., the three-dimensional depth of the sound element may be changed, projecting into or out of the display pane, etc.).
  • the computing apparatus may be further configured to execute or the method or logic may further include depicting a sound effect addition area on the graphical user interface for displaying a plurality of sound effects to be used to modify one or more sound elements of the plurality of sound elements and allowing, using the input apparatus, a user to add one or more sound effects to one or more sound elements using the sound effect addition area.
  • the computing apparatus may be further configured to execute or the method or logic may further include displaying a volume adjustment element and allowing, using the input apparatus, a user to adjust the volume of the continuous sound structure using the volume adjustment element.
  • One exemplary system for allowing a user to create music may include computing apparatus configured to generate music, sound output apparatus operatively coupled to the computing apparatus and configured to output sound generated by the computing apparatus, an input interface operatively coupled to the computing apparatus and configured to allow a user to edit a continuous music arrangement to create music, and a display apparatus operatively coupled to the computing apparatus and configured to display a graphical user interface.
  • the computing apparatus may be configured to depict the continuous music arrangement.
  • the continuous music arrangement may include a plurality of locations arranged around a continuous loop representing a period of time.
  • the computing apparatus may be further configured to allow a user, using the input apparatus, to add one or more music portions to one or more locations of the plurality of locations of the continuous music arrangement and allow a user, using the input apparatus, to increase or decrease an amount of locations of the plurality of locations of the continuous music arrangement.
  • One exemplary computer-implemented method for allowing a user to create music may include depicting a continuous music arrangement.
  • the continuous music arrangement may include a plurality of locations arranged around a continuous loop representing a period of time.
  • the exemplary method may further include allowing a user, using an input apparatus, to add one or more music portions to one or more locations of the plurality of locations of the continuous music arrangement and allowing a user, using the input apparatus, to increase or decrease an amount of locations of the plurality of locations of the continuous music arrangement.
  • Exemplary logic encoded in one or more non-transitory media that includes code for execution and when executed by a processor operable to perform operations may include depicting a continuous music arrangement.
  • the continuous music arrangement may include a plurality of locations arranged around a continuous loop representing a period of time.
  • the exemplary logic encoded in one or more non-transitory media that includes code for execution and when executed by a processor operable to perform operations may further include allowing a user, using an input apparatus, to add one or more music portions to one or more locations of the plurality of locations of the continuous music arrangement and allowing a user, using the input apparatus, to increase or decrease an amount of locations of the plurality of locations of the continuous music arrangement.
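  • One way to picture the continuous music arrangement described above is as a growable ring of slots, as in the sketch below; this is an illustrative assumption, not the disclosed implementation.

```python
# A hypothetical "continuous music arrangement": an expandable loop of
# locations, each of which may hold a previously created music portion.
class ContinuousMusicArrangement:
    def __init__(self, num_locations: int = 4):
        self.locations = [None] * num_locations   # each slot: portion name or None

    def add_location(self) -> None:
        """Grow the loop by one empty location."""
        self.locations.append(None)

    def remove_location(self) -> None:
        """Shrink the loop by dropping the last location."""
        if self.locations:
            self.locations.pop()

    def place(self, index: int, portion_name: str) -> None:
        """Drop a music portion into one location of the loop."""
        self.locations[index] = portion_name

arrangement = ContinuousMusicArrangement(num_locations=4)
arrangement.place(0, "Drum Intro")
arrangement.place(1, "Bass Drop")
arrangement.add_location()
print(arrangement.locations)   # -> ['Drum Intro', 'Bass Drop', None, None, None]
```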
  • the computing apparatus may be further configured to execute or the methods or logics may further include depicting a music portion addition area on the graphical user interface for displaying a plurality of music portions and allowing a user, using the input apparatus, to add one or more music portions to one or more locations of the plurality of locations of the continuous music arrangement using the music portion addition area.
  • FIG. 1 is a block diagram of an exemplary music creation system including input apparatus, display apparatus, and sound output apparatus that may utilize the graphical user interfaces and methods described herein.
  • FIG. 2 is a diagrammatic illustration of one or more modes of operation of graphical user interfaces as described herein.
  • FIGS. 3A-3D are screenshots of exemplary graphical user interfaces for the Loop Mode of FIG. 2.
  • FIG. 4 is a screenshot of an exemplary graphical user interface for the Edit Mode of FIG. 2.
  • FIG. 5 is a screenshot of an exemplary graphical user interface for the Arrangement Mode of FIG. 2.
  • FIG. 6 is another screenshot of an exemplary graphical user interface for the Arrangement Mode of FIG. 2.
  • FIG. 7 is a portion of the exemplary graphical user interface of FIGS. 3A-3D depicting a tempo adjustment area.
  • FIG. 8 is a screenshot of an exemplary graphical user interface for a configuration menu, e.g., accessible from the Loop Mode of FIG. 3A.
  • FIG. 9 is a screenshot of another exemplary graphical user interface for the Loop Mode of FIG. 2.
  • FIG. 10 is a portion of the exemplary graphical user interface of FIGS. 3A-3D depicting a music portion movement area.
  • FIG. 11 is a portion of the exemplary graphical user interface of FIGS. 3A-3D.
  • FIG. 12 depicts exemplary continuous sound structures for the Edit Mode of FIG. 2.
  • FIG. 13 is a portion of the exemplary graphical user interface of FIGS. 3A-3D depicting previously-created music portions within the sound structure addition area.
  • FIGS. 14A-14B are screenshots of exemplary graphical user interfaces for the Song Mode of FIG. 2.
  • FIG. 15 is an overhead view of a depiction of a user moving an exemplary system within an exemplary music portion space.
  • embodiments described herein may include many elements that are not necessarily shown to scale. Still further, it will be recognized that timing of the processes and the size and shape of various elements herein may be modified but still fall within the scope of the present disclosure, although certain timings, one or more shapes and/or sizes, or types of elements, may be advantageous over others.
  • An exemplary computer system 10 depicted in FIG. 1 may be used to create music and/or sounds as described herein.
  • the exemplary computer system 10 includes computing apparatus 12.
  • the computing apparatus 12 may be configured to receive input from input apparatus 20 and transmit output to display apparatus 22 and sound output apparatus 24.
  • the computing apparatus 12 includes data storage 14.
  • Data storage 14 allows for access to processing programs or routines 16 and one or more other types of data 18 that may be employed to carry out exemplary methods and/or processes for use in creating music and/or sounds (e.g., some of which are shown generally in FIGS. 2-15).
  • the computing apparatus 12 may be configured to generate music based on input from a user using the input apparatus 20 to manipulate graphics depicted by the display apparatus 22.
  • the computing apparatus 12 may be operatively coupled to the input apparatus 20, the display apparatus 22, and the sound output apparatus 24.
  • the computing apparatus 12 may be electrically coupled to each of the input apparatus 20, the display apparatus 22, and the sound output apparatus 24 using, e.g., analog electrical connections, digital electrical connections, wireless connections, bus-based connections, etc.
  • a user may provide input to the input apparatus 20 to manipulate, or modify, one or more graphical depictions displayed on the display apparatus 22 to create and/or modify sounds and/or music that may be outputted by the sound output apparatus 24.
  • various peripheral devices may be operatively coupled to the computing apparatus 12 to be used with the computing apparatus 12 to perform the functionality, methods, and/or logic described herein.
  • the system 10 may include input apparatus 20, display apparatus 22, and sound output apparatus 24.
  • the input apparatus 20 may include any apparatus capable of providing input to the computing apparatus 12 to perform the functionality, methods, and/or logic described herein.
  • the input apparatus 20 may include a touchscreen (e.g., a capacitive touchscreen, a resistive touchscreen, a multi-touch touchscreen, etc.), a mouse, a keyboard, a trackball, etc.
  • the display apparatus 22 may include any apparatus capable of displaying information to a user, such as a graphical user interface, etc., to perform the functionality, methods, and/or logic described herein.
  • the display apparatus 22 may include a liquid crystal display, an organic light-emitting diode screen, a touchscreen, a cathode ray tube display, etc.
  • the sound output apparatus may be any apparatus capable of outputting sound in any form (e.g., actual sound waves, analog or digital electrical signals representative of sound, etc.) to perform the functionality, methods, and/or logic described herein.
  • the sound output apparatus may include an analog connection for outputting one or more analog sound signals (e.g., 2.5 or 3.5 millimeter mono or stereo output, etc.), a digital connection for outputting one or more digital sound signals (e.g., optical digital output such as TOSLINK, HDMI, etc.), one or more speakers (e.g., stereo speakers, multi-channel speakers, surround sound speakers, etc.), etc.
  • the processing programs or routines 16 may include programs or routines for performing computational mathematics, matrix mathematics, standardization algorithms, comparison algorithms, vector mathematics, numeration, etc.
  • Data 18 may include, for example, sound data, music data, instrument data, tempo data, sound frequency distribution data, sound processing data, stereo panning/sound positioning data, sound pitch data, graphics (e.g., 3D graphics, etc.), graphical user interfaces, results from one or more processing programs or routines employed according to the disclosure herein, or any other data that may be necessary for carrying out the one or more processes or methods described herein.
  • the system 10 may be implemented using one or more computer programs executed on programmable computers, such as computers that include, for example, processing capabilities, data storage (e.g., volatile or non-volatile memory and/or storage elements), input devices, and output devices.
  • Program code and/or logic described herein may be applied to input data to perform functionality described herein and generate desired output information.
  • the output information may be applied as input to one or more other devices and/or methods as described herein or as would be applied in a known fashion.
  • the program used to implement the methods and/or processes described herein may be provided using any programmable language, e.g., a high level procedural and/or object-oriented programming language that is suitable for communicating with a computer system. Any such programs may, for example, be stored on any suitable device, e.g., a storage media, that is readable by a general or special purpose program running on a computer system (e.g., including processing apparatus) for configuring and operating the computer system when the suitable device is read for performing the procedures described herein.
  • the system 10 may be implemented using a computer readable storage medium, configured with a computer program, where the storage medium so configured causes the computer to operate in a specific and predefined manner to perform functions described herein.
  • the system 10 may be described as being implemented by logic (e.g., object code) encoded in one or more non-transitory media that includes code for execution and when executed by a processor operable to perform operations such as the methods, processes, and/or functionality described herein.
  • the system 10 may be configured at a remote site, e.g., an application server that allows access by one or more users via a remote computer apparatus (e.g., via a web browser), and allows a user to employ the functionality according to the present disclosure (e.g., user accesses a graphical user interface associated with one or more programs to process data).
  • the computing apparatus 12 may be, for example, any fixed or mobile computer system (e.g., a tablet computer, a pad computer, a personal computer, a mini computer, an APPLE IPAD tablet computer, an APPLE IPHONE cellular phone, an APPLE IPOD portable device, a GOOGLE ANDROID tablet, a GOOGLE ANDROID portable device, a GOOGLE ANDROID cellular phone, etc.).
  • the exact configuration of the computing apparatus 12 is not limiting, and essentially any device capable of providing suitable computing capabilities and control capabilities may be used.
  • the output generated by the computing apparatus 12 may be analyzed by a user, used by another machine that provides output based thereon, etc.
  • a digital file may be any medium (e.g., volatile or non-volatile memory, a CD-ROM, a punch card, magnetic recordable tape, etc.) containing digital bits (e.g., encoded in binary, trinary, etc.) that may be readable and/or writeable by computing apparatus 12 described herein.
  • a file in user-readable format may be any representation of data (e.g., ASCII text, binary numbers, hexadecimal numbers, decimal numbers, audio, graphical) presentable on any medium (e.g., paper, a display, sound waves, etc.) readable and/or understandable by a user.
  • the user interface may provide various features allowing for user input thereto, change of input, importation or exportation of files, or any other features that may be generally suitable for use with the processes described herein.
  • the user interface may allow default values to be used or may require entry of certain values, limits, threshold values, or other pertinent information.
  • processors including one or more microprocessors, DSPs, ASICs, FPGAs, or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components, or other devices.
  • the terms "processor" or "processing circuitry" may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry.
  • Such hardware, software, and/or firmware may be implemented within the same device or within separate devices to support the various operations and functions described in this disclosure.
  • any of the described components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features, e.g., using block diagrams, etc., is intended to highlight different functional aspects and does not necessarily imply that such features must be realized by separate hardware or software components. Rather, functionality may be performed by separate hardware or software components, or integrated within common or separate hardware or software components.
  • the functionality ascribed to the systems, devices and methods described in this disclosure may be embodied as instructions and/or logic on a computer-readable medium such as RAM, ROM, NVRAM, EEPROM, FLASH memory, magnetic data storage media, optical data storage media, or the like.
  • the instructions and/or logic may be executed by one or more processors to support one or more aspects of the functionality described in this disclosure.
  • the exemplary systems, methods, and logic for use in creating music described herein may include multiple modes as depicted in FIG. 2.
  • the exemplary systems, methods, and logic may include a Loop Mode 50 for editing a music portion space as described herein with reference to FIGS. 3A-3D, an Edit Mode 70 (see FIG. 4), an Arrangement Mode 80 (see FIG. 5), and a Song Mode 90 (see FIGS. 14A-14B).
  • Each of the modes 50, 70, 80, 90 may include one or more functions that a user may utilize to create, edit, and/or visualize music as shown in FIG. 2 and further described herein with reference to FIGS. 3-15.
  • the Loop Mode 50 may allow a user to add sound structures to a music portion space, change the volume of the sound structures, change the spatial location of the sound structures, change the tempo of the music portion space, change between measures or music portion spaces, and/or create new measures or music portion spaces.
  • the Edit Mode 70 may be accessed after adding a sound structure to a music portion space (e.g., by selecting, or touching, a sound structure), which may bring the sound structure forward (e.g., the graphical user interface may zoom into view the sound structure more closely).
  • the Edit Mode 70 may allow a user to change the pitch of one or more sound elements of the sound structure, toggle one or more sound elements between being active or inactive within the sound structure, change the volume of the sound structure, and/or apply sound effects to the sound structure and/or sound elements of the sound structure.
  • the Arrangement Mode 80 may allow a user to add one or more music locations and/or add and arrange any music portions or measures previously created to, e.g., create an arrangement of music or song. After an arrangement or song has been created, it may be visually depicted using a graphical user interface such that a user may observe the song graphically while it is played, or output, through sound output apparatus.
  • a graphical user interface (GUI) 100 may be displayed in Loop Mode 50 in which a user may create and/or edit music.
  • the GUI 100 may depict a music portion space 102 for creating a portion, or measure, of music.
  • the music portion space 102 may define a three-dimensional space extending up to 360 degrees about a central location.
  • the GUI 100 may depict a portion of the 360 degree music portion space 102.
  • if the GUI 100 is depicted on a tablet computer including a gyroscope and/or other position sensors, a user may be able to physically move the tablet computer left or right (e.g., rotate) around the central location to view more of the 360 degree, three-dimensional music portion space 102.
  • the exemplary systems and methods may always begin in Loop Mode 50.
  • Loop Mode 50 may be defined as being the music portion, or measure, building mode, while a music portion/measure may be defined as a collection of continuous sound structures, or sound orbs, placed amongst a 360° music portion space 102. Loop Mode 50 may be described as the place where new music portions, or measures, may be created/added and/or where sounds within the music portions are edited.
  • Part of a music portion space 31 is depicted in FIG. 15.
  • the music portion space 31 extends 360 degrees about a user 30.
  • the user 30 is holding a tablet computer 34 configured to provide the GUI and software described herein in three different positions about the music portion space 31.
  • a region, or window, 36 of the music portion space 31 is depicted on the graphical user interface.
  • As the user 30 rotates 32 the tablet computer 34 about the music portion space 31, a different region 36 of the music portion space 31 is depicted.
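  • A sketch of this gyro "window" behavior is given below, assuming the device's yaw angle selects a fixed field of view into the 360-degree space; the angle conventions and names are illustrative assumptions.

```python
# Hypothetical selection of which sound structures fall inside the window
# currently faced by the device in the 360-degree music portion space.
def visible_structures(structures: dict, device_yaw_deg: float,
                       fov_deg: float = 60.0) -> list:
    """Return names of structures whose angular position (degrees around the
    user) lies within half the field of view of the device heading."""
    half = fov_deg / 2.0
    visible = []
    for name, angle in structures.items():
        # Smallest signed angular difference between structure and heading.
        diff = (angle - device_yaw_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= half:
            visible.append(name)
    return visible

space = {"Keys": 20.0, "Bell crash": 170.0, "Drums": 350.0}
print(visible_structures(space, device_yaw_deg=0.0))    # -> ['Keys', 'Drums']
print(visible_structures(space, device_yaw_deg=180.0))  # -> ['Bell crash']
```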
  • the music portion space 31 may be described as being a circle, e.g., as depicted in FIG. 15.
  • a gyro area, or button, 103 may be located on the GUI 100.
  • the gyro option may allow a user to fully experience the 360° music making environment, and specifically how sound structures, or sound orbs, are located around the user. This gyro feature may turn the tablet device into a "window" into the music portion space where sounds can be placed at any position around or about the user.
  • As shown in FIG. 3A, an exemplary GUI 100 for the Loop Mode 50, or Loop Mode GUI, may include a sound structure addition area 110 including one or more (e.g., a plurality of) sound structures, or sounds, 112 (e.g., bass, keys, drums, etc.) that may be used to create music.
  • the Loop Mode GUI 100 may include a configuration area 101 that may be selected to access one or more configuration options for the systems and methods described herein. As shown, the configuration area 101 may be located in the upper right corner of the GUI 100.
  • the configuration menu 440 may be opened (e.g., initiated or triggered to be displayed) by selecting a configuration area, or button, 101 in the GUI 100 shown in FIG. 8.
  • the configuration menu 440 may include features for saving 442 and loading 444 songs, clearing an entire song, and exporting 446 a song to a file such as an mp3 file.
  • the configuration menu 440 could also be described as being the "home" to any audio, graphical, or other settings.
  • the menu 440 could also be described as being the "home" for sharing music created using the exemplary systems and methods with friends through various social media applications.
  • the configuration menu 440 may further include a new project area 445 to start a new project, a plurality of saved files 447 for saving files to or loading files from, and a new file 448 area for creating a new save file.
  • the configuration menu 440 may further include a gyro button 103 as described herein.
  • When a sound structure 112 is selected, the sound structure addition area 110 may transform 111 (e.g., flip, revolve, morph, etc.) into a specific sound structure addition area 113 that includes more specific sounds, or sound structures, 115 related to the sound structure 112 selected.
  • the specific sound structure addition area 113 includes a plurality of different "Drums" sounds, or sound structures, 115 that may be added or used within the music portion space 102. Additionally, already-created or preset sound structures 162 related to "Drums" may be located in the specific sound structure addition area 113.
  • Selection of one of the specific sounds 115 or sound structures 113 may transform 164 the specific sound structure addition area 113 into another menu 161 that allows selection of the specific sound structures 160 (e.g., touch and drag the sound structure into the music portion space 102).
  • a user may preview the sound or sound structure 115, 160 by briefly selecting it (e.g., touching or clicking it) to trigger the sound apparatus to play or output the sound or sound structure 115, 160.
  • a sound, or sound structure, 112 is shown being dragged 124 from the sound structure addition area 110 to the music portion space 102 in FIG. 3B.
  • the sound structure 112 may define a continuous sound structure 114, which will be described further herein with reference to FIG. 4.
  • the GUI 100 further includes a play/pause button 150 that a user may select to play or pause music presently being created on the GUI 100, an arrangement mode area, or button, 151 that a user may select to switch to Arrangement Mode 80, a song mode area, or button, 152 that a user may select to switch to Song Mode 90, and a tempo adjustment area 130 that a user may use to adjust the tempo of the continuous sound structures 114 located in the music portion space 102.
  • the exemplary embodiments include collections of music sounds located in folders on a rotating menu within the sound structure addition area 110 such as, e.g., drums, bass, keys, pads, etc.
  • a collection of sound samples may be located within each folder relative to the many stylings of the primary sound file (e.g., sub kick, Detroit High Flat, lo-fi snare, etc. can all be accessed from the "Drums" folder).
  • users may be able to extend the smaller samples and expand their sound file library by unlocking such features as add-on paid content.
  • users may drag the associated sound structure or orb into the spatiotemporal music space. Sounds can be previewed by touching their respective buttons in the menu.
  • the sound of each continuous sound structure 114 directly correlates to where the continuous sound structure 114 is dropped in the music portion space, and the position of the continuous sound structure relative to the user affects how the sound is heard from the speakers.
  • a user should notice a highlighted sound element (e.g., blue highlighting) circling the continuous sound structure 114, much like a clock ticking. An enlarged view of the tempo adjustment area 130 is depicted in FIG. 7.
  • the tempo adjustment area 130 includes a textual description 132 that displays or recites the current tempo (as shown, 220 beats per minute (BPM)), a decrease tempo area or button 134, and an increase tempo area or button 136.
  • Each of the decrease tempo and increase tempo areas 134, 136 is in the shape of an arrow, the arrows extending in opposite directions to, e.g., represent decreasing or increasing the tempo.
  • a user may use a two-finger swipe 138 (e.g., two fingers contacting a touch screen near each other and moving at the same time), either upwards or downwards, anywhere within the GUI 100 to increase or decrease, respectively, the tempo of the music portion space 102.
  • a user may touch the tempo adjustment area, or meter, 130 at the top right (e.g., top right corner) of the display.
  • a user can also use a two-finger swipe to adjust tempo.
  • a tempo adjustment may also be reflected visually in the speed at which the highlight of the sound element cycles through the 16 steps (e.g., sound elements or orbs) about the continuous sound structure 114.
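  • Assuming each of the 16 sound elements is a sixteenth note (4 steps per beat), the step timing implied by a given tempo can be computed as below; the disclosure does not state this formula, so it is an illustrative assumption.

```python
# Hypothetical timing of the highlight that cycles around a 16-step
# continuous sound structure at a given tempo.
def step_seconds(bpm: float, steps_per_beat: int = 4) -> float:
    """Seconds between successive sound elements around the loop."""
    return 60.0 / bpm / steps_per_beat

print(round(step_seconds(220), 4))       # ~0.0682 s per step at the 220 BPM of FIG. 7
print(round(step_seconds(220) * 16, 4))  # ~1.0909 s for one full trip around the orb
```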
  • Each of the continuous sound structures 114 may be moved (e.g., selected/touched and dragged by a user) vertically to adjust the volume of the continuous sound structure 114 and horizontally to adjust the spatial orientation of the continuous sound structure 114 (e.g., about a three-dimensional space).
  • To increase the volume of a particular continuous sound structure 114, the continuous sound structure may be moved upwardly 116, and to decrease the volume of a particular continuous sound structure 114, the continuous sound structure 114 may be moved downwardly 118. Further, for example, to move the spatial orientation (e.g., where the sound comes from when output using speakers, headphones, etc.) leftward, the continuous sound structure 114 may be moved leftward 122 in the space 102, and to move the spatial orientation rightward, the continuous sound structure 114 may be moved rightward 120 in the space 102.
  • a sound structure 114 may be moved beyond the viewable window or region of the space 102, and thus, the user may rotate the computing apparatus, e.g., tablet computer, about the space 31 as described herein with respect to FIG. 15.
  • the music portion space 102 is a three-dimensional music space and sound samples (e.g., continuous sound structures) may be dragged into the three-dimensional music space such that the sound samples correspond to the real-world spatial position from which a user will hear the sound through sound output apparatus such as, e.g., headphones, speakers, etc. (e.g., a continuous sound structure placed to the left in the three-dimensional music space will be heard from the left of the user such as through the left side speakers).
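  • One common way to realize this left/right placement when rendering the output is equal-power panning, sketched below; this is an illustrative choice, not something the disclosure specifies.

```python
# Hypothetical equal-power conversion from a pan position to channel gains.
import math

def pan_gains(pan: float) -> tuple[float, float]:
    """Map pan in [-1, +1] (full left .. full right) to (left, right) gains."""
    theta = (pan + 1.0) * math.pi / 4.0   # sweeps 0 .. pi/2 across the space
    return math.cos(theta), math.sin(theta)

print(pan_gains(-1.0))   # (1.0, 0.0): heard entirely from the left speaker
print(pan_gains(0.0))    # (~0.707, ~0.707): centered between the speakers
```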
  • the spatial orientation, or location, of a continuous sound structure 114 may be represented in the music portion movement area 140 located at the bottom of the display.
  • the central, or middle, window (out of the three windows) in the music portion movement area 140 may depict the entire music space 102 for the active measure or music portion.
  • For example, if a first continuous sound structure 114 is located at the far left of the music space 102 and a second continuous sound structure 114 is located at the far right, the sounds corresponding to those structures 114 will be output from 90 degrees to the left and 90 degrees to the right, respectively, through sound output apparatus.
  • In other words, the first continuous sound structure 114 will play sounds or music from the left side of a user and the second continuous sound structure 114 will play sounds or music from the right side of a user.
  • the music portion movement area 140 may allow a user to control the panning or sound position of the continuous sound structures 114 visually. Further, the music portion movement area 140 may allow a user to add additional music portions (e.g., the rightmost window), view the current music portion in a zoomed-out view (e.g., the center window), and view the previously-created music portion (e.g., the leftmost window).
  • An enlarged view of the music portion movement area 140 is depicted in FIG. 10. As shown, the music portion movement area 140 may include a previous portion area 146, a current portion area 148, and an add portion area 149.
  • the measures or portions may be traversed by using a portion selection area 142 that depicts the name of the current portion, or measure, 143 (e.g., as shown, "Measure 1"), a move-to-previous portion area 144, and a move-to-next portion area 145 (e.g., each of the previous portion and next portion areas is in the shape of an arrow, the arrows extending in opposite directions to, e.g., graphically represent traversing through the portions or measures).
  • the music portion movement area 140 may allow a user to change between music portions, or measures, that have been created in Loop Mode 50.
  • the music portion movement area 140 displays the name 143 of the current music portion, or measure, in Loop Mode 50.
  • the name of each music portion can be changed to better differentiate measures from one another (e.g., names may include Drum Intro, Bass Drop, etc.).
  • Each window within the music portion movement area 140 may be described as being a viewport to allow a user to easily see the 360° layout of the current, previous, and next music portions or measures. Touching different music portions, or measures, in the viewport may give another quick alternative to changing measures.
  • the music portion movement area 140 and the viewports defined therein may allow a user to create new blank music portions if, e.g., at least one continuous sound structure 114 is located in the current, or present, music portion (e.g., because, otherwise, users would be making multiple blank music portions).
  • if the music portion movement area 140 is taking up too much screen real estate or is unwanted, the music portion movement area 140 can be dragged down off screen and hidden until needed again.
  • the "Keys" continuous sound structure 114 is located to the right and upward in comparison to the "Bell crash” continuous sound structure 114, and likewise, the "Keys” continuous sound structure 114 may have a greater volume and be spatially oriented more leftward than the "Bell crash” continuous sound structure 114. Such orientations are described further herein.
  • the current portion area 148 of the music portion movement area 140 may include graphical representatives of the sound structures 114 therein such that, e.g., a user can view the locations of the sound structures 114 with respect to at least a portion of the music portion space 102.
  • a user may access copies of the music portion using area 109 of the sound structure addition area 110 (which may only appear after at least one music portion has been created).
  • previously-created portions or measures may be saved and accessed within the sound structure addition area 110 such that a user may select such previously-created portions or measures to add them to the present portion or measure as shown in FIG. 13. For example, a user may select the area 109 to access such previously-created portions or measures.
  • a sound structure 114 may be removed, or deleted, from a music portion by selecting and dragging 117 the sound structure 114 to a trash area 119 as shown in FIG. 9.
  • the bottom-right of the display includes a "trash can" icon where users can drag sound structures or orbs for removal from the 3D spatiotemporal music space.
  • continuous sound structures 114 may be "linkable" across music portions. As shown in FIG. 3D, both sound structures 114 are "linked," which may mean that the sound structures 114 are linked to their corresponding sound structures 114 in another music portion or measure as represented by the "chain link" icon 107. When the sound structures 114 are linked to the corresponding sound structures 114 in another music portion, adjusting one sound structure 114 (e.g., increasing volume, moving spatial location, adjusting active/inactive sound elements, etc.) will also affect the other corresponding, or linked, sound structure 114 in another music portion or measure. Additionally, when deleting or removing a linked sound structure 114, the GUI 100 may alert a user that the sound structure 114 is linked and ask the user whether they would like to remove all linked sound structures 114 or only the sound structure 114 in the present music portion.
  • the continuous sound structures 114 may become linked to the previous music portion from which they came.
  • This "linking" means that the positions, or configurations, of the continuous sound structures 114 in both music portions are shared. For example, the states and pitches of the continuous sound structures 1 14 may be copied over from the original music portion to the next. Further, linking continuous sound stmciures 114 in music portions may allow a user to retain sound positions and/or frequencies in the music space throughout an entire composition.
  • These positions may be demonstrated in the music portion movement area 140 located at the bottom of the display.
  • moving a linked continuous sound structure 114 in the original portion may cause the linked continuous sound structure 114 in the new music portion to follow its positioning (e.g., vertical for volume, horizontal for spatial positioning, etc.).
  • a link button (e.g., located at the top of the individual continuous sound structure) may be available so users can control automation with ease (e.g., the dynamics of the sound file in the music space, such as panning and volume placement).
  • users can arrange continuous sound structures 114 moving from one frequency and volume level in the music space to another between music portions (e.g., a continuous sound structure 114 could move from the extreme left to right from one music portion to the next).
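  • A minimal sketch of one plausible implementation of this linking, in which linked copies in different music portions share a single state object, follows; the names are assumptions for illustration.

```python
# Hypothetical shared-state linking: editing a linked sound structure in one
# music portion is reflected in its linked copies in other portions.
class SharedOrbState:
    def __init__(self, volume: float = 0.5, pan: float = 0.0):
        self.volume = volume
        self.pan = pan

class LinkedOrb:
    def __init__(self, portion: str, state: SharedOrbState):
        self.portion = portion
        self.state = state         # shared across all linked copies

state = SharedOrbState()
orb_in_measure_1 = LinkedOrb("Measure 1", state)
orb_in_measure_2 = LinkedOrb("Measure 2", state)

orb_in_measure_1.state.volume = 0.9     # adjust in one measure...
print(orb_in_measure_2.state.volume)    # -> 0.9 ...and the linked copy follows
```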
  • Edit Mode 70, which is shown in FIG. 4, may be initiated or triggered by a user selecting an individual sound structure 114 from the Loop Mode as shown in FIGS. 3A-3D.
  • a continuous sound structure 204 is depicted in the space 202 of an exemplary Edit Mode graphical user interface (GUI) 200. It may be described that in Edit Mode 70, the selected continuous sound structure 204 jumps to the front of the screen as a larger image in order for the user to manipulate the parameters of beat mapping, pitch, and effects more easily.
  • a user may touch a continuous sound structure 114 in Loop Mode 50 to bring the continuous sound structure closer to the user and in front of all other continuous sound structures 114 so that it can be more easily manipulated.
  • the display may "zoom in” on the selected sound continuous sound structure 114, 204.
  • a user can map beats, control pitch, manage tempo, and add effects to each sound element 220, or smaller orb, of the continuous sound structure 204.
  • the continuous sound structure 204 includes a plurality of sound elements 220 arranged about a continuous loop representing a period of time.
  • the continuous sound structure 204 may further include an identifier 225 (as shown in FIG. 4, "Bell Crash").
  • although a circle is depicted, a loop of any shape may be used as a continuous sound structure 204 (e.g., circle, square, octagon, oval, etc.).
  • the continuous sound structure 204 may be defined as being “continuous” because it is repetitive and does not define an end. Instead, if one were to describe a portion of the continuous sound structure 204 as a starting location, the ending location would be adjacent the starting location such that a complete loop will have been made.
  • the continuous sound structure 204 includes 16 sound elements 220, each representing 1/16th of the period of time that the continuous sound structure 204 represents.
  • the continuous sound structure 204 could represent 1 second, and therefore, each sound element 220 may represent 1/16 of a second.
  • the tempo of a music portion may be adjusted, which in turn, adjusts the period of the continuous sound structures 114 located in the music portion space 102 described herein with reference to FIGS. 3A-3D.
  • Each of the plurality of sound elements 220 may be configurable between an enabled, or active, configuration and a disabled, or inactive, configuration.
  • the enabled sound element 220 represents a sound to be output at the moment of time within the period of time where the enabled sound element is located in the continuous loop of the continuous sound structure 204.
  • a disabled sound element 220 represents no sound to be output at the moment of time within the period of time where the disabled sound element is located in the continuous loop of the continuous sound structure 204.
  • a user may enable or disable a sound element 220 by touching the sound element 220 when using the GUI 200 on a touchscreen device or tablet.
  • the pitch of each of the sound elements 220 may be adjusted by selecting and moving the sound element upwardly or downwardly 226 in the three-dimensional space 202 defined by the GUI 200. For example, as shown in FIG. 12, a user may select a sound element 220 and move it downwardly towards 203 the center of the sound structure 204 to adjust the pitch lower. Conversely, a user may select a sound element 220 and move it upwardly away 205 from the center of the sound structure 204 to adjust the pitch higher. It may be described that Edit Mode 70 provides the ability to change the pitch of a single beat sound element 220 of a continuous sound structure 204.
  • each beat/sound element 220 may be adjusted to one of 25 different, or unique, tones (e.g., shown numerically from -12 to 12, which provides 25 tone options in a scale that includes "0").
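  • The 25 pitch options (-12 to +12 semitones) map naturally onto equal-tempered playback-rate factors, as in the standard formula below; the disclosure does not state how pitch shifting is implemented, so this is an illustrative assumption.

```python
# Hypothetical conversion of a semitone offset to a playback-rate multiplier
# in 12-tone equal temperament.
def pitch_factor(semitones: int) -> float:
    assert -12 <= semitones <= 12    # the 25 tones described in the text
    return 2.0 ** (semitones / 12.0)

print(pitch_factor(12))              # 2.0 (one octave up)
print(pitch_factor(-12))             # 0.5 (one octave down)
print(round(pitch_factor(7), 4))     # ~1.4983 (a perfect fifth up)
```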
  • a user may apply one or more (e.g., two) sound effects/modifiers by, e.g., dragging effects file(s) from a sound effect addition area/menu 210 on the left-hand side of the display.
  • effects may include low and high pass filters, band pass, reverb, distortion, delay, compression, octave accentuation, flange, wah effects, phase effects, stereo/movement effects, etc.
  • continuous sound structures 204 can be isolated within the measure with the solo button 223 located on the right side of the continuous sound structure 204 in Edit Mode.
  • a user will only hear the selected continuous sound structure 204, which may allow greater control in determining volume level and frequency position as it relates to the spatiotemporal music space. Additionally, the volume of the continuous sound structure 204 may be adjusted by using the slider 230. For example, a user may select and move a portion of the slider upwardly and downwardly 232 to adjust the volume of the continuous sound structure 204. It may be described that Edit Mode 70 may provide a user the ability to adjust (e.g., increase or decrease) the volume of a continuous sound structure 114 by, e.g., sliding a slider up and/or down.
  • the continuous sound structures 114 may be stationary on the display until Edit Mode 70 is exited (e.g., by another touch). Still further, when a user manipulates the volume of each continuous sound structure 114, the volume adjustment may also be visually represented in the music portion movement area 140 at the bottom of the display. Once Edit Mode 70 is exited after volume change, the continuous sound structures 114 will be located in their new location corresponding to the new volume adjustment (e.g., located in a different vertical location within the music portion space 102 based on the new volume adjustment).
  • the visual indication 222 may be presented to indicate which sound element 220 is currently playing.
  • the visual indication 222 may include a different color or highlight (e.g., glowing, blinking, etc.) for the actively-playing sound element 220.
  • the visual indication 222 may continue clockwise 224 around the continuous sound structure 204 throughout the time period of the continuous sound structure 204.
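  • For illustration only, the index of the currently-playing sound element could be derived from the elapsed time within the loop period (hypothetical names):

    # Sketch: which element is active at a given moment, advancing
    # clockwise once per loop period.
    def active_element(elapsed_seconds, period, num_elements=16):
        return int((elapsed_seconds % period) / period * num_elements)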
  • Sound effects 216 may be added to the continuous sound structure 204 to affect one or more of the sound elements 220 or the entire continuous sound structure 204.
  • the sound effects 216 may be added 214 from a sound effect addition area 210 which may include a plurality of sound effects 212 such as, e.g., low pass filters, high pass filters, reverb, etc.
  • Arrangement Mode 80, which is shown in FIG. 5, may be initiated or selected by a user.
  • the Arrangement Mode 80 includes a graphical user interface (GUI) 300 that defines a space 302 and a continuous music arrangement 304 located in the space 302.
  • the continuous music arrangement 304 may include a plurality of locations 320 arranged around a continuous loop representing a period of time. Additional locations 320 may be added to the continuous music arrangement 304 by selecting the "plus" button 330.
  • One or more previously-created music portions 312 (e.g., created using the GUI of FIGS. 3A-3D) may be added to one or more locations 320 of the continuous music arrangement 304 by selecting and moving 314 the music portions 312 from the music portion addition space 310 to one or more locations 320 of the continuous music arrangement 304. After at least one music portion 312 has been added to the continuous music arrangement 304, the continuous music arrangement 304 may provide a playable song.
  • the song may be played using a play/pause button 340. Additionally, after a music portion 312 has been moved to a location 320, the location 320 may be moved 322 upwardly and/or downwardly to increase and/or decrease, respectively, the number or amount of times the music portion 312 should be played at that location (e.g., repeat at that location such as 2 times, 3 times, 6 times, etc.). It may be described that, once in Arrangement Mode 80, a user may have the ability to choose how many locations 320 and/or music portions in specific locations 320 may be added to the song. In at least one embodiment, a user may utilize up to 16 locations 320 in a song.
  • a user may utilize more or fewer than 16 locations 320 in a song.
  • Locations 320 (for the addition of measures or music portions) may be arranged in a loop defining a continuous music arrangement 304 that may operate in a similar fashion as the continuous sound structures described herein (e.g., one at a time, clockwise, etc.).
  • the locations 320 may be visually indicated (e.g., visually indicated as being red) as being empty in Arrangement Mode 80.
  • a measure, or music portion, may be selected and moved (e.g., dragged) from the music portion addition space 310 onto the locations 320. Further, locations 320 on the continuous music arrangement 304 can stay blank if the song being created is intended to have one or more music portions or measures of silence.
  • the continuous music arrangement 304 in Arrangement Mode 80 will play, traverse, or increment, in a clockwise manner, but the scale may be 16 sections or beats per single music portion (e.g., which is an example of the fractal design strategy).
  • dragging music portions, or measures, onto the locations 320 of continuous music arrangement 304 may cause the colors of the locations 320 to change to let the user identify the order of the music portions, or measures, in the song.
  • a music portion may be played multiple times on the same location 320 on the continuous music arrangement 304 by increasing a number of repeats for a given location 320. For example, the number of repeats of an individual music portion at a location 320 may be increased by dragging the location 320 upwards and may be decreased by dragging the location 320 downwards. When a music portion is dragged to a location 320, the number of repeats may default to 1.
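  • For illustration only, the locations 320 and their repeat counts may be thought of as a list of (music portion, repeats) pairs that is flattened into a play order (hypothetical names and data):

    # Sketch: flatten an arrangement of (music_portion, repeats) locations
    # into the order in which portions are played.
    arrangement = [("intro", 1), ("verse", 4), ("chorus", 2)]

    def play_order(arrangement):
        order = []
        for portion, repeats in arrangement:
            order.extend([portion] * max(1, repeats))  # repeats default to 1
        return order

    # play_order(arrangement) ->
    # ['intro', 'verse', 'verse', 'verse', 'verse', 'chorus', 'chorus']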
  • the visual indication of the number of repeats may decrease in the location 320, stepwise, as each repeat of the music portions is completed, which may allow the user to more easily track the progression of the song.
  • music portions can be removed from a location 320 of the continuous music arrangement 304 if a user desires, which may be accomplished by a longer touch on the targeted location 320 where the music portion has been located. If a location 320 does not have a measure, the entire location 320 may be removed from the sequence, or song, and the remaining locations will be moved up in the playing order accordingly.
  • the exemplary continuous sound structures described herein may be referred to as a "sound orb."
  • a continuous sound structure may be described as being a spherical representation of musical beats with a central large orb (e.g., disc, circle or sphere), surrounded by 16 smaller orbs, which each represent 1/16th of a particular sequence of sounds making up a single "loop."
  • Examples of the sounds produced by each smaller orb include, but are not limited to, bass, drums, keys, pads, one shots, and variations thereof.
  • the controls for the volume of the sounds associated with each sound orb may reside in the central large orb.
  • Volume may be represented by a scale that can be adjusted by touch (e.g., touching and dragging the volume indicator up or down to increase or decrease the volume, respectively).
  • the volume level of each sound orb is also linked to the sound orb's vertical location in the "sound space" (e.g., music portion space).
  • the user can adjust the vertical position of the sound orb to adjust the volume of that specific sound orb within a 3-dimensional (3-D) arrangement of multiple sound orbs (e.g., other sound orbs independent from one another within the same 3-D space). Changing the vertical position of a sound orb will also change the volume in the volume control within that particular sound orb, and vice versa.
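  • For illustration only, the two-way link between vertical position and volume could be a simple linear mapping (an assumption; the disclosure does not specify the curve or bounds):

    # Sketch: keep a sound orb's height and its volume in sync.
    Y_MIN, Y_MAX = 0.0, 10.0  # hypothetical bounds of the music portion space

    def volume_from_height(y):
        return (y - Y_MIN) / (Y_MAX - Y_MIN)  # 0.0 (bottom) .. 1.0 (top)

    def height_from_volume(volume):
        return Y_MIN + volume * (Y_MAX - Y_MIN)  # inverse mapping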
  • a continuous sound structure 400 is depicted in FIG. 6.
  • the sound elements, or smaller orbs, 420 that surround the continuous sound structure, or central larger orb, 400 generate the "beats" of the sound file associated with the sound. These sounds may be generated when the sound elements 420 are "active.”
  • the activated sound elements 420 may be visually indicated as being active 426 (e.g., color-indicated such as a different color than non-activated orbs) or inactive 428.
  • the elements 420 are active 426 when green and inactive 428 when red.
  • each continuous sound structure 400 includes 16 sound elements 420, or smaller orbs, each of which comprises 1/16th of a "sound loop," or period of time of the continuous sound structure 400.
  • although this embodiment utilizes 16 sound elements 420, it is contemplated that other embodiments may utilize more or fewer sound elements 420, and/or the number, or amount, of sound elements 420 may be user selectable. For example, a user may choose to include 8 sound elements 420 for each continuous sound structure 400 and each of the sound elements may represent 1/8th of a sound loop. The time taken for the "blue" indicator to complete a single "loop" is tied to, or directly related to, the "tempo" of the continuous sound structure 400.
  • the song tempo as beats per minute (BPM) may be displayed in the top right-hand corner of the 3-D music space by a "slider."
  • the tempo can be altered by touching arrows to the left or right of the slider, or via a two finger vertical swipe.
  • the track tempo is also represented on the sound orb by the speed in which the blue illumination cycles through the 16 beat orbs surrounding the central volume orb.
  • the current BPM may be shown at all times during both Song and Loop Modes 90, 50, which may allow for easy access to change the tempo of the song. In at least one embodiment, when BPM is modified, it may be changed for the entire song (e.g., each measure or music portion) the user is composing. In at least one embodiment, each individual measure, or music portion, may have a different, user-selectable tempo. In at least one embodiment, the BPM may be user settable, or selectable, between 50 and 250.
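  • For illustration only, per-element timing can be derived from the BPM if each of the 16 sound elements is treated as a sixteenth note in a 4-beat measure (an assumption):

    # Sketch: seconds per sound element at a given tempo, with the BPM
    # clamped to the 50-250 range described above.
    def element_seconds(bpm):
        bpm = max(50, min(250, bpm))
        beat = 60.0 / bpm   # duration of one quarter-note beat
        return beat / 4.0   # 16 elements across a 4-beat measure

    element_seconds(120)  # 0.125 s per element; full loop = 2.0 s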
  • The position of the continuous sound structures within the three-dimensional music space that houses the continuous sound structures (e.g., the space within which the sound orbs may be located) may determine the position of the sound for the user (e.g., a sound orb placed to the back of the space will be heard from behind the user), which may allow a user control of the panning, or sound position, of the continuous sound structures visually.
  • the exemplary embodiments described herein provide 360 degrees of sound manipulation (e.g., the location of the sounds/music of each sound structure may be selected by moving it within the music portion space).
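  • For illustration only, one simple way to realize such placement on stereo output is equal-power panning from the orb's azimuth relative to the listener (binaural/HRTF rendering would be a richer alternative; the disclosure does not name a technique):

    import math

    # Sketch: equal-power stereo gains from a sound orb's azimuth.
    def stereo_gains(azimuth_radians):
        pan = (math.sin(azimuth_radians) + 1.0) / 2.0  # 0 = hard left, 1 = hard right
        left = math.cos(pan * math.pi / 2.0)
        right = math.sin(pan * math.pi / 2.0)
        return left, right

    # azimuth 0 (directly ahead) -> equal gains; azimuth +pi/2 -> hard right.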
  • the embodiments described herein may include collections of music sounds located in folders on a rotating menu (e.g., a sound structure area) on the left-hand side of the tablet display (e.g., drums, bass, keys, and pads, etc.) as shown in FIGS. 3A-3D and FIG. 11.
  • a collection of sound files that relate to the primary sound may be located within each folder. For example, sub kick, Detroit High Hat, and lo-fi snare can all be accessed from the "drums" folder.
  • users may be able to extend the sound file library by unlocking additional sounds through an in-app purchase system.
  • Loop Mode 50 may be the starting point for the exemplary systems and methods.
  • users have the freedom to create measures, or music portions, that may be part of their composition by picking and editing sounds from a rotating sound structure menu.
  • Loop Mode may play a continuous loop of the current measure, or music portion, the user is editing, which allows the user to hear the effects of the changes made to individual, smaller orbs (e.g., sound elements) or the spatiality of sounds as the user's view changes.
  • Arrangement Mode 80 may allow the user to compose at a larger scale using all measures, or music portions, created during Loop Mode 50.
  • the Arrangement Mode 80 may be described as being the second level of a "fractal" view on song making. Whereas each sound element in Loop Mode 50 represents a single beat of a sound, each location in Arrangement Mode 80 represents one or more loops of a single music portion or measure. The user can specify a song length by adding music portions to the composition or increasing the number of times a certain music portion repeats within the composition.
  • Song Mode 90 may be selected from Loop Mode 50 of FIGS. 3A-3D by selecting the song mode button 152.
  • in Song Mode 90, as shown in FIGS. 14A-14B, users can observe the song as it plays out music portion by music portion within a graphical user interface 170 based on the composition created in Arrangement Mode 80. For example, continuous sound structures 114 of the music portions arranged in Arrangement Mode 80 may slide downwardly across the GUI 170 as they are being played.
  • Song Mode 90 may be further described as a rich, visual representation of the song or music built in Arrangement Mode 80.
  • Song Mode 90 may be accessed once an arrangement, or song, has been created and may be toggled on/off from Loop Mode 50.
  • the first measure, or music portion, of the song/composition may be loaded on screen.
  • Song Mode 90 will start paused so that the user can start it when they desire.
  • once Song Mode 90 begins to play, the music portions will play for the amount of repeats that were specified in Arrangement Mode 80, and then the next music portion in the arrangement may drop down from above into the current view (e.g., replacing the sound orbs, or continuous sound structures, from the previous measure, or music portion).
  • a user may have the ability to navigate around the 360° scene while the song is playing to get a different vantage and listening point for the song (e.g., a user may change his/her spatial orientation with respect to the song).
  • a user may toggle back into Loop Mode 50 of the currently playing measure, or music portion, to make changes (e.g., low level changes) and/or toggle back into Arrangement Mode 80 to make composition changes.
  • the exemplary systems and methods may be programmed for note length.
  • this note length feature may allow users to create variable note arrangements and melodies without the tediousness of navigating between measures (e.g., a note can play and stop at intervals set by the user on the sound orb).
  • This note length feature may shorten the time to completion and may provide intricacy in note manipulation.
  • the note length feature may be visually represented by a colored line that connects the beat orbs around the core sound orb that holds the sound file.
  • a lighting effect may provide cosmetic appeal to the intricacy of this feature.
  • One or more embodiments may have the ability to record vocals and place the created files into the music space as a continuous sound structure, or sound orb. Further, this same recording feature may also be available for MIDI controllers and other traditional analog instruments (e.g., guitar, bass, etc.).
  • one or more embodiments may have an export feature that may allow a user's songs to be compressed into a sound file (e.g., MP3, Wave, etc.) for sharing on social media platforms (e.g., FACEBOOK, TWITTER, etc.).
  • one or more embodiments may include a spectral analysis mode that may map the sounds of the created beats to the backdrop of the sound wall.
  • each composition may have a unique visualization of the music being played that may enrich the overall experience for the user.
  • the primary user interface may be through the tablet touch screen, using direct manipulation techniques for interaction with the instrument selection interface and rhythm orbs. Changes in viewpoint will be provided through two mechanisms. The first is a swipe-based interface that allows students to rotate the scene by using their fingers to swipe left, right, up, or down with corresponding view changes.
  • modern tablets have motion-sensing capability, and the exemplary systems and methods may use these capabilities to enable immersive kinesthetic viewing of the 3D composition as shown in FIG. 15.
  • a user may keep the tablet held straight in front of their eyes (at a normal viewing distance) as they rotate their head.
  • the motion sensors will track the orientation of the tablet, and the rendering perspective will be adjusted to provide the sense that the user is viewing the virtual world from within it, e.g., from an egocentric perspective, rather than viewing the virtual world from afar.
  • the audio will also adjust accordingly, such that if the user faces a rhythm orb object, it will sound as though it is in front of them, and if there is another rhythm orb to the left, it will sound as though it is to the left.
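  • For illustration only, the listener-relative audio could be obtained by rotating each orb's world position by the tablet's yaw before spatialization (hypothetical names; the disclosure does not specify the math):

    import math

    # Sketch: rotate a world (x, z) position into the listener's frame.
    def to_listener_frame(x, z, yaw):
        c, s = math.cos(-yaw), math.sin(-yaw)
        return x * c - z * s, x * s + z * c

    # An orb at (0, 1) (directly ahead) moves to the listener's right
    # when the user turns 90 degrees to the left (yaw = pi / 2).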
  • a structure may be provided for progressive achievement and for enabling users to share their compositions with others.
  • two modes may exist: Training Mode and Game Mode.
  • Training Mode may not be defined with regard to whether it is a solitary or social activity. In Training Mode, users will be gradually introduced to the interface and features through a series of tutorial "levels," as is commonly found in video games. Users will be asked to match goal configurations as closely as possible. The closer the user comes to directly matching the pre-constructed goal, the higher their "grade" for a given round.
  • Training Mode may encourage players to explore the space in ways that they might not know were possible.
  • the first tutorial may be to simply add a single drum sound and turn on 1/2 of that sound's beats in the sequence, followed by tutorials on changing pitch, adding sound effects, and mixing sounds spatially.
  • Game Mode may be subsequent to Training Mode and may be a single-player experience. In Game Mode, students will be challenged to demonstrate their proficiency with the interface based on their ability to utilize the features introduced in the tutorials; the tutorial levels will correlate with gameplay levels.
  • teachers may guide students with regard to the "difficulty" of implementing different music functionalities (e.g., beat frequency, sound pitch, sound effects, spatial orientation of sounds).
  • students may be provided with audio-only examples of music samples, and may need to replicate the music with their spatial composition.
  • the music may need to be within an empirically determined percentage of the original music, with subsequent levels becoming increasingly more complex in terms of the number of instruments and spatial arrangement of those instruments.
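  • For illustration only, one simple scoring metric consistent with the above is the fraction of on/off steps that match between the user's pattern and the goal pattern (the disclosure leaves the exact threshold empirically determined):

    # Sketch: grade a user's step pattern against a goal pattern.
    def grade(user_steps, goal_steps):
        matches = sum(u == g for u, g in zip(user_steps, goal_steps))
        return matches / len(goal_steps)

    grade([1, 0, 1, 0], [1, 0, 0, 0])  # 0.75 -> 75% match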
  • badges can also be defined for particular kinds of compositions or those that make use of techniques introduced through the challenge mode of the game. This may encourage players to experiment with different kinds of audio constructions in the virtual space. For example, one badge, possibly called the "Interpretive Dance Badge," would require movement on the part of the listener to achieve a "proper" listening of the audio track. All social interaction features will be accessible through the tablet interface using, e.g., a secure server for storage. Students may be assigned an anonymous ID for use with the app to protect their identities, although only students, their teachers, and the researchers will have access to the data. The complete disclosures of the patents, patent documents, and publications cited herein are incorporated by reference in their entirety as if each were individually incorporated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

Exemplary systems and methods are disclosed that allow a user to create music using continuous sound structures and other graphical elements via a graphical user interface. The method includes: depicting a music portion space in the graphical user interface of a display apparatus to create a music portion; and allowing the user, via an input apparatus, to add at least one continuous sound structure to the music portion space, each of the at least one continuous sound structure including a plurality of sound elements arranged about a continuous loop representing a period of time.
PCT/US2013/072481 2012-11-29 2013-11-29 Systèmes et procédés de création musicale Ceased WO2014088917A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/648,040 US20150309703A1 (en) 2012-11-29 2013-11-29 Music creation systems and methods

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261731214P 2012-11-29 2012-11-29
US61/731,214 2012-11-29

Publications (2)

Publication Number Publication Date
WO2014088917A1 true WO2014088917A1 (fr) 2014-06-12
WO2014088917A8 WO2014088917A8 (fr) 2014-07-24

Family

ID=50883891

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/072481 Ceased WO2014088917A1 (fr) 2012-11-29 2013-11-29 Systèmes et procédés de création musicale

Country Status (2)

Country Link
US (1) US20150309703A1 (fr)
WO (1) WO2014088917A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109756628A (zh) * 2018-12-29 2019-05-14 北京金山安全软件有限公司 一种功能按键音效的播放方法、装置及电子设备

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9471205B1 (en) * 2013-03-14 2016-10-18 Arnon Arazi Computer-implemented method for providing a media accompaniment for segmented activities
US9378718B1 (en) * 2013-12-09 2016-06-28 Sven Trebard Methods and system for composing
EP3206408B1 (fr) * 2014-10-10 2020-12-30 Sony Corporation Dispositif et procédé de codage, dispositif et procédé de lecture et programme
US10635384B2 (en) * 2015-09-24 2020-04-28 Casio Computer Co., Ltd. Electronic device, musical sound control method, and storage medium
EP3407232B1 (fr) * 2017-05-23 2021-07-28 Ordnance Survey Limited Authentification spatiotemporelle
DE112020001542T5 (de) * 2019-03-26 2022-01-13 Sony Group Corporation Informationsverarbeitungsvorrichtung und informationsverarbeitungsverfahren
USD952658S1 (en) * 2019-04-16 2022-05-24 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
CN112799581A (zh) * 2021-02-03 2021-05-14 杭州网易云音乐科技有限公司 多媒体数据处理方法及装置、存储介质、电子设备
US20230154445A1 (en) * 2021-11-15 2023-05-18 Snap Inc. Spatial music creation interface
USD1071957S1 (en) 2022-12-07 2025-04-22 Hyph Ireland Limited Display screen with graphical user interface

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6201769B1 (en) * 2000-04-10 2001-03-13 Andrew C. Lewis Metronome with clock display
US20020152877A1 (en) * 1998-01-28 2002-10-24 Kay Stephen R. Method and apparatus for user-controlled music generation
US20060000345A1 (en) * 2002-12-19 2006-01-05 Hajime Yoshikawa Musical sound production apparatus and musical
US20090235809A1 (en) * 2008-03-24 2009-09-24 University Of Central Florida Research Foundation, Inc. System and Method for Evolving Music Tracks
US20100319517A1 (en) * 2009-06-01 2010-12-23 Music Mastermind, LLC System and Method for Generating a Musical Compilation Track from Multiple Takes

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7884275B2 (en) * 2006-01-20 2011-02-08 Take-Two Interactive Software, Inc. Music creator for a client-server environment
JP5589432B2 (ja) * 2010-02-23 2014-09-17 ヤマハ株式会社 発音制御装置及びプログラム
US8907191B2 (en) * 2011-10-07 2014-12-09 Mowgli, Llc Music application systems and methods

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020152877A1 (en) * 1998-01-28 2002-10-24 Kay Stephen R. Method and apparatus for user-controlled music generation
US6201769B1 (en) * 2000-04-10 2001-03-13 Andrew C. Lewis Metronome with clock display
US20060000345A1 (en) * 2002-12-19 2006-01-05 Hajime Yoshikawa Musical sound production apparatus and musical
US20090235809A1 (en) * 2008-03-24 2009-09-24 University Of Central Florida Research Foundation, Inc. System and Method for Evolving Music Tracks
US20100319517A1 (en) * 2009-06-01 2010-12-23 Music Mastermind, LLC System and Method for Generating a Musical Compilation Track from Multiple Takes

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109756628A (zh) * 2018-12-29 2019-05-14 北京金山安全软件有限公司 一种功能按键音效的播放方法、装置及电子设备
CN109756628B (zh) * 2018-12-29 2021-03-16 北京金山安全软件有限公司 一种功能按键音效的播放方法、装置及电子设备

Also Published As

Publication number Publication date
WO2014088917A8 (fr) 2014-07-24
US20150309703A1 (en) 2015-10-29

Similar Documents

Publication Publication Date Title
US20150309703A1 (en) Music creation systems and methods
US10224012B2 (en) Dynamic music authoring
US8367922B2 (en) Music composition method and system for portable device having touchscreen
CN100447723C (zh) 控制计算机化装置的方法和设备
US7453035B1 (en) Methods and systems for providing musical interfaces
EP2760014B1 (fr) Courbe de résultat interactive pour ajuster les paramètres audio d' un enregistrement utilisateur.
US9076264B1 (en) Sound sequencing system and method
EP2239727A1 (fr) Appareil et programme de performance musicale
US10496199B2 (en) Device and method for controlling playback of digital multimedia data as well as a corresponding computer-readable storage medium and a corresponding computer program
WO2010034063A1 (fr) Système de contenus audio et vidéo
US10430069B2 (en) Device, a method and/or a non-transitory computer-readable storage means for controlling playback of digital multimedia data using touch input
US20170206055A1 (en) Realtime audio effects control
CN101661783B (zh) 信息处理设备以及信息处理方法
JP2016193051A (ja) ゲーム装置及びゲームプログラム
CN112883223A (zh) 音频展示方法、装置、电子设备及计算机存储介质
US20140266569A1 (en) Controlling music variables
US8799819B2 (en) Graphical user interface for multi-tap delay
JP5433988B2 (ja) 電子音楽装置
JP5682285B2 (ja) パラメータ設定プログラム及び電子音楽装置
JP6987405B2 (ja) ゲームシステム、それに用いるコンピュータプログラム、及び制御方法
Krout et al. Music technology used in therapeutic and health settings
JP4192461B2 (ja) 情報処理装置及び情報処理システム並びに情報処理用プログラム
Adams et al. SonicExplorer: Fluid exploration of audio parameters
Ren et al. Interactive virtual percussion instruments on mobile devices
JP5389876B2 (ja) 音声制御装置、音声制御方法、及び音声制御プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13860079

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14648040

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13860079

Country of ref document: EP

Kind code of ref document: A1