WO2025012857A1 - Musical instrument that digitizes and processes signals, and synthesizes sounds, and associated methods - Google Patents
- Publication number
- WO2025012857A1 (PCT/IB2024/056771)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- string
- signals
- instrument
- sensors
- musical instrument
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/32—Constructional details
- G10H1/34—Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
- G10H1/342—Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments for guitar-like instruments with or without strings and with a neck on which switches or string-fret contacts are used to detect the notes being played
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/02—Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
- G10H1/04—Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation
- G10H1/053—Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only
- G10H1/055—Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only by switches with variable impedance elements
- G10H1/0551—Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only by switches with variable impedance elements using variable capacitors
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H3/00—Instruments in which the tones are generated by electromechanical means
- G10H3/12—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument
- G10H3/14—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument using mechanically actuated vibrators with pick-up means
- G10H3/18—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument using mechanically actuated vibrators with pick-up means using a string, e.g. electric guitar
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H3/00—Instruments in which the tones are generated by electromechanical means
- G10H3/12—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument
- G10H3/14—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument using mechanically actuated vibrators with pick-up means
- G10H3/18—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument using mechanically actuated vibrators with pick-up means using a string, e.g. electric guitar
- G10H3/186—Means for processing the signal picked up from the strings
- G10H3/188—Means for processing the signal picked up from the strings for converting the signal to digital format
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H5/00—Instruments in which the tones are generated by means of electronic generators
- G10H5/007—Real-time simulation of G10B, G10C, G10D-type instruments using recursive or non-linear techniques, e.g. waveguide networks, recursive algorithms
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/155—User input interfaces for electrophonic musical instruments
- G10H2220/265—Key design details; Special characteristics of individual keys of a keyboard; Key-like musical input devices, e.g. finger sensors, pedals, potentiometers, selectors
- G10H2220/275—Switching mechanism or sensor details of individual keys, e.g. details of key contacts, hall effect or piezoelectric sensors used for key position or movement sensing purposes; Mounting thereof
- G10H2220/295—Switch matrix, e.g. contact array common to several keys, the actuated keys being identified by the rows and columns in contact
- G10H2220/301—Fret-like switch array arrangements for guitar necks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/155—User input interfaces for electrophonic musical instruments
- G10H2220/395—Acceleration sensing or accelerometer use, e.g. 3D movement computation by integration of accelerometer data, angle sensing with respect to the vertical, i.e. gravity sensing
Definitions
- the present invention relates to musical instruments. More particularly, the present invention relates to a digital electronic musical instrument comprising a synthesizer based on a physical simulation, a method for digitizing and processing signals generated continuously by said instrument, and a method for synthesizing sounds by means of the digital instrument.
- Among the most common digital musical systems or instruments are MIDI-based systems such as those described by patent applications US 8093482 B1, US 2011/239848 A1, and US 2022/208160 A1. Briefly, these systems use a processor that receives signals emitted by sensors and generates an output signal in MIDI format. However, these systems make use of discrete signals that are usually limited to the identity of a played note, its intensity and, occasionally, some modulated parameter. As a result, these systems are incapable of executing and interpreting the more subtle techniques and gestures arising from a performer's interpretation.
- the present invention provides a musical instrument that digitizes and processes signals to accurately replicate the actions of a performer, so as to allow performing and interpreting the subtle techniques and gestures resulting from the performer's interpretation; more particularly, such subtle techniques and gestures are peculiar to a stringed musical instrument.
- Another aspect of the present invention is a method for digitizing and processing such actions.
- Another aspect of the present invention is a method for synthesizing sounds from the digitized and processed signals.
- the present invention relates to a musical instrument that digitizes and processes the actions of a performer, wherein said instrument comprises two parts, wherein one of the parts digitizes the action of the non-deft hand of the performer and the other part digitizes the action of the deft hand of the performer.
- the instrument comprises a body and a neck, wherein the body of the instrument is the part of the musical instrument that digitizes the action of the deft hand of the performer while the neck of the instrument is the part of the musical instrument that digitizes the action of the non-deft hand of the performer.
- One aspect of the present invention relates to a musical instrument that digitizes and continuously processes analog signals, digital signals, or both, produced by said instrument, and wherein said instrument comprises
- a body comprising a container and a lid, thus defining an inner volume
- said body further comprises:
  - at least one string located on the outer surface of the lid and extending along the lid, wherein said at least one string is a metal string;
  - two string damping media, wherein the first medium is located proximate to one end of the at least one string and the second medium is located proximate to the opposite end of the at least one string, and wherein both media are in contact with the at least one string;
  - at least one spring located on the outer surface of the lid, wherein said at least one spring is linked to the corresponding at least one string;
  - at least one microphone located on the outer surface of the lid, below the corresponding at least one string and without contact with the string;
  - at least one capacitive sensing integrated circuit located within the inner volume and connected with the at least one string;
  - at least one analog-to-digital converter, or A/D converter, located within the inner volume,
- the modulator is integrated into an inertial measurement unit (IMU), located in the fretboard and connected to the microcontroller, wherein the at least one microphone, the at least one capacitive sensing integrated circuit, the at least one A/D converter, the at least one spring, and the at least one row of sensors correspond to each of the strings of the at least one string, in a 1 to 1 ratio.
- IMU inertial measurement unit
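The 1:1 correspondence described above (one microphone, capacitive sensing circuit, A/D converter, spring, and sensor row per string) can be sketched as a per-string acquisition channel. This is an illustrative Python sketch only; the names `StringChannel`, `microphone_gain`, and `adc_bits` are hypothetical and not taken from the specification:

```python
from dataclasses import dataclass

@dataclass
class StringChannel:
    """One acquisition channel per string: the microphone, capacitive
    sensing circuit, A/D converter, spring, and fretboard sensor row
    are each paired with one string (1:1 ratio)."""
    string_id: int
    microphone_gain: float = 1.0   # hypothetical per-channel gain
    adc_bits: int = 16             # bit depth mentioned elsewhere in the text

# A six-string body yields six independent channels.
channels = [StringChannel(string_id=i) for i in range(6)]
```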
- the instrument body comprises 1, 2, 3, 4, 5, 6, 7, 8 or more strings, more preferably it comprises 4, 5 or 6 strings, still more preferably, the instrument body comprises 6 strings.
- the length of the at least one string is between 10 and 40 cm, more preferably, between 25 and 35 cm, still more preferably, the length of the string is 30 cm.
- the damping media comprise a viscoelastic material.
- said viscoelastic material is viscoelastic foam rubber.
- the at least one microphone is located below the corresponding at least one string at a distance of 5, 6, 7, 8, 9 or 10 mm, more preferably at a distance of 5 mm.
- the at least one capacitive sensing integrated circuit is connected to the at least one string, to the plurality of sensors of the fretboard, and to the microcontroller.
- the at least one A/D converter digitizes the information coming from the at least one microphone with a given sampling frequency, preferably the sampling frequency is between 12 and 192 kHz, preferably the sampling frequency is selected from the group consisting of 12, 24, 48, 96 and 192 kHz, even more preferably the sampling frequency is 96 kHz.
- the instrument body comprises a control and processing unit (CPU), wherein said unit controls and processes the information coming from the at least one A/D converter and the microcontroller.
- CPU control and processing unit
- control and processing unit comprises at least one sound channel.
- said unit comprises 1, 2, 3, 4, 5, 6, 7, 8 or more sound channels, more preferably it comprises 4, 5 or 6 sound channels. Even more preferably, said unit comprises 6 sound channels.
- control and processing unit controls and processes the information coming from the at least one A/D converter with a given sampling frequency, preferably said sampling frequency is between 12 and 192 kHz, preferably the sampling frequency is selected from the group consisting of 24, 48, 96, 192 kHz, even more preferably the sampling frequency is 96 kHz.
- control and processing unit models at least one virtual string.
- the audio output comprises an analog audio output, wherein said analog audio output comprises a digital-to-analog converter (DAC) in communication with an audio signal conditioning circuit, wherein said audio signal conditioning circuit conditions the signal for reproduction.
- DAC digital-to-analog converter
- the digital-to-analog converter converts signals with a given sampling frequency, preferably said sampling frequency is between 12 and 192 kHz, preferably the sampling frequency is selected from the group consisting of 24, 48, 96 and 192 kHz, even more preferably the sampling frequency is 48 kHz.
- the audio signal conditioning circuit is in communication with a reproduction means which may be comprised by the instrument, or may be located outside the instrument, and wherein said reproduction means is a speaker or a headphone, more preferably a speaker.
- the plurality of sensors comprised on the fretboard printed circuit board comprises capacitive sensors or pressure sensors, more preferably, the plurality of sensors comprises capacitive sensors.
- the at least one row of sensors comprises at least 6 sensors.
- said row comprises 6, 7, 8, 9, 10, 11, 12 or more sensors, preferably, the at least one row of sensors comprises 10, 11 or 12 sensors. Even more preferably, the at least one row of sensors comprises 12 sensors.
- the plurality of sensors of the instrument fretboard comprises 72 sensors, wherein said 72 sensors are distributed in 6 sensor rows, wherein each sensor row comprises 12 sensors.
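The 72-sensor layout above (6 rows of 12 sensors) can be addressed as a grid in which the row selects the string and the column the fret position. A minimal sketch, assuming the sensors are scanned in a flat order; the function name `sensor_position` is hypothetical:

```python
ROWS, SENSORS_PER_ROW = 6, 12   # 6 string rows x 12 fret sensors = 72

def sensor_position(flat_index):
    """Convert a flat sensor index (0-71) into a (row, column) pair on
    the fretboard grid: the row selects the string, the column the
    fret position."""
    if not 0 <= flat_index < ROWS * SENSORS_PER_ROW:
        raise ValueError("sensor index out of range")
    return divmod(flat_index, SENSORS_PER_ROW)
```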
- the at least one additional modulator of the model is selected from an accelerometer, gyroscope, magnetometer, or a combination thereof, and wherein said modulator is integrated into an inertial measurement unit (IMU).
- IMU inertial measurement unit
- step B) digitize the plurality of analog signals of step A), wherein i) the plurality of signals coming from the at least one string is digitized by means of at least one capacitive sensing integrated circuit, ii) the plurality of signals coming from the at least one microphone is digitized by means of at least one A/D converter with a frequency of between 12 and 192 kHz, iii) the plurality of signals coming from the at least one sensor of the plurality of sensors of the fretboard is digitized by means of at least one capacitive sensing integrated circuit;
- step D) processing the plurality of digital signals obtained from step C) to obtain a plurality of input signals, wherein said processing comprises i) converting the signals coming from the at least one string into an increase of the friction coefficient, ii) converting the signals coming from at least one microphone into a force, iii) converting the signals coming from the plurality of sensors of the fretboard into a friction, iv) converting the signals coming from the at least one additional modulator of the model into a force, volume, friction, position where a force is applied, or position where the sound is read.
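Step D above maps each family of digitized signals onto an input variable of the virtual-string model. A hypothetical dispatch sketch in Python; all scale factors and names are illustrative assumptions, not values from the specification:

```python
def process_signals(string_touch, mic_sample, fret_pressure, imu_reading):
    """Map each digitized signal family onto a virtual-string input
    variable, mirroring steps D(i)-D(iv). All scale factors below are
    illustrative placeholders, not values from the specification."""
    return {
        "friction_increase": 0.05 * string_touch,  # D(i): string contact
        "excitation_force": 2.0 * mic_sample,      # D(ii): microphone
        "fret_friction": 0.1 * fret_pressure,      # D(iii): fretboard sensors
        "modulation_force": 0.5 * imu_reading,     # D(iv): IMU modulator
    }
```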
- the method for digitizing and processing signals of the present invention is performed in real time.
- the threshold value of step Ci) is set with reference to the noise of the digital signals resulting from step A) or B). More preferably, said threshold value is determined as 20% greater than said noise.
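The thresholding described above can be sketched as a simple noise gate whose threshold sits 20% above an estimated noise floor. This is an assumption-laden illustration, not the patented implementation; `noise_floor` is assumed to be measured beforehand, e.g. from a silent capture:

```python
def gate(samples, noise_floor):
    """Zero out samples whose magnitude does not exceed a threshold set
    20% above the estimated noise level (threshold = 1.2 * noise)."""
    threshold = 1.2 * noise_floor
    return [s if abs(s) > threshold else 0.0 for s in samples]
```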
- the method for processing and digitizing signals is carried out independently for each string of the at least one string, for each microphone of the at least one microphone, for each sensor of the plurality of sensors of the fretboard, and for each additional model modulator of the at least one additional model modulator.
- the position and pressure of the performer's non-deft hand on the circuit board is used to modify the length of the string and consequently the resonant frequency of the at least one virtual string, generating different notes.
- the pressure difference can be used to generate sound effects typical of real instruments such as a fully plucked string, harmonics, slurs between two notes, quenching, etc.
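The relationship between fretted string length and pitch follows the ideal-string formula f = (1/2L) * sqrt(T/mu): halving the vibrating length doubles the fundamental frequency, which is how fretting position generates different notes. A short sketch with assumed tension and linear-density values (not from the specification):

```python
import math

def string_frequency(length_m, tension_n, mass_per_m):
    """Fundamental frequency of an ideal string: f = (1/2L) * sqrt(T/mu)."""
    return math.sqrt(tension_n / mass_per_m) / (2.0 * length_m)

# Halving the vibrating length (e.g. by fretting at the 12th position)
# doubles the pitch; tension and linear density below are assumed values.
open_note = string_frequency(0.30, 70.0, 5e-4)    # full 30 cm string
octave_up = string_frequency(0.15, 70.0, 5e-4)    # half length
```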
- Another aspect of the present invention is a method for synthesizing sounds comprising the steps of
- step D) sending the plurality of output signals produced in step C) to an audio output for their reproduction.
- the method for synthesizing sounds of the present invention is performed in real time.
- the value of n for the n nodes is between 100 and 500, preferably 500. Accordingly, the value of (n-1) for the number of springs is between 99 and 499, preferably 499 springs.
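The n-node, (n-1)-spring topology can be illustrated as a damped mass-spring chain integrated with a semi-implicit Euler step. A minimal Python sketch; the mass, stiffness, damping, and time-step values are hypothetical, chosen only for numerical stability, and do not come from the specification:

```python
import numpy as np

def simulate_string(n=500, k=1e4, mass=1e-3, damping=1e-4,
                    dt=1e-5, steps=1000, pluck_node=100, pluck=1e-3):
    """Semi-implicit Euler simulation of a virtual string modeled as
    n nodes (point masses) joined by n-1 springs, with both endpoints
    clamped and a simple air-friction damping term."""
    y = np.zeros(n)          # transverse displacement of each node
    v = np.zeros(n)          # velocity of each node
    y[pluck_node] = pluck    # initial 'pluck' displacement
    for _ in range(steps):
        f = np.zeros(n)
        # net spring force on interior nodes from both neighbours
        f[1:-1] = k * (y[:-2] - 2.0 * y[1:-1] + y[2:])
        f -= damping * v     # air-friction damping
        v += (f / mass) * dt
        v[0] = v[-1] = 0.0   # endpoints clamped
        y += v * dt
    return y
```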
- the Figure 1 shows a schematic representation of the instrument of the present invention showing the components and their connections.
- the Figure 2 shows a representative schematic of the modeling of a virtual string according to the modeling as described in the present invention.
- the Figure 3 shows a schematic of a preferred embodiment of the digitizing and processing method of the present invention in combination with a preferred embodiment of the sound generation method of the present invention.
- the Figure 4 shows a block diagram of one embodiment of the musical instrument of the present invention.
- deft hand will be used to refer to both the right hand of a right-handed guitarist and the left hand of a left-handed guitarist.
- non-deft hand will be used to refer to both the left hand of a right-handed guitarist and the right hand of a left-handed guitarist.
- deft hand and non-deft hand should not be interpreted in relation to the performer's ability, but only to the spatial arrangement in which the performer places the hands in relation to the instrument.
- the term "performer” shall be understood to mean a person who acts upon the instrument for the purpose of generating a music, sound, sound effect, or a combination thereof.
- action of a performer or “action of the performer” shall be understood to mean any action, including and not limited to movements, direct and indirect contacts, executions, strokes, strumming, techniques and gesticulations, among others, performed upon the instrument by the performer that is detectable by the instrument, i.e. that generates at least one non-null signal by the instrument, for the purpose of generating a music, sound, sound effect, or a combination thereof.
- the term may refer to a singular action or to a series of consecutive actions.
- the term "digitizing" refers to any electronic process carried out by any type of converter, wherein a signal or plurality of signals of any type of non-digital nature is converted into a digital signal or into a plurality of digital signals.
- string when not accompanied by any adjective or qualifier will refer to a real, physical string made of any material suitable for use as required by the instrument of the present invention and upon which a performer can physically interact in order to perform any of the methods disclosed in the present invention, unless the context clearly indicates otherwise.
- real string “string of the body”, “string of the instrument” and “string of the body of the instrument” shall be considered synonymous and shall be used interchangeably.
- the term "virtual string” will refer to a string resulting from mathematical modeling, in any of the embodiments disclosed in the present specification, executed by means of the software of a control and processing unit, and on which a signal or a plurality of signals coming from the instrument, preferably signals resulting from the actions of a performer, can be applied.
- the expression “of the model” should be understood as referring to the mathematical model used in the modeling of the virtual string.
- the terms “body of the instrument” and “part of the musical instrument that digitizes the action of the deft hand of the performer” are to be understood as synonyms and, therefore, will be used interchangeably.
- neck of the instrument and “part of the musical instrument that digitizes the action of the performer's non-deft hand” shall be considered synonymous and used interchangeably.
- the term "damped string” shall be understood to mean a string whose vibrations subsequent to the initial vibration are attenuated. Accordingly, any process to attenuate the vibrations of a string subsequent to the initial vibration shall be understood as “damp”, “damping” or “the damping” and these terms shall be used interchangeably when they refer to a string that is subjected to such a process.
- real time when referring to a process or action, shall be understood as meaning that said process or action is completed in an amount of time that is not significantly perceptible to the performer or that said process or action is completed in an amount of time that does not represent an inconvenience to the execution of the performer's actions, and/or the operation of the instrument.
- top face of the base should be understood as the face on which the performer will execute the actions to be digitized and processed by the instrument.
- bottom face of the base should be understood as the face or region of the base that is diametrically opposite to the top face.
- the term "parameter” refers to the at least one virtual string or the modeling of the at least one virtual string, it should be understood as magnitudes modeling physical properties (such as mass, spring hardness, air friction, etc.) of the at least one virtual string or the modeling of the at least one virtual string that determine its properties independently from the signals coming from the musical instrument.
- variable, when it refers to the at least one virtual string or the modeling of the at least one virtual string, should be understood as magnitudes modeling physical properties (such as string length, friction coefficients, applied forces, etc.) of the at least one virtual string or the modeling of the at least one virtual string that determine its behavior in a manner dependent on the signals coming from the musical instrument.
- the term "input signal”, “input signals” and “plurality of input signals” will be used interchangeably and will refer to those digital signals resulting from the method for digitizing and processing of the present invention in any of its embodiments which are used to calculate or determine the variables of the at least one virtual string in any of its embodiments.
- For the purposes of the present invention, the terms "output signal”, “output signals” and “plurality of output signals” will be used interchangeably and will refer to those digital signals resulting from the application, in any of its embodiments, of the input signals on the at least one virtual string of the present invention in any of its embodiments.
- One aspect of the present invention is a musical instrument 100 that digitizes and processes signals, preferably, wherein said signals result from the actions of a performer, such that the digitization and processing of the signals allows the techniques and gestures of the performer to be accurately replicated.
- the techniques and gestures that the instrument replicates comprise legato, tapping, slapping, glissando, vibrato, finger tapping, pick tapping, pick dragging, pizzicato, string quenching, strumming, snapping, harmonic generation, among others.
- the musical instrument 100 comprises a body 101 and a neck 102, wherein said body 101 and neck 102 are linked such that the neck 102 extends from the body 101 following its longitudinal axis.
- the body 101 is between 20 and 40 cm long, between 7 and 30 cm wide and between 4 and 10 cm thick.
- the neck 102 of the instrument is between 20 and 40 cm long, between 4 and 8 cm wide and between 1 and 3 cm thick.
- the body 101 comprises a container and a lid pivotally connected to the container, thus defining an inner volume.
- the instrument 100 of the invention comprises a neck 102 comprising a base with a partially or completely flat top face, wherein said top face is that face upon which the performer executes the actions to be digitized and processed by the instrument, and wherein said neck 102 further comprises: a fretboard 107 disposed on at least a planar portion of the top face of the base of the neck 102 comprising a printed circuit board 108, comprising a plurality of sensors 110 of the fretboard wherein said plurality of sensors 110 are arranged in at least one row 109 of sensors 110, and wherein said circuit board 108 is connected to the at least one capacitive sensing integrated circuit 113 via a wired connection;
- modulator 111 is integrated to an inertial measurement unit (IMU) located in the fretboard 107 of the neck 102 and which is connected to the microcontroller 114.
- IMU inertial measurement unit
- the at least one row 109 of sensors 110 is arranged such that it is aligned on the longitudinal axis with its corresponding string of the at least one string 103.
- the body 101 of the instrument 100 comprises at least one string 103 of a stringed musical instrument, preferably said at least one string 103 is of an acoustic or electric stringed instrument. More preferably, said at least one string is an acoustic guitar string or an electric guitar string.
- the at least one string 103 is made of a metal selected from steel, nickel, brass, bronze, or any combination thereof. In a particularly preferred embodiment, the at least one string 103 is made of steel and nickel.
- each of the strings of the at least one string 103 is independently connected to the capacitive sensing integrated circuit 113.
- the at least one string 103 is a damped string with two damping media 104a and 104b, wherein medium 104a is located proximate to the end of the at least one string 103 most distal to the fretboard and medium 104b is located proximate to the end of the at least one string 103 closest to the fretboard, and wherein both media are in contact with the at least one string 103.
- "Proximate to the end of the string” will be understood to mean any distance that allows the damping medium to produce the desired damping and reduce vibrations subsequent to the initial vibration of the string. A person skilled in the art will be able to determine the exact location of the damping media. Likewise, a person skilled in the art will be able to determine the optimum size of these damping media to produce the desired damping.
- the inclusion of the viscoelastic material gives the instrument the ability to avoid reflections of the initial signal and spurious signals, giving the invention the advantage of replicating with greater precision the intention, techniques and gestures of the performer.
- the body 101 of the instrument 100 comprises two supports.
- said supports are located on the outer surface of the lid and at opposite ends of the at least one string, wherein said two supports comprise a first support which is used as a tensioning point of the at least one string and a second support which is used as a bridge or anchor point.
- the method of anchoring the at least one string is by attaching the string to the second body support.
- the body 101 of the instrument 100 comprises two supports, wherein the first support is located at the more distal end relative to the neck 102 of the instrument 100 and the second support is located at the nearer end relative to the neck 102 of the instrument 100.
- the first support comprises at least one spring 105.
- said support comprises 1, 2, 3, 4, 5, 6, 7, 8 or more springs 105. More preferably, said support comprises 4, 5 or 6 springs 105. Even more preferably, said support comprises 6 springs 105.
- Each of the at least one spring 105 is arranged in such a way that a spring is arranged for each of the strings of the at least one string 103.
- said at least one spring 105 is linked to the at least one string 103 from a single end.
- all the springs shall be arranged from the same end of each of the strings, so that in the instrument all the springs are located on the first support.
- the tension of the strings of the instrument is similar to that of the strings of a conventional length instrument despite the fact that the strings of the instrument of the present invention are significantly shorter.
- the musical instrument 100 comprises at least one polyphonic microphone, more preferably said at least one microphone is a hexaphonic microphone.
- the body 101 of the instrument 100 comprises a capacitive sensing integrated circuit 113, wherein said circuit 113 is connected to the at least one string 103 such that each of the strings of the at least one string 103 is independently connected.
- the connection between the at least one string 103 and the capacitive sensing integrated circuit 113 is made via a wired connection.
- the at least one capacitive sensing integrated circuit 113 is used to detect the direct or indirect contact of the performer with the at least one string 103, for example through an instrumental performance element such as a pick or a bow. Further, the at least one capacitive sensing integrated circuit 113 receives signals from the at least one string 103, from the plurality of sensors 110 of the fretboard, or both, and sends signals to the microcontroller 114.
- the at least one capacitive sensing integrated circuit 113 comprises a group of sensing integrated circuits for sensing the quenching of the at least one string 103 and another group of capacitive sensing integrated circuits for sensing the force and position coming from the sensors 110 of the fretboard 107.
- the instrument body comprises at least one A/D converter 112.
- the bit depth chosen for the at least one A/D converter 112 is such as to allow sufficient resolution to feed the simulation, preferably, the bit depth is 16 bits.
- each of the at least one A/D converter 112 is arranged such that an A/D converter 112 is arranged for each of the microphones 106 that sense the strings of the at least one string 103. More particularly, said converters 112 and microphones 106 are connected via a wired connection.
- the at least one A/D converter 112 digitizes information coming from the at least one microphone 106 at a sampling rate chosen to allow the capture of the audible components of the performance transient (20 Hz to 20 kHz) and of ultrasonic vibrations that may contribute to the final model synthesis result (20 kHz to 48 kHz). In one embodiment, the at least one A/D converter 112 digitizes the information coming from the at least one microphone 106 with a given sampling frequency, preferably between 12 and 192 kHz, preferably selected from 12, 24, 48, 96 or 192 kHz, even more preferably 96 kHz.
- This high sampling rate allows the musical instrument 100 of the present invention to replicate with greater detail and resolution the movements performed by the at least one string 103 and to transmit that information for use in the modeling of the at least one virtual string 200, thus allowing a more accurate replication of the movements performed on the real string.
- an expressiveness is thereby obtained by the musical instrument that is more similar to the gesticulations and techniques of the performer, which is not obtained by other musical instruments in the prior art.
- the musical instrument 100 features a microcontroller 114.
- the microcontroller 114 is responsible for controlling, concentrating and distributing the at least one digital signal that does not come from the at least one A/D converter 112 to the control and processing unit 115. For example, said at least one digital signal coming from the at least one additional modulator 111 or from the at least one capacitive sensor 110.
- said microcontroller 114 is connected to the capacitive sensing integrated circuit 113 via an I2C connection, or an SPI connection.
- the microcontroller 114 used is a 32-bit microcontroller with SPI, I2C digital communication ports and digital inputs, model ESP32.
- the body 101 of the instrument 100 comprises a control and processing unit 115 wherein said control and processing unit 115 controls and processes information coming from the at least one A/D converter 112.
- the control and processing unit 115 is connected to the at least one A/D converter 112 through the use of at least one sound channel.
- each of the at least one sound channel is arranged in such a way that a sound channel is arranged for each A/D converter of the at least one A/D converter 112. That is, by way of example, when the instrument 100 comprises 6 sound channels, it will also comprise 6 microphones such that each sound channel receives information from a single microphone.
- control and processing unit 115 operates with a latency such that it is not significantly noticeable to the performer or is not a drawback to the execution of the performer's actions, and/or the operation of the instrument.
- control and processing unit 115 operates with a latency of 2, 3, 4, 5, 6, 7, 8, 9, or 10 ms, more preferably, said unit 115 operates with a latency of 2, 3, 4 or 5 ms, more preferably, said unit 115 operates with a latency of 2 ms.
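As a hedged illustration (the helper name and values are not from the patent), the relationship between a latency budget and the number of samples per processing block at a given sampling frequency can be sketched as:

```python
def buffer_size_for_latency(latency_ms: float, sample_rate_hz: int) -> int:
    """Number of samples that fit in one processing block so that the
    block duration does not exceed the latency budget."""
    return int(round(latency_ms / 1000.0 * sample_rate_hz))

# At the preferred 96 kHz sampling frequency, a 2 ms latency budget
# corresponds to processing blocks of at most 192 samples.
block = buffer_size_for_latency(2, 96_000)
```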
- the control and processing unit 115 comprised in the body 101 of the instrument 100 digitizes information coming from the at least one microphone 106 at a given sampling rate.
- said sampling frequency is between 12 and 192 kHz, preferably selected from 12, 24, 48, 96 or 192 kHz, still more preferably the sampling frequency is 96 kHz.
- control and processing unit 115 communicates with the following elements in the following ways: with the at least one A/D converter 112 via an I2C or SPI connection; with the microcontroller 114 via a USB connection; and with the at least one audio output 118 or 119, by means of a DAC and an audio conditioning circuit 117 if the audio output 118 is analog, or by means of a wired connection if the audio output 119 is digital.
- control and processing unit 115 comprises a graphics processing unit (GPU), a field programmable gate array (FPGA), an embedded system, or any other high-performance co-processor.
- control and processing unit 115 may be located outside the musical instrument 100 on an external processor, for example, inside a computer or notebook, and be in communication with the microcontroller 114 and the A/D converters 112 via a serial digital connection, such as USB. Additionally, when the external processor is a computer or notebook, the audio output used by the musical instrument shall be that comprised by the computer or notebook.
- the control and processing unit 115 is a computer that may be located within the interior volume of the body 101 of the instrument 100, or located outside said instrument 100. In a particular embodiment, said computer may be an Nvidia Jetson nano, Jetson TX2 NX, or any other computer with similar computing capability, or any combination thereof.
- control and processing unit 115 models at least one virtual string 200 for each of the at least one string 103 of the body 101.
- the control and processing unit 115 models 1, 2, 3, 4 or more virtual strings for each of the at least one string of the body.
- control and processing unit 115 models the at least one virtual string 200 such that each virtual string of the at least one virtual string 200 corresponds to each string of the at least one string 103 of the body 101.
- when at least two virtual strings are modeled, said modeling may be independent or interrelated, i.e., said at least two strings may be modeled separately or together.
- the instrument 100 comprises an analog audio output 118, wherein said analog audio output 118 comprises a digital-to-analog converter 116 (DAC), wherein said DAC 116 converts the signal generated by the model into an analog audio signal; and at least an audio signal conditioning circuit 117 that communicates with the at least one DAC converter 116, wherein said audio signal conditioning circuit conditions the signal for reproduction.
- the audio signal conditioning circuit 117 is in direct communication with a reproduction medium, wherein said reproduction medium may be a speaker or a headphone, most preferably a speaker.
- the audio signal conditioning circuit 117 is in indirect communication with a reproduction medium, for example, via an amplifier or an effects pedal.
- the reproduction means may be inside the musical instrument, for example, inside the body 101 or inside the neck 102, or outside the musical instrument 100, for example, an autonomous speaker, a computer, or any other means suitable for the reproduction of music, sounds, or sound effects.
- the neck 102 of the instrument is linked to the body 101 of the instrument 100 via a heel, wherein said heel generates a linkage between the bottom face of the base of the neck 102 and the region of the body 101 of the instrument 100 closest to said bottom face, wherein said bottom face is the face or region of the base that is diametrically opposite the top face of the base of the neck 102.
- the fretboard 107 comprises a circuit board 108 of the fretboard, wherein said circuit board 108 comprises a plurality of sensors 110 preferably distributed in at least one row 109 of integrated sensors 110.
- this integrated circuit board 108 is connected to the at least one capacitive sensing integrated circuit 113 via a wired connection.
- the fretboard 107 of the instrument 100 comprises a plurality of sensors 110.
- the plurality of sensors 110 of the fretboard 107 comprises capacitive sensors or pressure sensors, more preferably, the plurality of sensors 110 of the fretboard 107 comprises capacitive sensors.
- the plurality of sensors 110 comprises sensors protruding superficially from the fretboard 107, such that there is a height difference between the position of the plurality of sensors 110 and the fretboard 107 wherein they are located. In this way the plurality of sensors provides a tactile cue of their position to the performer. A person skilled in the art will know how to optimize the exact height at which these sensors 110 protrude in order to best achieve the intended functionality.
- the plurality of sensors 110 comprises metal sensors, preferably copper sensors, more preferably enameled copper sensors.
- the at least one row 109 of sensors 110 comprises capacitive sensors.
- each of the at least one row 109 of sensors 110 is arranged such that each row of sensors of the at least one row 109 of sensors 110 are aligned with each string of the at least one string 103 of the body 101 along a longitudinal axis.
- each row of the at least one row 109 of sensors 110 comprises the same number of sensors 110.
- each sensor 110 of the at least one row 109 of sensors 110 corresponds to the position of each of the frets of a stringed musical instrument or to the position on the fretboard of a fretless stringed instrument.
- the at least one row 109 of sensors 110 comprises capacitive sensors.
- the plurality of sensors 110 are connected to the at least one capacitive sensing integrated circuit 113, wherein said capacitive sensing integrated circuit 113 comprises an MPR121 integrated circuit.
- the plurality of sensors 110 of the fretboard 107 of the instrument 100 comprises 72 sensors.
- said 72 sensors are distributed in 6 sensor rows, wherein each sensor row comprises 12 sensors.
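A minimal sketch (hypothetical helper, assuming a row-major layout of the 72-sensor embodiment: 6 rows, one per string, of 12 sensors, one per fret position) of mapping a flat sensor index to a (string, fret) pair:

```python
def sensor_to_position(sensor_index: int,
                       n_strings: int = 6,
                       frets_per_string: int = 12) -> tuple:
    """Map a flat sensor index (0..71) to a (string, fret) pair,
    assuming sensors are laid out row by row, one row per string."""
    if not 0 <= sensor_index < n_strings * frets_per_string:
        raise ValueError("sensor index out of range")
    return divmod(sensor_index, frets_per_string)

# Under this assumed layout, sensor 13 sits at the second fret
# position of the second string (both zero-indexed).
```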
- each sensor corresponds to the position of each of the frets of a stringed musical instrument.
- the musical instrument 100 comprises at least one additional modulator 111 of the model.
- said additional modulator 111 is located on the neck 102 of the instrument 100, more preferably it is located at the base of the neck 102.
- the at least one additional modulator 111 of the model is integrated into an inertial measurement unit (IMU) and sends a continuous signal to the microcontroller 114 via an I2C or SPI connection.
- the at least one additional modulator 111 of the model is selected from an accelerometer, gyroscope, magnetometer, or a combination thereof.
- the at least one additional modulator 111 of the model transmits information that can be used to modify at least one variable or parameter of the at least one virtual string, of the at least one node of the n nodes, or of the at least one spring of the n-1 springs, for example: mass of the nodes, friction coefficients, position where the force is applied, tuning, output level, additional force, among others.
- the additional modulator 111 of the model is an accelerometer.
- the function of the accelerometer is to capture parameters such as vibrato or small gestures of the performer, which modify the sound of the string once the initial transient has passed, and introduce them to the model in the form of forces or modulation of model parameters.
- the musical instrument 100 of the invention comprises a power supply arranged in the inner volume of the body 101 , connected directly or indirectly to all parts of the instrument requiring electrical power.
- the power supply may be any power supply that a person skilled in the art would consider suitable to be able to power the instrument 100 of the present invention and to enable the methods of the present invention to be carried out.
- the instrument 100 comprises at least one microphone 106, at least one capacitive sensing integrated circuit 113, a plurality of sensors 110 of the fretboard 107, and at least one additional modulator 111 of the model, wherein at least one of these elements produces a plurality of signals received by the processor 115, and wherein said plurality of signals is used by the processor 115 to calculate the parameters of the mathematical model of the at least one virtual string.
- the musical instrument 100 comprises at least one string 103, at least one microphone 106, at least one capacitive sensing integrated circuit 113, at least one A/D converter 112, at least one spring 105, and at least one row 109 of sensors 110
- each of the at least one microphone, at least one integrated circuit, at least one A/D converter, the at least one spring, and the at least one row of sensors are arranged in such a way that a single microphone, integrated circuit, A/D converter, spring, and row of sensors are disposed with a single string of the at least one string 103 comprised by the instrument 100.
- when the instrument comprises 6 strings, it will also comprise 6 microphones, 6 integrated circuits, 6 A/D converters, 6 springs and 6 rows of sensors, such that each microphone, A/D converter, spring and row of sensors will be associated with a single string.
- generating a plurality of signals of step A) comprises generating null signals, non-null signals, and a combination of null and non-null signals, wherein a "null signal" is that signal which is filtered out as a result of the method for digitizing and processing of the present invention.
- the musical instrument 100 of the present invention produces only a plurality of null signals in the absence of actions of the performer thereon.
- the signal generation of step A) comprises generating at least one non-null signal, wherein said at least one non-null signal results from the action of a performer on the musical instrument 100 of the present invention.
- the action of the performer on the musical instrument 100 of the present invention comprises establishing direct contact between the dominant hand of the performer and the at least one string 103 of the instrument 100, or establishing indirect contact, e.g., via an instrumental performance element such as a pick or a bow, between the dominant hand of the performer and the at least one string 103 of the instrument 100.
- the action of the performer on the musical instrument 100 of the present invention comprises establishing direct contact between the non-dominant hand of the performer and at least one sensor 110 of the plurality of sensors 110 of the fretboard 107.
- the action of the performer on the musical instrument 100 of the present invention comprises establishing a direct or indirect contact between the dominant hand of the performer and the at least one string 103 of the instrument 100 and, simultaneously, establishing a contact between the non-dominant hand of the performer and at least one sensor 110 of the plurality of sensors 110 of the fretboard 107, wherein said at least one string 103 and said at least one sensor 110 are located on the same longitudinal axis.
- generating signals by the at least one string 103 of step A) may comprise generating sounds and movements, wherein said generating sounds and movements result from the action of a performer.
- sounds coming from the at least one string 103 are detected by the at least one microphone 106.
- movements of the at least one string 103 are detected by the capacitive sensing integrated circuit 113.
- the plurality of signals of step A) coming from the at least one string 103 and the plurality of signals coming from the plurality of sensors 110 of the fretboard 107 come from a string and a sensor that are aligned.
- the at least one string 103 is used as a capacitive sensor that sends signals to the at least one capacitive sensing integrated circuit 113.
- the performer's actions on the at least one string 103 produce at least one capacitance change signal associated with the input of the integrated circuit, which can be used to determine the contact between at least one part of the performer's body, for example, a hand or at least one finger, and the at least one string 103 of the instrument 100. It can also be used to detect an indirect contact, for example, via an instrumental performance element such as a pick or a bow, between the performer and the at least one string 103 of the instrument 100.
- the performer's actions upon at least one sensor 110 of the plurality of sensors 110 of the fretboard 107 produce at least one capacitance change signal associated with the input of the integrated circuit.
- the origin of the at least one capacitance change signal produced by at least one sensor 110 of the plurality of sensors 110 of the fretboard 107 is used to determine the position on the fretboard 107 of the contact between at least one part of the performer's body and at least one sensor 110 of the plurality of sensors 110.
- the intensity of the at least one capacitance change signal may be used to determine the pressure exerted on the at least one sensor 110 of the plurality of sensors 110 of the fretboard 107.
- since the digitization and processing method of the present invention makes it possible to incorporate pressure values into the modeling of the virtual string, it is possible to generate a spectrum of sounds, for example, a fully plucked string, harmonics, slurs between two notes, trills, quenching, etc.
- continuous pressure sensing is performed by using the printed circuit board 108 with capacitive sensors 110, which are read by at least one integrated circuit 113 that delivers a digital signal proportional to the change in capacitance added by finger contact with the surface. The more pressure is applied to the sensor 110, the greater the contact surface of the finger on the board 108, and a digital signal proportional to the pressure exerted can be obtained.
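The proportionality between the capacitance change and the derived pressure value can be sketched as follows (an illustration only; `baseline` stands for the sensor reading without contact and is an assumed name):

```python
def pressure_from_capacitance(raw: float, baseline: float,
                              scale: float = 1.0) -> float:
    """Digital pressure value proportional to the capacitance change
    added by finger contact; a larger contact surface yields a larger
    change, hence a larger pressure estimate."""
    return max(0.0, (raw - baseline) * scale)
```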
- the digitization of the plurality of signals coming from the at least one microphone 106 of step B) is performed by the at least one A/D converter 112, wherein said digitization is performed with a frequency of between 12 and 192 kHz, preferably selected from 24, 48, 96 or 192 kHz, more preferably with a frequency of 96 kHz.
- the threshold value used in step C) of the method can be determined by the performer before or during the execution of the actions. A person skilled in the art will be able to determine said threshold value according to the required needs that will allow the best functioning of the method of the present invention.
- this threshold value may be determined with reference to the noise of the input signal. More preferably, said threshold value will be 20% higher than the maximum value of the noise.
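The filtering of step C), with a threshold set 20% above the maximum noise value as described, could be sketched as follows (a minimal illustration, not the patented implementation):

```python
def filter_signals(samples, noise_max):
    """Treat samples at or below the threshold as null signals (zeroed);
    the threshold is set 20% above the maximum noise value."""
    threshold = 1.2 * noise_max
    return [s if abs(s) > threshold else 0.0 for s in samples]
```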
- the signal filtering of step C) is carried out by the control and processing unit 115 of the instrument.
- the signal processing step D) of the method comprises converting the signals coming from the at least one string 103 into an increase in the coefficient of friction, by means of the steps of a) sensing the change in capacitance produced by direct or indirect contact of the performer with the at least one string 103, b) obtaining a digital value through the corresponding capacitive sensing integrated circuit, wherein the sensed capacitance value is proportional to the contact exerted, so that greater contact yields a higher measured signal, c) using the values obtained in the preceding steps to determine the increase in the coefficient of friction of the virtual string.
- the signal processing step D) of the method comprises converting the signals coming from the at least one microphone 106 into a force, by the steps of a) differentiating the digital signal coming from the at least one A/D converter 112 with respect to time, b) using the value obtained from step a) as a force exerted on the at least one node of the at least one virtual string.
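The differentiation step above amounts to a finite-difference time derivative of the digitized microphone signal; a sketch under that reading (function name is an assumption):

```python
def force_from_microphone(samples, sample_rate_hz):
    """Finite-difference approximation of d(signal)/dt; each resulting
    value is used as a force exerted on a node of the virtual string."""
    dt = 1.0 / sample_rate_hz
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]
```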
- the signal processing step D) of the method comprises converting the signals coming from the at least one sensor 110 of the plurality of sensors 110 into a friction or motion restriction, by using the position of origin of the signal coming from the at least one sensor 110 of the fretboard 107 and the intensity of the capacitance change signal to determine the node where a force proportional to the measured signal is applied.
- said force is proportional to the measured capacitance and acts as a restriction to the movement of the node.
- the signal processing step D) of the method comprises converting the signals coming from the at least one additional modulator 111 of the model into a force or other modulation of the model, following the steps of a) obtaining the sensing values of the at least one additional modulator 111 through the microcontroller, b) using the values obtained from step a) to modify at least one variable or parameter of the at least one virtual string, of the at least one node of the n nodes, or of the at least one spring of the n-1 springs.
- the method for digitizing and processing signals of the present invention allows emulating the quenching of the at least one string 103.
- each string of the at least one string 103 is provided with a connection to another capacitive sensing integrated circuit thus using the metallic string as a capacitive sensor.
- continuous variation between full stopping of at least one string 103 and intermediate stops such as those achieved on a real stringed instrument is used.
- said method for emulating string quenching is used to emulate intermediate stopping or full stopping of at least one string 103. In one embodiment, said method for emulating string quenching is used to emulate a continuous variation between an intermediate stopping and a full stopping of at least one string 103.
- the parameters of the virtual string to be defined in step A) of the method comprise the mass of the nodes 201, the friction coefficients kF, the spring constants kR of the springs 202, the position of the frets, the spring equilibrium position, the length of the virtual string, the distance on the X-axis of the virtual string with respect to the fretboard, the node from which the sound is to be extracted, the node on which the force is to be applied, among others.
- each of the parameters are determined for each node of the n nodes 201 and for each spring of the (n-1) springs 202. Said parameters may be determined and modified before or during the execution of the instrument 100 of the present invention.
- the equilibrium position of any node of the n nodes 201 and any spring of the (n-1) springs 202 is the position taken by the n nodes 201 and the (n-1) springs 202 when the instrument 100 is not subjected to any action by the performer, or is the position taken by the n nodes 201 and the (n-1) springs 202 when the unit 115 receives only null signals.
- the virtual string variables to be defined in step A) comprise the displacement of each node of the n nodes 201 with respect to the equilibrium position, the speed of movement of each node of the n nodes 201, the position where force is applied on the at least one virtual string 200, the position of the performer's non-dominant hand on the circuit board 108, the pressure of the performer's non-dominant hand on the circuit board 108, the direct or indirect contact of the performer on the at least one string 103, and the movements and accelerations applied on the body 101 of the instrument 100 detected by means of the at least one additional modulator 111 of the model.
- the modeling of the at least one virtual string 200 of step A) comprises establishing a system in two dimensions X and Y, wherein the X axis is the axis perpendicular to the virtual string 200 and the Y axis is the axis longitudinal to the virtual string 200, and wherein the sound produced by said at least one virtual string 200 is determined based on the respective distance to the equilibrium position and velocity of the n nodes 201 comprising said at least one virtual string 200.
- the (n-1) springs 202 making up the at least one virtual string 200 follow the dynamics described by Hooke's Law, or corrected versions of Hooke's Law.
- the exact values of the virtual string parameters can be used by the performer to determine the pitch, timbre, tuning, type of instrument to be imitated by modeling the virtual string. Such parameters can result in sounds that are not obtainable with a real physical instrument, such as a string with infinite sustain, or a mixture of sounds.
- the application of the plurality of input signals of step B) of the method for reproducing sounds of the present invention corresponds to the application of a force, wherein said force is applied on at least one node 201, preferably on a group of between 1 and 20 nodes, preferably 10 nodes, and wherein the force is modulated by means of a Gaussian function according to:
- Fext,i = F0 · e^(-a · (i - Ncenter)²)
- Fext,i is the force on the i-th node
- F0 is the force coming from the signal of at least one microphone 106 after being differentiated
- a is the parameter that regulates the width of the Gaussian
- i is the index of the node
- Ncenter is the central position of application of the force.
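The Gaussian spreading of the force over a group of nodes can be sketched directly from the formula above (function name and values are illustrative):

```python
import math

def gaussian_forces(n_nodes, f0, a, n_center):
    """F_ext,i = F0 * exp(-a * (i - N_center)**2) for each node i."""
    return [f0 * math.exp(-a * (i - n_center) ** 2) for i in range(n_nodes)]

# The force peaks at the central node and decays symmetrically around it.
```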
- the plurality of input signals modulated by means of a Gaussian function as described above is the plurality of input signals resulting from the conversion of the signals coming from the at least one microphone 106.
- the sound reproduction method of the present invention comprises applying the following forces on each node of the n nodes that comprise the modeled virtual string:
- F1 = -kR · (D(i,i-1) - Deq), the force resulting from the interaction between node i and its neighbor (i-1), linked through a spring, where kR is the spring constant and D(i,i-1) is the distance between the i-th node and its neighbor (i-1);
- F2 = -kR · (D(i,i+1) - Deq), the force resulting from the interaction between node i and its neighbor (i+1), linked through a spring, where kR is the spring constant and D(i,i+1) is the distance between the i-th node and its neighbor (i+1);
- F4 = -C · Xn, the restriction to movement produced by the plurality of input signals resulting from the digitizing and processing of the signals coming from the plurality of sensors 110 of the fretboard 107, where C is a value proportional to the measured capacitance (pressure) and Xn is the position of the node on the X-axis. That is to say, the sum of forces on a node is the sum of the spring forces F1 and F2, the fretboard restriction F4, and the external force Fext.
- the at least one node 201 on which the force Fext is applied may be the same one that will produce a plurality of output signals, while the at least one node 201 on which the force F4 is applied will undergo a motion restriction that allows modulating the variable "string length". Therefore, in one embodiment of the invention the at least one node 201 to which the force Fext is applied does not receive the force F4 and vice versa.
- the string modeling comprises the use of the following equation to determine the forces F1 and F2 that a node i exerts on a neighboring node (i-1) (and its equivalent for (i+1)):
- Fx(i) = -kR · (D - Deq) · D(i,i-1) / D
- where D is calculated as:
- D = √( D(i,i-1)² + DY2 )
- where DY2 is a new parameter representing the squared distance between nodes on the Y-axis, and where:
- Fx(i) is the force that node i exerts on node j in the X axis
- kR is the spring constant
- D is the modulus of the distance between nodes
- Deq is the distance with respect to the spring equilibrium point
- D(i,i-1) is the distance between neighboring nodes in the X axis.
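Under the equations above, the X-axis spring force can be sketched as follows (function name and sign convention are assumptions; the force pulls node i back toward node j when the spring is stretched beyond its equilibrium length):

```python
import math

def spring_force_x(x_i, x_j, k_r, d_eq, dy2):
    """X-axis component of the Hooke force that the spring between
    nodes i and j exerts on node i:
        F = -kR * (D - Deq) * (x_i - x_j) / D,
    with D = sqrt((x_i - x_j)**2 + DY2)."""
    dx = x_i - x_j
    d = math.sqrt(dx * dx + dy2)
    return -k_r * (d - d_eq) * dx / d
```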
- the string modeling comprises calculating the positions in successive steps by means of Verlet's algorithm.
- producing output signals of step C) of the method comprises calculating the position of each node of the n nodes 201 with regard to the equilibrium position, the speed of movement of each node of the n nodes 201, the position where force is applied on the at least one virtual string 200, the position of the performer's non-dominant hand on the circuit board 108, and the pressure of the performer's non-dominant hand on the circuit board 108.
- it comprises calculating the position on the perpendicular axis (X-axis) of each node of the n nodes 201 relative to its equilibrium position and the velocity with which each node of the n nodes 201 moves.
- producing output signals of step C) comprises monitoring the position of one or a small group of nodes such as, for example, between 1 and 20 nodes, more preferably 10 nodes.
- the plurality of signals generated by the instrument 100 of the present invention is a result of an action of the performer, wherein said action comprises establishing direct or indirect contact between at least a part of the performer's body, preferably the dominant hand of the performer, and the at least one string 103 of the instrument 100 and, simultaneously, establishing contact between another part of the performer's body, preferably the non-dominant hand of the performer, and the plurality of sensors 110 of the fretboard 107.
- producing output signals comprises modulating the variables of at least one node of the n nodes 201 by applying at least one force, friction, parameter modulation, restriction, and/or a combination thereof.
- the audio signal is generated by monitoring the position and velocity of one or a small group of nodes such as, for example, between 1 and 20 nodes, more preferably 10 nodes.
- the software is based on the integration of Newton's equations of motion using a Verlet algorithm.
- the virtual system is composed of a series of nodes linked by quadratic potentials (Hooke's law) keeping the end positions fixed.
- the simulated system has only 2 dimensions, one longitudinal to the string (Y) and one perpendicular to the string (X); the sound is produced by recording the deflection along the perpendicular axis from the equilibrium position.
- a very important and original part is to link the virtual system with a perturbation generated in the real world. This is achieved by applying on at least one node 201 a force proportional to the information provided by a sensor.
- This sensor can be the microphone 106 coupled to an actual string 103, which reads the initial sound disturbance, or a touch sensor 110 of the fretboard on which the performer presses, whose capacitance is measured.
- Verlet's algorithm calculates the positions in successive steps and records the sound produced as the distance to equilibrium of the position of a chosen node or a small group of nodes, for example, between 1 and 20 nodes, more preferably 10 nodes.
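The loop described above (spring-linked nodes, fixed ends, an initial perturbation, Verlet integration, and recording the X deflection of a monitored node) can be sketched as follows. All numeric values and names are illustrative assumptions, not the patented parameters; the springs are pre-tensioned (equilibrium length shorter than the unit node spacing) so that transverse motion has a linear restoring force:

```python
import math

def simulate_string(n_nodes=32, steps=400, k_r=50.0, d_eq=0.8, dt=0.01,
                    pluck_node=8, monitor_node=12, f0=1.0):
    """Verlet integration of a chain of nodes linked by Hooke springs.

    Nodes move on the X axis only; the Y spacing between neighbors is
    fixed at 1. The end nodes stay fixed, an impulsive force 'plucks'
    one node at the first step, and the output is the X deflection of
    a monitored node from its equilibrium position (zero)."""
    x = [0.0] * n_nodes       # current X positions (equilibrium = 0)
    x_prev = list(x)          # previous positions, as Verlet requires
    output = []
    for step in range(steps):
        forces = [0.0] * n_nodes
        for i in range(n_nodes - 1):      # spring between nodes i, i+1
            dx = x[i] - x[i + 1]
            d = math.sqrt(dx * dx + 1.0)  # 2-D distance, unit Y spacing
            f = -k_r * (d - d_eq) * dx / d
            forces[i] += f
            forces[i + 1] -= f
        if step == 0:
            forces[pluck_node] += f0 / dt   # impulsive 'pluck'
        for i in range(1, n_nodes - 1):     # end nodes remain fixed
            x_new = 2.0 * x[i] - x_prev[i] + forces[i] * dt * dt
            x_prev[i], x[i] = x[i], x_new
        output.append(x[monitor_node])
    return output
```

The recorded deflection of the monitored node is the raw "audio" stream; in the instrument it would be produced at the audio sampling rate and sent to the DAC.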
- sending signals step D) of the method herein comprises sending the plurality of output signals to an analog audio output 118 or to a digital audio output 119, preferably an analog audio output 118.
- step D) comprises the substeps of i) converting the plurality of digital output signals to a plurality of analog output signals by using a digital-to-analog converter (DAC) 116, ii) conditioning the plurality of analog output signals resulting from step i) by means of an audio signal conditioning circuit 117.
- the step D) of the sound reproduction method of the present invention comprises a step of modifying the plurality of output signals prior to their reproduction, preferably, prior to their conversion to an analog signal.
- This modification step includes the application of digital effects, for example, distortions, echoes, reverberation chambers, among others.
- the sound reproduction method of the present invention can be used to reproduce at least one of the following effects: fully plucked string, slurs between two notes, harmonics, trills, quenching.
- the sound reproduction method of the present invention can be used to reproduce the sound of another musical instrument, such as, for example, guitars, basses, violins, violas, charangos, double basses, sitars, cuatros, ukuleles and any other string instrument.
- the instrument of the present invention comprises two parts, wherein the part of the instrument used to digitize the action of the deft hand of the performer is the body of the instrument, while the part of the instrument used to digitize the action of the non-deft hand of the performer is the neck of the instrument.
- the musical instrument of the present invention thus comprises a body and a neck, wherein the body of the instrument comprises
- a body comprising a container and a lid, thus defining an inner volume, wherein said body further comprises: six (6) strings located on the outer surface of the lid and extending along it, wherein said six (6) strings are metal electric guitar strings; two viscoelastic foam rubber media, wherein the first medium is located near one end of the strings and the second medium near the opposite end, and wherein both media are in contact with the strings; six (6) springs located on the outer surface of the lid, wherein each spring is linked to its corresponding string; and six (6) microphones located on the outer surface of the lid, each below its corresponding string and without contact with the string;
- capacitive sensing integrated circuits, located within the inner volume and connected to the six (6) strings by means of a wired connection; six (6) analog-to-digital (A/D) converters, located inside the inner volume, wherein said A/D converters are connected to the corresponding microphones by means of a wired connection; a microcontroller, located in the inner volume of the body and connected to the capacitive sensing integrated circuits via an I2C connection; an audio output located in the inner volume of the body, wherein said audio output is analog and comprises a digital-to-analog converter (DAC) and an audio signal conditioning circuit that communicates with the DAC; and a control and processing unit (CPU), located in the inner volume of the body, wherein said unit communicates with the six (6) A/D converters via an I2C or SPI connection, with the microcontroller via a USB connection, and with the audio output via the DAC; and wherein the neck of the instrument comprises
- a printed circuit board comprising seventy-two (72) capacitive sensors, wherein said seventy-two (72) sensors are arranged in six (6) sensor rows of twelve (12) sensors each, and wherein said circuit board is connected to the at least one capacitive sensing integrated circuit of the body via an I2C connection,
- an accelerometer, integrated into an inertial measurement unit (IMU) located in the fretboard of the neck and connected to the body's microcontroller via an I2C or SPI connection, wherein the six (6) microphones, the six (6) A/D converters, the six (6) springs and the six (6) sensor rows each correspond to one of the six (6) strings, in a one-to-one relationship.
- IMU inertial measurement unit
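The 6 × 12 sensor grid maps naturally onto string/fret positions. The sketch below assumes a row-major sensor layout and standard guitar tuning (E2 A2 D3 G3 B3 E4); neither assumption is stated in the patent:

```python
# open-string MIDI notes for standard tuning E2 A2 D3 G3 B3 E4 (assumed)
OPEN_STRINGS = [40, 45, 50, 55, 59, 64]

def sensor_to_note(sensor_index, sensors_per_row=12):
    """Map a flat capacitive-sensor index (0-71) to (string, fret,
    midi_note), with one 12-sensor row per string."""
    if not 0 <= sensor_index < len(OPEN_STRINGS) * sensors_per_row:
        raise ValueError("sensor index out of range")
    string = sensor_index // sensors_per_row
    fret = sensor_index % sensors_per_row + 1   # sensors sit under frets 1-12
    return string, fret, OPEN_STRINGS[string] + fret
```

The one-to-one correspondence between sensor rows and strings means a touched sensor identifies both which string is fretted and at which fret, which the synthesis stage can turn into a pitch.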
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Nonlinear Science (AREA)
- Power Engineering (AREA)
- Electrophonic Musical Instruments (AREA)
Abstract
The present invention relates to a musical instrument that reproduces with great precision the gestures and subtle interpretations of a performer. Further aspects of the present invention relate to a method for digitizing and processing signals by means of the musical instrument, and to a method for synthesizing sounds by means of the musical instrument for reproduction.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| ARP20230101813 | 2023-07-11 | ||
| ARP230101813A AR129892A1 (es) | 2023-07-11 | 2023-07-11 | Instrumento musical que digitaliza y procesa señales, y sintetiza sonidos y métodos relacionados |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025012857A1 true WO2025012857A1 (fr) | 2025-01-16 |
Family
ID=92302688
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2024/056771 Pending WO2025012857A1 (fr) | 2023-07-11 | 2024-07-11 | Instrument de musique qui numérise et traite des signaux, et synthétise des sons et procédés associés |
Country Status (2)
| Country | Link |
|---|---|
| AR (1) | AR129892A1 (fr) |
| WO (1) | WO2025012857A1 (fr) |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5587548A (en) * | 1993-07-13 | 1996-12-24 | The Board Of Trustees Of The Leland Stanford Junior University | Musical tone synthesis system having shortened excitation table |
| US6049034A (en) * | 1999-01-19 | 2000-04-11 | Interval Research Corporation | Music synthesis controller and method |
| US20080236374A1 (en) | 2007-03-30 | 2008-10-02 | Cypress Semiconductor Corporation | Instrument having capacitance sense inputs in lieu of string inputs |
| US20110239848A1 (en) | 2010-04-02 | 2011-10-06 | Idan Beck | Electronic musical instrument |
| US8093482B1 (en) | 2008-01-28 | 2012-01-10 | Cypress Semiconductor Corporation | Detection and processing of signals in stringed instruments |
| US20130180384A1 (en) * | 2012-01-17 | 2013-07-18 | Gavin Van Wagoner | Stringed instrument practice device and system |
| US20150262559A1 (en) * | 2014-03-17 | 2015-09-17 | Incident Technologies, Inc. | Musical input device and dynamic thresholding |
| US20180047373A1 (en) * | 2012-01-10 | 2018-02-15 | Artiphon, Inc. | Ergonomic electronic musical instrument with pseudo-strings |
| US20220208160A1 (en) | 2019-07-21 | 2022-06-30 | Jorge Marticorena | Integrated Musical Instrument Systems |
- 2023-07-11 AR ARP230101813A patent/AR129892A1/es unknown
- 2024-07-11 WO PCT/IB2024/056771 patent/WO2025012857A1/fr active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| AR129892A1 (es) | 2024-10-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US6066794A (en) | Gesture synthesizer for electronic sound device | |
| CN101473368B (zh) | 用于产生代表具有键盘和弦的乐器的声音的信号的设备 | |
| US7538268B2 (en) | Keys for musical instruments and musical methods | |
| USRE37654E1 (en) | Gesture synthesizer for electronic sound device | |
| CN105027192A (zh) | 增强数字音乐表现力的装置和方法 | |
| Goebl et al. | Sense in expressive music performance: Data acquisition, computational studies, and models | |
| US9082384B1 (en) | Musical instrument with keyboard and strummer | |
| Dimpker | Extended Notation: The depiction of the unconventional | |
| US20060243123A1 (en) | Player technique control system for a stringed instrument and method of playing the instrument | |
| US9812110B2 (en) | Digital musical instrument and method for making the same | |
| Traube et al. | Indirect acquisition of instrumental gesture based on signal, physical and perceptual information | |
| JP2022071098A5 (ja) | 電子機器、電子楽器、方法及びプログラム | |
| Serafin et al. | Gestural control of a real-time physical model of a bowed string instrument | |
| Chafe | Simulating performance on a bowed instrument | |
| EP2073194A1 (fr) | Instrument de musique électronique | |
| WO2025012857A1 (fr) | Instrument de musique qui numérise et traite des signaux, et synthétise des sons et procédés associés | |
| Overholt | Advancements in violin-related human-computer interaction | |
| Arencibia | Discrepancies in pianists’ experiences in playing acoustic and digital pianos | |
| Nichols II | The vbow: An expressive musical controller haptic human-computer interface | |
| Champagne et al. | Investigation of a novel shape sensor for musical expression | |
| Taki | A General Method for the Creation of New Electronic Musical Instruments and A New Electronic Wind Instrument | |
| Yoo et al. | ZETA violin techniques: Limitations and applications | |
| WO2024251999A1 (fr) | Suivi des mains d'un joueur d'instrument de musique | |
| Braasch | Expanding the saxophone with different tone generators and a foot controller for complementary voices | |
| Reddy et al. | Implementation of automatic piano player using Matlab graphical user Interface |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24755056; Country of ref document: EP; Kind code of ref document: A1 |