
WO2008001766A1 - Music game device - Google Patents

Music game device

Info

Publication number
WO2008001766A1
WO2008001766A1 (PCT/JP2007/062794)
Authority
WO
WIPO (PCT)
Prior art keywords
signal
music
value
unit
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2007/062794
Other languages
French (fr)
Japanese (ja)
Inventor
Tetsuro Itami
Yukie Yamazaki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Konami Digital Entertainment Co Ltd
Original Assignee
Konami Digital Entertainment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konami Digital Entertainment Co Ltd filed Critical Konami Digital Entertainment Co Ltd
Publication of WO2008001766A1 publication Critical patent/WO2008001766A1/en
Anticipated expiration legal-status Critical
Current legal status: Ceased


Classifications

    • A63F13/10
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/424 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45 Controlling the progress of the video game
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/814 Musical performances, e.g. by evaluating the player's ability to follow a notation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H3/00 Instruments in which the tones are generated by electromechanical means
    • G10H3/12 Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument
    • G10H3/125 Extracting or recognising the pitch or fundamental frequency of the picked up signal
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1081 Input via voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6063 Methods for processing data by generating or executing the game program for sound processing
    • A63F2300/6072 Methods for processing data by generating or executing the game program for sound processing of an input signal, e.g. pitch and rhythm extraction, voice recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/066 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for pitch analysis as part of wider processing for musical purposes, e.g. transcription, musical performance evaluation; Pitch recognition, e.g. in polyphonic sounds; Estimation or use of missing fundamental
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00 Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/135 Musical aspects of games or videogames; Musical instrument-shaped game input interfaces

Definitions

  • the present invention relates to a music game machine having a function of capturing a user's voice and reflecting it in game content.
  • Patent Document 1 Japanese Unexamined Patent Publication No. 2001-29649
  • A music game machine has been considered that takes in a music playback signal output from a music playback device, determines a characteristic of the music to be played back by the music playback signal, for example its genre, and reflects the determination result in the game content.
  • If a dedicated processing unit for analyzing an audio signal is added alongside the components for determining the characteristics of the music, a functionally redundant portion is created and waste may occur.
  • Accordingly, an object of the present invention is to provide a music game machine capable of reflecting both the features of the music and the user's voice in the game content without such waste.
  • To achieve this, the music game machine of the present invention comprises: a signal processing unit that outputs a signal corresponding to at least one of a differential value and an integral value related to a specific frequency component of an input signal; a music playback signal input unit that takes in a music playback signal output from a music playback device and inputs it to the signal processing unit; a voice input unit that takes in a user's voice and converts it into a voice signal; a discriminating unit that determines the characteristics of the music to be played back by the music playback signal and classifies the audio signal based on the signal output from the signal processing unit in response to the input of the audio signal; and a game control unit that reflects each of the characteristics of the music discriminated by the discriminating unit and the classification result of the audio signal in the game content.
  • In this configuration, the signal processing unit outputs a signal corresponding to at least one of a differential value and an integral value related to a specific frequency component in response to the input of either the music playback signal from the music playback device or the audio signal derived from the user.
  • the feature of the music is discriminated by the discriminating unit, or the audio signal is classified, and each of the feature and the classification result is reflected in the game content by the game control unit.
  • Features such as music genre and tempo correlate with the differential or integral value of a specific frequency component of the music playback signal; using this correlation, music features can be discriminated based on the signals output from the signal processing unit in response to the input of the music playback signal.
  • audio signals are also common to music playback signals in that they are sound waveforms.
  • The differential or integral value of a specific frequency component of the speech signal shows changes that correlate with various factors of the utterance content, that is, tone, strength, inflection, and wording. By focusing on these changes, speech signals can be classified to some extent according to the utterance content even without performing advanced speech analysis such as speech recognition.
  • the signal processing unit is shared, and in some cases, part of the processing function of the determination unit is also shared, so that more game functions can be provided to the user at a low cost.
  • In one aspect of the invention, the signal processing unit outputs signals corresponding to both a differential value and an integral value of the specific frequency component, and the determination unit includes: a data generation unit that takes in each signal output from the signal processing unit in a predetermined sampling unit time, determines whether the value of each signal exceeds a predetermined level within that sampling unit time, and generates analysis data by counting, for each predetermined sampling period and for each signal, the number of times a value exceeding the predetermined level was detected; and a data analysis unit that determines the characteristics of the music and classifies the audio signals based on the total values described in the analysis data.
  • the degree of variation of the differential value and the integral value output by the signal processing unit is reflected in the total value of the analysis data created by the determination unit.
  • These variations correlate with characteristics such as music genre and tempo, or with changes in the content of the user's utterances, so the aggregated values can be used to discriminate music characteristics or to classify the user's voice.
  • The aggregate value for the differential value increases when the signal strength changes frequently, whereas the aggregate value for the integral value increases when the overall signal level is high.
  • In one embodiment, the signal processing unit outputs a signal corresponding to at least one of a differential value and an integral value of the low-frequency component of the input signal, and a signal corresponding to at least one of a differential value and an integral value of the high-frequency component of the input signal. The determination unit may include a data generation unit that captures each of the signals output from the signal processing unit in a predetermined sampling unit time, determines whether the value of each signal exceeds a predetermined level within that sampling unit time, and generates analysis data that aggregates, for each predetermined sampling period and for each signal, the number of times a value exceeding the predetermined level was detected, and a data analysis unit that discriminates the characteristics of the music and classifies the audio signal based on the total values described in the analysis data.
  • the feature of music can be discriminated by using the total value described in the analysis data, or the user's voice can be classified.
  • Since a signal corresponding to at least one of the differential and integral values of the low-frequency component and a signal corresponding to at least one of the differential and integral values of the high-frequency component are each output from the signal processing unit, the tendency of change of each of the low-frequency and high-frequency components can be grasped from the total values, and the music features can be discriminated more finely.
  • By also using the data generation unit's function of generating the analysis data for the classification of the audio signal, the additional burden required to provide the classification function can be further reduced.
  • The data analysis unit may discriminate the characteristics of the music by calculating an average value of each of the aggregate values and evaluating the variation of the obtained average values, and may classify the speech signal based on the magnitude relation of the average values.
  • the genre of music can be discriminated using the correlation existing between the genre of music and the variation of the differential value or integral value of the specific frequency component of the music playback signal.
  • Since the average values of the aggregated values are used in the same way as for music feature discrimination, the calculation function for the aggregate values can be shared, further reducing the additional burden required to provide the voice classification function.
  • The data analysis unit may classify the audio signal according to whether the magnitude relationship of the average values corresponds to any of a plurality of predetermined patterns (patterns A to E). According to this embodiment, the magnitude relation of the obtained average values can be classified easily by setting the plurality of patterns in advance.
  • The data analysis unit may further refer to a branch reference value, obtained by dividing the difference between two of the average values by one of the average values, to determine which of the plurality of patterns the magnitude relationship of the average values corresponds to. By setting the branch reference value, the number of patterns can be increased so that the audio signal can be classified more finely.
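As an illustration of this classification step, the following Python sketch maps a magnitude relation of three averages to a pattern and computes a branch reference value. The ordering rules are entirely hypothetical, since the actual definitions of patterns A to E are not disclosed here, and the operands of the branch reference value are likewise an assumption.

```python
def classify_pattern(m0, m1, m2):
    # Map the magnitude relation of the channel averages M0-M2 to one of
    # five patterns. The relations behind patterns A-E are not given in
    # the text, so this ordering is purely illustrative.
    if m0 >= m1 >= m2:
        return "A"
    if m1 >= m0 >= m2:
        return "B"
    if m1 >= m2 >= m0:
        return "C"
    if m2 >= m1 >= m0:
        return "D"
    return "E"


def branch_reference(m_a, m_b):
    # Branch reference value: the difference between two averages divided
    # by one of them (the exact operands are an assumption here).
    return (m_a - m_b) / m_a if m_a else 0.0
```

A finer classifier could first pick a pattern by ordering and then split that pattern further by thresholding the branch reference value, which is how the text suggests the number of patterns can be increased.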
  • The determination unit may determine a character to appear on the game screen based on the classification result of the audio signal obtained by the data analysis unit, and the game control unit may reflect the classification result in the game content by causing the determined character to appear on the game screen.
  • As described above, according to the present invention, both the music playback signal output from the music playback device and the audio signal output from the audio input unit are input to the signal processing unit, which outputs a signal corresponding to at least one of a differential value and an integral value related to a specific frequency component of those signals. Music features can therefore be discriminated using their correlation with the differential or integral value of the specific frequency component of the music playback signal, while audio signals can be classified appropriately by focusing on changes in the differential or integral value corresponding to the user's utterance content. As a result, at least the signal processing unit, and in some cases part of the processing function of the determination unit, is shared, so that more game functions can be provided to the user at low cost.
  • FIG. 1 is a diagram showing a state in which a portable game machine of the present invention is interposed between a portable music player and an earphone.
  • FIG. 2 is a block diagram of the part of the control system of the game machine of FIG. 1 related to the discrimination of music.
  • FIG. 3 is a functional block diagram of the control unit of FIG. 2.
  • FIG. 4 is a diagram showing the relationship between a music playback signal and the sampling period.
  • FIG. 5 is a diagram showing an example of the relationship between the waveform of the integral value and the sampling unit time within the sampling period.
  • FIG. 6 is a diagram showing the contents of the analysis data.
  • FIG. 7 is a diagram showing the contents of the calculation result identification data.
  • FIG. 8 is a diagram showing the contents of the discrimination reference data.
  • FIG. 9 is a diagram showing the contents of the history data.
  • FIG. 10 is a flowchart showing the analysis data generation process executed by the control unit.
  • FIG. 11 is a flowchart showing the data analysis process executed by the control unit.
  • FIG. 12 is a diagram showing the patterns for classifying audio signals.
  • FIG. 13 is a diagram showing the contents of the retained character data.
  • FIG. 14 is a flowchart showing the data analysis processing routine executed by the control unit in the microphone mode.
  • FIG. 1 shows a portable game machine according to one embodiment of the present invention.
  • The game machine 1, as a music game machine, is used in combination with a portable music player 100 and includes a casing 2 and an LCD 3 as a display device attached to the front surface of the casing 2.
  • the casing 2 is provided with a line input terminal 4, a phone terminal 5, and a microphone 6.
  • the line input terminal 4 is connected to the line output terminal 101 of the portable music player 100 via the relay cable 102.
  • the phone terminal 5 is connected to the earphone 103. That is, the game machine 1 of this embodiment is used by being interposed between the portable music player 100 and the audio output device to be combined therewith.
  • the audio output device combined with the portable music player 100 is not limited to the earphone 103.
  • The portable music player 100 is not otherwise limited in its details as long as it can output a music reproduction signal for conversion to audio by various audio output devices such as speakers and headphones.
  • The music player is not limited to a portable type, and includes various devices that output music, such as home audio equipment, televisions, personal computers, and commercially available portable electronic game machines.
  • the microphone 6 takes in the user's voice and converts it into a voice signal. In addition to the user's voice, the microphone 6 can be used to input ambient sounds, voice or music from electronic devices, etc.
  • The game machine 1 functions both as a relay that passes the music playback signal output from the portable music player 100 to the earphone 103, and as a game machine that analyzes the music playback signal output from the portable music player 100 and provides the user with a game according to the analysis result.
  • FIG. 2 is a block diagram showing a configuration of a part related to the function of taking in and analyzing music reproduction signals and audio signals in the control system provided in the game machine 1.
  • The game machine 1 includes the line input terminal 4 as a music playback signal input unit, the microphone 6 as a voice input unit, a bypass path R1 for passing the analog music playback signal to the phone terminal 5, a signal processing unit 10 for processing the music playback signal and the audio signal captured from the line input terminal 4 or the microphone 6 via a branch path R2, and a control unit 11 for capturing the output signal of the signal processing unit 10. Although the paths R1 and R2 are each composed of multiple lines, such as the right channel and the left channel, each is represented by a single line in the figure.
  • The line input terminal 4 and the microphone 6 are selected by a switching switch (not shown), which switches the input signal between the music reproduction signal and the audio signal. For example, when the plug of the relay cable 102 is inserted into the line input terminal 4, the switching switch connects the line input terminal 4 to the branch paths R1 and R2; when no plug is inserted, it connects the microphone 6 to the branch paths R1 and R2.
  • The signal processing unit 10 includes: a pair of low-pass filters (LPF) 12A and 12B that pass only the low-frequency component of the music reproduction signal captured from the line input terminal 4 or the audio signal converted by the microphone 6; a high-pass filter (HPF) 13A that passes only the high-frequency component; a differentiation circuit 14 that differentiates the output signal of LPF 12A; an integration circuit 15 that integrates the output signal of LPF 12B; a differentiation circuit 16 that differentiates the output signal of HPF 13A; and A/D converters 17A to 17C that convert the output signals of the circuits 14 to 16 into digital signals and output them to the control unit 11.
  • The frequency range that LPFs 12A and 12B pass is set, for example, to 1000 Hz or lower, and the frequency range that HPF 13A passes is set, for example, to 1000 Hz or higher.
  • the set value of the frequency range is not limited to these examples.
  • For example, the frequency range that LPFs 12A and 12B pass can be set to 500 Hz or lower while the frequency range that HPF 13A passes is set to 1000 Hz or higher.
  • The frequency ranges that LPFs 12A and 12B pass may be set equal to or different from each other. If the two pass frequency ranges coincide, a single LPF may be provided instead of the LPFs 12A and 12B, with its output signal branched to the differentiation circuit 14 and the integration circuit 15.
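A rough digital analogue of this filter-and-derivative chain can be sketched as follows. This is a minimal sketch, assuming a first-order IIR low-pass filter and treating the high-frequency component as the residual after low-pass filtering; the coefficient value and function names are illustrative, not from the patent.

```python
def low_pass(x, alpha=0.1):
    # First-order IIR low-pass filter, a stand-in for LPF 12A/12B.
    y = [0.0] * len(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y


def process(signal):
    low = low_pass(signal)                            # low-frequency component
    high = [s - l for s, l in zip(signal, low)]       # crude stand-in for HPF 13A
    d_low = [b - a for a, b in zip(low, low[1:])]     # differentiation circuit 14
    i_low = []                                        # integration circuit 15
    acc = 0.0
    for v in low:
        acc += v
        i_low.append(acc)
    d_high = [b - a for a, b in zip(high, high[1:])]  # differentiation circuit 16
    return d_low, i_low, d_high
```

In the actual device these operations are analog circuits whose outputs are digitized by the A/D converters 17A to 17C; the sketch only mirrors the signal flow, not the circuit behavior.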
  • the control unit 11 is configured as a computer unit that combines a microprocessing unit (MPU) and peripheral devices necessary for the operation of the MPU, for example, storage devices such as RAM and ROM.
  • the control unit 11 is connected to the LCD 3 described above as a control target, and is connected to an input device 20 for giving game instructions and the like, and a speaker unit (SP) 21 for generating sound, sound effects, and the like.
  • the phone terminal 5 is also connected to the connection path to the speaker unit 21.
  • the control unit 11 provides various game functions to the user by executing processing such as displaying a game screen on the LCD 3.
  • FIG. 3 is a functional block diagram of the control unit 11.
  • When the MPU (not shown) of the control unit 11 reads a predetermined control program from the storage device 25 and executes it, a data generation unit 30 and a data analysis unit 31, which together serve as the determination unit, and a game control unit 32 are generated as logical devices inside the control unit 11.
  • the data generation unit 30 processes the output signal of the signal processing unit 10 to generate analysis data D1, and stores this in the storage device 25.
  • the data analysis unit 31 reads the analysis data D1, refers to the calculation result identification data D2, determines the genre of music by a predetermined method, and updates the history data D4 according to the determination result.
  • The discrimination reference data D3 recorded in the storage device 25 is referred to for the genre discrimination.
  • the game control unit 32 executes the game according to a predetermined game program (not shown) while referring to the history data D4.
  • The control unit 11 also has a function of analyzing the output signal to classify the voice and, based on the classification result, causing a character to appear on the game screen of the LCD 3.
  • For this purpose, the control unit 11 generates the analysis data D1 by processing the output signal of the signal processing unit 10 in the data generation unit 30 in the same procedure as for the genre discrimination function, and stores it in the storage device 25.
  • the data analysis unit 31 reads the analysis data D1, classifies the voice signals by a predetermined method, and updates the retained character data D5 according to the classification result.
  • the game control unit 32 executes the game with reference to the retained character data D5 in addition to the history data D4 described above.
  • FIG. 4 is an example of a waveform of a music playback signal input from the line input terminal 4 to the signal processing unit 10.
  • In the signal processing unit 10, the low-frequency components of the music playback signal are extracted by the LPFs 12A and 12B, and the high-frequency component is extracted by the HPF 13A.
  • the differential value of the extracted low-frequency component is output from the differentiation circuit 14, and the integrated value of the low-frequency component is output from the integration circuit 15.
  • the differential value of the high frequency component is output from the differentiation circuit 16.
  • the output differential value and integral value are converted into digital signals by the A / D converters 17A to 17C and input to the data generation unit 30 of the control unit 11.
  • As reference times for processing the differential and integral values output from the signal processing unit 10, the data generation unit 30 uses two time lengths: the sampling period Tm shown in FIG. 4, and the sampling unit time Tn shown in FIG. 5, which illustrates an example of the output waveform of the integration circuit 15. The sampling period Tm is an integer multiple of the sampling unit time Tn; as an example, the sampling period Tm is set to 5 seconds and the sampling unit time Tn to 20 milliseconds.
  • The differential value and the integral value are each captured in units of the sampling unit time Tn, and it is determined whether each exceeds a predetermined level within that sampling unit time Tn. The analysis data D1 is then generated by counting, for each sampling period Tm and for each of the differential and integral values, the number of times a value exceeding the predetermined level was detected. For example, if the integral value of the low-frequency component in one sampling period Tm set in FIG. 4 fluctuates as shown in FIG. 5, the data generation unit 30 monitors whether the integral value exceeds a predetermined threshold TH within each sampling unit time Tn. When the integral value exceeds the threshold TH, it is determined that the integral value has exceeded the predetermined level; however, the count is incremented by at most 1 per sampling unit time Tn, no matter how many times the threshold is crossed within it. This determination is repeated every sampling unit time Tn within the sampling period Tm, and when the sampling period Tm has elapsed, the number of such determinations is totaled. If the sampling period Tm is 5 seconds and the sampling unit time Tn is 20 milliseconds, the minimum count in one period Tm is 0 and the maximum is 250.
  • the data generation unit 30 of the control unit 11 individually executes the above-described processing for each of the differential value and the integral value, and sequentially collects the measured number of times for each sampling period Tm.
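In code, the per-period counting described above might look like this. This is a sketch; the function and variable names are illustrative, and the digitized samples are assumed to arrive as a plain list.

```python
def count_exceedances(period_samples, threshold, unit_len):
    # period_samples: digitized values of one channel over one sampling
    # period Tm; unit_len: number of samples per sampling unit time Tn.
    # Each unit time contributes at most 1 to the count, even if the
    # threshold is crossed several times within it.
    count = 0
    for start in range(0, len(period_samples), unit_len):
        unit = period_samples[start:start + unit_len]
        if any(v > threshold for v in unit):
            count += 1
    return count
```

With Tm = 5 s and Tn = 20 ms, one period contains 250 unit times, so the count ranges from 0 to 250, matching the text.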
  • the analysis data D1 is generated as shown in Fig. 6.
  • Channel ch0 corresponds to the output from the differentiation circuit 14, channel ch1 to the output from the integration circuit 15, and channel ch2 to the output from the differentiation circuit 16.
  • Sample numbers smp1 to smpN correspond to cycle numbers counted from the start point of the music playback signal; here it is assumed that the music playback signal spans N cycles in total.
  • The total value sum0X of channel ch0 in sample number smpX (where X is 1 to N) indicates the number of times the differential value of the low-frequency component was determined to exceed the predetermined level TH in the X-th sampling period TmX from the start of processing. For example, sum01 corresponds to the number of times the differential value of the low-frequency component was determined to exceed the threshold TH in the first sampling period. The same applies to channels ch1 and ch2.
  • The data analysis unit 31 of the control unit 11 calculates, from the total values described in the analysis data D1, the average values M0 to M2 for each channel, that is, for each of the differential and integral values of the low-frequency component and the differential value of the high-frequency component, as well as the coefficients of variation CV0 and CV2 for channels ch0 and ch2.
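The averages and coefficients of variation can be computed straightforwardly from one channel's totals. A sketch follows; the coefficient-of-variation formula (standard deviation divided by the mean) is the usual definition and is an assumption here, since the text does not state one.

```python
import statistics


def channel_statistics(totals):
    # totals: the aggregate counts sum_X for one channel over smp1..smpN.
    mean = statistics.fmean(totals)
    # Coefficient of variation: population standard deviation normalised
    # by the mean (assumed formula; guard against a zero mean).
    cv = statistics.pstdev(totals) / mean if mean else 0.0
    return mean, cv
```

Running this once per channel yields M0 to M2, and CV0 and CV2 for the two channels where variation is evaluated.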
  • The data analysis unit 31 then refers to the calculation result identification data D2 and obtains the identification values dM0, dM1, dM2, dCV0, and dCV2 corresponding to the average values M0, M1, M2 and the coefficients of variation CV0, CV2, respectively.
  • The calculation result identification data D2 is a set of tables in which the average values M0, M1, M2 and the coefficients of variation CV0, CV2 are associated with the identification values dM0, dM1, dM2, dCV0, dCV2.
  • The identification value is a value that represents each category when the range that the average value or coefficient of variation can take is divided into a predetermined number of steps. For example, in the table for the average value M0, as shown in FIG. 7, the range of values 0 to 250 that the average value M0 can take is divided into four categories by three threshold values a, b, and c (where a < b < c), and each category is represented by an identification value 0 to 3.
  • The data analysis unit 31 refers to the table in FIG. 7 to obtain one of the values 0 to 3 corresponding to the average value M0 as the identification value dM0.
  • Similar tables, though not shown, are prepared for the average values M1 and M2 and the coefficients of variation CV0 and CV2.
  • The data analysis unit 31 acquires the identification values dM1, dM2, dCV0, and dCV2 corresponding to the average values M1 and M2 and the coefficients of variation CV0 and CV2 by the same procedure.
  • The identification values dM1 and dM2 corresponding to the average values M1 and M2 are each divided into three levels 0 to 2, and the identification values dCV0 and dCV2 corresponding to the coefficients of variation CV0 and CV2 are each divided into two levels 0 and 1. However, the number of division stages of each identification value may be changed as appropriate.
  • The data analysis unit 31 arranges the acquired identification values dM0 to dM2 and dCV0 and dCV2 in the order dM0, dM1, dM2, dCV0, dCV2, thereby acquiring a numerical value characterizing the waveform of the music playback signal as a judgment value. For example, if the identification value dM0 is 1, dM1 is 0, dM2 is 0, dCV0 is 0, and dCV2 is 1, then 10001 is obtained as the judgment value. In this example, 144 distinct judgment values are possible. Note that the arrangement order of the identification values dM0 to dM2 and dCV0 and dCV2 for obtaining the judgment value is not limited to this embodiment and may be specified arbitrarily.
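The composition of the judgment value can be sketched as a digit concatenation. With 4, 3, 3, 2, and 2 levels per digit there are 4 × 3 × 3 × 2 × 2 = 144 possible values, matching the count stated above:

```python
from itertools import product

def judgment_value(dM0, dM1, dM2, dCV0, dCV2):
    """Arrange the identification values as a five-digit string
    in the fixed order dM0, dM1, dM2, dCV0, dCV2."""
    return f"{dM0}{dM1}{dM2}{dCV0}{dCV2}"

# The worked example from the text: dM0=1, dM1=0, dM2=0, dCV0=0, dCV2=1.
print(judgment_value(1, 0, 0, 0, 1))  # "10001"

# Levels per digit: dM0 has 4, dM1/dM2 have 3 each, dCV0/dCV2 have 2 each.
combos = list(product(range(4), range(3), range(3), range(2), range(2)))
print(len(combos))  # 144
```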
  • Based on the five-digit judgment value described above, the data analysis unit 31 determines the genre of the music to be reproduced by the music playback signal, referring to the discrimination reference data D3.
  • The genre is a concept used to distinguish the content of music, such as classical, rock, ballad, and jazz.
  • The data analysis unit 31 refers to the discrimination reference data D3 and determines the genre that matches the obtained judgment value as the genre corresponding to the music playback signal. For example, when the judgment value is 10001, genre A is determined as the genre corresponding to the music playback signal, as illustrated in FIG. 8.
  • The data analysis unit 31 updates the history data D4 according to the determination result. For example, as shown in FIG. 9, the history data D4 is described by associating the genres A to X with their respective counts Na to Nx, and the data analysis unit 31 updates the history data D4 by adding 1 to the count of the determined genre.
  • Alternatively, a specific number of entries may be provided in advance for the history data D4, and the determined genre may be described in the history data D4 each time a determination result is output. In this case, when the number of entries exceeds the specific number, the oldest entry is deleted, and the history data D4 is updated so that the latest discrimination result is described.
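The two history-keeping schemes described above (per-genre counters, or a fixed-length list of the most recent results where the oldest entry is dropped) can be sketched as follows; the genre names and the list length are arbitrary for illustration:

```python
from collections import deque

# Scheme 1: per-genre discrimination counts.
counts = {"A": 0, "B": 0, "C": 0}

def record_count(genre):
    """Add 1 to the count of the determined genre."""
    counts[genre] += 1

# Scheme 2: keep only the most recent N results; deque(maxlen=N)
# discards the oldest entry automatically once N is exceeded.
history = deque(maxlen=5)

def record_recent(genre):
    """Append the latest discrimination result, dropping the oldest."""
    history.append(genre)

record_count("A")
for g in ["A", "B", "A", "C", "B", "A"]:
    record_recent(g)
print(counts["A"])    # 1
print(list(history))  # ['B', 'A', 'C', 'B', 'A']
```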
  • FIG. 10 shows an analysis data generation processing routine executed by the control unit 11 (data generation unit 30) to generate the analysis data D1.
  • This routine is executed on condition that, for example, the differential value and the integral value are output from the signal processing unit 10 in a state where the user instructs genre discrimination from the input device 20 (see FIG. 2).
  • the differential value and the integral value output from the signal processing unit 10 are sequentially accumulated in the internal buffer of the control unit 11 and subjected to processing by this routine.
  • In step S1, the control unit 11 sets the variable n that specifies the number of the channel ch to be processed to the initial value 0, and then, in step S2, captures the output signal (differential value or integral value) for one sampling unit time of channel chn.
  • In step S3, the control unit 11 determines whether or not the captured output signal exceeds a predetermined level. If it exceeds the predetermined level, the control unit 11 proceeds to step S4, adds 1 to the internal counter for channel chn, and then proceeds to step S5. On the other hand, if the predetermined level is not exceeded in step S3, the control unit 11 skips step S4 and proceeds to step S5.
  • In step S5, the control unit 11 determines whether or not the variable n is set to 2.
  • If it is not 2, the control unit 11 adds 1 to the variable n in step S6 and returns to step S2. On the other hand, if the variable n is 2 in step S5, the control unit 11 proceeds to step S7.
  • By this point, the outputs of each of the three channels ch0 to ch2, that is, the differentiation circuit 14 and the integration circuit 15 for the low frequency component and the differentiation circuit 16 for the high frequency component, have been inspected across the board for one sampling unit time.
  • In step S7, the control unit 11 determines whether or not the processing for the sampling period Tm has been completed. For example, if the number of times that step S5 has been answered affirmatively is equal to the value obtained by dividing the sampling period Tm by the sampling unit time Tn, it may be determined that the processing for the sampling period Tm has ended. If the determination in step S7 is negative, the control unit 11 returns to step S1 and proceeds to process the signal of the next sampling unit time stored in the internal buffer. On the other hand, if step S7 is answered affirmatively, the control unit 11 proceeds to step S8 and records the values held in the internal counters in the analysis data D1 of the storage device 25 as the total values sum0X, sum1X, sum2X (see FIG. 6) associated with the sample number smpX corresponding to the current sampling period. If the analysis data D1 does not yet exist, new analysis data D1 is created and the total values are recorded in association with the first sample number smp1.
  • The control unit 11 then resets the values of the internal counters to the initial value 0, and in the next step S10 determines whether or not the generation processing of the analysis data D1 is complete. For example, it can be determined that the processing is complete when a so-called silent state, in which the outputs of all channels ch0 to ch2 remain near 0, continues for a predetermined time or longer. If the processing is not complete, the control unit 11 returns to step S1. If it is determined that the processing has ended, the control unit 11 ends the analysis data generation processing routine. Through the above processing, analysis data D1 as shown in FIG. 6 is generated.
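The counting loop of FIG. 10 can be sketched as follows. The channel layout (ch0: low-frequency differential, ch1: low-frequency integral, ch2: high-frequency differential), the numeric sample values, and the level are illustrative assumptions:

```python
def generate_analysis_data(samples, level, units_per_period):
    """Total, per channel and per sampling period, the number of
    sampling unit times whose output exceeds the given level.

    `samples` is a list of (ch0, ch1, ch2) tuples, one per sampling
    unit time; `units_per_period` corresponds to Tm divided by Tn.
    """
    analysis = []          # one [sum0, sum1, sum2] row per sampling period
    counters = [0, 0, 0]   # one internal counter per channel
    for i, outputs in enumerate(samples, start=1):
        for ch, value in enumerate(outputs):
            if value > level:          # step S3/S4: count level crossings
                counters[ch] += 1
        if i % units_per_period == 0:  # step S7/S8: end of sampling period Tm
            analysis.append(counters)
            counters = [0, 0, 0]       # step S9: reset the counters
    return analysis

samples = [(5, 1, 0), (7, 4, 9), (2, 8, 6), (9, 9, 1)]
print(generate_analysis_data(samples, level=3, units_per_period=2))
# [[2, 1, 1], [1, 2, 1]]
```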
  • FIG. 11 shows a data analysis processing routine executed by the control unit 11 (data analysis unit 31) in order to discriminate the music genre from the analysis data D1.
  • This routine is executed after the end of the analysis data generation processing routine of FIG.
  • The control unit 11 sets the variable n for designating the number of the channel ch subject to data processing to the initial value 0 in the first step S21, and in the subsequent step S22
  • reads the total values of the channel number chn corresponding to the variable n from the analysis data D1 recorded in the storage device 25, and calculates their average value and, for the total values of the differential values of the low frequency component and the high frequency component, the coefficient of variation.
  • In step S23, the control unit 11 determines whether or not the variable n is set to 2. If it is not 2, 1 is added to the variable n in step S24 and the process returns to step S22. On the other hand, if the variable n is 2 in step S23, the control unit 11 proceeds to step S25. By repeating the processing of steps S22 to S24, the average values M0 to M2 of the three channels ch0 to ch2 and the coefficients of variation CV0 and CV2 for the total values of the differential values of the low and high frequency components are calculated.
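The per-channel statistics of steps S22 to S24 can be sketched as below. The coefficient of variation is taken here as the population standard deviation divided by the mean, a standard definition the patent does not spell out, and the sample totals are invented:

```python
from statistics import mean, pstdev

def channel_statistics(totals):
    """Return (average, coefficient of variation) of a channel's totals."""
    m = mean(totals)
    cv = pstdev(totals) / m if m else 0.0  # guard against an all-zero channel
    return m, cv

totals_ch0 = [10, 14, 12, 12]   # e.g. sum0 values over the sampling periods
m0, cv0 = channel_statistics(totals_ch0)
print(m0)             # 12
print(round(cv0, 3))  # 0.118
```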
  • In step S25, the control unit 11 refers to the calculation result identification data D2 and obtains the identification values dM0, dM1, dM2, dCV0, and dCV2 corresponding to the obtained average values M0 to M2 and the coefficients of variation CV0 and CV2, respectively.
  • Next, the control unit 11 refers to the discrimination reference data D3 in the storage device 25 and determines the music genre by selecting the genre corresponding to the five-digit judgment value in which the identification values dM0, dM1, dM2, dCV0, dCV2 are arranged in order.
  • In the next step S27, the control unit 11 updates the history data D4 so that 1 is added to the count of the discriminated genre, and then ends the data analysis processing routine.
  • The microphone mode is a special mode that is activated when, for example, the user performs a predetermined operation instructing the microphone mode on the input device 20 with the input from the microphone 6 to the signal processing unit 10 enabled.
  • the audio signal from the microphone 6 is input to the signal processing unit 10 in the same manner as the music playback signal from the line input terminal 4.
  • the differential value and integral value for the low frequency component of the audio signal and the differential value for the high frequency component of the audio signal are converted into a digital signal and output.
  • Based on these output signals, the data generation unit 30 of the control unit 11 generates analysis data D1.
  • The analysis data D1 in this case likewise records, for each of the channels ch0 to ch2, the total number of times that the differential value or integral value exceeds the threshold TH within the sampling period Tm. That is, the processing content of the data generation unit 30 when an audio signal is input is the same as that when a music playback signal is input.
  • On the other hand, the data analysis unit 31 processes the analysis data D1 corresponding to an audio signal differently from the analysis data D1 corresponding to a music playback signal. That is, when processing the analysis data D1 corresponding to the audio signal, the data analysis unit 31 calculates the average values M0 to M2 for each channel from the total values described in the analysis data D1 (see FIG. 6). Then, by comparing the magnitude relationship of the obtained average values M0 to M2, the data analysis unit 31 classifies the audio signal according to which of the patterns A to E shown in FIG. 12 the magnitude relationship between the average values M0, M1, and M2 corresponds to. For example, if average value M0 > average value M1 > average value M2, the audio signal is classified into pattern B.
  • the patterns A to E characterize the magnitude relationship between the average values M0 to M2 by vectors connecting the average values M0 to M2.
  • In some cases, a branch reference value F = (M0 - M1)/M0, which indicates the ratio of the difference between the average value M0 and the average value M1 to the average value M0, is calculated.
  • This branch reference value F is used for pattern branching.
  • the branch reference value F is used for the branch between the pattern A and the pattern C and the branch between the pattern D and the pattern E.
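The classification by magnitude relation, together with the branch reference value F = (M0 - M1)/M0, might be sketched as follows. Only the ordering for pattern B (M0 > M1 > M2) is stated in the text; the orderings assigned to the remaining patterns and the cutoff applied to F are placeholders, since the patent defines the patterns only by reference to FIG. 12:

```python
def classify(m0, m1, m2, f_cutoff=0.5):
    """Classify the averages M0-M2 into one of the patterns A-E.

    Patterns A/C and D/E are separated by the branch reference
    value F = (M0 - M1) / M0; the cutoff here is a placeholder.
    """
    if m0 > m1 > m2:
        return "B"                        # ordering given in the text
    f = (m0 - m1) / m0 if m0 else 0.0     # branch reference value F
    if m1 < m0 and m2 >= m1:              # hypothetical A/C ordering
        return "A" if f >= f_cutoff else "C"
    return "D" if f >= f_cutoff else "E"  # remaining orderings (hypothetical)

print(classify(30, 20, 10))  # B
print(classify(30, 10, 20))  # A (F = 0.67 exceeds the 0.5 cutoff)
```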
  • The data analysis unit 31 determines a character to appear on the game screen according to the classified pattern. For example, one or more different characters are associated with each of the patterns A to E in advance, and the character is determined by the data analysis unit 31 selecting a character corresponding to the classified pattern according to that correspondence.
  • The character may be a human, an animal or plant, an imaginary animal or plant, or a letter, numeral, symbol, or the like displayed on the LCD 3; the display content is not particularly limited.
  • the determined character is displayed on the game screen of the LCD 3 at an appropriate timing according to the game control by the game control unit 32. Further, the data analysis unit 31 updates the retained character data D5 according to the determined contents of the character.
  • The determined character is counted as a character that the user holds and is recorded in the retained character data D5.
  • The retained character data D5 is configured as a table in which the characters A to X that can be displayed on the game screen are associated with the numbers held by the user, and the data analysis unit 31 updates the retained character data D5 by adding 1 to the number of the character made to appear according to the pattern classification result. Depending on the content of the game, it may be possible to hold a plurality of the same character, or only one of each character. If the number of characters that can be held is limited, the retained character data D5 is not updated once the set number has been reached.
  • Prior to the processing of FIG. 14, the control unit 11 (data generation unit 30) executes the analysis data generation processing routine shown in FIG. 10 in order to generate analysis data D1 corresponding to the audio signal. Since this is the same as the processing for the music playback signal, its description is omitted.
  • The control unit 11 subsequently executes the data analysis processing routine shown in FIG. 14.
  • In the first step S31, the variable n that specifies the number of the channel ch to be processed is set to the initial value 0, and in step S32, the total values of the channel number chn corresponding to the variable n are read from the analysis data D1 stored in the storage device 25 and their average value is calculated.
  • step S33 the control unit 11 determines whether or not 2 is set to the variable n. If not 2, 1 is added to the variable n in step S34 and the process returns to step S32. On the other hand, if the variable n is 2 in step S33, the control unit 11 proceeds to step S35. By repeating the processing of steps S32 to S34, the average value of each of the three channels ch0 to ch2 is calculated.
  • step S35 the control unit 11 compares the magnitude relationship of the average values M0 to M2.
  • step S36 the control unit 11 determines whether or not the average values M0 to M2 are in a predetermined magnitude relationship, that is, the relationship between the patterns A and C or the relationships between the patterns D and E described above. If there is a predetermined magnitude relationship, the control unit 11 calculates the branch reference value F in step S37, and then proceeds to step S38. If a negative determination is made in step S36, the control unit 11 skips step S37 and proceeds to step S38.
  • In step S38, the control unit 11 classifies the audio signal into one of the patterns A to E (see FIG. 12) based on the comparison of the average values M0 to M2 and, if it has been calculated, the branch reference value F. In the subsequent step S39, the control unit 11 determines a character corresponding to the classified pattern as the character to appear on the game screen. Thereafter, the control unit 11 proceeds to step S40, updates the retained character data D5 so that 1 is added to the number of the determined character, and then ends the data analysis processing routine.
  • the character determined as described above can be used in various ways in the game executed by the game control unit 32.
  • the game control unit 32 may cause the character described in the retained character data D5 to appear as a guest character different from the breeding character.
  • the guest character can be positioned as a character that the user calls as desired and displays it on the LCD 3, for example.
  • By making different guest characters appear according to the classification result of the audio signal, a function of collecting guest characters using voice input can be provided to the user.
  • Further, by referring to the history data D4, it is possible to analyze the frequency with which the user listens to music of each genre via the game machine 1, that is, the user's preferences, and to reflect the genre discrimination result in the game content executed by the game control unit 32. For example, when the game control unit 32 executes a character-raising game, the game control unit 32 can perform an operation of changing the characteristics, personality, and the like of the character according to the distribution of discrimination counts for each genre described in the history data D4.
  • As described above, the user's voice can be classified using the functions of the signal processing unit 10, the data generation unit 30, and the data analysis unit 31 that are also used for discriminating music, and the classification result can be reflected in the game content.
  • Since the function of reflecting each of the music features and the user's voice in the game content is thus realized without waste, more play functions can be provided to the user at low cost without the need to add a dedicated processing circuit for voice discrimination.
  • The signal processing unit is not limited to one provided with a differentiation circuit and an integration circuit for the low frequency component of the music reproduction signal or the audio signal and a differentiation circuit for the high frequency component; it suffices that the signal processing unit is provided with at least one of a differentiation circuit and an integration circuit for a specific frequency component.
  • The differential value and integral value of the music playback signal reflect the tendency of changes in the music. By outputting at least one of the differential value and the integral value from the signal processing unit, the characteristics of the music can be determined, and speech can be classified, from that tendency of change.
  • In the above embodiment, the differential value is output for each of the low frequency component and the high frequency component, and the integral value is output for the low frequency component; however, the integral values of both frequency components may be output from the signal processing unit, or the differential value and the integral value for a single frequency range may be output. Furthermore, only one of the differential value and the integral value for a single frequency range may be output from the signal processing unit.
  • In the above embodiment, the genre is discriminated as the music feature; however, the present invention is not limited to this, and the discriminating unit may discriminate the tempo of the music and various other features.
  • the tempo of music has a relatively strong correlation with the integrated value of the low frequency component of the music playback signal. That is, the low frequency component of the music playback signal is strongly influenced by the regular base rhythm included in the music, and the irregular waveform included in the low frequency component is dulled in the integrated waveform. Regular waveforms due to rhythm appear more clearly. For this reason, the peak interval of the integral waveform of the low frequency component, that is, the interval between the maximum values has a correlation with the tempo of music.
  • the determination unit can determine the tempo of the music by obtaining the peak interval of the integral waveform for each predetermined sampling period. If the peak interval is irregular, the feature discriminating unit may calculate a statistical value such as an average value, median value, or mode value as the peak interval. In the integrated value waveform of the low frequency component of the audio signal, the irregular waveform included in the audio signal is dulled and a regular waveform showing regular fluctuation of the audio appears more clearly. Voices can be classified using such regular changes.
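The tempo estimate described above, namely the peak-to-peak interval of the low-frequency integral waveform summarized by a statistic such as the median when irregular, might look like the sketch below; the simple local-maximum peak detector and the conversion via a known sample rate are illustrative assumptions:

```python
from statistics import median

def estimate_tempo(waveform, sample_rate):
    """Estimate tempo (BPM) from the peak intervals of an integral waveform.

    A peak (local maximum) is a sample larger than both neighbours; the
    median interval is used in case the intervals are irregular.
    """
    peaks = [i for i in range(1, len(waveform) - 1)
             if waveform[i - 1] < waveform[i] > waveform[i + 1]]
    intervals = [b - a for a, b in zip(peaks, peaks[1:])]
    if not intervals:
        return None   # fewer than two peaks: no tempo estimate
    seconds_per_beat = median(intervals) / sample_rate
    return 60.0 / seconds_per_beat

# A waveform peaking every 10 samples at a 20 Hz sample rate:
# peaks 0.5 s apart correspond to 120 BPM.
wave = [i % 10 for i in range(50)]
print(estimate_tempo(wave, sample_rate=20))  # 120.0
```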
  • In the above embodiment, the number of times that the integral value and the differential value of the low frequency component and the differential value of the high frequency component exceed a predetermined level within the sampling unit time is totaled, and the average value and coefficient of variation of the total values are calculated to determine the degree of variation in the music reproduction signal waveform; however, the contents of the processing executed by the discriminating unit to determine the characteristics of the music are not limited to this example using only the average value and the coefficient of variation.
  • For example, the music genre and the like may be discriminated by further referring to various statistical values of the total values, such as the standard deviation, variance, and sum. Any number of statistical values may be used.
  • Also, the present invention is not limited to the above embodiment, and the genre and the like may be determined by calculating the coefficient of variation of every total value of all the differential values and integral values.
  • Although a five-digit judgment value characterizing the waveform of the music playback signal was used for data analysis in the above embodiment, the number of digits may be set according to the various statistical values to be calculated. For example, if the average value and coefficient of variation are calculated for each of the integral and differential values of the low-frequency component and the differential value of the high-frequency component, the judgment value characterizing the waveform of the music playback signal has six digits. Various statistical values can also be used for voice classification.
  • In the above embodiment, the game control unit reflects the music genre in changes to the character's form or the like, or causes the character determined according to the audio signal classification result to be displayed on the game screen.
  • the relationship between the music feature discrimination result or the audio signal classification result and the game content is not limited to these examples.
  • various changes such as changes in game difficulty, progress speed, and bonus occurrence probability may be generated in association with music characteristics or audio signal classification results.
  • the signal processing unit may be configured as a hardware device combining circuit elements such as IC and LSI, or may be configured as a logical device combining MPU and software. Each of the data generation unit and the data analysis unit may also be configured as a hardware device.
  • the signal input unit is not limited to the line input terminal. For example, a device that receives a playback signal transmitted using radio such as FM radio waves and converts it into a music playback signal may be used as the signal input unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

Provided is a music game device capable of effectively realizing a function for reflecting a music feature and a user voice in the game contents. The music game device includes: a line input terminal (4) for acquiring a music reproduction signal; a microphone (6) for acquiring a user voice and converting it into a voice signal; a signal processing unit (10) for outputting at least one of a differential value and an integration value concerning a particular frequency component of each signal; judging units (30, 31) for judging a feature of the music to be reproduced by the music reproduction signal according to a signal outputted from the signal processing unit in response to the music reproduction signal and classifying the voice signal according to the signal outputted from the signal processing unit in response to the voice signal; and a game control unit (32) for reflecting the music feature judged by the judging units (30, 31) and the voice signal classification result in the game contents.

Description

Specification

Music game machine

Technical field

[0001] The present invention relates to a music game machine having a function of capturing a user's voice and reflecting it in game content.

Background art

[0002] As a game machine of this type, there is known a game machine that has a dedicated processing unit for analyzing an audio signal input from a microphone and reflects the analysis result of that processing unit in the form of a character (see, for example, Patent Document 1).

Patent Document 1: Japanese Unexamined Patent Application Publication No. 2001-29649

Disclosure of the invention

Problems to be solved by the invention

[0003] A music game machine has been studied that takes in a music reproduction signal output from a music playback device, determines a feature of the music to be reproduced by that signal, for example its genre, and reflects the determination result in the game content. In such a music game machine, if a dedicated processing unit for analyzing an audio signal is added separately from the components for determining the music features, functionally redundant parts may coexist and waste may result.

[0004] Accordingly, an object of the present invention is to provide a music game machine capable of realizing, without waste, a function of reflecting each of the music features and the user's voice in the game content.

Means for solving the problem

[0005] The music game machine of the present invention solves the above problem by comprising: a signal processing unit that outputs a signal corresponding to at least one of a differential value and an integral value related to a specific frequency component of an input signal; a music reproduction signal input unit that takes in a music reproduction signal output from a music playback device and inputs it to the signal processing unit; a voice input unit that takes in a user's voice, converts it into a voice signal, and inputs the voice signal to the signal processing unit; a discrimination unit that determines a feature of the music to be reproduced by the music reproduction signal based on the signal output from the signal processing unit in response to the input of the music reproduction signal, and that classifies the voice signal based on the signal output from the signal processing unit in response to the input of the voice signal; and a game control unit that reflects each of the music feature determined by the discrimination unit and the classification result of the voice signal in the game content.

[0006] In the music game machine of the present invention, in response to the input of the music reproduction signal from the music playback device or the voice signal from the user, the signal processing unit outputs a signal corresponding to at least one of the differential value and the integral value related to a specific frequency component. Based on that output signal, the discrimination unit determines the feature of the music or classifies the voice signal, and the feature and the classification result are each reflected in the game content by the game control unit. Features such as the genre and tempo of music correlate with the differential value or integral value of a specific frequency component of the music reproduction signal, so by using that correlation, the feature of the music can be determined based on the signal output from the signal processing unit in response to the input of the music reproduction signal. On the other hand, a voice signal has something in common with a music reproduction signal in that it, too, is a waveform of sound. For this reason, the differential value or integral value of a specific frequency component of the voice signal exhibits changes that have some correlation with various elements of the utterance content, such as the tone, strength, intonation, and difference in words. By focusing on these changes, even if advanced speech analysis such as speech recognition cannot be expected, the voice signal can be classified to some extent according to the utterance content. As a result, at least the signal processing unit, and in some cases part of the processing function of the discrimination unit as well, can be shared, so that more play functions can be provided to the user at low cost.

[0007] In one form of the music game machine of the present invention, the signal processing unit may output signals corresponding to each of the differential value and the integral value of the specific frequency component, and the discrimination unit may have: a data generation unit that captures the signals output from the signal processing unit one predetermined sampling unit time at a time, determines whether or not the value of each signal exceeds a predetermined level within the sampling unit time, and generates analysis data in which the number of times a value exceeding the predetermined level was detected is totaled for each predetermined sampling period and for each signal; and a data analysis unit that determines the feature of the music and classifies the voice signal based on the total values described in the analysis data. According to this form, the degree of variation of the differential value and the integral value output from the signal processing unit is reflected in the total values of the analysis data created by the discrimination unit. Since these variations correlate with features such as the genre and tempo of the music, or with changes in the content of the user's utterance, the feature of the music can be determined, or the user's voice classified, by using the total values. Since both the differential value and the integral value related to the specific frequency component are output from the signal processing unit, the total value related to the differential value becomes large when the signal strength repeatedly changes frequently, and the total value related to the integral value becomes large when the overall fluctuation of the signal is large. Such tendencies of change can be used to determine the feature of the music more finely. Furthermore, by utilizing the analysis data generation function of the data generation unit for the classification of the voice signal as well, the additional burden required to provide the function of classifying the voice signal can be further reduced.

In another form of the music game machine of the present invention, the signal processing unit may output a signal corresponding to at least one of the differential value and the integral value of the low frequency component of the input signal and a signal corresponding to at least one of the differential value and the integral value of the high frequency component of the input signal, and the discrimination unit may have: a data generation unit that captures each of the signals output from the signal processing unit one predetermined sampling unit time at a time, determines whether or not the value of each signal exceeds a predetermined level within the sampling unit time, and generates analysis data in which the number of times a value exceeding the predetermined level was detected is totaled for each predetermined sampling period and for each signal; and a data analysis unit that determines the feature of the music and classifies the voice signal based on the total values described in the analysis data. According to this form, as in the form described above, the feature of the music can be determined, or the user's voice classified, by using the total values described in the analysis data. Moreover, since the signal processing unit outputs a signal corresponding to at least one of the differential value and the integral value of the low frequency component and a signal corresponding to at least one of the differential value and the integral value of the high frequency component, the tendency of change of each of the low frequency component and the high frequency component can be grasped from the total values, and the feature of the music can be determined more finely. Furthermore, by utilizing the analysis data generation function of the data generation unit for the classification of the voice signal as well, the additional burden required to provide the function of classifying the voice signal can be further reduced.

[0009] In an embodiment that uses the aggregate values described in the analysis data, the data analysis unit may calculate an average of each of the aggregate values, discriminate the genre as the characteristic of the music by evaluating the variation of the obtained averages, and classify the audio signal based on the magnitude relationship among the averages. The genre of music can be discriminated by exploiting the correlation that exists between the genre and the variation of the differential or integral values of the specific frequency component of the music playback signal. As for the classification of the audio signal, the averages of the aggregate values are used just as in music-characteristic discrimination, so the average-calculation function can also be shared, which further reduces the additional burden required to provide the audio-signal classification function.

[0010] In an embodiment that classifies the audio signal based on the magnitude relationship among the averages, the data analysis unit may classify the audio signal according to which of a plurality of predetermined patterns (patterns A to E) the magnitude relationship among the averages corresponds to. According to this embodiment, defining a plurality of patterns in advance makes it easy to classify the magnitude relationship among the obtained averages.

[0011] In an embodiment that classifies the audio signal using a plurality of predetermined patterns of the magnitude relationship among the averages, the data analysis unit may further refer to a branch reference value, obtained by dividing the difference between two averages by one of them, to determine which of the plurality of patterns the magnitude relationship corresponds to. Setting a branch reference value increases the number of patterns, allowing the audio playback signal to be classified more finely.
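As a minimal sketch of how a branch reference value could refine pattern matching (the pattern names, the 0.5 split threshold, and the choice of divisor are hypothetical illustrations; the patent does not specify concrete values):

```python
def branch_reference(avg_a: float, avg_b: float) -> float:
    """Branch reference value per [0011]: the difference between two
    averages divided by one of them (here, avg_a)."""
    return (avg_a - avg_b) / avg_a

def classify(avg_a: float, avg_b: float, split: float = 0.5) -> str:
    """Hypothetical use: refine a coarse 'avg_a > avg_b' pattern into
    two sub-patterns depending on how pronounced the difference is."""
    if avg_a <= avg_b:
        return "pattern-B"  # placeholder for a different pattern
    ratio = branch_reference(avg_a, avg_b)
    return "pattern-A1" if ratio >= split else "pattern-A2"
```

Doubling the pattern count this way is exactly the effect the paragraph describes: one magnitude relationship splits into finer classes.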

[0012] In one embodiment of the music game machine of the present invention, the determination unit determines a character to be made to appear on the game screen based on the classification result of the audio signal, and the game control unit may reflect the classification result of the audio signal in the game content by causing the character determined by the data analysis unit to appear on the game screen. According to this embodiment, a play function of displaying on the screen a character corresponding to the content of the user's utterance can be provided to the user.

Effect of the Invention

[0013] As described above, according to the music game machine of the present invention, both the audio playback signal output from the music playback device and the audio signal output from the audio input unit are fed into the signal processing unit, which outputs a signal corresponding to at least one of the differential value and the integral value of the specific frequency component of those signals. The characteristics of the music can therefore be discriminated by exploiting the correlation between those characteristics and the differential or integral value of the specific frequency component of the music playback signal, while the audio signal can be suitably classified by focusing on the changes in the differential or integral value that correspond to the content of the user's utterance. As a result, at least the signal processing unit, and in some cases part of the processing function of the determination unit as well, can be shared, so that more play functions can be provided to the user at low cost.

Brief Description of the Drawings

[0014] [FIG. 1] A diagram showing a state in which a portable game machine according to the present invention is placed between a portable music player and earphones.

[FIG. 2] A block diagram of the portion of the control system of the game machine of FIG. 1 that relates to music genre discrimination.

[FIG. 3] A functional block diagram of the control unit of FIG. 2.

[FIG. 4] A diagram showing the relationship between a music playback signal and the sampling period.

[FIG. 5] A diagram showing an example of the relationship between the waveform of the integral value and the sampling unit time within a sampling period.

[FIG. 6] A diagram showing the contents of the analysis data.

[FIG. 7] A diagram showing the contents of the calculation result identification data.

[FIG. 8] A diagram showing the contents of the discrimination reference data.

[FIG. 9] A diagram showing the contents of the history data.

[FIG. 10] A flowchart showing the analysis data generation routine executed by the control unit.

[FIG. 11] A flowchart showing the data analysis routine executed by the control unit.

[FIG. 12] A diagram showing the patterns for classifying audio signals.

[FIG. 13] A diagram showing the contents of the retained character data.

[FIG. 14] A flowchart showing the data analysis routine executed by the control unit in microphone mode.

BEST MODE FOR CARRYING OUT THE INVENTION

[0015] FIG. 1 shows a portable game machine according to one embodiment of the present invention. A game machine 1 serving as a music game machine is used in combination with a portable music player 100, and includes a casing 2 and an LCD 3 serving as a display device attached to the front surface of the casing 2. The casing 2 is provided with a line input terminal 4, a phone terminal 5, and a microphone 6. The line input terminal 4 is connected to the line output terminal 101 of the portable music player 100 via a relay cable 102. The phone terminal 5 is connected to earphones 103. That is, the game machine 1 of this embodiment is used interposed between the portable music player 100 and the audio output device to be combined with it. The audio output device combined with the portable music player 100 is not limited to the earphones 103. The portable music player 100 may be any device that can output a music playback signal for audio conversion to various audio output devices such as speakers and headphones; details such as the format of its recording medium and its playback method do not matter. Furthermore, the music player is not limited to a portable type, and includes various devices that output music, such as home audio equipment, televisions, personal computers, and commercially available portable electronic games. The microphone 6 takes in the user's voice and converts it into an audio signal. Besides the user's voice, ambient sounds or voices and music from electronic devices can also be input to the microphone 6.

[0016] The game machine 1 functions as a relay that passes the music playback signal, output from the portable music player 100 and taken in through the line input terminal 4, on to the earphones 103, and also functions as a game machine that analyzes the music playback signal output from the portable music player 100 and provides the user with a game corresponding to the analysis result. FIG. 2 is a block diagram showing the configuration of the part of the control system provided inside the game machine 1 that relates in particular to the function of capturing and analyzing the music playback signal and the audio signal. The game machine 1 has a bypass path R1 that passes the analog music playback signal from the line input terminal 4, serving as a music-playback-signal input unit, and the audio signal from the microphone 6, serving as an audio input unit, through to the phone terminal 5; a signal processing unit 10 that processes the music playback signal and the audio signal taken in from the line input terminal 4 and the microphone 6 via a branch path R2; and a control unit 11 that captures the output signals of the signal processing unit 10. Although the paths R1 and R2 each consist of three lines, for the right channel, the left channel, and the ground channel, each is represented by a single line in the figure. The line input terminal 4 and the microphone 6 are connected through a changeover switch (not shown) that switches between them, so that the input signal can be switched between the music playback signal and the audio signal. For example, when the relay cable 102 is plugged into the line input terminal 4, the changeover switch connects the line input terminal 4 to the paths R1 and R2. When nothing is plugged in, the changeover switch connects the microphone 6 to the paths R1 and R2.

[0017] The signal processing unit 10 includes a pair of low-pass filters (LPF) 12A and 12B that pass only the low-frequency component of the music playback signal taken in from the line input terminal 4 or of the audio signal converted by the microphone 6; a high-pass filter (HPF) 13A that passes only the high-frequency component of the music playback signal or audio signal; a differentiation circuit 14 that differentiates the output signal of the LPF 12A; an integration circuit 15 that integrates the output signal of the LPF 12B; a differentiation circuit 16 that differentiates the output signal of the HPF 13A; and A/D converters 17A to 17C that convert the output signals of the circuits 14 to 16 into digital signals and output them to the control unit 11. The pass band of the LPFs 12A and 12B is set, for example, to 1000 Hz and below, and the pass band of the HPF 13A is set, for example, to 1000 Hz and above. The pass-band settings are not limited to these examples; for instance, the pass band of the LPFs 12A and 12B may be set to 500 Hz and below and that of the HPF 13A to 1000 Hz and above. Furthermore, the pass bands of the LPFs 12A and 12B may be set equal to each other or may differ. When the two pass bands coincide, a single LPF may be provided in place of the LPFs 12A and 12B, with its output signal branched to the differentiation circuit 14 and the integration circuit 15.

[0018] The control unit 11 is configured as a computer unit combining a microprocessing unit (MPU) with the peripheral devices necessary for the operation of the MPU, such as storage devices including RAM and ROM. The LCD 3 described above is connected to the control unit 11 as a control target, and an input device 20 for giving game instructions and the like and a speaker unit (SP) 21 for generating voices, sound effects, and the like are also connected. Further, the phone terminal 5 is connected to the connection path to the speaker unit 21.

[0019] The control unit 11 provides various game functions to the user by executing processing such as displaying a game screen on the LCD 3. As a function attached to the game, the control unit 11 has a function of analyzing the output signals of the signal processing unit 10 while a music playback signal is being input from the line input terminal 4, and thereby discriminating the genre of the music. FIG. 3 is a functional block diagram of the control unit 11. When the MPU (not shown) of the control unit 11 reads a predetermined control program from the storage device 25 and executes it, a data generation unit 30 and a data analysis unit 31, which together serve as the determination unit, and a game control unit 32 are created inside the control unit 11 as logical devices. The data generation unit 30 processes the output signals of the signal processing unit 10 to generate analysis data D1 and stores it in the storage device 25. The data analysis unit 31 reads the analysis data D1, refers to the calculation result identification data D2, discriminates the genre of the music by a predetermined method, and updates the history data D4 according to the discrimination result. The discrimination reference data D3 recorded in the storage device 25 is referred to for the genre discrimination. The game control unit 32 executes the game in accordance with a predetermined game program (not shown) while referring to the history data D4.

[0020] Further, when the audio signal from the microphone 6 has been processed by the signal processing unit 10, the control unit 11 also has a function of analyzing the output signals to classify the voice and, based on the classification result, causing a character to appear on the game screen of the LCD 3. To realize this function, the control unit 11 has the data generation unit 30 process the output signals of the signal processing unit 10 by the same procedure as the genre discrimination function to generate analysis data D1, and stores it in the storage device 25. The data analysis unit 31 reads the analysis data D1, classifies the audio signal by a predetermined method, and updates the retained character data D5 according to the classification result. The game control unit 32 executes the game with reference to the retained character data D5 in addition to the history data D4 described above.

[0021] Next, processing relating to genre discrimination by the game machine 1 will be described with reference to FIGS. 4 to 8. FIG. 4 shows an example of the waveform of a music playback signal input from the line input terminal 4 to the signal processing unit 10. In the signal processing unit 10, the low-frequency component of the music playback signal is extracted by the LPFs 12A and 12B, and the high-frequency component is extracted by the HPF 13A. The differential value of the extracted low-frequency component is output from the differentiation circuit 14, the integral value of the low-frequency component is output from the integration circuit 15, and the differential value of the high-frequency component is output from the differentiation circuit 16. The output differential and integral values are converted into digital signals by the A/D converters 17A to 17C and input to the data generation unit 30 of the control unit 11. In the data generation unit 30, two kinds of time lengths are set as reference times for processing the differential and integral values output from the signal processing unit 10: the sampling period Tm shown in FIG. 4 and the sampling unit time Tn shown in FIG. 5 (which shows an example of the output waveform of the integration circuit 15). The sampling period Tm is an integer multiple of the sampling unit time Tn. As an example, the sampling period Tm is set to 5 seconds and the sampling unit time Tn to 20 milliseconds.

[0022] The data generation unit 30 of the control unit 11 captures the differential and integral values one sampling unit time Tn at a time, and determines whether the differential value and the integral value each exceed a predetermined level within that sampling unit time Tn. It then generates the analysis data D1 by aggregating, per sampling period Tm and separately for the differential and integral values, the number of times a value exceeding the predetermined level was determined to have been detected. For example, if the integral value of the low-frequency component in one sampling period Tm set as in FIG. 4 fluctuates as shown in FIG. 5, the data generation unit 30 monitors whether the integral value exceeds a predetermined threshold TH within each sampling unit time Tn, and determines that the integral value has exceeded the predetermined level when it exceeds the threshold TH. However, regardless of how many times the integral value exceeds the threshold TH within a single sampling unit time Tn, the count is incremented by 1 if it exceeds it even once. This determination is repeated for every sampling unit time Tn within the sampling period Tm, and the number of times the predetermined level was determined to have been exceeded is counted at the end of the sampling period Tm. If the sampling period Tm is 5 seconds and the sampling unit time Tn is 20 milliseconds, the minimum count in one period Tm is 0 and the maximum is 250.
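The counting rule above can be sketched as follows. This is a simplified, hypothetical model working on an array of digitized samples (the actual device monitors analog circuit outputs, and the function name is ours):

```python
def count_exceedances(samples, tn_len, threshold):
    """Split one sampling period Tm worth of samples into sampling
    unit times of tn_len samples each. Each unit time contributes at
    most 1 to the count, no matter how often the signal exceeds the
    threshold TH within it, as described in [0022]."""
    count = 0
    for start in range(0, len(samples), tn_len):
        unit = samples[start:start + tn_len]
        if any(v > threshold for v in unit):
            count += 1
    return count

# With Tm = 5 s and Tn = 20 ms there are 250 unit times per period,
# so the per-period count ranges from 0 to 250, as in the text.
```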

[0023] The data generation unit 30 of the control unit 11 executes the above processing individually for each of the differential and integral values, aggregates the measured counts sequentially for each sampling period Tm, and generates the analysis data D1 as shown in FIG. 6. In the analysis data D1 of FIG. 6, channel ch0 corresponds to the output from the differentiation circuit 14, channel ch1 to the output from the integration circuit 15, and channel ch2 to the output from the differentiation circuit 16. Sample numbers smp1 to smpN correspond to the period numbers counted from the start of the music playback signal; here it is assumed that the music playback signal corresponds to N periods in total. The aggregate value sum0X of channel ch0 at sample number smpX (where X is 1 to N) indicates the number of times the differential value of the low-frequency component was determined to have exceeded the predetermined level TH in the X-th sampling period TmX from the start of processing. For example, sum01 corresponds to the number of times the differential value of the low-frequency component was determined to have exceeded the threshold TH in the first sampling period. The same applies to the other channels ch1 and ch2.

[0024] For the aggregate values described in the analysis data D1, the data analysis unit 31 of the control unit 11 calculates the averages M0 to M2 for each channel, that is, for each differential and integral value, and calculates the coefficients of variation CV0 and CV2 of the aggregate values described in the analysis data D1 for the differential values of the low-frequency and high-frequency components (see FIG. 6). Here, the coefficient of variation is the standard deviation of the aggregate values divided by their average, expressed as a percentage; it is a kind of value used in statistical processing as a measure for evaluating the magnitude of variation in data. For example, if the standard deviation of the aggregate values is SD and their average is M, the coefficient of variation is given by CV = (SD / M) × 100. Further, the data analysis unit 31 refers to the calculation result identification data D2 and obtains identification values dM0, dM1, dM2, dCV0, and dCV2 corresponding to the averages M0, M1, M2 and the coefficients of variation CV0, CV2, respectively. The calculation result identification data D2 is a set of tables that associate the values of the averages M0, M1, M2 and the coefficients of variation CV0, CV2 with the identification values dM0, dM1, dM2, dCV0, dCV2. An identification value represents one of the segments obtained when the range an average or coefficient of variation can take is divided into a predetermined number of segments. For example, in the table for the average M0, as shown in FIG. 7, the range of 0 to 250 that M0 can take is divided into four segments by three thresholds a, b, and c (where a < b < c), each segment being represented by an identification value 0 to 3. The data analysis unit 31 then refers to the table of FIG. 7 to obtain one of the values 0 to 3 corresponding to the average M0 as the identification value dM0. Similar tables, not shown, are prepared for the averages M1 and M2 and the coefficients of variation CV0 and CV2, and the data analysis unit 31 obtains the corresponding identification values dM1, dM2, dCV0, and dCV2 by the same procedure. The identification values dM1 and dM2 corresponding to the averages M1 and M2 take three levels, 0 to 2, and the identification values dCV0 and dCV2 corresponding to the coefficients of variation CV0 and CV2 take two levels, 0 or 1. However, the number of segments for each identification value may be changed as appropriate.
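The two computations described above can be sketched as follows. The threshold tuple passed to `identification_value` stands in for the thresholds a, b, c of FIG. 7, whose numeric values the text does not specify, and the population standard deviation is one plausible reading of "standard deviation" here:

```python
import statistics

def coefficient_of_variation(values):
    """CV = (SD / M) * 100, with SD the standard deviation and M the
    mean of the per-period aggregate values (see [0024])."""
    mean = statistics.mean(values)
    return statistics.pstdev(values) / mean * 100

def identification_value(value, thresholds):
    """Map a value to the index of the segment it falls in. With three
    thresholds (a, b, c) the result is one of 0..3, like dM0 in FIG. 7;
    fewer thresholds give the 3-level and 2-level codes."""
    return sum(value >= t for t in thresholds)
```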

[0025] The data analysis unit 31 arranges the obtained identification values dM0 to dM2, dCV0, and dCV2 in the order dM0, dM1, dM2, dCV0, dCV2 to obtain a five-digit numerical value, the judgment value, that characterizes the waveform of the music playback signal. For example, if the identification value dM0 is 1, dM1 is 0, dM2 is 0, dCV0 is 0, and dCV2 is 1, the judgment value 10001 is obtained. In this example, 144 judgment values are possible. Note that the order in which the identification values dM0 to dM2, dCV0, and dCV2 are arranged to obtain the judgment value is not limited to this embodiment and may be specified arbitrarily.
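Forming the judgment value is simple digit concatenation, and the count of 144 follows directly from the digit ranges (4 × 3 × 3 × 2 × 2). A sketch:

```python
from itertools import product

def judgment_value(dm0, dm1, dm2, dcv0, dcv2):
    """Arrange the identification values in the order dM0, dM1, dM2,
    dCV0, dCV2 to form one number (leading zeros collapse when read
    as an integer, but each digit combination stays distinct)."""
    return int(f"{dm0}{dm1}{dm2}{dcv0}{dcv2}")

# Digit ranges per the text: dM0 in 0-3, dM1/dM2 in 0-2, dCV0/dCV2 in 0-1.
combos = {
    judgment_value(*c)
    for c in product(range(4), range(3), range(3), range(2), range(2))
}
```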

[0026] Further, the data analysis unit 31 discriminates the genre of the music to be reproduced by the music playback signal based on the five-digit judgment value described above. The discrimination reference data D3 is referred to in this genre discrimination. As illustrated in FIG. 8, the discrimination reference data D3 describes the music genres A to X in association with the 144 judgment values mentioned above. A genre here is a concept used to distinguish the content of music, for example classical, rock, ballad, or jazz. The data analysis unit 31 compares the obtained judgment value against the discrimination reference data D3 and determines the genre that matches it as the genre corresponding to the music playback signal. For example, when the judgment value is 10001, genre A is determined as the genre corresponding to the music playback signal, as illustrated in FIG. 8. After the genre has been determined, the data analysis unit 31 updates the history data D4 according to the discrimination result. For example, as shown in FIG. 9, the history data D4 describes the genres A to X in association with their respective input counts Na to Nx, and the data analysis unit 31 updates the history data D4 by adding 1 to the count of the discriminated genre. Alternatively, a specific limit may be set in advance on the number of entries in the history data D4, with the discriminated genre recorded in the history data D4 each time a discrimination result is output; in that case, when the number of entries exceeds the limit, the oldest entry is deleted and the history data D4 is updated so that the latest discrimination result is recorded.
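The lookup and history update described here amount to two table operations. A sketch with made-up table entries (the real D3 of FIG. 8 maps all 144 judgment values to genres A to X; the fragment below is illustrative only):

```python
# Hypothetical fragment of discrimination reference data D3 (FIG. 8).
D3 = {10001: "A", 10000: "B", 21011: "C"}

# History data D4 (FIG. 9): per-genre input counts, initially zero.
D4 = {genre: 0 for genre in "ABC"}

def discriminate_and_record(judgment):
    """Look up the genre matching the judgment value in D3 and, if
    found, add 1 to that genre's count in the history data D4."""
    genre = D3.get(judgment)
    if genre is not None:
        D4[genre] += 1
    return genre
```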

[0027] Next, the procedure of the processing executed by the control unit 11 to perform the genre discrimination described above will be explained with reference to FIGS. 10 and 11. FIG. 10 shows the analysis data generation routine executed by the control unit 11 (data generation unit 30) to generate the analysis data D1. This routine is executed on the condition that the differential and integral values have been output from the signal processing unit 10, for example in a state where the user has instructed genre discrimination from the input device 20 (see FIG. 2). The differential and integral values output from the signal processing unit 10 are sequentially accumulated in an internal buffer of the control unit 11 and processed by this routine.

[0028] In the analysis data generation routine, in the first step S1, the control unit 11 sets a variable n, which designates the number of the channel ch to be processed, to an initial value of 0. In the subsequent step S2, the control unit 11 takes in from the internal buffer an amount of the output signal (differential value or integral value) of channel chn corresponding to one sampling unit time. In the next step S3, the control unit 11 determines whether the acquired output signal exceeds a predetermined level. If it exceeds the predetermined level, the control unit 11 proceeds to step S4, adds 1 to an internal counter for channel chn, and then proceeds to step S5. If, on the other hand, the predetermined level is not exceeded in step S3, the control unit 11 skips step S4 and proceeds to step S5.

[0029] In step S5, the control unit 11 determines whether the variable n is set to 2. If n is not 2, the control unit 11 adds 1 to the variable n in step S6 and returns to step S2. If the variable n is 2 in step S5, the control unit 11 proceeds to step S7. By repeating the processing of steps S2 to S6, the outputs of the three channels ch0 to ch2, that is, the outputs of the differentiation circuit 14 and the integration circuit 15 for the low-frequency component and the differentiation circuit 16 for the high-frequency component, are each examined over a length corresponding to one sampling unit time.

[0030] In step S7, the control unit 11 determines whether processing for one sampling period Tm has been completed. For example, it may be determined that processing for the sampling period Tm has been completed when the number of times step S5 has produced an affirmative result equals the value obtained by dividing the sampling period Tm by the sampling unit time Tn. If the determination in step S7 is negative, the control unit 11 returns to step S1 and proceeds to process the signal of the next sampling unit time stored in the internal buffer. If the determination in step S7 is affirmative, the control unit 11 proceeds to step S8 and adds the values recorded in the internal counters to the analysis data D1 in the storage device 25 as the total values sum0X, sum1X, and sum2X for the sample number smpX corresponding to the current sampling period (see FIG. 6). If the analysis data D1 does not yet exist, the control unit 11 newly creates the analysis data D1 and records the total values in association with the first sample number smp1.

[0031] In the subsequent step S9, the control unit 11 resets the values of the internal counters to the initial value 0, and in the next step S10 determines whether the generation of the analysis data D1 has been completed. For example, it can be determined that the processing has been completed when a so-called silent state, in which the outputs of all the channels ch0 to ch2 remain in the vicinity of 0, continues for a predetermined number of seconds or more. If the processing has not been completed, the control unit 11 returns to step S1. If it is determined that the processing has been completed, the control unit 11 ends the analysis data generation routine. Through the above processing, the analysis data D1 shown in FIG. 6 is generated.
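The loop of steps S1 through S10 can be sketched in outline as follows. The channel layout, threshold, and timing constants are assumptions chosen for the example, not values given in the patent.

```python
# Illustrative sketch of the analysis data generation routine (steps S1-S10).

SAMPLING_UNIT_TIME = 1      # Tn, in arbitrary ticks
SAMPLING_PERIOD = 4         # Tm, so Tm / Tn = 4 unit times per period
THRESHOLD = 0.5             # the "predetermined level" of step S3

def generate_analysis_data(buffered_signals):
    """buffered_signals: per-channel lists of per-unit-time output values
    (ch0: low-freq differential, ch1: low-freq integral, ch2: high-freq
    differential). Returns analysis data D1 as per-period count triples."""
    analysis_data = []            # D1
    counters = [0, 0, 0]          # internal counters for ch0-ch2
    units_per_period = SAMPLING_PERIOD // SAMPLING_UNIT_TIME
    n_units = len(buffered_signals[0])
    for t in range(n_units):
        for n in range(3):                          # steps S1, S5, S6
            if buffered_signals[n][t] > THRESHOLD:  # step S3
                counters[n] += 1                    # step S4
        if (t + 1) % units_per_period == 0:         # step S7
            analysis_data.append(tuple(counters))   # step S8: sum0X..sum2X
            counters = [0, 0, 0]                    # step S9
    return analysis_data

d1 = generate_analysis_data([
    [0.9, 0.1, 0.8, 0.2, 0.7, 0.7, 0.7, 0.7],
    [0.6, 0.6, 0.6, 0.6, 0.1, 0.1, 0.1, 0.1],
    [0.2, 0.2, 0.9, 0.2, 0.9, 0.9, 0.1, 0.1],
])
print(d1)  # -> [(2, 4, 1), (4, 0, 2)]
```

The silence-based termination of step S10 is omitted here; the sketch simply processes a fixed-length buffer.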

[0032] FIG. 11 shows a data analysis routine executed by the control unit 11 (data analysis unit 31) to determine the genre of the music from the analysis data D1. This routine is executed immediately after the analysis data generation routine of FIG. 10 ends. In the data analysis routine, in the first step S21, the control unit 11 sets the variable n, which designates the number of the channel ch to be processed, to an initial value of 0. In the subsequent step S22, the control unit 11 reads from the analysis data D1 recorded in the storage device 25 the total values of the channel number chn corresponding to the variable n, and calculates their average value and, for the total values of the differential values of the low-frequency and high-frequency components, the coefficient of variation. In the next step S23, the control unit 11 determines whether the variable n is set to 2; if not, it adds 1 to the variable n in step S24 and returns to step S22. If the variable n is 2 in step S23, the control unit 11 proceeds to step S25. By repeating the processing of steps S22 to S24, the average values M0 to M2 of the three channels ch0 to ch2 and the coefficients of variation CV0 and CV2 for the total values of the differential values of the low-frequency and high-frequency components are calculated.

[0033] In step S25, the control unit 11 refers to the calculation result identification data D4 and obtains the identification values dM0, dM1, dM2, dCV0, and dCV2 corresponding respectively to the obtained average values M0 to M2 and coefficients of variation CV0 and CV2. In the next step S26, the control unit 11 refers to the discrimination reference data D3 in the storage device 25 and determines the genre of the music by selecting the genre corresponding to the five-digit judgment value formed by arranging the identification values dM0, dM1, dM2, dCV0, and dCV2 in order. Further, in the next step S27, the control unit 11 updates the history data D2 so that 1 is added to the count of the determined genre, and then ends the data analysis routine.
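Steps S21 through S26 can be sketched as below. The scheme that digitizes each statistic into a one-digit identification value is an assumption made for illustration; the patent obtains the identification values from the calculation result identification data, whose contents are not reproduced here.

```python
import statistics

# Sketch of steps S21-S26: compute per-channel means M0-M2, coefficients of
# variation CV0/CV2 for the two differential channels, map each statistic to
# a one-digit identification value, and concatenate the digits into the
# five-digit judgment value. The digitizing thresholds are invented.

def coefficient_of_variation(values):
    return statistics.pstdev(values) / statistics.mean(values)

def digitize(value, thresholds=(1.0, 2.0)):
    """Map a statistic to an identification value 0/1/2 (assumed scheme)."""
    return sum(value >= t for t in thresholds)

def judgment_value(analysis_data):
    """analysis_data: list of (sum0, sum1, sum2) per sampling period."""
    channels = list(zip(*analysis_data))              # per-channel series
    means = [statistics.mean(ch) for ch in channels]  # M0, M1, M2
    cv0 = coefficient_of_variation(channels[0])       # CV0 (low-freq diff)
    cv2 = coefficient_of_variation(channels[2])       # CV2 (high-freq diff)
    digits = [digitize(means[0]), digitize(means[1]), digitize(means[2]),
              digitize(cv0), digitize(cv2)]
    return "".join(str(d) for d in digits)            # dM0 dM1 dM2 dCV0 dCV2

print(judgment_value([(2, 4, 1), (4, 0, 2)]))  # -> 22100
```

The resulting five-digit string is what the routine matches against the discrimination reference data D3 to select a genre.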

[0034] Next, with reference to FIGS. 12 to 14, processing related to a microphone mode, in which the audio signal from the microphone 6 is analyzed and the voice is classified, will be explained. The microphone mode is a special mode that is activated when, for example, the user performs a predetermined operation on the input device 20 for designating the microphone mode while input from the microphone 6 to the signal processing unit 10 is enabled.

[0035] In the microphone mode, the audio signal from the microphone 6 is input to the signal processing unit 10 in the same manner as the music playback signal from the line input terminal 4. The signal processing unit 10 converts the differential value and integral value of the low-frequency component of the audio signal and the differential value of the high-frequency component of the audio signal into digital signals and outputs them. Based on these output signals, the data generation unit 30 of the control unit 11 generates analysis data D1. As in the case of the music playback signal described above, the analysis data D1 in this case records, for each of the channels ch0 to ch2, the total number of times the differential value or integral value exceeded the threshold TH within the sampling period Tm. In other words, the processing performed by the data generation unit 30 when an audio signal is input is the same as that performed when a music playback signal is input.

[0036] On the other hand, the processing performed by the data analysis unit 31 differs between the case of processing the analysis data D1 corresponding to the music playback signal and the case of processing the analysis data D1 corresponding to the audio signal. Specifically, when processing the analysis data D1 corresponding to the audio signal, the data analysis unit 31 calculates the average values M0 to M2 for each channel from the total values described in the analysis data D1 (see FIG. 6). The data analysis unit 31 then compares the magnitudes of the obtained average values M0 to M2 and classifies the audio signal according to which of patterns A to E shown in FIG. 12 the magnitude relationship among the average values M0, M1, and M2 corresponds to. For example, when average value M0 > average value M1 > average value M2, the audio signal is classified into pattern B. Patterns A to E characterize the magnitude relationship among the average values M0 to M2 by vectors connecting them. However, when the magnitude relationship among the average values M0 to M2 is a predetermined relationship, a branch reference value F = (M0 - M1) / M0, which indicates the ratio of the difference between the average value M0 and the average value M1, is further calculated, and this branch reference value F is used for branching between patterns. In the example of FIG. 12, the branch reference value F is used for the branch between pattern A and pattern C and for the branch between pattern D and pattern E.
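A minimal sketch of this classification is given below. Only the ordering M0 > M1 > M2 for pattern B is stated in the text; the other orderings and the cutoff applied to the branch reference value F depend on FIG. 12, which is not reproduced here, so they are invented for illustration.

```python
# Hypothetical sketch of the pattern classification in [0036]. The orderings
# other than pattern B, and the F cutoff, are assumptions, not from the patent.

F_CUTOFF = 0.5  # assumed boundary for the branch reference value

def classify(m0, m1, m2):
    if m0 > m1 > m2:
        return "B"                   # the one ordering the text confirms
    if m0 > m2 > m1:                 # assumed ordering: A vs. C split on F
        f = (m0 - m1) / m0           # branch reference value
        return "A" if f >= F_CUTOFF else "C"
    if m2 > m0 > m1:                 # assumed ordering: D vs. E split on F
        f = (m0 - m1) / m0
        return "D" if f >= F_CUTOFF else "E"
    return "E"                       # assumed fallback

print(classify(3.0, 2.0, 1.0))  # -> B (M0 > M1 > M2, per the text)
```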

[0037] When the audio signal is classified into one of patterns A to E, the data analysis unit 31 determines a character to be made to appear on the game screen according to the classified pattern. For example, one or more mutually different characters are associated in advance with each of the patterns A to E, and the character is determined by the data analysis unit 31 selecting, according to that correspondence, the character corresponding to the classified pattern. A character here is, for example, a person, an animal or plant, an imaginary animal or plant, or an arrangement of objects, letters, symbols, or the like displayed on the LCD 3; the displayed content is not particularly limited. The determined character is displayed on the game screen of the LCD 3 at an appropriate timing under the control of the game by the game control unit 32. Further, the data analysis unit 31 updates held character data D5 according to the determined character. A character that has appeared once is counted as a character that the user can hold, and is recorded in the held character data D5. For example, as shown in FIG. 13, the held character data D5 is configured as a table that associates characters A to X that can be displayed on the game screen with the characters held by the user, and the data analysis unit 31 updates the held character data D5 by adding 1 to the count of the character made to appear according to the pattern classification result. Depending on the game content, the data may be set so that a plurality of the same character can be held, or so that only one of each character can be held. When a limit is placed on the number of characters that can be held, the count in the held character data D5 is not updated beyond the set number.

[0038] Next, with reference to FIG. 14, the procedure of the processing executed by the control unit 11 to classify voices in the microphone mode will be explained. Prior to the processing of FIG. 14, the control unit 11 (data generation unit 30) executes the analysis data generation routine shown in FIG. 10 to generate the analysis data D1 corresponding to the audio signal; since this is the same as the processing for the music playback signal, its explanation is omitted. When the analysis data D1 corresponding to the audio signal has been generated, the control unit 11 subsequently executes the data analysis routine shown in FIG. 14.

[0039] In the data analysis routine of FIG. 14, in the first step S31, the control unit 11 sets the variable n, which designates the number of the channel ch to be processed, to an initial value of 0. In the subsequent step S32, the control unit 11 reads from the analysis data D1 stored in the storage device 25 the total values of the channel number chn corresponding to the variable n and calculates their average value. In the next step S33, the control unit 11 determines whether the variable n is set to 2; if not, it adds 1 to the variable n in step S34 and returns to step S32. If the variable n is 2 in step S33, the control unit 11 proceeds to step S35. By repeating the processing of steps S32 to S34, the average value of each of the three channels ch0 to ch2 is calculated.

[0040] In step S35, the control unit 11 compares the magnitudes of the average values M0 to M2. In the next step S36, the control unit 11 determines whether the average values M0 to M2 are in a predetermined magnitude relationship, that is, the relationship of patterns A and C or the relationship of patterns D and E described above. If they are, the control unit 11 calculates the branch reference value F in step S37 and then proceeds to step S38. If the determination in step S36 is negative, the control unit 11 skips step S37 and proceeds to step S38. In step S38, the control unit 11 determines which of patterns A to E (see FIG. 12) the audio signal is classified into, based on the magnitude relationship among the average values M0 to M2 and, when the branch reference value F has been calculated, on that branch reference value F. In the subsequent step S39, the control unit 11 determines the character corresponding to the classified pattern as the character to be made to appear on the game screen. Thereafter, the control unit 11 proceeds to step S40, updates the held character data D5 so that 1 is added to the count of the determined character, and then ends the data analysis routine.

[0041] The character determined as described above can be used in various ways in the game executed by the game control unit 32. For example, when the game control unit 32 executes a game in which a character is raised, the game control unit 32 may cause a character described in the held character data D5 to appear as a guest character separate from the character being raised. The guest character can be positioned, for example, as a character that the user calls up as desired and displays on the LCD 3. By causing different guest characters to appear according to the classification result of the audio signal, a game function of collecting guest characters through voice input can be provided to the user.

[0042] In the game machine 1 of this embodiment, since the number of determinations for each genre is recorded in the history data D4, it is possible, by referring to the history data D4, to analyze the frequency of each genre of music the user has listened to via the game machine 1, the user's genre preferences, and so on, and to reflect the genre determination results in the content of the game executed by the game control unit 32. For example, when the game control unit 32 executes a game in which a character is raised, the game control unit 32 can perform operations such as changing characteristics of the character, such as its appearance and personality, according to the distribution of the determination counts for each genre described in the history data D4. Moreover, the user's voice can be classified using the functions of the signal processing unit 10, the data generation unit 30, and the data analysis unit 31 that are used for determining the music genre, and the classification result can be reflected in the game content. This realizes, without waste, functions that reflect both the characteristics of the music and the user's voice in the game content, so that more play functions can be provided at low cost without the need to add a dedicated processing circuit for voice discrimination.

[0043] The present invention is not limited to the above embodiment and can be implemented in various forms. For example, the signal processing unit is not limited to one provided with a differentiation circuit and an integration circuit for the low-frequency component of the music playback signal or the audio signal and a differentiation circuit for the high-frequency component; various modifications are possible as long as the signal processing unit is provided with at least one of a differentiation circuit and an integration circuit for a specific frequency component. Since the differential value and the integral value of the music playback signal reflect tendencies in how the music changes, the characteristics of the music can be determined, and the voice can be classified, from those tendencies by having the signal processing unit output at least one of the differential value and the integral value. In the above embodiment, the differential value is output for each of the low-frequency and high-frequency components, but the integral values of both frequency components may be output from the signal processing unit, or the differential value and the integral value for a single frequency range may be output from the signal processing unit. Furthermore, only one of the differential value and the integral value for a single frequency range may be output from the signal processing unit.

[0044] In the above embodiment, the genre is determined as the characteristic of the music, but the present invention is not limited to this; the determination unit may determine the tempo of the music or various other characteristics. For example, the tempo of music has a relatively strong correlation with the integral value of the low-frequency component of the music playback signal. That is, the low-frequency component of the music playback signal is strongly influenced by the regular bass rhythm contained in the music, and in its integral value waveform, irregular fluctuations contained in the low-frequency component are smoothed out so that the regular waveform due to the bass rhythm appears more clearly. For this reason, the peak interval of the integral value waveform of the low-frequency component, that is, the interval between local maxima, correlates with the tempo of the music. The determination unit can determine the tempo of the music by obtaining the peak interval of the integral value waveform for each predetermined sampling period. If the peak intervals are irregular, the feature determination unit may calculate a statistical value such as the average, median, or mode as the peak interval. Likewise, in the integral value waveform of the low-frequency component of the audio signal, irregular fluctuations contained in the audio signal are smoothed out, and a regular waveform showing the regular fluctuation of the voice appears more clearly. The voice can be classified using such regular changes.
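The peak-interval idea above can be sketched as follows. The sample-rate handling and the conversion to beats per minute are assumptions added for the example; the patent only states that the tempo correlates with the interval between local maxima.

```python
import statistics

# Illustrative sketch of the tempo estimation idea in [0044]: find local
# maxima of the low-frequency integral value waveform and take a statistic
# of their spacing.

def peak_intervals(waveform):
    """Sample counts between successive local maxima of the waveform."""
    peaks = [i for i in range(1, len(waveform) - 1)
             if waveform[i - 1] < waveform[i] > waveform[i + 1]]
    return [b - a for a, b in zip(peaks, peaks[1:])]

def estimate_tempo_bpm(waveform, samples_per_second):
    """Use the mean interval (the text also mentions median or mode)."""
    intervals = peak_intervals(waveform)
    mean_interval = statistics.mean(intervals)       # in samples
    return 60.0 * samples_per_second / mean_interval

# A toy waveform peaking every 4 samples at 8 samples/s, i.e. 2 beats/s.
wave = [0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1, 0]
print(estimate_tempo_bpm(wave, 8))  # -> 120.0
```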

[0045] In the above embodiment, the number of times the integral value and differential value of the low-frequency component and the differential value of the high-frequency component exceeded a predetermined level within the sampling unit time are totaled, and the average value and coefficient of variation of the totals are calculated to determine the degree of variation in the music playback signal waveform. However, the processing executed by the determination unit to determine the characteristics of the music is not limited to an example that uses only the average value and the coefficient of variation. For example, the music genre or the like may be determined by further referring to various statistical values such as the standard deviation, variance, or sum of the total values, and any plurality of kinds of statistical values may be used. In addition, although only the coefficients of variation of the differential values of the low-frequency and high-frequency components were calculated, the invention is not limited to this; the coefficients of variation of all the total values of the differential and integral values may be calculated and used for determining the genre or the like. A five-digit judgment value characterizing the waveform of the music playback signal was used for the data analysis, but the number of digits may be set in accordance with the various statistical values to be calculated. For example, if the average value and coefficient of variation are calculated for each of the integral value and differential value of the low-frequency component and the differential value of the high-frequency component, the judgment value characterizing the waveform of the music playback signal has six digits. Various statistical values may likewise be used for the classification of the voice.

[0046] In this embodiment, an example was shown in which the game control unit reflects the music genre in changes to the form of a character or the like, or causes a character determined according to the classification result of the audio signal to appear on the game screen; however, the relationship between the music characteristic determination result or the audio signal classification result and the game content is not limited to these examples. For example, various changes, such as changes in the game's difficulty, progress speed, or the probability of a bonus occurring, may be caused in association with the characteristics of the music or the classification result of the audio signal.

The signal processing unit may be configured as a hardware device combining circuit elements such as ICs and LSIs, or as a logical device combining an MPU with software. Each of the data generation unit and the data analysis unit may also be configured as a hardware device. The signal input unit is not limited to a line input terminal. For example, a device that receives a playback signal transmitted wirelessly, such as by FM radio waves, from a music playback device and converts it into a music playback signal may be used as the signal input unit.

Claims

請求の範囲 The scope of the claims [1] 入力信号の特定周波数成分に関する微分値及び積分値のうち、少なくともいずれ 力、一方に相当する信号を出力する信号処理部と、  [1] A signal processing unit that outputs a signal corresponding to at least one of a differential value and an integral value related to a specific frequency component of the input signal; 音楽再生機器から出力される音楽再生信号を取り込んで前記信号処理部に入力 させる音楽再生信号入力部と、  A music playback signal input unit for receiving a music playback signal output from a music playback device and inputting the music playback signal to the signal processing unit; ユーザの音声を取り込んで音声信号に変換し、該音声信号を前記信号処理部に 入力させる音声入力部と、  A voice input unit that captures a user's voice, converts the voice into a voice signal, and inputs the voice signal to the signal processing unit; 前記音楽再生信号の入力に対応して前記信号処理部から出力される信号に基づ いて、前記音楽再生信号にて再生されるべき音楽の特徴を判別し、かつ前記音声信 号の入力に対応して前記信号処理部から出力される信号に基づいて前記音声信号 を分類する判別部と、  Based on the signal output from the signal processing unit corresponding to the input of the music playback signal, the characteristics of the music to be played back by the music playback signal are determined, and the input of the audio signal is supported. A discriminating unit for classifying the audio signal based on a signal output from the signal processing unit; 前記判別部にて判別された前記音楽の特徴及び前記音声信号の分類結果のそれ ぞれをゲーム内容に反映させるゲーム制御部と、  A game control unit that reflects the characteristics of the music determined by the determination unit and the classification result of the audio signal in the game content; を備えた音楽ゲーム機。  Music game machine equipped with. 
[2] 前記信号処理部は、前記特定周波数成分の微分値及び積分値のそれぞれに相 当する信号を出力し、  [2] The signal processing unit outputs signals corresponding to the differential value and the integral value of the specific frequency component, 前記判別部は、前記信号処理部から出力される信号を所定のサンプリング単位時 間ずつ取り込んで、前記サンプリング単位時間内に各信号の値が所定レベルを超え るか否かを判定し、前記所定レベルを超える値が検出されたと判定された回数を所 定のサンプリング周期毎でかつ前記信号別に集計した解析データを生成するデータ 生成部と、前記解析データに記述された集計値に基づいて前記音楽の特徴を判別 し、かつ前記音声信号を分類するデータ解析部とを有している、請求の範囲第 1項 に記載の音楽ゲーム機。  The determination unit captures a signal output from the signal processing unit for each predetermined sampling unit time, determines whether the value of each signal exceeds a predetermined level within the sampling unit time, and determines the predetermined A data generation unit that generates analysis data in which the number of times it is determined that a value exceeding the level is detected for each predetermined sampling period and for each signal; and the music based on the total value described in the analysis data The music game machine according to claim 1, further comprising: a data analysis unit that discriminates the characteristics of the voice signal and classifies the audio signal. [3] 前記信号処理部は、前記入力信号の低周波成分の微分値及び積分値の少なくと もいずれか一方に相当する信号と、前記入力信号の高周波成分の微分値及び積分 値の少なくともいずれか一方に相当する信号とをそれぞれ出力し、  [3] The signal processing unit includes at least one of a signal corresponding to at least one of a differential value and an integral value of a low frequency component of the input signal, and a differential value and an integral value of a high frequency component of the input signal. 
Output signals corresponding to either of these, 前記判別部は、前記信号処理部から出力される信号のそれぞれを所定のサンプリ ング単位時間ずつ取り込んで、前記サンプリング単位時間内に各信号の値が所定レ ベルを超えるか否かを判定し、前記所定レベルを超える値が検出されたと判定され た回数を所定のサンプリング周期毎でかつ前記信号別に集計した解析データを生 成するデータ生成部と、前記解析データに記述された集計値に基づいて前記音楽 の特徴を判別し、かつ前記音声信号を分類するデータ解析部とを有している、請求 の範囲第 1項に記載の音楽ゲーム機。 The discriminating unit takes in each of the signals output from the signal processing unit for a predetermined sampling unit time, and the value of each signal falls within a predetermined unit time within the sampling unit time. A data generation unit that generates analysis data that determines whether or not a value exceeding the predetermined level has been detected, and aggregates the number of times that the value exceeding the predetermined level is detected for each predetermined sampling period and for each signal; and The music game machine according to claim 1, further comprising: a data analysis unit that discriminates characteristics of the music based on a total value described in data and classifies the audio signal. [4] 前記データ解析部は、前記集計値のそれぞれの平均値を演算し、得られた平均値 のばらつきを評価することによって前記音楽の特徴としてのジャンルを判別し、かつ、 前記平均値の大小関係に基づいて前記音声信号を分類する、請求の範囲第 2項又 は第 3項に記載の音楽ゲーム機。  [4] The data analysis unit calculates an average value of each of the aggregated values, determines a genre as a feature of the music by evaluating variation of the obtained average value, and calculates the average value of the average value. 4. The music game machine according to claim 2 or 3, wherein the audio signals are classified based on a magnitude relationship. [5] 前記データ解析部は、前記平均値の大小関係が予め定められた複数のパターン のいずれに該当するかにより前記音声信号を分類することを特徴とする請求の範囲 第 4項に記載の音楽ゲーム機。  5. The data analysis unit according to claim 4, wherein the data analysis unit classifies the audio signal according to which of a plurality of predetermined patterns the magnitude relationship of the average value corresponds to. Music game machine. 
[6] The music game machine according to claim 5, wherein the data analysis unit further refers to a branch reference value, obtained by dividing the difference between two averages by one of those averages, in determining which of the plurality of patterns the magnitude relationship among the averages corresponds to.  [7] The music game machine according to any one of claims 1 to 6, wherein the determination unit determines a character to be made to appear on the game screen on the basis of the classification result of the audio signal, and the game control unit reflects the classification result of the audio signal in the game content by causing the character determined by the data analysis unit to appear on the game screen.
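The pattern selection of claims [5] and [6] — classifying by the magnitude relationship of two averages, refined by a branch reference value (the difference between the averages divided by one of them) — can be sketched as below. The pattern names, the choice of divisor, and the `branch_threshold` cutoff are all illustrative assumptions; the specification defines none of these values.

```python
def classify_by_pattern(avg_a: float, avg_b: float,
                        branch_threshold: float = 0.2) -> str:
    """Pick one of several predetermined patterns from the magnitude
    relationship of two averages (claim [5]), using the branch
    reference value of claim [6] to decide whether the difference is
    large enough to matter.  avg_a is assumed nonzero."""
    # Branch reference value: difference of the two averages divided
    # by one of them (here, arbitrarily, the first).
    branch_ref = abs(avg_a - avg_b) / avg_a
    if branch_ref < branch_threshold:
        # Averages are close: treat the signals as balanced.
        return "balanced"
    # Otherwise classify by which average dominates.
    return "a_dominant" if avg_a > avg_b else "b_dominant"
```

In a full implementation each pattern would presumably map to a classification of the audio signal, and hence (per claim [7]) to a character made to appear on the game screen.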
PCT/JP2007/062794 2006-06-30 2007-06-26 Music game device Ceased WO2008001766A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006182412A JP2008006218A (en) 2006-06-30 2006-06-30 Music game machine
JP2006-182412 2006-06-30

Publications (1)

Publication Number Publication Date
WO2008001766A1 true WO2008001766A1 (en) 2008-01-03

Family

ID=38845530

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/062794 Ceased WO2008001766A1 (en) 2006-06-30 2007-06-26 Music game device

Country Status (2)

Country Link
JP (1) JP2008006218A (en)
WO (1) WO2008001766A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7036542B2 (en) * 2017-06-15 2022-03-15 株式会社スクウェア・エニックス Video game processor and video game processor

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000035748A (en) * 1998-07-16 2000-02-02 Sony Corp Electronic learning support device and learning support method
JP2001024980A (en) * 1999-07-05 2001-01-26 Sony Corp Signal processing apparatus and method
JP2001029649A (en) * 1999-07-21 2001-02-06 Taito Corp Game machine executing speech visual display by speech recognition
JP2004110422A (en) * 2002-09-18 2004-04-08 Double Digit Inc Music classification device, music classification method, and program

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011092250A1 (en) 2010-01-28 2011-08-04 Total Petrochemicals Research Feluy Method to start-up a process to make expandable vinyl aromatic polymers
CN111603776A (en) * 2020-05-21 2020-09-01 上海艾为电子技术股份有限公司 Method for recognizing gunshot in audio data, method for driving motor and related device
CN111603776B (en) * 2020-05-21 2023-09-05 上海艾为电子技术股份有限公司 Method for identifying gunshot in audio data, motor driving method and related device

Also Published As

Publication number Publication date
JP2008006218A (en) 2008-01-17

Similar Documents

Publication Publication Date Title
DE102012103553A1 (en) AUDIO SYSTEM AND METHOD FOR USING ADAPTIVE INTELLIGENCE TO DISTINCT THE INFORMATION CONTENT OF AUDIOSIGNALS IN CONSUMER AUDIO AND TO CONTROL A SIGNAL PROCESSING FUNCTION
CN108074557A (en) Tone regulating method, device and storage medium
JP3344195B2 (en) Karaoke scoring device
JP6939922B2 (en) Accompaniment control device, accompaniment control method, electronic musical instrument and program
JP6409652B2 (en) Karaoke device, program
CN117711359A (en) Intelligent tone color matching method, device, equipment and readable storage medium
WO2008001766A1 (en) Music game device
JP2014035436A (en) Voice processing device
JP6102076B2 (en) Evaluation device
JP5980931B2 (en) Content reproduction method, content reproduction apparatus, and program
CN105632523A (en) Method and device for regulating sound volume output value of audio data, and terminal
JP5884992B2 (en) Musical performance device and musical performance processing program
CN101479784B (en) Music genre discrimination device and game machine equipped with the same
JP6944357B2 (en) Communication karaoke system
JP4001897B2 (en) Music genre discriminating apparatus and game machine equipped with the same
WO2007105533A1 (en) Game device having music tempo judging function
JP4078375B2 (en) Music game machine
Jensen et al. Hybrid perception
JP5034642B2 (en) Karaoke equipment
JP2016191794A (en) Karaoke device and program
HK1129155B (en) Music genre identification device and game device using the same
JP3587200B2 (en) Karaoke scoring device
TWI328217B (en) Game machine having a function for discriminating music tempo
CN121155114A (en) Method and device for audio vibration of game handle, game handle and storage medium
HK1121088A (en) Music genre judging device and game machine having the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07767600

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 07767600

Country of ref document: EP

Kind code of ref document: A1