WO2006098255A1 - Image display method and device thereof - Google Patents
Image display method and device thereof
- Publication number
- WO2006098255A1 (PCT/JP2006/304853)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- display
- content
- information
- sound
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09F—DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
- G09F27/00—Combined visual and audible advertising or displaying, e.g. for public address
Definitions
- The present invention relates to an image display method and apparatus, and more particularly to a method and apparatus for displaying advertisement information, guidance information, or video information.
- These advertisements are produced by sequentially displaying groups of still images or moving images prepared in advance according to a schedule.
- Because an electronic advertisement unilaterally plays still images or moving images prepared in advance as described above, a person who sees it (hereinafter referred to as a viewer) can only watch the advertisement passively.
- An electronic advertisement system has been proposed in which a viewer watching an advertisement display terminal sends a telephone call or e-mail containing an access key displayed in the advertisement, causing the advertisement display schedule to be re-created (see Patent Document 1). With this method, only viewers who are actually watching the advertisement can change the advertisement display data, which is said to heighten their interest in the advertisement.
- Advertisements may also be distributed from a remote location to a display device on a moving body such as a train.
- For that case, a method has been proposed in which the displayed advertisement is changed according to the traveling position and time of the moving body (see Patent Document 2). With this method, advertisements matched to the passenger demographics at that position and time can be displayed, which is said to make the advertising effective.
- Methods have also been proposed in which the advertisement display terminal is equipped with an imaging device and information obtained by imaging the viewer is reflected in the advertisement display.
- For example, an advertising terminal that displays advertisement data matched to an individual's behavioral tendencies obtained with an imaging device (see Patent Document 3).
- An advertising effectiveness confirmation system that deploys devices such as imaging units to collect viewer reactions, transmits and records the collected data to an analysis device via a communication line, and reproduces the collected data in a predetermined format when analyzing advertising effectiveness (see Patent Document 4).
- An interactive display device has also been proposed that determines the person attributes of an imaged viewer, cuts out a person image, and accumulates the attribute determination results (see Patent Document 5). Here, the cut-out person image is selected from a plurality of person templates prepared in advance.
- Patent Document 1: JP 2002-150130 A
- Patent Document 2: JP 2003-157035 A
- Patent Document 3: JP 2002-288527 A
- Patent Document 4: JP H11-153977 A
- Patent Document 5: JP 2000-105583 A
- The present invention has been made in view of the above problems, and its object is to provide an image display method and apparatus capable of effectively increasing the attention of viewers who watch images on a display device.
- The present inventor found that by using a person's movement as a trigger and creating and displaying image content in real time based on that trigger information, in other words, by performing interactive display in which the initiative over the content display is entrusted to the receiver side rather than the information transmission side,
- the attention of viewers who watch images on a display device can be effectively increased.
- An image display method according to the present invention includes: a step of determining the initial content of a display image composed of a plurality of image elements that can be displayed independently of one another, and generating and displaying the initial display image; a recognition information acquisition step of capturing a person as recognition information; a motion detection step of detecting the movement of the person captured as recognition information; and a content selection step of selecting, in response to the movement, content material corresponding to a specific image element or image content related to the initial display image.
- It further includes an image composition step of combining the image element formed from the selected content material with the remaining image elements of the initial display image, or substituting an image formed from the image content for the initial display image, and further compositing the image of the person captured as recognition information, cut out as-is, or a substitute image of the person, as an image element, and a composite image display step of displaying the composited image.
- The recognition information acquisition step, motion detection step, content selection step, image composition step, and composite image display step are repeated.
- The image display method according to the present invention is characterized in that, in the motion detection step, motion at the position of the specific image element is detected.
- In the content selection step, sound content corresponding to the movement or to a specific image element may be selected; in the composite image display step, the sound formed from that sound content is then output together with, or independently of, the composite image.
- The image display method is also characterized in that the content of the display image is advertisement information, guidance information, or exhibition information provided to the person captured as recognition information.
- Alternatively, the content of the display image is video information that presents the performance of the person captured as recognition information to people who view the display image.
- An image display device according to the present invention is an image display device that determines the initial image content of a display image composed of a plurality of image elements that can be displayed independently of one another, and generates and displays the initial display image, and includes:
- recognition information acquisition means for capturing a person as recognition information;
- motion detection means for detecting the movement of the person captured as recognition information;
- content selection means for selecting, in response to the movement, content material corresponding to a specific image element or image content related to the initial display image;
- image composition means for combining the image element formed from the selected content material with the remaining image elements of the initial display image, or substituting an image formed from the image content for the initial display image, and further compositing the image of the person captured as recognition information, cut out as-is, or a substitute image of the person, as an image element; and
- composite image display means for displaying the composited image.
- The image display device is characterized in that the motion detection means detects motion at the position of the specific image element.
- The content selection means may select sound content corresponding to a specific image element, and the composite image display means then outputs the sound formed from that sound content together with, or independently of, the composite image.
- The image display device is also characterized in that the content of the display image is advertisement information, guidance information, or exhibition information provided to the person captured as recognition information.
- Alternatively, the content of the display image is video information that presents the performance of the person captured as recognition information to people who view the display image.
- With the image display method and apparatus according to the present invention, an image element formed from content material selected, for example, in response to the movement of a person captured as recognition information is composited with the remaining image elements of an initial display image composed of a plurality of image elements that can be displayed independently of one another; further, for example, the image of the person captured as recognition information is cut out, composited, and displayed.
- As a result, the attention with which the person captured as recognition information, or a third party, views the display image can be increased.
- FIG. 1 is a diagram showing the schematic configuration of the image generation apparatus of the present invention.
- FIG. 2 is a diagram for explaining an example in which a changed part is identified from the difference between two consecutive frames.
- FIG. 3 is a diagram for explaining the "direction of movement" and the "accumulated amount of movement" obtained by accumulating frame differences.
- FIG. 4 is a diagram for explaining a method of recognizing the state of a subject by pattern matching against an image database.
- FIG. 5 is a diagram for explaining a method of searching the image patterns in the image database for one that matches the image of the subject and obtaining it as image recognition information.
- FIG. 6 is a diagram for explaining a method of searching the sound patterns in the sound database for one whose features match the received sound and obtaining it as sound recognition information.
- FIG. 7 is a diagram for explaining a method of selecting and generating image information in which image B is selected when the motion position is in the upper part while image A is displayed, and image D is selected when the motion position is in the right part while image B is displayed.
- FIG. 8 is a diagram for explaining a method of generating image information according to the direction of the subject's movement.
- FIG. 9 is a flowchart for explaining the image display method of the present invention.
- FIG. 10 is a diagram for explaining the hierarchical structure of the plurality of contents stored in the image database and of the image contents and other layers included in each content.
- FIG. 11A is a flowchart for explaining the procedure for generating a display image using a person's movement as trigger information.
- FIG. 11B is a flowchart for explaining in detail the procedure of the content determination process in FIG. 11A.
- FIG. 12 is a flowchart for explaining the procedure for outputting sound together with an image.
- FIG. 13 is a diagram showing an image display example for a beer advertisement.
- FIG. 14 is a diagram for explaining the relationship between the position of a person's movement and image switching for a beer advertisement.
- FIG. 15 is a diagram for explaining the relationship between the position of a person's movement and sound for a beer advertisement.
- FIG. 16 is a diagram for explaining a situation in which a person's performance is captured as an image and displayed as video.
- FIG. 17 is a diagram for explaining the relationship between the position of a person's movement and the switching of the screen image when displaying video.
- FIG. 18 is a diagram for explaining the relationship between the position of a person's movement and sound when displaying video.
- The image display apparatus 10 of the present invention includes a receiving unit (recognition information acquisition means) 12 that captures external information, in particular a person, as recognition information, a main control unit 14, and an output unit 16.
- The receiving unit 12 and the output unit 16 may be integrated with the main control unit 14.
- The receiving unit 12 includes, for example, either one or both of an image sensor (imaging means) 12a and a sound sensor (sound receiving means) 12b.
- As the image sensor 12a, an ordinary camera, an infrared camera, an infrared sensor, or a 3D camera (e.g., binocular parallax) can be used, with which external information such as human behavior is received (sensed).
- In particular, using a 3D camera makes it possible to obtain information on the temporal change of images that include distance information.
- As the sound sensor 12b, a sound sensor such as a microphone can be used, with which external information such as a human voice is received.
- The receiving unit 12 may also be a vibration sensor that recognizes striking or touching vibrations, a Doppler sensor that recognizes air flows and signs of presence, a temperature sensor that recognizes temperature by thermography or the like, or a pressure sensor that recognizes how force is applied.
- The output unit 16 includes a display device (composite image display means) 16a and an acoustic device (sound output means) 16b.
- the display device 16a is, for example, a display, a projector, or a television
- the acoustic device 16b is, for example, a speaker or headphones.
- the main control unit 14 includes a mechanism that generates output information according to the information obtained by the receiving unit 12 and outputs the output information to the output unit 16.
- the main control unit 14 is, for example, a computer, and includes a recognition information processing unit 18, an image information generation unit 20, a sound information generation unit 22, and a clock 24.
- the recognition information processing unit 18 includes an image information processing unit (motion detection means) 26 and a sound information processing unit 28.
- the image information processing unit 26 processes the person information acquired by the image sensor 12a and generates a trigger signal used for selecting an output image.
- The sound information processing unit 28 processes the external information received by the sound sensor 12b and generates a trigger signal used for selecting sound information; the sound information obtained on the basis of this trigger signal is associated with the external information. Separately from this, sound content corresponding to a specific image element or the like can also be selected, as described in detail later. Note that sound information and sound content have the same meaning; only the triggers that generate them differ, as described above.
- The image information processing unit 26 outputs a subject image (chroma-key image) for composition of the person imaged by the image sensor 12a (hereinafter sometimes referred to as the subject), either as-is or after image processing such as extracting an outline or a shadow. Alternatively, it outputs a trigger signal for generating a substitute image of the subject, such as a character image. The image information processing unit 26 also performs image processing on the subject imaged by the image sensor 12a, detects the subject's motion, and outputs a trigger signal for selecting image content.
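As an illustration of how the cut-out subject image could be composited, the following is a minimal sketch that uses simple background subtraction to build a silhouette mask and blend it into an advertisement image. The threshold and blending weight are illustrative assumptions, not values from the patent.

```python
import numpy as np

def composite_silhouette(background, frame, ad_image, thresh=30, alpha=0.6):
    """Cut out the moving subject by background subtraction and
    composite the silhouette onto the advertisement image.

    background, frame: grayscale uint8 arrays (empty scene vs. current frame).
    ad_image: color uint8 array of shape (H, W, 3) holding the advertisement.
    """
    # Pixels that differ strongly from the background belong to the subject.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff >= thresh                      # boolean silhouette mask

    out = ad_image.copy()
    # Darken the advertisement where the silhouette falls (shadow effect).
    out[mask] = (out[mask] * (1.0 - alpha)).astype(np.uint8)
    return out
```

A character or other substitute image could be drawn into the masked region instead of the shadow, corresponding to the substitute-image trigger signal described above.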
- The subject's movement is identified by a frame difference, such as the difference between two consecutive frames, the difference among three consecutive frames, or the difference from a background image.
- For the difference between two consecutive frames, as shown in FIG. 2, the "position of movement" is detected from the coordinates of the part that changed between the two frames F1a and F1b, as shown in frame F1c, or the "amount of movement" is detected from the size of the changed part.
- The former, the "position of movement", is obtained, for example, by dividing one frame into a predetermined number of divided frames, comparing the preceding and following frames, and identifying the divided frames in which the luminance of the image changed. That is, from the input image data, the luminance of every pixel of the current frame image is compared with that of the previous frame image (or a background image); a pixel whose difference is equal to or greater than a predetermined value is judged to be in motion. The divided frame to which the centroid (X coordinate, Y coordinate) of the group of moving pixels belongs is then determined, and the "position of movement" is detected according to the vertical and horizontal position of that divided frame within the whole frame.
- The latter, the "amount of movement", is likewise detected, for example, by comparing, from the input image data, the luminance of every pixel of the current frame image with that of the previous frame image (or a background image); pixels whose difference is equal to or greater than a predetermined value are judged to be in motion, and the "amount of movement" is detected according to the number of moving pixels.
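To make the divided-frame computation concrete, the following is a minimal sketch of how the "position of movement" and "amount of movement" described above could be derived with NumPy. The grid size, luminance threshold, and function name are illustrative assumptions.

```python
import numpy as np

def detect_motion(prev_gray, curr_gray, diff_threshold=30, grid=(3, 3)):
    """Derive the "position of movement" and "amount of movement"
    between two grayscale frames, as described above.

    Returns (cell, amount): the (row, col) of the divided frame containing
    the centroid of the moving pixels, and the count of moving pixels.
    """
    # A pixel whose luminance changed by at least the threshold is in motion.
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    moving = diff >= diff_threshold

    amount = int(moving.sum())              # "amount of movement"
    if amount == 0:
        return None, 0

    # Centroid (X, Y) of the group of moving pixels.
    ys, xs = np.nonzero(moving)
    cy, cx = ys.mean(), xs.mean()

    # Divided frame (vertical/horizontal section) the centroid belongs to.
    h, w = curr_gray.shape
    row = min(int(cy / h * grid[0]), grid[0] - 1)
    col = min(int(cx / w * grid[1]), grid[1] - 1)
    return (row, col), amount               # "position of movement"
```

A trigger signal would then be generated, for example, whenever the amount exceeds a chosen threshold, with the cell indicating which image element the movement falls on.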
- Trigger information (a trigger signal) is generated based on at least one of the position, direction, amount, and speed of the subject's movement (the character, or features, of the movement).
- The motion vectors (magnitude and direction) at points on the screen can also be obtained by computing the optical flow using a filtering method or a gradient method.
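As a sketch of the optical-flow alternative mentioned above, OpenCV's dense Farneback implementation (a gradient-based method) yields a per-pixel motion vector field from which magnitude and direction can be read. The parameter values below are illustrative, not prescribed by the patent.

```python
import cv2

def motion_vectors(prev_gray, curr_gray):
    """Return per-pixel motion magnitude and direction (in radians)."""
    # Positional arguments: pyramid scale, levels, window size, iterations,
    # polynomial neighborhood, polynomial sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return magnitude, angle
```

Averaging the vector field over a region gives the "direction of movement" and "speed of movement" used as trigger information.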
- the image information processing unit 26 may be configured to acquire, for example, the state of the subject imaged by the image sensor 12a as image recognition information.
- The state of the subject is identified, for example, by pattern matching against an image database (distinct from the image database 30 described later; not shown in FIG. 1) provided in the image information processing unit 26.
- The image patterns in the image database are searched for one whose features match the image of the subject: if the features of a suit and a business bag match, the state "masculine" is obtained as trigger information;
- if the features of an umbrella match, "it is raining" is obtained as trigger information.
- the sound information processing unit 28 processes, for example, external sound received by the sound sensor 12b, and generates trigger information (trigger signal).
- The unit-time waveform data of the received sound is analyzed by FFT (Fast Fourier Transform) to obtain its frequency distribution shape, which is identified by pattern matching against a sound database (not shown in FIG. 1).
- For example, as shown on the left side of FIG. 6, the sound patterns of high heels F3a, leather shoes F3b, sneakers F3c, bag rollers F3d, a wheelchair F3e, rain F3f, and so on
- are searched for a sound (sound pattern) whose features match the received sound; if the component distribution shape of the input sound waveform is judged to match the component distribution shape of the high-heel sound waveform, "feminine" is obtained as trigger information.
- Alternatively, the voice tone is analyzed by FFT: when the low-frequency components dominate, the voice is judged to be low, for example a male voice, and when the high-frequency components dominate, the voice is judged to be high, for example a female voice; this can likewise be obtained as trigger information.
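One way the FFT-based matching described above could be realized is sketched below: the spectrum of a unit-time window is reduced to a coarse "shape" and compared against stored reference shapes. The band count, window function, and similarity measure are illustrative assumptions, and the reference database would be built from recordings such as those listed for FIG. 6.

```python
import numpy as np

def spectrum_shape(samples, n_bands=32):
    """Frequency-distribution shape of one unit-time sound window."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    # Pool the spectrum into coarse bands so shapes, not exact bin
    # values, are compared.
    bands = np.array_split(spectrum, n_bands)
    shape = np.array([band.mean() for band in bands])
    return shape / (shape.sum() + 1e-9)

def match_sound(samples, database):
    """database: dict mapping labels ('high heels', 'rain', ...) to
    reference shapes. Returns the best-matching label as trigger info."""
    query = spectrum_shape(samples)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    return max(database, key=lambda label: cosine(query, database[label]))
```

The male/female voice distinction mentioned above would follow the same pattern, comparing the energy in the low-frequency bands against that in the high-frequency bands.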
- the image information generation unit (content selection unit and image composition processing unit) 20 generates image information based on the trigger information generated by the image information processing unit 26.
- the image information generation unit 20 includes an image database 30 and an image data determination unit 32.
- The image database 30 stores a group of image information accumulated for displaying images, for example, as advertisements, guidance, or exhibitions.
- As guidance, for example, when the image of a tourist's finger or body is positioned at a specific location on a tourist-information map screen installed at an airport, image information such as the landmarks and local specialties of that location can be provided. As an exhibition, for example, image content and sound content for various insects can be embedded in an exhibition panel explaining insect ecology installed in a museum: when a visitor's hand image overlaps a specific position on the screen, images explaining the ecology of the insect at that location are shown, or the insect's call is provided. A specific advertisement example is described later.
- As shown in FIG. 10, the image database 30 contains, as the content layer PF1, a plurality of contents for advertising products (in FIG. 10, only the content 40a, a beer advertisement, the content 40b, a cosmetics advertisement, and the content 40c, another advertisement, are shown) and a plurality of contents for providing video (only the contents 48a and 48b are shown in FIG. 10).
- Each content includes, as the image content layer PF2, a plurality of image contents that can be displayed independently of one another (in FIG. 10, only the image contents 42a to 42c of the content 40a and 50a to 50c of the content 48a are shown). Each image content in turn includes a plurality of image elements as the image element layer PF3 (in FIG. 10, only the image elements 44a to 44c and 52a to 52d are shown).
- Each image element is associated with a content material as the content material layer PF4 (only the content materials 46a to 46c and 54a to 54c are shown in FIG. 10).
- the images included in each of these layers PF1 to PF4 may be still images or moving images.
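A minimal sketch of how the four-layer hierarchy of FIG. 10 (PF1 contents, PF2 image contents, PF3 image elements, PF4 content materials) could be represented in code follows. The class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContentMaterial:                 # layer PF4
    media_path: str                    # still image, movie, or sound file
    is_sound: bool = False

@dataclass
class ImageElement:                    # layer PF3
    name: str
    region: tuple                      # (x, y, w, h) occupied on screen
    material: Optional[ContentMaterial] = None   # e.g. 46a for element 44a

@dataclass
class ImageContent:                    # layer PF2
    name: str
    elements: list = field(default_factory=list)

@dataclass
class Content:                         # layer PF1
    name: str
    image_contents: list = field(default_factory=list)

# Example mirroring content 40a with initial image content 42a:
beer_ad = Content("beer advertisement (40a)", [
    ImageContent("initial screen (42a)", [
        ImageElement("company logo (44a)", (0, 0, 200, 80),
                     ContentMaterial("company_intro.mpg")),
        ImageElement("beer can (44b)", (220, 40, 120, 160),
                     ContentMaterial("product_intro.mpg")),
        ImageElement("beer mug (44c)", (360, 60, 100, 120),
                     ContentMaterial("pouring.wav", is_sound=True)),
    ]),
])
```

Detecting movement inside an element's region then amounts to testing whether the "position of movement" falls within that region, after which the associated material is selected.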
- As the initial image, for example, the display image (image content) 42a composed of a plurality of image elements 44a to 44c, such as a company logo and a beer can, is selected.
- Using a person's movement as trigger information, for example, the content material 46a, a company introduction image associated with the image element 44a, is selected in place of the character image element 44a, which is a company logo.
- The content material 46c is held in the sound database 34 described later, and is the sound content material corresponding to the image element 44c.
- The image element 52d is associated with the image content 50b, which belongs to the content 48a and differs from the image content 50a serving as the initial image;
- with a person's movement at the image element 52d as a trigger, the image content 50b is selected, switching from the image content 50a.
- the content material 54c is a sound content material.
- The image data determination unit 32 keeps the image information (image, initial image content) of the image group displayed as it is, or switches the display according to the display schedule.
- When the accumulated amount of motion is equal to or greater than a threshold, image information is selected from the images F4a, F4b, and so on that are associated with trigger information set according to the position of the motion described above; for example, as shown in FIG. 7, image F4b is selected if the motion position is in the upper part while image F4a is displayed, and image F4d is selected if the motion position is in the right part while image F4b is displayed.
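The switching behavior of FIG. 7 amounts to a small state machine keyed on the current image and the motion position. A sketch follows; the region names and threshold are illustrative assumptions that mirror the FIG. 7 example.

```python
# Transition table: (current image, motion region) -> next image.
TRANSITIONS = {
    ("F4a", "top"): "F4b",
    ("F4b", "right"): "F4d",
}

def next_image(current, region, accumulated_motion, threshold=500):
    """Switch images only when the accumulated amount of motion
    reaches the threshold, as described above; otherwise hold."""
    if accumulated_motion < threshold:
        return current
    return TRANSITIONS.get((current, region), current)
```

Because unmatched combinations fall through to the current image, trigger information that has no associated image simply leaves the display unchanged.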
- The image information group also includes, for example, a plurality of pieces of image information associated with the direction of movement. As shown in FIG. 8, when the direction of movement seen from the system installation location F5c is direction (1), the image data determination unit 32 generates image information related to the advertisement F5a located in that direction; when the direction is (2), it generates image information related to the advertisement F5b located in that direction.
- The image data determination unit 32 also responds to new trigger information set according to the position of the motion by selecting the content material associated with that trigger information; for example, in place of the image element 44a of the image content 42a, the content material 46a is selected to generate image information.
- Further, the image data determination unit 32 selects and generates, for example, advertising image information for men corresponding to the image recognition information "masculine", or, when the subject's appearance includes a specific brand mark, selects and generates advertising image information for that brand.
- The image information generation unit 20 may also be configured to generate image information based on the trigger information obtained by the sound information processing unit 28.
- In that case, the image database 30 stores image information associated with the trigger information obtained by the sound information processing unit 28 (not shown), and the image data determination unit 32
- selects and generates the image information corresponding to that trigger information.
- The image information generation unit 20 may also be configured to use the image received by the image sensor 12a as the generated image information and superimpose it on the current display image.
- For example, the captured image is faintly superimposed on the display image so that the viewer appears to be entering the screen, or the viewer's shadow is reflected in the image as with a chroma-key effect,
- or the image information is generated so that the viewer's image appears only where there is movement.
- The image information generation unit 20 may also be configured to select and generate a character image according to the subject's trigger information.
- The image information generation unit 20 may further be configured to add and display words (text) such as "Hot, isn't it?" when the temperature is high, or "Recommended for someone as beautiful as you" when the subject is recognized as a woman.
- The sound information generation unit (sound content selection means) 22 generates sound information based on the trigger information obtained by the sound information processing unit 28, or generates the sound content selected in response to the subject's movement, a specific image element, or the like.
- the sound information generation unit 22 includes a sound database 34 and a sound data determination unit 36.
- The sound database 34 stores sound information corresponding to the trigger information obtained by the sound information processing unit 28, and the sound data determination unit 36 selects and generates the sound information corresponding to that trigger information.
- For example, when the trigger information indicates an adult male,
- image information in which beer is being poured is generated, and the sound of beer pouring or foaming is selected and generated as the sound information;
- when the trigger information indicates, for example, a woman, sound information of a greeting such as "My lady" is selected and generated.
- the sound picked up by the microphone may be generated as it is as sound information.
- The sound database 34 also holds sound material data associated with the respective content materials 46a to 46c of the image information generation unit 20, or associated with the image elements 44a to 44c (not shown). The sound data determination unit 36 then selects and generates the sound material corresponding to the image trigger signal from the image information processing unit 26, in other words, corresponding to the image element.
- Image information corresponding to the recognition information is then generated (S16 in FIG. 9), and the image is displayed according to the image information signal (S18 in FIG. 9).
- The image display method of the present invention will now be described in more detail with reference to FIGS. 11A and 11B, taking as an example the case where a display image is generated using a person's movement as trigger information.
- First, the content is determined (S20 in FIG. 11A).
- The content may be determined in advance, but it is more preferable to determine it by the following procedure. As shown in the detailed steps of FIG. 11B, an image of a person is acquired from the camera (S48 in FIG. 11B), and it is determined, in the image data determination unit 32 via the image information processing unit 26, whether this is the first image processing pass (S50 in FIG. 11B). If it is the first pass, the acquired image is held in a temporary image holding area (S52 in FIG. 11B), and an image of the person is acquired from the camera again (S48 in FIG. 11B).
- If it is not the first pass, the person's state and feature information are used as a trigger, and the content is selected (determination process) in the image data determination unit 32 via the image information processing unit 26 (S56 in FIG. 11B).
- The content is, for example, a beer advertisement (advertisement information).
- Alternatively, sound information picked up by a microphone may be used as the trigger, and the content may be determined by the image data determination unit 32 via the sound information processing unit 28 (S56 in FIG. 11B).
- An image of the person from the camera is then acquired again (S48 in FIG. 11B).
- In the case of the first image processing pass, the acquired image is held in the temporary image holding area (S26 in FIG. 11A), an initial image composed of a plurality of image elements, such as the characters and figures of a company logo and a beer can that make up the content,
- is generated and displayed (S28 in FIG. 11A), and an image of the person from the camera is acquired again (S22 in FIG. 11A).
- If it is not the first image processing pass, in other words once the image processing has been repeated, the person's movement is detected by comparing the temporarily held image with the current camera image (S30 in FIG. 11A), and the image of the moving part (the moving person) is extracted (S32 in FIG. 11A).
- Next, it is determined whether there is a designated movement at a specific position in the image. Here, the designated movement is, for example, an amount of movement equal to or greater than a threshold, movement in a specific direction, repetition of a movement, movement between specific coordinates in order, and the like.
- When there is such a movement, the content material corresponding to the specific image element at the position of the movement is selected to generate an image (S36 in FIG. 11A).
- The generated image replaces the displayed image, or replaces the image of the specific image element and is composited with the other image elements, and is displayed (S38 in FIG. 11A).
- By repeating the above, the image on the display device, or the content of specific image elements, changes one after another according to the movement of the person captured by the camera.
- The eyes of the person captured by the camera and of third parties watching the display device are thus drawn to the display image of the image display device.
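Putting the steps of FIGS. 11A and 12 together, the overall loop could look like the following sketch. The camera and display calls use OpenCV; detect_motion is the earlier sketch, while render_initial_image, is_designated, select_material, composite, and play_sound_for are hypothetical helpers standing in for steps S28, S34, S36, S38, and S40 to S46.

```python
import cv2

def run_display_loop(content, sound_database):
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()                        # acquire person image (S22)
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    screen = render_initial_image(content)        # initial image elements (S28)

    while True:
        ok, frame = cam.read()                    # acquire again (S22)
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Compare the held image with the current image to detect movement (S30).
        cell, amount = detect_motion(prev, gray)
        if cell is not None and is_designated(cell, amount):      # (S34)
            material = select_material(content, cell)             # (S36)
            screen = composite(screen, material, gray)            # (S38)
            play_sound_for(cell, sound_database)                  # (S40-S46)

        cv2.imshow("display", screen)
        if cv2.waitKey(30) == 27:                 # ESC ends the loop
            break
        prev = gray                               # hold current image (S26)

    cam.release()
```

Each pass through the loop repeats the recognition, detection, selection, composition, and display steps, matching the repetition required by the method.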
- Next, the case where, in the image display method of the present invention, sound is output together with an image or independently of the image is described with reference to FIG. 12. After the image of the moving part (the moving person) is extracted (S32 in FIG. 11A), it is determined whether there is a designated movement at a specific position in the image (S40 in FIG. 12). At this point, it may instead be determined whether there is a designated movement at an arbitrary position in the image.
- The designated movement is, as described above, for example an amount of movement equal to or greater than a threshold, movement in a specific direction, repetition of a movement, movement between specific coordinates in order, and the like.
- When there is a designated movement, the specified sound material is selected from the material collection and a sound is generated (S44 in FIG. 12), and the generated sound is output, for example, timed to match the image (S46 in FIG. 12).
- When there is no designated movement, the current sound state is maintained, whether a sound is currently playing or not (S46 in FIG. 12).
- By repeating the above, the sound changes one after another according to the movement of the person captured by the camera, and new sounds are created by sounds overlapping,
- so that the eyes of the person captured by the camera and of third parties watching are drawn to the display image of the image display device.
- Next, a specific example is described for a beer advertisement. An image as shown in FIG. 13 is displayed on the display device, and passers-by view this advertisement.
- A camera (indicated by arrow A in FIG. 13) is attached to the display device; a person passing in front of the display device is captured as an image, and, for example, the person's shadow is composited into the advertisement image.
- The image on the display device is composed of a plurality of image elements that can be displayed independently: the company logo "OX Beer" (indicated by arrow C1 in FIG. 13), a product display (indicated by arrow C2 in FIG. 13),
- and a woman with a beer mug (indicated by arrow C3 in FIG. 13).
- As shown in FIG. 14, the image element of the logo "OX Beer" C1 has a company introduction image as its content material; when a person's movement is located at a predetermined part of the logo image element (indicated by arrow S1 in FIG. 14) while screen F1 is displayed,
- the screen changes to screen F2, and the company introduction image is displayed on the entire screen or at a predetermined position.
- When, on screen F1, a person's movement is located at a predetermined part of the product display C2 (indicated by arrow S2 in FIG. 14), the screen changes to screen F3, and an image introducing one product is displayed on the entire screen or at a predetermined position. When, while screen F3 is displayed, a person's movement is located at the predetermined part S1 of the image element on screen F3, the previous image F2 is displayed.
- When, on screen F1, a person's movement is located at a predetermined part of the woman with a beer mug C3 (indicated by arrow S3 in FIG. 14), the screen changes to screen F4, and a message in the producer's voice is played.
- In addition, screens such as F5 and F6, which introduce products different from the one on screen F3, can be switched in and displayed one after another.
- Next, the relationship between the position of a person's movement and sound is described with reference to FIG. 15, which shows a screen similar to FIG. 14.
- As shown in FIG. 15, the image element of the logo "OX Beer" C1 is associated with the company's sound logo. Of the two beer mugs, the image element of the left beer mug (indicated by arrow C4 in FIG. 15) is associated with the sound of beer being poured into a mug, and the image element of the right beer mug (indicated by arrow C5 in FIG. 15)
- is associated with the sound of beer mugs meeting in a toast; the image element of the woman C3 is associated with a toasting voice.
- In addition, as BGM music playing in the background of the image, a rhythm loop A, a rhythm loop B,
- and a melody loop are provided corresponding to the areas indicated by arrows X1 to X3, respectively.
- Such an image display method is effective not only for advertisement information but also for displaying various kinds of guidance information, and can be used widely without being limited to these.
- Next, an example of video display is described. As shown in FIG. 16, an actor performs on a stage toward which a camera (indicated by arrow B in FIG. 16) is directed,
- and a screen (display; indicated by arrow C in FIG. 16) is provided behind the stage.
- The screen C displays a plurality of image elements, such as a person looking in through the window of a room and a telephone,
- and the movement of the person captured by camera B is composited into the display as an outline or a shadow image.
- The theater audience in front of the stage watches the performer on the stage together with the screen.
- For example, as shown in FIG. 17, the screen image shows the scenery outside the window.
- Screen F14 then switches to a screen F15 that displays an image of a place different from the room, such as a store,
- and further switches to a screen F16 that displays a store different from that on screen F15.
- As shown in FIG. 18, the image element of the person standing outside the window is associated with that person's voice, and the image element of the telephone (indicated by arrow X6 in FIG. 18) is associated with the sound of the telephone; a corresponding sound is likewise associated with the image element of the left curtain (indicated by arrow X5 in FIG. 18).
- Narration sound information is provided corresponding to a predetermined area at the upper right of the screen (indicated by arrow X7 in FIG. 18), and narration is played when a person's movement occurs in this area.
- As the recognition means, a vibration sensor that recognizes striking or touching vibrations, a Doppler sensor that recognizes air flows and signs of presence, a temperature sensor that recognizes temperature by thermography or the like,
- or a pressure sensor that recognizes how force is applied can be appropriately selected and used.
- The external information to be recognized can include what a person is holding or handling, the number of people, group composition, biological information (heartbeat, sweating, body temperature, etc.), the movements and appearance of pets (animals) and plants, the movements and appearance of cars and machines, robots, weather, climate, temperature, wind, scenery, time, the content of speech, the tone of a voice (bright, dark, fun, nervous, happy, etc.), and the loudness or strength of a sound.
- It is more preferable that artificial intelligence and a conversation function be built into the main control unit so that effective calls can be output according to the recognition information and responses can be made to words picked up by the microphone, enabling dialogue.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Controls And Circuits For Display Device (AREA)
Description
Specification
Image display method and apparatus
Technical Field
[0001] The present invention relates to an image display method and apparatus, and more particularly to a method and apparatus for displaying advertisement information, guidance information, or video information.
Background Art
[0002] Advertisements using small-screen displays are seen everywhere in vehicles, stores, and the like. In recent years, advertisements using large-screen displays have also begun to appear on streets and elsewhere.
These advertisements (hereinafter sometimes referred to as electronic advertisements) are produced by sequentially displaying groups of still images or moving images prepared in advance according to a schedule.
This makes it possible to present up-to-date advertisements in a timely manner and without the laborious effort that paper advertisements require.
[0003] However, because an electronic advertisement unilaterally plays still images or moving images prepared in advance as described above, a person who sees it (hereinafter referred to as a viewer) can only watch it passively.
For this reason, even if a visually striking, memorable advertisement is played, there is a limit to how much viewer attention and advertising effect can be obtained.
[0004] To improve this point, methods for attracting viewer attention and obtaining a high advertising effect have been proposed.
[0005] For example, an electronic advertisement system has been proposed in which a viewer watching an advertisement display terminal sends a telephone call or e-mail containing an access key displayed in the advertisement, causing the advertisement display schedule to be re-created (see Patent Document 1). With this method, only viewers who are actually watching the advertisement can change the advertisement display data, which is said to heighten their interest in the advertisement.
[0006] As another example, when advertisements are distributed from a remote location to a display device on a moving body such as a train, a method has been proposed in which the displayed advertisement is changed according to the traveling position and time of the moving body (see Patent Document 2). With this method, advertisements matched to the passenger demographics at that position and time can be displayed, which is said to make the advertising effective.
[0007] Furthermore, methods have been proposed in which the advertisement display terminal is equipped with an imaging device and information obtained by imaging the viewer is reflected in the advertisement display.
For example, an advertising terminal that displays advertisement data matched to an individual's behavioral tendencies obtained with an imaging device (see Patent Document 3), and an advertising effectiveness confirmation system that deploys devices such as imaging units to collect viewer reactions at the locations of a plurality of advertisement presentation devices, transmits and records the collected data to an analysis device via a communication line, and reproduces the collected data in a predetermined format when the advertising effectiveness is analyzed (see Patent Document 4), have been proposed. An interactive display device has also been proposed that determines the person attributes of an imaged viewer, cuts out a person image, and accumulates the attribute determination results (see Patent Document 5). Here, the cut-out person image is selected from a plurality of person templates prepared in advance.
Patent Document 1: JP 2002-150130 A
Patent Document 2: JP 2003-157035 A
Patent Document 3: JP 2002-288527 A
Patent Document 4: JP H11-153977 A
Patent Document 5: JP 2000-105583 A
Disclosure of the Invention
Problems to Be Solved by the Invention
[0008] However, none of the above conventional methods is considered sufficient to attract viewer attention and obtain a high advertising effect. In addition, each piece of content to be displayed must be produced with considerable time and effort, which makes content production burdensome.
[0009] The present invention has been made in view of the above problems, and its object is to provide an image display method and apparatus capable of effectively increasing the attention of viewers who watch images on a display device.
Means for Solving the Problems
[0010] The present inventor found that by using a person's movement as a trigger and creating and displaying image content in real time based on that trigger information, in other words, by performing interactive display in which the initiative over the content display is entrusted to the receiver side rather than the information transmission side, the attention of viewers who watch images on a display device can be effectively increased, and thereby arrived at the present invention.
[0011] An image display method according to the present invention comprises:
a step of determining the initial content of a display image composed of a plurality of image elements that can be displayed independently of one another, and generating and displaying the initial display image;
a recognition information acquisition step of capturing a person as recognition information;
a motion detection step of detecting the movement of the person captured as recognition information;
a content selection step of selecting, in response to the movement, content material corresponding to a specific image element, or selecting image content related to the initial display image;
an image composition step of combining the image element formed from the selected content material with the remaining image elements of the initial display image, or substituting an image formed from the image content for the initial display image, and further compositing the image of the person captured as recognition information, cut out as-is, or a substitute image of the person, as an image element;
and a composite image display step of displaying the composited image;
wherein the recognition information acquisition step, motion detection step, content selection step, image composition step, and composite image display step are repeated.
[0012] The image display method according to the present invention is further characterized in that, in the motion detection step, motion at the position of the specific image element is detected.
[0013] The image display method according to the present invention is further characterized in that, in the content selection step, sound content corresponding to the movement or to a specific image element is selected, and in the composite image display step, the sound formed from that sound content is output together with, or independently of, the composite image.
[0014] The image display method according to the present invention is further characterized in that the content of the display image is advertisement information, guidance information, or exhibition information provided to the person captured as recognition information.
[0015] The image display method according to the present invention is further characterized in that the content of the display image is video information that presents the performance of the person captured as recognition information to people who view the display image.
[0016] An image display apparatus according to the present invention is an image display apparatus that determines the initial image content of a display image composed of a plurality of image elements that can be displayed independently of one another, and generates and displays the initial display image, the apparatus comprising:
recognition information acquisition means for capturing a person as recognition information;
motion detection means for detecting the movement of the person captured as recognition information;
content selection means for selecting, in response to the movement, content material corresponding to a specific image element, or selecting image content related to the initial display image;
image composition means for combining the image element formed from the selected content material with the remaining image elements of the initial display image, or substituting an image formed from the image content for the initial display image, and further compositing the image of the person captured as recognition information, cut out as-is, or a substitute image of the person, as an image element;
and composite image display means for displaying the composited image.
[0017] The image display apparatus according to the present invention is further characterized in that the motion detection means detects motion at the position of the specific image element.
[0018] The image display apparatus according to the present invention is further characterized in that the content selection means selects sound content corresponding to a specific image element, and the composite image display means outputs the sound formed from that sound content together with, or independently of, the composite image.
[0019] The image display apparatus according to the present invention is further characterized in that the content of the display image is advertisement information, guidance information, or exhibition information provided to the person captured as recognition information.
[0020] The image display apparatus according to the present invention is further characterized in that the content of the display image is video information that presents the performance of the person captured as recognition information to people who view the display image.
Effects of the Invention
[0021] With the image display method and apparatus according to the present invention, an image element formed from content material selected, for example, in response to the movement of a person captured as recognition information is composited with the remaining image elements of an initial display image composed of a plurality of image elements that can be displayed independently of one another; further, for example, the image of the person captured as recognition information is cut out, composited, and displayed. The attention with which the person captured as recognition information, or a third party, views the display image can therefore be increased.
Moreover, because the switching of content according to the person's movement is repeated, the movement is reflected in the content in real time and the image keeps changing, so attention can be held for a longer time.
Brief Description of the Drawings
[0022] [FIG. 1] is a diagram showing the schematic configuration of the image generation apparatus of the present invention.
[FIG. 2] is a diagram for explaining an example in which a changed part is identified from the difference between two consecutive frames.
[FIG. 3] is a diagram for explaining the "direction of movement" and the "accumulated amount of movement" obtained by accumulating frame differences.
[FIG. 4] is a diagram for explaining a method of recognizing the state of a subject by pattern matching against an image database.
[FIG. 5] is a diagram for explaining a method of searching the image patterns in the image database for one that matches the image of the subject and obtaining it as image recognition information.
[FIG. 6] is a diagram for explaining a method of searching the sound patterns in the sound database for one whose features match the received sound and obtaining it as sound recognition information.
[FIG. 7] is a diagram for explaining a method of selecting and generating image information in which image B is selected when the motion position is in the upper part while image A is displayed, and image D is selected when the motion position is in the right part while image B is displayed.
[FIG. 8] is a diagram for explaining a method of generating image information according to the direction of the subject's movement.
[FIG. 9] is a flowchart for explaining the image display method of the present invention.
[FIG. 10] is a diagram for explaining the hierarchical structure of the plurality of contents stored in the image database and of the image contents and other layers included in each content.
[FIG. 11A] is a flowchart for explaining the procedure for generating a display image using a person's movement as trigger information.
[FIG. 11B] is a flowchart for explaining in detail the procedure of the content determination process in FIG. 11A.
[FIG. 12] is a flowchart for explaining the procedure for outputting sound together with an image.
[FIG. 13] is a diagram showing an image display example for a beer advertisement.
[FIG. 14] is a diagram for explaining the relationship between the position of a person's movement and image switching for a beer advertisement.
[FIG. 15] is a diagram for explaining the relationship between the position of a person's movement and sound for a beer advertisement.
[FIG. 16] is a diagram for explaining a situation in which a person's performance is captured as an image and displayed as video.
[FIG. 17] is a diagram for explaining the relationship between the position of a person's movement and the switching of the screen image when displaying video.
[FIG. 18] is a diagram for explaining the relationship between the position of a person's movement and sound when displaying video.
Explanation of Reference Numerals
10 Image display device
12 Receiving unit
12a Image sensor
12b Sound sensor
14 Main control unit
16 Output unit
16a Display device
16b Acoustic device
18 Recognition information processing unit
20 Image information generation unit
22 Sound information generation unit
24 Clock
26 Image information processing unit
28 Sound information processing unit
30 Image database
32 Image data determination unit
34 Sound database
36 Sound data determination unit
40a to 40c, 48a, 48b Content
42a to 42c, 50a to 50c Image content
44a to 44c, 52a to 52d Image element
46a to 46c, 54a to 54c Content material
BEST MODE FOR CARRYING OUT THE INVENTION
[0024] Embodiments of the image display method and apparatus according to the present invention are described below with reference to the drawings.

[0025] First, an image display apparatus according to the present invention is described with reference to Fig. 1.
The image display apparatus 10 according to the present invention includes a receiving unit (recognition information acquisition means) 12 that captures external information, particularly people, as recognition information; a main control unit 14; and an output unit 16. The receiving unit 12 and the output unit 16 may be integrated with the main control unit 14.
[0026] The receiving unit 12 is, for example, one or both of an image sensor (imaging means) 12a and a sound sensor (sound receiving means) 12b.
As the image sensor 12a, an ordinary camera, an infrared camera, an infrared sensor, or a 3D camera based on binocular parallax or the like can be used; with it, external information such as a person's behavior is received (sensed). In particular, using a 3D camera makes it possible to obtain information on the temporal change of an image that carries distance information.
As the sound sensor 12b, a sound sensor such as a microphone can be used; with it, external information such as a person's voice is received.
The receiving unit 12 may also be a vibration sensor that recognizes striking or touching vibrations, a Doppler sensor that recognizes air flow, presence, and the like, a temperature sensor that recognizes temperature by thermography or the like, or a pressure sensor that recognizes how force is being applied.
[0027] The output unit 16 comprises a display device (composite image display means) 16a and an acoustic device (sound output means) 16b. The display device 16a is, for example, a display, a projector, or a television set; the acoustic device 16b is, for example, a speaker or headphones.

[0028] The main control unit 14 includes a mechanism that generates output information according to the information obtained by the receiving unit 12 and outputs it to the output unit 16.
The main control unit 14 is, for example, a computer, and has a recognition information processing unit 18, an image information generation unit 20, a sound information generation unit 22, and a clock 24.
[0029] The recognition information processing unit 18 has an image information processing unit (motion detection means) 26 and a sound information processing unit 28.
The image information processing unit 26 processes the person information acquired by the image sensor 12a and generates a trigger signal used to select an output image. The sound information processing unit 28 processes the external information received by the sound sensor 12b and generates a trigger signal used to select sound information. The sound information obtained from this trigger signal is thus associated with the external information. Separately from this, sound content can also be selected in correspondence with a specific image element or the like; this point is described in detail later. Note that "sound information" and "sound content" mean the same thing, but because the triggers that generate them differ as described above, the two terms are used distinctly for convenience.
[0030] The image information processing unit 26 outputs the person imaged by the image sensor 12a (hereinafter sometimes called the subject) as-is in order to generate a subject image for compositing (a chroma-key image), or applies contour-line or silhouette image processing before outputting. Alternatively, it outputs a trigger signal for generating a substitute image of the subject, such as a character image. The image information processing unit 26 also processes the image of the subject captured by the image sensor 12a to detect the subject's movement, and outputs a trigger signal for selecting image content.
[0031] The subject's movement is identified from frame differences (the difference between two consecutive frames, among three consecutive frames, or against the background), which locate the parts that changed. For example, when taking the difference between two consecutive frames, as shown in Fig. 2, the "position of movement" is detected from the coordinates of the part that changed between frames F1a and F1b, as shown in frame F1c, or the "amount of movement" is detected from the size of the changed part.
The "position of movement" is found by, for example, dividing each frame into a predetermined number of divided frames, comparing the frames before and after, and identifying the position of the divided frame whose image brightness changed. That is, from the input image data, the brightness of every pixel of the current frame's image is compared with that of the immediately preceding frame's image (or a background image); if the difference is at or above a predetermined value, the pixel is judged to be a moving pixel, and it is then determined to which vertical and horizontal section (divided frame) the centroid (X coordinate, Y coordinate) of the group of moving pixels belongs. The "position of movement" is detected according to the position of that section within the frame.
The "amount of movement" is found similarly: from the input image data, the brightness of every pixel of the current frame's image is compared with that of the immediately preceding frame's image (or a background image), pixels whose difference is at or above the predetermined value are judged to be moving, and the "amount of movement" is detected according to the number of moving pixels.
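Reduced to code, the two measures above fit in a few lines. The following Python sketch uses OpenCV and NumPy; the 3x3 grid and the brightness threshold of 30 are illustrative assumptions, not values taken from this description.

```python
import cv2
import numpy as np

GRID_ROWS, GRID_COLS = 3, 3   # assumed division of the frame
PIXEL_THRESHOLD = 30          # assumed brightness-difference threshold

def detect_motion(prev_frame, curr_frame):
    """Return (grid_cell, amount) or (None, 0) when nothing moved."""
    prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr, prev)
    moving = diff > PIXEL_THRESHOLD          # pixels judged "in motion"
    amount = int(np.count_nonzero(moving))   # the "amount of movement"
    if amount == 0:
        return None, 0
    ys, xs = np.nonzero(moving)
    cy, cx = ys.mean(), xs.mean()            # centroid of the moving pixels
    h, w = moving.shape
    cell = (int(cy * GRID_ROWS / h), int(cx * GRID_COLS / w))
    return cell, amount                      # the "position of movement"
```

With a 3D camera, the same function could be applied after masking out pixels beyond a depth cut-off, which would match the 2 m restriction mentioned below.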
Here, as the subject's movement, it is also possible to use the "direction of movement" shown in frame F2d and the "cumulative amount of movement" shown in frame F2e, which are obtained by accumulating the frame differences of the three frames F2a to F2c, as shown for example in Fig. 3.
Furthermore, by adding a time element, trigger information (a trigger signal) is generated from the subject's movement in terms of at least one of the position of movement, the direction of movement, the amount of movement, and the speed of movement (the quality, or characteristics, of the movement).
Such motion vectors (magnitude and direction) at points on the screen can also be obtained by computing the optical flow using a filtering method or a gradient method.
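A hedged sketch of the optical-flow route, using the Farneback gradient method available in OpenCV; `prev` and `curr` are assumed to be consecutive grayscale frames (as in the previous sketch), and the parameter values are common defaults rather than anything prescribed here.

```python
import cv2

# Dense motion vectors at every pixel; magnitude and direction follow
# directly from the x/y components of the flow field.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
magnitude, direction = cv2.cartToPolar(flow[..., 0], flow[..., 1])
```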
For these subject movements, using a 3D camera makes it possible to recognize, for example, only the movement of figures within 2 m in front of the camera, excluding distant backgrounds and crowds. Similarly, using an infrared camera enables recognition only within the reach of the infrared light, or recognition in dark places such as at night. [0033] The image information processing unit 26 may also be configured to acquire, for example, the appearance of the subject imaged by the image sensor 12a as image recognition information.
The subject's appearance is identified by, for example, pattern matching against an image database provided in the image information processing unit 26 (not shown in Fig. 1, and separate from the image database 30 described later).
That is, for example, as shown in Fig. 4, the image patterns in that image database are searched for one whose features match the subject's image: if the features of a suit and a business bag match, the appearance "masculine" is obtained as trigger information; if the features of an umbrella match, the appearance "it is raining" is obtained as trigger information.
Alternatively, for example, as shown in Fig. 5, the image patterns in the image database are searched for one whose positional relationship matches the subject's image: if the balance of the eyes, nose, and mouth matches, the appearance of a specific face is obtained as trigger information.
At this time, the distinction between adults and children, or the number of people, may also be obtained as trigger information from the positions of the faces.
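One plausible realization of this feature matching is normalized template matching, sketched below; the template dictionary and the acceptance score of 0.8 are assumptions, since the description does not fix a particular matching algorithm.

```python
import cv2

def match_appearance(frame_gray, templates, min_score=0.8):
    """templates: dict mapping a label ('masculine', 'raining', ...) to a
    grayscale template image; min_score is an assumed acceptance level."""
    best_label, best_score = None, min_score
    for label, tmpl in templates.items():
        result = cv2.matchTemplate(frame_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        score = float(result.max())      # best match location's similarity
        if score > best_score:
            best_label, best_score = label, score
    return best_label   # e.g. 'masculine' becomes the trigger information
```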
[0034] The sound information processing unit 28 processes, for example, the external sound received by the sound sensor 12b and generates trigger information (a trigger signal).
For example, the waveform data of the received sound over a unit time is analyzed by FFT (fast Fourier transform) to obtain the shape of its frequency distribution, which is identified by pattern matching against a sound database provided in the sound information processing unit 28 (not shown in Fig. 1).
That is, for example, as shown in Fig. 6, the sound patterns in the sound database on the right side of Fig. 6 (high heels F3a, leather shoes F3b, sneakers F3c, a rolling wheeled bag F3d, a wheelchair F3e, rain F3f, and so on) are searched for one whose features match the received sound (sound pattern) F3g on the left side of Fig. 6. If the component distribution shape of the input sound's waveform is judged to match the component distribution shape of the high-heel sound waveform F3a, the appearance "feminine" is obtained as trigger information.
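The FFT matching might be sketched as follows; treating the normalized magnitude spectrum as the "frequency distribution shape" and using correlation as the similarity measure are both assumptions, as is the premise that reference spectra of equal length are stored per label.

```python
import numpy as np

def classify_sound(window, sound_db):
    """window: 1-D array of samples over the unit time;
    sound_db: dict label -> reference magnitude spectrum (same length)."""
    spectrum = np.abs(np.fft.rfft(window))
    total = spectrum.sum()
    if total > 0:
        spectrum = spectrum / total          # compare distribution shapes
    scores = {label: float(np.corrcoef(spectrum, ref)[0, 1])
              for label, ref in sound_db.items()}
    return max(scores, key=scores.get)       # e.g. 'high heels' -> 'feminine'
```

The voice-register heuristic described next could reuse the same spectrum: compare the energy below and above an assumed split frequency and report a low or high voice accordingly.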
Also, for example, the tone of a voice may be analyzed by FFT, and the state of the outside world obtained as trigger information from the sound: many low-frequency components indicate a low voice, for example a male voice, and many high-frequency components indicate a high voice, for example a female voice. [0035] The image information generation unit (content selection means and image composition processing means) 20 generates image information based on the trigger information generated by the image information processing unit 26.
The image information generation unit 20 has an image database 30 and an image data determination unit 32. The image database 30 stores groups of image information accumulated for image display as, for example, advertisements, guidance, or exhibits.
As an example of guidance, an image of a tourist's fingers or whole body can be superimposed at a specific position on the map screen of a tourist guide installed at an airport, so that image information such as the famous sights and specialties of the place at that position is provided. As an example of an exhibit, image content groups and sound content groups for various insects can be embedded in an exhibition panel installed at a museum to explain insect ecology, so that when a visitor's hand image is superimposed at a specific position on the screen, an image explaining the ecology of the insect at that position, or the insect's call, is provided. Specific examples of advertisements are described later.
That is, as shown in Fig. 10, the image database 30 stores, as the content layer PF1, a plurality of contents for advertising products (in Fig. 10, only content 40a, a beer advertisement, content 40b, a cosmetics advertisement, and content 40c, another advertisement, are shown) and a plurality of contents for providing video (in Fig. 10, only contents 48a and 48b are shown). Each content includes, as the image content layer PF2, a plurality of image contents that can be displayed independently of one another (in Fig. 10, only the image contents 42a to 42c of content 40a are shown). Each image content in turn includes, as the image element layer PF3, a plurality of image elements that can be displayed independently of one another (in Fig. 10, only the image elements 44a to 44c and 52a to 52d are shown). Each image element is associated, as the content material layer PF4, with a content material (in Fig. 10, only the content materials 46a to 46c and 54a to 54c are shown).
The images included in each of these layers PF1 to PF4 may be still images or moving images.
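The four layers PF1 to PF4 map naturally onto a nested data structure. The following dataclass sketch is one possible layout; the field names and the (x, y, w, h) hot-area convention are assumptions, not part of this description.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContentMaterial:       # layer PF4: e.g. a company-introduction video
    media: str               # path/identifier of a still or moving image

@dataclass
class ImageElement:          # layer PF3: e.g. the logo or a beer can
    region: tuple            # (x, y, w, h): where it sits on the screen
    material: Optional[ContentMaterial] = None   # image or sound material

@dataclass
class ImageContent:          # layer PF2: one independently displayable image
    elements: List[ImageElement] = field(default_factory=list)

@dataclass
class Content:               # layer PF1: e.g. the beer advertisement 40a
    image_contents: List[ImageContent] = field(default_factory=list)
```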
[0036] The above database structure is explained more concretely below, in line with the specific example of the image display method described later.
For example, when content 40a, a beer advertisement, is selected and determined using as a trigger such information as the sex or belongings of the person who is the subject, or environmental information around the subject, a display image (image content) 42a composed of a plurality of image elements 44a to 44c, which are characters and figures such as a company logo and a beer can, is selected as the initial image (initial display image). Then, with a person's movement as trigger information, the content material 46a associated with the image element 44a, for example a company-introduction image, is selected in place of the image element 44a, for example the company-logo characters. The content material 46c, on the other hand, is included in the sound database 34 described later, and is selected as the sound content material corresponding to the image element 44c. In the case of content 48a of video A, the image element 52d is associated with an image content 50b that is related to content 48a and distinct from the image content 50a serving as the initial image; with a person's movement at the image element 52d as the trigger, the display switches from image content 50a and image content 50b is selected. The content material 54c is a sound content material.
The image database 30 also stores a plurality of character image contents for displaying the subject as a substitute image (not shown).
The image data determination unit 32 operates, for example, as follows. When the cumulative amount of movement does not reach a threshold, it keeps the preceding image information of the image group (the image, or initial image content) as it is, or switches the display according to the display schedule. When the cumulative amount of movement is at or above the threshold, it selects and generates image information from among the images F4a, F4b, and so on, which are associated with trigger information and set according to the position of movement described above: for example, as shown in Fig. 7, image F4b is selected when the position of movement is at the top while image F4a is displayed, and image F4d is selected when the position of movement is at the right while image F4b is displayed.
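This determination logic reads naturally as a small transition table keyed by the current image and the detected position of movement. A sketch mirroring Fig. 7, where the threshold value and the table entries are illustrative assumptions:

```python
MOTION_THRESHOLD = 5000      # assumed cumulative-amount threshold (pixels)

# (current image, position of movement) -> next image, as in Fig. 7
TRANSITIONS = {("F4a", "top"): "F4b",
               ("F4b", "right"): "F4d"}

def next_image(current, position, cumulative_amount):
    if cumulative_amount < MOTION_THRESHOLD:
        return current       # hold the image (or follow the schedule)
    return TRANSITIONS.get((current, position), current)
```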
The image information group may also consist, for example, of a plurality of pieces of image information associated with directions of movement. In that case, as shown for example in Fig. 8, when the direction of movement from the system installation location F5c is (1), the image data determination unit 32 generates image information about the advertisement F5a located in that direction; when the direction is (2), it generates image information about the advertisement F5b located in that direction. Also, when referring to the image content group of Fig. 10, as described above, the image data determination unit 32 uses new trigger information, set according to the position of movement, to select the content material (for example 46a) in place of the image element (for example 44a) associated with the previous trigger information, and generates the image information.
[0037] At this time, the image information group may also constitute an image group set according to the appearance of the subject described above. For example, the image data determination unit 32 selects and generates advertisement image information aimed at men in response to the image recognition information "masculine", or selects and generates the advertisement image of a specific brand according to an appearance of the subject such as that brand's mark.
[0038] The image information generation unit 20 may also be configured to generate image information based on the trigger information obtained by the sound information processing unit 28.
That is, the image database 30 stores image information associated with the trigger information obtained by the sound information processing unit 28 (not shown), and the image data determination unit 32 selects and generates, based on the trigger information, the image information corresponding to it.
[0039] As already described, the image information generation unit 20 may also be configured to treat the image information received by the image sensor 12a as generated image information and superimpose it on the current display image.
For example, the image information is generated so that the captured figure is faintly superimposed on the image, so that the viewer enters the picture as with a chroma-key effect, so that the viewer's shadow is cast into the image, or so that the image is displayed only where the viewer has moved.
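The shadow-style superimposition can be sketched as a masked blend; the darkening factor of 0.6 is an assumption, and the silhouette mask could come from the frame differencing above or from a 3D camera's depth cut-off.

```python
import numpy as np

def composite_shadow(display_img, mask):
    """display_img: HxWx3 uint8 advertisement frame;
    mask: HxW boolean silhouette of the viewer."""
    out = display_img.copy()
    out[mask] = (out[mask] * 0.6).astype(np.uint8)   # darken: cast a shadow
    return out
```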
The image information generation unit 20 may also be configured to select and generate a character image according to the subject's trigger information.
[0040] The image information generation unit 20 may further be configured to add and display words (text) such as "Hot, isn't it?" when the temperature is high, or "Recommended for the beautiful you" when the subject is recognized as a woman.
[0041] The sound information generation unit (sound content selection means) 22 generates sound information based on the trigger information obtained by the sound information processing unit 28, or else generates sound content selected in correspondence with the subject's movement, a specific image element, or the like. The sound information generation unit 22 has a sound database 34 and a sound data determination unit 36.
The sound database 34 stores sound information corresponding to the trigger information obtained by the sound information processing unit 28, and the sound data determination unit 36 selects and generates, based on the trigger information, the sound information corresponding to it.
For example, when the trigger information indicates an adult male, image information showing beer being poured is generated, and the sound of beer being poured and the sound of foam are selected and generated as sound information; when the trigger information indicates a woman, a calling phrase such as "Miss" is selected and generated as sound information.
In this case, the sound picked up by the microphone may also be used as sound information as it is.
The sound database 34 also contains audio material data associated with the respective content materials 46a to 46c of the image information generation unit 20, or alternatively with the image elements 44a to 44c (not shown). The sound data determination unit 36 then selects and generates the audio material corresponding to the image trigger signal from the image information processing unit 26, in other words the audio material corresponding to the image element or the like.
[0042] Next, the image display method of the present invention is outlined with reference to Fig. 9, taking the case of image display as an example.
At a predetermined timing (S10 in Fig. 9), external information (person information) is taken in (S12 in Fig. 9) and recognition information is acquired (S14 in Fig. 9).
Next, based on the recognition information, image information corresponding to it is generated (S16 in Fig. 9), and the image is displayed according to the image information signal (S18 in Fig. 9).
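Reduced to a sketch, the S10 to S18 loop of Fig. 9 might look like the following; capture, recognize, select_image, and show are placeholder names for the stages above, and the 0.1 s sampling period is an assumption.

```python
import time

def run(capture, recognize, select_image, show, period_s=0.1):
    while True:
        raw = capture()              # S12: take in external information
        info = recognize(raw)        # S14: obtain recognition information
        image = select_image(info)   # S16: generate the image information
        show(image)                  # S18: display according to the signal
        time.sleep(period_s)         # S10: wait for the next timing
```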
[0043] The above image display method of the present invention is described in more detail with reference to Figs. 11A and 11B, taking as an example the case where a display image is generated using a person's movement as trigger information.
First, as shown in Fig. 11A, the content is determined (S20 in Fig. 11A). The content may be determined in advance, but it is more preferable to proceed as follows. That is, as shown in the detailed steps of Fig. 11B, an image of a person is acquired from the camera (S48 in Fig. 11B), and the image data determination unit 32 judges, via the image information processing unit 26, whether this is the first image processing (S50 in Fig. 11B). If it is the first, the acquired image is held in a temporary image holding area (S52 in Fig. 11B) and another image of the person is acquired from the camera (S48 in Fig. 11B). Otherwise, that is, when image processing has already been repeated, the person's appearance and features are extracted (S54 in Fig. 11B). If the person's appearance and features can be extracted, the image data determination unit 32 selects (determines) the content, via the image information processing unit 26, using this information on the person's appearance and features as a trigger (S56 in Fig. 11B). The content is, for example, a beer advertisement (advertisement information). In this case, sound information picked up by a microphone may also be used as the trigger, and the content may be determined in the image data determination unit 32 via the sound information processing unit 28 (S56 in Fig. 11B). If the person's appearance and features cannot be extracted, another image of the person is acquired from the camera (S48 in Fig. 11B).
Once the content has been determined (S20 in Fig. 11A), an image of a person is acquired from the camera (S22 in Fig. 11A), and the image data determination unit 32 judges, via the image information processing unit 26, whether this is the first image processing (S24 in Fig. 11A).
In the case of the first image processing, the acquired image is held in the temporary image holding area (S26 in Fig. 11A); an initial image (initial display image, initial image content) composed of a plurality of image elements making up the content, such as characters and figures including the company logo and a beer can, is generated and displayed (S28 in Fig. 11A); and another image of the person is acquired from the camera (S22 in Fig. 11A). Otherwise, that is, when image processing has already been repeated, the temporarily held image is compared with the current camera image to detect the person's movement (S30 in Fig. 11A), and the image of the moving part (the moving person) is extracted (S32 in Fig. 11A).
Next, it is judged whether the specified movement has occurred at a specific position in the current image (S34 in Fig. 11A). Alternatively, it may be judged whether the specified movement has occurred at an arbitrary position in the current image. Here, the specified movement is, for example, an amount of movement at or above a threshold, movement in a specific direction, repetition of a movement, movement in a given order between specific coordinates, and the like.
If the specified movement has occurred, the specified content material corresponding to the specific image element at the position of the movement is selected and an image is generated (S36 in Fig. 11A), which is then displayed in place of the entire current image, or in place of the image of the specific image element and composited with the other image elements (S38 in Fig. 11A). If the specified movement has not occurred, it is judged whether a motionless state has continued for a fixed time (S35 in Fig. 11A).
If the motionless state has continued, the content determination is performed again (S20 in Fig. 11A). If there is movement, the current image is displayed as it is, and another image of the person is acquired from the camera (S22 in Fig. 11A).
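The S34/S36 decision (did the specified movement occur on a particular image element?) can be sketched against the ImageElement structure introduced earlier; the minimum amount of movement is an assumed parameter, and only the threshold form of the "specified movement" is shown.

```python
def triggered_element(elements, point, amount, min_amount=2000):
    """elements: list of ImageElement; point: (x, y) position of movement;
    amount: detected amount of movement. Returns the hit element or None."""
    x, y = point
    for elem in elements:
        ex, ey, ew, eh = elem.region
        if ex <= x < ex + ew and ey <= y < ey + eh and amount >= min_amount:
            return elem          # S36: its content material is selected
    return None                  # no specified movement this frame
```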
[0045] As the above steps are repeated, the image on the display device, or the content of a specific image element, changes one after another with the movement of the people captured by the camera, so that the eyes of the people captured by the camera, and of third parties watching the display device, are drawn to the image displayed on the image display apparatus.
[0046] When, in the image display method of the present invention, sound is output together with an image or independently of the image, then, as shown in Fig. 12, after the image of the moving part (the moving person) has been extracted (S32 in Fig. 11A), it is judged whether the specified movement has occurred at a specific position in the image (S40 in Fig. 12). Alternatively, it may be judged whether the specified movement has occurred at an arbitrary position in the image. Here, as described above, the specified movement is, for example, an amount of movement at or above a threshold, movement in a specific direction, repetition of a movement, movement in a given order between specific coordinates, and the like.
If the specified movement has occurred, the specified sound material is selected from the material collection and a sound is generated (S44 in Fig. 12), and the generated sound is output, for example in time with the image output (S46 in Fig. 12). If the specified movement has not occurred, the current sound state is kept: the current sound continues as it is or, when there is silence, the silence continues (S46 in Fig. 12).
In this way, the sounds change one after another with the movement of the people captured by the camera, and new sounds are created as sounds overlap, so that, as above, the eyes of the people captured by the camera and of third parties watching the display device are drawn to the image displayed on the image display apparatus.
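The sound branch of Fig. 12 mirrors this decision. In the sketch below, the sound attribute on an element is an assumed extension of the earlier structure, and returning current_sound stands for keeping the current sound state (or silence).

```python
def select_sound(hit_element, current_sound):
    """hit_element: result of triggered_element(); returns the sound to play."""
    if hit_element is not None and getattr(hit_element, "sound", None):
        return hit_element.sound     # S44: pick the specified sound material
    return current_sound             # hold the current sound (or silence)
```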
[0047] Here, the present invention is described concretely, taking a beer advertisement as an example.
On the display device, an image such as that shown in Fig. 13 is displayed, and passers-by see this advertisement. A camera (indicated by arrow A in Fig. 13) is attached to the display device; it captures people passing in front of the display device as images and, for example, composites their shadows with the advertisement image for display. The image on the display device is composed of a plurality of image elements that can be displayed independently, such as the company logo "OX Beer" (indicated by arrow C1 in Fig. 13), a product display (indicated by arrow C2 in Fig. 13), and a woman holding a beer mug (indicated by arrow C3 in Fig. 13).
[0048] First, the relationship between the position of a person's movement (the movement of the shadow) and image switching is described with reference to Fig. 14.
As shown in Fig. 14, in the screen F1 displayed as the initial image, the image element of the logo "OX Beer" C1 includes a company-introduction image as a content material. When a person's movement is located at a predetermined part of the logo's image element (indicated by arrow S1 in Fig. 14), in other words when a person touches the predetermined part S1 of the logo image element, the display switches to screen F2 and the company-introduction image is shown over the whole screen or at a predetermined position.
Also, in screen F1, when a person's movement is located at a predetermined part of the image element of the product display C2 (indicated by arrow S2 in Fig. 14), the display switches to screen F3 and an image introducing one product is shown over the whole screen or at a predetermined position. If, while screen F3 is displayed, a person's movement is located at the predetermined part S1 of the image element of screen F3, the earlier image F2 is displayed.
Also, in screen F1, when a person's movement is located at a predetermined part of the woman C3 holding a beer mug (indicated by arrow S3 in Fig. 14), the display switches to screen F4 and a message in the producer's voice is displayed.
Furthermore, in the same way as in the above examples, when a person's movement is located at a predetermined part of screen F1, or of the newly generated images F2, F3, F4, and so on, further screens are switched in and displayed one after another, such as screen F5, which includes a story about the ingredients, and screen F6, which includes an image introducing a product different from that of screen F3.
[0049] Next, the relationship between the position of a person's movement (the movement of the shadow) and sound is described with reference to Fig. 15, which shows the same screen as Fig. 13.
In the screen shown in Fig. 15, the image element of the logo "OX Beer" C1 is associated with the company's sound logo; the image element of the left one of the two beer mugs (indicated by arrow C4 in Fig. 15) with the sound of beer being poured into a mug; the image element of the right beer mug (indicated by arrow C5 in Fig. 15) with the sound of mugs being brought together in a toast; and the image element of the woman C3 with a toasting voice.
When a person is located at a predetermined part of any of these image elements, the sound corresponding to that image element is generated.
Also, a rhythm loop A, a rhythm loop B, and a melody loop are provided corresponding to the regions indicated by arrows X1 to X3 in Fig. 15; when a person's movement is located in any of these regions, BGM (music playing behind the image) is created.
Such an image display method is effective not only for advertisement information but also, for example, for displaying various kinds of guidance information, and furthermore it can be used widely beyond these applications.
[0050] Next, as another specific example of the present invention, an example in which a person giving a performance is captured as an image and displayed as video is described with reference to Figs. 16 to 18.
As shown in Fig. 16, a performer performs (acts), for example, on a stage at which a camera (indicated by arrow B in Fig. 16) is aimed. Behind the stage, a screen (display device, indicated by arrow C in Fig. 16) is provided. Screen C displays a plurality of image elements, such as a person peering in through the room's window and a telephone, and in addition the movement of the person captured by camera B is composited with these image elements and displayed as a contour or silhouette. The audience in front of the stage watches both the performer on the stage and the screen.
[0051] Here, as with the advertisement example above, the relationship between the position of the person's movement (the movement of the shadow) and the switching of the image on screen C is first described with reference to Fig. 17.
In screen F11, in which a room interior is displayed as the initial image, when the performer's movement is located at the part showing the person peering in from the window (indicated by arrow S4 in Fig. 17), then, as shown in screen F12, which displays a room interior different from that of screen F11, that person, for example, speaks and then disappears from screen C. Also, in screen F12, when the performer's movement is located at the telephone part (indicated by arrow S5 in Fig. 17), the display returns, for example, to screen F11. Also, in screen F11, when the performer moves leftward in a predetermined region at the lower left (indicated by arrow S6 in Fig. 17), the display switches to screen F13, which shows the room interior continuing from that of screen F11. Further, in screen F13, when the performer moves to the door part displayed at the lower left (indicated by arrow S7 in Fig. 17), the screen image switches to screen F14, which shows the scenery outside. Similarly, in screen F11, when the performer's movement is located in a predetermined region at the upper right (indicated by arrow S8 in Fig. 17), the display switches to screen F15, which shows an image of a shop interior or the like different from the room; and further, in screen F15, when the performer moves rightward in a predetermined region at the lower right (indicated by arrow S9 in Fig. 17), the display switches to screen F16, which shows a shop interior different from that of screen F15.
[0052] Next, the relationship between the position of the performer's movement (the movement of the shadow) and sound is described with reference to Fig. 18.
In the image shown in Fig. 18, the image element of the person standing outside the window (indicated by arrow X4 in Fig. 18) is associated with a human voice; the image element of the left curtain (indicated by arrow X5 in Fig. 18) with birdsong; and the image element of the telephone (indicated by arrow X6 in Fig. 18) with the sound of the telephone ringing.
When a person is located at a predetermined part of any of these image elements, the sound corresponding to that image element is generated.
Also, narration sound information is provided corresponding to a predetermined region at the upper right of the screen (indicated by arrow X7 in Fig. 18); when a person's movement is located in this region, the narration plays.
[0053] The invention is not limited to the embodiments described above. As the recognition means, a vibration sensor that recognizes striking or touching vibrations, a Doppler sensor that recognizes air flow, presence, and the like, a temperature sensor that recognizes temperature by thermography or the like, a pressure sensor that recognizes how force is being applied, and the like may be selected and used as appropriate.
As the external information to be recognized, besides the person's behavior described above, the following may be used: what a person is holding or handling, the number of people, the composition of a group, and biological information (heartbeat, perspiration, body temperature, and so on); the movements and appearances of pets (animals) and plants other than people, and of cars, machines, and robots; the weather, climate, temperature, and wind, the scenery, and the time; or the content of spoken words, the state of a voice (bright, dark, cheerful, tense, happy, and so on), and the loudness or strength of a voice.
It is also more preferable to build artificial intelligence and a conversation function into the main control unit so that, according to the recognition information, it can output effective calling phrases and, further, reply to words picked up by the microphone and thereby hold a dialogue.
Claims
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2005072153 | | | |
| JP2005-072153 | | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2006098255A1 (en) | 2006-09-21 |
Family
ID=36991597
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2006/304853 Ceased WO2006098255A1 (en) | 2005-03-15 | 2006-03-13 | Image display method and device thereof |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2006098255A1 (en) |
- 2006
  - 2006-03-13: WO PCT/JP2006/304853 patent/WO2006098255A1/en not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002518717A (en) * | 1998-06-11 | 2002-06-25 | Intel Corporation | Use of video images in providing input data to a computer system |
| JP2001307124A (en) * | 2000-02-15 | 2001-11-02 | Sega Corp | Image processing system, image processing device, and imaging device |
| JP2002196855A (en) * | 2000-10-06 | 2002-07-12 | Sony Computer Entertainment Inc | Image processor, image processing method, recording medium, computer program and semiconductor device |
| JP2002157079A (en) * | 2000-11-09 | 2002-05-31 | Doko Kagi Kofun Yugenkoshi | Method of discriminating intention |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2010142592A (en) * | 2008-12-22 | 2010-07-01 | Nintendo Co Ltd | Game program and game device |
| US9220976B2 (en) | 2008-12-22 | 2015-12-29 | Nintendo Co., Ltd. | Storage medium storing game program, and game device |
| JP2013048924A (en) * | 2012-11-07 | 2013-03-14 | Nintendo Co Ltd | GAME PROGRAM AND GAME DEVICE |
| JP2014237016A (en) * | 2014-07-28 | 2014-12-18 | 株式会社クラス・マイスター | Control program of game device |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN106648083B (en) | Enhanced playing scene synthesis control method and device | |
| KR101381594B1 (en) | Education apparatus and method using Virtual Reality | |
| JP4618384B2 (en) | Information presenting apparatus and information presenting method | |
| JP2011033993A (en) | Information presenting apparatus and method for presenting information | |
| EP4080907B1 (en) | Information processing device | |
| JP4238371B2 (en) | Image display method | |
| CN108537574A (en) | A kind of 3- D ads display systems and method | |
| JP2009109887A (en) | Synthetic program, recording medium and synthesizer | |
| JP5440244B2 (en) | Video information presentation device | |
| JP2006293999A5 (en) | ||
| KR102011868B1 (en) | Apparatus for providing singing service | |
| KR102200239B1 (en) | Real-time computer graphics video broadcasting service system | |
| KR102601329B1 (en) | Customer reaction apparatus using digital signage | |
| WO2006098255A1 (en) | Image display method and device thereof | |
| Birringer | Moveable worlds/Digital scenographies | |
| von Rosen | Scenographing the dance archive: Keep crawling! | |
| Fry et al. | MTV: The 24 hour commercial | |
| US20180101135A1 (en) | Motion Communication System and Method | |
| KR102581583B1 (en) | Digital signage apparatus | |
| Morris | The Mute Stones Sing: Rigoletto Live from Mantua | |
| KR20180136599A (en) | Server, apparatus and computer program stored in computer-readable medium for providing singing service | |
| JP2011175070A (en) | Video information display device | |
| CN109429084A (en) | Method for processing video frequency and device, for the device of video processing | |
| Brown | Entertaining love: Cinephile pastiche in Twenty-First century Taiwanese films | |
| Braun | Mirror Dancing in Congo |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| | DPE2 | Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101) | |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | NENP | Non-entry into the national phase | Ref country code: RU |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 06715576; Country of ref document: EP; Kind code of ref document: A1 |