
WO1998046011A1 - Special effect device and special effect generation method - Google Patents


Info

Publication number
WO1998046011A1
WO1998046011A1, PCT/JP1997/001249, JP9701249W
Authority
WO
WIPO (PCT)
Prior art keywords
signal
shadow
video signal
filtering
depth information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP1997/001249
Other languages
English (en)
Japanese (ja)
Inventor
Kazuhiro Maruyama
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Priority to PCT/JP1997/001249 priority Critical patent/WO1998046011A1/fr
Priority to GB9827227A priority patent/GB2329312A/en
Publication of WO1998046011A1 publication Critical patent/WO1998046011A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2622 Signal amplitude transition in the zone between image portions, e.g. soft edges
    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H04N5/265 Mixing
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N5/275 Generation of keying signals

Definitions

  • the present invention relates to an image processing device, and more particularly to a special effect device that obtains a three-dimensional effect by applying an appropriate shadow to an object video signal that is three-dimensionally image-converted.
  • FIG. 1A is an explanatory diagram showing one of the effects of the video special effect device
  • FIG. 1B is a configuration diagram of a video special effect device capable of realizing such effects.
  • the video special effect device shown in FIG. 1B generates a pseudo shadow IS from an object, and combines the object O and the pseudo shadow IS.
  • the object is a video to be processed to apply a special effect
  • the object video signal is a video signal to be processed by the special effect.
  • the video special effect device shown in FIG. 1B comprises: a mixing circuit 102 that subtracts the key signal K supplied via the input terminal 101 from the background signal BG supplied via the input terminal 103 and adds the video signal Vi supplied via the input terminal 100 to the result of this operation; a memory 104 for storing the key signal K supplied via the input terminal 101; an operation unit 105 for generating a pseudo-shadow signal IS by changing the position of the key signal K stored in the memory 104 through manipulation of addresses in the X and Y directions; an over/under circuit 106 that removes, from the pseudo shadow signal IS read from the memory 104, the portion overlapping the key signal K supplied via the input terminal 101; and a mixing circuit 107 that subtracts the over/under-processed output K/IS of the over/under circuit 106 from the mixed signal MIX of the mixing circuit 102, adds the color matte signal supplied via the input terminal 108 to the subtraction result, and outputs the result from the output terminal 109 as the output signal OUT.
  • An object video signal Vi is supplied via the input terminal 100. This corresponds to the object O shown in FIG. 1A.
  • the key signal K supplied via the input terminal 101 is a key signal K for keying the object video signal Vi.
  • the mixing circuit 102 subtracts the signal corresponding to the key signal K from the background signal BG, adds the object video signal Vi at that position, and outputs the mixed signal MIX.
  • the key signal K supplied through the input terminal 101 is supplied to the memory 104.
  • the operation unit 105 changes the position of the key signal K in the memory 104.
  • the key signal K is read from the memory 104 and is supplied to the over-under circuit 106 as a pseudo shadow signal IS.
  • the screen of the pseudo shadow signal IS is an image in which the pseudo shadow IS shown in FIG. 1A is not hidden by the object O.
  • the key signal K and the pseudo-shadow signal IS are supplied to the over/under circuit 106.
  • the over/under circuit 106 subtracts only the portion of the pseudo shadow signal IS that overlaps the key signal K from the pseudo shadow signal IS.
  • the over/under processing output K/IS from the over/under circuit 106 corresponds to the pseudo shadow IS shown in FIG. 1A.
  • the over/under processing output K/IS and the mixed signal MIX are each supplied to the mixing circuit 107.
  • in the mixing circuit 107, the over/under processing output K/IS from the over/under circuit 106 is subtracted from the mixed signal MIX from the mixing circuit 102, the color matte signal supplied via the input terminal 108 is added, and the addition result is output from the output terminal 109 as the output signal OUT.
  • This output signal OUT forms the entire image shown in FIG. 1A.
  • the pseudo shadow IS shown in FIG. 1A is simply a shifted copy of the key signal generated from the object O; that is, the pseudo-shadow image shown in FIG. 1A is a two-dimensional image. To come closer to an actual shadow, a three-dimensional image must be used: a rotated shadow must be generated for the object and combined with it. For this purpose, one video special effect device must be prepared for the object and another for the shadow, and the rotation axes of the object and the shadow must be changed before combining. The operation technique required in this case is very advanced, because images generated by different video special effect devices must be synthesized.
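The FIG. 1B pipeline above can be modelled with simple array arithmetic. In this sketch the signal names follow the text, but the exact mixing arithmetic (premultiplied keying, clipping) is an assumption, since only a block-level description is given:

```python
import numpy as np

def pseudo_shadow_composite(vi, k, bg, matte, dx, dy):
    """Sketch of the FIG. 1B background-art pipeline (assumed arithmetic)."""
    # Mixing circuit 102: cut the key region out of the background BG
    # and insert the object video Vi there -> mixed signal MIX.
    mix = bg * (1.0 - k) + vi * k
    # Memory 104 + operation unit 105: shift the stored key signal K
    # in X and Y to form the pseudo shadow signal IS.
    shadow = np.roll(np.roll(k, dy, axis=0), dx, axis=1)
    # Over/under circuit 106: remove the part of IS overlapping K,
    # so the shadow is hidden behind the object -> output K/IS.
    k_is = np.clip(shadow - k, 0.0, 1.0)
    # Mixing circuit 107: subtract K/IS from MIX and fill the shadow
    # region with the color matte signal -> output signal OUT.
    return mix * (1.0 - k_is) + matte * k_is
```

With `matte` darker than the background, the shifted key region renders as a flat, two-dimensional shadow, which is exactly the limitation the paragraph above points out.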
  • a special effect device according to the present invention is a special effect device for performing special effect processing on an object indicated by a video signal and on the shadow of the object, and comprises: gain control means for controlling the gain of an image of the shadow of the object; filtering means for filtering the shadow image; control means for controlling the gain of the gain control means in accordance with depth information of the shadow and for controlling the filtering characteristics of the filtering means in accordance with the depth information; and means for synthesizing the shadow image controlled and output by the above means, the image of the object, and an image serving as the background of the object.
  • the control means has storage means for storing gain characteristic data and filtering characteristic data according to the depth information.
  • the gain of the gain control means is controlled in accordance with the gain characteristic data
  • the filtering characteristic of the filtering means is controlled in accordance with the filtering characteristic data stored in the storage means.
  • the filtering means includes a low-pass filter.
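A minimal sketch of the claimed gain control plus low-pass filtering driven by depth information. The characteristic curves used here (gain falling and blur widening with depth) are illustrative assumptions; the patent stores them as characteristic data:

```python
import numpy as np

def real_shadow(shadow_key, depth):
    """Gain control plus low-pass filtering of a shadow image,
    driven by depth information (assumed characteristic curves)."""
    gain = 1.0 / (1.0 + depth)          # assumed gain characteristic
    radius = int(depth)                 # assumed filtering characteristic
    out = shadow_key * gain             # gain control means
    if radius > 0:                      # filtering means: separable box blur
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        blur = lambda v: np.convolve(v, kernel, mode="same")
        out = np.apply_along_axis(blur, 1, out)   # horizontal pass
        out = np.apply_along_axis(blur, 0, out)   # vertical pass
    return out
```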
  • a special effect device is a special effect device for performing a special effect process on an input source video signal.
  • the special effect device comprises: object signal generating means for performing a first image conversion process on the source video signal to generate an object signal indicating a target image; shadow signal generating means for performing a second image conversion process on the source video signal to generate a shadow signal corresponding to the target image; and synthesizing means to which the object signal output from the object signal generating means and the shadow signal output from the shadow signal generating means are input, and which combines the object signal, the shadow signal, and a background signal corresponding to the source video signal and outputs an output video signal.
  • the shadow signal generating means includes: gain control means for controlling the gain of the shadow signal corresponding to the source video signal; filtering means for filtering the shadow signal; depth information generating means for generating depth information corresponding to the shadow signal; and control means for controlling the gain of the gain control means based on the depth information from the depth information generating means and for controlling the filtering characteristics of the filtering means based on the depth information.
  • the shadow signal input to the shadow signal generating means is output from the gain control means as a gain-controlled shadow signal, and the gain-controlled shadow signal is input to the filtering means, filtered, and output from the filtering means as the shadow signal.
  • the filtering means includes a low-pass filter.
  • control means has storage means for storing gain characteristic data and filtering characteristic data corresponding to the depth information from the depth information generating means,
  • the gain characteristic data and the filtering characteristic data are supplied from the storage means to the gain control means and the filtering means, respectively, in accordance with the depth information.
  • the object signal generating means may include storage means for storing the input source video signal, and read address generating means for generating a read address so that the source video signal stored in the storage means can be read out in predetermined units; the first image conversion process is such that the object video signal read from the storage means by the read address is output as the signal indicating the target image.
  • the shadow video signal generating means includes storage means for storing the input source video signal, and read address generating means for generating a read address for reading the source video signal stored in the storage means in predetermined units; the second image conversion process is such that the shadow signal read from the storage means by the read address is output as a shadow signal corresponding to the target image.
  • the synthesizing means may include a first synthesizing unit that receives the object signal from the object signal generating means and the shadow signal from the shadow signal generating means and outputs a mixed signal, and a second synthesizing unit that receives the mixed signal and the background signal and outputs the output video signal.
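The two-stage synthesis might be sketched as follows, assuming a conventional priority rule in which the object hides the shadow where they overlap (the document details its combiner later, in FIG. 21, so this is only a plausible reading):

```python
import numpy as np

def first_combiner(obj_v, obj_k, shadow_v, shadow_k):
    """First synthesizing unit: combine object and shadow into a mixed
    signal, giving the object priority where the two overlap
    (the priority rule is an assumption)."""
    mix_k = np.clip(obj_k + shadow_k, 0.0, 1.0)                 # mixed key
    mix_v = obj_v * obj_k + shadow_v * shadow_k * (1.0 - obj_k) # mixed video
    return mix_v, mix_k

def second_combiner(mix_v, mix_k, bg):
    """Second synthesizing unit: key the mixed signal over the
    background signal to produce the output video signal."""
    return mix_v + bg * (1.0 - mix_k)
```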
  • the special effects device is a special effects device for performing special effects processing on an input source video signal and a source key signal corresponding to the source video signal.
  • the special effect device comprises: object signal generating means to which the source video signal and the source key signal are input and which performs a first image conversion process to generate an object video signal and an object key signal indicating a target image; shadow signal generating means to which the source video signal and the source key signal are input and which performs a second image conversion process to generate a shadow video signal and a real shadow key signal corresponding to the target image; and synthesizing means to which the object video signal and the object key signal output from the object signal generating means, the shadow video signal and the real shadow key signal output from the shadow signal generating means, and a background signal corresponding to the source video signal are input, and which synthesizes the input signals to generate and output an output video signal.
  • the shadow signal generating means includes: shadow key signal generating means for generating, from the input source key signal, a shadow key signal corresponding to the target image; gain control means for controlling the gain of the shadow key signal output from the shadow key signal generating means; filtering means for filtering the shadow key signal; depth information generating means for generating depth information corresponding to the shadow signal; and control means for controlling the gain of the gain control means based on the depth information from the depth information generating means and for controlling the filtering characteristic of the filtering means based on the depth information.
  • the filtering means includes a low-pass filter.
  • the shadow key signal output from the shadow key signal generating means of the shadow signal generating means is output from the gain control means as a gain-controlled shadow key signal, and the gain-controlled shadow key signal is input to the filtering means, where its filtering characteristic is controlled, and is output as the real shadow key signal.
  • the control means of the shadow signal generating means has storage means for storing gain characteristic data and filtering characteristic data corresponding to the depth information from the depth information generating means, and the gain characteristic data and the filtering characteristic data are supplied to the gain control means and the filtering means, respectively, in accordance with the depth information.
  • the synthesizing means may include first synthesizing means to which the object video signal and the object key signal from the object signal generating means and the shadow video signal and the real shadow key signal from the shadow signal generating means are input, and which combines the input signals to produce a mixed video signal and a mixed key signal, and second synthesizing means for combining the mixed video signal and the mixed key signal from the first synthesizing means with the background signal and generating the output video signal.
  • the object signal generating means may include first memory means for storing the source video signal, second memory means for storing the source key signal, and read address generating means for generating a read address for reading the source video signal and the source key signal stored in the first memory means and the second memory means from the respective memory means, and for supplying the read address to the first memory means and the second memory means; the first image conversion process is such that the object video signal and the object key signal read from the first memory means and the second memory means by the read address are output as the signals indicating the target image.
  • the shadow signal generating means may include third memory means for storing the source video signal, fourth memory means for storing the source key signal, and read address generating means for generating a read address for reading the source video signal and the source key signal stored in the third memory means and the fourth memory means from the respective memory means, and for supplying the read address to the third memory means and the fourth memory means; the second image conversion process is such that the shadow video signal and the shadow key signal read from the third memory means and the fourth memory means by the read address are output as a shadow video signal and a shadow key signal corresponding to the target image.
  • the special effect method according to the present invention is a special effect method for performing special effect processing on an object formed by an input video signal and on the shadow of the object, and includes a gain step of controlling the gain of an image showing the shadow of the object.
  • the gain is controlled in the gain step by gain characteristic data corresponding to the depth information stored in a storage unit, and in the filtering step the shadow image is filtered by the filtering characteristic data corresponding to the depth information stored in the storage unit.
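The table-driven control of the gain step and filtering step can be sketched as a lookup. Linear interpolation between stored entries is an assumption; the text only says the characteristic data are stored in a storage unit:

```python
import numpy as np

def lookup_characteristics(depth, depth_table, gain_table, filter_table):
    """Read the gain and filtering characteristic data for a given
    depth value from stored tables (interpolation is an assumption)."""
    gain = float(np.interp(depth, depth_table, gain_table))
    cutoff = float(np.interp(depth, depth_table, filter_table))
    return gain, cutoff
```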
  • the special effect method is a special effect method for performing special effect processing on an input source video signal, wherein a first image conversion process is performed on the source video signal to generate a signal indicating a target image.
  • the shadow signal generation step includes a gain control step of controlling the gain of the shadow signal corresponding to the source video signal, and a filtering step of filtering the shadow signal.
  • a depth information generating step of generating depth information corresponding to the shadow signal, and a control step of controlling the gain of the gain control step based on the depth information from the depth information generating step and controlling the filtering characteristics of the filtering step based on the depth information.
  • the special effect method includes: an object signal generation step of performing a first image conversion process on an input source video signal to generate an object signal indicating a target image; a shadow signal generation step of performing a second image conversion process to generate a shadow signal corresponding to the target image; and a synthesizing step in which the object signal generated in the object signal generation step and the shadow signal generated in the shadow signal generation step are input, and the object signal, the shadow signal, and a background signal corresponding to the source video signal are combined to output an output video signal.
  • the shadow signal generation step includes a gain control step of controlling the gain of the shadow signal, a filtering step of filtering the shadow signal, a depth information generation step of generating depth information corresponding to the shadow signal, and a control step of controlling the gain and the filtering characteristics based on the depth information from the depth information generation step.
  • the above-mentioned filtering step is performed by a low-pass filter.
  • the control step may include a storing step of storing gain characteristic data and filtering characteristic data according to the depth information from the depth information generating step, and the gain characteristic data and the filtering characteristic data are supplied from the storing step to the gain control step and the filtering step, respectively, in accordance with the depth information.
  • the object signal generating step includes a storage step of storing the input source video signal, and a read address generating step of generating a read address so that the source video signal stored in the storage step can be read out in predetermined units; the first image conversion process is such that the object video signal read out from the storage step by the read address is processed as the target image.
  • the shadow video signal generation step includes a storage step of storing the input source video signal, and a read address generation step of generating a read address for reading the stored source video signal in predetermined units; the second image conversion process is such that the video signal read out from the storage step by the read address is output as a shadow signal corresponding to the target image.
  • the synthesizing step includes a first combining step in which the object signal from the object signal generating step and the shadow signal from the shadow signal generating step are input and combined to output a mixed signal, and a second combining step in which the mixed signal and the background signal are input and an output video signal is output.
  • the special effect method according to the present invention is a special effect method in which special effect processing is performed on an input source video signal and a source key signal corresponding to the source video signal, and includes: an object signal generation step in which the source video signal and the source key signal are input and a first image conversion process is performed to generate an object video signal and an object key signal indicating a target image; a shadow signal generation step in which the source video signal and the source key signal are input and a second image conversion process is performed to generate a shadow video signal and a real shadow key signal corresponding to the target image; and a synthesizing step in which the object video signal and the object key signal output from the object signal generation step, the shadow video signal and the real shadow key signal, and a background signal corresponding to the source video signal are input, and the input signals are synthesized to generate and output an output video signal.
  • the shadow signal generation step includes: a gain control step of controlling the gain of the shadow key signal output from the shadow key signal generation step; a filtering step of filtering the shadow key signal; and a depth information generation step of generating depth information corresponding to the shadow signal.
  • the filtering step is performed by a low-pass filter.
  • in the special effect method according to the present invention, the shadow key signal output from the shadow key signal generation step in the shadow signal generation step is output as a shadow key signal whose gain has been controlled in the gain control step, and the gain-controlled shadow key signal is input to the filtering step, where its filtering characteristic is controlled, and is output as the real shadow key signal.
  • the control step includes a storage step of storing gain characteristic data and filtering characteristic data corresponding to the depth information from the depth information generation step, and the gain characteristic data and the filtering characteristic data are supplied to the gain control step and the filtering step, respectively, according to the depth information.
  • in the synthesizing step, the object video signal and the object key signal from the object signal generating step, and the shadow video signal and the real shadow key signal from the shadow signal generating step, are input.
  • the object signal generating step includes a first memory step of storing the source video signal, a second memory step of storing the source key signal, and a read address generating step of generating a read address for reading the source video signal and the source key signal stored in the first memory step and the second memory step from the respective memory steps; the first image conversion process is such that the object video signal and the object key signal read from the first memory step and the second memory step by the read address are output as the signals indicating the target image.
  • the shadow signal generating step may include a third memory step of storing the source video signal and a fourth memory step of storing the source key signal; the second image conversion process is such that the shadow video signal and the shadow key signal read from the third memory step and the fourth memory step by the read address are output as a shadow video signal and a shadow key signal corresponding to the target image.
  • the present invention provides: shadow generating means for generating a shadow image; synthesizing means for synthesizing an object image to which a shadow is to be applied, the above-mentioned shadow image, and a background image; storage means for storing gain characteristic data and filtering characteristic data; gain control means for controlling the gain of the shadow image; and filtering means for filtering the shadow image.
  • the ratio x/y between the distance x from the shadow image to the object image and the distance y from the virtual light source to the object image is determined; the gain in the gain control means is controlled in accordance with the gain characteristic corresponding to the ratio x/y, and the filtering characteristic in the filtering means is controlled in accordance with the filtering characteristic data corresponding to the ratio x/y.
  • In this way, an image close to an actual shadow is generated, and the shadow image, the object image, and the background image are combined.
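A hypothetical mapping from the ratio x/y to the shadow's gain and filtering characteristics. The formulas below are illustrative assumptions; in the document the actual characteristic curves are stored as data:

```python
def shadow_params(x, y):
    """Map the ratio x/y (x: shadow-to-object distance, y: light-source-
    to-object distance) to a gain and a low-pass blur radius.
    Both formulas are assumptions standing in for stored table data."""
    ratio = x / y
    gain = max(0.0, 1.0 - ratio)         # farther shadow -> fainter
    blur_radius = int(round(4 * ratio))  # farther shadow -> softer edge
    return gain, blur_radius
```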
  • FIG. 1 is a diagram for explaining the background art.
  • FIG. 2 is a block diagram showing the entire configuration of the special effect device of the present invention.
  • FIG. 3 is a diagram for explaining a world coordinate system defined in the special effect device of the present invention.
  • FIG. 4 is a diagram for explaining a conversion process for obtaining an object video signal.
  • FIG. 5 is a diagram showing the relationship between the address on the frame memory corresponding to the object video signal and the address on the monitor screen.
  • FIG. 6 is a diagram showing an array of pixel data on the monitor screen and in the frame memory for indicating the meaning of the parameter ⁇ .
  • FIG. 7 is a diagram showing the address space of the frame memory when the perspective method is used.
  • FIG. 8 is a conceptual diagram showing the relationship between the world coordinate system and the shadow coordinate system.
  • FIG. 9 is a diagram showing a relationship with an address on a plane.
  • FIG. 10 is a diagram for explaining a perspective transformation process for obtaining a three-dimensional shadow video signal from a three-dimensional object video signal in the point light source mode.
  • FIG. 11 is a flowchart showing a procedure of a conversion process for obtaining a point light source and a three-dimensional object video signal.
  • FIG. 12 is a diagram for explaining a conversion process for obtaining a shadow video signal in the parallel light source mode.
  • FIG. 13 is a flowchart showing a procedure of a conversion process for a parallel light source and a three-dimensional object video signal.
  • FIG. 14 is a diagram for illustrating a relationship between spherical coordinates and rectangular coordinates in a world coordinate system and a shadow coordinate system, respectively.
  • FIG. 15 is a diagram for explaining a perspective transformation process for obtaining a three-dimensional shadow video signal from a three-dimensional object video signal in the parallel light source mode.
  • FIG. 16 is a diagram for explaining a perspective transformation process for obtaining a virtual three-dimensional shadow video signal from a virtual three-dimensional object video signal.
  • FIG. 17 is a diagram for explaining an origin setting mode for automatically setting the origin of the shadow coordinate system.
  • FIG. 18 is a diagram showing a configuration of a real shadow generator.
  • FIG. 19 is a diagram showing a target image and a diagram showing the relationship between parameter H and gain and filtering characteristics.
  • FIG. 20 is a diagram for explaining the combining process of two intersecting images.
  • FIG. 21 is a block diagram showing the detailed configuration of the combiner.
  • FIG. 22 is a block diagram showing a detailed configuration of the priority signal generating circuit of FIG. 21.
  • FIG. 23 is a signal waveform diagram provided for FIG. 22.

BEST MODE FOR CARRYING OUT THE INVENTION
  • the CPU 8 is a processor for controlling all the circuits of the special effect device 1.
  • the CPU 8 receives, via the interface circuit (I/F) 5a and the data bus, each parameter obtained through the operator's operation of the control panel 5, and controls each circuit based on these parameters. From the control panel 5, parameters such as the perspective value Pz in the shadow coordinate system are supplied.
  • the parameters are read out in real time and reflected in the calculation of the address.
  • the CPU 8 monitors changes in the parameters supplied from the control panel 5 at every frame period, and calculates the parameters used to compute the read address (b11' to b33', b11'' to b33'') from the supplied parameters at every frame period. The CPU 8 can therefore vary these parameters in real time, frame by frame, in response to the operator's operation, and apply real-time special effects to the source video signal according to the changed parameters.
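The frame-cycle parameter handling by the CPU 8 might look like the following sketch. The class and method names are hypothetical, and recomputing only on change is one reading of "monitors the change of the parameter at a frame cycle":

```python
class AddressController:
    """Sketch of CPU 8's frame-cycle parameter handling
    (names are hypothetical)."""

    def __init__(self, compute_matrix):
        self.compute_matrix = compute_matrix  # derives the read-address parameters
        self.prev_params = None
        self.matrix = None

    def on_frame(self, params):
        # Called once per frame: re-read the control-panel parameters and
        # recompute the read-address matrix only when they have changed.
        if params != self.prev_params:
            self.matrix = self.compute_matrix(params)
            self.prev_params = params
        return self.matrix
```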
  • the special effect device of the present invention allows a desired light source to be selected from a point light source and a parallel light source by inputting the type of the light source from the control panel 5.
  • the mode in which an object shadow is generated by a point light source will be called the point light source mode.
  • the CPU 8 controls each circuit and calculates the read address based on a program stored in a ROM (Read Only Memory) 6 provided as a program memory.
  • RAM (Random Access Memory)
  • the object signal generation unit 10 receives the source video signal V0 from the outside and subjects it to three-dimensional conversion processing to produce the object video signal V2. The object signal generation unit 10 also receives the source key signal K0 for keying the source video signal V0 and, in the same way as the source video signal, subjects the source key signal K0 to three-dimensional conversion processing to generate the object key signal K2. Specifically, the object signal generation unit 10 comprises: a frame memory 12 for temporarily storing the source video signal V0; a frame memory 13 for temporarily storing the source key signal K0 for keying this source video signal; and a read address generation circuit 14 that supplies to the frame memory 12 and the frame memory 13 read addresses calculated in accordance with the three-dimensional conversion operation.
  • the frame memory 12 is a memory for temporarily storing the supplied source video signal V0. The frame memory 12 is supplied with sequential write addresses from a write address generation circuit (not shown), so that the supplied source video signal V0 is stored as it is, without deformation.
  • A read address (XM, YM) calculated in accordance with the three-dimensional conversion operation is supplied from the read address generation circuit 14 to the frame memory 12.
  • Therefore, the object video signal V2, which has been subjected to the three-dimensional conversion processing, is output from the frame memory 12 frame by frame. The output object video signal V2 is sent to the mixer 30.
  • The frame memory 13 is a memory for temporarily storing the source key signal K0 for keying the source video signal V0.
  • Since the same sequential write addresses as those supplied to the frame memory 12 are supplied to the frame memory 13, the supplied source key signal K0 is stored in the frame memory 13 as it is, without deformation.
  • Since the same read address (XM, YM) as that supplied to the frame memory 12 is supplied to the frame memory 13, a three-dimensionally converted object key signal K2 is output from the frame memory 13 in the same way as the three-dimensionally converted object video signal V2. The output object key signal K2 is sent to the mixer 30.
  • The read address generation circuit 14 generates the read addresses (XM, YM) supplied to the frame memory 12 and the frame memory 13, on the basis of the screen address (Xs, Ys) supplied sequentially from the screen address generation circuit 9. The specific calculation in this read address generation circuit 14 will be described later.
  • The shadow signal generation unit 20 is a circuit for generating a shadow video signal and a shadow key signal.
  • In the point light source mode, the shadow signal generator 20 receives the source video signal V0 supplied from the outside and subjects it to three-dimensional conversion processing to generate a shadow video signal V4; it also receives the source key signal K0 for keying the source video signal, subjects it to the same three-dimensional conversion processing as the shadow video signal V4, and generates a shadow key signal K4.
  • In the parallel light source mode, the shadow signal generator 20 receives the source video signal V0 supplied from the outside and subjects it to three-dimensional conversion processing to generate a shadow video signal V6; it also receives the source key signal K0 for keying the source video signal, subjects it to the same three-dimensional conversion processing as the shadow video signal V6, and generates a shadow key signal K6.
  • Specifically, the shadow signal generation section 20 has a circuit configuration similar to that of the object signal generation section 10, and comprises a color matte generation circuit 21 for matte-processing the source video signal V0, a frame memory 22 for temporarily storing the matte-processed source video signal, a frame memory 23 for temporarily storing the source key signal K0, and a read address generation circuit 24 for supplying the calculated read addresses to the frame memory 22 and the frame memory 23. The color matte generation circuit 21 is a circuit that processes the source video signal V0 so as to give it the color of a shadow; in the simplest case, the entire source video signal is replaced with a single shadow color.
  • The frame memory 22 is a memory for temporarily storing the matte-processed source video signal. Since sequential write addresses from a write address generation circuit (not shown) are supplied to the frame memory 22, the matte-processed source video signal is stored in the frame memory 22 as it is, without image deformation. In the point light source mode, a read address (XM', YM') calculated by the read address generation circuit 24 on the basis of the three-dimensional conversion operation and the point light source is supplied to this frame memory 22, so that a three-dimensionally converted shadow video signal V4 is output from the frame memory 22. In the parallel light source mode, a read address (XM'', YM'') calculated by the read address generation circuit 24 on the basis of the three-dimensional conversion operation and the parallel light source is supplied to the frame memory 22, so that a three-dimensionally converted shadow video signal V6 is output from the frame memory 22.
  • The frame memory 23 is a memory for temporarily storing the source key signal K0 for keying the source video signal V0. Since the same sequential write addresses as those supplied to the frame memory 22 are supplied to the frame memory 23, the supplied source key signal K0 is stored in the frame memory 23 as it is, without image conversion. In the point light source mode, the frame memory 23 is supplied with the same read address (XM', YM') as that supplied to the frame memory 22, so that a three-dimensionally converted shadow key signal K4 is output from the frame memory 23 in the same manner as the three-dimensionally converted shadow video signal V4. In the parallel light source mode, the same read address (XM'', YM'') as that supplied to the frame memory 22 is supplied to the frame memory 23, so that the frame memory 23 outputs a three-dimensionally converted shadow key signal K6 in the same manner as the three-dimensionally converted shadow video signal V6.
  • The read address generation circuit 24 is a circuit for generating the read addresses applied to the frame memory 22 and the frame memory 23. It generates the read addresses (XM', YM') on the basis of the screen address (Xs, Ys), which is supplied from the screen address generation circuit 9 in the raster scan order of the monitor screen 3, and the parameters b of the image transformation matrix computed by the CPU 8.
  • The real shadow signal generator 50 receives the shadow key signal output from the frame memory 23 (K4 in the point light source mode, K6 in the parallel light source mode) and generates a real shadow key signal (K4' in the point light source mode, K6' in the parallel light source mode) for producing a more realistic shadow. Details of the real shadow signal generator 50 will be described later.
  • The above is the configuration of the shadow signal generation unit 20.
  • The screen address generation circuit 9 is a circuit for addressing the entire surface of the monitor screen 3 in an order corresponding to the raster scan order. Specifically, it generates the screen address (Xs, Ys) on the basis of internally generated horizontal and vertical synchronization signals.
  • The mixer 30 is a circuit for mixing the signals supplied from the object signal generator 10 and the shadow signal generator 20. In the point light source mode, the mixer 30 receives the object video signal V2 and the object key signal K2 output from the object signal generator 10, and the shadow video signal V4 and the real shadow key signal K4' output from the shadow signal generator 20; it generates a mixed video signal VMIX' by mixing the object video signal V2 and the shadow video signal V4, and a mixed key signal KMIX' by mixing the object key signal K2 and the real shadow key signal K4'. The mixed video signal VMIX' and the mixed key signal KMIX' can be expressed by the following equations:
  • VMIX' = K2·V2 + (1 − K2)·K4'·V4, KMIX' = 1 − (1 − K2)(1 − K4') ‥‥‥ (a)
  • In the parallel light source mode, the mixer 30 receives the object video signal V2 and the object key signal K2 output from the object signal generator 10, and the shadow video signal V6 and the real shadow key signal K6' output from the shadow signal generator 20; it generates a mixed video signal VMIX'' by mixing the object video signal V2 and the shadow video signal V6, and a mixed key signal KMIX'' by mixing the object key signal K2 and the real shadow key signal K6'. These can be expressed as:
  • VMIX'' = K2·V2 + (1 − K2)·K6'·V6, KMIX'' = 1 − (1 − K2)(1 − K6') ‥‥‥ (b)
  • The combiner 40, which will be described later in detail, is a circuit for mixing the mixed video signal and the mixed key signal output from the mixer 30 with the background video signal VBK supplied from the outside. In the point light source mode, it receives the mixed video signal VMIX' and the mixed key signal KMIX' output from the mixer 30, together with the externally supplied background video signal VBK, and mixes the mixed video signal VMIX' and the background video signal VBK on the basis of the mixed key signal KMIX' to generate an output video signal VOUT'. More specifically, this output video signal VOUT' is given by the following equation:
  • VOUT' = KMIX'·VMIX' + (1 − KMIX')·VBK ‥‥‥ (c)
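  • The per-pixel arithmetic of the mixer 30 and the combiner 40 in equations (a) to (c) can be sketched as follows. This is an illustrative sketch, not the device's implementation; signal values are normalized to the range 0 to 1 and the sample numbers are invented for the example.

```python
# Per-pixel sketch of the mixer/combiner arithmetic of equations (a)-(c).
# Signal names follow the text (point light source mode shown).

def mix(k2, v2, k_shadow, v_shadow):
    """Mixer 30: combine object and shadow signals, equation (a)."""
    v_mix = k2 * v2 + (1 - k2) * k_shadow * v_shadow
    k_mix = 1 - (1 - k2) * (1 - k_shadow)
    return v_mix, k_mix

def combine(k_mix, v_mix, v_bk):
    """Combiner 40: key the mixed signal over the background, equation (c)."""
    return k_mix * v_mix + (1 - k_mix) * v_bk

# Example: a fully opaque object pixel (K2 = 1) shows only the object video,
# regardless of the shadow and background values.
v_mix, k_mix = mix(1.0, 0.8, 0.5, 0.3)
print(combine(k_mix, v_mix, 0.2))   # -> 0.8
```

Note how the key mixing 1 − (1 − K2)(1 − K4') behaves like an "or" of the two keys: the background shows through only where neither the object nor the shadow is keyed in.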
  • This world coordinate system is a three-dimensional rectangular coordinate system composed of the X, Y, and Z axes.
  • Assuming that the screen surface 3 exists on the XY plane defined by the X axis and the Y axis orthogonal thereto, the X axis is defined as the horizontal (left/right) direction of the screen surface 3, and the Y axis as the vertical (up/down) direction of the screen surface 3.
  • The depth direction of the screen surface 3 is defined as the positive direction of the Z axis orthogonal to the XY plane, and the front side of the screen surface 3, that is, the side on which the viewpoint PZ from which the screen surface is viewed exists, is defined as the negative direction of the Z axis.
  • The center of the screen plane 3 coincides with the origin of the world coordinate system comprising the X axis, the Y axis, and the Z axis.
  • On the X axis, virtual coordinate values continuing from the inside (origin) of the screen area to the left and right outside are set; on the X axis within the screen area, virtual coordinate values between "−4" and "+4" are set from left to right as the screen surface 3 is viewed from the viewpoint PZ.
  • Similarly, on the Y axis, virtual coordinate values continuing from the inside (origin) of the screen area to the upper and lower outside are set; on the Y axis within the screen area, virtual coordinate values between "−3" and "+3" are set from bottom to top as the screen surface 3 is viewed from the viewpoint PZ.
  • Further, the operator's viewpoint position PZ is virtually set on the Z axis at the position whose coordinate value is "−16".
  • Next, the conversion processing for generating an object video signal V2 from a source video signal V0 will be described with reference to FIGS. 4A and 4B.
  • First, the source video signal V0, which is two-dimensional data, is stored in the frame memory 12 as it is, without image conversion. Therefore, as shown in FIGS. 4A and 4B, the source video signal V0 lies on the XY plane of the world coordinate system, and the image of the source video signal V0 is displayed on the screen surface 3 existing in the XY plane.
  • FIG. 4A shows the state in which the XY plane is viewed from the viewpoint PZ on the Z axis in the space represented by the three-dimensional coordinates of the world coordinate system; this is how the displayed image appears.
  • FIG. 4B shows the state in which the XZ plane is viewed from a viewpoint position on the positive side of the Y axis in the space represented by the three-dimensional coordinates of the world coordinate system. Here the source video signal V0 existing on the XY plane overlaps the screen surface 3.
  • Such a source video signal V0 is subjected to three-dimensional image conversion processing in the world coordinate space by the operator operating the controls on the control panel. That is, a three-dimensional transformation matrix T0, consisting of parameters set for each frame by the operator's operation, is applied to each pixel of the source video signal V0, whereby the source video signal V0 is converted to a three-dimensional spatial position. The video signal thus subjected to the three-dimensional image conversion processing is represented as a three-dimensional object video signal V1.
  • The three-dimensional conversion in FIGS. 4A and 4B is an example in which the source video signal V0 is rotated about 45° with the X axis as the center of rotation, and further translated in the positive direction of the Z axis.
  • The three-dimensional transformation matrix T0 used for the three-dimensional transformation is a matrix of parameters including elements for rotating the source video signal about the X axis, the Y axis, and the Z axis, elements for translating it in the X-axis, Y-axis, and Z-axis directions, and elements for uniformly enlarging and reducing the whole in each of the three-dimensional axis directions. This transformation matrix T0 is a 4-row-by-4-column matrix because the coordinate system for the rotation transformation and the coordinate systems for the translation transformation and the scaling transformation are represented within the same single coordinate system; such a coordinate system is called a homogeneous coordinate system (homogeneous coordinates).
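  • As a rough illustration of such a homogeneous transformation, the following sketch builds a 4-row-by-4-column matrix for the FIG. 4 example (a rotation of about 45° around the X axis followed by a translation toward the positive Z axis). The row-vector convention [x y z 1]·T and the element layout are assumptions for the sketch; the text does not reproduce the element layout of T0.

```python
import math

# Sketch of a 4x4 homogeneous transformation matrix T0 (row-vector
# convention: point @ matrix).  Illustrative only; the patent's exact
# element layout is not reproduced here.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_x(theta):                      # rotation about the X axis
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]]

def translate(tx, ty, tz):             # translation (last row, row-vector form)
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [tx, ty, tz, 1]]

def apply(v, m):                       # transform a point (x, y, z)
    x, y, z = v
    h = [x * m[0][j] + y * m[1][j] + z * m[2][j] + m[3][j] for j in range(4)]
    return h[0] / h[3], h[1] / h[3], h[2] / h[3]

# The FIG. 4 example: rotate ~45 deg about X, then translate toward +Z.
T0 = mat_mul(rot_x(math.radians(45)), translate(0, 0, 2))
print(apply((0.0, 1.0, 0.0), T0))
```

Because rotation, translation, and scaling all live in the same 4x4 matrix, a whole sequence of operations collapses into one matrix product, which is exactly why the homogeneous form is used.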
  • The source video signal V0 existing in the XY plane is converted by the three-dimensional transformation matrix T0 to the three-dimensional position represented by the three-dimensional object video signal V1. The three-dimensional object video signal V1 is then subjected to the following perspective transformation processing.
  • This perspective transformation processing is a transformation process in which the three-dimensional object video signal V1 is seen through onto the XY plane by the perspective transformation matrix P0, as shown in FIGS. 4A and 4B. In other words, the conversion obtains the image of the video signal as seen through onto the XY plane. The video signal thus seen through onto the XY plane is represented as the two-dimensional object video signal V2 (FIG. 4B).
  • The object video signal V2 seen through onto the screen plane 3 on the XY plane is a video image that looks as if the three-dimensional object video signal V1 existed on the depth side of the screen plane 3 as viewed from the virtual viewpoint PZ.
  • The parameter Pz of this perspective transformation matrix P0 is a perspective value for applying perspective when the three-dimensional object video signal V1 is seen through onto the XY plane. That is, in the case of FIG. 4B, the three-dimensional object video signal V1 is inclined by about 45° with respect to the XY plane in the three-dimensional space; when it is viewed from the virtual viewpoint PZ, the part that is far away looks small and the part that is close looks large.
  • The transformed position of the three-dimensional object video signal V1 on the screen plane 3 produced by the perspective transformation changes according to the distance between the virtual viewpoint PZ and the screen plane 3 and the distance between the virtual viewpoint PZ and the three-dimensional object video signal V1; the operator can perform perspective conversion corresponding to the position of the virtual viewpoint PZ by setting the perspective value Pz according to that position. Normally, since the position of the viewpoint PZ is the coordinate value "−16" on the Z axis, the perspective value Pz is set with "1/16" as a standard value.
  • The transformation matrix Tobj for obtaining the two-dimensional object video signal V2 from the source video signal V0 is expressed as the product of the three-dimensional transformation matrix T0 and the perspective transformation matrix P0: Tobj = T0 · P0.
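  • The effect of the perspective value Pz can be sketched numerically as follows. The relation H0 = 1 + z·Pz between depth and the homogeneous scale term is an assumption consistent with the viewpoint at z = "−16" and the standard value Pz = 1/16 described above.

```python
# Sketch of the see-through (perspective) operation of FIG. 4B: a point at
# depth z is projected onto the XY screen plane.  With the viewpoint at
# z = -16, the standard perspective value is Pz = 1/16, and the homogeneous
# scale term is taken here as H0 = 1 + z * Pz (an assumption for the sketch).

PZ = 1.0 / 16.0                        # standard perspective value

def see_through(x, y, z, pz=PZ):
    """Project a 3D point onto the XY plane from the viewpoint (0, 0, -1/pz)."""
    h0 = 1.0 + z * pz                  # homogeneous scale (pseudo depth)
    return x / h0, y / h0, h0

# A point on the screen plane (z = 0) is unchanged; a point pushed to z = 16
# (twice as far from the viewpoint) appears at half size.
print(see_through(4.0, 3.0, 0.0))      # -> (4.0, 3.0, 1.0)
print(see_through(4.0, 3.0, 16.0))     # -> (2.0, 1.5, 2.0)
```

The returned H0 is the same scale term that the later sections reuse as pseudo depth information.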
  • Here, the image processing apparatus using the special effect device of the present invention writes the two-dimensional source video signal V0 supplied from the outside into the frame memory 12 once, and supplies the two-dimensional read address (XM, YM) calculated by the read address generation circuit 14 to the frame memory 12, whereby the spatial image conversion (three-dimensional image conversion) desired by the operator can be performed on the video signal read from the frame memory 12. Accordingly, both the source video signal V0 stored in the frame memory 12 and the object video signal V2 read from the frame memory 12 are two-dimensional data; that is, data on the Z axis of the three-dimensional space is not used in the calculation of this two-dimensional read address.
  • ⁇ 33 ⁇ 21 22 ⁇ 231 ⁇ (4)
  • Here, the two-dimensional position vector [XM, YM] on the frame memory 12 expressed in the homogeneous coordinate system can be written as the vector [xm ym H0]. Likewise, the position vector [Xs, Ys] on the monitor screen 3 expressed in the homogeneous coordinate system can be written as the vector [xs ys 1]. The parameter "H0" of this homogeneous coordinate system is a parameter representing the enlargement/reduction scale of the size of the vector, and is used as pseudo depth information in the present embodiment. This H0 will be described later.
  • Equation (5) is the relational expression that determines, by the matrix T33, the point on the monitor screen 3 corresponding to a point on the frame memory 12: [xs ys 1] = [xm ym H0] · T33 ‥‥‥ (5)
  • That is, the source video signal before conversion is stored in the frame memory 12, and the point on the monitor screen 3 obtained by the transformation matrix T33 corresponds to a point of the frame memory 12 specified by the read address; by determining that point, the spatial image conversion is performed on the source video signal. In other words, the image conversion is performed not when writing to the frame memory 12 but when reading from the frame memory 12.
  • Accordingly, instead of calculating equation (5) to find the point on the monitor screen 3 corresponding to a point on the frame memory 12, it is necessary to find the point on the frame memory 12 corresponding to a point on the monitor screen 3. Therefore, transforming equation (5) gives: [xm ym H0] = [xs ys 1] · T33^-1 ‥‥‥ (6)
  • Next, the position vector [xm ym H0] of the homogeneous coordinate system on the frame memory 12 is converted into the two-dimensional position vector [XM, YM] on the frame memory 12. Since the parameter "H0" is the parameter representing the enlargement/reduction scale of the homogeneous-coordinate position vector, it suffices, in order to convert the homogeneous-coordinate position vector into the two-dimensional position vector, to normalize the parameters "xm" and "ym" indicating the direction of the position vector by the parameter "H0" indicating its scale. Therefore, the parameters "XM" and "YM" of the two-dimensional position vector on the frame memory 12 are expressed by the following equations: XM = xm / H0, YM = ym / H0. Similarly, when the homogeneous-coordinate vector [xs ys 1] on the monitor screen 3 is converted into the two-dimensional position vector [Xs, Ys], the parameters "xs" and "ys" indicating the direction of the position vector are normalized by the parameter "1" indicating the scale of the homogeneous-coordinate position vector. Therefore, the parameters "Xs" and "Ys" of the two-dimensional position vector on the monitor screen 3 are expressed by: Xs = xs, Ys = ys.
  • Accordingly, from equation (10), the two-dimensional read address (XM, YM) supplied to the frame memory 12 can be obtained as: XM = (b11·xs + b21·ys + b31) / (b13·xs + b23·ys + b33), YM = (b12·xs + b22·ys + b32) / (b13·xs + b23·ys + b33), where the bij are the elements of the inverse matrix T33^-1.
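  • The calculation performed by the read address generation circuit 14 can be sketched as follows: invert T33 and, for each screen address (Xs, Ys), divide by the denominator H0. The element layout (row-vector convention) is an assumption consistent with the equations above, and the numeric matrix is invented for the example.

```python
# Sketch of the read address calculation of the read address generation
# circuit 14: invert the 3x3 matrix T33, then map each screen address
# (Xs, Ys) to a frame memory address (XM, YM), normalizing by H0.
# Row-vector convention [xm ym H0] = [xs ys 1] @ B, with B = T33^-1.

def inverse_3x3(t):
    a, b, c = t[0]; d, e, f = t[1]; g, h, i = t[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[adj[r][col] / det for col in range(3)] for r in range(3)]

def read_address(xs, ys, b):
    """Map a screen address to a frame memory address (plus pseudo depth H0)."""
    h0 = b[0][2]*xs + b[1][2]*ys + b[2][2]
    xm = (b[0][0]*xs + b[1][0]*ys + b[2][0]) / h0
    ym = (b[0][1]*xs + b[1][1]*ys + b[2][1]) / h0
    return xm, ym, h0

# Example: a T33 that magnifies the picture 2x; reading must then sample
# the frame memory at half steps.
T33 = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]
B = inverse_3x3(T33)
print(read_address(4.0, 2.0, B))       # -> (2.0, 1.0, 1.0)
```

Reading "backwards" like this guarantees that every screen pixel gets exactly one source sample, which is why the conversion is done at read time rather than at write time.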
  • In this way, the read address (XM, YM) supplied to the frame memory 12 can be calculated from the parameters of the three-dimensional transformation matrix determined by the spatial image conversion desired by the operator.
  • In the present embodiment, the parameter H0 obtained by expression (40) is used as the depth information.
  • The following effects are obtained by using the value of H0 obtained by expression (40) as the depth information, instead of the actual Z coordinate value in the three-dimensional space. That is, when calculating the depth information, it is not necessary to calculate the actual Z coordinate value, so that the calculation for one dimension can be omitted. Thus, a high-speed processor need not be used for the three-dimensional conversion, and the depth information can be calculated even with a low-speed processor.
  • Moreover, since this H0 is a value required anyway when calculating the two-dimensional read address supplied to the frame memories 12 and 13, no special operation is needed to find it. Therefore, even faster calculation can be performed.
  • When the value of H0 becomes large, the normalized read address (xm/H0, ym/H0) of the frame memories 12 and 13 is such that the spatial interval between read sampling addresses in the frame memories 12 and 13 becomes large. When the interval between the spatial read sampling addresses becomes large, the video displayed on the screen surface 55A from the data read out of the frame memories 12 and 13 is reduced.
  • FIG. 7B shows, in the address space of the frame memories 12 and 13, an area ER1 that is read out without using perspective, and an area ER2, intersecting the specified memory area ER1, whose read addresses are generated with the parameter H0 set according to the perspective. When the image data are read out by the read addresses of the area ER2, the image projected on the screen surface 55A is projected in perspective, as shown in FIG. 7A. In contrast, the image data read out from the area ER1 of the frame memories 12 and 13 without using perspective are read at spatial read sampling intervals determined by the same parameter H0 over the entire area ER1, so that the image is displayed in the area ER1X on the screen surface 55A as an image without perspective.
  • On the other hand, when the pixel data of the frame memories 12 and 13 are read out from the area ER2, in which the intervals of the spatial read sampling addresses differ according to the perspective, the image is reduced at the read sampling addresses where the parameter H0 is large; in the region where the interval between the read sampling addresses becomes large, a reduced image is displayed on the screen surface 55A. In this way, the parameter H0 changes the size of the image as depth information, so that an image using perspective can be projected on the screen surface 55A.
  • The shadow coordinate system, like the world coordinate system, is a coordinate system defined by a three-dimensional orthogonal coordinate system consisting of the Xs, Ys, and Zs axes.
  • The plane on which the shadow video signal Vshadow is projected is defined as the XsYs plane of the shadow coordinate system; this plane is called the shadow plane.
  • The direction in which the light source for producing the shadow of the object video signal Vobj exists is defined as the negative direction of the Zs axis of the shadow coordinate system.
  • The special effects device of the present invention has a point light source mode, in which a shadow is generated using a point light source, and a parallel light source mode, in which a shadow is generated using a parallel light source, and the operator can freely select either mode. Further, the operator can set arbitrary angles of the Xs, Ys, and Zs axes of the shadow coordinate system with respect to the X, Y, and Z axes of the world coordinate system.
  • FIGS. 9A and 9B correspond to FIGS. 4A and 4B: FIG. 9A is a view of the XY plane of the world coordinate system as seen from the viewpoint PZ set on the Z axis of the world coordinate system, and FIG. 9B is a view of the YZ plane of the world coordinate system as seen from a position in the positive direction of the X axis of the world coordinate system.
  • The three-dimensional object video signal V1, converted to a three-dimensional spatial position by the three-dimensional transformation matrix T0, is perspective-transformed onto the XsYs plane of the shadow coordinate system by the perspective transformation matrix PSPOT based on a point light source. This means obtaining the video signal seen through onto the XsYs plane of the shadow coordinate system when the three-dimensional object video signal V1 is viewed from the point light source 90. The video signal thus seen through onto the XsYs plane of the shadow coordinate system is represented as the three-dimensional shadow video signal V3 (FIG. 9B).
  • A detailed description of the perspective transformation matrix PSPOT based on the point light source will be given later.
  • Next, the three-dimensional shadow video signal V3 is perspective-transformed by the above-described perspective transformation matrix P0 so as to be seen through onto the XY plane of the world coordinate system. This means obtaining the video signal seen through onto the XY plane of the world coordinate system when the three-dimensional shadow video signal V3 is viewed from the virtual viewpoint PZ on the Z axis. The video signal seen through onto the XY plane of the world coordinate system is represented as the two-dimensional shadow video signal V4.
  • The above processing shown in FIG. 9B is summarized as follows. The conversion processing for obtaining the two-dimensional shadow video signal V4 from the two-dimensional source video signal V0 consists of a three-dimensional conversion step of obtaining the three-dimensional object video signal V1 by three-dimensionally converting the source video signal V0, a step of transforming the three-dimensional object video signal V1 by the perspective transformation matrix PSPOT based on the point light source to obtain the three-dimensional shadow video signal V3, and a step of seeing the three-dimensional shadow video signal V3 through onto the XY plane of the world coordinate system to obtain the two-dimensional shadow video signal V4. Therefore, the transformation matrix Tshadow' for obtaining the two-dimensional shadow video signal V4 from the two-dimensional source video signal V0 is given by the product Tshadow' = T0 · PSPOT · P0.
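  • The three-step chain T0 → PSPOT → P0 can be sketched on a single point as follows. The concrete light position, depth values, and the similar-triangles form of the light-source projection are assumptions for the illustration, consistent with a point light source on the Zs axis.

```python
# Sketch of the three-step shadow chain for the point light source mode:
# 3D conversion (T0), see-through from the light onto the shadow plane
# (the role of PSPOT), then see-through onto the screen (P0).  Each step
# is modeled as a function on points; light position and depths invented.

def t0(p):                              # example 3D conversion: lift to z = 4
    x, y, z = p
    return x, y, z + 4.0

def p_spot(p, z_l=-8.0):                # project from light (0, 0, zL) to z = 0
    x, y, z = p
    s = 1.0 / (1.0 - z / z_l)           # similar-triangles scale factor
    return x * s, y * s, 0.0

def p0(p, pz=1.0/16.0):                 # see through onto the screen (XY plane)
    x, y, z = p
    h = 1.0 + z * pz
    return x / h, y / h

point = (2.0, 2.0, 0.0)                 # a pixel of the source video signal
shadow = p_spot(t0(point))              # where its shadow lands on the plane
print(p0(shadow))
```

Composing the three steps per point is the functional analogue of multiplying the three matrices into the single transformation Tshadow'.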
  • The parameter [Hs] of the homogeneous coordinate system is the parameter of the homogeneous coordinate system when the position vector on the frame memory 22 is represented in the homogeneous coordinate system; it is a parameter representing the enlargement/reduction scale of the size of the vector, and is used as pseudo depth information in this embodiment. This "Hs" will be described later.
  • Although the transformation matrix Tshadow' for obtaining the two-dimensional shadow video signal V4 from the two-dimensional source video signal V0 is a 4-row-by-4-column transformation matrix, the data of the third row and the third column are not substantially used, as described later; the actual two-dimensional read address calculation uses the three-dimensional transformation matrix T33shadow', which holds the required parameters. Applying the three-dimensional transformation matrix T33shadow' converts the position vector [xm' ym' Hs] on the frame memory 22 into the position vector [xs ys 1] on the monitor screen 3. Writing down this relational expression: [xs ys 1] = [xm' ym' Hs] · T33shadow'. This means that the scale "Hs" of the position vector [xm' ym'] is converted so that the scale of the homogeneous-coordinate position vector [xs ys] on the monitor screen 3 becomes "1".
  • The transformation matrix (T33shadow')^-1 is the inverse matrix of the transformation matrix T33shadow'.
  • FIG. 10 is a view of the YsZs plane as seen from the Xs-axis direction of the shadow coordinate system, showing the positional relationship among the point light source 90, the three-dimensional object video V1, and the three-dimensional shadow video V3.
  • The perspective transformation matrix PSPOT based on this point light source is, when the point light source mode using a point light source is selected, the conversion matrix for obtaining the three-dimensional shadow video V3 from the three-dimensional object video V1.
  • FIG. 11A shows the flow of the conversion processing for the point light source 90, and FIG. 11B shows the flow of the conversion processing for the object video signal.
  • In step SP1, the position of the point light source 90 represented in the world coordinate system is transformed into the shadow coordinate system by the transformation matrix F^-1.
  • This is because the perspective transformation matrix Pso' used in step SP5, described later, is not a perspective transformation matrix in the world coordinate system but a perspective transformation matrix in the shadow coordinate system. Therefore, when the three-dimensional object video signal V1 is seen through onto the XsYs plane of the shadow coordinate system by the perspective transformation matrix Pso', the position of the point light source 90 represented in the world coordinate system needs to be converted into the shadow coordinate system.
  • Next, this transformation matrix F^-1 will be specifically described.
  • Let the rotation angle of the Xs axis of the shadow coordinate system with respect to the X axis of the world coordinate system be θx, the rotation angle of the Ys axis of the shadow coordinate system with respect to the Y axis of the world coordinate system be θy, the rotation angle of the Zs axis of the shadow coordinate system with respect to the Z axis of the world coordinate system be θz, and let the origin of the shadow coordinate system be (xso, yso, zso).
  • To obtain the transformation matrix F^-1 from the world coordinate system to the shadow coordinate system, first the transformation matrix F from the shadow coordinate system to the world coordinate system, which is in an inverse-matrix relationship with it, is obtained as the product of rotation matrices and a translation matrix: F = Rx(−θx) · Ry(−θy) · Rz(−θz) · L(xso, yso, zso), where the matrix L(xso, yso, zso) is a matrix for translation and the matrices Rx(−θx), Ry(−θy), and Rz(−θz) are rotation matrices. Accordingly, the inverse transformation F^-1 from the world coordinate system to the shadow coordinate system is the product of the individual inverses in the reverse order: F^-1 = L(−xso, −yso, −zso) · Rz(θz) · Ry(θy) · Rx(θx), where the matrix L(−xso, −yso, −zso) is a matrix for translation.
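  • The relationship between F and F^-1 can be checked with a small sketch: building F as a product of rotation and translation matrices, and F^-1 as the product of the individual inverses in reverse order, must yield the identity when the two are multiplied. For brevity a single Z rotation stands in for the three rotations; the 4x4 element layout (row-vector convention) is illustrative, not quoted from the text.

```python
import math

# Sketch of the shadow-coordinate matrices: F (shadow -> world) and
# F^-1 (world -> shadow) as products of rotation and translation matrices.
# A single Z rotation stands in for the Rx, Ry, Rz chain of the text.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(tx, ty, tz):
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [tx, ty, tz, 1]]

# Shadow coordinates rotated by theta_z about Z, origin at (xso, yso, zso):
theta_z, origin = math.radians(30), (1.0, 2.0, 3.0)
F     = mat_mul(rot_z(-theta_z), translate(*origin))
F_inv = mat_mul(translate(-origin[0], -origin[1], -origin[2]), rot_z(theta_z))

# F . F^-1 must be the identity matrix.
I = mat_mul(F, F_inv)
print(all(abs(I[i][j] - (1.0 if i == j else 0.0)) < 1e-9
          for i in range(4) for j in range(4)))   # -> True
```

Inverting each factor and reversing the order is what makes F^-1 cheap to build: no general 4x4 matrix inversion is needed.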
  • In step SP2, the position of the point light source 90 is moved to the position of the virtual point light source 91 on the Zs axis by the translation matrix TXSYS^-1. The reason for this is as follows: in order to obtain the three-dimensional shadow video V3 for the three-dimensional object video V1, the three-dimensional object video V1 is viewed from the position of the point light source 90 and seen through onto the XsYs plane of the shadow coordinate system, and for this see-through operation the point light source serving as the viewpoint must be located on the Zs axis. Therefore, the translation matrix TXSYS^-1 translates the position of the point light source 90 to the position of the virtual point light source 91 on the Zs axis. If the position of the point light source 90 in the shadow coordinate system is (xL, yL, zL), this translation matrix TXSYS^-1 is the matrix that translates by −xL in the Xs-axis direction and by −yL in the Ys-axis direction.
  • By step SP1 and step SP2, the conversion processing for the point light source ends.
  • In step SP3, as in step SP1, the three-dimensional object video signal V1 represented in the world coordinate system is transformed into the shadow coordinate system by the transformation matrix F^-1.
  • This is because the perspective transformation matrix Pso' used in step SP5, described later, is not a perspective transformation matrix in the world coordinate system but a perspective transformation matrix in the shadow coordinate system. Therefore, to see the three-dimensional object video V1 through onto the XsYs plane of the shadow coordinate system by the perspective transformation matrix Pso', the three-dimensional object video V1 represented in the world coordinate system needs to be converted into the shadow coordinate system. In step SP4, as in step SP2, the three-dimensional object video signal V1 is translated parallel to the XsYs plane of the shadow coordinate system by the translation matrix TXSYS^-1. The translated video signal is represented as the virtual three-dimensional object video signal V1'. The translation matrix TXSYS^-1 is used here so that the relative positional relationship of the virtual object video signal V1' to the virtual point light source 91 is the same as that of the video signal V1 to the point light source 90.
  • In step SP5, the perspective transformation matrix P_s0' is obtained, and the virtual three-dimensional object video signal V1' is seen through onto the XsYs plane of the shadow coordinate system; the resulting video signal is denoted as the virtual three-dimensional shadow video signal V3'.
  • This virtual three-dimensional shadow video signal V3' is the video signal obtained when the virtual three-dimensional object video signal V1' is viewed from the virtual point light source 91, with the virtual point light source 91 as the viewpoint, and seen through onto the XsYs plane.
  • As for this perspective transformation matrix P_s0', from the relationship between the perspective transformation matrix P0 shown in equation (2) and the viewpoint PZ, the viewpoint at the time of this perspective transformation can be considered to be at the position of the virtual point light source 91, that is, at the position (0, 0, z_L).
  • Therefore, by replacing "P_z" with "-1/z_L", the perspective transformation matrix P_s0' is obtained.
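As an illustration of this replacement (a minimal sketch, assuming the XsYs plane lies at z = 0; the function name is ours, and the homogeneous conventions are an assumed reading of the text), projecting a point from a light on the Zs axis can be written as:

```python
def project_from_point_light(p, z_l):
    """Project point p = (x, y, z) onto the plane z = 0, using the
    light at (0, 0, z_l) as the center of projection.

    Equivalent to a homogeneous transform whose perspective entry is
    -1/z_l: w = 1 - z/z_l, followed by division by w.
    """
    x, y, z = p
    w = 1.0 - z / z_l          # homogeneous divisor from the -1/z_l term
    if abs(w) < 1e-9:
        raise ValueError("point lies in the light's plane; no projection")
    return (x / w, y / w, 0.0)

# A point midway between the plane and the light maps twice as far out:
print(project_from_point_light((1.0, 2.0, 5.0), 10.0))  # (2.0, 4.0, 0.0)
```

The printed point (2, 4, 0), the input point (1, 2, 5), and the light (0, 0, 10) are collinear, which is exactly the shadow condition.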
  • Next, the virtual three-dimensional shadow video signal V3' is translated along the XsYs plane of the shadow coordinate system by the translation matrix T_XSYS. The translated video signal is denoted as the three-dimensional shadow video signal V3.
  • As shown in FIG. 10, the three-dimensional shadow video signal V3 is the video signal obtained when the three-dimensional object video signal V1 is viewed from the position of the point light source 90, with the point light source 90 as the viewpoint, and seen through onto the XsYs plane.
  • The reason for this movement is that, in step SP4, the three-dimensional object video signal V1 was translated by the translation matrix T_XSYS⁻¹; this step undoes that parallel movement processing by the translation matrix T_XSYS⁻¹.
  • Since the translation matrix T_XSYS⁻¹ and the translation matrix T_XSYS have an inverse-matrix relationship, the translation matrix T_XSYS is obtained accordingly.
  • In step SP7, the three-dimensional shadow video signal V3 represented in the shadow coordinate system is converted into the world coordinate system by the transformation matrix F represented by equation (42).
  • Thus, a three-dimensional shadow video signal V3 represented by coordinate values in the world coordinate system is obtained.
  • The processing shown in the flow of FIG. 11B can be summarized as follows.
  • The processing for obtaining the three-dimensional shadow video signal in the world coordinate system from the three-dimensional object video signal V1 in the world coordinate system consists of: converting the three-dimensional object video signal V1 from the world coordinate system into the shadow coordinate system (SP3); projecting the three-dimensional object video signal onto the XsYs plane of the shadow coordinate system to obtain the three-dimensional shadow video signal in the shadow coordinate system (SP4 to SP6); and converting the three-dimensional shadow video signal from the shadow coordinate system back into the world coordinate system (SP7).
  • That is, the transformation matrix T_shadow' is composed of the transformation matrix F⁻¹, the translation matrix T_XSYS⁻¹, the perspective transformation matrix P_s0', the translation matrix T_XSYS, and the transformation matrix F.
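The five-step chain can be sketched as plain 4×4 matrix algebra in the row-vector convention. This is a simplified stand-in, not the patent's exact matrices: F is taken as the identity (so shadow coordinates coincide with world coordinates), and the light position is an assumed example.

```python
def matmul(a, b):
    """4x4 matrix product (row-vector convention: v' = v . M)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def identity():
    return [[float(i == j) for j in range(4)] for i in range(4)]

def translation(dx, dy, dz):
    m = identity()
    m[3][0], m[3][1], m[3][2] = dx, dy, dz
    return m

def point_light_projection(z_l):
    """Projection onto z = 0 from a light on the z axis at (0, 0, z_l):
    the perspective term "P_z" replaced by -1/z_l, z flattened to 0."""
    m = identity()
    m[2][2] = 0.0
    m[2][3] = -1.0 / z_l
    return m

def apply(p, m):
    """Apply m to point p = (x, y, z), with homogeneous divide."""
    x, y, z = p
    r = [x * m[0][j] + y * m[1][j] + z * m[2][j] + m[3][j] for j in range(4)]
    return tuple(c / r[3] for c in r[:3])

# With F = identity, the chain reduces to: move the light onto the
# z axis, project from it, then move back.
x_l, y_l, z_l = 4.0, 0.0, 8.0      # assumed off-axis light position
chain = matmul(matmul(translation(-x_l, -y_l, 0.0),
                      point_light_projection(z_l)),
               translation(x_l, y_l, 0.0))
print(apply((6.0, 2.0, 4.0), chain))   # (8.0, 4.0, 0.0)
```

The result agrees with casting a ray from the light (4, 0, 8) through the point (6, 2, 4) down to z = 0, which is the geometric definition the text gives.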
  • This transformation matrix T_shadow' is a matrix of 4 rows and 4 columns, but, as in equation (4), a matrix excluding the components in the Z-axis direction (the third row and third column) is used.
  • The resulting matrix is denoted T33_shadow', and the parameters of the inverse matrix (T33_shadow')⁻¹ of the matrix T33_shadow' are denoted b11' to b33' as in the following equation.
  • FIGS. 12A and 12B correspond to FIGS. 9A and 9B: FIG. 12A is a view of the XY plane as seen from the viewpoint PZ set on the Z axis of the world coordinate system, and FIG. 12B is a view of the YZ plane in the world coordinate system as seen from a position in the positive direction of the X axis of the world coordinate system.
  • The three-dimensional object video signal V1, converted to a position in three-dimensional space by the three-dimensional transformation matrix T0, is projected onto the XsYs plane of the shadow coordinate system by the perspective transformation matrix P_PARA based on a parallel light source.
  • The video signal seen through onto the XsYs plane of the shadow coordinate system is denoted as the three-dimensional shadow video signal V5.
  • That is, the perspective transformation matrix P_PARA based on the parallel light is a transformation matrix for perspective-transforming the three-dimensional object video signal V1 to obtain the three-dimensional shadow video signal V5.
  • The three-dimensional shadow video signal V5 is projected onto the XY plane of the world coordinate system by the perspective transformation matrix P0 described earlier. This means that, when the three-dimensional shadow video signal V5 is viewed from the virtual viewpoint PZ on the Z axis, a video signal seen through the XY plane of the world coordinate system is obtained.
  • As shown in FIG. 12B, the video signal seen through the XY plane of the world coordinate system is denoted as the two-dimensional shadow video signal V6.
  • That is, the transformation for obtaining the two-dimensional shadow video signal V6 consists of the three-dimensional transformation matrix T0 for converting the three-dimensional object video signal V1 into three-dimensional space, the perspective transformation matrix P_PARA based on the parallel light source for converting it into the shadow coordinate system, and the perspective transformation matrix P0:
  T_shadow'' = T0 · P_PARA · P0 … (60)
  • The read address generation circuit 24 of the shadow signal generator 20 relates the read address and position vector (X_M', Y_M') supplied to the frame memories and the screen address and position vector (X_s, Y_s) on the monitor screen.
  • FIG. 13A shows the flow of the conversion process for the parallel light source 70, and FIG. 13B shows the flow of the conversion process for the three-dimensional object video signal V1.
  • In step SP11, the coordinates of the parallel light source 70, defined by spherical coordinates in the world coordinate system, are converted into rectangular coordinates in the world coordinate system.
  • This is because the position of a parallel light source is generally expressed in a spherical coordinate system rather than in a rectangular coordinate system.
  • Here, the spherical coordinate system is a coordinate system in which the position of a parallel light source is represented by a radius (r), a latitude (α), and a longitude (β).
  • FIG. 14A shows the relationship between the rectangular coordinates and the spherical coordinates in the world coordinate system. As shown in FIG. 14A, the relationship between the rectangular coordinate system and the spherical coordinate system is such that the reference for the latitude (α) is the minus direction of the Y axis and the reference for the longitude (β) is the XY plane. That is, the equatorial plane of the spherical coordinates coincides with the XY plane of the rectangular coordinates, and the direction of latitude 0 (rad) and longitude 0 (rad) coincides with the minus direction of the Y axis.
  • When the position of the parallel light source 70 defined by spherical coordinates is (r, α, β) and the position of the parallel light source converted into rectangular coordinates is (x_L, y_L, z_L), the light source position (x_L, y_L, z_L) is given by equation (61). By substituting the light source position (r, α, β) of the spherical coordinate system set by the operator into equation (61), the light source position (x_L, y_L, z_L) converted into rectangular coordinates is obtained.
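Since equation (61) itself is not reproduced above, the following sketch assumes one concrete reading of the stated convention (equatorial plane = XY plane, latitude and longitude 0 pointing along the minus-Y axis); the exact axis assignments of the two angles are assumptions:

```python
import math

def spherical_to_rect(r, alpha, beta):
    """Convert a light position from the document's spherical convention
    to rectangular coordinates. Assumed reading of equation (61):
    alpha rotates about the Z axis starting from the minus-Y direction,
    beta is the elevation measured from the XY (equatorial) plane.
    """
    x = r * math.sin(alpha) * math.cos(beta)
    y = -r * math.cos(alpha) * math.cos(beta)
    z = r * math.sin(beta)
    return (x, y, z)

# Latitude 0, longitude 0 points along the minus-Y axis, as stated:
print(spherical_to_rect(2.0, 0.0, 0.0))   # (0.0, -2.0, 0.0)
```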
  • In step SP12, the coordinates of the parallel light source are transformed from the world coordinate system into the shadow coordinate system by the transformation matrix F⁻¹. Since the transformation matrix F⁻¹ is as described in step SP1, its description is omitted here.
  • When the position of the light source converted into the shadow coordinate system is (x_L', y_L', z_L'), the light source vector in the world coordinate system expressed in homogeneous coordinates is [x_L y_L z_L 1], and the light source vector in the shadow coordinate system expressed in homogeneous coordinates is [x_L' y_L' z_L' 1].
  • In step SP13, the position (x_L', y_L', z_L') of the light source in rectangular coordinates in the shadow coordinate system obtained in step SP12 is expressed in spherical coordinates in the shadow coordinate system.
  • FIG. 14B is a diagram showing the relationship between the rectangular coordinates and the spherical coordinates in the shadow coordinate system. As shown in FIG. 14B, the relationship between the rectangular coordinate system and the spherical coordinate system in the shadow coordinate system is such that the reference for the latitude (α_s) is the minus direction of the Ys axis and the reference for the longitude (β_s) is the XsYs plane. That is, the equatorial plane of the spherical coordinates coincides with the XsYs plane of the rectangular coordinates, and the direction of latitude 0 (rad) and longitude 0 (rad) coincides with the minus direction of the Ys axis.
  • In step SP14, the position (r', α', β') of the parallel light source obtained in step SP13 is rotated by -α' (rad) around the Zs axis of the shadow coordinate system by the rotation matrix Rz(-α'). The position of the parallel light source after this rotation processing can be expressed as (r', 0, β').
  • The light source rotated by the rotation matrix Rz(-α') is denoted as the virtual parallel light source 71. The arrangement obtained by rotating the position of the parallel light source 70 in this way is shown in FIG. 13.
  • In step SP17, the virtual three-dimensional object video signal V1'' is seen through onto the XsYs plane of the shadow coordinate system by the perspective transformation matrix P_s0'.
  • Here, the rotation matrix Rz(-α') is a rotation matrix around the Zs axis, and the rotation matrix Rz(-α') is expressed as follows.
  • In step SP15, as in step SP12, the three-dimensional object video signal V1 represented in the world coordinate system is transformed into the shadow coordinate system by the transformation matrix F⁻¹.
  • This is because, as with the processing for the parallel light source described in step SP12, the perspective transformation matrix P_s0' described later is not a perspective transformation matrix in the world coordinate system but a perspective transformation matrix in the shadow coordinate system. To see the three-dimensional object video V1 through the XsYs plane of the shadow coordinate system, each pixel position of the three-dimensional object video V1 represented in the world coordinate system must be converted into the shadow coordinate system.
  • In step SP16, the object video signal V1, converted into the shadow coordinate system in step SP15, is rotated by -α' (rad) around the Zs axis by the rotation matrix Rz(-α'). This is because, as shown in FIG. 15, the position (r', α', β') of the parallel light source 70 was rotated about the Zs axis by -α' (rad) by the rotation matrix Rz(-α') in step SP14, so the object video signal V1 must be rotated correspondingly to match the rotation processing of the parallel light source 70.
  • The three-dimensional object video signal rotated by -α' (rad) around the Zs axis by the rotation matrix Rz(-α') is denoted as the virtual three-dimensional object video signal V1''. Accordingly, the relative positions of the three-dimensional object video signal V1 and the parallel light source 70 with respect to the origin of the shadow coordinate system, and of the virtual three-dimensional object video signal V1'' and the virtual parallel light source 71 with respect to the origin of the shadow coordinate system, are exactly the same.
  • FIG. 14A is a drawing that three-dimensionally represents the positional relationship between the virtual three-dimensional object video signal V1'' and the virtual three-dimensional shadow video signal V5''. Let a pixel point on the virtual object video signal V1'' be denoted (x0, y0, …).
  • In step SP18, the virtual shadow video signal V5'' is rotated around the Zs axis by the rotation matrix Rz(α').
  • The rotationally moved video signal is denoted as the three-dimensional shadow video signal V5, as shown in FIG. 15. The three-dimensional shadow video signal V5 is the video signal obtained by seeing the three-dimensional object video signal V1 through, by the parallel light source 70, onto the XsYs plane of the shadow coordinate system. The reason for this movement is that, in step SP16, the three-dimensional object video signal V1 was rotationally moved by the rotation matrix Rz(-α'), and this step undoes that rotational movement processing. In other words, since the rotation matrix Rz(-α') and the rotation matrix Rz(α') have an inverse-matrix relationship, the rotation matrix Rz(α') is obtained accordingly.
  • In step SP19, the three-dimensional shadow video signal V5 represented in the shadow coordinate system is converted into the world coordinate system by the transformation matrix F represented by equation (42).
  • Thus, a three-dimensional shadow video signal V5 represented by coordinate values in the world coordinate system is obtained.
  • That is, the perspective transformation matrix P_PARA is composed of the transformation matrix F⁻¹, the rotation matrix Rz(-α'), the perspective transformation matrix P_s0', the rotation matrix Rz(α'), and the transformation matrix F. The overall transformation is
  T_shadow'' = T0 · P_PARA · P0
  • The calculation method is exactly the same as the calculation shown in equation (14).
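The effect of the parallel-light matrix P_PARA can be illustrated by casting each point along a fixed light direction onto the shadow plane (a hedged sketch; the function and the direction vector are illustrative, not the patent's exact formulation):

```python
def project_parallel(p, light_dir):
    """Cast point p = (x, y, z) along the parallel-light direction
    light_dir = (dx, dy, dz) onto the shadow plane z = 0."""
    x, y, z = p
    dx, dy, dz = light_dir
    if dz == 0:
        raise ValueError("light direction parallel to the shadow plane")
    t = -z / dz                     # steps along the ray to reach z = 0
    return (x + t * dx, y + t * dy, 0.0)

# Light coming straight down the z axis drops the shadow directly below:
print(project_parallel((1.0, 2.0, 3.0), (0.0, 0.0, -1.0)))  # (1.0, 2.0, 0.0)
```

Unlike the point-light case, no homogeneous divide is involved: every point is shifted by the same direction, which is what makes the light "parallel".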
  • Next, a shadow coordinate system for defining the shadow plane onto which the shadow of the object video signal is projected is set.
  • The special effect device of the present invention has an origin setting mode for automatically setting the origin of the shadow coordinate system.
  • In this mode, the operator specifies a point on the source video signal V0. The specified point is converted into three-dimensional space by the three-dimensional transformation matrix T0, and the corresponding point on the object video signal Vobj converted into three-dimensional space is set as the origin of the shadow coordinate system. Accordingly, the origin of the shadow coordinate system is set at a point on the three-dimensional object video signal Vobj, and as a result the three-dimensional object video signal Vobj lies on the shadow plane.
  • Let the upper right point of the source video signal V0 be a, the upper left point be b, the lower left point be c, and the lower right point be d, and let the corresponding points on the three-dimensional object video signal Vobj be a', b', c', and d'. Here, the case where the operator specifies the point d on the source video signal V0 is explained as an example.
  • The corresponding point on the object video signal Vobj obtained by transforming the point d (x_d, y_d, 0) by the three-dimensional transformation matrix T0 is denoted d' (x_d', y_d', z_d'). The relationship between d (x_d, y_d, 0) and d' (x_d', y_d', z_d') follows from this transformation.
  • By substituting (x_d', y_d', z_d') obtained by equation (74) for the origin (x_s0, y_s0, z_s0) of the shadow coordinate system, which is a parameter of the transformation matrix of equation (42) and the translation matrix L represented by equation (47), the origin of the shadow coordinate system is set at a point on the three-dimensional object video signal V1.
  • Thus, the shadow coordinate system is set so that the three-dimensional object video signal V1 exists on the shadow plane in three-dimensional space. Accordingly, the special effect device of the present invention can obtain a natural effect in which the shadow of an object standing on the ground is cast on the ground by light from the sun.
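The origin-setting step can be sketched as follows, with an illustrative T0 (a pure translation here; the real T0 is whatever three-dimensional transformation the operator has specified):

```python
def transform_point(p, m):
    """Row-vector transform of p = (x, y, z) by 4x4 matrix m, with
    homogeneous divide; used here to send screen point d through T0."""
    x, y, z = p
    r = [x * m[0][j] + y * m[1][j] + z * m[2][j] + m[3][j] for j in range(4)]
    return tuple(c / r[3] for c in r[:3])

# Illustrative T0: pure translation by (5, 0, -2) into 3D space.
t0 = [[1.0, 0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0, 0.0],
      [5.0, 0.0, -2.0, 1.0]]

d = (10.0, -8.0, 0.0)               # lower-right corner of V0, z = 0
origin = transform_point(d, t0)      # becomes (x_s0, y_s0, z_s0)
print(origin)                        # (15.0, -8.0, -2.0)
```

Because the shadow-plane origin is taken from a point on the transformed object itself, the object necessarily touches the shadow plane, which is what makes the cast shadow look grounded.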
  • Next, the real shadow generator 50 will be described with reference to FIG. 18.
  • The real shadow generator 50 comprises, as a whole, a gain control circuit 500, a horizontal LPF 501, a vertical LPF 502, and a real shadow control unit 503. The real shadow control unit 503 includes a ROM 504 that stores table data indicating the gain characteristic of the gain control circuit 500 and the filtering characteristics of the horizontal and vertical LPFs 501 and 502, described later.
  • The gain control circuit 500 is a circuit that controls the gain applied to the shadow key signal (K4 or K6) supplied from the frame memory 23.
  • The vertical LPF 502 low-pass filters (LPF) the vertical frequency components of the shadow key signal (K4, K6) output from the horizontal LPF 501.
  • The real shadow control unit 503 receives the parameter Hs used as depth information in the present embodiment from the read address generation circuit 24, and, based on the predetermined gain characteristic and filtering characteristics stored in the ROM 504, controls the gain control circuit 500 and the horizontal and vertical LPFs 501 and 502, respectively.
  • Here, the predetermined gain characteristic is data that realizes the intended gain characteristic, and the predetermined filtering characteristic is data that realizes the intended filtering characteristic.
  • The final image to be obtained is one in which, as shown in FIG. 19(A), the farther the shadow S is from the object O, the fainter and more blurred the shadow S itself becomes.
  • The gain characteristic (GAIN) indicating the density of the shadow S is as shown in FIG. 19(B).
  • As the parameter Hs used as depth information becomes larger, the gain applied to the shadow key signal (K4, K6) is lowered, and as the parameter Hs becomes smaller, the gain applied to the shadow key signal (K4, K6) is raised. In other words, the shadow S becomes fainter as it moves away from the object O.
  • The characteristic data shown in FIG. 19(B), that is, the table data consisting of the parameter Hs and the gain data, is stored in the ROM 504 in the real shadow control unit 503 shown in FIG. 18.
  • The blur amount (softness) of the shadow S, that is, the filtering characteristic (FILTER) of the low-pass filters, is as shown in FIG. 19(C). As the parameter Hs becomes larger, the pass bands of the horizontal and vertical low-pass filters 501 and 502 become narrower, and as the parameter Hs becomes smaller, the pass bands of the horizontal and vertical low-pass filters 501 and 502 become flatter, that is, wider. That is, the farther the shadow S is from the object O, the more the outline of the shadow S is blurred.
  • The combined output is represented by
  V_B(OUT) = K_B·V_B + (1 - K_B)·K_A·V_A … (76)
  and thus a combined output such as V_B(OUT) is obtained.
  • When the first video signal V_A is above (closer to the screen than) the second video signal V_B, the priority signal H_0A is taken into account for the composite video signal V_A(OUT) corresponding to the first video signal V_A. Likewise, the priority of the second video signal V_B when the first video signal V_A is above can be represented using the priority signals H_0A and (1 - H_0A). Therefore, taking the priority signal (1 - H_0A) into account for V_B(OUT), the output of V_B is
  V_B(OUT) × (1 - H_0A) = {K_B·V_B + (1 - K_B)·K_A·V_A} × (1 - H_0A)
  and the overall output is
  V_OUT = V_A(OUT) × H_0A + V_B(OUT) × (1 - H_0A) … (81)
  • The area other than the areas in which the images corresponding to the first and second video signals V_A and V_B are displayed can be defined, using the key signals K_A and K_B of the first and second video signals V_A and V_B, in the form of the product (1 - K_A)·(1 - K_B). Therefore, the key signal K_OUT for the area in which either the video signal V_A or V_B is displayed is
  K_OUT = 1 - (1 - K_A)·(1 - K_B) … (83)
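Equations (76) and (83) can be checked numerically (the function names are ours):

```python
def combine_keys(k_a, k_b):
    """Equation (83): key for the area where either V_A or V_B shows."""
    return 1.0 - (1.0 - k_a) * (1.0 - k_b)

def combine_under(v_a, k_a, v_b, k_b):
    """Equation (76): V_B keyed over V_A; V_A shows where K_B cuts out."""
    return k_b * v_b + (1.0 - k_b) * k_a * v_a

print(combine_keys(1.0, 0.0))   # 1.0 - fully keyed where A is solid
print(combine_keys(0.5, 0.5))   # 0.75
```

The product form makes the "neither displayed" area multiplicative, so the combined key never exceeds 1 and collapses to either key when the other is zero.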
  • The combiner 22 of the special effect device of the present invention synthesizes two video signals of images that extend obliquely in the XYZ space behind the screen surface 3A shown in FIG. 20(A) so as to intersect each other, that is, images having depth information H0 and Hs: the object video signal V2 and the shadow video signal (V4 in the point light source mode, V6 in the parallel light source mode). These are keyed by the key signals K2 (FIG. 20(B)) and K4 (K4 in the point light source mode, K6 in the parallel light source mode) (FIG. 20(C)), and are synthesized, using the configuration of FIG. 21, on the basis of the priority signals H_0A and (1 - H_0A) of the two image portions (FIGS. 20(D) and 20(E)).
  • The priority signal H_0A is information indicating the display priority of the object video signal V2 with respect to the shadow video signal V4 or V6. When H_0A = 1, the priority of the object video signal V2 is 100 percent, and the shadow video signal (V4 or V6) is not displayed at all. That is, the object video signal V2 can be said to be opaque.
  • When H_0A = 0.5, the object video signal V2 is semi-transparent, so that the shadow video signal V4 shows through. That is, the object video signal V2 is displayed at 50% and the shadow video signal (V4 or V6) is displayed at 50%, so the object video signal V2 and the shadow video signal (V4 or V6) are displayed mixed together.
  • FIG. 21 shows the configuration of the combiner 30. As a whole, the combiner 30 comprises a first video signal synthesizing section 41 for calculating the synthesis ratio of the object video signal V2, a second video signal synthesizing section 42 for calculating the synthesis ratio of the shadow video signal (V4 or V6), a key signal synthesizing section 43 for forming a key signal for the object video signal V2 and the shadow video signal (V4 or V6), a composite output section 44 for synthesizing and outputting the object video signal V2 and the shadow video signal (V4 or V6), and a priority signal generating circuit 48.
  • The first video signal synthesizing section 41 comprises a "1" coefficient circuit 49, a "1 - K'4 (or 1 - K'6)" coefficient circuit 50, a synthesis circuit 45, and multiplication circuits 46 and 47.
  • The synthesis circuit 45 calculates the following equation based on the first, second, and third input data D1, D2, and D3.
  • The output data D4 is supplied to the multiplication circuits 46 and 47. The multiplication circuit 46 receives the output data D4 and the object key signal K2 input to the combiner 30, and the output D4 × K2 is obtained. The output D4 × K2 and the object video signal V2 are input to the multiplication circuit 47, and the output D4 × K2 × V2 is obtained. Therefore, the output data S11 output from the first video signal synthesizing section 41 is D4 × K2 × V2.
  • The second video signal synthesizing section 42 comprises a "1 - H_0A" coefficient circuit 52, a "1" coefficient circuit 53, a "1 - K2" coefficient circuit 54, a synthesis circuit 51 that executes the same operation as equation (75), and multiplication circuits 152 and 153.
  • The input data D1, D2, and D3 input to the synthesis circuit 51 are, respectively, the "1 - H_0A" output from the "1 - H_0A" coefficient circuit 52, the "1" output from the "1" coefficient circuit 53, and the "1 - K2" output from the "1 - K2" coefficient circuit 54, so the output data D4 output from the synthesis circuit 51 is {(1 - H_0A) + H_0A·(1 - K2)}.
  • The multiplication circuit 55 is given the "(1 - K'4 (or K'6))" output from the coefficient circuit 50 and the "(1 - K2)" output from the coefficient circuit 54, and outputs "(1 - K2)·(1 - K'4 (or K'6))". This output is supplied to the "1 - (1 - K2)·(1 - K'4 (or K'6))" operation circuit 56.
  • In the composite output section 44, the "1 - (1 - K2)·(1 - K'4 (or K'6))" output is converted to its reciprocal in an inverting circuit 57, and a multiplication circuit 58 multiplies this by the sum output of the adder circuit 59 to obtain the composite video signal V0, which is sent from the combiner 30 to the mixer 4. Since the adder circuit 59 is supplied with the combined output data S11 and S12 of the first and second video signal synthesizing sections 41 and 42, the composite video signal V0 is
  V0 = [{H_0A + (1 - H_0A)·(1 - K'4 (or K'6))}·K2·V2 + {(1 - H_0A) + H_0A·(1 - K2)}·K'4 (or K'6)·V4 (or V6)] / {1 - (1 - K2)·(1 - K'4 (or K'6))}
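A combiner of this form can be sanity-checked in the two limiting cases: an opaque object fully in front should leave only V2, and the shadow fully in front of the object area should leave only V4. The formula below is our reconstruction of the output equation and may differ from the original in detail:

```python
def combiner_out(v2, k2, v4, k4, h_oa):
    """Composite video V0 from object (V2, K2) and shadow (V4, K'4),
    weighted by the priority signal H_0A (reconstructed form)."""
    num = ((h_oa + (1.0 - h_oa) * (1.0 - k4)) * k2 * v2
           + ((1.0 - h_oa) + h_oa * (1.0 - k2)) * k4 * v4)
    den = 1.0 - (1.0 - k2) * (1.0 - k4)
    return num / den

# Opaque object fully in front: only the object video remains.
print(combiner_out(v2=0.8, k2=1.0, v4=0.3, k4=1.0, h_oa=1.0))  # 0.8
# Shadow fully in front of the object area: only the shadow remains.
print(combiner_out(v2=0.8, k2=1.0, v4=0.3, k4=1.0, h_oa=0.0))  # 0.3
```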
  • The mixed video signal V'_mix (V''_mix in the parallel light source mode) and the mixed key signal K'_mix (K''_mix in the parallel light source mode) actually output from the combiner 30 are expressed by equations (89) and (88), respectively.
  • The images O and A have the depth information H0 and Hs, respectively. The video signals V2 and V4 (or V6) representing the first and second images O and A are keyed by the key signals K2 and K'4 (or K'6). In the range where the depth information H0 indicates that the image O is closer to the screen surface 3A than the image A, the priority signal H_0A (FIG. 20(D)) is generated in the priority signal generation circuit 48 so as to output the object video signal V2.
  • In the ranges W1 and W3, where the depth information Hs indicates that the image A is closer to the screen surface 3A than the image O, the priority signal for outputting the shadow video signal V4 (or V6) is calculated as (1 - H_0A) (FIG. 20(E)).
  • As a result, in the ranges W2 and W4 where the priority of the object image O is high, the object image is projected onto the screen surface 3A, and in the other ranges the shadow image A is displayed on the screen surface 3A.
  • The priority signal generating circuit 48 is configured as shown in FIG. 22.
  • The depth information H0 of the input object video signal V2 and the depth information Hs of the input shadow video signal V4 (or V6) (FIG. 23(A)) are received by a subtraction circuit 65, and the subtraction output S21 (FIG. 23(B)) is applied to a multiplication circuit 66.
  • The limiter 68 limits the value of the multiplication output S23 to the range of +0.5 to -0.5 as shown in FIG. 23(C); thus the limiter output S24 gives a value corresponding to the difference of the depth information.
  • In this way, the priority signal generation circuit 48 outputs the priority signal H_0A, whose value switches within the range of 1 to 0.
  • The slope of the priority signal H_0A varies according to the magnitude of the difference between the depth information H0 and Hs of the first and second images O and A.
  • Therefore, the priority signal H_0A does not suddenly switch from 1 to 0 (or 0 to 1), and in the boundary area the images O and A are seen through so as to overlap each other (FIG. 20(G)). In other words, since the image O and the image A are mixed and displayed, and the mixing ratio of the image O and the image A changes gradually, a display without discomfort can be achieved.
  • The width of the boundary area and the way the image changes can be adjusted to the degree required by the operator by adjusting the value of the gain output S22 of the gain register 67.
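The subtract, gain, and limit chain can be sketched as follows. The sign convention (here, a larger depth value is taken to mean closer to the screen) and the final +0.5 offset into the 0-to-1 range are assumptions, since the exact circuit constants are not given:

```python
def priority_signal(h_o, h_s, gain=4.0):
    """Priority H_0A from the depth difference, per the circuit sketch:
    subtract, scale by an adjustable gain, clip to +/-0.5, then shift
    into the 0..1 range (the +0.5 offset is an assumed final stage)."""
    s21 = h_o - h_s                  # subtraction circuit 65
    s23 = s21 * gain                 # multiplication circuit 66
    s24 = max(-0.5, min(0.5, s23))   # limiter 68
    return 0.5 + s24

print(priority_signal(10.0, 30.0))   # 0.0 - shadow well in front
print(priority_signal(30.0, 10.0))   # 1.0 - object well in front
print(priority_signal(20.0, 20.0))   # 0.5 - equal depth: 50/50 mix
```

A larger `gain` narrows the soft boundary zone; a smaller `gain` widens it, which matches the role of the gain register 67 described above.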
  • The combiner 22 has a depth information synthesis circuit with a NAM mix circuit configuration: from the depth information of the images O and A it selects the value representing the position of the image closer to the screen surface 3A, and sends the selected value from the combiner 22 as the composite depth information H0/Hs.
  • As described above, the mixed video signal (V'_mix in the point light source mode, V''_mix in the parallel light source mode) and the mixed key signal (K'_mix in the point light source mode, K''_mix in the parallel light source mode) are expressed by the above-mentioned equations (89) and (88), respectively.
  V_OUT = K'_mix·V'_mix + (1 - K'_mix)·V_BK (point light source mode)
  V_OUT = K''_mix·V''_mix + (1 - K''_mix)·V_BK (parallel light source mode)
  • The real shadow generator 50 controls the gain of the real shadow key signal that it outputs (K'4 in the point light source mode, K'6 in the parallel light source mode).
  • In this case the output is
  V_OUT = H_0A·K2·V2 + {(1 - H_0A) + H_0A·(1 - K2)}·V4
  in the point light source mode, and the corresponding expression with V6 in the parallel light source mode.
  • That is, in both the point light source mode and the parallel light source mode, in the portions other than H_0A·K2·V2, the shadow video signal (V4 or V6) is output at 100%.
  • Since the shadow video signal is black on the screen surface, in this case it is projected on the monitor screen as a deep black.
  • The operator operates the three-dimensional pointing device, keys, and the like provided on the control panel 5 to input the parameters used for calculating the read addresses in the special effect device of the present invention.
  • The parameters necessary for calculating the read addresses include the perspective value P_z, the rotation angles (θx, θy, θz) about the Xs, Ys, and Zs axes of the shadow coordinate system, the origin (x_s0, y_s0, z_s0) of the shadow coordinate system, the type of light source indicating whether it is a parallel light source or a point light source, and the position of the light source as (x_L, y_L, z_L) or (r, α, β).
  • The CPU 8 receives these parameters from the control panel 5 in real time and reflects them in the calculation of the read addresses. Specifically, the CPU 8 monitors changes in the parameters supplied from the control panel 5 at the frame cycle, and calculates the parameters (b11' to b33') for the read addresses at the frame cycle based on the supplied parameters. Therefore, these parameters can be changed in real time at the frame period according to the operation of the operator, and the read addresses are calculated in real time according to the changed parameters.
  • the CPU 8 can also store these parameters in the RAM 7 for each frame as set values.
  • The source video signal V0 is displayed on the monitor screen 3. The operator operates the three-dimensional pointing device provided on the control panel 5 to command a three-dimensional image conversion operation on the source video signal V0. When a three-dimensional image conversion is instructed by the operator, the CPU 8 receives the parameters r11 to r33, l_x, l_y, l_z, and s of the three-dimensional transformation matrix T0 specified by the operator from the control panel 5, and reflects these parameters in the calculation of the read addresses in real time. Specifically, the CPU 8 monitors changes in these parameters supplied from the control panel 5 at the frame cycle, and calculates the parameters (b11 to b33) for the read addresses based on the supplied parameters.
  • The read address generation circuit 14 of the object signal generation unit 10 receives the parameters b11 to b33 from the CPU 8 and the screen address (X_s, Y_s) from the screen address generation circuit 9, and generates the read address (X_M, Y_M) for the object signal at the frame period based on equations (13) and (14).
  • The generated read address (X_M, Y_M) is supplied to the frame memory 12 for the video signal and the frame memory 13 for the key signal, so that the object video signal V2 is output from the frame memory 12 and the object key signal K2 is output from the frame memory 13.
  • Similarly, the read address generation circuit 24 of the shadow signal generation section 20 receives the parameters b11' to b33' from the CPU 8 and the screen address (X_s, Y_s) from the screen address generation circuit 9, and generates the read address (X_M', Y_M') for the shadow signal at the frame period based on equations (58) and (59).
  • the generated read addresses ( ⁇ ⁇ ′, ⁇ personally′) are supplied to the frame memory 22 for the video signal and the frame memory 23 for the key signal, respectively.
  • the frame memory 22 outputs the shadow video signal V 4
  • the frame memory 23 outputs the shadow key signal ⁇ 4. Is forced.
  • The shadow key signal output from the gain control circuit 500 is input to the horizontal LPF 501 and then to the vertical LPF 502.
  • Based on the input value of Hs, which is used as depth information in this embodiment, the real shadow controller 503 reads the corresponding filter coefficient data from the table of Hs versus filter coefficient data stored in the ROM 504 of the real shadow controller 503 (see FIG. 19C), and supplies this filter coefficient data to the horizontal LPF 501.
  • The horizontal LPF 501 multiplies the input shadow key signal by the filter coefficient data from the real shadow controller 503 and outputs the result.
  • Based on the input Hs, the real shadow controller 503 likewise reads the corresponding filter coefficient data from the table of Hs versus filter coefficient data stored in the ROM 504 of the real shadow controller 503 (see FIG. 19C), and supplies the read filter coefficient data to the vertical LPF 502.
  • The vertical LPF 502 multiplies the shadow key signal from the horizontal LPF 501 by the filter coefficient data read from the real shadow controller 503, and the result is finally output from the real shadow generator 50 as the real shadow key signal K4'. Therefore, as shown in FIG. 19, the data of the shadow S near the object O keeps a sharp contour, because the filter coefficient data by which it is multiplied is large. As the distance from the object O increases, the value of the filter coefficient data by which it is multiplied becomes smaller, so the contour of the shadow S gradually becomes less defined (blurred).
  • Although the contour cannot be shown as gradually blurring in the drawing, the shadow is gradually blurred as it moves away from the object.
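The depth-dependent softening described above can be sketched in software as follows. This is a simplified analogue, not the hardware of the patent: the depth-to-coefficient lookup stands in for the table in ROM 504, and a 3-tap filter whose centre weight is the looked-up coefficient stands in for the horizontal LPF 501 (the function names, the table values and the tap structure are all assumptions):

```python
def coeff_from_depth(hs, table=((0.0, 1.0), (0.5, 0.7), (1.0, 0.3))):
    """Look up a filter coefficient for depth hs (analogue of the ROM 504
    table of Hs versus filter coefficient data); the nearest entry wins."""
    return min(table, key=lambda entry: abs(entry[0] - hs))[1]


def blur_shadow_row(shadow_key, hs_row):
    """Apply a depth-controlled 3-tap low-pass filter to one row of the
    shadow key signal: coefficient 1.0 passes the sample through (sharp
    contour near the object), smaller coefficients mix in the neighbours
    (blurred contour far from the object)."""
    out = []
    n = len(shadow_key)
    for i in range(n):
        c = coeff_from_depth(hs_row[i])
        left = shadow_key[max(i - 1, 0)]
        right = shadow_key[min(i + 1, n - 1)]
        out.append(c * shadow_key[i] + (1.0 - c) * 0.5 * (left + right))
    return out
```

Running the same row once with near depths and once with far depths shows the effect: near the object the key edge stays hard, far from it the edge is spread across neighbouring samples.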
  • The combiner 30 receives the object video signal V2 and the object key signal K2 from the object signal generation unit 10, and also receives the shadow video signal V4 and the real shadow key signal K4' from the shadow signal generation unit 20, and generates a mixed video signal Vmix' and a mixed key signal Kmix' based on equation (a).
  • The mixer 40 receives the background video signal VBK supplied from the outside, together with the mixed video signal Vmix' and the mixed key signal Kmix' output from the combiner 30, and generates the output video signal.
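Equation (a) and the mixer's equation are not reproduced in this excerpt, but a key-controlled combine of this kind is conventionally a linear blend of foreground and background under the key. A sketch under that assumption, with a normalized key in [0, 1] (the patent's actual equations may differ in detail):

```python
def keyed_mix(v_fg, k, v_bg):
    """Blend a keyed foreground sample over a background sample.

    k = 1 selects the foreground (object/shadow mix), k = 0 selects the
    background video; intermediate keys cross-fade between the two.
    """
    return k * v_fg + (1.0 - k) * v_bg
```

Applying this per pixel with Vmix' as the foreground, Kmix' as the key and VBK as the background yields the composited output picture.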
  • The CPU 8 monitors changes in these parameters supplied from the control panel 5 in real time at the frame cycle, and calculates the parameters (b11–b33, b11'–b33') for computing the read addresses based on the supplied parameters at the frame cycle. Therefore, these parameters can be changed in real time within the frame cycle according to the operator's actions, and the read addresses are calculated in real time according to the changed parameters.
  • The CPU 8 receives the parameters r11–r33, ℓx, ℓy, ℓz and s, and calculates each parameter b11–b33 of the three-dimensional transformation matrix T33^-1 represented by equation (8) based on them. Specifically, by substituting the parameters r11–r33, ℓx, ℓy, ℓz and s into equations (28) to (37), the parameters b11–b33 can be obtained. The CPU 8 also receives the parameters r11–r33, ℓx, ℓy, ℓz and s of the three-dimensional transformation matrix T0, together with the parameters θx, θy, θz, Xs0 and Ys0 concerning the shadow coordinates.
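Equations (28) to (37) are not reproduced in this excerpt, but computing the nine elements of the inverse of a 3×3 matrix is conventionally done via the adjugate divided by the determinant. A sketch under that assumption (the patent's actual per-element formulas for b11–b33 may be arranged differently):

```python
def inverse_3x3(m):
    """Return the inverse of a 3x3 matrix m (nested lists) as the
    adjugate divided by the determinant; the nine results play the
    role of the parameters b11..b33."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [  # transpose of the cofactor matrix
        [e * i - f * h, c * h - b * i, b * f - c * e],
        [f * g - d * i, a * i - c * g, c * d - a * f],
        [d * h - e * g, b * g - a * h, a * e - b * d],
    ]
    return [[x / det for x in row] for row in adj]
```

Inverting the identity returns the identity, and inverting a diagonal scale returns the reciprocal scale, which is a quick sanity check on the cofactor signs.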
  • The generated read address (XM, YM) is supplied to the frame memory 12 for the video signal and the frame memory 13 for the key signal, respectively.
  • The object video signal V2 is output from the frame memory 12, and the object key signal K2 is output from the frame memory 13.
  • The shadow key signal output from the gain control circuit 500 is input to the horizontal LPF 501 and then to the vertical LPF 502.
  • Based on the input value of Hs, which is used as depth information in this embodiment, the real shadow controller 503 reads the corresponding filter coefficient data from the table of Hs versus filter coefficient data stored in the ROM 504 of the real shadow controller 503 (see FIG. 19C), and supplies this filter coefficient data to the horizontal LPF 501.
  • The horizontal LPF 501 multiplies the input shadow key signal by the filter coefficient data from the real shadow controller 503 and outputs the result.
  • Based on the input Hs, the real shadow controller 503 likewise reads the corresponding filter coefficient data from the table of Hs versus filter coefficient data stored in the ROM 504 (see FIG. 19C), and supplies the read filter coefficient data to the vertical LPF 502.
  • The vertical LPF 502 multiplies the shadow key signal from the horizontal LPF 501 by the filter coefficient data read from the real shadow controller 503, and the result is finally output from the real shadow generator 50 as the real shadow key signal K6'. Therefore, as shown in FIG. 19, the contour of the data of the shadow S in the vicinity of the object O remains sharp, because the filter coefficient data by which it is multiplied is large. As the distance from the object O increases, the value of the filter coefficient data by which it is multiplied becomes smaller, so the contour of the shadow S gradually becomes less defined (blurred).
  • Although the contour cannot be shown as gradually blurring in the drawing, the shadow is gradually blurred as it moves away from the object.
  • The combiner 30 receives the object video signal V2 and the object key signal K2 from the object signal generation unit 10, and also receives the shadow video signal V6 and the real shadow key signal K6' from the shadow signal generation unit 20, and generates a mixed video signal Vmix'' and a mixed key signal Kmix'' based on equation (b).
  • The mixer 40 receives the background video signal VBK supplied from the outside, together with the mixed video signal Vmix'' and the mixed key signal Kmix'' output from the combiner 30, and generates the output video signal VOUT'' based on equation (d).
  • Since the gain characteristic is controlled according to the depth information and the filtering characteristic is controlled according to the depth information, a shadow that is more realistic for the object is generated, and the image of the shadow is combined with the object image and the background image. Therefore, with a simple configuration and simple processing, a more realistic shadow that corresponds to the distance from the object image can be added to the object image at high speed.
  • With this image special effect device, a more realistic shadow can be synthesized for the object by a simple operation, without having to form the desired shadow image by a separate operation.
  • The special effect device of the present invention can be used for generating special effect images in a broadcast-station image processing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Circuits (AREA)

Abstract

The method of the present invention adds a relatively realistic shadow to a moving object image at high speed, by means of simple processing and a simple construction. A shadow signal generation unit (20) generates a shadow video signal V4 and a shadow key signal K4, which is transferred to a real shadow signal generator (50); the latter generates a real shadow key enabling the generation of a more realistic shadow.
PCT/JP1997/001249 1997-04-10 1997-04-10 Dispositif a effet special et procede de generation d'effet special Ceased WO1998046011A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP1997/001249 WO1998046011A1 (fr) 1997-04-10 1997-04-10 Dispositif a effet special et procede de generation d'effet special
GB9827227A GB2329312A (en) 1997-04-10 1997-04-10 Special effect apparatus and special effect method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP1997/001249 WO1998046011A1 (fr) 1997-04-10 1997-04-10 Dispositif a effet special et procede de generation d'effet special

Publications (1)

Publication Number Publication Date
WO1998046011A1 true WO1998046011A1 (fr) 1998-10-15

Family

ID=14180397

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1997/001249 Ceased WO1998046011A1 (fr) 1997-04-10 1997-04-10 Dispositif a effet special et procede de generation d'effet special

Country Status (2)

Country Link
GB (1) GB2329312A (fr)
WO (1) WO1998046011A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008219615A (ja) * 2007-03-06 2008-09-18 Matsushita Electric Ind Co Ltd 特殊効果装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040131276A1 (en) * 2002-12-23 2004-07-08 John Hudson Region-based image processor
US7242778B2 (en) 2003-04-08 2007-07-10 Gennum Corporation Hearing instrument with self-diagnostics

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6359276A (ja) * 1986-08-29 1988-03-15 Nec Corp シヤド−回路
JPH01144869A (ja) * 1987-12-01 1989-06-07 Nec Corp シャドー回路
JPH01293078A (ja) * 1988-05-20 1989-11-27 Nec Corp テレビジョンデジタルビデオエフェクト装置
JPH0259982A (ja) * 1988-08-26 1990-02-28 Nippon Telegr & Teleph Corp <Ntt> 立体形状データ生成装置
JPH04227171A (ja) * 1990-12-29 1992-08-17 Nec Corp テレビジョン映像信号処理装置
JPH04315274A (ja) * 1991-04-12 1992-11-06 Sony Corp 画像変換装置
JPH0554151A (ja) * 1991-08-22 1993-03-05 Brother Ind Ltd 立体図形処理装置
JPH05207364A (ja) * 1992-01-27 1993-08-13 Nec Corp 映像特殊効果装置
JPH0793585A (ja) * 1993-09-22 1995-04-07 Dainippon Screen Mfg Co Ltd 画像の影付け方法



Also Published As

Publication number Publication date
GB2329312A (en) 1999-03-17
GB9827227D0 (en) 1999-02-03

Similar Documents

Publication Publication Date Title
JP3840663B2 (ja) 画像処理装置及び方法
JP3107452B2 (ja) テクスチャマッピング方法およびその装置
US6249289B1 (en) Multi-purpose high resolution distortion correction
US6434277B1 (en) Image processing apparatus and method, and medium therefor
US5357599A (en) Method and apparatus for rendering polygons
JP7235875B2 (ja) リアルタイムの3d可視化を備えた点群カラー化システム
JPWO1997018667A1 (ja) 特殊効果装置、画像処理方法及びシャドウ生成方法
AU2019226134B2 (en) Environment map hole-filling
JPH0634748A (ja) レーダ信号表示装置
US20070139408A1 (en) Reflective image objects
US6441864B1 (en) Video signal processing device and method employing transformation matrix to generate composite image
US20200302579A1 (en) Environment map generation and hole filling
EP2595040A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
US5268677A (en) Reduced viewport for graphics display
RU2427918C2 (ru) Метафора двумерного редактирования для трехмерной графики
WO1998046011A1 (fr) Dispositif a effet special et procede de generation d'effet special
JP3278501B2 (ja) 画像処理装置および方法
US20030169272A1 (en) Image generation apparatus and method thereof
JP2973413B2 (ja) コンピュータグラフィックスの照度計算方法及び表示装置
JP3066035B2 (ja) 計算機図形処理表示装置制御システムのためのテクスチャマッピング技法
JPS6346584A (ja) 高速ホングシェ−ジング装置及びその方法
JP3967036B2 (ja) 画像合成装置
JPWO1998046011A1 (ja) 特殊効果装置及び特殊効果方法
JP3351805B2 (ja) 映像信号の発生方法
JP2005293616A (ja) 画像合成装置

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): GB JP US

ENP Entry into the national phase

Ref country code: GB

Ref document number: 9827227

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 09202258

Country of ref document: US