WO2001065847A1 - Data processing device and method, recording medium, and corresponding program - Google Patents
Data processing device and method, recording medium, and corresponding program
- Publication number
- WO2001065847A1 (PCT/JP2001/001525)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- improvement
- improvement information
- information
- quality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0135—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/08—Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/154—Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/34—Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234327—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into layers, e.g. base layer and one or more enhancement layers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/254—Management at additional data server, e.g. shopping server, rights management server
- H04N21/2543—Billing, e.g. for subscription services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/426—Internal components of the client ; Characteristics thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/462—Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
- H04N21/4621—Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N11/00—Colour television systems
- H04N11/24—High-definition television systems
Definitions
- the present invention relates to a data processing apparatus and method, and a recording medium and a program, and particularly to those suitable for, for example, providing images of various image qualities.
- the present invention relates to a data processing apparatus and method, and a recording medium and a program.
- a data processing device includes: an improvement information generating unit that generates improvement information for improving the quality of data; and an embedding unit that embeds the improvement information in data.
- the improvement information generating means may generate a prediction coefficient used for predicting a predicted value of the quality improvement data obtained by improving the data quality as the improvement information.
- the improvement information generating means may generate a prediction coefficient for each predetermined class.
- the improvement information generating means may include: class tap configuration means for configuring, using student data serving as a learning student, a class tap used to find the class of teacher data of interest among teacher data serving as a learning teacher; class classification means for classifying the teacher data of interest based on the class tap;
- prediction tap configuration means for configuring a prediction tap using the student data; and prediction coefficient calculation means for calculating a prediction coefficient for each class using the teacher data and the prediction taps.
- the improvement information generation means may generate a plurality of types of improvement information.
- the improvement information generating means may generate prediction coefficients for different numbers of classes as a plurality of types of improvement information.
- the improvement information generating means may generate a plurality of types of prediction coefficients obtained by using different quality student data or teacher data as a plurality of types of improvement information.
- the improvement information generating means may generate at least the prediction coefficients and information for performing linear interpolation as a plurality of types of improvement information.
- the improvement information generating means may generate a plurality of types of prediction coefficients obtained by using class taps or prediction tabs having different configurations as a plurality of types of improvement information.
- the improvement information generating means may generate a plurality of types of prediction coefficients obtained by performing the class classification by different methods as a plurality of types of improvement information.
- the improvement information generating means may generate, as the improvement information, a class code indicating the class of the data, which is used for predicting a predicted value of the quality improvement data obtained by improving the data quality.
- the improvement information generating means may include: prediction tap configuration means for configuring, using student data serving as a learning student, a prediction tap used for predicting teacher data of interest among teacher data serving as a learning teacher;
- prediction coefficient storage means for storing prediction coefficients for each class code obtained by learning;
- prediction calculation means for obtaining a predicted value of the teacher data of interest using the prediction tap and the prediction coefficients; and
- class code detecting means for detecting the class code of the prediction coefficients that minimize the error of the predicted value with respect to the teacher data of interest; the class code detected by the class code detecting means may be output as the improvement information.
- the improvement information generating means may include: class tap configuration means for configuring, using the teacher data, a class tap used to find the class of teacher data of interest among teacher data serving as a learning teacher; and
- class classification means for classifying the teacher data of interest based on the class tap; the class code corresponding to the class obtained by the class classification means may be output as the improvement information.
- the improvement information may be embedded in the data by performing spread spectrum.
- the embedding means may embed the improvement information by changing one or more bits of the data to the improvement information (a minimal sketch of such bit embedding appears after this overview).
- the data may be image data
- the improvement information may be information for improving the image quality of the image data.
- the data processing method includes an improvement information generation step of generating improvement information for improving data quality, and an embedding step of embedding the improvement information in the data.
- the recording medium according to the present invention has recorded thereon a program having an improvement information generation step of generating improvement information for improving data quality and an embedding step of embedding the improvement information in the data.
- a program according to the present invention includes an improvement information generation step of generating improvement information for improving data quality, and an embedding step of embedding improvement information in data.
- the data processing apparatus includes an extracting unit for extracting improvement information from embedded data, and an improving unit for improving the quality of the data using the improvement information.
- the improvement information may be a prediction coefficient used for predicting a predicted value of quality improvement data obtained by improving the quality of the data.
- the improving means may obtain the predicted value of the quality improvement data by using the data and the prediction coefficient.
- the improvement information may be a prediction coefficient obtained for each predetermined class.
- the improvement means may obtain a predicted value of the quality improvement data by using the data and the prediction coefficient for each class.
- the improvement means may include: class tap configuration means for configuring, using the data, a class tap used to obtain the class of quality improvement data of interest; class classification means for performing class classification based on the class tap to obtain the class; prediction tap configuration means for configuring, using the data, a prediction tap used together with the prediction coefficients to predict the quality improvement data of interest; and prediction means for determining a predicted value of the quality improvement data of interest using the prediction coefficient of its class and the prediction tap.
- the improvement information may be a class code representing a class of a prediction coefficient for each predetermined class used for predicting a predicted value of quality improvement data obtained by improving data quality.
- the improvement means may obtain a predicted value of the quality improvement data by using a prediction coefficient corresponding to the data and the class code.
- the improvement means may include: prediction tap configuration means for configuring, using the data, a prediction tap used together with a prediction coefficient to predict quality improvement data of interest; and prediction means for obtaining a predicted value of the quality improvement data of interest using the prediction coefficient corresponding to the class code serving as the improvement information and the prediction tap.
- a plurality of types of improvement information may be embedded in the embedded data.
- prediction coefficients for different numbers of classes may be embedded as a plurality of types of improvement information.
- the prediction coefficient may be generated using student data serving as a student and teacher data serving as a teacher; in this case, a plurality of types of prediction coefficients obtained using student data or teacher data of different qualities may be embedded in the embedded data as a plurality of types of improvement information.
- At least prediction coefficients and information for performing linear interpolation may be embedded in the embedding data as a plurality of types of improvement information.
- a plurality of types of prediction coefficients obtained by using class taps or prediction taps having different configurations may be embedded as a plurality of types of improvement information.
- a plurality of types of prediction coefficients obtained by performing the class classification by different methods may be embedded as a plurality of types of improvement information.
- the data processing device may further include an improvement information selecting unit that selects, from a plurality of types of improvement information, one used for improving data quality.
- the extracting means may extract the improvement information from the embedded data by utilizing the bias of the energy of the data.
- the extracting means may extract the improvement information from the embedded data by performing inverse spectrum spreading.
- the extracting means may extract one or more bits of the embedded data as improvement information.
- the data may be image data
- the improvement information may be information for improving the image quality of the image data.
- a data processing method includes an extraction step of extracting improvement information from embedded data, and an improvement step of improving data quality using the improvement information.
- the recording medium according to the present invention is recorded with a program having an extraction step of extracting improvement information from embedded data and an improvement step of improving data quality using the improvement information.
- a program according to the present invention has an extraction step of extracting improvement information from embedded data, and an improvement step of improving data quality using the improvement information.
- the data processing apparatus includes: an improvement information generating unit that generates a plurality of types of improvement information for improving data quality; and a transmission unit that transmits data and one or more types of improvement information. Prepare.
- the data processing device may further include improvement information selecting means for selecting, from among the plurality of types of improvement information, the one to be transmitted together with the data.
- the improvement information selecting means may select the improvement information in response to a request from the receiving device that receives the data.
- the data processing device may further include a charging unit for performing a charging process in accordance with the improvement information selected by the improvement information selecting unit.
- the improvement information generating means may generate at least a prediction coefficient used for predicting a predicted value of the quality improvement data with improved data quality as the improvement information.
- the improvement information generating means may generate a prediction coefficient for each predetermined class.
- the improvement information generating means may include: class tap configuration means for configuring, using student data serving as a learning student, a class tap used to obtain the class of teacher data of interest among teacher data serving as a learning teacher;
- class classification means for classifying the teacher data of interest based on the class tap; prediction tap configuration means for configuring, using the student data, a prediction tap used together with the prediction coefficients to predict the teacher data of interest; and prediction coefficient calculation means for obtaining the prediction coefficients for each class using the teacher data and the prediction taps.
- the improvement information generating means may generate prediction coefficients for different numbers of classes as a plurality of types of improvement information.
- the improvement information generating means may generate a plurality of types of prediction coefficients obtained by using student data or teacher data of different qualities as a plurality of types of improvement information.
- the improvement information generating means may generate at least prediction coefficients and information for performing linear interpolation as a plurality of types of improvement information.
- the improvement information generating means may generate a plurality of types of prediction coefficients obtained by using class taps or prediction taps having different configurations as a plurality of types of improvement information.
- the improvement information generating means may generate a plurality of types of prediction coefficients obtained by performing the class classification by different methods as a plurality of types of improvement information.
- the transmission means may transmit the data and the one or more types of improvement information by embedding the improvement information in the data by utilizing the bias of the energy of the data, so that the data can be obtained based on the embedded result and the improvement information.
- the transmission means may transmit the data and the one or more types of improvement information by embedding the improvement information in the data by performing spread spectrum.
- the transmission means may transmit the data and the one or more types of improvement information by changing one or more bits of the data to the improvement information.
- the transmission means may transmit the data and all of the plurality of types of improvement information.
- the data may be image data
- the improvement information may be information for improving the image quality of the image data.
- the data processing method includes an improvement information generation step of generating a plurality of types of improvement information for improving data quality, and a transmission step of transmitting data and one or more types of improvement information.
- the recording medium according to the present invention has recorded thereon a program having an improvement information generation step of generating a plurality of types of improvement information for improving data quality, and a transmission step of transmitting the data and one or more types of improvement information.
- the program according to the present invention includes an improvement information generation step of generating a plurality of types of improvement information for improving data quality, and a transmission step of transmitting the data and one or more types of improvement information.
- the data processing device includes: receiving means for receiving the data and one or more types of improvement information; improving means for improving the quality of the data by using one of the one or more types of improvement information; and charging means for performing a charging process in accordance with the improvement information used.
- the receiving means may receive a plurality of types of improvement information; in this case, the data processing device according to the present invention may further include improvement information selecting means for selecting, from among the plurality of types of improvement information, the one used for improving the quality of the data.
- the improvement information selecting means may select the improvement information in response to a request from a user.
- the data processing device may further include requesting means for requesting, from the transmitting device that transmits the data and the one or more types of improvement information, the improvement information to be used for improving the quality of the data; in this case,
- the receiving means may receive the improvement information transmitted by the transmitting device in response to the request from the requesting means.
- the improvement information may be a prediction coefficient used for predicting a predicted value of the quality improvement data obtained by improving data quality.
- the improvement means may obtain the predicted value of the quality improvement data by using the data and the prediction coefficient.
- the improvement information may be a prediction coefficient obtained for each predetermined class.
- the improvement means may obtain a predicted value of the quality improvement data by using the data and the prediction coefficient for each class.
- the improvement means may include: class tap configuration means for configuring, using the data, a class tap used to obtain the class of quality improvement data of interest;
- class classification means for classifying the quality improvement data of interest based on the class tap;
- prediction tap configuration means for configuring, using the data, a prediction tap used together with the prediction coefficients to predict the quality improvement data of interest; and
- prediction means for obtaining a predicted value of the quality improvement data of interest using the prediction coefficient of its class and the prediction tap.
- the receiving means may receive a plurality of types of improvement information.
- the receiving means may receive prediction coefficients for different numbers of classes as a plurality of types of improvement information.
- the prediction coefficient may be generated using student data serving as a student and teacher data serving as a teacher; in this case, the receiving means may receive, as a plurality of types of improvement information, a plurality of types of prediction coefficients obtained using student data or teacher data of different qualities.
- the receiving means may receive at least prediction coefficients and information for performing linear interpolation as a plurality of types of improvement information.
- the receiving means may receive a plurality of types of prediction coefficients obtained by using class taps or prediction taps having different configurations as a plurality of types of improvement information.
- the receiving means may receive a plurality of types of prediction coefficients obtained by performing the class classification by different methods as a plurality of types of improvement information.
- the receiving means may receive embedded data in which one or more types of improvement information are embedded in the data.
- in this case, the data processing device may further include extracting means for extracting the improvement information from the embedded data.
- the extracting means may extract the improvement information from the embedded data by utilizing the bias of the energy of the data.
- the extracting means may extract the improvement information from the embedded data by performing inverse spectrum spreading.
- the extracting means may extract one or more bits of the embedded data as improvement information.
- the data may be image data
- the improvement information may be information for improving the image quality of the image data.
- the data processing method includes: a receiving step of receiving data and one or more types of improvement information; an improvement step of improving data quality using one of the one or more types of improvement information; And a charging step for performing a charging process according to the improvement information used to improve the quality of the data.
- the recording medium according to the present invention has recorded thereon a program having: a receiving step of receiving data and one or more types of improvement information; an improvement step of improving data quality by using one of the one or more types of improvement information; and
- a charging step of performing a charging process according to the improvement information used to improve the data quality.
- a program according to the present invention includes: a receiving step of receiving data and one or more types of improvement information; an improving step of improving data quality by using one of the one or more types of improvement information; and a charging step of performing a charging process in accordance with the improvement information used to improve the data quality.
- improvement information for improving data quality is generated, and the improvement information is embedded in the data.
- improvement information is extracted from the embedded data, and the quality of the data is improved using the improvement information.
- in the data processing device and method, the recording medium, and the program according to the present invention, a plurality of types of improvement information for improving the data quality are generated, and the data and one or more types of improvement information are transmitted.
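As a concrete illustration of the bit-replacement variant of embedding mentioned above (changing one or more bits of the data to the improvement information, and later extracting those bits on the receiving side), the following is a minimal sketch in Python/NumPy. The function names, the 8-bit image data, and the use of a single least significant bit are illustrative assumptions, not the implementation specified in this publication.

```python
import numpy as np

def embed_lsb(image: np.ndarray, info_bits: np.ndarray) -> np.ndarray:
    """Embed improvement-information bits into the least significant bit
    of the first len(info_bits) pixels of an 8-bit image (illustrative only)."""
    embedded = image.copy().ravel()
    if info_bits.size > embedded.size:
        raise ValueError("improvement information does not fit in the image")
    embedded[:info_bits.size] = (embedded[:info_bits.size] & 0xFE) | (info_bits & 1)
    return embedded.reshape(image.shape)

def extract_lsb(embedded: np.ndarray, n_bits: int) -> np.ndarray:
    """Extract n_bits of improvement information from the LSBs."""
    return embedded.ravel()[:n_bits] & 1

# Usage: embed 16 bits of (hypothetical) improvement information
image = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
info = np.random.randint(0, 2, 16, dtype=np.uint8)
stego = embed_lsb(image, info)
assert np.array_equal(extract_lsb(stego, 16), info)
```

Spread-spectrum or energy-bias embedding, also mentioned above, would replace this bit replacement but follow the same embed/extract structure.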
- FIG. 1 is a diagram showing a configuration example of a broadcast system according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing a configuration example of the transmission device 1.
- FIG. 3 is a flowchart illustrating the process of the transmission device 1.
- FIG. 4 is a block diagram illustrating a configuration example of the receiving device 3.
- FIG. 5 is a flowchart for explaining the processing of the receiving device 3.
- FIG. 6 is a block diagram showing a first configuration example of the improvement information generation unit 11.
- FIG. 7 is a diagram showing a configuration of a prediction tap (class tap).
- FIGS. 8A, 8B, 8C, and 8D are diagrams showing the correspondence between the method selection signal and the improvement method.
- FIG. 9 is a flowchart for explaining the processing of the improvement information generation unit 11 of FIG. 6.
- FIG. 10 is a block diagram showing a second configuration example of the improvement information generation unit 11.
- FIG. 11 is a flowchart for explaining the processing of the improvement information generation unit 11 of FIG. 10.
- FIG. 12 is a block diagram showing a first configuration example of the quality improvement unit 24.
- FIG. 13 is a flowchart for explaining the processing of the quality improvement unit 24 of FIG. 12.
- FIG. 14 is a block diagram showing a third configuration example of the improvement information generation unit 11.
- FIG. 15 is a flowchart illustrating the processing of the improvement information generation unit 11 of FIG. 14.
- FIG. 16 is a block diagram showing a first configuration example of a learning device for obtaining prediction coefficients.
- FIG. 17 is a block diagram showing a second configuration example of the quality improvement unit 24.
- FIG. 18 is a flowchart for explaining the processing of the quality improvement unit 24 of FIG. 17.
- FIG. 19 is a block diagram showing a fourth configuration example of the improvement information generation unit 11.
- FIG. 20 is a flowchart illustrating the processing of the improvement information generation unit 11 of FIG. 19.
- FIG. 21 is a block diagram illustrating a third configuration example of the quality improvement unit 24.
- FIG. 22 is a block diagram showing a second configuration example of the learning device for obtaining a prediction coefficient.
- FIG. 23 is a block diagram showing a configuration example of the integration unit 12.
- FIG. 24 is a flowchart for explaining the processing of the integration unit 12 in FIG.
- FIG. 25 shows diagrams for explaining the replacement of columns of image data.
- FIG. 26 is a block diagram showing a configuration example of the extraction unit 22.
- FIG. 27 is a flowchart for explaining the processing of the extraction unit 22 of FIG. 26.
- FIG. 28 is a block diagram showing another configuration example of the integration unit 12.
- FIG. 29 is a block diagram illustrating another configuration example of the extraction unit 22.
- FIG. 30 is a block diagram showing a configuration example of a computer according to an embodiment of the present invention.
- Fig. 1 shows a configuration example of a digital satellite broadcasting system to which the present invention is applied. Here, a system means a logical collection of a plurality of devices; it does not matter whether the devices are in the same housing or not.
- a satellite broadcast wave as a radio wave corresponding to the program broadcast is transmitted from the antenna (parabolic antenna) 1A to the satellite 2.
- the satellite (communication satellite or broadcast satellite) 2 receives the satellite broadcast wave from the transmission device 1, performs amplification and other necessary processing on the satellite broadcast wave, and transmits it.
- the satellite broadcast wave transmitted from the satellite 2 is received by the antenna (parabolic antenna) 3 A of the receiving device 3 and displayed.
- the transmitting device 1 and the receiving device 3 can also communicate via a network 4 capable of bidirectional communication, such as a public line, the Internet, a CATV (Cable Television) network, or a wireless communication network, and accounting processing such as the exchange of accounting information is performed between the transmitting device 1 and the receiving device 3 via the network 4.
- only one receiving device 3 is shown for simplicity of description, but a plurality of receiving devices having the same configuration as the receiving device 3 can be provided.
- FIG. 2 shows a configuration example of the transmission device 1 of FIG.
- the improvement information generation unit 11 is supplied with image data to be broadcast as a program (hereinafter, appropriately referred to as broadcast image data) or image data having the same content as the broadcast image data but higher image quality (hereinafter, appropriately referred to as high-quality image data). The improvement information generation unit 11 generates improvement information for improving, in the receiving device 3, the image quality of the broadcast image data.
- the improvement information generation unit 11 is also supplied with a method selection signal for selecting an improvement method for improving the image quality of the broadcast image data.
- the improvement information generation unit 11 generates one or more types of improvement information according to the method selection signal supplied thereto.
- the improvement information generated by the improvement information generation unit 11 is supplied to the integration unit 12.
- the integration section 12 is supplied with the improvement information from the improvement information generation section 11 and also with the image data for broadcasting.
- the integration unit 12 integrates the broadcast image data and the enhancement information, generates an integrated signal, and supplies the integrated signal to the transmission unit 13.
- as a method of integrating the broadcast image data and the improvement information, for example, time division multiplexing, frequency multiplexing, or embedded coding as described later can be used. It is also possible to transmit the broadcast image data and the improvement information as separate programs without integrating them.
- the transmission unit 13 performs modulation, amplification, and other necessary processing on the integrated signal output from the integration unit 12 and supplies the processed signal to the antenna 1A.
- the billing processing unit 14 communicates with the receiving device 3 via the communication interface 15 and the network 4 to perform a billing process for providing a program to the receiving device 3.
- the communication interface 15 controls communication via the network 4. Next, a program transmission process performed by the transmission device 1 of FIG. 2 will be described with reference to the flowchart of FIG. 3.
- the improvement information generation unit 11 generates one or more types of improvement information for improving the image quality of the broadcast image data according to the method selection signal supplied thereto, and supplies the improvement information to the integration unit 12.
- the unit of the broadcast image data for generating the improvement information (hereinafter, appropriately referred to as an improvement information generation unit) may be, for example, one frame unit or one program unit.
- in step S2, the integration unit 12 integrates the broadcast image data and the improvement information to generate an integrated signal, and supplies the integrated signal to the transmission unit 13.
- the transmitting section 13 modulates, amplifies, and performs other necessary processing on the integrated signal output from the integrating section 12, and supplies the processed signal to the antenna 1A.
- the integrated signal is transmitted from antenna 1A as a satellite broadcast wave.
- FIG. 4 shows a configuration example of the receiving device 3 of FIG.
- the satellite broadcast wave broadcast via the satellite 2 is received by the antenna 3A, and the received signal is supplied to the receiver 21.
- the reception unit 21 performs amplification, demodulation, and other necessary processing on the reception signal from the antenna 3A, obtains an integrated signal, and supplies the integrated signal to the extraction unit 22.
- the extraction unit 22 extracts the broadcast image data and one or more types of improvement information from the integrated signal from the reception unit 21, supplies the broadcast image data to the quality improvement unit 24, and supplies the one or more types of improvement information to the selection unit 23.
- the selection unit 23 selects, from among the one or more types of improvement information from the extraction unit 22, the type corresponding to the image quality level signal from the charging processing unit 27, and supplies to the quality improvement unit 24 the selected improvement information together with a method selection signal indicating the improvement method for improving the image quality based on that improvement information.
- the quality improvement unit 24 applies the processing of the method indicated by the method selection signal to the broadcast image data supplied from the extraction unit 22, using the improvement information supplied from the selection unit 23. As a result, the quality improvement unit 24 obtains image data with improved image quality and supplies it to the display unit 25.
- the display unit 25 is composed of, for example, a CRT (cathode ray tube), a liquid crystal panel, or a DMD (Dynamic Mirror Device), and displays an image corresponding to the image data supplied from the quality improvement unit 24.
- the operation unit 26 is operated by a user when selecting the image quality of an image displayed on the display unit 25, and an operation signal corresponding to the operation is supplied to the charging processing unit 27.
- the billing processing unit 27 performs a billing process for the image quality selected by the user based on the operation signal from the operation unit 26.
- the billing processing unit 27 identifies the image quality requested by the user based on the operation signal from the operation unit 26, and supplies an image quality level signal indicating the degree of that image quality to the selection unit 23.
- the selection unit 23 selects improvement information suitable for obtaining the image quality required by the user.
- the billing processing unit 27 also transmits the image quality level signal to the transmitting device 1 via the communication interface 28 and the network 4.
- the image quality level signal transmitted from charging section 27 to transmitting apparatus 1 in this manner is transmitted via communication interface 15 in transmitting apparatus 1 (FIG. 2). Then, it is received by the accounting processing unit 14.
- the billing processing unit 14 bills the user of the receiving device 3 according to the image quality level signal. That is, the charging processing unit 14 calculates, for example, the viewing fee for each user, and transmits charging information including at least the account number on the transmitting device 1 side, the account number of the user, and the calculated viewing fee
- to a charging center (bank center), not shown, via the communication interface 15 and the network 4.
- when the charging center receives the charging information, it withdraws the amount corresponding to the viewing fee from the user's account and performs payment processing to the account of the transmitting device 1.
- the communication interface 28 controls communication via the network 4.
- the reception signal output by the antenna 3A receiving the satellite broadcast wave is supplied to the reception unit 21.
- the receiving unit 21 receives the received signal and converts it into an integrated signal.
- This integrated signal is supplied to the extraction unit 22.
- the extraction unit 22 extracts the broadcast image data and one or more types of improvement information from the integrated signal from the reception unit 21. Then, the broadcast image data is supplied to the quality improvement unit 24, and the one or more types of improvement information are supplied to the selection unit 23.
- in step S13, the selection unit 23 selects, from among the one or more types of improvement information from the extraction unit 22, the type corresponding to the image quality level signal from the billing processing unit 27, and supplies to the quality improvement unit 24 the selected improvement information together with a method selection signal indicating the improvement method for improving the image quality based on that improvement information.
- in step S14, the quality improvement unit 24 applies the processing of the method indicated by the method selection signal to the broadcast image data supplied from the extraction unit 22, using the improvement information supplied from the selection unit 23. As a result, the quality improvement unit 24 obtains image data with improved image quality and supplies the image data to the display unit 25 for display. Then, the process returns to step S11, and the same processing is repeated thereafter.
- the image quality level signal output from the charging unit 27 corresponds to the image quality requested by the user operating the operation unit 26. Therefore, in the display unit 25, an image having the image quality requested by the user is displayed.
- as a method for improving the image quality of the broadcast image data, it is possible to use, for example, the class classification adaptive processing described in Japanese Patent Application Laid-Open No. 8-51622, which was previously proposed.
- class classification adaptive processing consists of class classification processing and adaptive processing: data is classified into classes based on its characteristics by the class classification processing, and the adaptive processing is then performed for each class. The adaptive processing is as follows.
- in the adaptive processing, for example, the pixels constituting a standard-resolution or low-resolution SD (Standard Definition) image (SD pixels) are linearly combined with predetermined prediction coefficients to obtain predicted values of the pixels of a high-resolution HD (High Definition) image.
- from these predicted values, an image with improved resolution relative to the SD image can be obtained.
- a certain HD image is used as teacher data, and an SD image whose image quality such as resolution is degraded by reducing the number of pixels of the HD image is used as student data.
- the predicted value E[y] of the pixel value y of a pixel constituting the HD image (hereinafter, appropriately referred to as an HD pixel) is obtained by a linear first-order combination model defined by a linear combination of the pixel values x1, x2, ... of several SD pixels (pixels constituting the SD image) and predetermined prediction coefficients w1, w2, ....
- in this case, the predicted value E[y] can be expressed by the following equation: E[y] = w1 x1 + w2 x2 + ...   (1)
- to generalize equation (1), a matrix W consisting of the set of prediction coefficients wj, a matrix X consisting of the set of student data xij, and a matrix Y' consisting of the set of predicted values E[yj] are defined, giving the observation equation XW = Y'.
- here, the component xij of the matrix X means the j-th student data in the i-th set of student data (the set of student data used for the prediction of the i-th teacher data yi), and the component wj of the matrix W
- represents the prediction coefficient by which the product with the j-th student data in a set of student data is calculated.
- yi represents the i-th teacher data, and thus E[yi] represents the predicted value of the i-th teacher data.
- introducing a matrix Y consisting of the set of teacher data yi and a matrix E consisting of the set of residuals e of the predicted values E[y] with respect to the teacher data y, the residual equation XW = Y + E follows from the observation equation.
- the prediction coefficients wj for obtaining predicted values E[y] close to the pixel values y of the HD pixels can then be found by minimizing the squared error Σ ei².
- accordingly, the prediction coefficients wj for which the derivative of this squared error with respect to each wj is zero, that is, the prediction coefficients wj satisfying the normal equations (compactly, XᵀXW = XᵀY, referred to below as equation (7)), are the optimum values for obtaining predicted values E[y] close to the pixel values y of the HD pixels.
- to solve the normal equations for the prediction coefficients wj, a sweeping-out method (such as the Gauss-Jordan elimination method) may be used.
- in this way, the optimum prediction coefficients wj are obtained, and the predicted value E[y] of the pixel value y of the HD pixel is then obtained from these prediction coefficients wj by equation (1); this is the adaptive processing.
- the adaptive processing differs from, for example, mere interpolation processing in that components not included in the SD image but included in the HD image are reproduced.
- in other words, as far as only equation (1) is concerned, the adaptive processing looks the same as interpolation processing using a so-called interpolation filter; however, since the prediction coefficients w, which correspond to the tap coefficients of the interpolation filter, are obtained by so-called learning using the teacher data y, the components included in the HD image can be reproduced. From this, it can be said that the adaptive processing has, so to speak, an image creation (resolution creation) effect.
- here, the adaptive processing has been described taking the case of improving resolution as an example; however, by changing the prediction coefficients used, the adaptive processing can also be used to obtain, for example, predicted values of an image from which noise or blur has been removed. That is, according to the adaptive processing, it is also possible to improve quality in terms of noise removal, blur removal, and the like.
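To make the adaptive processing concrete, the following is a minimal sketch (assuming NumPy, a single class with no class classification, and toy random data) of learning prediction coefficients w by solving the normal equations described above and then computing predicted values E[y] by equation (1).

```python
import numpy as np

def learn_prediction_coefficients(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve the normal equations (X^T X) w = X^T y for the prediction coefficients w.
    X: (num_samples, num_taps) student-pixel prediction taps
    y: (num_samples,) teacher-pixel values"""
    lhs = X.T @ X          # left-hand side: sums of x_in * x_im
    rhs = X.T @ y          # right-hand side: sums of x_in * y_i
    return np.linalg.solve(lhs, rhs)

def predict(X: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Equation (1): E[y] = w1*x1 + w2*x2 + ..."""
    return X @ w

# Toy usage with random data standing in for SD taps and HD teacher pixels
rng = np.random.default_rng(0)
X = rng.random((1000, 4))            # 4-pixel prediction taps
true_w = np.array([0.4, 0.3, 0.2, 0.1])
y = X @ true_w                       # teacher data consistent with the model
w = learn_prediction_coefficients(X, y)
print(np.allclose(w, true_w))        # True
```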
- FIG. 6 shows a configuration example of the improvement information generation unit 11 of FIG. 2 in the case where prediction coefficients used in the above-described class classification adaptive processing are obtained as the improvement information.
- image data serving as the teacher data (for example, the high-quality image data described above) is supplied to the frame memory 31, for example in frame units, and the frame memory 31 sequentially stores the teacher data supplied thereto.
- the down-converter 32 reads the teacher data stored in the frame memory 31, for example in frame units, and applies LPF (Low Pass Filter) filtering, pixel thinning, and the like to generate image data of the same image quality as the broadcast image data, that is, low-quality image data, as student data for learning the prediction coefficients, and supplies it to the frame memory 33.
- the frame memory 33 sequentially stores the low-quality image data output as student data from the down-converter 32, for example in frame units.
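The down-conversion just described (low-pass filtering followed by thinning out to produce student data of broadcast quality from high-quality teacher data) could be sketched as follows; the 2x2 block-average filter and the decimation factor are illustrative assumptions, not values specified here.

```python
import numpy as np

def down_convert(teacher: np.ndarray, factor: int = 2) -> np.ndarray:
    """Create student (low-quality) image data from teacher (high-quality)
    image data by simple block-average low-pass filtering and thinning out."""
    h, w = teacher.shape
    h -= h % factor
    w -= w % factor
    blocks = teacher[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))   # LPF (averaging) + decimation in one step

teacher_frame = np.random.rand(8, 8)
student_frame = down_convert(teacher_frame)   # 4x4 low-quality frame
```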
- the prediction tap configuration circuit 34 sequentially sets, as the pixel of interest, the pixels (hereinafter, appropriately referred to as teacher pixels) constituting the image serving as the teacher data stored in the frame memory 31 (hereinafter, appropriately referred to as the teacher image).
- it then reads from the frame memory 33 several pixels of the student data (hereinafter, appropriately referred to as student pixels) that are spatially or temporally close to the position, in the image serving as the student data (hereinafter, appropriately referred to as the student image), corresponding to the position of the pixel of interest, and configures them as a prediction tap used for multiplication with the prediction coefficients.
- that is, in accordance with a certain control signal from the control circuit 40, the prediction tap configuration circuit 34 configures the prediction tap from, for example, four student pixels a, b, c, and d located at positions spatially close to the position of the student image corresponding to the position of the pixel of interest;
- in accordance with another control signal from the control circuit 40, it configures the prediction tap from, for example, nine student pixels spatially close to the position of the student image corresponding to the position of the pixel of interest.
- the prediction tap can be composed of pixels arranged in a square shape as shown in FIG. 7, or of pixels arranged in a cross shape, a diamond shape, or any other shape. The prediction tap can also be configured not from adjacent pixels but from every other pixel.
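A minimal sketch of forming a square prediction tap like the one in FIG. 7 (here a 3x3 neighborhood of student pixels around the position corresponding to the pixel of interest, with edge clamping; both choices are illustrative assumptions) might look like this:

```python
import numpy as np

def build_prediction_tap(student: np.ndarray, y: int, x: int, radius: int = 1) -> np.ndarray:
    """Collect the (2*radius+1)^2 student pixels spatially closest to the
    position (y, x) corresponding to the pixel of interest; out-of-range
    positions are clamped to the image border."""
    h, w = student.shape
    ys = np.clip(np.arange(y - radius, y + radius + 1), 0, h - 1)
    xs = np.clip(np.arange(x - radius, x + radius + 1), 0, w - 1)
    return student[np.ix_(ys, xs)].ravel()   # 9-pixel square tap for radius=1

student_frame = np.random.rand(6, 6)
tap = build_prediction_tap(student_frame, 0, 0)   # 9 values, clamped at the corner
```

A cross- or diamond-shaped tap, or a tap using every other pixel, would simply change which coordinates are gathered.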
- the prediction tap configured by the prediction tap configuration circuit 34 is supplied to the normal equation addition circuit 37.
- the class tap configuration circuit 35 reads out, from the frame memory 33, student pixels to be used for the class classification that classifies the pixel of interest into one of several classes. That is, in accordance with a control signal from the control circuit 40, the class tap configuration circuit 35 reads from the frame memory 33 several student pixels that are spatially or temporally close to the position of the student image corresponding to the position of the pixel of interest, and supplies them to the class classification circuit 36 as a class tap used for class classification.
- the prediction tap and the class tap can be composed of the same student pixels, or of different student pixels.
- the class classification circuit 36 classifies the pixel of interest based on the class tap from the class tap configuration circuit 35 by a method according to the control signal from the control circuit 40, and supplies the class code corresponding to the resulting class of the pixel of interest to the normal equation addition circuit 37.
- as a method for performing the class classification, for example, a method using a threshold value or a method using ADRC (Adaptive Dynamic Range Coding) processing can be used.
- in the method using a threshold value, the pixel value of each student pixel constituting the class tap is binarized depending on whether it is larger than a predetermined threshold value (at or above the threshold value), and the class of the pixel of interest is determined according to the binarization result.
- in the method using ADRC processing, the student pixels constituting the class tap are subjected to ADRC processing, and the class of the pixel of interest is determined according to the ADRC code obtained as a result.
- in 1-bit ADRC, for example, the pixel value of each student pixel constituting the class tap is requantized to 1 bit, and
- a bit string in which the 1-bit pixel values of the pixels are arranged in a predetermined order is output as the ADRC code.
- if the class tap is composed of N student pixels and the K-bit ADRC processing result of the class tap is used as the class code, the pixel of interest is classified into one of (2^N)^K classes.
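A minimal sketch of the 1-bit ADRC class coding described above (requantizing each class-tap pixel to 1 bit relative to the tap's dynamic range and packing the bits into a class code) is shown below; thresholding at the midpoint of the dynamic range is assumed here for illustration.

```python
import numpy as np

def adrc_class_code(class_tap: np.ndarray) -> int:
    """1-bit ADRC: each of the N tap pixels is requantized to 1 bit,
    giving one of 2**N class codes."""
    lo, hi = class_tap.min(), class_tap.max()
    if hi == lo:                       # flat tap: dynamic range is zero
        bits = np.zeros(class_tap.size, dtype=np.uint8)
    else:
        mid = (lo + hi) / 2.0
        bits = (class_tap >= mid).astype(np.uint8)
    code = 0
    for b in bits:                     # pack bits in a fixed, predetermined order
        code = (code << 1) | int(b)
    return code

tap = np.array([10, 200, 30, 180], dtype=np.float64)
print(adrc_class_code(tap))            # one of 2**4 = 16 classes
```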
- the normal equation addition circuit 37 reads out the teacher pixel serving as the pixel of interest from the frame memory 31, and performs addition on the student pixels constituting the prediction tap and the pixel of interest (teacher pixel).
- that is, for each class corresponding to the class code supplied from the class classification circuit 36, the normal equation addition circuit 37 uses the prediction tap (student pixels) to perform operations corresponding to the multiplications of student pixels by each other (x_in x x_im) and the summations (Σ) that form the left side of the normal equations of equation (7).
- the normal equation addition circuit 37 also uses, for each class corresponding to the class code supplied from the class classification circuit 36, the prediction tap (student pixels) and the pixel of interest (teacher pixel) to perform operations corresponding to the multiplications of the student pixels by the pixel of interest (x_in x y_i) and the summations (Σ) that form the right side of the normal equations of equation (7).
- the normal equation addition circuit 37 performs the above addition using all the teacher pixels stored in the frame memory 31 as the pixel of interest, whereby the normal equations of equation (7) are established for each class for each predetermined unit of teacher pixels.
- the prediction coefficient determination circuit 38 solves the normal equations generated for each class in the normal equation addition circuit 37 to obtain the prediction coefficients for each class, and supplies them to the addresses corresponding to the respective classes in the memory 39.
- the memory 39 stores the prediction coefficients supplied from the prediction coefficient determination circuit 38 as the improvement information, and supplies them to the integration unit 12 (FIG. 2) as needed.
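Putting the pieces together, the per-class learning performed by the normal equation addition circuit 37 and the prediction coefficient determination circuit 38 could be sketched roughly as follows. The tap values, class codes, and the fallback to default coefficients are illustrative assumptions; in practice the taps and class codes would come from circuits like those described above.

```python
import numpy as np
from collections import defaultdict

def learn_per_class(taps, classes, targets, num_taps):
    """Accumulate the left/right sides of the normal equations per class
    and solve them for the per-class prediction coefficients."""
    lhs = defaultdict(lambda: np.zeros((num_taps, num_taps)))
    rhs = defaultdict(lambda: np.zeros(num_taps))
    for x, c, y in zip(taps, classes, targets):   # addition for every pixel of interest
        lhs[c] += np.outer(x, x)                  # sums of x_in * x_im
        rhs[c] += x * y                           # sums of x_in * y_i
    coeffs = {}
    default_w = np.full(num_taps, 1.0 / num_taps)  # assumed default coefficients
    for c in lhs:
        try:
            coeffs[c] = np.linalg.solve(lhs[c], rhs[c])
        except np.linalg.LinAlgError:              # not enough samples for this class
            coeffs[c] = default_w
    return coeffs

# Toy usage: 3-pixel taps, 4 classes, synthetic teacher data
rng = np.random.default_rng(1)
taps = rng.random((500, 3))
classes = rng.integers(0, 4, 500)
targets = taps.sum(axis=1)
coeffs = learn_per_class(taps, classes, targets, num_taps=3)
```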
- for a class for which prediction coefficients cannot be obtained, the prediction coefficient determination circuit 38 outputs, for example, default prediction coefficients (for example, prediction coefficients obtained in advance using a relatively large number of teacher images).
- the control circuit 40 is supplied with the method selection signal (FIG. 2) for selecting the improvement method for improving the image quality of the broadcast image data.
- the control circuit 40 controls the prediction tap configuration circuit 34, the class tap configuration circuit 35, and the class classification circuit 36 so that the improvement information necessary for improving the image quality of the broadcast image data by the improvement method indicated by the method selection signal is generated.
- the amount of money (viewing fee and the like) charged by the charging processing unit 14 differs depending on the improvement method (improvement information used) in the receiving device 3.
- the amount to be charged can be set, for example, depending on whether or not class classification adaptive processing is used as the improvement method. For example, as shown in FIG. 8A, the charged amount can be made different for three cases: when linear interpolation is used as the improvement method, when only adaptive processing is used, and when class classification adaptive processing is used.
- when linear interpolation is used as the improvement method, the improvement information generation unit 11 does not perform the processing described above and outputs, as the improvement information, for example, information indicating that linear interpolation is instructed.
- the charged amount can also be set according to the number of classes in the class classification adaptive processing used as the improvement method. That is, for example, as shown in FIG. 8B, the charged amount can be made different for three cases: when linear interpolation is used as the improvement method, when class classification adaptive processing with a small number of classes is used, and when class classification adaptive processing with a large number of classes is used.
- The billing amount can also be set according to the image quality of the high-quality image, that is, the teacher image, used to generate the prediction coefficients in the class classification adaptive processing used as the improvement method. That is, when the image quality of the teacher image is good, high-performance prediction coefficients that can greatly improve the image quality of the broadcast image data are obtained, whereas when the image quality of the teacher image is not so good, only low-performance prediction coefficients that improve the image quality of the broadcast image data slightly are obtained. Therefore, for example, as shown in Fig. 8C, the billing amount can be different in the three cases where linear interpolation is used, where prediction coefficients obtained from a teacher image of ordinary quality are used, and where prediction coefficients obtained from a teacher image of high quality are used.
- Furthermore, the billing amount can be set according to the class taps or prediction taps configured in the class classification adaptive processing used as the improvement method. In other words, the image quality of the resulting image differs depending on how the class taps and prediction taps are configured (the shape of the taps, the number of pixels constituting the taps, and whether the taps are formed from pixels in the spatial direction, the temporal direction, or both), and therefore the billing amount can be different depending on this configuration.
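- To make the tap-configuration choices mentioned above concrete, the sketch below extracts a cross-shaped spatial tap around a pixel of interest and optionally adds a temporal extension from the neighbouring frames. The tap shape, tap size, and function names are illustrative assumptions only.

```python
import numpy as np

def build_tap(frames, t, y, x, use_temporal=False):
    """Gather a cross-shaped tap of 5 spatial pixels around (y, x) in frame t,
    optionally adding the co-located pixels of the previous and next frames."""
    frame = frames[t]
    h, w = frame.shape
    offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # cross shape
    tap = [frame[min(max(y + dy, 0), h - 1), min(max(x + dx, 0), w - 1)]
           for dy, dx in offsets]
    if use_temporal:
        for dt in (-1, 1):  # previous and next frame, clipped at sequence ends
            tt = min(max(t + dt, 0), len(frames) - 1)
            tap.append(frames[tt][y, x])
    return np.array(tap, dtype=float)

# Example: a 3-frame sequence of 8x8 pixels.
frames = np.random.default_rng(1).random((3, 8, 8))
spatial_tap = build_tap(frames, 1, 4, 4)                 # 5 pixels
spatio_temporal_tap = build_tap(frames, 1, 4, 4, True)   # 7 pixels
```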
- The billing amount can also be set according to the class classification method in the class classification adaptive processing used as the improvement method. That is, as shown in Fig. 8D, the billing amount can be different in the three cases where linear interpolation is used, where adaptive processing with class classification based on the above-described threshold is used, and where adaptive processing with class classification based on ADRC processing is used.
- The improvement methods and the method selection signal can thus be associated, for example, as shown in FIGS. 8A to 8D, and the control circuit 40 outputs, to the prediction tap configuration circuit 34, the class tap configuration circuit 35, and the class classification circuit 36, a control signal instructing that the improvement information used for the improvement method corresponding to the method selection signal supplied thereto be obtained. In addition, a combination of a plurality of the above-described methods can be adopted as an improvement method.
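- A minimal sketch of how a charging table such as those of FIGS. 8A to 8D might be represented; the method names and fee values below are placeholders, not values taken from the specification.

```python
# Hypothetical fee schedule keyed by improvement method (values are placeholders).
FEE_TABLE = {
    "linear_interpolation": 0,
    "adaptive_processing_only": 100,
    "classification_adaptive_64_classes": 200,
    "classification_adaptive_1024_classes": 300,
}

def charge_for(method: str) -> int:
    """Return the viewing fee for the selected improvement method."""
    return FEE_TABLE[method]

print(charge_for("classification_adaptive_64_classes"))  # -> 200
```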
- In step S21, teacher images corresponding to the unit of improvement information generation are stored in the frame memory 31. The process then proceeds to step S22, where the control circuit 40 supplies a control signal instructing that the improvement information used for the improvement method corresponding to the method selection signal supplied thereto be obtained to the prediction tap configuration circuit 34, the class tap configuration circuit 35, and the class classification circuit 36. Accordingly, the prediction tap configuration circuit 34, the class tap configuration circuit 35, and the class classification circuit 36 are set to perform processing such that the prediction coefficients as the improvement information used in the improvement method according to the control signal are obtained.
- The method selection signal supplied to the control circuit 40 includes information indicating a plurality of improvement methods, and the control circuit 40 sequentially outputs the control signals corresponding to those improvement methods, one each time the processing in step S22 is performed.
- When the control signal output from the control circuit 40 indicates linear interpolation, a command instructing linear interpolation is stored in the memory 39 as the improvement information, the processing in steps S23 to S28 is skipped, and the process proceeds to step S29.
- After step S22, the process proceeds to step S23, in which the down converter 32 applies an LPF (Low Pass Filter) to the teacher image stored in the frame memory 31 and/or thins it out as necessary, thereby generating an image having the same image quality as the broadcast image data as a student image, which is supplied to and stored in the frame memory 33.
- Note that the student image may be an image having an image quality different from that of the broadcast image data. In that case, a control signal to that effect is supplied from the control circuit 40 to the down converter 32, and the down converter 32 generates a student image having an image quality according to the control signal from the control circuit 40.
- The process then proceeds to step S24, where, among the teacher pixels stored in the frame memory 31, a pixel that has not yet been set as the pixel of interest is set as the pixel of interest, and the prediction tap configuration circuit 34 forms a prediction tap for the pixel of interest, having a configuration according to the control signal from the control circuit 40, using the student pixels stored in the frame memory 33. Further, in step S24, the class tap configuration circuit 35 forms a class tap for the pixel of interest, having a configuration according to the control signal from the control circuit 40, using the student pixels stored in the frame memory 33. The prediction tap is then supplied to the normal equation addition circuit 37, and the class tap is supplied to the class classification circuit 36.
- In step S25, the class classification circuit 36 classifies the pixel of interest by a method according to the control signal from the control circuit 40, based on the class tap from the class tap configuration circuit 35, supplies the class code corresponding to the resulting class to the normal equation addition circuit 37, and the process proceeds to step S26.
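- As an illustration of one possible classification performed in step S25, the following sketch computes a class code from a class tap by 1-bit ADRC, requantizing each tap pixel to one bit between the tap's minimum and maximum and packing the bits into an integer; the tap size and details are assumptions for illustration.

```python
import numpy as np

def adrc_class_code(class_tap):
    """1-bit ADRC: requantize each pixel of the tap to 0/1 using the tap's
    dynamic range, then pack the bits into an integer class code."""
    tap = np.asarray(class_tap, dtype=float)
    lo, hi = tap.min(), tap.max()
    if hi == lo:                       # flat tap -> all bits zero
        bits = np.zeros(tap.size, dtype=int)
    else:
        bits = (tap >= (lo + hi) / 2.0).astype(int)
    code = 0
    for b in bits:
        code = (code << 1) | int(b)
    return code

print(adrc_class_code([10, 200, 30, 180, 90]))  # bits 0,1,0,1,0 -> class code 10
```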
- In step S26, the normal equation addition circuit 37 reads the teacher pixel serving as the pixel of interest from the frame memory 31 and performs, for each class, the addition of Expression (7) using the prediction tap (the student pixels constituting it) and the pixel of interest (teacher pixel).
- In step S27, the control circuit 40 determines whether or not the addition has been performed with all the teacher pixels of the improvement information generation unit stored in the frame memory 31 as the pixel of interest. If it is determined that the addition has not yet been performed with all the teacher pixels as the pixel of interest, the process returns to step S24. In this case, one of the teacher pixels that has not yet been set as the pixel of interest is newly set as the pixel of interest, and the processing of steps S24 to S27 is repeated.
- If it is determined by the control circuit 40 in step S27 that the addition has been performed with all the teacher pixels of the improvement information generation unit as the pixel of interest, that is, when the normal equation has been established for each class in the normal equation addition circuit 37, the process proceeds to step S28.
- In step S28, the prediction coefficient determination circuit 38 solves the normal equation generated for each class to obtain a prediction coefficient for each class, and supplies it to the address of the memory 39 corresponding to that class.
- the memory 39 stores the prediction coefficient supplied from the prediction coefficient determination circuit 38 as improvement information.
- the memory 39 has a plurality of banks, so that a plurality of types of improvement information can be stored at the same time.
- step S29 the control circuit 40 determines whether or not improvement information has been obtained for all of the plurality of enhancement methods included in the method selection signal supplied thereto.
- step S29 when it is determined that among the plurality of enhancement information used for the plurality of enhancement schemes included in the scheme selection signal, there is one that has not been obtained yet, the process returns to step S22, and The control circuit 40 outputs a control signal corresponding to the improvement method for which improvement information has not yet been obtained, and thereafter, the same processing as in the above case is repeated.
- If it is determined in step S29 that the improvement information has been obtained for all of the plurality of improvement methods included in the method selection signal, that is, when the plural types of improvement information used for the plural improvement methods included in the method selection signal are stored in the memory 39, the process proceeds to step S30, where the plural types of improvement information are read from the memory 39, supplied to the integration unit 12 (FIG. 2), and the processing is terminated.
- The improvement information generation processing shown in FIG. 9 is repeated each time a teacher image for a unit of improvement information generation is supplied to and stored in the frame memory 31.
- FIG. 10 shows another configuration example of the improvement information generation unit 11, which can generate prediction coefficients as improvement information even when high-quality image data to serve as the teacher image does not exist, that is, when the transmitting device 1 sends out images of the same quality as the original images. In the figure, portions corresponding to the case in FIG. 6 are denoted by the same reference numerals, and a description thereof will be omitted as appropriate. That is, the improvement information generation unit 11 in FIG. 10 has no down converter 32, and is basically configured in the same manner as in FIG. 6 except that a frame memory 41, a feature amount estimation circuit 42, a temporary teacher data generation circuit 43, and a temporary student data generation circuit 44 are newly provided.
- In this embodiment, a temporary teacher image and a temporary student image (hereinafter referred to as such as appropriate), which have a relationship similar to that between the true teacher image and the broadcast image data serving as the student image, are generated from the broadcast image data, and the prediction coefficients as the improvement information are generated using the temporary teacher image and the temporary student image. That is, broadcast image data is supplied to the frame memory 41, and the frame memory 41 stores the broadcast image data supplied thereto in units of improvement information generation.
- the feature amount estimating circuit 42 obtains the feature of the broadcast image data stored in the frame memory 41 and supplies the feature to the temporary teacher data generating circuit 43 and the temporary student data generating circuit 44. .
- As the feature amount of the broadcast image data, for example, the horizontal and vertical autocorrelation, a histogram of pixel values, and a histogram of difference values between adjacent pixels (a histogram of activity) can be used.
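- A rough sketch of how such feature amounts might be computed for a grayscale frame; the lag used for the autocorrelation and the histogram bin counts are illustrative assumptions.

```python
import numpy as np

def extract_features(image):
    """Return horizontal/vertical lag-1 autocorrelation, a pixel-value histogram,
    and a histogram of absolute differences between horizontally adjacent pixels."""
    img = np.asarray(image, dtype=float)
    def lag1_corr(a, b):
        return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
    h_corr = lag1_corr(img[:, :-1], img[:, 1:])   # horizontal autocorrelation
    v_corr = lag1_corr(img[:-1, :], img[1:, :])   # vertical autocorrelation
    value_hist, _ = np.histogram(img, bins=32, range=(0, 256))
    diff_hist, _ = np.histogram(np.abs(np.diff(img, axis=1)), bins=32, range=(0, 256))
    return h_corr, v_corr, value_hist, diff_hist

image = np.random.default_rng(2).integers(0, 256, size=(64, 64))
print(extract_features(image)[0:2])
```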
- The temporary teacher data generation circuit 43 estimates, based on the feature amount of the broadcast image data from the feature amount estimation circuit 42, the feature amount of the original teacher image (true teacher image) for the broadcast image data (hereinafter referred to as the estimated teacher feature amount as appropriate). Further, the temporary teacher data generation circuit 43 applies an LPF to the broadcast image data stored in the frame memory 41 and performs thinning and the like, thereby generating an image having a feature amount similar to the estimated teacher feature amount, and supplies that image to the frame memory 31 and the temporary student data generation circuit 44 as a temporary teacher image.
- The temporary student data generation circuit 44 applies an LPF to the temporary teacher image supplied from the temporary teacher data generation circuit 43, thereby generating an image having a feature amount similar to that of the broadcast image data, which is the original student image, supplied from the feature amount estimation circuit 42, and supplies that image to the frame memory 33 as a temporary student image.
- In step S41, the feature amount estimation circuit 42 extracts the feature amount of the broadcast image data stored in the frame memory 41 and supplies it to the temporary teacher data generation circuit 43 and the temporary student data generation circuit 44.
- In step S42, the temporary teacher data generation circuit 43 estimates, based on that feature amount, the feature amount of the original teacher image (the estimated teacher feature amount) for the broadcast image data, and the process proceeds to step S43.
- In step S43, the temporary teacher data generation circuit 43 sets, based on the estimated teacher feature amount, the LPF filter characteristics and the thinning width (thinning rate) for obtaining, from the broadcast image data, an image having a feature amount similar to the estimated teacher feature amount, and the process proceeds to step S44.
- In step S44, the temporary teacher data generation circuit 43 thins out the broadcast image data stored in the frame memory 41 with the set thinning width and further applies the LPF of the set filter characteristics to the image after the thinning, thereby generating a temporary teacher image.
- Note that the thinning of the broadcast image data in step S44 is performed in order to obtain an image with a steep autocorrelation shape and a high spatial frequency, because a high-quality image has a steeper autocorrelation shape (a higher spatial frequency) than a lower-quality image of the same size.
- After step S44, the process proceeds to step S45, where the temporary teacher data generation circuit 43 obtains the feature amount of the temporary teacher image generated in step S44 and determines whether that feature amount approximates the estimated teacher feature amount. If it is determined in step S45 that the feature amount of the temporary teacher image does not approximate the estimated teacher feature amount, the process proceeds to step S46, where the temporary teacher data generation circuit 43 changes the set values of the LPF filter characteristics or the thinning width for the broadcast image data, and returns to step S44. As a result, the generation of the temporary teacher image is repeated.
- If it is determined in step S45 that the feature amount of the temporary teacher image approximates the estimated teacher feature amount, the temporary teacher image is supplied to and stored in the frame memory 31, supplied to the temporary student data generation circuit 44, and the process proceeds to step S47.
- In step S47, the temporary student data generation circuit 44 sets the filter characteristics of the LPF to be applied to the temporary teacher image supplied from the temporary teacher data generation circuit 43, and the process proceeds to step S48.
- step S48 the temporary student data generation circuit 44 applies the LPF of the set filter characteristics to the temporary teacher image, and generates a temporary student image.
- In step S49, the temporary student data generation circuit 44 obtains the feature amount of the temporary student image generated in step S48 and determines whether that feature amount approximates the feature amount of the broadcast image data supplied from the feature amount estimation circuit 42. If it is determined in step S49 that the feature amount of the temporary student image does not approximate the feature amount of the broadcast image data, the process proceeds to step S50, where the temporary student data generation circuit 44 changes the set value of the filter characteristics of the LPF applied to the temporary teacher image, and returns to step S48. In this way, the temporary student image is generated again.
- If it is determined in step S49 that the feature amount of the temporary student image approximates the feature amount of the broadcast image data, the temporary student image is supplied to and stored in the frame memory 33, and the process proceeds to step S51.
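- The iterative adjustment of steps S43 to S50 can be pictured roughly as follows: a candidate image is repeatedly regenerated with a varying blur strength until its feature (here, the horizontal lag-1 autocorrelation) approaches a target value. The use of a horizontal box blur as a stand-in for the LPF, the particular feature, and the tolerance are illustrative assumptions, not the filter actually specified.

```python
import numpy as np

def box_blur(img, k):
    """Simple horizontal box blur of width k (a stand-in for the LPF)."""
    if k <= 1:
        return img.copy()
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)

def h_autocorr(img):
    return float(np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1])

def generate_matching_image(source, target_feature, tol=0.05, max_width=15):
    """Increase the blur width until the candidate's feature approximates the
    target feature (mimicking the repetition of steps S44-S46 / S48-S50)."""
    for k in range(1, max_width + 1):
        candidate = box_blur(source, k)
        if abs(h_autocorr(candidate) - target_feature) <= tol:
            return candidate, k
    return candidate, max_width  # best effort if the tolerance is never met

src = np.random.default_rng(3).random((32, 32))
img, width = generate_matching_image(src, target_feature=0.5)
print(width, h_autocorr(img))
```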
- Thereafter, the temporary teacher image stored in the frame memory 31 is used in place of the original teacher image, and the temporary student image stored in the frame memory 33 is used in place of the original student image, and the same processing as in steps S22 and S24 to S30 of FIG. 9 is performed, whereby a plurality of types of improvement information are generated and stored in the memory 39. The plural types of improvement information are then read from the memory 39, supplied to the integration unit 12 (FIG. 2), and the processing is terminated.
- Note that, in this case, the class taps and prediction taps that are formed differ from those in the case of FIG. 9 in that they are formed from the temporary student image; however, this case is the same as the embodiment of FIG. 9 in that a plurality of pixels of the student image located around the position of the pixel of interest constitute the class tap and the prediction tap.
- FIG. 12 shows a configuration example of the quality improvement unit 24 of the receiving device 3 (FIG. 4) in the case where the improvement information generation unit 11 of the transmitting device 1 (FIG. 2) is configured as described above.
- the frame memory 51 is supplied with broadcast image data output from the extraction unit 22 (FIG. 4).
- The frame memory 51 stores the broadcast image data in units of improvement information generation.
- The prediction tap configuration circuit 52 performs the same processing as the prediction tap configuration circuit 34 of FIG. 6 in accordance with the control signal from the control circuit 57, thereby forming a prediction tap using the broadcast image data stored in the frame memory 51, and supplies it to the prediction calculation circuit 56.
- The class tap configuration circuit 53 performs the same processing as the class tap configuration circuit 35 of FIG. 6 in accordance with the control signal from the control circuit 57, thereby forming a class tap using the broadcast image data stored in the frame memory 51, and supplies it to the class classification circuit 54.
- The class classification circuit 54 performs the same processing as the class classification circuit 36 of FIG. 6 in accordance with the control signal from the control circuit 57, thereby performing class classification using the class tap, and supplies the resulting class code to the memory 55 as an address.
- The memory 55 stores the prediction coefficients as the improvement information supplied from the selection unit 23 (FIG. 4). Further, the memory 55 reads out the prediction coefficient stored at the address corresponding to the class code from the class classification circuit 54 and supplies it to the prediction calculation circuit 56.
- The prediction calculation circuit 56 uses the prediction tap supplied from the prediction tap configuration circuit 52 and the prediction coefficient supplied from the memory 55 to perform the linear prediction operation (product-sum operation) shown in Expression (1), and outputs the resulting pixel value as the predicted value of the high-quality image (teacher image) obtained by improving the image quality of the broadcast image data.
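- The linear prediction operation (product-sum) of Expression (1) amounts to the inner product of the prediction tap and the prediction coefficients of the selected class; a minimal sketch follows, with an assumed tap size and placeholder coefficient values.

```python
import numpy as np

def predict_pixel(prediction_tap, coefficients):
    """Expression (1): predicted value = sum_i w_i * x_i."""
    return float(np.dot(np.asarray(coefficients, float),
                        np.asarray(prediction_tap, float)))

tap = [120, 118, 121, 119, 122, 117, 120, 121, 119]   # 9 student pixels (assumed)
w = np.full(9, 1.0 / 9.0)                              # placeholder coefficients
print(predict_pixel(tap, w))                           # ~119.7
```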
- The control circuit 57 is supplied with the method selection signal output by the selection unit 23 (FIG. 4). Based on the method selection signal, the control circuit 57 outputs the same kind of control signal as the control circuit 40 of FIG. 6 to the prediction tap configuration circuit 52, the class tap configuration circuit 53, and the class classification circuit 54.
- The method selection signal supplied from the selection unit 23 to the control circuit 57 indicates only one of the plurality of improvement methods indicated by the method selection signal supplied to the control circuit 40 of FIG. 6, namely the one corresponding to the image quality level signal output by the charging processing unit 27 (FIG. 4) in accordance with the user's request. Therefore, the control circuit 57 controls the prediction tap configuration circuit 52, the class tap configuration circuit 53, and the class classification circuit 54 so that an image of the image quality requested by the user is obtained.
- Next, the quality improvement processing performed by the quality improvement unit 24 of FIG. 12 to improve the image quality of the broadcast image data will be described. In the receiving device 3 (FIG. 4), when broadcast image data in units of improvement information generation is supplied from the extraction unit 22 to the quality improvement unit 24, the selection unit 23 supplies the quality improvement unit 24 with one set of improvement information selected from the plural types of improvement information based on the image quality level signal, together with a method selection signal indicating the improvement method that uses that improvement information.
- In step S61, the broadcast image data supplied from the extraction unit 22 is stored in the frame memory 51 in units of improvement information generation. Also in step S61, the improvement information supplied from the selection unit 23 is stored in the memory 55. Further, in step S61, the control circuit 57 receives the method selection signal from the selection unit 23 and supplies a control signal instructing that the image quality of the broadcast image data be improved by the improvement method corresponding to the method selection signal to the prediction tap configuration circuit 52, the class tap configuration circuit 53, and the class classification circuit 54. As a result, the prediction tap configuration circuit 52, the class tap configuration circuit 53, and the class classification circuit 54 are set to perform processing in accordance with the improvement method indicated by the control signal from the control circuit 57.
- Note that the improvement information stored in the memory 55 is a prediction coefficient except when the method selection signal supplied to the control circuit 57 indicates linear interpolation. When the method selection signal indicates linear interpolation, the control circuit 57 supplies the prediction calculation circuit 56 with a control signal instructing linear interpolation of the broadcast image data stored in the frame memory 51. In this case, the prediction calculation circuit 56 reads out the broadcast image data stored in the frame memory 51 via the prediction tap configuration circuit 52, performs linear interpolation, and outputs the result; the processing from step S62 onward is then not performed.
- After step S61, the process proceeds to step S62, where, among the pixels constituting the high-quality image obtained by improving the image quality of the broadcast image data stored in the frame memory 51, a pixel that has not yet been set as the pixel of interest is set as the pixel of interest, and the prediction tap configuration circuit 52 forms the prediction tap for the pixel of interest, having a configuration according to the control signal from the control circuit 57, using the pixels of the broadcast image data stored in the frame memory 51.
- Further, in step S62, the class tap configuration circuit 53 forms the class tap for the pixel of interest, having a configuration according to the control signal from the control circuit 57, using the pixels of the broadcast image data stored in the frame memory 51. The prediction tap is then supplied to the prediction calculation circuit 56, and the class tap is supplied to the class classification circuit 54.
- step S63 the class classification circuit 54 classifies the pixel of interest by using the class tap from the cluster configuration circuit 53 according to the control signal from the control circuit 57, The class code corresponding to the resulting class is supplied to the memory 55 as an address, and the process proceeds to step S64.
- In step S64, among the prediction coefficients as the improvement information stored in the memory 55 in step S61, the prediction coefficient stored at the address indicated by the class code from the class classification circuit 54 is read out and supplied to the prediction calculation circuit 56.
- In step S65, the prediction calculation circuit 56 uses the prediction tap supplied from the prediction tap configuration circuit 52 and the prediction coefficient supplied from the memory 55 to perform the linear prediction operation shown in Expression (1), and temporarily stores the resulting pixel value as the predicted value of the pixel of interest.
- In step S66, the control circuit 57 determines whether or not predicted values have been obtained with all the pixels constituting the frame of the high-quality image corresponding to the frame of the broadcast image data stored in the frame memory 51 as the pixel of interest. If it is determined in step S66 that predicted values have not yet been obtained for all such pixels, the process returns to step S62, where a pixel not yet set as the pixel of interest among the pixels constituting the frame of the high-quality image is newly set as the pixel of interest, and the same processing is repeated thereafter.
- step S66 when it is determined that the prediction value has been obtained by using all the pixels constituting the frame of the high-quality image as the target pixel, the process proceeds to step S67, and the prediction calculation circuit 56 High-quality images consisting of the predicted values obtained up to this point are sequentially output to the display unit 25 (Fig. 4), and the process ends.
- the quality improvement processing of FIG. 13 is repeatedly performed each time the broadcast image data of the improvement information generation unit is supplied to the frame memory 51.
- As described above, the transmitting device 1 transmits a plurality of types of improvement information, and the receiving device 3 selects, from the plural types of improvement information, the one corresponding to the image quality requested by the user and uses it to improve the image quality. It is therefore possible to provide an image with the image quality requested by the user and, further, to perform fine-grained billing according to the image quality of the image provided to the user.
- In the above case, the transmitting device 1 transmits a plurality of types of improvement information and the receiving device 3 selects from among them the one corresponding to the image quality requested by the user; however, it is also possible for the transmitting device 1 to receive the user's request from the receiving device 3 in advance and to transmit only the improvement information corresponding to the image quality of that request to the receiving device 3. In this case, as shown by the dotted line in FIG. 7, under the control of the charging processing unit 14, the integration unit 12 includes in the integration only the improvement information corresponding to the image quality requested by the user.
- Further, in the above case, a plurality of types of prediction coefficients and a command indicating linear interpolation are transmitted as the plural types of improvement information; however, instead of transmitting the plurality of types of prediction coefficients from the transmitting device 1, it is also possible to store prediction coefficients calculated in advance in the memory 55 of the receiving device 3 and to transmit, as the plural types of improvement information, information indicating which of the prediction coefficients stored in the memory 55 is to be used.
- In the above case, the class classification adaptive processing and linear interpolation are used as improvement methods, but other processing can also be adopted as an improvement method.
- Further, in the above case, the improvement information generation unit 11 generates prediction coefficients as the improvement information, and the quality improvement unit 24 performs class classification adaptive processing using the prediction coefficients, thereby improving the image quality of the image. Alternatively, the improvement information generation unit 11 may obtain, as the improvement information, the class code of the prediction coefficient suitable for predicting each pixel, and the quality improvement unit 24 may improve the image quality of the image by performing adaptive processing using the prediction coefficient of that class code.
- That is, in this case, the improvement information generation unit 11 and the quality improvement unit 24 store, in advance, prediction coefficients for each class obtained by learning. The improvement information generation unit 11 then obtains predicted values of the high-quality image by performing adaptive processing using the stored prediction coefficients of each class and, for each pixel, obtains as the improvement information the class code of the prediction coefficient that yields the predicted value closest to the true value.
- The quality improvement unit 24 obtains predicted values of the high-quality image using the prediction coefficients stored in advance that correspond to the class codes serving as the improvement information, thereby obtaining an image with improved image quality. In this case, the receiving device 3 obtains the same image as that obtained in the transmitting device 1 (the improvement information generation unit 11).
- Note that, in this case, the above-described class classification is not performed in the improvement information generation unit 11 or in the quality improvement unit 24. That is, in the improvement information generation unit 11, the class code of the prediction coefficient suitable for obtaining the predicted value is found, for example, by performing adaptive processing (prediction calculations) using the prediction coefficients of all the classes, and the quality improvement unit 24 improves the image quality by performing adaptive processing using the prediction coefficient of that class code as the improvement information; therefore, neither the improvement information generation unit 11 nor the quality improvement unit 24 needs to perform class classification.
- the memory 101 stores a prediction coefficient for each class obtained by performing learning in a learning device (FIG. 16) described later.
- The memory 101 is controlled by the control circuit 40, sequentially reads out the prediction coefficients of each class, and supplies them to the prediction calculation circuit 102.
- The prediction calculation circuit 102 is supplied with the prediction coefficients from the memory 101 and with the prediction tap from the prediction tap configuration circuit 34.
- Similarly to the prediction calculation circuit 56 of FIG. 12, the prediction calculation circuit 102 performs the linear prediction operation of Expression (1) using the prediction tap supplied from the prediction tap configuration circuit 34 and the prediction coefficients supplied from the memory 101. For a given prediction tap, the prediction calculation circuit 102 performs the linear prediction with each of the prediction coefficients of each class sequentially supplied from the memory 101 and obtains predicted values of the teacher pixel, so that, for a teacher pixel, it obtains the same number of predicted values as the total number of classes.
- The predicted values obtained by the prediction calculation circuit 102 are supplied to the comparison circuit 103. The teacher image is also supplied to the comparison circuit 103 from the frame memory 31. The comparison circuit 103 compares the teacher pixel constituting the teacher image supplied from the frame memory 31 with each predicted value of that teacher pixel, supplied from the prediction calculation circuit 102 and obtained with the prediction coefficients of each class, thereby obtaining prediction errors, and supplies them to the detection circuit 104.
- The detection circuit 104 detects the predicted value of the teacher pixel that minimizes the prediction error supplied from the comparison circuit 103, detects the class code representing the class of the prediction coefficient with which that predicted value was obtained, and outputs it as the improvement information.
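- A rough sketch of what the prediction calculation circuit 102, comparison circuit 103, and detection circuit 104 accomplish together: for one prediction tap, a predicted value is computed with the coefficients of every class, and the class whose prediction error against the true teacher pixel is smallest is output as the class code. The shapes and values below are illustrative assumptions.

```python
import numpy as np

def best_class_code(prediction_tap, per_class_coeffs, teacher_pixel):
    """Try the prediction coefficients of every class and return the class code
    whose predicted value is closest to the true teacher pixel."""
    x = np.asarray(prediction_tap, dtype=float)
    predictions = per_class_coeffs @ x                   # one predicted value per class
    errors = np.abs(predictions - float(teacher_pixel))  # prediction error per class
    return int(np.argmin(errors))

rng = np.random.default_rng(4)
coeffs = rng.random((16, 9))      # 16 classes x 9-tap coefficients (assumed)
tap = rng.random(9)
print(best_class_code(tap, coeffs, teacher_pixel=2.0))
```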
- In step S111, teacher images corresponding to the unit of improvement information generation are stored in the frame memory 31. The process then proceeds to step S112, where the control circuit 40 supplies a control signal instructing that the improvement information used for the improvement method corresponding to the method selection signal supplied thereto be obtained to the prediction tap configuration circuit 34. As a result, the prediction tap configuration circuit 34 is set to perform processing for obtaining the class code as the improvement information used in the improvement method according to the control signal.
- The method selection signal supplied to the control circuit 40 includes information indicating a plurality of improvement methods, and the control circuit 40 sequentially outputs the control signals corresponding to those improvement methods, one each time the processing in step S112 is performed.
- When the control signal output from the control circuit 40 indicates linear interpolation, a command instructing linear interpolation is stored in the memory 39 as the improvement information, the processing of steps S113 to S122 is skipped, and the process proceeds to step S123.
- After step S112, the process proceeds to step S113, where the down converter 32 applies an LPF (Low Pass Filter) to the teacher image stored in the frame memory 31 and/or thins it out as necessary, thereby generating an image having the same image quality as the broadcast image data as a student image, which is supplied to and stored in the frame memory 33.
- Note that the student image may be an image having an image quality different from that of the broadcast image data; in that case, a control signal to that effect is supplied from the control circuit 40 to the down converter 32, and the down converter 32 generates a student image of an image quality according to the control signal from the control circuit 40.
- The process then proceeds to step S114, where, among the teacher pixels stored in the frame memory 31, a pixel that has not yet been set as the pixel of interest is set as the pixel of interest, and the prediction tap configuration circuit 34 forms a prediction tap for the pixel of interest, having a configuration according to the control signal from the control circuit 40, using the student pixels stored in the frame memory 33.
- the prediction tap is supplied to the prediction operation circuit 102.
- step S115 the control circuit 40 sets the variable i for counting the class to 0 as an initial value, and proceeds to step S116.
- In step S116, the control circuit 40 outputs the variable i as an address to the memory 101, whereby the prediction coefficient corresponding to class code #i is read from the memory 101 and supplied to the prediction calculation circuit 102.
- In step S117, the prediction calculation circuit 102 uses the prediction tap supplied from the prediction tap configuration circuit 34 and the prediction coefficient supplied from the memory 101 to perform the linear prediction operation of Expression (1), and supplies the obtained pixel value to the comparison circuit 103 as a predicted value of the pixel of interest.
- In step S118, the comparison circuit 103 reads the pixel value of the pixel of interest from the frame memory 31 and compares it with the predicted value from the prediction calculation circuit 102, thereby obtaining the prediction error of that predicted value. Further, in step S118, the comparison circuit 103 supplies the prediction error to the detection circuit 104, and the process proceeds to step S119. In step S119, the control circuit 40 increments the variable i by 1 and proceeds to step S120. In step S120, the control circuit 40 determines whether or not the variable i is less than the total number N of classes. If it is determined that the variable i is less than N, the process returns to step S116, and the same processing is repeated thereafter.
- If it is determined in step S120 that the variable i is not less than N, that is, when the prediction errors of the predicted values have been obtained using the prediction coefficients of all the classes for the pixel of interest, the process proceeds to step S121, where the detection circuit 104 detects the class of the prediction coefficient that minimizes the prediction error of the pixel of interest and stores the class code corresponding to that class as the improvement information. In step S122, it is determined whether or not all the teacher pixels of the improvement information generation unit have been set as the pixel of interest; if not, the process returns to step S114, and otherwise it proceeds to step S123.
- step S123 the control circuit 40 determines whether or not the improvement information has been obtained for all of the plurality of enhancement methods included in the method selection signal supplied thereto.
- step S123 when it is determined that among the plurality of enhancement information used for the plurality of enhancement schemes included in the scheme selection signal, there is one that has not been determined yet, the process returns to step S112.
- the control circuit 40 outputs a control signal corresponding to the improvement method for which improvement information has not yet been obtained, and the same processing as in the above case is repeated.
- If it is determined in step S123 that the improvement information has been obtained for all of the plurality of improvement methods included in the method selection signal, the process proceeds to step S124, where the class codes as the plural types of improvement information are output from the detection circuit 104, supplied to the integration unit 12 (FIG. 2), and the processing is terminated.
- The improvement information generation processing of FIG. 15, like that of the embodiment of FIG. 9, is repeated each time a teacher image for each improvement information generation unit (for example, a teacher image of one frame) is supplied to the frame memory 31.
- FIG. 16 shows a configuration example of an embodiment of a learning device for obtaining the prediction coefficients for each class to be stored in the memory 101 of FIG. 14.
- The frame memory 111, down converter 112, frame memory 113, prediction tap configuration circuit 114, class tap configuration circuit 115, class classification circuit 116, normal equation addition circuit 117, prediction coefficient determination circuit 118, and memory 119 of the learning device correspond, respectively, to the frame memory 31, down converter 32, frame memory 33, prediction tap configuration circuit 34, class tap configuration circuit 35, class classification circuit 36, normal equation addition circuit 37, prediction coefficient determination circuit 38, and memory 39 of the improvement information generation unit 11 in FIG. 6. Therefore, in the learning device of FIG. 16, basically the same processing as in the improvement information generation unit 11 of FIG. 6 is performed, whereby a prediction coefficient for each class is obtained.
- In the memory 101 of FIG. 14, the prediction coefficients for each class obtained by performing learning in the learning device of FIG. 16 are stored.
- Note that the down converter 112, the prediction tap configuration circuit 114, the class tap configuration circuit 115, and the class classification circuit 116 are controlled so that prediction coefficients for each class are obtained while changing the tap configuration and the class classification method.
- FIG. 17 shows a configuration example of the quality improvement unit 24 of the receiving device 3 (FIG. 4) in this case. In the figure, portions corresponding to those in FIG. 12 are denoted by the same reference numerals, and a description thereof will be omitted below as appropriate.
- The class code storage unit 121 stores class codes as the improvement information. That is, in this case, a class code is transmitted as the improvement information for each pixel of the high-quality image obtained by improving the image quality of the broadcast image data. The class code as this improvement information is supplied from the selection unit 23 of the receiving device 3 (FIG. 4) to the quality improvement unit 24 (FIG. 17), and the class code storage unit 121 stores this class code as the improvement information.
- The class code storage unit 121 supplies the stored class code to the memory 122 as an address under the control of the control circuit 57.
- The memory 122 stores the prediction coefficients for each class obtained in the learning device of FIG. 16, reads out the prediction coefficient corresponding to the class code given as an address from the class code storage unit 121, and supplies it to the prediction calculation circuit 56.
- In step S131, the frame memory 51 stores the broadcast image data supplied from the extraction unit 22 in units of improvement information generation. Also in step S131, the improvement information (class codes) supplied from the selection unit 23 is stored in the class code storage unit 121.
- Further, in step S131, the control circuit 57 is supplied with the method selection signal from the selection unit 23 and supplies, to the prediction tap configuration circuit 52, a control signal instructing that the image quality of the broadcast image data be improved by the improvement method corresponding to the method selection signal. As a result, the prediction tap configuration circuit 52 is set to perform processing according to the improvement method indicated by the control signal from the control circuit 57.
- Note that the improvement information stored in the class code storage unit 121 is a class code except when the method selection signal supplied to the control circuit 57 indicates linear interpolation. When the method selection signal indicates linear interpolation, the control circuit 57 supplies the prediction calculation circuit 56 with a control signal instructing linear interpolation of the broadcast image data stored in the frame memory 51. In this case, the prediction calculation circuit 56 reads out the broadcast image data stored in the frame memory 51 via the prediction tap configuration circuit 52, performs linear interpolation, and outputs the result; the processing from step S132 onward is then not performed.
- After step S131, the process proceeds to step S132, where, among the pixels constituting the high-quality image obtained by improving the image quality of the broadcast image data stored in the frame memory 51, a pixel that has not yet been set as the pixel of interest is set as the pixel of interest, and the prediction tap configuration circuit 52 forms the prediction tap for the pixel of interest, having a configuration according to the control signal from the control circuit 57, using the pixels of the broadcast image data stored in the frame memory 51. This prediction tap is supplied to the prediction calculation circuit 56.
- In step S133, the control circuit 57 controls the class code storage unit 121 so as to read out the class code as the improvement information for the pixel of interest; the class code is thereby read from the class code storage unit 121 and supplied to the memory 122 as an address. The prediction coefficient stored in the memory 122 at the address indicated by that class code is then read out and supplied to the prediction calculation circuit 56. The prediction calculation circuit 56 uses the prediction tap supplied from the prediction tap configuration circuit 52 and the prediction coefficient supplied from the memory 122 to perform the linear prediction operation of Expression (1), and temporarily stores the resulting pixel value as the predicted value of the pixel of interest.
- In step S136, the control circuit 57 determines whether or not predicted values have been obtained with all the pixels constituting the frame of the high-quality image corresponding to the frame of the broadcast image data stored in the frame memory 51 as the pixel of interest. If it is determined in step S136 that predicted values have not yet been obtained for all such pixels, the process returns to step S132, where a pixel not yet set as the pixel of interest among the pixels constituting the frame of the high-quality image is newly set as the pixel of interest, and the same processing is repeated thereafter.
- step S136 when it is determined that the prediction value has been obtained using all the pixels constituting the frame of the high-quality image as the pixel of interest, the process proceeds to step S137, and the prediction operation circuit 56 Outputs the high-quality image composed of the predicted values obtained so far to the display unit 25 (FIG. 4), and ends the processing.
- In the above case, the class codes are obtained by forming class taps using SD images (student images) and classifying them; however, as the prediction coefficients for each class to be stored in common in the memories 101 and 122, those obtained by forming class taps and performing class classification using HD images (teacher images) instead of SD images can also be adopted.
- In this case, the class code as the improvement information can be obtained without using the prediction coefficients of each class as described above, that is, without calculating the predicted value of the pixel of interest.
- FIG. 19 shows a configuration example of the improvement information generation unit 11 that obtains a class code as the improvement information by forming class taps from HD images (teacher images) and performing class classification.
- In the figure, the same reference numerals are given to portions corresponding to those in FIG. 6, and a description thereof will be omitted as appropriate.
- That is, the improvement information generation unit 11 in FIG. 19 is configured in the same manner as in FIG. 6 except that the down converter 32, the frame memory 33, the prediction tap configuration circuit 34, the normal equation addition circuit 37, and the prediction coefficient determination circuit 38 are not provided.
- In step S141, teacher images corresponding to the unit of improvement information generation are stored in the frame memory 31. The process then proceeds to step S142, where the control circuit 40 supplies a control signal instructing that the improvement information used for the improvement method corresponding to the method selection signal supplied thereto be obtained to the class tap configuration circuit 35 and the class classification circuit 36. As a result, the class tap configuration circuit 35 and the class classification circuit 36 are set to perform processing for obtaining the class code as the improvement information used in the improvement method according to the control signal.
- The method selection signal supplied to the control circuit 40 includes information indicating a plurality of improvement methods, as in the above-described cases, and the control circuit 40 sequentially outputs the corresponding control signals, one each time the processing in step S142 is performed.
- When the control signal output from the control circuit 40 indicates linear interpolation, the fact that linear interpolation is instructed is stored in the memory 39 as the improvement information, the processing of steps S143 to S145 is skipped, and the process proceeds to step S146. After step S142, the process proceeds to step S143, where, among the teacher pixels stored in the frame memory 31, a pixel that has not yet been set as the pixel of interest is set as the pixel of interest, and the class tap configuration circuit 35 forms a class tap for the pixel of interest, having a configuration according to the control signal from the control circuit 40, using the teacher pixels stored in the frame memory 31. The class tap is then supplied to the class classification circuit 36.
- In step S144, the class classification circuit 36 classifies the pixel of interest by a method according to the control signal from the control circuit 40, based on the class tap from the class tap configuration circuit 35, supplies the class code corresponding to the resulting class to the memory 39 for storage, and the process proceeds to step S145.
- In step S145, the control circuit 40 determines whether or not the class classification has been performed with all the teacher pixels of the improvement information generation unit stored in the frame memory 31 as the pixel of interest. If it is determined that the classification has not yet been performed with all the teacher pixels as the pixel of interest, the process returns to step S143, where a teacher pixel that has not yet been set as the pixel of interest is newly set as the pixel of interest, and the processing of steps S143 to S145 is repeated.
- If it is determined by the control circuit 40 in step S145 that the classification has been performed with all the teacher pixels of the improvement information generation unit as the pixel of interest, the process proceeds to step S146, where the control circuit 40 determines whether or not the improvement information has been obtained for all of the plurality of improvement methods included in the method selection signal supplied thereto. If it is determined in step S146 that some of the improvement information used for the plurality of improvement methods included in the method selection signal has not yet been obtained, the process returns to step S142, where the control circuit 40 outputs the control signal corresponding to the improvement method for which the improvement information has not yet been obtained, and the same processing as described above is repeated thereafter.
- If it is determined in step S146 that the class codes as the improvement information have been obtained for all of the plurality of improvement methods included in the method selection signal, that is, when the class codes as the plural types of improvement information used for the plural improvement methods included in the method selection signal are stored in the memory 39, the process proceeds to step S147, where the plural types of improvement information are read from the memory 39, supplied to the integration unit 12 (FIG. 2), and the processing is terminated.
- FIG. 21 shows a configuration example of the quality improvement unit 24 of the receiving device 3 (FIG. 4) in the case where the improvement information generation unit 11 is configured as shown in FIG. 19. Portions corresponding to those in FIG. 17 are denoted by the same reference numerals, and a description thereof will be omitted below as appropriate. That is, the quality improvement unit 24 in FIG. 21 is configured in the same manner as in FIG. 17 except that a memory 131 is provided in place of the memory 122.
- That is, whereas the memory 122 of FIG. 17 stores the prediction coefficients for each class obtained by the learning device of FIG. 16 through learning with class classification using class taps composed of student pixels, the memory 131 in the embodiment of FIG. 21 stores the prediction coefficients for each class obtained through learning with class classification using class taps composed of teacher pixels.
- FIG. 22 shows a configuration example of an embodiment of a learning device that performs learning for performing class classification using class taps composed of teacher pixels.
- the same reference numerals are given to the portions corresponding to the case in FIG. 16, and the description thereof will be appropriately omitted below. That is, the learning device of FIG. 22 is basically configured in the same manner as in the case of FIG.
- However, the class tap configuration circuit 115 forms class taps from the teacher image stored in the frame memory 111 instead of from the student image stored in the frame memory 113. Note that the class tap configuration circuit 115 forms the same class taps as the class tap configuration circuit 35 constituting the improvement information generation unit 11 of FIG. 19.
- the image quality can be improved according to the user's request. Therefore, also in this case, it is possible to provide an image having the image quality according to the user's request, and further, it is possible to perform fine charging according to the image quality of the image provided to the user.
- Further, the case where high-quality image data having the same content as the broadcast image data does not exist can be dealt with in the same manner as in the case where the prediction coefficient is used as the improvement information.
- The broadcast image data and the improvement information can be integrated into an integrated signal by, for example, time division multiplexing or frequency multiplexing.
- It is also possible to perform the integration by embedding the improvement information in the broadcast image data.
- That is, information that can be valued as information (valuable information) has a bias of energy (entropy), and it is this bias that is recognized as valuable information. For example, an image obtained by photographing a certain scenery is recognized by a person as an image of that scenery because the image (the pixel values of the pixels constituting it, and so on) has an energy bias corresponding to the scenery; an image without such an energy bias is merely noise or the like and has no value as information.
- The correlation of information refers to the correlation between the components of that information (for example, in the case of an image, the pixels and lines constituting it), such as autocorrelation or the distance between one component and another.
- For example, the correlation of an image can be represented by the correlation between the lines of the image, and as the correlation value representing that correlation, for example, the sum of the squares of the differences between the corresponding pixel values of the two lines can be used (in this case, a small correlation value indicates a large correlation between the lines, and a large correlation value indicates a small correlation between the lines).
- In an image, the correlation between the first line from the top (the first line) and another line is generally larger for a line closer to the first line and smaller for a line farther from it. That is, there is a bias in the correlation such that the closer a line is to the first line, the greater its correlation with the first line, and the farther away it is, the smaller the correlation.
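- The correlation value described here (the sum of squared differences between the corresponding pixels of two lines, smaller meaning more correlated) can be sketched as follows; the smooth synthetic image is an illustrative assumption, used only to show how the value grows with distance from the first line.

```python
import numpy as np

def line_correlation_value(line_a, line_b):
    """Sum of squared differences between two lines; small value = high correlation."""
    d = np.asarray(line_a, float) - np.asarray(line_b, float)
    return float(np.sum(d * d))

# A smooth synthetic image: each line's level increases gradually from top to bottom,
# so nearby lines are similar and distant lines differ more.
rows = np.arange(16)[:, None]
image = rows * 8.0 + np.random.default_rng(5).normal(0, 1, (16, 32))

for n in (1, 4, 12):
    print(n, line_correlation_value(image[0], image[n]))
# The printed value grows with the distance n from the first line (line 0).
```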
- Now, suppose that, in an image having this correlation bias, the M-th line, which is relatively close to the first line, is swapped with the N-th line, which is relatively far from it. In the image after the swap, the correlation bias whereby the correlation with the first line increases as the distance from the first line decreases and decreases as the distance increases is destroyed.
- However, the destroyed correlation can be restored by using the correlation bias that the correlation increases as the distance from the first line decreases and decreases as the distance increases. That is, in the swapped image, the small correlation with the M-th line, which is near the first line, and the large correlation with the N-th line, which is far from the first line, are clearly unnatural (unusual) in view of the correlation bias that the original image has, so the M-th and N-th lines should be swapped back.
- By swapping the lines back in this way so as to restore the original correlation bias, an image having the original correlation bias, that is, the original image, is obtained; in other words, the image in which the embedding was performed is decoded back into the original image, and the embedded improvement information is obtained from how the lines had to be swapped back.
- In this way, the original image and the improvement information can be decoded without any overhead for decoding.
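- A rough sketch, under the simplifying assumptions of the preceding paragraphs, of how swapped lines could be put back by exploiting the correlation bias: starting from the first line (which was never moved), the remaining lines are reordered greedily so that each next line is the one most correlated with the previously placed line. The actual integration in this specification works on columns and also recovers the embedded bits from the recovered permutation; this sketch only illustrates the correlation-restoration idea.

```python
import numpy as np

def restore_by_correlation(scrambled):
    """Greedy restoration: keep line 0 fixed, then repeatedly append the remaining
    line with the smallest sum-of-squared-difference to the line placed last."""
    lines = [scrambled[0]]
    remaining = list(range(1, scrambled.shape[0]))
    order = [0]
    while remaining:
        last = lines[-1]
        dists = [np.sum((scrambled[i] - last) ** 2) for i in remaining]
        pick = remaining[int(np.argmin(dists))]
        remaining.remove(pick)
        lines.append(scrambled[pick])
        order.append(pick)
    return np.stack(lines), order  # 'order' reveals how the lines had been permuted

# Smooth original image; scramble lines 1..N-1 and then restore them.
rng = np.random.default_rng(6)
original = np.arange(16)[:, None] * 8.0 + rng.normal(0, 0.5, (16, 32))
perm = np.concatenate(([0], rng.permutation(np.arange(1, 16))))
scrambled = original[perm]
restored, order = restore_by_correlation(scrambled)
print(np.allclose(restored, original))  # True for this strongly correlated image
```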
- An image in which improvement information has been embedded (hereinafter referred to as an embedded image as appropriate) becomes an image different from the original image and is no longer an image that a person can recognize as valuable information, so that, in effect, encryption of the original image is realized without overhead.
- FIG. 23 shows a configuration example of the integration unit 12 of FIG. 2 that performs integration by embedding the improvement information into the broadcast image data as described above.
- the frame memory 61 stores broadcast image data in, for example, one frame unit.
- The frame memory 61 is composed of a plurality of banks, and by switching banks, storage of the broadcast image data supplied thereto, swapping of columns as described later, and reading of data from the frame memory 61 can be performed at the same time.
- The swap information generation unit 62 receives the improvement information from the improvement information generation unit 11 (FIG. 2) and, based on that improvement information, generates swap information that represents how the positions of the columns of one frame of the image (broadcast image data) stored in the frame memory 61 are to be swapped. That is, when one frame of the image stored in the frame memory 61 is composed of M rows and N columns of pixels and the n-th column of the image (the n-th column from the left) is to be moved to the n'-th column, the swap information generation unit 62 generates swap information in which n and n' are associated with each other (n and n' are integers of 1 or more and N or less).
- If all N columns are subject to swapping, the number of possible ways of swapping them is N! (N factorial). Therefore, in this case, a maximum of log2(N!) bits of information can be embedded in one frame.
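- As a quick check of this capacity figure, the maximum number of whole bits that can be embedded by permuting N columns is the floor of log2(N!); a small sketch, assuming for illustration a 720-column frame.

```python
import math

def embed_capacity_bits(num_columns: int) -> int:
    """Maximum whole bits representable by a permutation of num_columns columns,
    i.e. floor(log2(N!)), computed exactly with big integers."""
    return math.factorial(num_columns).bit_length() - 1

print(embed_capacity_bits(720))  # about 5,800 bits per frame for N = 720
```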
- The swap information generated by the swap information generation unit 62 is supplied to the swapping unit 63, and the swapping unit 63 swaps the positions of the columns of the one-frame image stored in the frame memory 61 in accordance with the swap information supplied from the swap information generation unit 62. As a result, the improvement information is embedded in the broadcast image data stored in the frame memory 61.
- The broadcast image data is supplied to the frame memory 61 and is sequentially stored in the frame memory 61.
- In step S71, the swap information generation unit 62 is supplied, from the improvement information generation unit 11, with improvement information of an amount that can be embedded in one frame of the image (broadcast image data). That is, for example, as described above, if the number of columns of one frame of the broadcast image data is N and all the columns are to be replaced, up to log2(N!) bits of improvement information can be embedded in one frame, so improvement information of that number of bits (or fewer) is supplied.
- Then, the swap information generation unit 62 proceeds to step S72 and generates swap information based on the improvement information supplied in step S71. That is, based on the improvement information, the swap information generation unit 62 generates swap information indicating, for each of the second to N-th columns (that is, the columns excluding the first column) of the frame to be subjected to the embedding process stored in the frame memory 61 (hereinafter referred to as the frame to be processed, as appropriate), which column it should be replaced with. This swap information is supplied to the swapping unit 63.
- When the swapping unit 63 receives the swap information from the swap information generation unit 62, the process proceeds to step S73, and the swapping unit 63 swaps the positions of the columns of the frame to be processed stored in the frame memory 61 in accordance with the swap information. As a result, the improvement information is embedded in the frame to be processed, and the broadcast image data (embedded image) in which the improvement information is embedded in this way is read from the frame memory 61 and supplied, as the integrated signal, to the transmission unit 13 (FIG. 2).
- Note that the positions of the columns of the frame can be exchanged by changing the storage positions of the image data (the pixels constituting each column) in the frame memory 61. Alternatively, by controlling the addresses at the time of reading the frame from the frame memory 61, a frame in which the positions of the columns have been exchanged may, as a result, be read from the frame memory 61.
- Here, the swap information includes information indicating which columns the second to N-th columns should each be replaced with, but does not include information indicating which column the first column should be replaced with. Therefore, in the swapping unit 63, the second to N-th columns are exchanged, but the first column is not. The reason for this will be described later.
- In step S74, it is determined whether or not a frame of the broadcast image data that has not yet been set as the frame to be processed is stored in the frame memory 61. If it is determined that such a frame is stored, the process returns to step S71, a frame that has not yet been set as the frame to be processed is newly set as the frame to be processed, and the same processing is repeated.
- On the other hand, if it is determined in step S74 that no frame that has not yet been set as the frame to be processed is stored in the frame memory 61, the embedding process ends.
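- The sketch below illustrates the kind of column-swapping embedding described for the integration unit 12: the first column is left untouched and the remaining columns are permuted according to the improvement information. The mapping from improvement-information bits to a permutation (a factorial-number-system decode) is an assumption for illustration only; the description above only requires that some permutation of the second to N-th columns encode the bits.

```python
# Illustration only: embed an integer (the improvement information) by permuting
# all columns except the first one.
import numpy as np

def bits_to_permutation(bits: int, n: int) -> list[int]:
    """Decode an integer into a permutation of n items (mixed-radix / Lehmer-style)."""
    perm, items = [], list(range(n))
    for radix in range(n, 0, -1):
        bits, idx = divmod(bits, radix)
        perm.append(items.pop(idx))
    return perm

def embed(frame: np.ndarray, info: int) -> np.ndarray:
    """Return an 'embedded image': column 0 stays fixed, columns 1..N-1 are permuted."""
    n_swappable = frame.shape[1] - 1          # the first column is never moved
    perm = bits_to_permutation(info, n_swappable)
    out = frame.copy()
    out[:, 1:] = frame[:, 1:][:, perm]
    return out

frame = np.arange(6 * 8, dtype=float).reshape(6, 8)   # toy 6x8 "broadcast" frame
embedded = embed(frame, info=0b101101)                # improvement-information bits
```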
- In the above manner, an image of a certain frame (here, broadcast image data) is converted into an embedded image as an integrated signal, for example, as follows.
- That is, suppose that, for the frame of N columns to be processed (FIG. 25A), the second column is to be replaced with the sixth column, the third column with the ninth column (FIG. 25B), and so on, with the N-th column corresponding to the N-th column, and that swap information representing such a replacement is generated in the swap information generation unit 62.
- In this case, in accordance with the above-described swap information, for a frame as shown in FIG. 25J, the second column is replaced with the sixth column, the third column with the ninth column, the fourth column with the seventh column, the fifth column with the third column, the sixth column with the eighth column, the seventh column with the fourth column, the eighth column with the fifth column, the ninth column with the second column, ..., and the N-th column with the N-th column.
- As a result, the image of FIG. 25J becomes an embedded image as shown in FIG. 25K.
- If the embedded image does not include any column at its correct (original) position, it is difficult to decode the image and the improvement information by using the correlation of the image as described above. Therefore, in the embedding process of FIG. 24, the first column of each frame is not replaced.
- Note that the improvement information can be embedded in the image by exchanging the columns sequentially, or by exchanging all the columns at once. In other words, it is possible to embed the improvement information in the image by repeating, for example, exchanging a certain column based on certain improvement information and then exchanging the next column based on the next improvement information, and it is also possible to determine the replacement pattern of all the columns based on the improvement information and perform such replacement at once.
- FIG. 26 shows a configuration example of the extraction unit 22 of the receiving device 3 (FIG. 4) when the integration unit 12 of the transmission device 1 (FIG. 2) is configured as shown in FIG. 23.
- The frame memory 71 has the same configuration as the frame memory 61 in FIG. 23, and sequentially stores the embedded image as the integrated signal output by the receiving unit 21 (FIG. 4), for example, in units of one frame.
- The swapping unit 72 calculates, for the frame to be processed of the embedded image stored in the frame memory 71, the correlation between a column already returned to its original position and the other columns (columns not yet returned to their original positions), and based on the correlation, returns each column of the embedded image to its original position, thereby decoding the original image.
- Further, the swapping unit 72 supplies the swap information conversion unit 73 with swap information indicating how the columns of the frame were replaced.
- The swap information conversion unit 73 decodes the improvement information embedded in the embedded image based on the swap information from the swapping unit 72, that is, based on the correspondence between the positions of the columns of the frame to be processed before and after the replacement.
- First, the embedded image (encoded data) supplied to the frame memory 71 is stored, for example, in units of one frame.
- Then, in the swapping unit 72, a variable n for counting the columns of the frame is set to, for example, 1 as an initial value, and the process proceeds to step S82, where it is determined whether the variable n is equal to or less than N−1, that is, the number of columns N of the frame minus 1.
- When it is determined in step S82 that the variable n is equal to or less than N−1, the process proceeds to step S83, and the swapping unit 72 reads the n-th column (pixel sequence) from the frame to be processed stored in the frame memory 71 and generates a vector vn in which the pixels (pixel values) of the n-th column are arranged as elements (hereinafter referred to as a column vector, as appropriate).
- In step S84, a variable k for counting the columns on the right side of the n-th column is set to n+1 as an initial value, and the process proceeds to step S85, where the swapping unit 72 reads out the pixels of the k-th column, generates a column vector vk having the pixels of the k-th column as elements, and proceeds to step S86.
- In step S86, the correlation between the n-th column and the k-th column is obtained by using the column vectors vn and vk. That is, the distance d(n, k) between the column vectors vn and vk is calculated according to the following equation.
- Here, Σ represents a summation in which m is changed from 1 to M, and A(i, j) represents the pixel (pixel value) in the i-th row and the j-th column of the frame to be processed.
- Then, the reciprocal 1/d(n, k) of the distance d(n, k) between the column vectors vn and vk is obtained as the correlation (correlation value) of the k-th column with the n-th column.
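- A minimal sketch of the distance/correlation computation in steps S85 to S88 follows. The exact form of d(n, k) is not reproduced in this text, so a sum of absolute pixel differences is assumed here; the correlation value is its reciprocal, as described above.

```python
# Illustration only: distance between two column vectors and the derived correlation.
import numpy as np

def column_distance(frame: np.ndarray, n: int, k: int) -> float:
    """d(n, k): distance between column vectors v_n and v_k (L1 distance assumed)."""
    return float(np.abs(frame[:, n] - frame[:, k]).sum())

def column_correlation(frame: np.ndarray, n: int, k: int, eps: float = 1e-12) -> float:
    """Correlation value of column k with column n: the reciprocal 1/d(n, k)."""
    return 1.0 / (column_distance(frame, n, k) + eps)
```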
- After the correlation between the n-th column and the k-th column is obtained, the process proceeds to step S87, where it is determined whether or not the variable k is equal to or less than N−1, that is, the number of columns N of the frame minus 1. If it is determined in step S87 that the variable k is equal to or less than N−1, the process proceeds to step S88, the variable k is incremented by 1, and the process returns to step S85. Thereafter, the processing of steps S85 to S88 is repeated until it is determined in step S87 that the variable k is not equal to or less than N−1. Thereby, the correlation between the n-th column and each column of the embedded image on its right side is obtained.
- On the other hand, when it is determined in step S87 that the variable k is not equal to or less than N−1, the process proceeds to step S89, and the swapping unit 72 obtains the value of k that maximizes the correlation with the n-th column. Then, when the value of k that maximizes the correlation with the n-th column is denoted by, for example, K, the swapping unit 72 swaps, in step S90, the K-th column of the frame to be processed stored in the frame memory 71 into the (n+1)-th column, that is, places the K-th column immediately to the right of the n-th column.
- In step S91, the variable n is incremented by 1, and the process returns to step S82. Thereafter, the processing of steps S82 to S91 is repeated until it is determined in step S82 that the variable n is not equal to or less than N−1.
- Here, as described above, the first column of the embedded image is the same as the first column of the original image. Therefore, when the variable n is 1, which is the initial value, the column of the embedded image that has the highest correlation with the first column is moved to the second column, immediately to the right of the first column.
- Since the column having the highest correlation with the first column is, because of the correlation of the image, basically the second column of the original image, the second column of the original image, which was moved to some other position in the embedding process, is thereby returned (decoded) to its original position.
- Similarly, since the column having the highest correlation with the second column is, because of the correlation of the image, basically the third column of the original image, the third column of the original image, which was moved to some other position in the embedded image, is returned to its original position.
- In the same manner, the embedded image stored in the frame memory 71 is decoded into the original image (broadcast image data).
- On the other hand, when it is determined in step S82 that the variable n is not equal to or less than N−1, that is, when all of the second to N-th columns constituting the embedded image have been returned to their original positions by using the correlation of the image, the process proceeds to step S92, and the decoded image is read from the frame memory 71.
- Further, the swapping unit 72 outputs, to the swap information conversion unit 73, swap information indicating how the second to N-th columns of the embedded image were replaced when the embedded image was decoded into the original image.
- The swap information conversion unit 73 decodes and outputs the improvement information embedded in the embedded image based on the swap information from the swapping unit 72.
- Then, in step S93, it is determined whether or not a frame of the embedded image that has not yet been processed is stored in the frame memory 71. If it is determined that such a frame is stored, the process returns to step S81, and the same processing is repeated with a frame of the embedded image that has not yet been processed as the new frame to be processed.
- On the other hand, if it is determined in step S93 that no unprocessed frame is stored in the frame memory 71, the decoding process ends.
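- The following sketch condenses the greedy restoration loop described above: starting from the untouched first column, it repeatedly finds, among the not-yet-restored columns, the one with the highest correlation to the last restored column and places it next to it. The L1-based correlation is an assumption (see the earlier note); the recovered ordering is what a component like the swap information conversion unit 73 would map back to the improvement information.

```python
# Illustration only: restore column order by correlation and report the ordering.
import numpy as np

def decode_columns(embedded: np.ndarray) -> tuple[np.ndarray, list[int]]:
    n_cols = embedded.shape[1]
    order = [0]                                  # column 0 was never swapped
    remaining = list(range(1, n_cols))
    while remaining:
        last = embedded[:, order[-1]]
        # pick the remaining column most correlated with the last restored one
        best = max(remaining,
                   key=lambda k: 1.0 / (np.abs(last - embedded[:, k]).sum() + 1e-12))
        order.append(best)
        remaining.remove(best)
    decoded = embedded[:, order]                 # columns back in their original order
    return decoded, order                        # 'order' plays the role of swap information

# decoded, swap_info = decode_columns(embedded)  # 'embedded' from the earlier sketch
```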
- As described above, the embedded image, which is an image in which the improvement information is embedded, is decoded into the original image and the improvement information by using the correlation of the image, so that the embedded image can be decoded into the original image and the improvement information without any overhead. Therefore, the decoded image basically suffers no deterioration in image quality due to the embedding of the improvement information.
- In the decoding process described above, the correlation between the latest already-decoded column and the columns that have not yet been decoded is calculated, and based on that correlation, the column that should be placed immediately to the right of the latest decoded column is detected. However, it is also possible to detect the column to be placed to the right of the latest decoded column by calculating the correlations between a plurality of already-decoded columns and the columns that have not yet been decoded.
- In the above case, the improvement information is embedded in the broadcast image data by exchanging the columns. However, the embedding can also be performed by exchanging rows, by exchanging the positions of pixels located at the same position in a predetermined number of frames arranged in the time direction, or by exchanging both columns and rows.
- Furthermore, the embedding can also be performed not by an operation of replacing columns and the like, but by operating on the pixel values based on the improvement information, by rotating horizontal lines based on the improvement information, or the like; a sketch of the line-rotation variant is given below. In either case, the original information can be decoded by using the energy bias (correlation) of the image.
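- The sketch below is an illustration only (not the implementation disclosed here) of the line-rotation embedding just mentioned: each line below the first is circularly shifted by an amount derived from the improvement information, and the decoder recovers each shift by finding the rotation that best correlates with the line above it.

```python
# Illustration only: embed data as per-line circular shifts, decode via line correlation.
import numpy as np

def embed_by_rotation(image: np.ndarray, shifts: list[int]) -> np.ndarray:
    out = image.copy()
    for row, s in enumerate(shifts, start=1):    # the first line stays untouched
        out[row] = np.roll(out[row], s)
    return out

def decode_by_rotation(embedded: np.ndarray) -> tuple[np.ndarray, list[int]]:
    out = embedded.copy()
    shifts = []
    for row in range(1, out.shape[0]):
        # try every rotation and keep the one most similar to the line above
        cands = [np.roll(out[row], -s) for s in range(out.shape[1])]
        s = int(np.argmin([np.abs(c - out[row - 1]).sum() for c in cands]))
        out[row] = cands[s]
        shifts.append(s)
    return out, shifts                           # the shifts carry the embedded information
```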
- Japanese Patent Application No. 11-129919, Japanese Patent Application No. 11-160529, Japanese Patent Application No. 11-160530, Japanese Patent Application No. 11-28419
- Japanese Patent Application No. 10-285310, Japanese Patent Application No. 11-28419
- Japanese Patent Application No. Hei 10-285309
- FIG. 28 shows an example of the configuration of the integration unit 12 of the transmission device 1 (FIG. 2) when spread spectrum is used to embed the improvement information in the broadcast image data.
- The improvement information output from the improvement information generation unit 11 (FIG. 2) is supplied to a spread spectrum signal generation circuit 81. The spread spectrum signal generation circuit 81 generates, at a predetermined timing, a spread code sequence such as, for example, a PN (Pseudo-random Noise) code sequence. Then, the spread spectrum signal generation circuit 81 spectrum-spreads the improvement information by using the spread code sequence to obtain a spread spectrum signal, and supplies the spread spectrum signal to the addition circuit 82.
- The addition circuit 82 is supplied with the spread spectrum signal from the spread spectrum signal generation circuit 81 and also with the broadcast image data. The addition circuit 82 superimposes the spread spectrum signal on the broadcast image data, thereby obtaining an integrated signal in which the improvement information is incorporated into the broadcast image data, and outputs the integrated signal to the transmission unit 13 (FIG. 2).
- Note that the broadcast image data and the spread spectrum signal can also be supplied to the addition circuit 82 after D/A (Digital to Analog) conversion.
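- A minimal sketch of this kind of spread-spectrum superposition follows. The chip length, amplitude, and PN generator are assumptions chosen only for illustration (the circuit above would typically use an m-sequence generator rather than a seeded random generator).

```python
# Illustration only: spread each improvement bit with PN chips and superimpose
# the low-amplitude result on the broadcast image data.
import numpy as np

CHIPS_PER_BIT = 63
AMPLITUDE = 2.0                                  # small relative to the pixel range

def pn_sequence(length: int, seed: int = 1) -> np.ndarray:
    rng = np.random.default_rng(seed)            # stand-in for a PN/m-sequence generator
    return rng.choice([-1.0, 1.0], size=length)

def spread_and_add(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    pn = pn_sequence(CHIPS_PER_BIT * len(bits))
    symbols = np.repeat(np.where(bits > 0, 1.0, -1.0), CHIPS_PER_BIT)
    signal = AMPLITUDE * symbols * pn             # the spread spectrum signal
    flat = image.astype(float).ravel()
    flat[: signal.size] += signal                 # superimpose on the image data
    return flat.reshape(image.shape)
```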
- FIG. 29 shows a configuration example of the extraction unit 22 of the receiving device 3 (FIG. 4) when the integration unit 12 of the transmission device 1 (FIG. 2) is configured as shown in FIG. 28.
- The integrated signal output from the receiving unit 21 (FIG. 4) is supplied to an inverse spread spectrum circuit 91 and a decoding circuit 92.
- The inverse spread spectrum circuit 91 generates a PN code sequence similar to that generated by the spread spectrum signal generation circuit 81 in FIG. 28, and despreads the integrated signal based on the PN code sequence, thereby decoding the improvement information.
- the decoded improvement information is supplied to the selection unit 23 (FIG. 4).
- Further, the inverse spread spectrum circuit 91 supplies the generated PN code sequence to the decoding circuit 92.
- the decoding circuit 92 removes the spread spectrum signal superimposed on the integrated signal based on the PN code string from the inverse spread spectrum circuit 91, thereby decoding the broadcast image data. .
- the decoded broadcast image data is supplied to the quality improving unit 24 (FIG. 4).
- Note that the extraction unit 22 can also be configured without providing the decoding circuit 92. In this case, the broadcast image data on which the spread spectrum signal remains superimposed is supplied to the quality improving unit 24.
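- As a counterpart to the embedding sketch above, the following is an illustrative despreading step (again with assumed parameters, which must match those used at embedding; in practice the chip length and amplitude have to be chosen so that the PN correlation dominates the image content):

```python
# Illustration only: correlate the integrated signal with the same PN sequence to
# recover the improvement bits, then strip the superimposed signal from the image.
import numpy as np

CHIPS_PER_BIT, AMPLITUDE = 63, 2.0               # must match the embedding sketch

def pn_sequence(length: int, seed: int = 1) -> np.ndarray:
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=length)

def despread(integrated: np.ndarray, n_bits: int) -> tuple[np.ndarray, np.ndarray]:
    pn = pn_sequence(CHIPS_PER_BIT * n_bits)
    flat = integrated.astype(float).ravel()
    bits = np.empty(n_bits, dtype=int)
    for i in range(n_bits):
        sl = slice(i * CHIPS_PER_BIT, (i + 1) * CHIPS_PER_BIT)
        chunk = flat[sl]
        # demeaning suppresses the (locally smooth) image content so the
        # correlation with the PN chips is dominated by the embedded signal
        bits[i] = 1 if np.dot(chunk - chunk.mean(), pn[sl]) > 0 else 0
    symbols = np.repeat(np.where(bits > 0, 1.0, -1.0), CHIPS_PER_BIT)
    flat[: pn.size] -= AMPLITUDE * symbols * pn   # remove the superimposed signal
    return flat.reshape(integrated.shape), bits
```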
- In the above, the method of embedding the improvement information so that the original information can be decoded by utilizing the energy bias, and the method of embedding by using spread spectrum, have been described. However, a conventional digital watermarking technique can also be used to embed the improvement information in the broadcast image data.
- the above-described series of processing can be performed by hardware, or can be performed by software.
- a program constituting the software is installed on a general-purpose computer or the like.
- FIG. 30 shows a configuration example of an embodiment of a computer in which a program for executing the above-described series of processes is installed.
- the program can be recorded in advance on a hard disk 205 or a ROM 203 as a recording medium built in the computer.
- Alternatively, the program can be stored (recorded) temporarily or permanently on a removable recording medium 211 such as a floppy disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto-Optical) disk, a DVD (Digital Versatile Disc), a magnetic disk, or a semiconductor memory.
- a removable recording medium 211 can be provided as so-called package software.
- In addition to being installed on the computer from the removable recording medium 211 as described above, the program can be transferred to the computer from a satellite or via a network such as a LAN (Local Area Network) or the Internet, received by the communication unit 208, and installed on the built-in hard disk 205.
- the computer has a CPU (Central Processing Unit) 202 built-in.
- The CPU 202 is connected to an input/output interface 210 via a bus 201. When a command is input by the user via the input/output interface 210, for example by operating a keyboard, the CPU 202 executes the program stored in the ROM (Read Only Memory) 203 in accordance with the command.
- Alternatively, the CPU 202 loads into the RAM, and executes, a program stored on the hard disk 205, a program transferred from a satellite or a network, received by the communication unit 208 and installed on the hard disk 205, or a program read from the removable recording medium 211 mounted on the drive 209 and installed on the hard disk 205.
- The CPU 202 thereby performs the processing according to the above-described flowcharts or the processing performed by the configurations of the above-described block diagrams. Then, as necessary, the CPU 202 outputs the processing result, for example via the input/output interface 210, from the output unit 206 including a liquid crystal display (LCD), a speaker, and the like, transmits it from the communication unit 208, or records it on the hard disk 205.
- Here, the processing steps describing the program for causing the computer to perform various kinds of processing do not necessarily have to be processed in chronological order in the order described in the flowcharts, and also include processing executed in parallel or individually.
- the program may be processed by one computer, or may be processed in a distributed manner by a plurality of computers. Further, the program may be transferred to a remote computer and executed.
- Although the present embodiment has been described for image data, the present invention is also applicable to audio data and the like.
- Further, in the present embodiment, the embedded image is provided via a satellite line. However, the embedded image may also be provided via terrestrial broadcasting, the Internet, a CATV network, or other transmission paths, or via various recording media such as optical disks, magneto-optical disks, magnetic tapes, and semiconductor memories.
- As described above, according to the data processing device and method, the recording medium, and the program of the present invention, improvement information for improving the quality of data is generated, and the improvement information is embedded in the data. Therefore, it is possible to provide data in a state in which the improvement information is embedded, data from which the improvement information has been extracted, data whose quality has been improved by the improvement information, and the like.
- According to the data processing device and method, the recording medium, and the program of the present invention, improvement information is extracted from data in which it is embedded, and the quality of the data is improved by using the improvement information. Therefore, it is possible to receive high-quality data.
- According to the data processing device and method, the recording medium, and the program of the present invention, a plurality of types of improvement information for improving the quality of data are generated, and the data and one or more types of the improvement information are transmitted. Therefore, it is possible to provide data of a plurality of qualities.
- Further, data and one or more types of improvement information are supplied, the quality of the data is improved by using any of the one or more types of improvement information, and a charging process is performed according to the improvement information used to improve the quality of the data. Therefore, it is possible to receive data of a quality corresponding to the payment amount.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Television Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Editing Of Facsimile Originals (AREA)
- Television Signal Processing For Recording (AREA)
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US10/019,124 US7679678B2 (en) | 2000-02-29 | 2001-02-28 | Data processing device and method, and recording medium and program |
| DE60141734T DE60141734D1 (de) | 2000-02-29 | 2001-02-28 | Gerät und verfahren zur verarbeitung von daten, aufzeichnungsmedium und programm |
| EP01908162A EP1176824B1 (en) | 2000-02-29 | 2001-02-28 | Data processing device and method, and recording medium and program |
| KR1020017013684A KR20010113047A (ko) | 2000-02-29 | 2001-02-28 | 데이터 처리장치 및 방법과 기록매체 및 프로그램 |
| US12/074,639 US20080240599A1 (en) | 2000-02-29 | 2008-03-05 | Data processing device and method, recording medium, and program |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2000-53098 | 2000-02-29 | ||
| JP2000053098 | 2000-02-29 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/074,639 Division US20080240599A1 (en) | 2000-02-29 | 2008-03-05 | Data processing device and method, recording medium, and program |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2001065847A1 true WO2001065847A1 (fr) | 2001-09-07 |
Family
ID=18574523
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2001/001525 Ceased WO2001065847A1 (fr) | 2000-02-29 | 2001-02-28 | Dispositif et procede de traitement de donnees, support d'enregistrement et programme correspondant |
Country Status (6)
| Country | Link |
|---|---|
| US (2) | US7679678B2 (ja) |
| EP (2) | EP1176824B1 (ja) |
| KR (1) | KR20010113047A (ja) |
| CN (2) | CN100477779C (ja) |
| DE (1) | DE60141734D1 (ja) |
| WO (1) | WO2001065847A1 (ja) |
Families Citing this family (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4265291B2 (ja) * | 2003-06-06 | 2009-05-20 | ソニー株式会社 | 情報信号の処理装置および処理方法、並びに情報信号の処理方法を実行するためのプログラム |
| TW200608342A (en) * | 2004-08-27 | 2006-03-01 | Benq Corp | Display apparatus abstract |
| KR100738930B1 (ko) * | 2006-01-06 | 2007-07-12 | 에스케이 텔레콤주식회사 | 이동통신망과 위성 디지털 멀티미디어 방송망의 다중전송을 이용한 위성 디지털 멀티미디어 방송의 화질 개선방법 및 시스템, 그를 위한 장치 |
| US8229209B2 (en) * | 2008-12-26 | 2012-07-24 | Five Apes, Inc. | Neural network based pattern recognizer |
| US8290250B2 (en) * | 2008-12-26 | 2012-10-16 | Five Apes, Inc. | Method and apparatus for creating a pattern recognizer |
| US8160354B2 (en) * | 2008-12-26 | 2012-04-17 | Five Apes, Inc. | Multi-stage image pattern recognizer |
| US20120117133A1 (en) * | 2009-05-27 | 2012-05-10 | Canon Kabushiki Kaisha | Method and device for processing a digital signal |
| US8977099B2 (en) | 2010-12-16 | 2015-03-10 | Panasonic Intellectual Property Management Co., Ltd. | Production apparatus and content distribution system |
| JP2013009293A (ja) * | 2011-05-20 | 2013-01-10 | Sony Corp | 画像処理装置、画像処理方法、プログラム、および記録媒体、並びに学習装置 |
| WO2013061337A2 (en) * | 2011-08-29 | 2013-05-02 | Tata Consultancy Services Limited | Method and system for embedding metadata in multiplexed analog videos broadcasted through digital broadcasting medium |
| US9986202B2 (en) * | 2016-03-28 | 2018-05-29 | Microsoft Technology Licensing, Llc | Spectrum pre-shaping in video |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH04292077A (ja) * | 1991-03-20 | 1992-10-16 | Fujitsu Ltd | 画像デ−タ出力制御方法 |
| JPH10243406A (ja) * | 1996-12-26 | 1998-09-11 | Sony Corp | 画像符号化装置および画像符号化方法、画像復号装置および画像復号方法、並びに記録媒体 |
| JPH1198487A (ja) * | 1997-09-24 | 1999-04-09 | Mitsubishi Electric Corp | 画像符号化装置及び画像復号化装置 |
| JPH11187407A (ja) * | 1997-12-19 | 1999-07-09 | Sony Corp | 画像符号化装置および画像符号化方法、提供媒体、画像復号装置および画像復号方法、並びに学習装置および学習方法 |
| JP2000031831A (ja) * | 1998-07-15 | 2000-01-28 | Sony Corp | 符号化装置および符号化方法、復号装置および復号方法、情報処理装置および情報処理方法、並びに提供媒体 |
Family Cites Families (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5243423A (en) * | 1991-12-20 | 1993-09-07 | A. C. Nielsen Company | Spread spectrum digital data transmission over TV video |
| JP3271108B2 (ja) * | 1993-12-03 | 2002-04-02 | ソニー株式会社 | ディジタル画像信号の処理装置および方法 |
| JP3671437B2 (ja) | 1994-08-04 | 2005-07-13 | ソニー株式会社 | ディジタル画像信号の処理装置および処理方法 |
| JPH08256085A (ja) * | 1995-03-17 | 1996-10-01 | Sony Corp | スペクトラム拡散通信システム及びその送信機と受信機 |
| US5621660A (en) * | 1995-04-18 | 1997-04-15 | Sun Microsystems, Inc. | Software-based encoder for a software-implemented end-to-end scalable video delivery system |
| US6275988B1 (en) * | 1995-06-30 | 2001-08-14 | Canon Kabushiki Kaisha | Image transmission apparatus, image transmission system, and communication apparatus |
| US5946044A (en) * | 1995-06-30 | 1999-08-31 | Sony Corporation | Image signal converting method and image signal converting apparatus |
| US5956674A (en) * | 1995-12-01 | 1999-09-21 | Digital Theater Systems, Inc. | Multi-channel predictive subband audio coder using psychoacoustic adaptive bit allocation in frequency, time and over the multiple channels |
| DE69722277T2 (de) * | 1996-01-31 | 2004-04-01 | Canon K.K. | Abrechnungsvorrichtung und ein die Abrechnungsvorrichtung verwendendes Informationsverteilungssystem |
| US6282364B1 (en) | 1996-02-05 | 2001-08-28 | Matsushita Electric Industrial Co., Ltd. | Video signal recording apparatus and video signal regenerating apparatus |
| JPH09231276A (ja) * | 1996-02-27 | 1997-09-05 | Canon Inc | 課金装置、通信装置及び通信システム |
| AU718453B2 (en) * | 1996-07-17 | 2000-04-13 | Sony Corporation | Image coding and decoding using mapping coefficients corresponding to class information of pixel blocks |
| US6160845A (en) * | 1996-12-26 | 2000-12-12 | Sony Corporation | Picture encoding device, picture encoding method, picture decoding device, picture decoding method, and recording medium |
| IL125866A (en) * | 1996-12-26 | 2003-01-12 | Sony Corp | Method and device for compressing an original picture signal to a compressed picture signal and for decoding a compressed picture signal to an original picture signal |
| US6211919B1 (en) * | 1997-03-28 | 2001-04-03 | Tektronix, Inc. | Transparent embedment of data in a video signal |
| US6695259B1 (en) * | 1997-05-21 | 2004-02-24 | Hitachi, Ltd. | Communication system, communication receiving device and communication terminal in the system |
| IL128423A (en) * | 1997-06-16 | 2003-07-31 | Sony Corp | Image processing device and method, and transmission medium, tramsmission method and image format |
| US7154560B1 (en) * | 1997-10-27 | 2006-12-26 | Shih-Fu Chang | Watermarking of digital image data |
| JP4093621B2 (ja) * | 1997-12-25 | 2008-06-04 | ソニー株式会社 | 画像変換装置および画像変換方法、並びに学習装置および学習方法 |
| US6389055B1 (en) * | 1998-03-30 | 2002-05-14 | Lucent Technologies, Inc. | Integrating digital data with perceptible signals |
| US6252631B1 (en) * | 1998-09-14 | 2001-06-26 | Advancedinteractive, Inc. | Apparatus and method for encoding high quality digital data in video |
| DE10139723A1 (de) | 2001-08-13 | 2003-03-13 | Osram Opto Semiconductors Gmbh | Strahlungsemittierender Chip und strahlungsemittierendes Bauelement |
| KR100423455B1 (ko) * | 2001-10-24 | 2004-03-18 | 삼성전자주식회사 | 영상신호처리장치 및 그 방법 |
- 2001
- 2001-02-28 CN CNB018009069A patent/CN100477779C/zh not_active Expired - Fee Related
- 2001-02-28 EP EP01908162A patent/EP1176824B1/en not_active Expired - Lifetime
- 2001-02-28 DE DE60141734T patent/DE60141734D1/de not_active Expired - Lifetime
- 2001-02-28 US US10/019,124 patent/US7679678B2/en not_active Expired - Fee Related
- 2001-02-28 EP EP06076266A patent/EP1755344A3/en not_active Withdrawn
- 2001-02-28 CN CN200810144569XA patent/CN101370131B/zh not_active Expired - Fee Related
- 2001-02-28 WO PCT/JP2001/001525 patent/WO2001065847A1/ja not_active Ceased
- 2001-02-28 KR KR1020017013684A patent/KR20010113047A/ko not_active Ceased
- 2008
- 2008-03-05 US US12/074,639 patent/US20080240599A1/en not_active Abandoned
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH04292077A (ja) * | 1991-03-20 | 1992-10-16 | Fujitsu Ltd | 画像デ−タ出力制御方法 |
| JPH10243406A (ja) * | 1996-12-26 | 1998-09-11 | Sony Corp | 画像符号化装置および画像符号化方法、画像復号装置および画像復号方法、並びに記録媒体 |
| JPH1198487A (ja) * | 1997-09-24 | 1999-04-09 | Mitsubishi Electric Corp | 画像符号化装置及び画像復号化装置 |
| JPH11187407A (ja) * | 1997-12-19 | 1999-07-09 | Sony Corp | 画像符号化装置および画像符号化方法、提供媒体、画像復号装置および画像復号方法、並びに学習装置および学習方法 |
| JP2000031831A (ja) * | 1998-07-15 | 2000-01-28 | Sony Corp | 符号化装置および符号化方法、復号装置および復号方法、情報処理装置および情報処理方法、並びに提供媒体 |
Non-Patent Citations (2)
| Title |
|---|
| KOSHIO MATSUI: "Denshi sukashi no kiso", MORIKITA SHUPPAN K.K., 21 August 1998 (1998-08-21), JAPAN, pages 76 - 89, XP002941477 * |
| See also references of EP1176824A4 * |
Also Published As
| Publication number | Publication date |
|---|---|
| EP1176824A4 (en) | 2005-06-22 |
| US20080240599A1 (en) | 2008-10-02 |
| EP1176824A1 (en) | 2002-01-30 |
| CN101370131B (zh) | 2011-03-02 |
| EP1755344A2 (en) | 2007-02-21 |
| EP1755344A3 (en) | 2010-12-22 |
| CN100477779C (zh) | 2009-04-08 |
| US7679678B2 (en) | 2010-03-16 |
| EP1176824B1 (en) | 2010-04-07 |
| US20030103668A1 (en) | 2003-06-05 |
| DE60141734D1 (de) | 2010-05-20 |
| KR20010113047A (ko) | 2001-12-24 |
| CN101370131A (zh) | 2009-02-18 |
| CN1366769A (zh) | 2002-08-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20080240599A1 (en) | Data processing device and method, recording medium, and program | |
| JP5093557B2 (ja) | 画像処理装置、画像処理方法、及びプログラム | |
| US20100202711A1 (en) | Image processing apparatus, image processing method, and program | |
| WO2001011889A1 (fr) | Dispositifs emetteur, recepteur et d'emission/reception et procedes correspondants, support d'enregistrement et signal | |
| JP3912627B2 (ja) | 画像符号化装置および画像符号化方法、並びに伝送方法 | |
| JP2001086507A (ja) | 画像符号化装置および画像符号化方法、画像復号装置および画像復号方法、媒体、並びに画像処理装置 | |
| KR20020020262A (ko) | 정보 처리 장치, 시스템 및 방법, 및 기록 매체 | |
| WO1999003283A1 (fr) | Dispositif et procede de codage, de decodage et de traitement d'image | |
| JPH08265745A (ja) | 特徴点の特定装置及びその方法 | |
| JP2001344098A (ja) | 処理装置及び方法、課金管理装置及び方法、送受信システム及び方法、送信/受信装置及び方法、媒体 | |
| JPH1175180A (ja) | 画像処理装置および画像処理方法、並びに伝送媒体および伝送方法 | |
| JP4362895B2 (ja) | データ処理装置およびデータ処理方法、並びに記録媒体 | |
| US8218077B2 (en) | Image processing apparatus, image processing method, and program | |
| JP3271101B2 (ja) | ディジタル画像信号処理装置および処理方法 | |
| JPH11187407A (ja) | 画像符号化装置および画像符号化方法、提供媒体、画像復号装置および画像復号方法、並びに学習装置および学習方法 | |
| KR101239268B1 (ko) | 화상 처리 장치, 화상 처리 방법, 및 기록 매체 | |
| JP2001320682A (ja) | データ処理装置およびデータ処理方法、並びに記録媒体およびプログラム | |
| Jaiswal et al. | Adaptive predictor structure based interpolation for reversible data hiding | |
| JP2005159830A (ja) | 信号処理装置および方法、記録媒体、並びにプログラム | |
| JP3912558B2 (ja) | 画像符号化装置および画像符号化方法、並びに記録媒体 | |
| JP3587188B2 (ja) | ディジタル画像信号処理装置および処理方法 | |
| JP4126629B2 (ja) | 映像信号処理装置及び映像信号処理方法 | |
| JP4582416B2 (ja) | 画像符号化装置および画像符号化方法 | |
| JP4120916B2 (ja) | 情報処理装置および方法、記録媒体、並びにプログラム | |
| JP4131049B2 (ja) | 信号処理装置及び信号処理方法 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| WWE | Wipo information: entry into national phase |
Ref document number: 01800906.9 Country of ref document: CN |
|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN KR US |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 10019124 Country of ref document: US Ref document number: 1020017013684 Country of ref document: KR |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2001908162 Country of ref document: EP |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| WWP | Wipo information: published in national office |
Ref document number: 2001908162 Country of ref document: EP |