WO2025225483A1 - Image processing device, image processing method, and program - Google Patents
Image processing device, image processing method, and program

Info
- Publication number: WO2025225483A1
- Application number: PCT/JP2025/014989
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frame image
- processing device
- image processing
- image
- machine learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
Definitions
- the present invention relates to an image processing device, an image processing method, and a program for generating images to be displayed on a display device.
- an application program such as a game program draws frame images in real time, outputs the drawn frame images as video signals, and repeatedly executes a process to display them on the screen of a display device. In this way, images are presented to the user.
- the frame images drawn by the application program may not match the user's viewing environment or preferences in terms of resolution, color tone, and so on. For this reason, technologies are known that use hardware logic or other methods to convert the resolution and other properties of frame images before outputting them. However, because such technologies perform fixed conversions, the conversion is not necessarily suited to the content of the image.
- the present invention was made in consideration of the above situation, and one of its objectives is to provide an image processing device, image processing method, and program that can present to the user the results of desired conversions made according to the content of a frame image.
- An image processing device includes a frame image acquisition unit that acquires a frame image each time the frame image constituting a video to be presented to a user is drawn, and a frame image conversion unit that converts the acquired frame image using a pre-prepared machine learning model to obtain a converted frame image, and outputs the converted frame image as the frame image to be presented to the user.
- An image processing method includes the steps of acquiring a frame image each time the frame image constituting a video to be presented to a user is drawn, and converting the acquired frame image using a pre-prepared machine learning model to obtain a converted frame image, and outputting the converted frame image as the frame image to be presented to the user.
- a program causes a computer to execute the following steps: acquiring a frame image each time the frame image constituting a video to be presented to a user is drawn; and converting the acquired frame image using a pre-prepared machine learning model to obtain a converted frame image, and outputting the converted frame image as the frame image to be presented to the user.
- This program may be provided by being stored on a computer-readable, non-transitory information storage medium.
- FIG. 1 is a block diagram showing a configuration of an image processing device according to an embodiment of the present invention.
- FIG. 2 is a functional block diagram showing functions of the image processing device according to the embodiment of the present invention.
- FIG. 3 is a timing chart showing an example of a flow of processing executed by the image processing device according to the embodiment of the present invention.
- FIG. 1 is a block diagram showing the configuration of an image processing device 10 according to one embodiment of the present invention.
- the image processing device 10 is, for example, a home game console or a personal computer, and as shown in the diagram, is configured to include a control unit 11, a storage unit 12, and an interface unit 13.
- the image processing device 10 is also connected to a display device 14 and an operation device 15.
- the control unit 11 includes at least one processor such as a CPU, and executes programs stored in the storage unit 12 to perform various information processing. Specific examples of the processing performed by the control unit 11 in this embodiment will be described later.
- the storage unit 12 includes at least one memory device such as a RAM, and stores the programs executed by the control unit 11 and the data processed by those programs.
- the interface unit 13 is an interface for data communication between the display device 14 and the operation device 15.
- the image processing device 10 is connected to the display device 14 and the operation device 15 via the interface unit 13, either wired or wirelessly.
- the interface unit 13 includes a multimedia interface for transmitting video signals supplied by the image processing device 10 to the display device 14. It also includes a data communication interface for receiving signals indicating operations performed by the user on the operation device 15.
- the interface unit 13 may be equipped with a communication interface for sending and receiving data to and from other communication devices via a communication network such as the Internet.
- the display device 14 displays on a screen an image corresponding to the video signal supplied from the image processing device 10.
- the display device 14 may be a stationary display device such as a home television set, or a portable display device.
- the display device 14 may also be a head-mounted display device capable of presenting a three-dimensional image by presenting separate images to each of the user's left and right eyes.
- the operation device 15 is, for example, a controller for a home game console, and accepts operation input from the user.
- the operation device 15 is connected to the image processing device 10 via a wired or wireless connection, and transmits operation signals indicating the content of the operation input accepted from the user to the image processing device 10.
- the operation device 15 may be of various shapes, such as a device that the user holds in their hand or a device that the user wears on their hand.
- the image processing device 10 is functionally configured to include an application execution unit 21, a frame image acquisition unit 22, and a frame image conversion unit 23. These functions are realized by the control unit 11 operating in accordance with one or more programs stored in the storage unit 12. These programs may be provided to the image processing device 10 via a communications network such as the Internet, or may be provided by being stored on a computer-readable information storage medium such as an optical disc.
- the application execution unit 21 executes an application program and repeatedly performs the process of drawing frame images that show the results of that processing. These frame images are images that make up the video to be presented to the user, and by displaying such frame images on the display device 14 at predetermined intervals, the user can view the video that is the result of processing by the application execution unit 21.
- the drawing of frame images may be performed by a processor such as a GPU based on drawing commands from the application program.
- the drawn frame images are written to a predetermined frame buffer memory allocated in the storage unit 12.
- the frame image acquisition unit 22 reads and acquires the drawn frame image from the frame buffer memory each time the application execution unit 21 draws a frame image. In other words, in this embodiment, the frame image drawn by the application execution unit 21 is not sent directly to the display device 14.
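The acquisition path described above can be sketched as follows. This is a minimal illustrative stand-in, not the actual implementation: the class and method names are assumptions, and lists of numbers stand in for pixel data. The point is only that the drawn frame is written to a buffer and intercepted by the acquisition unit instead of going directly to the display.

```python
class FrameBuffer:
    """Minimal stand-in for the frame buffer allocated in the storage unit 12."""
    def __init__(self):
        self._frame = None

    def write(self, frame):
        # The application execution unit writes each drawn frame here.
        self._frame = frame

    def read(self):
        return self._frame


class FrameImageAcquisitionUnit:
    """Reads each newly drawn frame from the buffer (illustrative only)."""
    def __init__(self, buffer):
        self.buffer = buffer

    def acquire(self):
        # The frame is acquired for conversion rather than sent to the display.
        return self.buffer.read()


buffer = FrameBuffer()
buffer.write([[0, 1], [2, 3]])  # application "draws" a 2x2 frame
acquired = FrameImageAcquisitionUnit(buffer).acquire()
```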
- the frame image conversion unit 23 performs a given conversion process on the acquired frame image.
- This conversion process is a process for adjusting the content of the frame image according to the user's viewing environment, etc.
- the frame image obtained by the conversion process by the frame image conversion unit 23 will be referred to as the converted frame image
- the frame image acquired by the frame image acquisition unit 22 before the conversion process is performed will be referred to as the pre-conversion frame image.
- this converted frame image is output as the frame image to be presented to the user.
- the converted frame image output by the frame image conversion unit 23 is sent to the display device 14 via the interface unit 13 and displayed on the screen of the display device 14.
- the frame image conversion unit 23 converts the frame images using a pre-prepared machine learning model. This allows conversion to be performed according to the content of each frame image, compared to conversion using hardware logic, fixed filters, etc.
- multiple machine learning models are prepared in advance depending on the desired conversion content, etc.
- the frame image conversion unit 23 performs the frame image conversion process using a machine learning model selected from these multiple machine learning models based on given conditions such as user selection.
- the types of machine learning models prepared, the selection criteria for selecting the machine learning model to use, and specific examples of how to generate a machine learning model will be described later.
- the application execution unit 21 begins drawing a new pre-conversion frame image every time a predetermined time interval Tf has elapsed.
- This time interval Tf is determined according to the frame rate; for example, if the frame rate is 60 fps, the time interval Tf is 1/60 seconds.
- the drawing process of the pre-conversion frame image is expected to be completed in a time shorter than the time interval Tf.
- the drawing process of the pre-conversion frame image (N) begins at time t0
- the drawing process of the pre-conversion frame image (N+1) begins at time t1, after the time interval Tf has elapsed since time t0.
- the frame image conversion unit 23 performs conversion processing on the pre-conversion frame image and outputs a post-conversion frame image. This conversion processing, together with the drawing process for the pre-conversion frame image, is expected to be completed within the time interval Tf.
- the converted frame image output by the frame image conversion unit 23 is transmitted to the display device 14 in the next cycle.
- the converted frame image output in the previous cycle is transmitted to the display device 14.
- the converted frame image (N-1) whose conversion was completed by time t0 is transmitted to the display device 14 between time t0 and time t1
- the converted frame image (N) whose conversion was completed by time t1 is transmitted to the display device 14 between time t1 and time t2
- the converted frame image (N+1) whose conversion was completed by time t2 is transmitted after time t2.
- the image processing device 10 can perform a given conversion process on all pre-conversion frame images in real time and display the resulting converted frame images on the display device 14.
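The pipelining described above can be illustrated with a simple schedule: during each interval Tf, the current frame is drawn and converted while the frame converted in the previous cycle is transmitted to the display. The function and event labels are assumptions for illustration only.

```python
def schedule(num_frames, tf=1 / 60):
    """Build the (time, action) event list for the pipelined frame cycle."""
    events = []
    for k in range(num_frames):
        t = round(k * tf, 4)
        # Frame k is drawn and converted within this cycle ...
        events.append((t, f"draw+convert frame {k}"))
        # ... while the previous cycle's converted frame is transmitted.
        if k > 0:
            events.append((t, f"transmit frame {k - 1}"))
    return events


ev = schedule(3)  # three cycles at 60 fps (tf = 1/60 s)
```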
- the frame image conversion unit 23 converts the pre-conversion frame image into a post-conversion frame image with a higher, predetermined resolution.
- the frame image conversion unit 23 can not only increase the number of pixels in the post-conversion frame image, but also determine the values of the added pixels by taking into account the content of the surrounding pixels in the pre-conversion frame image.
- the frame image conversion unit 23 performs the frame image conversion process using a machine learning model from the multiple machine learning models prepared in advance that converts the frame image to one with a resolution that can be displayed by the display device 14. More specifically, for example, the frame image conversion unit 23 selects, as the machine learning model to use, a machine learning model that converts the image to a resolution that the user has specified in advance on a settings screen or the like. Alternatively, the frame image conversion unit 23 may obtain information regarding the display performance of the display device 14, and based on the obtained information, select, for example, the maximum resolution that the display device 14 can display as the resolution of the converted frame image. This makes it possible to display images at a variety of resolutions.
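The selection logic just described might look like the following sketch. The model registry, its keys, and the model names are all hypothetical; the source only specifies that the model is chosen either from a user-specified resolution or from the display's capabilities.

```python
# Hypothetical registry of pre-prepared super-resolution models, keyed by
# output resolution (width, height).
MODELS = {(1920, 1080): "sr_model_1080p", (3840, 2160): "sr_model_4k"}


def select_sr_model(user_choice=None, display_resolutions=None):
    """Pick a model by user preference, falling back to the display's maximum."""
    if user_choice is not None:
        # Resolution specified in advance on a settings screen or the like.
        return MODELS[user_choice]
    # Otherwise use the highest resolution the display device can show.
    best = max(display_resolutions, key=lambda r: r[0] * r[1])
    return MODELS[best]


m = select_sr_model(display_resolutions=[(1920, 1080), (3840, 2160)])
```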
- the color gamut that a display device can display also differs depending on the type of display device. Therefore, by converting a pre-conversion frame image drawn by an application program that does not support display in a wide color gamut into a post-conversion frame image with a wider color gamut, it is possible to present the user with high-quality images that can be displayed by the display device 14.
- the frame image conversion unit 23 converts the colors of the pixels contained in the frame image so that the color gamut used in the converted frame image is wider than the color gamut used in the pre-conversion frame image.
- the frame image conversion unit 23 converts a pre-conversion frame image with a dynamic range of SDR (Standard Dynamic Range) into a post-conversion frame image with a dynamic range of HDR (High Dynamic Range).
- the color of each pixel in the converted frame image is determined taking into account the colors of surrounding pixels and the like. This makes it possible to convert areas that are crushed because they cannot be fully expressed in SDR into colors with gradations that look natural to the human eye. In this example as well, multiple machine learning models may be prepared in advance to suit various color gamuts. The frame image conversion unit 23 may then select the machine learning model to use based on user specifications, information related to the display performance of the display device 14, and so on.
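To make the idea of "considering surrounding pixels" concrete, the following toy sketch maps 8-bit SDR values into a wider 10-bit range and smooths crushed (clipped) pixels using their neighbors. The actual device uses a machine learning model for this; this hand-written rule is only an assumed illustration of the principle, for a single row of pixels.

```python
def expand_row(sdr_row):
    """Toy SDR-to-wider-range expansion for one row of 8-bit pixel values."""
    hdr = []
    for i, v in enumerate(sdr_row):
        if v in (0, 255):  # crushed shadow or clipped highlight
            # Estimate a plausible value from the neighboring pixels.
            left = sdr_row[i - 1] if i > 0 else v
            right = sdr_row[i + 1] if i < len(sdr_row) - 1 else v
            v = (left + right + v) / 3
        # Map the 8-bit value into a 10-bit output range.
        hdr.append(round(v * 1023 / 255))
    return hdr


row = expand_row([100, 255, 200])  # middle pixel is clipped in SDR
```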
- the frame image conversion unit 23 may perform a conversion that changes the brightness of each pixel contained in the pre-conversion frame image according to the brightness of the surrounding pixels, while maintaining the possible range of brightness before and after conversion. With this type of conversion, by increasing or decreasing the brightness of some pixels, it is possible to make the brightness change overall within the converted frame image in a gradation that appears natural to the human eye.
- the frame image conversion unit 23 uses a machine learning model that has been prepared in advance according to the type of color vision characteristics of the viewer to convert the colors of the pixels contained in the pre-conversion frame image to a color tone that corresponds to the color vision characteristics of the viewer. This makes it possible to perform color conversion that takes into account the content of the frame image.
- the user of the image processing device 10 selects one machine learning model in advance, according to their own color vision characteristics, on a settings screen or the like.
- the frame image conversion unit 23 converts the frame image using the machine learning model selected by the user. This allows the user to always view images with color tones that are easy to see according to their own color vision characteristics, regardless of the type of application program that draws the frame image or the content of the image to be displayed.
- the frame image conversion unit 23 may also perform a conversion that combines multiple examples described above. As a specific example, the frame image conversion unit 23 may first perform a conversion that changes the color tone of the pre-conversion frame image, and then perform a conversion that further improves the resolution of the frame image whose color tone has been changed, thereby generating a post-conversion frame image.
- machine learning models for performing multiple types of conversion at once may be prepared in advance.
- machine learning models generated by performing machine learning individually for each combination of conversions expected to be required are prepared in advance.
- For example, if high-resolution conversion corresponding to two output resolutions and color tone conversion corresponding to two types of color vision characteristics are required, then, counting "no conversion" as a third option for each, a total of nine conversion combinations (three options x three options) are possible.
- Excluding the combination in which no conversion is performed at all, machine learning models are prepared for each of the remaining eight conversion combinations.
- the user selects one of these eight machine learning models to use based on their own color vision characteristics and the supported resolution of the display device 14. By converting the frame image using this selected machine learning model, it is possible to obtain a converted frame image with the color tone and resolution desired by the user in a single conversion.
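The combination count above can be checked directly: three resolution options times three color options gives nine combinations, and removing the do-nothing case leaves the eight models to prepare. The option labels here are invented for illustration.

```python
from itertools import product

# Three options each: "none" means that conversion is not applied.
resolutions = ["none", "1080p", "4k"]
color_modes = ["none", "type_p", "type_d"]  # hypothetical color-vision labels

# All 3 x 3 = 9 combinations, minus the one where nothing is converted.
combos = [c for c in product(resolutions, color_modes) if c != ("none", "none")]
```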
- the frame image conversion unit 23 may select a machine learning model to use depending on the type of application program that draws the pre-conversion frame image, and convert the frame image using the selected machine learning model. For example, if the application program is a game program, different machine learning models may be prepared in advance for each game genre or title. By converting the frame image using a machine learning model selected depending on the type of application program in this way, it is possible to perform conversion that corresponds to the tendencies of the frame image being drawn, and to make the color tone of the converted frame image suitable for the content of the application program.
- the multiple machine learning models corresponding to the multiple types of conversion described above may be stored in advance in the storage unit 12 of the image processing device 10, or may be provided to the image processing device 10 as needed via a communications network, etc.
- a predetermined server device connected to the image processing device 10 via the Internet may store in advance multiple machine learning models corresponding to various types of conversion, and may transmit the model data necessary to execute the machine learning models in response to a request from the image processing device 10.
- the image processing device 10 requests model data of the machine learning model corresponding to the selected conversion content from the server device.
- the machine learning model provided by the server device in response to this request is then stored in the storage unit 12, and the frame image is converted using that model. This makes it easy to update the machine learning model over time and improve the quality of the conversion process.
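The model-delivery flow might be sketched as below. The server is faked with a dictionary, and all names are assumptions; the source only says the device requests model data for the selected conversion and stores what the server returns.

```python
# Stand-in for the server device's catalog of model data.
SERVER_MODELS = {"sr_4k": b"model-bytes-4k", "cvd_type_p": b"model-bytes-p"}


class ModelStore:
    """Fetches model data on demand and caches it locally (illustrative)."""
    def __init__(self, server):
        self.server = server
        self.cache = {}  # stands in for the storage unit 12

    def get(self, conversion_id):
        if conversion_id not in self.cache:
            # In the described system this lookup is a network request
            # to the server device over the Internet.
            self.cache[conversion_id] = self.server[conversion_id]
        return self.cache[conversion_id]


store = ModelStore(SERVER_MODELS)
data = store.get("sr_4k")  # fetched from the "server", then cached
```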
- the machine learning model used by the frame image conversion unit 23 may be a model obtained by machine learning using two frame images obtained by drawing the same content under different drawing conditions, one as input data and the other as training data.
- For example, when the frame image conversion unit 23 converts a frame image to a higher resolution, an application program that supports both low and high resolutions is first made to execute the same process multiple times while changing the output resolution. This results in high-resolution and low-resolution frame images that represent the same content being drawn.
- machine learning is performed using the low-resolution frame images as input data and the high-resolution frame images as training data to generate a machine learning model that can convert low-resolution frame images into high-resolution frame images. In this way, there is no need to prepare manually adjusted images, and a machine learning model that can increase the resolution of frame images can be efficiently generated.
- Similarly, when the frame image conversion unit 23 converts the color tone of a frame image, an application program equipped with an output mode that accommodates color vision diversity is made to execute the same process in different output modes. This makes it possible to draw frame images with the same content in two versions: frame images with unadjusted color tones and frame images with color tones adjusted for people with specific color vision characteristics. By performing machine learning using the frame images with unadjusted color tones as input data and the adjusted frame images as training data, it is possible to efficiently generate a machine learning model that converts colors into ones that are easier to view for people with specific color vision characteristics.
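The pairing of the two render passes can be sketched as follows. The `render` function is a placeholder for running the application program under two settings; the resulting (input, target) pairs are what the machine learning would train on. All names are assumptions.

```python
def render(scene_id, high_quality):
    """Placeholder for drawing the same content under different conditions."""
    return f"{scene_id}-{'high' if high_quality else 'low'}"


def build_pairs(scene_ids):
    """Pair each low-quality render (input) with its high-quality render (target)."""
    return [(render(s, False), render(s, True)) for s in scene_ids]


pairs = build_pairs(["scene0", "scene1"])
```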
- the machine learning model used by the frame image conversion unit 23 may be a model obtained by machine learning using, as input data, images obtained by performing a given conversion on images used as training data.
- For example, with a frame image drawn at a high resolution as training data, machine learning can be performed using the image obtained by converting that frame image to a lower resolution as input data. This makes it possible to obtain a machine learning model that restores the original image by performing the inverse conversion, that is, increasing the resolution.
- Similarly, machine learning can be performed using a frame image drawn with a wide brightness range, such as HDR, as training data, and the image obtained by converting this frame image to a narrower brightness range, such as SDR, as input data. This makes it possible to generate a machine learning model that can convert images to a wider dynamic range.
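The degradation step in this self-supervised scheme is simple to illustrate: here the target image is reduced by naive 2x downsampling (dropping every other row and column) to produce the training input, so a model would learn the inverse, resolution-raising conversion. Real pipelines would use a proper resampling filter; this is only a sketch.

```python
def downsample_2x(image):
    """Keep every other row and column of a 2D list of pixel values."""
    return [row[::2] for row in image[::2]]


# The high-resolution frame serves as the training target ...
target = [[1, 2, 3, 4],
          [5, 6, 7, 8],
          [9, 10, 11, 12],
          [13, 14, 15, 16]]

# ... and its degraded version serves as the training input.
inp = downsample_2x(target)
```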
- the frame image conversion unit 23 may also convert frame images using a machine learning model that accepts multiple frame images as input.
- By using a machine learning model that accepts multiple frame images drawn consecutively in chronological order as input, it is possible to convert the most recent pre-conversion frame image into a post-conversion frame image while taking into account changes over time in the content of the frame images.
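Feeding such a model can be sketched with a sliding window over the most recent frames. The window size of three is an assumption for illustration; the source does not specify how many frames the model takes.

```python
from collections import deque


class FrameWindow:
    """Groups the k most recent frames as the model's input (illustrative)."""
    def __init__(self, k=3):
        self.frames = deque(maxlen=k)  # old frames fall off automatically

    def push(self, frame):
        self.frames.append(frame)
        # The model input: frames ordered oldest to newest, ending with
        # the most recent pre-conversion frame to be converted.
        return list(self.frames)


w = FrameWindow(k=3)
for f in ["f0", "f1", "f2", "f3"]:
    window = w.push(f)
```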
- the image processing device 10 can effectively convert the contents of a frame image and present the converted frame image to the user in real time.
- the image processing device 10 is an information processing device that is located relatively close to the user and is directly connected to the display device 14 and operation device 15.
- For example, the frame images to be displayed on the screen of the display device 14 may be rendered not by a client device directly connected to the display device 14 and the operation device 15 used by the user, but by a server device connected to that client device via a communications network.
- the server device connected to the client device used by the user via a communications network may function as the image processing device 10 of the present invention.
- In this case, the server device functioning as the image processing device 10 executes the application program, converts the drawn pre-conversion frame images to generate converted frame images, and transmits the generated converted frame images to the client device.
- In the above description, the image processing device 10 itself executes the application program that draws the pre-conversion frame images, but the present invention is not limited to this.
- the image processing device 10 may also acquire pre-conversion frame images drawn by another information processing device, convert them into post-conversion frame images, and send them to the display device 14 for viewing by the user.
- For example, a server device may draw pre-conversion frame images and send them to a client device functioning as the image processing device 10, and the client device may convert the received pre-conversion frame images into post-conversion frame images to be presented to the user.
- a processor includes transistors and other circuits and is considered a circuitry or processing circuitry.
- a processor may also be a programmed processor that executes programs stored in memory.
- a circuit, unit, or means is hardware that is programmed to realize a described function or that performs that function.
- the hardware may be any hardware disclosed in this specification or any hardware that is programmed to realize or known to perform the described function.
- the circuit, means, or unit is a combination of hardware and software used to operate the hardware and/or processor.
- the present disclosure may include the following aspects.
- [Item 1] An image processing device comprising a circuit configured to: acquire a frame image each time the frame image constituting a video to be presented to a user is drawn; and convert the acquired frame image using a pre-prepared machine learning model to obtain a converted frame image, and output the converted frame image as the frame image to be presented to the user.
- [Item 2] The image processing device according to Item 1, wherein the circuit performs a conversion to increase the resolution of the acquired frame image.
- [Item 3] The image processing device according to Item 1, wherein the circuit performs a conversion to change the brightness of pixels included in the acquired frame image in accordance with the brightness of surrounding pixels.
- [Item 4] The image processing device according to Item 1, wherein the circuit converts the colors of pixels included in the acquired frame image so that a color gamut used in the converted frame image is wider than a color gamut used in the acquired frame image.
- [Item 5] The image processing device according to Item 1, wherein the circuit converts the color of pixels included in the acquired frame image into a color tone according to the color vision characteristics of the viewer.
- [Item 6] The image processing device according to Item 1, wherein the circuit performs the conversion using a machine learning model selected based on a given condition from a plurality of machine learning models prepared in advance.
- [Item 7] The image processing device according to Item 6, wherein the circuit performs the conversion using a machine learning model selected according to the type of application program that has rendered the acquired frame image.
- [Item 8] The image processing device according to Item 1, wherein the machine learning model is a model obtained by machine learning using two frame images obtained by drawing the same content under different drawing conditions, one of which is used as input data and the other as training data.
- [Item 9] An image processing method comprising: acquiring a frame image each time the frame image constituting a video to be presented to a user is drawn; and converting the acquired frame image using a pre-prepared machine learning model to obtain a converted frame image, and outputting the converted frame image as the frame image to be presented to the user.
- [Item 10] A computer-readable, non-transitory information storage medium storing a program for causing a computer to execute: acquiring a frame image each time a frame image constituting a video to be presented to a user is drawn; and converting the acquired frame image using a pre-prepared machine learning model to obtain a converted frame image, and outputting the converted frame image as the frame image to be presented to the user.
Description
本発明は、表示装置に表示させる画像を生成する画像処理装置、画像処理方法、及びプログラムに関する。 The present invention relates to an image processing device, an image processing method, and a program for generating images to be displayed on a display device.
例えばビデオゲームなどにおいては、ゲームプログラム等のアプリケーションプログラムがリアルタイムでフレーム画像を描画し、描画したフレーム画像を映像信号として出力して表示装置の画面に表示させる処理を繰り返し実行する。これにより、ユーザーに映像が提示される。 For example, in video games, an application program such as a game program draws frame images in real time, outputs the drawn frame images as video signals, and repeatedly executes a process to display them on the screen of a display device. In this way, images are presented to the user.
上述した従来例の技術において、アプリケーションプログラムが描画するフレーム画像は、解像度や色調などの点でユーザーの視聴環境や好みと合致しない場合がある。そのため、従来からハードウェアロジック等によってフレーム画像の解像度等を変換して出力する技術が知られている。しかしながら、このような技術では固定的な変換が行われるため、必ずしもその画像の内容に適した変換が行えるとは限らない。 In the conventional technologies described above, the frame images drawn by the application program may not match the user's viewing environment or preferences in terms of resolution, color tone, etc. For this reason, technologies have been known that use hardware logic or other methods to convert the resolution, etc., of frame images before outputting them. However, because such technologies perform fixed conversions, they are not necessarily suitable for the content of the image.
本発明は上記実情を考慮してなされたものであって、その目的の一つは、フレーム画像の内容に応じて所望の変換を行った結果をユーザーに提示することのできる画像処理装置、画像処理方法、及びプログラムを提供することにある。 The present invention was made in consideration of the above situation, and one of its objectives is to provide an image processing device, image processing method, and program that can present to the user the results of desired conversions made according to the content of a frame image.
本発明の一態様に係る画像処理装置は、ユーザーに提示すべき映像を構成するフレーム画像が描画されるごとに、当該フレーム画像を取得するフレーム画像取得部と、前記取得したフレーム画像を、予め用意された機械学習モデルを用いて変換して得られる変換後フレーム画像を、前記ユーザーに提示するフレーム画像として出力するフレーム画像変換部と、を含む画像処理装置である。 An image processing device according to one aspect of the present invention includes a frame image acquisition unit that acquires a frame image each time the frame image constituting a video to be presented to a user is drawn, and a frame image conversion unit that converts the acquired frame image using a pre-prepared machine learning model to obtain a converted frame image, and outputs the converted frame image as the frame image to be presented to the user.
本発明の一態様に係る画像処理方法は、ユーザーに提示すべき映像を構成するフレーム画像が描画されるごとに、当該フレーム画像を取得するステップと、前記取得したフレーム画像を、予め用意された機械学習モデルを用いて変換して得られる変換後フレーム画像を、前記ユーザーに提示するフレーム画像として出力するステップと、を含む画像処理方法である。 An image processing method according to one aspect of the present invention includes the steps of acquiring a frame image each time the frame image constituting a video to be presented to a user is drawn, and converting the acquired frame image using a pre-prepared machine learning model to obtain a converted frame image, and outputting the converted frame image as the frame image to be presented to the user.
本発明の一態様に係るプログラムは、ユーザーに提示すべき映像を構成するフレーム画像が描画されるごとに、当該フレーム画像を取得するステップと、前記取得したフレーム画像を、予め用意された機械学習モデルを用いて変換して得られる変換後フレーム画像を、前記ユーザーに提示するフレーム画像として出力するステップと、をコンピュータに実行させるためのプログラムである。このプログラムは、コンピュータ読み取り可能で非一時的な情報記憶媒体に格納されて提供されてよい。 A program according to one aspect of the present invention causes a computer to execute the following steps: acquiring a frame image each time the frame image constituting a video to be presented to a user is drawn; and converting the acquired frame image using a pre-prepared machine learning model to obtain a converted frame image, and outputting the converted frame image as the frame image to be presented to the user. This program may be provided by being stored on a computer-readable, non-transitory information storage medium.
Embodiments of the present invention will be described in detail below with reference to the drawings.

FIG. 1 is a block diagram showing the configuration of an image processing device 10 according to one embodiment of the present invention. The image processing device 10 is, for example, a home game console or a personal computer and, as shown in the figure, includes a control unit 11, a storage unit 12, and an interface unit 13. The image processing device 10 is also connected to a display device 14 and an operation device 15.

The control unit 11 includes at least one processor, such as a CPU, and executes programs stored in the storage unit 12 to perform various kinds of information processing. Specific examples of the processing performed by the control unit 11 in this embodiment will be described later. The storage unit 12 includes at least one memory device, such as a RAM, and stores the programs executed by the control unit 11 and the data processed by those programs.

The interface unit 13 is an interface for data communication with the display device 14 and the operation device 15. The image processing device 10 is connected, via the interface unit 13, to each of the display device 14 and the operation device 15, either by wire or wirelessly. Specifically, the interface unit 13 includes a multimedia interface for transmitting the video signal supplied by the image processing device 10 to the display device 14, and a data communication interface for receiving signals indicating operations the user has performed on the operation device 15. The interface unit 13 may further include a communication interface for exchanging data with other communication devices via a communication network such as the Internet.

The display device 14 displays on its screen video corresponding to the video signal supplied from the image processing device 10. The display device 14 may be a stationary display device such as a home television set, or a portable display device. It may also be a head-mounted display device capable of presenting stereoscopic video by showing a separate image to each of the user's left and right eyes.

The operation device 15 is, for example, a controller for a home game console, and accepts operation input from the user. The operation device 15 is connected to the image processing device 10 by wire or wirelessly, and transmits to the image processing device 10 operation signals indicating the content of the operation input accepted from the user. The operation device 15 may take various forms, such as a device the user holds in the hand or a device worn on the hand.
The functions realized by the image processing device 10 will now be described with reference to the functional block diagram of FIG. 2. As shown in FIG. 2, the image processing device 10 functionally includes an application execution unit 21, a frame image acquisition unit 22, and a frame image conversion unit 23. These functions are realized by the control unit 11 operating in accordance with one or more programs stored in the storage unit 12. These programs may be provided to the image processing device 10 via a communication network such as the Internet, or stored on a computer-readable information storage medium such as an optical disc.

The application execution unit 21 executes an application program and repeatedly draws frame images showing the results of its processing. These frame images constitute the video to be presented to the user; by displaying such frame images on the display device 14 at a predetermined interval, the user can view the video resulting from the processing of the application execution unit 21. Note that the drawing of frame images may be performed by a processor such as a GPU on the basis of drawing commands issued by the application program. Each drawn frame image is written to a predetermined frame buffer memory allocated in the storage unit 12.

Each time the application execution unit 21 draws a frame image, the frame image acquisition unit 22 reads and acquires the drawn frame image from the frame buffer memory. In other words, in this embodiment, a frame image drawn by the application execution unit 21 is not sent to the display device 14 as-is.

Each time the frame image acquisition unit 22 acquires a newly drawn frame image, the frame image conversion unit 23 performs a given conversion process on it. This conversion process adjusts the content of the frame image to the user's viewing environment and the like. For convenience, the frame image produced by the conversion process of the frame image conversion unit 23 is hereinafter referred to as the post-conversion frame image, and the frame image acquired by the frame image acquisition unit 22 before the conversion process is referred to as the pre-conversion frame image. In this embodiment, the post-conversion frame image is output as the frame image to be presented to the user. The post-conversion frame image output by the frame image conversion unit 23 is transmitted to the display device 14 via the interface unit 13 and displayed on its screen.
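The per-frame flow through the three units above can be sketched as follows. The callables and frame representations here are hypothetical stand-ins for illustration, not APIs defined in the embodiment.

```python
def present_loop(drawn_frames, convert, display):
    """Each time a frame is drawn, acquire it, convert it, and output
    the post-conversion frame as the one presented to the user.

    drawn_frames -- frames as they appear in the frame buffer
                    (stand-in for frame image acquisition unit 22)
    convert      -- the model-based conversion
                    (stand-in for frame image conversion unit 23)
    display      -- sink for frames sent to display device 14
    """
    for frame in drawn_frames:
        converted = convert(frame)   # conversion applied per frame
        display.append(converted)    # only the converted frame is shown
    return display
```

The point the sketch preserves is that the drawn frame is never output directly; every frame passes through the conversion step first.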
Here, the frame image conversion unit 23 converts frame images using a machine learning model prepared in advance. Compared with performing the conversion with hardware logic, fixed filters, or the like, this allows the conversion to be adapted to the content of each individual frame image.

In this embodiment, multiple machine learning models are prepared in advance according to the desired conversion content and other factors. The frame image conversion unit 23 performs the conversion process using a machine learning model selected from among these models on the basis of given conditions, such as a user selection. The kinds of models prepared, the criteria for selecting which model to use, and specific examples of how the models are generated are described later.

An example of the flow of processing executed by the image processing device 10 according to this embodiment will now be described with reference to the timing diagram of FIG. 3. In the figure, (N-1), (N), and (N+1) denote numbers assigned, for convenience, to the frame images drawn in sequence.
As shown in the figure, the application execution unit 21 begins drawing a new pre-conversion frame image each time a predetermined time interval Tf elapses. The interval Tf is determined by the frame rate; for example, at a frame rate of 60 fps, Tf is 1/60 second. In this embodiment, the drawing of a pre-conversion frame image is assumed to complete in a time shorter than Tf. In the figure, drawing of pre-conversion frame image (N) begins at time t0, and drawing of pre-conversion frame image (N+1) begins at time t1, after the interval Tf has elapsed from t0.

Once drawing of a pre-conversion frame image is complete, the frame image conversion unit 23 performs the conversion process on it and outputs a post-conversion frame image. The conversion process, together with the drawing of the pre-conversion frame image, is assumed to complete within the interval Tf.

The post-conversion frame image output by the frame image conversion unit 23 is transmitted to the display device 14 in the next cycle. That is, at the moment a new cycle begins and drawing of the next frame image to be displayed starts, the post-conversion frame image output in the previous cycle is transmitted to the display device 14. In the figure, post-conversion frame image (N-1), whose conversion was complete by time t0, is transmitted between t0 and t1; post-conversion frame image (N), complete by t1, is transmitted between t1 and t2; and post-conversion frame image (N+1), complete by t2, is transmitted after t2. By repeating this cycle, the image processing device 10 can apply the given conversion process to every pre-conversion frame image in real time and have the resulting post-conversion frame images displayed on the display device 14.
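The cadence described above — frame (N) drawn and converted during one interval Tf, and transmitted to the display during the following interval — can be illustrated with a small scheduling sketch. The event tuples are purely illustrative.

```python
def pipeline_schedule(num_frames):
    """Simulate the two-stage cycle: frame N is drawn and converted
    during cycle N, and the resulting post-conversion frame is
    transmitted to the display during the next cycle, N + 1."""
    events = []
    for cycle in range(num_frames + 1):
        if cycle < num_frames:
            # drawing and conversion of frame `cycle` both fit in Tf
            events.append((cycle, "draw+convert", cycle))
        if cycle > 0:
            # while a new frame is drawn, the previous one is sent out
            events.append((cycle, "transmit", cycle - 1))
    return events
```

Each frame therefore reaches the display exactly one interval Tf after its drawing began, which is the one-cycle latency implied by the timing diagram.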
Several specific examples of the frame image conversion process performed by the frame image conversion unit 23 are described below.

As a first example, consider a conversion process that increases the resolution of a frame image. Display devices capable of very high resolutions, such as 4K, have appeared in recent years, but older application programs, for example, may not support display at such resolutions. In such cases, converting the pre-conversion frame images drawn by an older application program into high-resolution post-conversion frame images makes it possible to present the user with video at the high quality the display device 14 can show.

Specifically, the frame image conversion unit 23 converts the pre-conversion frame image into a post-conversion frame image of a higher, predetermined resolution. By using a machine learning model, the frame image conversion unit 23 does not merely increase the pixel count of the post-conversion frame image; it can upscale each pixel of the pre-conversion frame image while taking the content of the surrounding pixels into account.

In this example, multiple machine learning models with different output resolutions may be prepared in advance in order to support multiple resolutions. From these, the frame image conversion unit 23 performs the conversion using a model that produces frame images at a resolution the display device 14 can display. More specifically, the frame image conversion unit 23 may select, as the model to use, one that converts to a resolution the user has specified in advance, for example on a settings screen. Alternatively, it may obtain information on the display capabilities of the display device 14 and, based on that information, select, for example, the maximum resolution the display device 14 can show as the resolution of the post-conversion frame image. This makes it possible to display video at a variety of resolutions.
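The selection logic just described might be sketched as below. The registry keys, model names, and resolutions are illustrative placeholders, not values from the specification.

```python
# Hypothetical registry of super-resolution models, one per output
# resolution (the names are invented for illustration).
SR_MODELS = {
    (1920, 1080): "sr_1080p",
    (3840, 2160): "sr_4k",
}

def select_sr_model(display_max_resolution, user_setting=None):
    """Prefer a resolution the user specified on a settings screen;
    otherwise fall back to the maximum the display reports."""
    target = user_setting if user_setting is not None else display_max_resolution
    return SR_MODELS[target]
```

A real implementation would also need to handle a display resolution with no matching model, for example by choosing the nearest supported one.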
As a second example, consider adjusting the luminance and color gamut of the pixels in a frame image. As with resolution in the first example, the color gamut a display device can reproduce varies with the type of device. Therefore, converting a pre-conversion frame image drawn by an application program that does not support a wide color gamut into a post-conversion frame image with a wider gamut makes it possible to present the user with the high-quality video the display device 14 is capable of showing.

Specifically, the frame image conversion unit 23 converts the colors of the pixels in the frame image so that the color gamut used in the post-conversion frame image is wider than that used in the pre-conversion frame image. For example, it converts a pre-conversion frame image with an SDR (Standard Dynamic Range) dynamic range into an HDR (High Dynamic Range) post-conversion frame image.

In this example too, by using a machine learning model, the color of each pixel in the post-conversion frame image is determined with the colors of the surrounding pixels taken into account. This makes it possible to convert regions that were crushed because they could not be fully expressed in SDR into colors with gradations that look natural to the human eye. Here as well, multiple machine learning models may be prepared in advance for various color gamuts, and the frame image conversion unit 23 may select which model to use based on a user designation, information on the display capabilities of the display device 14, and so on.

Also, regardless of the color gamut used, depending on drawing conditions such as the color space used when drawing the pre-conversion frame image, subtle color variations may go unexpressed within the frame image and colors may be crushed. The frame image conversion unit 23 may therefore perform a conversion that changes the luminance of each pixel in the pre-conversion frame image according to the luminance of its surrounding pixels, while keeping the available luminance range itself unchanged. By raising or lowering the luminance of some pixels, such a conversion can make the luminance of the post-conversion frame image as a whole vary in gradations that look natural to the human eye.
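To give a rough feel for a luminance adjustment that depends on neighbouring pixels while preserving the overall range, here is a toy one-dimensional filter. This is a fixed-filter stand-in for intuition only; the embodiment uses a learned model precisely because a fixed filter like this cannot adapt to image content.

```python
def adjust_luminance(row, lo=0, hi=255):
    """Blend each sample with its immediate neighbours and clamp to
    the original range [lo, hi]: local luminance shifts, but the
    representable range before and after is the same."""
    out = []
    for i, v in enumerate(row):
        left = row[i - 1] if i > 0 else v
        right = row[i + 1] if i + 1 < len(row) else v
        blended = (left + 2 * v + right) // 4  # neighbour-weighted value
        out.append(max(lo, min(hi, blended)))
    return out
```

On a hard step such as `[0, 0, 255, 255]` the edge samples are pulled toward their neighbours, producing an intermediate gradation while no output leaves the 0-255 range.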
As a third example, consider a conversion process that converts the colors of a frame image into tones suited to the viewer's color vision characteristics. It is known that color perception varies from person to person, a trait called color vision diversity; because of it, some people find certain colors difficult to distinguish. In this example, therefore, the colors of the pixels in the pre-conversion frame image are converted into colors that are easy to see for a viewer with particular color vision characteristics.

In this example, mechanically substituting one specific color for another is not enough to make the image easy for the viewer to see. For a person whose color vision makes two particular colors hard to tell apart, for instance, the colors must be converted so that they are distinguishable where the two appear adjacent. It is therefore desirable to decide what color each individual pixel should become while considering the color composition and distribution of the image itself and the arrangement of colors around each pixel.

Accordingly, in this example, the frame image conversion unit 23 uses a machine learning model prepared in advance for each type of color vision characteristic to convert the colors of the pixels in the pre-conversion frame image into tones suited to the viewer's color vision. This enables color conversion that takes the content of the frame image into account.

In this example, multiple machine learning models are assumed to be prepared in advance for the various types of color vision characteristics. The user of the image processing device 10 selects one model in advance, for example on a settings screen, according to their own color vision. The frame image conversion unit 23 then converts frame images using the model the user selected. As a result, the user can always view video in tones that are easy for them to see, regardless of which application program draws the frame images or what the displayed video contains.
The frame image conversion unit 23 may also perform a conversion combining several of the examples described above. As a specific example, it may first convert the tones of the pre-conversion frame image and then apply a resolution-increasing conversion to the tone-adjusted frame image to generate the post-conversion frame image.

Alternatively, machine learning models that perform multiple kinds of conversion in a single pass may be prepared in advance. In this case, a separately trained model is prepared for each combination of conversions expected to be needed. As an example, suppose upscaling to two output resolutions and tone conversion for two types of color vision are required. Counting the cases where only one of the two conversions is needed, there are three options on each axis, giving 3 × 3 = 9 combinations. One of these is the case where neither conversion is performed and the pre-conversion frame image is output as-is, so machine learning models are prepared for each of the remaining eight combinations. The user selects which of these eight models to use based on their own color vision and the resolutions supported by the display device 14. Converting frame images with the selected model then yields, in a single conversion, post-conversion frame images with the tones and resolution the user desires.
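The counting in the paragraph above can be written out directly. The resolution and color-vision labels are placeholders, not values from the specification.

```python
RESOLUTIONS = [None, "1080p", "4k"]      # None = no upscaling needed
CVD_TYPES = [None, "type_a", "type_b"]   # None = no tone conversion

def model_key(resolution, cvd_type):
    """Key of the combined model to load, or None for the pass-through
    case where the pre-conversion frame is output unchanged."""
    if resolution is None and cvd_type is None:
        return None
    return (resolution, cvd_type)

# 3 options x 3 options = 9 combinations; removing the pass-through
# case leaves the 8 combinations that each need a trained model.
keys = {model_key(r, c) for r in RESOLUTIONS for c in CVD_TYPES}
keys.discard(None)
```

Enumerating keys this way also makes it easy to check, at startup, that a trained model actually exists for every combination that can be selected.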
The frame image conversion unit 23 may also select which machine learning model to use according to the type of application program that draws the pre-conversion frame images, and convert the frame images with the selected model. For example, if the application program is a game program, different models may be prepared in advance for each game genre or title. Converting frame images with a model selected in this way makes it possible to perform conversions matched to the tendencies of the frame images being drawn, and to give the post-conversion frame images tones suited to the content of the application program.

The multiple machine learning models corresponding to the kinds of conversion described above may be stored in the storage unit 12 of the image processing device 10 in advance, or provided to the image processing device 10 as needed via a communication network or the like. For example, a predetermined server device connected to the image processing device 10 via the Internet may store models for the various conversions in advance and, in response to a request from the image processing device 10, transmit the model data needed to run a model. In this case, when the user of the image processing device 10 selects a conversion on a settings screen or the like, the image processing device 10 requests from the server device the model data for the model corresponding to the selected conversion, stores the model provided in response in the storage unit 12, and uses it to convert frame images. This makes it easy to update the models over time and improve the quality of the conversion process.
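The request-and-store flow just described might look like the following sketch, where `fetch_from_server` is a hypothetical callable standing in for the network request and the dictionary stands in for local storage.

```python
class ModelCache:
    """Hypothetical client-side flow: request model data from the
    server the first time a conversion is selected, keep the copy in
    local storage, and reuse it on later requests."""

    def __init__(self, fetch_from_server):
        self._fetch = fetch_from_server
        self._store = {}  # stands in for storage unit 12

    def get(self, conversion_id):
        if conversion_id not in self._store:
            # first selection of this conversion: ask the server
            self._store[conversion_id] = self._fetch(conversion_id)
        return self._store[conversion_id]
```

Because the server remains the source of truth, updated models can be rolled out by invalidating the cached copy, which matches the document's point about improving conversion quality over time.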
Specific examples of how the machine learning models used for the frame image conversions described above are generated will now be described.

The machine learning model used by the frame image conversion unit 23 may be a model obtained by machine learning in which, of two frame images obtained by drawing the same content under different drawing conditions, one is used as input data and the other as training data.

As a specific example, when the frame image conversion unit 23 performs a resolution-increasing conversion, an application program that supports both low and high resolutions is first made to execute the same processing multiple times while its output resolution is varied. This yields a high-resolution frame image and a low-resolution frame image depicting the same content. Machine learning with the low-resolution frame image as input data and the high-resolution frame image as training data then produces a machine learning model capable of converting low-resolution frame images into high-resolution ones. This approach removes the need to prepare manually adjusted images, so a model for increasing frame image resolution can be generated efficiently.
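Assuming a hypothetical `render(scene, resolution)` hook into such a dual-resolution application, pairing the two renders for training might look like this:

```python
def build_training_pairs(render, scenes):
    """Draw each scene twice at different output resolutions; the
    low-resolution frame becomes the model input and the
    high-resolution frame the training target.

    `render(scene, resolution)` is a hypothetical callable into an
    application that supports both resolutions."""
    return [(render(s, "low"), render(s, "high")) for s in scenes]
```

Because both frames come from the same drawing commands, the pairs are pixel-for-pixel aligned in content, which is what makes this an efficient alternative to hand-prepared data.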
Likewise, when the frame image conversion unit 23 converts the tones of frame images, an application program that offers output modes for color vision diversity is made to execute the same processing multiple times in different output modes. This yields, for the same content, a frame image whose tones have not been adjusted and a frame image whose tones have been adjusted for people with particular color vision characteristics. Machine learning with the unadjusted frame image as input data and the adjusted frame image as training data can then efficiently produce a model capable of converting tones into ones that people with those color vision characteristics can view easily.

The machine learning model used by the frame image conversion unit 23 may also be a model obtained by machine learning in which the input data are images produced by applying a given conversion to the images used as training data.

As a specific example, when a frame image drawn by an application program that supports high resolution is used as training data, machine learning can be performed using the image obtained by downscaling that frame image as the input data. This yields a model capable of restoring the original image by performing the inverse conversion, that is, upscaling. Similarly, to obtain a gamut-widening conversion, machine learning can be performed with a frame image drawn in a wide luminance range such as HDR as the training data and the image obtained by converting it to a narrower range such as SDR as the input data. This produces a model capable of converting to a wide color gamut.
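A sketch of building such pairs from high-quality targets follows, with a toy averaging function standing in for the real downscaling (or SDR tone-mapping) step:

```python
def downscale(row, factor=2):
    """Toy degradation: average each run of `factor` samples. A real
    pipeline would use a proper downscaling or tone-mapping filter."""
    return [sum(row[i:i + factor]) // factor
            for i in range(0, len(row), factor)]

def pairs_from_targets(high_quality_rows):
    """Each high-quality frame is the training target; its degraded
    version is the input, so the trained model approximates the
    inverse (restoring) transformation."""
    return [(downscale(y), y) for y in high_quality_rows]
```

The advantage over the paired-render approach is that only one rendering pass per frame is needed; the input side of every pair is produced synthetically.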
Although the above description assumed a single frame image as the input to the machine learning model, the frame image conversion unit 23 may also convert frame images using a machine learning model that accepts multiple frame images as input. In particular, using a model that takes as input multiple frame images drawn consecutively in time series makes it possible to convert the most recent pre-conversion frame image into a post-conversion frame image while taking into account how the content of the frames changes over time.
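Feeding a model a sliding window of recent frames might be arranged as follows; the window size and frame representation are illustrative.

```python
from collections import deque

def temporal_windows(frames, n=3):
    """Group each newly drawn frame with its n-1 predecessors so a
    model can see how the content changes over time when converting
    the most recent frame (the last element of each window)."""
    history = deque(maxlen=n)  # oldest frame drops out automatically
    windows = []
    for f in frames:
        history.append(f)
        if len(history) == n:
            windows.append(tuple(history))
    return windows
```

Note that no window is emitted until n frames have been drawn, so a real implementation would need a warm-up policy (for example, padding with the first frame) for the start of a video.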
以上説明したように、本実施形態に係る画像処理装置10によれば、フレーム画像の内容に対して効果的な変換を行って得られる変換後フレーム画像をリアルタイムでユーザーに提示することができる。 As described above, the image processing device 10 according to this embodiment can effectively convert the contents of a frame image and present the converted frame image to the user in real time.
なお、本発明の実施の形態は以上説明したものに限られない。例えばフレーム画像変換部23が行う変換処理の内容は、以上例示したものに限られず、各種の変換をフレーム画像に対して実行してもよい。 Note that embodiments of the present invention are not limited to those described above. For example, the content of the conversion process performed by the frame image conversion unit 23 is not limited to the examples given above, and various types of conversion may be performed on the frame images.
また、以上の説明においては、画像処理装置10はユーザーから比較的近い位置に存在し、表示装置14及び操作デバイス15と直接的に接続されている情報処理装置であることとした。しかしながらこれに限らず、例えばクラウドゲーミングサービスなどと呼ばれるサービスにおいては、ユーザーが使用する表示装置14及び操作デバイス15と直接的に接続されているクライアント装置ではなく、クライアント装置と通信ネットワークを介して接続されたサーバ装置が表示装置14の画面に表示させるフレーム画像を描画する場合がある。このような場合、ユーザーが使用するクライアント装置と通信ネットワークを介して接続されたサーバ装置が、本発明における画像処理装置10として機能することとしてもよい。この場合、画像処理装置10として機能するサーバ装置は、アプリケーションプログラムを実行して描画される変換前フレーム画像を変換して変換後フレーム画像を生成し、生成した変換後フレーム画像をクライアント装置に対して送信する。 Furthermore, in the above explanation, the image processing device 10 is an information processing device that is located relatively close to the user and is directly connected to the display device 14 and operation device 15. However, this is not limited to this. For example, in services such as cloud gaming services, a server device connected to the client device via a communications network may render frame images to be displayed on the screen of the display device 14, rather than a client device directly connected to the display device 14 and operation device 15 used by the user. In such cases, the server device connected to the client device used by the user via a communications network may function as the image processing device 10 of the present invention. In this case, the server device functioning as the image processing device 10 executes an application program to convert the rendered pre-conversion frame images to generate converted frame images, and transmits the generated converted frame images to the client device.
Furthermore, in the above description, the image processing device 10 also executes the application program that renders the pre-conversion frame images, but this need not be the case. The image processing device 10 according to an embodiment of the present invention may instead acquire pre-conversion frame images rendered by another information processing device, convert them into converted frame images, and transmit the result to the display device 14 viewed by the user. For example, in the cloud gaming service mentioned above, a server device may render the pre-conversion frame images and transmit them to a client device functioning as the image processing device 10, and the image processing device 10 may convert the received pre-conversion frame images into converted frame images and present them to the user.
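The two deployment variants described above differ only in where the machine-learning conversion runs relative to the network boundary: on the server before transmission, or on the client after reception. A minimal sketch of both follows; `render`, `convert`, and the send/display callbacks are hypothetical placeholders, not APIs from the embodiment.

```python
def render(app_state):
    # Stand-in renderer: produces a "pre-conversion" frame from application state.
    return [[app_state]]

def convert(frame):
    # Stand-in for the machine-learning conversion applied to each frame.
    return [[p * 10 for p in row] for row in frame]

def server_side_pipeline(app_state, send):
    # Variant 1: the server renders AND converts, then streams the converted frame.
    send(convert(render(app_state)))

def client_side_pipeline(received_frame, display):
    # Variant 2: the server only renders; the client converts before display.
    display(convert(received_frame))

sent = []
server_side_pipeline(7, sent.append)

shown = []
client_side_pipeline(render(7), shown.append)
```

Either way, the user ultimately sees the same converted frame; the choice affects only which device bears the conversion workload and what travels over the network.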
It should be noted that the functions provided by the components described herein may be implemented by any circuitry or processing circuitry, including general-purpose processors, application-specific processors, integrated circuits, ASICs (Application Specific Integrated Circuits), CPUs (Central Processing Units), conventional circuits, and/or combinations thereof, configured or programmed to provide the described functions. A processor includes transistors and other circuits and is regarded as circuitry or processing circuitry. A processor may be a programmed processor that executes a program stored in memory.
In this specification, a circuit (circuitry), unit, or means is hardware that is programmed to realize a described function, or hardware that performs that function. The hardware may be any hardware disclosed in this specification, or any hardware that is programmed to realize, or known to perform, the described function. When the hardware is a processor, which is regarded as a type of circuitry, the circuitry, means, or unit is a combination of the hardware and the software used to operate the hardware and/or processor.
The present disclosure may include the following aspects.
The present disclosure may include the following aspects.
[Item 1]
An image processing device comprising circuitry configured to:
acquire a frame image constituting a video to be presented to a user each time the frame image is drawn; and
convert the acquired frame image using a machine learning model prepared in advance, and output the resulting converted frame image as the frame image to be presented to the user.
[Item 2]
The image processing device according to item 1, wherein the circuitry performs a conversion that increases the resolution of the acquired frame image.
[Item 3]
The image processing device according to item 1, wherein the circuitry performs a conversion that changes the luminance of pixels included in the acquired frame image in accordance with the luminance of surrounding pixels.
[Item 4]
The image processing device according to item 1, wherein the circuitry converts the colors of pixels included in the acquired frame image so that the color gamut used in the converted frame image is wider than the color gamut used in the acquired frame image.
[Item 5]
The image processing device according to item 1, wherein the circuitry converts the colors of pixels included in the acquired frame image into color tones that match the color vision characteristics of the viewer.
[Item 6]
The image processing device according to item 1, wherein the circuitry performs the conversion using a machine learning model selected, based on a given condition, from a plurality of machine learning models prepared in advance.
[Item 7]
The image processing device according to item 6, wherein the circuitry performs the conversion using a machine learning model selected according to the type of application program that rendered the acquired frame image.
[Item 8]
The image processing device according to item 1, wherein the machine learning model is obtained by machine learning that uses, of two frame images obtained by rendering the same content under different rendering conditions, one as input data and the other as teacher data.
[Item 9]
An image processing method comprising:
acquiring a frame image constituting a video to be presented to a user each time the frame image is drawn; and
converting the acquired frame image using a machine learning model prepared in advance, and outputting the resulting converted frame image as the frame image to be presented to the user.
[Item 10]
A computer-readable, non-transitory information storage medium storing a program that causes a computer to:
acquire a frame image constituting a video to be presented to a user each time the frame image is drawn; and
convert the acquired frame image using a machine learning model prepared in advance, and output the resulting converted frame image as the frame image to be presented to the user.
10 Image processing device, 11 Control unit, 12 Storage unit, 13 Interface unit, 14 Display device, 15 Operation device, 21 Application execution unit, 22 Frame image acquisition unit, 23 Frame image conversion unit.
Claims (10)

1. An image processing device comprising:
a frame image acquisition unit that acquires a frame image constituting a video to be presented to a user each time the frame image is drawn; and
a frame image conversion unit that converts the acquired frame image using a machine learning model prepared in advance, and outputs the resulting converted frame image as the frame image to be presented to the user.

2. The image processing device according to claim 1, wherein the frame image conversion unit performs a conversion that increases the resolution of the acquired frame image.

3. The image processing device according to claim 1, wherein the frame image conversion unit performs a conversion that changes the luminance of pixels included in the acquired frame image in accordance with the luminance of surrounding pixels.

4. The image processing device according to claim 1, wherein the frame image conversion unit converts the colors of pixels included in the acquired frame image so that the color gamut used in the converted frame image is wider than the color gamut used in the acquired frame image.

5. The image processing device according to claim 1, wherein the frame image conversion unit converts the colors of pixels included in the acquired frame image into color tones that match the color vision characteristics of the viewer.

6. The image processing device according to claim 1, wherein the frame image conversion unit performs the conversion using a machine learning model selected, based on a given condition, from a plurality of machine learning models prepared in advance.

7. The image processing device according to claim 6, wherein the frame image conversion unit performs the conversion using a machine learning model selected according to the type of application program that rendered the acquired frame image.

8. The image processing device according to claim 1, wherein the machine learning model is obtained by machine learning that uses, of two frame images obtained by rendering the same content under different rendering conditions, one as input data and the other as teacher data.

9. An image processing method comprising:
acquiring a frame image constituting a video to be presented to a user each time the frame image is drawn; and
converting the acquired frame image using a machine learning model prepared in advance, and outputting the resulting converted frame image as the frame image to be presented to the user.

10. A program that causes a computer to execute:
acquiring a frame image constituting a video to be presented to a user each time the frame image is drawn; and
converting the acquired frame image using a machine learning model prepared in advance, and outputting the resulting converted frame image as the frame image to be presented to the user.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2024-071552 | 2024-04-25 | ||
| JP2024071552 | 2024-04-25 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025225483A1 (en) | 2025-10-30 |
Family
ID=97490166
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2025/014989 (WO2025225483A1, pending) | Image processing device, image processing method, and program | 2024-04-25 | 2025-04-16 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025225483A1 (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2019211811A (en) * | 2018-05-31 | 2019-12-12 | Fuji Xerox Co., Ltd. | Image processing apparatus and program |
| US20210279841A1 (en) * | 2020-03-09 | 2021-09-09 | Nvidia Corporation | Techniques to use a neural network to expand an image |
| JP2022033562A (en) * | 2020-08-17 | 2022-03-02 | FUJIFILM Business Innovation Corp. | Information processing equipment |
| JP2023518865A (en) * | 2020-03-25 | 2023-05-08 | Nintendo Co., Ltd. | Systems and methods for machine-learned image conversion |
| JP2023129183A (en) * | 2022-03-02 | 2023-09-14 | Nvidia Corporation | Remastering lower dynamic range content for higher dynamic range display |
| WO2023218936A1 (en) * | 2022-05-10 | 2023-11-16 | Sony Semiconductor Solutions Corporation | Image sensor, information processing method, and program |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN108347647B (en) | Video picture displaying method, device, television set and storage medium | |
| US20060222246A1 (en) | Screen data transmitting device | |
| KR20180102125A (en) | Video display system | |
| JP2012027154A (en) | Image display system | |
| JP2013522974A (en) | Method for digital processing of video signals, digital image processor, and video display system | |
| JP2018060075A (en) | Information processing apparatus and image processing method | |
| CN113225619A (en) | Frame rate self-adaption method, device, equipment and readable storage medium | |
| US6683616B1 (en) | Method and apparatus for color adjustment of display screen | |
| JP2014116686A (en) | Information processing device, information processing method, output device, output method, program, and information processing system | |
| JP2005506811A (en) | Method and display system for adjusting display settings of display device | |
| CN113852722A (en) | A video color ringtone playback control method, system, calling terminal and readable medium | |
| WO2025225483A1 (en) | Image processing device, image processing method, and program | |
| JP2019103041A (en) | Image processing apparatus and image processing method | |
| JP2004279989A (en) | Network video adjustment system | |
| JP2002500478A (en) | Method and apparatus for reducing flicker in a television display of network application data | |
| JP2006345309A (en) | Image quality adjusting device, image quality adjusting method, and image display device | |
| CN112672217A (en) | Video playing method, video processing method, device, system and storage medium | |
| WO2020157979A1 (en) | Head mount display and image display method | |
| JP7321961B2 (en) | Image processing device and image processing method | |
| JP4219725B2 (en) | Image display method, image display system, image data conversion device, and image display device | |
| JP2007017615A (en) | Image processing apparatus, image processing method, and program | |
| KR102901668B1 (en) | Server and method for providing contents, and user terminal and method for outputting contents | |
| CN119248216B (en) | A method and device for dynamically synchronizing EDID of computer display screen in cloud PC mode | |
| CN112135057A (en) | Video image processing method | |
| CN112272270A (en) | Video data processing method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25793636; Country of ref document: EP; Kind code of ref document: A1 |