WO2013081281A1 - Image conversion apparatus and method for converting a two-dimensional image into a three-dimensional image, and recording medium therefor
Image conversion apparatus and method for converting a two-dimensional image into a three-dimensional image, and recording medium therefor
- Publication number
- WO2013081281A1 (PCT/KR2012/007372)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- control point
- depth information
- setting
- grid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/261—Image signal generators with monoscopic-to-stereoscopic image conversion
Definitions
- The present invention relates to an image conversion apparatus and method for converting a 2D image into a 3D image, and to a recording medium therefor.
- More particularly, the present invention relates to an image conversion apparatus and method that, when generating a depth map for converting a 2D image into a 3D image, display a grid representing Z-axis depth information in the 3D space and provide an interface for inputting or changing depth information by moving an object in the Z-axis direction, and to a recording medium therefor.
- One method of generating a stereoscopic image is to convert a 2D video image into a 3D stereoscopic image.
- With this approach, existing video content can be reused, and a 3D stereoscopic image can be obtained without separate recording equipment.
- To do so, a left image and a right image having different disparity information must be generated from the 2D planar image.
- In general, a depth map is generated by assigning depth information to the objects shown in the 2D plane image, and left and right images are generated based on that depth map, so that the 2D plane image is converted into a 3D stereoscopic image.
- Accordingly, the quality of the 3D stereoscopic image is highly dependent on the accuracy of the depth map.
- A depth map may be generated automatically, by calculating pixel values of the 2D image or by measuring motion vectors, or manually, by directly inputting depth information for each 2D object.
- The automatic generation method has the advantage of reducing working time, but it has technical limitations in generating high-quality 3D stereoscopic images.
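For illustration only, the following is a minimal sketch of what the "calculating pixel values" flavor of automatic depth estimation could look like. The heuristic of treating brighter pixels as nearer and the function name are assumptions made for this example, not part of the disclosure.

```python
import numpy as np

def naive_depth_from_luminance(rgb: np.ndarray) -> np.ndarray:
    """Assumed heuristic: brighter pixels are treated as closer to the viewer.

    rgb: H x W x 3 array with values in [0, 255].
    Returns an H x W pseudo-depth map normalized to [0.0, 1.0].
    """
    # Rec. 601 luma weights; any grayscale conversion would do here.
    luma = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    span = luma.max() - luma.min()
    if span == 0:  # guard against a constant image
        return np.zeros(luma.shape, dtype=np.float32)
    return ((luma - luma.min()) / span).astype(np.float32)
```

A heuristic like this is fast but, as the text notes, it cannot match the quality obtainable when a user assigns depth to each object directly.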
- The present invention has been made to solve the above problems. It provides an image conversion apparatus and method that, when a depth map is generated to convert a 2D image into a 3D image, display a grid representing Z-axis depth information in the 3D space and provide an interface for directly inputting or changing depth information by moving an object in the Z-axis direction, and a recording medium therefor.
- According to one aspect of the present invention, there is provided an image conversion apparatus including: a user input unit for setting a Z control point on an object selected in a 2D image and moving the Z control point in the Z-axis direction to input depth information;
- a depth map generator for recognizing the depth information of the object according to that movement and generating a depth map;
- a display unit for projecting the object onto a virtual plane and displaying it according to the 2D image and the input depth information;
- and an image controller for controlling the object to be projected onto the virtual plane and displayed according to the movement of the Z control point set on the object.
- the image converting apparatus further includes a grid setting unit that displays, as a grid, positions having the same value in the Z-axis direction as the Z control point moves.
- The grid setting unit may include a grid spacing setting module for adjusting the grid spacing, a graphic setting module for setting graphic conditions including at least one of the color, thickness, and line type of the grid, and an activation setting module for setting whether the grid is displayed.
- The image conversion apparatus further includes a 3D image rendering unit that generates a left image and a right image having different parallax information for the 2D image based on the depth map, and generates a 3D image by combining the left image and the right image.
- The depth map generator includes an object information extraction module that extracts information about the selected object from the user input unit, a depth information setting module that recognizes the Z-value change according to the movement of the Z control point as the depth information of the object, and a depth map generation module that generates a depth map according to the recognized depth information of the object.
- The image controller controls each object to be projected onto a virtual plane and displayed in accordance with the movement of the Z control point set for that object.
- According to another aspect of the present invention, there is provided an image conversion method including: displaying an input 2D image in a 3D space; selecting an object in the 2D image and setting a Z control point on the selected object; moving the Z control point in the Z-axis direction and projecting and displaying a virtual object; and recognizing the Z-value change according to the movement of the Z control point as the depth information of the object.
- In the step of projecting and displaying the virtual object by moving the Z control point in the Z-axis direction, positions having the same Z-axis value are displayed as a grid according to the movement of the Z control point, and the virtual object is projected and displayed in the space in which the grid is displayed.
- In the step of selecting an object in the 2D image and setting the Z control point on the selected object, a Z control point is set for each object when two or more objects are selected in the 2D image.
- In that step, only one of the Z control points set on the two or more objects may be kept active at a time.
- In the step of projecting and displaying the virtual object by moving the Z control point in the Z-axis direction, two or more virtual objects are projected and displayed by moving the Z control points set on the two or more objects, respectively, in the Z-axis direction.
- The image conversion method further includes generating a depth map according to the depth information of the object after recognizing the Z-value change according to the movement of the Z control point as the depth information of the object, and rendering a 3D image according to the depth map.
- According to yet another aspect of the present invention, there is provided a recording medium readable by an electronic device, on which is recorded a program for executing an image conversion method including displaying an input 2D image in a 3D space, selecting an object in the 2D image and setting a Z control point on the selected object, moving the Z control point in the Z-axis direction and projecting and displaying a virtual object, and recognizing the Z-value change according to the movement of the Z control point as the depth information of the object.
- According to the present invention, when a depth map is generated to convert a 2D image into a 3D image, the user can select an object in the 2D image and move the selected object directly in the Z-axis direction in the 3D space to input or change its depth information, so the depth information of the object can be checked in an intuitive manner.
- the depth information can be input or changed in detail.
- FIG. 1 is a block diagram of an image conversion apparatus for converting a 2D image into a 3D image according to an embodiment of the present invention.
- FIG. 2 is a block diagram of a depth map generator according to an embodiment of the present invention.
- FIG. 3 is a block diagram of a grid setting unit according to an embodiment of the present invention.
- FIG. 4 is a flowchart of an image conversion method for converting a 2D image into a 3D image according to an embodiment of the present invention.
- FIG. 5 is a first state diagram of a work screen of an image conversion device according to an embodiment of the present invention.
- FIG. 6 is a second state diagram of a work screen of an image conversion device according to an embodiment of the present invention.
- FIG. 7 is a third state diagram of a work screen of an image conversion device according to an embodiment of the present invention.
- FIG. 1 is a configuration diagram of an image conversion apparatus for converting a 2D image into a 3D image according to an embodiment of the present invention
- FIG. 2 is a configuration diagram of a depth map generator according to an embodiment of the present invention
- FIG. 3 is a configuration diagram of a grid setting unit according to an embodiment of the present invention.
- The image conversion apparatus 100 includes a user input unit 120, a display unit 130, a depth map generator 200, and an image controller 140.
- the image conversion apparatus 100 may further include a 2D image input unit 110, a grid setting unit 300, or a 3D image rendering unit 160.
- The image conversion apparatus 100 provides a graphical interface through which a user sets depth information on an input 2D image, generates a depth map according to the user's input, and generates a 3D image.
- the 2D image input unit 110 receives a 2D image to be converted into a 3D image.
- the user input unit 120 provides a user interface for selecting and controlling a function of the image conversion apparatus 100.
- the user input unit 120 sets the Z control point on the object selected in the 2D image and moves the Z control point in the Z axis direction to input depth information.
- the user may select an object displayed on the 2D image as a closed curve through the user input unit 120 and input depth information to the object.
- Object selection in the 2D image may use image recognition techniques such as automatic edge detection based on image contours, or edge recognition based on closed-curve patterns defined in advance by the user (e.g., faces, circles, squares).
- A Bezier curve is a free-form curve created by selecting several control points and connecting them.
- The object automatically recognized by the above methods may also be edited by the user.
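As a rough illustration of the Bezier-curve selection mentioned above, the sketch below evaluates a closed Bezier outline from user-chosen control points using de Casteljau's algorithm. The function names and the idea of sampling the curve into a polygon outline are assumptions made for illustration, not part of the disclosure.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def de_casteljau(control_points: List[Point], t: float) -> Point:
    """Evaluate one point of a Bezier curve at parameter t in [0, 1]."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def sample_closed_bezier(control_points: List[Point], samples: int = 64) -> List[Point]:
    """Sample a closed outline by appending the first control point at the end,
    so clicking a few points around an object yields a closed selection curve."""
    closed = list(control_points) + [control_points[0]]
    return [de_casteljau(closed, i / (samples - 1)) for i in range(samples)]
```

In practice the sampled outline could then be rasterized into a selection mask; the automatic contour-based methods named above would produce such a mask directly.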
- Through the user input unit 120, the user may directly enter depth information for the selected object, or may set a Z control point by clicking a predetermined area of the object and input depth information by dragging the Z control point in the Z-axis direction.
- the display unit 130 projects the object onto a virtual plane based on the 2D image and the input depth information, and displays the object on the 3D space.
- The display unit 130 shows the user the input 2D image together with the series of work processes, inputs, and processing results involved in entering depth information for converting that image.
- The display unit 130 displays the 2D image on the x-y plane of the 3D space, and when the Z control point is moved, the object is projected onto a virtual plane containing the Z control point.
- The image controller 140 controls the object to be projected onto the virtual plane and displayed according to the movement of the Z control point set on the object. That is, the image controller 140 controls the display unit 130 to display the 2D image received by the 2D image input unit 110.
- The image controller 140 receives the Z control point through the user input unit 120, displays the object on a virtual plane containing the Z control point, and provides the position information of the Z control point, for example its coordinate information, to the depth map generator 200.
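A minimal sketch of the projection idea described above, assuming the object is represented by its x-y outline points: the object keeps its x-y coordinates and is simply placed on the virtual plane z = Z of its control point. The data layout and function name are assumptions for illustration.

```python
from typing import List, Tuple

def project_onto_virtual_plane(outline_xy: List[Tuple[float, float]],
                               z_control: float) -> List[Tuple[float, float, float]]:
    """Place a 2D object outline on the plane z = z_control in the 3D workspace."""
    return [(x, y, z_control) for x, y in outline_xy]
```

Dragging the Z control point would then amount to calling this again with the new Z value and redrawing the projected outline.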
- the depth map generator 200 generates a depth map by recognizing depth information of the object according to the movement of the Z control point.
- the depth map generator 200 extracts object information and generates a depth map by recognizing position information of a Z control point assigned to a corresponding object as depth information.
- the depth map generator 200 includes an object information extraction module 210, a depth information setting module 220, and a depth map generation module 230.
- The object information extraction module 210 extracts information on the selected object from the user input unit 120, the depth information setting module 220 recognizes the Z-value change according to the movement of the Z control point as the depth information of the object, and the depth map generation module 230 generates a depth map according to the recognized depth information of the object.
- the object information extraction module 210 extracts information of the object selected for setting depth information from the 2D image.
- the object information may include identification information, x, y coordinates, vector values, etc. of the object selected in the 2D image.
- the depth information setting module 220 may set depth information of the corresponding object according to the Z control point set in the object.
- the object selected in the 2D image is positioned at a predetermined Z point in the 3D space according to the movement of the Z control point.
- the depth information setting module 220 may recognize depth information of the corresponding object using the Z value of the object.
- the depth map generation module 230 generates a depth map of the input 2D image based on the depth information of each object recognized by the depth information setting module 220.
- the depth map generator 200 may output the depth map by recognizing the depth value input by the user using the Z control point.
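The depth-map assembly just described can be pictured as below: each selected object contributes its mask and the Z value read from its control point, and the map is filled accordingly. The mask representation, the background value of 0, and the overwrite rule are assumptions for illustration, not specified by the text.

```python
import numpy as np
from typing import List, Tuple

def build_depth_map(shape: Tuple[int, int],
                    objects: List[Tuple[np.ndarray, float]]) -> np.ndarray:
    """shape: (H, W) of the 2D image.
    objects: list of (boolean mask of shape H x W, Z value from the object's Z control point).
    Later objects overwrite earlier ones where masks overlap."""
    depth = np.zeros(shape, dtype=np.float32)  # assumed background depth of 0
    for mask, z_value in objects:
        depth[mask] = z_value
    return depth
```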
- the 3D image rendering unit 160 converts the input 2D image into a 3D image by using a depth map generated by the depth map generator 200.
- The 3D image rendering unit 160 uses the depth map to generate a left image and a right image having different parallax information, and combines them to generate a 3D image.
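A very small sketch of how a left/right pair could be derived from the depth map by horizontal pixel shifts. The disparity scaling and the hole handling (simply keeping the original pixel) are assumptions for illustration rather than the renderer actually used.

```python
import numpy as np

def render_stereo_pair(image: np.ndarray, depth: np.ndarray,
                       max_disparity_px: int = 8) -> tuple:
    """image: H x W x 3, depth: H x W in [0, 1]. Returns (left, right).
    Pixels with larger depth values are shifted more, in opposite
    directions for the two views."""
    h, w = depth.shape
    left = image.copy()
    right = image.copy()
    shift = np.rint(depth * max_disparity_px).astype(int)
    for y in range(h):
        for x in range(w):
            d = shift[y, x]
            if 0 <= x + d < w:
                left[y, x + d] = image[y, x]
            if 0 <= x - d < w:
                right[y, x - d] = image[y, x]
    return left, right
```

The pair could then be combined into whatever stereoscopic format the display expects (side-by-side, interleaved, anaglyph, and so on).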
- The image controller 140 displays the 2D image received by the 2D image input unit 110 on the display unit 130, and displays the object selected in the 2D image according to the object selection signal entered by the user through the user input unit 120.
- The image controller 140 controls the selected object to be displayed on the plane containing the Z control point according to the Z control point input, so that the process in which the user selects a specific object in the 2D image, assigns a control point to it, and enters depth information is shown on the display unit 130.
- the image controller 140 transmits the object selection signal and the Z control point signal to the depth map generator 200 to control the depth map generator 200 to generate a depth map according to the Z control point input by the user.
- The image controller 140 controls the 3D image rendering unit 160 to generate a 3D image using the depth map produced by the depth map generator 200, and displays the generated 3D image on the display unit 130.
- The grid setting unit 300 displays the positions having the same Z-axis value as a grid according to the movement of the Z control point. As shown in FIG. 3, the grid setting unit 300 includes a grid spacing setting module 310, a graphic setting module 320, and an activation setting module 330.
- The grid spacing setting module 310 sets the spacing of the grid displayed in the 3D space according to the user's selection. The user can adjust the grid spacing so that it is displayed wider or narrower depending on the size of the object being edited or the arrangement of the objects.
- The graphic setting module 320 can change graphic settings such as the color, thickness, and line type of the grid displayed in the 3D space according to the user's selection. The user may adjust these settings so that the grid is displayed more clearly according to the colors of the image being edited, the colors of individual objects, or the positions of the objects.
- The activation setting module 330 shows or hides the grid according to the user's selection.
- The user can thus choose the grid spacing, graphics, and activation according to the characteristics of the 2D image being worked on and personal preference, which makes editing the depth information of the 2D image easier.
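The three grid modules could be summarized by a small settings structure like the one below. The field names and defaults are assumptions chosen only to mirror the spacing, graphic, and activation options described above.

```python
from dataclasses import dataclass

@dataclass
class GridSettings:
    spacing: float = 10.0          # distance between constant-depth grid planes along the Z axis
    color: str = "#808080"         # grid line color
    thickness: float = 1.0         # grid line thickness in pixels
    line_type: str = "solid"       # e.g. "solid" or "dashed"
    visible: bool = True           # activation setting: show or hide the grid

    def grid_z_positions(self, z_max: float) -> list:
        """Z values at which constant-depth grid planes would be drawn."""
        n = int(z_max // self.spacing) + 1
        return [i * self.spacing for i in range(n)]
```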
- With the image conversion apparatus 100, the user can select an object in the input 2D image, assign a Z control point to it, and enter depth information by dragging the point while visually checking the Z value of the object.
- Since the image conversion apparatus 100 lets the user choose, through the user input unit 120, the object whose depth information is to be set, and set the depth information of the selected object while visually confirming it by moving the Z control point, it can generate higher-quality 3D stereoscopic images than the automatic generation method.
- Also, since the depth information set by moving the Z control point is automatically recognized by the depth map generator 200 as the depth information of the selected object without the user directly typing a depth value, less operator skill is required than with the manual generation method.
- In addition, the position of the moved Z control point can be compared intuitively and objectively against the grid displayed by the grid setting unit 300.
- Accordingly, the depth information can be finely adjusted even without a professional operator, and a higher-quality 3D stereoscopic image can be generated than with the automatic generation method.
- In other words, the image conversion apparatus 100 is a semi-automatic 3D stereoscopic image generator: the depth information of an object is adjusted manually through the user input unit 120, and the adjusted depth information is recognized automatically by the depth map generator 200.
- Because the apparatus includes the user input unit 120, the depth map generator 200, and the grid setting unit 300, even an unskilled user can intuitively generate high-quality 3D stereoscopic images, solving the respective problems of the fully automatic and fully manual methods.
- When two or more objects are selected, the user input unit 120 sets a Z control point for each object, and by moving the Z control points the positions of the objects can be compared and changed visually, allowing more precise control of the stereoscopic effect between objects.
- FIGS. 5 to 7 are first to third state diagrams of work screens of an image conversion apparatus according to an embodiment of the present invention.
- The object A is selected in the 2D image through the user input unit. For example, an object can be selected by forming a Bezier curve by clicking around the area of the object to be selected, by automatic edge detection using image contours, or by selecting a predefined closed-curve pattern.
- The Z control point is set by selecting (for example, clicking with a mouse) any point in the region containing the selected object, and, as in (c) of FIG. 5, the Z coordinate of the selected object is changed by dragging the Z control point in the Z-axis direction.
- The 2D image is positioned on the x-y plane, and grid lines at a predetermined interval are displayed along the Z-axis direction.
- In the working environment of the image conversion apparatus, an object located in the x-y plane can be moved, and each object can be moved in the Z-axis direction, as illustrated in FIG. 7.
- the virtual object is projected on the grid and displayed.
- the user can select the object A in the 2D image and move it in the Z-axis direction to the position of the object B. Accordingly, the object B has the depth information Z '.
- the user can easily set the depth information using the grid.
- For example, the user may move the object C onto the same grid plane as the depth information Z'.
- the depth information may be input by arranging objects selected in the 2D image in 3D space. This provides an intuitive working environment for easy editing.
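The grid-assisted placement in the example above (moving object C onto the same grid plane as Z') can be pictured as snapping a dragged Z value to the nearest grid plane. The snapping rule is an assumption made for illustration.

```python
def snap_to_grid(z_value: float, spacing: float) -> float:
    """Snap a dragged Z control point to the nearest constant-depth grid plane."""
    return round(z_value / spacing) * spacing

# e.g. with a grid spacing of 10, dragging to z = 23.4 lands on the plane z = 20,
# so a second object dragged near the same plane receives the same depth value.
```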
- FIG. 4 is a flowchart of an image conversion method of converting a 2D image into a 3D image according to an exemplary embodiment of the present invention.
- a 2D image is displayed in a 3D space (S410), an object is selected from the 2D image, and a Z control point is set to the selected object (S420).
- the Z control point is moved in the Z-axis direction to project a virtual object to be displayed (S430), and the Z value change according to the movement of the Z control point is recognized as depth information of the object (S440).
- the image conversion method generates depth map information according to the depth information of the object (S450), and renders a 3D image (S460).
- In detail, the display unit displays the input 2D image on the x-y plane of the 3D space (S410), and the user checks the 2D image on the display unit, selects an object through the user input unit, and sets a Z control point on the selected object (S420).
- When two or more objects are selected, depth information may be input or changed by setting a Z control point for each object and moving each object along the Z-axis.
- only one of the Z control points set on the two or more selected objects may be activated and moved in the Z-axis direction to change or input depth information of each object differently.
- Through the user input unit, the user drags the Z control point in the Z-axis direction to move it to a desired position in the 3D space, and the image controller projects the object onto the virtual plane containing the Z control point set by the user and displays it (S430).
- In step S430, positions having the same Z-axis value may be displayed as a grid according to the movement of the Z control point, and the virtual object may be projected and displayed in the space in which the grid is displayed.
- Also, in step S430, two or more virtual objects may be projected and displayed together according to the changes in the Z coordinates of the Z control points set on the two or more objects.
- The depth map generator recognizes the depth information of the object according to the change in the Z control point value set by the user (S440) and generates a depth map from it (S450). The 3D image rendering unit then renders the 2D image into a 3D image using the generated depth map (S460).
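Putting steps S410 to S460 together, a highly simplified end-to-end flow might look as follows, reusing the depth-map and stereo-rendering sketches above. All names are illustrative assumptions rather than the actual implementation.

```python
import numpy as np

def convert_2d_to_3d(image: np.ndarray, selections: list) -> tuple:
    """image: H x W x 3 input frame.
    selections: list of (boolean mask, Z value set via the object's Z control point).
    Assumes build_depth_map and render_stereo_pair from the earlier sketches are in scope.
    Returns a (left, right) stereo pair rendered from the user-defined depth map."""
    depth = build_depth_map(image.shape[:2], selections)        # S440-S450
    depth_norm = depth / depth.max() if depth.max() > 0 else depth
    return render_stereo_pair(image, depth_norm)                # S460
```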
- As described above, the present invention sets depth information by selecting an object in the 2D plane, setting a Z control point on the object, and dragging the Z control point.
- Since the Z value of the object becomes its depth information in the 3D space, the user can visually recognize the depth information given to each object through its Z value, that is, its height above the x-y plane.
- the Z value of each object can be checked on one screen so that the difference in depth information between objects can be compared with each other, so that the overall depth information can be easily recognized and edited.
- Meanwhile, the image conversion method described above, which includes displaying an input 2D image in a 3D space, selecting an object in the 2D image and setting a Z control point on the selected object, moving the Z control point in the Z-axis direction and projecting and displaying a virtual object, and recognizing the Z-value change according to the movement of the Z control point as the depth information of the object, may be written as a program and provided on a recording medium readable by an electronic device.
- The image conversion method can be written as a program, and the codes and code segments constituting the program can be easily inferred by a programmer skilled in the art.
- The program implementing the image conversion method is stored on an information storage medium (readable medium) that can be read by an electronic device, and when it is read and executed by the electronic device, it converts a 2D image into a 3D image.
- The image conversion apparatus for converting a 2D image into a 3D image may include a processor, a memory, a storage device, and an input/output device, and these components may be interconnected using, for example, a system bus.
- the processor may process instructions for execution within the device.
- the processor may be a single-threaded processor, and in other implementations, the processor may be a multi-threaded processor.
- The processor can process instructions stored in the memory or on the storage device.
- the memory stores information in the apparatus.
- the memory is a computer readable medium.
- the memory may be a volatile memory unit, and for other implementations, the memory may be a nonvolatile memory unit.
- the storage device described above can provide a mass storage for the device.
- the storage device is a computer readable medium.
- the storage device may include, for example, a hard disk device, an optical disk device, or some other mass storage device.
- the input / output device provides an input / output operation for the device according to the present invention.
- In one implementation, the input/output device may include one or more network interface devices such as an Ethernet card, a serial communication device such as an RS-232 port, and/or a wireless interface device such as an 802.11 card.
- the input / output device can include driver devices, such as keyboards, printers, and display devices, configured to send output data to and receive input data from other input / output devices.
- the apparatus according to the invention may be driven by instructions that cause one or more processors to perform the functions and processes described above.
- such instructions may include instructions that are interpreted, for example, script instructions such as JavaScript or ECMAScript instructions, or executable code or other instructions stored on a computer readable medium.
- the device according to the present invention may be implemented in a distributed manner over a network, such as a server farm, or may be implemented in a single computer device.
- Although the specification and drawings describe exemplary device configurations, the functional operations and subject-matter implementations described herein may be embodied in other types of digital electronic circuitry, or in computer software, firmware, or hardware including the structures disclosed herein and their structural equivalents, or in a combination of one or more of them. Implementations of the subject matter described herein may be implemented as one or more computer program products, that is, one or more modules of computer program instructions encoded on a tangible program storage medium for controlling, or for execution by, an apparatus according to the invention.
- the computer readable medium may be a machine readable storage device, a machine readable storage substrate, a memory device, a composition of materials affecting a machine readable propagated signal, or a combination of one or more thereof.
- The term "processing system" encompasses all instruments, devices, and machines for processing data, including, for example, a programmable processor, a computer, or multiple processors or computers.
- the processing system may include, in addition to hardware, code that forms an execution environment for a computer program on demand, such as code constituting processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more thereof. .
- A computer program (also known as a program, software, software application, script, or code) installed on an apparatus according to the invention and executing a method according to the invention can be written in any form of programming language, including compiled or interpreted languages and declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- a computer program does not necessarily correspond to a file in a file system.
- A program may be stored in a single file dedicated to the program in question, in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code), or in part of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document).
- the computer program may be deployed to run on a single computer or on multiple computers located at one site or distributed across multiple sites and interconnected by a communication network.
- Computer-readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media, and memory devices, including, for example, semiconductor memory devices such as EPROM, EEPROM, and flash memory devices, magnetic disks such as internal hard disks or external disks, magneto-optical disks, and CD-ROM and DVD-ROM discs.
- the processor and memory can be supplemented by or integrated with special purpose logic circuitry.
- Implementations of the subject matter described herein may be implemented in a computing system that includes a back-end component such as a data server, or a middleware component such as an application server, or a front-end component such as a client computer having a web browser or graphical user interface through which a user may interact with an implementation of the subject matter described herein, or any combination of one or more of such back-end, middleware, or front-end components. The components of the system may be interconnected by any form or medium of digital data communication, such as a communication network.
- The present invention can be used to provide an interface for inputting or changing depth information by adjusting the Z coordinate of an object in a 3D space in order to convert a 2D image into a 3D image, or an interface for inputting or changing depth information by adjusting the Z coordinate of an object in a 3D space in which a grid representing positions of equal depth is displayed along the Z-axis direction, as well as an image conversion apparatus and method using the same and a recording medium therefor.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention relates to an image conversion apparatus and method for converting a two-dimensional image into a three-dimensional image, and to a recording medium therefor. The image conversion apparatus comprises: a user input unit for setting a Z control point on an object selected in a two-dimensional image and moving the Z control point in the Z-axis direction so as to receive depth information; a depth map generation unit for recognizing the depth information of the object according to the movement of the Z control point and generating a depth map; a display unit for projecting the object onto a virtual plane in accordance with the two-dimensional image and the input depth information, and displaying the object; and an image controller for controlling the object to be projected onto the virtual plane according to the movement of the Z control point set on the object and to be displayed.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2011-0125866 | 2011-11-29 | ||
| KR10-2011-0125865 | 2011-11-29 | ||
| KR1020110125865A KR20130059733A (ko) | 2011-11-29 | 2011-11-29 | Image conversion apparatus and method for converting a two-dimensional image into a three-dimensional image, and recording medium therefor |
| KR1020110125866A KR101388668B1 (ko) | 2011-11-29 | 2011-11-29 | Image conversion apparatus and method for converting a two-dimensional image into a three-dimensional image, and recording medium therefor |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2013081281A1 true WO2013081281A1 (fr) | 2013-06-06 |
Family
ID=48535698
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2012/007372 Ceased WO2013081281A1 (fr) | 2011-11-29 | 2012-09-14 | Appareil et procédé de conversion d'image pour convertir une image bidimensionnelle en une image tridimensionnelle et support d'enregistrement correspondant |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2013081281A1 (fr) |
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20000041329A (ko) * | 1998-12-22 | 2000-07-15 | 박남은 | Method and apparatus for converting stereoscopic images |
| JP2005526477A (ja) * | 2002-05-17 | 2005-09-02 | ジェー. ヴィサヤセィル,ジョン | AC-DC converter having bidirectional thyristor valves |
| KR20090129175A (ko) * | 2008-06-12 | 2009-12-16 | 성영석 | Image conversion method and apparatus |
| JP2009290905A (ja) * | 2009-09-10 | 2009-12-10 | Panasonic Corp | Display device and display method |
| JP2011239398A (ja) * | 2010-05-03 | 2011-11-24 | Thomson Licensing | Method for displaying a settings menu and corresponding device |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP3769509B1 | Multi-endpoint mixed reality meetings | |
| WO2014115953A1 | Three-dimensional content production system and method therefor | |
| CN102662498B | Wireless control method and system for projection presentation | |
| US10078484B2 (en) | Multivision display control device and multivision system | |
| WO2019124726A1 | Method and system for providing mixed reality service | |
| US8441480B2 (en) | Information processing apparatus, information processing system, and computer readable medium | |
| EP2612220A1 | Method and apparatus for interfacing | |
| CN101821705A | Pointer control device | |
| CN111309203B | Method and device for acquiring positioning information of a mouse cursor | |
| CN115061679B | Offline RPA element picking method and system | |
| WO2016080596A1 | Method and system for providing a prototyping tool, and non-transitory computer-readable recording medium | |
| EP4191529A1 | Camera movement estimation method for an augmented reality tracking algorithm, and associated system | |
| EP4168879A1 | Remote assistance method and device | |
| CN105912101A | Projection control method and electronic device | |
| JP7173826B2 | Programmable logic controller system, program creation support device, and computer program | |
| WO2024025034A1 | Method for simultaneously creating 2D and 3D content, and associated converged creation device | |
| WO2018194340A1 | Method and device for providing content for a layered hologram | |
| WO2013081281A1 | Image conversion apparatus and method for converting a two-dimensional image into a three-dimensional image, and recording medium therefor | |
| WO2013081304A1 | Image conversion apparatus and method for converting a two-dimensional image into a three-dimensional image, and recording medium therefor | |
| CN114928718A | Video monitoring method and apparatus, electronic device, and storage medium | |
| US10921977B2 (en) | Information processing apparatus and information processing method | |
| JP7179633B2 | Measurement method, measurement device, and program | |
| US11287789B2 (en) | Program development support device, program development support system, program development support method, and non-transitory computer-readable recording medium | |
| US20230196725A1 (en) | Image annotation system and method | |
| CN113495162B | Control system for automatic optical inspection equipment | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12853740; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 12853740; Country of ref document: EP; Kind code of ref document: A1 |