WO2012165718A1 - Appareil de commande automatique de convergence utilisant le suivi oculaire et procédé correspondant - Google Patents
- Publication number
- WO2012165718A1 (PCT/KR2011/006805)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- convergence
- eye
- image
- camera module
- stereoscopic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
Definitions
- The present invention relates to an apparatus and method for automatically adjusting convergence by detecting eye movement when generating or playing a stereoscopic 3D image with a binocular camera, and more particularly to detecting eye movement with a camera module mounted on the front of the binocular camera.
- Specifically, the present invention relates to a technique for detecting, through the camera module, the region of the display panel at which the eye is looking, and automatically adjusting convergence based on the extracted region.
- In stereoscopic imaging, the object is photographed with a slight horizontal offset, as with the two human eyes; this difference is called parallax.
- Since the 3D image is captured like the human eyes and the stereoscopic impression is reconstructed in the brain, if the left and right images do not match, owing to the spacing of the left and right cameras and the magnitude of the parallax toward the object, problems such as eye fatigue and headaches arise.
- If the two images are perfectly matched, there is no parallax between them, so no stereoscopic effect is perceived.
- The extent to which the left and right images fail to overlap is inversely proportional to the viewing distance, and the parallax increases accordingly.
- The parallax between two images taken with a binocular camera thus increases as the object moves from a far distance to a near distance.
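The relationship above follows from standard stereo geometry. As an illustrative sketch (not part of the patent itself, with hypothetical camera parameters), disparity equals focal length times baseline divided by depth, so parallax shrinks as the object recedes:

```python
def disparity_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Horizontal disparity in pixels for an object at depth_m.
    Standard pinhole stereo model: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

# Hypothetical binocular camera: 1000 px focal length, 65 mm baseline.
near = disparity_px(1000.0, 0.065, 0.5)   # object 0.5 m away
far = disparity_px(1000.0, 0.065, 10.0)   # object 10 m away
assert near > far  # parallax grows as the object approaches
```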
- The left and right images are similar to each other, but because distant objects and nearby objects are mixed in the scene, the matching point differs depending on the state of the background and the subjects.
- Conventionally, the user had no choice but to adjust the convergence of stereoscopic 3D manually each time, by operating a UI or touching the panel.
- a technique for automatically adjusting convergence on a display panel using eye tracking is presented.
- With this technique, convergence is adjusted automatically just by looking at the region of interest, without a separate operation such as manipulating a UI or touching the panel, so that stereoscopic 3D images can be generated or reproduced with the convergence correctly set.
- According to an embodiment, an automatic convergence control apparatus using eye tracking includes a sensing camera module configured to detect the movement of an eye looking at the display panel of a device that generates or plays a stereoscopic 3D image, and a convergence control unit configured to automatically control convergence according to the detected eye movement.
- The apparatus may further include a gaze region extractor configured to extract the region on the display panel at which the eye gazes, according to the eye movement sensed by the sensing camera module.
- the convergence adjustment unit controls the convergence based on the gaze region extracted by the gaze region extraction unit.
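A minimal sketch of what such a convergence adjustment could look like, assuming the disparity at the gaze region has already been measured. The function name and the simple horizontal-shift model are illustrative assumptions, not taken from the patent text:

```python
import numpy as np

def adjust_convergence(left: np.ndarray, right: np.ndarray, gaze_disparity: int):
    """Shift the right-eye image horizontally so the disparity at the gazed
    region becomes zero, placing that region at the screen plane."""
    shifted = np.roll(right, -gaze_disparity, axis=1)
    # Blank the columns that np.roll wrapped around, instead of reusing them.
    if gaze_disparity > 0:
        shifted[:, -gaze_disparity:] = 0
    elif gaze_disparity < 0:
        shifted[:, :-gaze_disparity] = 0
    return left, shifted
```

In practice both images would typically be shifted toward each other by half the disparity; shifting one image keeps the sketch short.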
- the apparatus may further include a calibration unit configured to perform calibration according to the movement of the eye detected by the sensing camera module.
- the apparatus may further include a stereoscopic image generator.
- the two camera modules may be camera modules supporting different maximum resolutions.
- According to an embodiment, an automatic convergence control method using eye tracking includes a sensing step in which a sensing camera module detects the movement of a human eye.
- The method may further include a gaze region extraction step of extracting, according to the eye movement detected in the sensing step, the gaze region on the display panel, and a convergence adjustment step of controlling convergence based on the region extracted in the gaze region extraction step.
- the convergence adjustment step may control the convergence based on the gaze region extracted in the gaze region extraction step.
- the sensing step may include performing a calibration according to the movement of the eye detected by the sensing camera module.
- Eye tracking can thus be used to provide a technology that automatically adjusts convergence on the display panel for the user.
- Since convergence is adjusted automatically just by looking at the region of interest, without any separate action, the hassle of adjusting convergence is eliminated and the stereoscopic effect can be perceived optimally.
- FIG. 1 is a diagram illustrating an example of adjusting convergence in a conventional stereoscopic 3D imaging apparatus.
- FIG. 2 is a diagram illustrating an apparatus for automatically adjusting convergence using eye-tracking according to an embodiment.
- FIG. 3 is a block diagram of an apparatus for automatically adjusting convergence using eye-tracking according to an embodiment.
- FIG. 4 is a diagram illustrating an image recognized by the right and left eyes of a stereoscopic image.
- FIG. 5 is a diagram illustrating a display result of left and right eye images on a display panel.
- FIG. 6 is a diagram illustrating an image after convergence is automatically adjusted according to an embodiment.
- FIG. 7 is a diagram illustrating a calibration for accurately extracting a gaze area according to eye-tracking.
- FIG. 8 is a flowchart illustrating a method for automatically adjusting convergence using eye-tracking according to an embodiment.
- FIG. 1 is a diagram illustrating an example of adjusting convergence in a conventional stereoscopic 3D imaging apparatus.
- Conventionally, in order to match the convergence of stereoscopic 3D images, the user set the region to be converged through a user interface, or adjusted convergence by touching a specific region on the display panel of the apparatus.
- Since the user's gaze is not fixed on one object or region while a stereoscopic 3D image is played, but changes from moment to moment, the user had to operate the on-screen UI or touch the display panel each time to adjust convergence, which was painful and inconvenient.
- The apparatus 100 for automatically adjusting convergence using eye tracking may be a device that generates or plays stereoscopic 3D images, such as a binocular camera or a mobile device such as a smartphone.
- the convergence automatic adjustment device 100 is not limited thereto and includes all devices capable of generating or playing stereoscopic 3D images.
- FIG. 2 illustrates a smartphone in which a video-call camera module 110 is built into the front, and two camera modules 120 are built into the rear to capture images corresponding to the left and right eyes.
- the automatic convergence adjusting apparatus 100 using eye tracking includes a sensing camera module 110 and a convergence adjusting unit 150.
- the sensing camera module 110 detects eye movement looking at the display panel of the device in which a stereoscopic 3D image is generated or reproduced.
- In a smartphone or mobile phone, the front camera used for video calls can serve as the sensing camera module; a general binocular camera can be equipped with a separate sensing camera module on its front.
- FIG. 4 is a diagram illustrating the images recognized by the right and left eyes for a stereoscopic image. As shown in FIG. 4, the images seen through the two eyes differ, and from this difference the distance to the object is perceived, producing the stereoscopic effect. For the stereoscopic perception to be effective, convergence must be controlled according to which object is viewed as the center. If the point the user is looking at is not the convergence point, eye fatigue or headaches may result. It is therefore very useful to detect eye movement immediately and automatically adjust convergence to be centered on the point where the eye is looking.
- FIG. 5 is a diagram illustrating a display result of left and right eye images on a display panel.
- FIG. 5 shows the display results for the case where convergence is not matched, the case where convergence is adjusted centered on object A, and the case where it is adjusted centered on object B.
- FIG. 6 is a diagram illustrating an image after convergence is automatically adjusted according to an embodiment.
- The left side of FIG. 6 shows a case where the gaze area extraction unit 140 extracts object A as the gaze area according to eye movement information detected through the front camera (corresponding to the sensing camera module 110).
- The convergence adjusting unit 150 then adjusts convergence by horizontally shifting the images corresponding to the left and right eyes, centered on object A.
- The right side of FIG. 6 illustrates a case in which the gaze area extraction unit 140 extracts object B as the gaze area according to eye movement information detected through the front camera (corresponding to the sensing camera module 110).
- The convergence adjusting unit 150 adjusts convergence by horizontally shifting the images corresponding to the left and right eyes, centered on object B.
- the automatic convergence adjusting device 100 may further include a gaze extraction unit 140.
- the gaze extraction unit 140 extracts an area on the display panel that the eye is looking at according to the movement of the eye detected by the sensing camera module 110.
- Any conventionally known method may be used to extract the spot at which the pupil gazes from the eye image captured by the sensing camera.
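As one stand-in for those "conventionally known methods," a deliberately simple pupil-center estimate (centroid of the darkest pixels in a grayscale eye image) might look like this; real eye trackers typically also use corneal reflections and model fitting, so treat this purely as a sketch:

```python
import numpy as np

def pupil_center(gray: np.ndarray, threshold: int = 40):
    """Estimate the pupil center as the centroid of pixels darker than a
    threshold. Returns (x, y) in image coordinates, or None if no dark
    region is found."""
    ys, xs = np.nonzero(gray < threshold)
    if len(xs) == 0:
        return None  # no pupil candidate in view
    return float(xs.mean()), float(ys.mean())
```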
- the convergence adjuster 150 controls the convergence based on the gaze region extracted by the gaze region extractor 140.
- the convergence automatic adjustment device 100 may further include a calibration unit 130.
- The calibration unit 130 performs calibration according to the eye movement detected by the sensing camera module. Calibration may be necessary so that the gaze area extraction unit 140 can more accurately extract the gaze area, that is, the specific position on the display panel of the device 100 at which the eye is gazing, from the eye movement detected by the sensing camera module 110.
- FIG. 7 illustrates a calibration for accurately extracting a gaze area according to eye-tracking.
- To perform calibration, the calibration unit 130 outputs cursors on the screen, in order from the bottom right to the bottom left, top left, and top right. However, the order in which the cursors are output and the points at which they are output may be determined arbitrarily, and are not limited thereto.
- the sensing camera module 110 detects (shoots) the movement of the eye tracking the point where the cursor is output.
- the calibration unit 130 sets coordinates for each eye position by converting the position of the eye detected (photographed) by the sensing camera module 110 into coordinates.
- The gaze area extractor 140 detects the eye movement and the eye position looking at a given position on the display panel where the image is reproduced, and, according to the coordinates set by the calibration unit 130, can accurately extract the point on the display panel at which the eye gazes.
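One plausible realization of the coordinate mapping the calibration unit sets up is a least-squares affine fit from pupil coordinates, measured while the user follows the four calibration cursors, to panel coordinates. The function names and four-point scheme here are assumptions for illustration only:

```python
import numpy as np

def fit_affine(eye_pts, screen_pts):
    """Least-squares affine map from pupil coordinates to panel coordinates,
    fitted from the calibration cursors (e.g. the four display corners).
    Returns a 3x2 coefficient matrix."""
    A = np.hstack([np.asarray(eye_pts, dtype=float),
                   np.ones((len(eye_pts), 1))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_pts, dtype=float),
                                 rcond=None)
    return coeffs

def gaze_point(coeffs, eye_xy):
    """Map one measured pupil position to a point on the display panel."""
    x, y = eye_xy
    return np.array([x, y, 1.0]) @ coeffs
```

An affine fit handles translation, scale, and shear; eye trackers needing higher accuracy often use a higher-order polynomial with more calibration points.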
- the automatic convergence control device 100 may further include a stereoscopic image generation unit 160.
- The stereoscopic image generator 160 receives the images corresponding to the left and right eyes from the two camera modules 120, and creates a stereoscopic image by applying the result of the convergence control processing by the convergence control unit to the received left-eye and right-eye images.
- The two camera modules 120 may support the same maximum resolution or different maximum resolutions. If both are identical modules supporting the same resolution, a high-resolution camera module is typically used for both, which makes the binocular camera costly to manufacture. The cost can therefore be reduced by using camera modules that support different maximum resolutions, that is, one relatively low-cost, low-resolution module and one relatively high-resolution module.
- For example, the first camera module may support a relatively high resolution of 8M,
- while the second camera module may support a relatively low resolution of 3M.
- the image captured by the first camera module may have a wider angle of view than the image photographed by the second camera module, thereby capturing a wider range.
- the present invention is not limited thereto, and the two images may have the same angle of view or have an arbitrary angle of view by adjusting or designing an optical system.
- The output resolutions of the two modules may be the same or different.
- That is, although the maximum resolutions of the high-resolution and low-resolution camera modules differ, the resolutions of the output images can be matched so that images of the same resolution are output.
- When capturing, the images may be output such that the size of the object is made approximately the same within the maximum resolution supported by each camera module.
- The stereoscopic image generator 160 may include an image cutting unit (not shown) and a scaling unit (not shown) for generating a stereoscopic 3D image.
- The image cutting unit receives two images, corresponding to the left-eye and right-eye images, captured respectively by the two camera modules 120 supporting different resolutions.
- The image cutting unit (not shown) according to an exemplary embodiment crops the image captured by the higher-resolution camera module according to the angle of view of the image captured by the lower-resolution camera module.
- The scaling unit may match the resolutions of the two images whose angles of view were adjusted by the image cutting unit (not shown), and output them to the convergence control unit 150.
- As an example, suppose the output resolutions of the high-resolution and low-resolution camera modules are set to be the same, each captures an image, and the captured images are input. That is, the two cameras with different angles of view, the 8M high-resolution module and the 3M low-resolution module, capture the target object for the stereoscopic 3D image at the same output resolution, and the captured images are input to the image cutting unit (not shown).
- In this case, the target objects in the image input from the wide-angle high-resolution camera module appear smaller than the same target objects in the image captured by the low-resolution camera module.
- The image cutting unit (not shown) first crops, from the image input from the wide-angle high-resolution camera module, the portion whose angle of view coincides with the image input from the low-resolution camera module.
- the convergence adjusting unit 150 generates a stereoscopic 3D image by adjusting the convergence of both images having the same angle of view.
- The scaling unit (not shown) scales the cropped portion of the high-resolution image according to the angle of view of the low-resolution image, matching the resolutions of both images so that their sizes coincide.
- As another example, suppose the 8M high-resolution and 3M low-resolution camera modules capture images so that the sizes of the target object for the stereoscopic 3D image roughly match; then the image captured by the wide-angle high-resolution module is larger than the image captured by the low-resolution module.
- The image cutting unit crops, from the image captured by the high-resolution camera module, the portion that matches the angle of view of the image captured by the low-resolution camera module.
- The scaling unit (not shown) first determines whether the cropped high-resolution image needs scaling. Because the images captured through the high-resolution and low-resolution camera modules were captured so that the sizes of the target object roughly match, the size of the cropped image will be the same as that of the image input from the low-resolution camera module.
- Since the cropped high-resolution image and the low-resolution image then have the same size, the objects do not need to be scaled.
- the scaling unit (not shown) performs scaling to match the resolution of both images.
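The crop-then-scale pipeline described for the image cutting unit and scaling unit could be sketched as follows, assuming the crop box that aligns the two fields of view is already known from optical calibration (all names and the nearest-neighbour resize are illustrative assumptions):

```python
import numpy as np

def match_views(high: np.ndarray, low: np.ndarray, crop_box):
    """Crop the wide-angle high-resolution frame to the field of view of the
    low-resolution frame, then resample it so both views share one resolution.
    crop_box = (top, left, height, width) in high-res pixels, assumed to come
    from calibration of the two modules."""
    t, l, h, w = crop_box
    cropped = high[t:t + h, l:l + w]          # image cutting unit
    H, W = low.shape[:2]
    # Nearest-neighbour resize (stand-in for a real scaler): map each output
    # pixel index back to an input pixel index.
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return cropped[rows][:, cols], low        # scaling unit output
```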
- The convergence control unit 150 performs convergence control on the two images whose angle of view and resolution have been matched, according to an input indication value.
- the stereoscopic image generation unit 160 outputs an image corresponding to the result of the convergence control process.
- the automatic convergence adjustment device 100 may further include a stereoscopic image playback unit 170.
- The stereoscopic image reproducing unit 170 reproduces a previously generated stereoscopic 3D image, and if convergence is automatically adjusted by the convergence adjusting unit while the stereoscopic 3D image is playing, it adjusts the output of the stereoscopic 3D image to match the processing result.
- FIG. 8 is a flowchart illustrating a method for automatically adjusting convergence using eye tracking according to an embodiment.
- the sensing camera module detects the movement of a human eye (step S100).
- The detecting step (S100) may include performing calibration according to the eye movement detected by the sensing camera module. That is, calibration may be necessary so that the gaze area extraction unit 140 can more accurately extract the gaze area, i.e., the specific position on the display panel of the device 100 at which the eye is gazing, from the eye movement detected by the sensing camera module 110.
- the convergence automatic adjustment device 100 may further include a calibration unit 130. The calibration unit 130 performs calibration according to the movement of the eyes detected by the sensing camera module.
- FIG. 7 illustrates a calibration for accurately extracting a gaze area according to eye-tracking.
- To perform calibration, the calibration unit 130 outputs cursors on the screen, in order from the bottom right to the bottom left, top left, and top right. However, the order in which the cursors are output and the points at which they are output may be determined arbitrarily, and are not limited thereto.
- the sensing camera module 110 detects (shoots) the movement of the eye tracking the point where the cursor is output.
- the calibration unit 130 sets coordinates for each eye position by converting the position of the eye detected (photographed) by the sensing camera module 110 into coordinates.
- In step S100, the image being generated by the stereoscopic image generator 160 or reproduced by the stereoscopic image reproducing unit 170 is output.
- the stereoscopic image generation unit 160 receives images corresponding to the left and right eyes from the two camera modules 120 and outputs the input images to the display.
- The two camera modules 120 may be identical modules supporting the same resolution, but in that case both use the same high-resolution camera module, which raises the cost of producing the binocular camera. To solve this, when manufacturing the binocular camera, one of the two camera modules can be a relatively low-cost, low-resolution module.
- For example, the first of the two camera modules may support a relatively high resolution of 8M,
- while the second may be a relatively low-cost module supporting a resolution of 3M.
- the image captured by the first camera module may have a wider angle of view than the image photographed by the second camera module, thereby capturing a wider range.
- The output resolutions may be the same or different. That is, although the maximum resolutions of the high-resolution and low-resolution camera modules differ, the resolutions of the output images can be matched so that images of the same resolution are output.
- When capturing, the images may be output such that the object size is made approximately the same within the range of the maximum resolution supported by each camera module.
- The image cutting unit (not shown) of the stereoscopic image generator 160 receives two images, corresponding to the left-eye and right-eye images, captured respectively by the two camera modules 120 supporting different resolutions.
- The image cutting unit (not shown) crops the image captured by the higher-resolution camera module according to the angle of view of the image captured by the lower-resolution camera module.
- The scaling unit (not shown) of the stereoscopic image generating unit 160 matches the resolutions of the two images whose angles of view were adjusted by the image cutting unit (not shown), and outputs them.
- As an example, suppose the output resolutions of the high-resolution and low-resolution camera modules are set to be the same, each captures an image, and the captured images are input. That is, the two cameras with different angles of view, the 8M high-resolution module and the 3M low-resolution module, capture the target object for the stereoscopic 3D image at the same output resolution, and the captured images are input to the image cutting unit (not shown).
- In this case, the target objects in the image input from the wide-angle high-resolution camera module appear smaller than the same target objects in the image captured by the low-resolution camera module.
- The image cutting unit (not shown) first crops, from the image input from the wide-angle high-resolution camera module, the portion whose angle of view coincides with the image input from the low-resolution camera module.
- The scaling unit (not shown) scales the cropped portion of the high-resolution image according to the angle of view of the low-resolution image, matching the resolutions of both images so that their sizes coincide, and outputs them to the display.
- As another example, suppose the 8M high-resolution and 3M low-resolution camera modules capture images so that the sizes of the target object for the stereoscopic 3D image roughly match; then the image captured by the wide-angle high-resolution module is larger than the image captured by the low-resolution module.
- The image cutting unit crops, from the image captured by the high-resolution camera module, the portion that matches the angle of view of the image captured by the low-resolution camera module.
- The scaling unit (not shown) first determines whether the cropped high-resolution image needs scaling. Because the images captured through the high-resolution and low-resolution camera modules were captured so that the sizes of the target object roughly match, the size of the cropped image will be the same as that of the image input from the low-resolution camera module.
- Since the cropped high-resolution image and the low-resolution image then have the same size, the objects do not need to be scaled.
- the scaling unit (not shown) performs scaling to match the resolution of both images.
- the stereoscopic image reproducing unit 170 outputs and reproduces the stereoscopic 3D image previously generated on the display.
- In the sensing step, the sensing camera module 110 detects the movement of the eye looking at the display panel of the device 100 on which the stereoscopic 3D image is being generated or reproduced, and transmits the captured image to the gaze extraction unit.
- Next, the gaze area on the display panel at which the eye is looking is extracted (step S200).
- The gaze extraction unit 140 analyzes the image sensed and transmitted by the sensing camera module 110 and extracts the area on the display panel at which the eye gazes as the eye moves.
- the detection camera module 110 detects the eye movement and detects the eye position looking at a predetermined position on the display panel where the image is reproduced.
- The gaze area extraction unit 140 may accurately extract the area on the display panel at which the eye gazes, according to the coordinates set by the calibration unit 130.
- Next, convergence is adjusted based on the region extracted in the gaze region extraction step (S200) (step S300).
- the convergence adjustment step may adjust the convergence by horizontally or vertically moving the image corresponding to the left and right eyes based on the gaze region extracted in the gaze region extraction step.
- the convergence is adjusted by horizontally or vertically moving both images scaled and output by the scaling unit (not shown).
- For example, when the gaze area extraction unit 140 extracts object A as the gaze area according to eye movement information detected through the front camera (corresponding to the sensing camera module 110), the convergence adjusting unit 150 adjusts convergence by horizontally or vertically moving the images corresponding to the left and right eyes, centered on object A.
- Likewise, FIG. 6 illustrates a case in which the gaze extraction unit 140 extracts object B as the gaze area according to eye movement information detected through the front camera (corresponding to the sensing camera module 110), and the convergence adjusting unit 150 adjusts convergence by horizontally or vertically moving the images corresponding to the left and right eyes, centered on object B.
- Finally, the stereoscopic 3D image whose convergence was adjusted in the convergence adjustment step (S300) is output (step S400).
- the stereoscopic image generating unit 160 or the stereoscopic image reproducing unit 170 generates or reproduces the final stereoscopic 3D image in which the convergence is adjusted.
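Putting steps S100 through S400 together, one iteration of the method might be wired up as below. The component interfaces are hypothetical stand-ins for the units described above, not an implementation defined by the patent:

```python
def auto_convergence_loop(sensor, extractor, adjuster, renderer):
    """One iteration of the S100-S400 flow, with hypothetical interfaces:
    sense eye movement, extract the gaze region, adjust convergence, and
    output the adjusted stereoscopic frame."""
    eye_img = sensor.capture()                 # S100: sensing camera module
    region = extractor.gaze_region(eye_img)    # S200: gaze region on the panel
    left, right = adjuster.adjust(region)      # S300: convergence adjustment
    return renderer.output(left, right)        # S400: output adjusted 3D image
```

In a device, this loop would run continuously while the image is generated or reproduced, so convergence tracks the user's gaze without any manual operation.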
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Human Computer Interaction (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
The invention relates to an apparatus and method that, when a stereoscopic 3D video is generated or reproduced using a binocular camera, extract the region on a display panel at which the eyes are fixed by means of a camera module mounted on the front of the binocular camera that detects eye movement, and automatically control convergence based on the extracted region. When users generate or reproduce a stereoscopic 3D video, convergence toward the region used to align convergence can be controlled automatically, without a separate operation and merely by looking, which removes the inconvenience of adjusting convergence and allows optimal perception of the three-dimensional structure.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2011-0052501 | 2011-05-31 | ||
| KR1020110052501A KR101165764B1 (ko) | 2011-05-31 | 2011-05-31 | Apparatus and method for automatic convergence control using eye-tracking |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2012165718A1 true WO2012165718A1 (fr) | 2012-12-06 |
Family
ID=46716847
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2011/006805 Ceased WO2012165718A1 (fr) | 2011-05-31 | 2011-09-15 | Appareil de commande automatique de convergence utilisant le suivi oculaire et procédé correspondant |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR101165764B1 (fr) |
| WO (1) | WO2012165718A1 (fr) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102093198B1 (ko) * | 2013-02-21 | 2020-03-25 | Samsung Electronics Co., Ltd. | Method and apparatus for user interface using gaze recognition |
| US10477154B2 (en) * | 2013-03-07 | 2019-11-12 | Cognex Corporation | System and method for aligning two work pieces with a vision system in the presence of occlusion |
| US10032273B2 (en) * | 2013-03-15 | 2018-07-24 | Cognex Corporation | Machine vision system calibration using inaccurate calibration targets |
| KR102333267B1 | 2014-12-10 | 2021-12-01 | Samsung Electronics Co., Ltd. | Apparatus and method for predicting eye position |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR100726933B1 (ko) * | 2006-02-15 | 2007-06-14 | 주식회사 이시티 | Image signal processing method for automatic vergence angle control of a fixed binocular camera |
| KR100820639B1 (ko) * | 2006-07-25 | 2008-04-10 | Korea Institute of Science and Technology | Gaze-based three-dimensional interaction system and method, and three-dimensional gaze tracking system and method |
| KR100906784B1 (ko) * | 2007-11-15 | 2009-07-09 | Redrover Co., Ltd. | Plug-in module for stereoscopic image production programs and stereoscopic image production method |
| KR20110025020A (ko) * | 2009-09-03 | 2011-03-09 | Electronics and Telecommunications Research Institute | Stereoscopic image display apparatus and method in a stereoscopic image system |
2011
- 2011-05-31: KR application KR1020110052501A granted as patent KR101165764B1 (ko); status: not active, Expired - Fee Related
- 2011-09-15: WO application PCT/KR2011/006805 published as WO2012165718A1 (fr); status: not active, Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| KR101165764B1 (ko) | 2012-07-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2013129792A1 (fr) | | Method and portable terminal for correcting the gaze direction of a user in an image |
| CN102954836B (zh) | | Ambient light sensor, user device, and display device |
| WO2015120673A1 (fr) | | Photographing method, system, and device for controlling focus during photographing by means of eyeball tracking technology |
| WO2016099052A1 (fr) | | Three-dimensional guidance device for informing a visually impaired person of obstacles, guidance system for providing environmental information using same, and method therefor |
| JP2012142922A (ja) | | Imaging device, display device, computer program, and stereoscopic image display system |
| WO2016200102A1 (fr) | | Method and device for changing the focal point of a camera |
| WO2013100239A1 (fr) | | Image processing method in a stereoscopic vision system and apparatus therefor |
| WO2019158808A3 (fr) | | Imaging system and method for producing images using cameras and a processor |
| WO2013039347A9 (fr) | | Image processing apparatus and image processing method therefor |
| WO2019139404A1 (fr) | | Electronic device and image processing method therefor |
| WO2015034130A1 (fr) | | Telepresence device |
| WO2017196026A2 (fr) | | Method for adjusting a captured image of a PTZ camera, and apparatus therefor |
| WO2012165718A1 (fr) | | Apparatus for automatic convergence control using eye tracking and method therefor |
| WO2017142223A1 (fr) | | Remote image transmission system, display apparatus, and guide display method thereof |
| EP3132305A1 | | Glasses-free 3D display mobile device, method for adjusting same, and method of using same |
| WO2014088125A1 (fr) | | Image photographing device and method therefor |
| WO2012046964A2 (fr) | | Stereoscopic image display device for displaying a stereoscopic image by tracking the focused position |
| WO2015194749A1 (fr) | | Glasses-free 3D display mobile device, method for adjusting same, and method of using same |
| WO2020095639A1 (fr) | | Information processing device, information processing method, and program |
| US11706378B2 (en) | | Electronic device and method of controlling electronic device |
| WO2016047824A1 (fr) | | Image information projection device and projection device control method |
| TW201138458A (en) | | Adjusting system and method for displaying screen, and billboard including the same |
| WO2012165717A1 (fr) | | Apparatus for generating a stereoscopic three-dimensional (3D) image using an asymmetric dual-camera module and method therefor |
| WO2016006731A1 (fr) | | Portable device for controlling a photographing mode, and control method therefor |
| WO2021221341A1 (fr) | | Augmented reality device and control method therefor |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11866450; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 11866450; Country of ref document: EP; Kind code of ref document: A1 |