
WO2015030482A1 - Input device for wearable display - Google Patents


Info

Publication number
WO2015030482A1
WO2015030482A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
wearable display
movement
input device
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2014/007969
Other languages
French (fr)
Korean (ko)
Inventor
이길재
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Macron Co Ltd
Original Assignee
Macron Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Macron Co Ltd filed Critical Macron Co Ltd
Publication of WO2015030482A1

Classifications

    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 1/163: Wearable computers, e.g. on a belt
    • G06F 3/005: Input arrangements through a video camera
    • G06F 3/0304: Detection arrangements using opto-electronic means
    • G06F 3/04812: Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 2207/30244: Camera pose

Definitions

  • the present invention relates to an input device for a wearable display, and more particularly, to an apparatus for manipulating a wearable display worn on a human body, such as Google Glass recently released by Google.
  • in particular, mobile displays have evolved beyond smart display devices, which add small-computer functions to a conventional display, into wearable displays that maximize portability.
  • among wearable displays, the glasses-type display (Google Glass) recently released by Google can simply be worn like glasses, so it need not be carried like a conventional smart device.
  • a head-mounted display (HMD) is worn like a pair of glasses, so a large display device such as a TV is not required.
  • a watch-type display is small enough to be worn on the wrist, so the portable display device can be used while both hands remain free.
  • the glasses-type display sits close to the eyes and is efficient in many respects, such as portability, but because a touch screen cannot be used as the input device, as it is on a conventional smart display, a new type of input device is required.
  • a command signal may be input through a voice recognition method, as in Korean Patent Laid-Open No. 10-2001-0012024.
  • a voice recognition method, however, is prone to malfunction in noisy places and may disturb the hearing of others where many people gather, so its use is limited.
  • furthermore, it is difficult to input command signals such as mouse movement or dragging by voice recognition.
  • Korean Patent Publication No. 10-2012-0047746 proposes a virtual mouse driving method that drives a mouse without directly touching the touch screen.
  • the virtual mouse driving method, however, drives commands by interpreting hand movements; in the case of the glasses-type display, moving the hand in front of the camera to input a command can obscure the forward view, so this method is difficult to apply to a display through which the user must look ahead.
  • moreover, a motion recognition method in which commands are input by moving the hand works only while the hand is inside the camera image, so the user must keep checking that the hand remains within the image area and take care not to move it out.
  • an object of the present invention is to provide an input device for a wearable display that can be applied to a wearable display device, such as Google Glass.
  • the input device for a wearable display is for controlling a wearable display device and comprises: a camera, mounted on the wearable display device, that continuously captures a plurality of images; an image matching unit that analyzes the plurality of images captured by the camera to acquire motion information of the camera; and a controller that generates a control signal corresponding to the motion information acquired by the image matching unit.
  • the controller preferably converts the motion information into a coordinate movement amount of the cursor on the display.
  • the image matching unit preferably divides each captured image into a predetermined number of blocks, obtains the movement amounts of corresponding blocks in two consecutive images by an image matching method, and calculates the displacement between the two consecutive images from statistics of the obtained movement amounts.
  • the controller preferably uses motion information indicating that the camera has moved periodically up-down or left-right N or more times within a reference time as the on/off signal of the cursor.
  • the controller may likewise use such periodic motion information, occurring N or more times within a reference time, as a click signal of the cursor.
  • the wearable display device can be easily and accurately operated.
  • FIG. 1 is a schematic configuration diagram of an input device for a wearable display according to an exemplary embodiment of the present invention.
  • FIG. 2 is a view showing how the images captured by the camera change when the part of the body on which the wearable display device is mounted is moved.
  • FIG. 3 is a diagram illustrating an example of dividing a captured image into blocks.
  • FIG. 4 is a diagram illustrating movement of blocks in a captured image when the camera is moved downward.
  • FIG. 5 is a diagram illustrating movement of blocks in a captured image when the camera is moved leftward.
  • referring to FIGS. 1 to 5, the input device for a wearable display is for controlling a wearable display device such as Google Glass and includes a camera 10, an image matching unit 20, and a controller 30.
  • the camera 10 is for photographing a plurality of images continuously, and a camera provided in the wearable display device may be utilized.
  • for example, a glasses-type wearable display device such as Google Glass may have a camera mounted on the frame facing forward, and a watch-type wearable display device may have a camera installed on one side of the watch.
  • hereinafter, for convenience, the description is based on a glasses-type wearable display device such as Google Glass.
  • the image matching unit 20 receives a plurality of images captured by the camera 10 and analyzes the images to obtain motion information of the camera.
  • the image captured by the camera of a glasses-type wearable display device is determined by the user's gaze (more precisely, by the direction the user's face points). Assuming the camera sits on the head parallel to the ground, shaking the head offsets the captured image in the direction of the shake, as shown in FIG. 2: shaking the head from side to side offsets the image horizontally, and shaking it up and down offsets the image vertically. Because the offset between consecutively captured images is small, the images can be matched to one another.
  • the image matching unit 20 divides the captured image into a predetermined number of blocks 1, as shown in FIG. 3, to obtain the offset between successive images, which indicates the degree of camera movement (i.e., the motion information).
  • the movement amounts of corresponding blocks in two consecutive images are obtained by an image matching method such as template matching. For example, if each image is divided into six blocks A through F, the movement between block A in the Nth image and block A in the (N+1)th image is obtained by image matching, and likewise for blocks B through F. The displacement between the two images (i.e., the motion value) can then be calculated from statistics of the block movement amounts.
  • as the statistic, the average of the block movement amounts may be used, or the most frequent movement amount may be taken as the inter-image displacement.
  • the controller 30 generates a control signal for manipulating the wearable display device according to the motion information of the camera obtained by the image matching unit.
  • as shown in FIG. 4, when the camera captures images while moving downward, the blocks move upward (compare FIG. 4(A), taken before the camera moves, with FIG. 4(B), taken after), so the motion vector (the motion information) points upward.
  • likewise, as shown in FIG. 5, when the camera is moved to the left, the motion vector points to the right.
  • it is more intuitive for the cursor on the display screen to move in the direction the camera moved; therefore the x- and y-axis display coordinates need to be calibrated opposite to the direction of the obtained motion value.
  • in this way, the cursor on the display may be moved according to the movement of the camera (i.e., the movement of the user's head).
  • if the coordinate transform is oriented so that the cursor coordinates move up, down, left, and right in step with the corresponding head movements, intuitive operation is possible.
  • by tuning the sensitivity coefficient, the ratio of the display-coordinate change to the inter-image motion value, the cursor can traverse the whole screen without large head movements; for finer control, the square of the motion value may be used instead of a linear multiple.
  • fundamentally, by deriving the display-coordinate change from the motion value obtained by matching consecutive images, the on-screen cursor can be moved and the desired menu selected.
  • a click for selecting a menu can be recognized when the cursor, after being moved by the head, stays still for a certain time.
  • a nodding action matches the intuitive notion of clicking, and the click can be recognized by interpreting motion values that go down and come back up within a set time.
  • after the click gesture is recognized, the moment of the click can be applied retroactively by finding, from the recorded history, the point where the gesture began.
  • the cursor can be turned on, off, moved and clicked to select a menu.
  • an operation of shaking the head up-down or left-right may also be used as a click signal.
  • the wristwatch-type wearable device may be driven similarly to the glasses type.
  • the up-down movement described for the glasses-type display corresponds to moving the watch toward the 12 o'clock direction, and the left-right movement toward the 3 o'clock direction.
  • a similar function may be implemented using a gyro sensor mounted on the wearable display.
  • with a gyro sensor, the change in angle about each axis can be obtained.
  • the angle changes produced by moving the head up-down or left-right can substitute for the inter-image offsets described above, so operation similar to the camera-based case is possible.
  • the wearable display device can be easily and accurately operated by moving a part of the body on which the wearable display device is mounted.
  • such a method is not disturbed by surrounding noise and, unlike the virtual mouse driving method, does not require the user to move a hand, so the wearable display device can be operated very efficiently.
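The three-part structure described in these passages (camera, image matching unit, controller) can be sketched as follows. This is an illustrative Python/NumPy mock-up, not code from the patent: the frame source is simulated, the matching unit uses a brute-force global shift search in place of block-wise template matching, and all names and parameters are invented for the sketch.

```python
import numpy as np

# Hypothetical stand-in for the head-mounted camera: yields frames of a
# random scene that drifts left in the image as the camera pans right.
def capture_frames(n, shape=(120, 160), seed=0):
    rng = np.random.default_rng(seed)
    base = rng.random(shape)
    for k in range(n):
        yield np.roll(base, -2 * k, axis=1)

class ImageMatchingUnit:
    """Recovers the inter-frame image shift (the camera motion information)
    by a brute-force global shift search over a small window."""
    def __init__(self, search=4):
        self.search = search
        self.prev = None

    def update(self, frame):
        if self.prev is None:
            self.prev = frame
            return np.zeros(2)
        best, best_err = np.zeros(2), np.inf
        for dy in range(-self.search, self.search + 1):
            for dx in range(-self.search, self.search + 1):
                # Shift the previous frame and compare; the best shift is
                # the motion of the scene content between the two frames.
                err = np.abs(np.roll(self.prev, (dy, dx), (0, 1)) - frame).mean()
                if err < best_err:
                    best, best_err = np.array([dx, dy], float), err
        self.prev = frame
        return best

class Controller:
    """Turns motion information into a cursor control signal: the cursor
    moves opposite to the measured image motion, i.e. with the head."""
    def __init__(self, sensitivity=5.0):
        self.sensitivity = sensitivity
        self.cursor = np.zeros(2)

    def apply(self, motion):
        self.cursor = self.cursor - self.sensitivity * motion
        return tuple(self.cursor)
```

Panning the simulated camera right makes the image content move left, so the controller drives the cursor right, in the same direction as the head.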

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to an input device for a wearable display which can be applied to a wearable display device such as Google Glass. The input device according to the present invention is for controlling a wearable display device and comprises: a camera, mounted on the wearable display device, for continuously capturing multiple images; an image matching unit for analyzing the images captured by the camera and obtaining motion information of the camera; and a control unit for generating a control signal corresponding to the motion information obtained by the image matching unit.

Description

Input device for wearable display

The present invention relates to an input device for a wearable display and, more particularly, to an apparatus for operating a wearable display worn on the body, such as Google Glass, recently released by Google.

Recently, new types of display devices have emerged in great variety. In particular, mobile displays have evolved beyond smart display devices, which add small-computer functions to a conventional display, into wearable displays that maximize portability.

For example, the glasses-type display (Google Glass) recently released by Google can simply be worn like glasses, so it need not be carried like a conventional smart device. That is, a head-mounted display (HMD) is worn and viewed like glasses, so a large display device such as a TV is unnecessary. A watch-type display, likewise, is small enough to be worn on the wrist, so a portable display device can be used while both hands remain free.

Meanwhile, a glasses-type display sits close to the eyes and is efficient in several respects, including portability, but because a touch screen cannot be used as the input device, as it is on a conventional smart display, a new type of input device is needed.

For example, as disclosed in Korean Patent Laid-Open No. 10-2001-0012024, a command signal can be input by voice recognition. However, voice recognition is prone to malfunction in noisy places and may disturb the hearing of others where many people gather, so its use is limited. Furthermore, it is difficult to input command signals such as mouse movement or dragging by voice.

Meanwhile, Korean Patent Laid-Open No. 10-2012-0047746 proposes a virtual mouse driving method that drives a mouse without directly touching the touch screen. However, that method drives commands by interpreting hand movements. For a glasses-type display, moving the hand in front of the camera to input a command can obscure the forward view, so the method is difficult to apply to a display through which the user must look ahead. Moreover, hand-gesture input works only while the hand is inside the camera image, so the user must keep checking that the hand stays within the image area and take care not to move it out.

Therefore, the development of a new type of input device applicable to wearable displays is required.

The present invention was conceived to solve the above problems, and an object of the present invention is to provide an input device for a wearable display that can be applied to a wearable display device such as Google Glass.

The input device for a wearable display according to the present invention is for controlling a wearable display device and comprises: a camera, mounted on the wearable display device, that continuously captures a plurality of images; an image matching unit that analyzes the plurality of images captured by the camera to acquire motion information of the camera; and a controller that generates a control signal corresponding to the motion information acquired by the image matching unit.

According to the present invention, the controller preferably converts the motion information into a coordinate movement amount of the cursor on the display.

Also according to the present invention, the image matching unit preferably divides each captured image into a predetermined number of blocks, obtains the movement amounts of corresponding blocks in two consecutive images by an image matching method, and calculates the displacement between the two consecutive images from statistics of the obtained movement amounts.

Also according to the present invention, the controller preferably uses motion information indicating that the camera has moved periodically up-down or left-right N or more times within a reference time as the on/off signal of the cursor.

Also according to the present invention, the controller preferably uses motion information indicating that the camera has moved periodically up-down or left-right N or more times within a reference time as a click signal of the cursor.

According to the present invention, a wearable display device can be operated easily and accurately.

FIG. 1 is a schematic configuration diagram of an input device for a wearable display according to an exemplary embodiment of the present invention.

FIG. 2 is a view showing how the images captured by the camera change when the part of the body on which the wearable display device is mounted is moved.

FIG. 3 is a view showing an example of dividing a captured image into blocks.

FIG. 4 is a view showing how blocks in the captured image move when the camera is moved downward.

FIG. 5 is a view showing how blocks in the captured image move when the camera is moved leftward.

Hereinafter, an input device for a wearable display according to a preferred embodiment of the present invention will be described with reference to the accompanying drawings.

FIG. 1 is a schematic configuration diagram of an input device for a wearable display according to an exemplary embodiment of the present invention.

FIG. 2 is a view showing how the images captured by the camera change when the part of the body on which the wearable display device is mounted is moved; FIG. 3 is a view showing an example of dividing a captured image into blocks; FIG. 4 is a view showing how blocks in the captured image move when the camera is moved downward; and FIG. 5 is a view showing how blocks in the captured image move when the camera is moved leftward.

Referring to FIGS. 1 to 5, the input device for a wearable display according to this embodiment is for controlling a wearable display device such as Google Glass and includes a camera 10, an image matching unit 20, and a controller 30.

The camera 10 continuously captures a plurality of images; a camera already provided in the wearable display device may be used. For example, a glasses-type wearable display device such as Google Glass may have a camera mounted on the frame facing forward, and a watch-type wearable display device may have a camera installed on one side of the watch. Hereinafter, for convenience of description, a glasses-type wearable display device such as Google Glass is assumed.

The image matching unit 20 receives the plurality of images captured by the camera 10 and analyzes them to acquire motion information of the camera.

Specifically, the image captured by the camera of a glasses-type wearable display device is determined by the user's gaze (more precisely, by the direction the user's face points). Assuming the camera sits on the head parallel to the ground, shaking the head offsets the captured image in the direction of the shake, as shown in FIG. 2: shaking the head from side to side offsets the image horizontally, and shaking it up and down offsets the image vertically. Because the offset between consecutively captured images is small, the images can be matched to one another.

To obtain the offset between successive images, which indicates the degree of camera movement (i.e., the motion information), the image matching unit 20 divides each captured image into a predetermined number of blocks 1, as shown in FIG. 3. The movement amounts of corresponding blocks in two consecutive images are obtained by an image matching method such as template matching. For example, if each image is divided into six blocks A through F, the movement between block A in the Nth image and block A in the (N+1)th image is obtained by image matching, and likewise for blocks B through F. The displacement between the two images (i.e., the motion value) can then be calculated from statistics of the block movement amounts: the average of the block movements may be used, or the most frequent movement may be taken as the inter-image displacement.
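The block-matching step above (split each frame into six blocks, template-match each block between consecutive frames, then take a statistic over the block vectors) can be sketched in Python/NumPy. The grid size, search radius, and sum-of-squared-differences cost are illustrative assumptions; the statistic used here is the most frequent vector, one of the two options the text allows.

```python
import numpy as np
from collections import Counter

def block_motion(img_a, img_b, rows=2, cols=3, search=6):
    """Per-block displacement between two consecutive frames.

    Each frame is split into rows*cols blocks (six, A through F, as in the
    text). Every block of img_a is template-matched against img_b over a
    +/-search pixel window, and the most frequent block vector is returned
    as the inter-image displacement (dx, dy).
    """
    h, w = img_a.shape
    bh, bw = h // rows, w // cols
    vectors = []
    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * bh, c * bw
            # Shrink the template by the search margin so every candidate
            # shift stays inside img_b.
            t = img_a[y0 + search:y0 + bh - search,
                      x0 + search:x0 + bw - search]
            best, best_err = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = img_b[y0 + search + dy:y0 + bh - search + dy,
                                 x0 + search + dx:x0 + bw - search + dx]
                    err = np.sum((cand - t) ** 2)   # SSD matching cost
                    if err < best_err:
                        best, best_err = (dx, dy), err
            vectors.append(best)
    # Statistic over the block vectors: the most frequent movement
    # (the text also allows the average).
    return Counter(vectors).most_common(1)[0][0]
```

For a frame pair where the scene content shifts down 3 pixels and left 2, all six blocks agree and the function returns (-2, 3).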

The controller 30 generates a control signal for operating the wearable display device according to the camera motion information obtained by the image matching unit.

Regarding control-signal generation, when the camera captures images while moving downward, as shown in FIG. 4, the blocks move upward (compare FIG. 4(A), taken before the camera moves, with FIG. 4(B), taken after), so the motion vector (the motion information) points upward. Similarly, as shown in FIG. 5, when the camera is moved to the left, the motion vector points to the right.

Here, it is more intuitive for the cursor on the display screen to move in the direction the camera moved. Therefore, the x- and y-axis display coordinates need to be calibrated opposite to the direction of the obtained motion value.

If the inter-image motion value obtained as above is converted, through this calibration process, into a coordinate change on the display, the cursor can be moved according to the movement of the camera (i.e., the movement of the user's head). Intuitive operation results when the coordinate transform is oriented so that the cursor moves up, down, left, and right in step with the corresponding head movements. By tuning the sensitivity coefficient, the ratio of the display-coordinate change to the inter-image motion value, the cursor can be moved from one edge of the screen to the other without turning the head severely. For finer control, the display-coordinate change may be derived from the square of the motion value rather than from a linear multiple of it. Fundamentally, by deriving the display-coordinate change from the motion value obtained by matching consecutive images, the on-screen cursor can be moved and the desired menu selected.
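The calibration step above (flip the sign so the cursor follows the head, scale by a sensitivity coefficient, optionally square the motion value) might look like this. The sign-preserving form of the squared response is an assumption, since the text does not specify how the sign is kept.

```python
def motion_to_cursor_delta(dx, dy, sensitivity=4.0, squared=False):
    """Map an inter-image motion value to a display-coordinate change.

    The sign is flipped so the cursor moves opposite to the measured image
    motion, i.e. in the direction the head moved, as the calibration above
    requires. With squared=True the response grows with the square of the
    motion value (sign-preserving), one reading of the non-linear option.
    """
    def resp(v):
        return v * abs(v) if squared else v
    return (-sensitivity * resp(dx), -sensitivity * resp(dy))
```

With the quadratic response, small head motions produce small cursor steps for precise positioning, while large motions still sweep the cursor across the screen.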

On the other hand, a special gesture is needed to turn the cursor on and off. For example, repeatedly moving the head (camera) up and down or left and right N or more times within a preset reference time can be used as a control signal that toggles the cursor on or off.

That is, moving the head up and down several times causes the offset obtained by inter-frame matching to increase and decrease periodically. Since this motion rarely occurs in ordinary use, it can be recognized as a cursor on/off gesture when the periodic signal occurs a predetermined number of times or more within the reference time. Similarly, moving the head left and right several times can be used as a gesture for turning the cursor off.
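The periodic increase and decrease of the offset can be detected, for example, by counting direction reversals in a window of recent offset values. A minimal sketch (function name, threshold, and swing count are illustrative, not taken from the patent):

```python
def detect_shake(offsets, min_swings=4, threshold=5.0):
    """Detect a periodic shake in a sequence of per-frame offsets.

    Counts sign reversals among offsets whose magnitude exceeds
    `threshold`; a shake is reported when the offset direction reverses
    at least `min_swings` times within the window, which corresponds to
    the "reference time" in the text.
    """
    swings = 0
    prev_sign = 0
    for v in offsets:
        if abs(v) < threshold:
            continue  # ignore small jitter
        sign = 1 if v > 0 else -1
        if prev_sign and sign != prev_sign:
            swings += 1
        prev_sign = sign
    return swings >= min_swings
```

Feeding this the vertical offsets yields the on/off toggle; feeding it the horizontal offsets yields the cursor-off gesture.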

Meanwhile, a click for selecting a menu can be interpreted as occurring when the user moves the head to position the cursor and then holds it still for a certain time. Alternatively, momentarily lowering and raising the head can serve as the click gesture. A nod matches the intuitive notion of a click, and it can be recognized by interpreting motion values that go downward and then come back up within a certain time. In this case, after the click gesture is recognized, the point at which the gesture began can be found from the recorded history, and the click can be applied retroactively at that coordinate. In this way, the cursor can be turned on and off, moved, and clicked to select a menu.
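The down-then-up nod and the retroactive click coordinate can be sketched as follows; the history format, window length, and amplitude threshold are assumptions for illustration only:

```python
def detect_nod(history, window=10, amp=8.0):
    """Detect a nod and return the retroactive click position.

    `history` is a list of (cursor_x, cursor_y, dy) tuples, newest last,
    where dy is the vertical inter-frame offset (image y grows downward).
    A nod is a downward excursion followed by an upward return within
    `window` frames. Returns the cursor position recorded at the start
    of the nod (so the click lands where the cursor was before the head
    dipped), or None if no nod is found.
    """
    recent = history[-window:]
    dys = [h[2] for h in recent]
    for i, d in enumerate(dys):
        if d > amp:                      # head moved down
            for d2 in dys[i + 1:]:
                if d2 < -amp:            # head came back up
                    return recent[i][0], recent[i][1]
    return None
```

Returning the coordinate from frame `i` rather than the current frame is what makes the click retroactive: the nod itself drags the cursor, so the selection point must be taken from before the gesture started.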

For reference, the head-shaking gesture described above, up and down or left and right, can also be used as a click signal.

Meanwhile, among wearable devices, the wristwatch type can be driven in a manner similar to the glasses type. Using a camera mounted on the watch, the user looks at a textured target and moves the wrist up, down, left, or right to turn the cursor on, move it, and click. The method is the same as described for the glasses-type display, with the up-down movement replaced by movement toward the watch's 12 o'clock direction and the left-right movement replaced by movement toward the 3 o'clock direction.

Although the method described above uses a camera mounted on the wearable display, a similar function can be implemented using a gyro sensor mounted on the wearable display. A gyro sensor provides the change in angle about each axis. By substituting the angle change when the head moves up and down, and the angle change when it moves left and right, for the inter-frame image offsets described above, results similar to those obtained with the camera can be achieved.
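Substituting gyro angle changes for image offsets amounts to scaling the integrated angular rate into the same units the rest of the pipeline expects. A minimal sketch under assumed units (rad/s rates and a hypothetical pixels-per-radian constant; none of these names come from the patent):

```python
def gyro_to_offset(pitch_rate, yaw_rate, dt, scale=50.0):
    """Convert gyro angular rates into pseudo image offsets.

    Integrating each rate over the frame interval `dt` gives the angle
    change about that axis, which stands in for the inter-frame image
    offset used elsewhere; `scale` plays the role of a pixels-per-radian
    tuning constant.
    """
    dy = pitch_rate * dt * scale   # up/down head motion
    dx = yaw_rate * dt * scale     # left/right head motion
    return dx, dy
```

The resulting (dx, dy) pair can be fed into the same cursor-update, shake-detection, and nod-detection logic without further changes.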

As described above, according to the present invention, the wearable display device can be operated easily and accurately by moving the part of the body on which it is worn. Moreover, unlike conventional voice-recognition methods, this approach is not disturbed by surrounding noise, and unlike virtual-mouse methods, it does not require the user to make special hand movements, so the wearable display device can be operated very efficiently.

Although preferred embodiments of the present invention have been shown and described above, the present invention is not limited to the specific preferred embodiments described; various modifications can be made by anyone of ordinary skill in the art to which the invention pertains without departing from the gist of the invention as set forth in the claims, and such modifications fall within the scope of the claims.

Claims (5)

1. An input device for a wearable display, for controlling a wearable display device, the input device comprising: a camera mounted on the wearable display device and configured to continuously capture a plurality of images; an image matching unit configured to analyze the plurality of images captured by the camera to obtain motion information of the camera; and a controller configured to generate a control signal corresponding to the motion information obtained by the image matching unit.

2. The input device for a wearable display of claim 1, wherein the controller converts the motion information into a coordinate movement amount of a cursor on a display.

3. The input device for a wearable display of claim 1, wherein the image matching unit divides each captured image into a predetermined number of blocks, obtains the movement of mutually corresponding blocks in two consecutive images by an image matching method, and calculates the displacement between the two consecutive images from a statistical value of the obtained movements.

4. The input device for a wearable display of claim 1, wherein the controller uses, as a cursor on/off signal, motion information indicating that the camera has been moved periodically up and down or left and right N or more times within a reference time.
5. The input device for a wearable display of claim 1, wherein the controller uses, as a click signal, motion information indicating that the camera has been moved periodically up and down or left and right N or more times within a reference time.
PCT/KR2014/007969 2013-08-27 2014-08-27 Input device for wearable display Ceased WO2015030482A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2013-0101766 2013-08-27
KR20130101766A KR101492813B1 (en) 2013-08-27 2013-08-27 A input device for wearable display device

Publications (1)

Publication Number Publication Date
WO2015030482A1 true WO2015030482A1 (en) 2015-03-05

Family

ID=52586949

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2014/007969 Ceased WO2015030482A1 (en) 2013-08-27 2014-08-27 Input device for wearable display

Country Status (2)

Country Link
KR (1) KR101492813B1 (en)
WO (1) WO2015030482A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101577359B1 (en) * 2015-03-16 2015-12-14 박준호 Wearable device
KR101700767B1 (en) 2015-06-02 2017-01-31 엘지전자 주식회사 Head mounted display
CN110176089B (en) * 2018-11-20 2021-06-25 广东小天才科技有限公司 An access control unlocking method and wearable device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001154794A (en) * 1999-11-29 2001-06-08 Nec Fielding Ltd Pointing device with click function by blink
KR101169583B1 (en) * 2010-11-04 2012-07-31 주식회사 매크론 Virture mouse driving method
KR101233793B1 (en) * 2011-09-26 2013-02-15 주식회사 매크론 Virtual mouse driving method using hand motion recognition
JP2013110764A (en) * 2013-02-19 2013-06-06 Sony Corp Imaging display device, and imaging display method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5891632B2 (en) * 2011-07-26 2016-03-23 富士通株式会社 Determination device, setting device, determination method, and determination program


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017069324A1 (en) * 2015-10-22 2017-04-27 엘지전자 주식회사 Mobile terminal and control method therefor
US10540005B2 (en) 2015-10-22 2020-01-21 Lg Electronics Inc. Mobile terminal and control method therefor
WO2018097483A1 (en) * 2016-11-23 2018-05-31 삼성전자주식회사 Motion information generating method and electronic device supporting same
US10796439B2 (en) 2016-11-23 2020-10-06 Samsung Electronics Co., Ltd. Motion information generating method and electronic device supporting same

Also Published As

Publication number Publication date
KR101492813B1 (en) 2015-02-13

Similar Documents

Publication Publication Date Title
KR101184460B1 (en) Device and method for controlling a mouse pointer
EP3090331B1 (en) Systems with techniques for user interface control
CN202584010U (en) Wrist-mounting gesture control system
WO2022005860A1 (en) Integration of artificial reality interaction modes
CN102779000B (en) User interaction system and method
US20150220158A1 (en) Methods and Apparatus for Mapping of Arbitrary Human Motion Within an Arbitrary Space Bounded by a User's Range of Motion
EP3093751A1 (en) Display apparatus and method of controlling display apparatus
CN107807732A (en) Method, storage medium and electronic installation for display image
US20150002475A1 (en) Mobile device and method for controlling graphical user interface thereof
EP4026318A1 (en) Intelligent stylus beam and assisted probabilistic input to element mapping in 2d and 3d graphical user interfaces
US20160132189A1 (en) Method of controlling the display of images and electronic device adapted to the same
US11009949B1 (en) Segmented force sensors for wearable devices
KR20160150565A (en) Three-dimensional user interface for head-mountable display
KR20150110257A (en) Method and wearable device for providing a virtual input interface
KR101502085B1 (en) A gesture recognition input method for glass type display device
KR20120045667A (en) Apparatus and method for generating screen for transmitting call using collage
CN104298340A (en) Control method and electronic equipment
CN108027654A (en) Input devices, input methods and programs
WO2015030482A1 (en) Input device for wearable display
KR101370027B1 (en) Mouse device for eyeglass display device and driving method thereof
KR102297473B1 (en) Apparatus and method for providing touch inputs by using human body
CN110968190A (en) IMU for touch detection
US11054917B2 (en) Wearable device and control method, and smart control system
JP6621133B2 (en) Information processing method, information processing apparatus, and program
WO2022228056A1 (en) Human-computer interaction method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14839491

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14839491

Country of ref document: EP

Kind code of ref document: A1