
TWI796222B - Visual spatial-specific response time evaluation system and method based on immersive virtual reality device - Google Patents

Visual spatial-specific response time evaluation system and method based on immersive virtual reality device

Info

Publication number
TWI796222B
TWI796222B
Authority
TW
Taiwan
Prior art keywords
response
subject
time
object image
response time
Prior art date
Application number
TW111117927A
Other languages
Chinese (zh)
Other versions
TW202344225A (en)
Inventor
梁蕙雯
Original Assignee
國立臺灣大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立臺灣大學 filed Critical 國立臺灣大學
Priority to TW111117927A priority Critical patent/TWI796222B/en
Application granted granted Critical
Publication of TWI796222B publication Critical patent/TWI796222B/en
Publication of TW202344225A publication Critical patent/TW202344225A/en

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A response evaluation system includes a headset device for providing image information to a subject, a control device for providing control signals, and a processing device including a storage unit and a processor. The storage unit stores program code that causes the processor to: at a first time, display a first object image at a first position in a display area of the headset's display and calculate a first response time of the subject to the first object image according to a control signal provided by the control device; at a second time, display a second object image at a second position in the display area and calculate a second response time of the subject to the second object image according to the control signal; and establish a spatial position-response time relationship from the first position, the second position, the first response time, and the second response time.

Description

System and method for evaluating visual-spatial-specific reaction time with an immersive virtual reality device

The present invention relates to a response evaluation system and method; in particular, to a response evaluation system and method that constructs a visual space with immersive virtual reality equipment to evaluate a subject's response ability in different regions of visual space.

Visuo-spatial attention is an important cognitive ability. It declines with age and is further affected by brain injury and disease, including: (1) focal brain lesions (e.g., stroke, traumatic brain injury, and/or brain tumors); (2) diffuse brain diseases (e.g., Parkinson's disease, hypoxic encephalopathy, and/or encephalitis); and (3) degenerative changes (e.g., dementia). Compared with motor or language impairments, visuospatial attention deficit is a common and high-impact problem that is easily overlooked. For example, the proportion of stroke patients exhibiting visual neglect or inattention symptoms may be as high as seventy percent. In these stroke cases, impaired vision or attention may lead to poor or slow recovery after the event or after surgery. Even when symptoms gradually improve through rehabilitation, the residual deficits still markedly affect daily function and safety. For example, a patient with mild hemispatial neglect can still notice stimuli in both visual fields, but reaction speed on the neglected side may be slower. Such differences between cases often cannot be concretely quantified or measured.

Unlike the assessment of motor ability, assessing visual and visual-field responses is neither easy nor objective. Traditional approaches rely mainly on observation of daily-life behavior or on paper-based tests. For stroke cases, for example, common paper-based tests include line bisection tests, cancellation tests that search for specific patterns or letters on the page, and figure-copying tasks, while behavioral observation uses activities such as reading a menu, making a phone call, or reading a map. These conventional test methods depend on manual scoring or subjective judgment by medical personnel, making the results error-prone and less objective. In addition, the spatial extent of a sheet of paper or a computer screen is limited and cannot reflect the subject's attention differences across the multiple directions encountered in daily life (near the central or peripheral visual field, near or far). Although computers and applications have been adopted in recent years, they merely streamline the manual scoring performed by medical personnel. Many difficulties remain in distinguishing or evaluating visuospatial abilities such as hemispatial neglect, hemianopia, and even feigned blindness.

In addition, interference from the test environment and the patient's cognitive or motor ability also affect the final test results. Specifically, visually "seeing" is not the same as cognitively "noticing" or "initiating a motor response". For example, a patient with left hemispatial neglect may pay little attention to stimuli in the left visual field and may also be slow to initiate a motor response; a patient with dementia may see and notice a stimulus but be slow to initiate movement; and a person deliberately pretending not to see may show measurable visual attraction to the stimulus object while intentionally withholding any action. Existing techniques can only record whether a completed motor response occurred; they cannot analyze the magnitude or speed of the response to a stimulus at different stages, such as vision versus movement.

Different stimulus characteristics (e.g., color, shape, and/or size) and interference from the surrounding environment also affect a patient's ability to attend. With physical or paper-based tests, the choice of stimulus characteristics is limited, and even when the test environment is built on a computer, visual and auditory interference from the surroundings may still affect the results.

Therefore, in this field, providing subjects with a test environment free from surrounding interference, effectively presenting different test situations or stimuli, and scoring efficiently and objectively are issues that must be overcome.

The present invention provides a response evaluation system that includes a head-mounted device for providing image information to a subject, a control device for providing a control signal, and a processing device including a storage element and a processor. The storage element stores program code that causes the processor to: at a first time, display a first object image at a first position within the display range of the display of the head-mounted device, and calculate a first reaction time taken by the subject to provide a control signal via the control device in response to the first object image; at a second time, display a second object image at a second position within the display range, and calculate a second reaction time taken by the subject to provide a control signal via the control device in response to the second object image; and establish a spatial position-reaction time relationship based on the first position, the second position, the first reaction time, and the second reaction time.

The present invention also provides a response evaluation method, including: providing an image to a subject through the display of a head-mounted device; at a first time, displaying a first object image at a first position within the display range of the display of the head-mounted device, and calculating the subject's first reaction time for providing a control signal via a control device in response to the first object image; at a second time, displaying a second object image at a second position within the display range, and calculating the subject's second reaction time for providing a control signal via the control device in response to the second object image; and establishing a spatial position-reaction time relationship based on the first position, the second position, the first reaction time, and the second reaction time.

As described above, the head-mounted device provides the subject with a better, less distracting test environment, and the control device provides feedback for evaluating the subject's reaction speed and gaze deviation. In this way, the subject's visual field, concentration, and related abilities can be measured effectively and evaluated with objective values.

Any reference to an element herein using a designation such as "first" or "second" generally does not limit the number or order of those elements. Rather, these designations are used herein as a convenient way of distinguishing between two or more elements or instances of an element. Therefore, it should be understood that the designations "first", "second", etc. in the claims do not necessarily correspond to the same designations in the written description. Furthermore, references to first and second elements do not mean that only two elements may be used or that the first element must precede the second element. As used herein, the terms "comprise", "include", "have", "contain", and the like are open-ended terms, meaning including but not limited to.

The term "coupled" is used herein to refer to a direct or indirect electrical coupling between two structures. For example, in one instance of indirect electrical coupling, one structure may be coupled to another structure via a passive element such as a resistor, capacitor, or inductor.

In the present invention, the words "exemplary" and "for example" are used to mean "serving as an example, instance, or illustration". Any implementation or aspect described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other aspects of the invention. The terms "about" and "approximately", as used herein with respect to a stated value or characteristic, are intended to mean within a certain range (e.g., 10%) of the stated value or characteristic.

Referring to FIG. 1, FIG. 1 illustrates a response evaluation system 100 that includes a head-mounted device 110 for providing image information to a subject 10, a control device 130 for providing a control signal, and a processing device 120 including a storage element and a processor. The storage element stores program code that causes the processor to execute the response evaluation method of the present invention.

Specifically, as shown in FIG. 2A, the head-mounted device 110 (e.g., virtual-reality glasses or smart glasses) may have a housing 114 and may be worn on the subject's head (e.g., with arms that rest on the ears or with a strap). Preferably, the housing 114 blocks light other than that from the display 111 of the head-mounted device 110, so as to avoid interference from the surrounding environment. The display 111 of the head-mounted device 110 may, for example, consist of dual displays corresponding to the subject's two eyes, enabling applications such as monocular testing; alternatively, it may be a single display that provides a more complete test environment for reaction speed and visual field. However, the number of displays 111 of the head-mounted device 110 is not limited to these examples and may be adjusted as needed. The head-mounted device 110 may also include an audio output/input device 116 (e.g., earphones or a microphone) so that the subject can receive instructions or respond to them. In addition, the audio output/input device 116 may be paired with noise reduction (active or passive) so that the subject is not disturbed by ambient sound.

On the other hand, as shown in FIG. 2B, the processing device 120 may be, for example, a computer, a smartphone, a tablet computer, or a microcomputer system integrated with the head-mounted device 110, but is not limited thereto. The processing device 120 may be communicatively coupled to the head-mounted device 110 in a wired manner (e.g., a signal cable) or a wireless manner (e.g., Bluetooth, wireless network, or infrared). In general, the processing device 120 may also be coupled to a display 124 to present a user interface or the running status of the reaction test for the operator to review. In some cases, the processing device 120 may also have an external input 126 such as a keyboard, mouse, or touch panel so that the operator can control or monitor the processing device, but is not limited thereto. The processor 122 of the processing device 120 is, for example, a central processing unit (CPU), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or a system on a chip (SoC), that is, a component with computing capability that can execute programs. The storage element 121 is, for example, a component capable of storing data, such as memory, read-only memory (ROM), or flash memory. The storage element 121 stores program code or applications executable by the processor 122. It should be noted that the present invention is not limited to any particular form or type of the processor 122, the storage element 121, and/or the program code.

Regarding the control device 130, referring to FIGS. 3A-3B, the control device 130 may be, for example, a handle, a button, a voice-activated switch, and/or a posture detector, that is, a component through which the subject can provide feedback to the processing device 120. Specifically, during a test the subject can issue commands such as confirm, start, cancel, or pause to the processing device through the control device 130. For example, when the control device 130 is a button or a handle, the subject can press the button to provide a signal to the processing device 120 to start the test, proceed to the next step, or pause. On the other hand, the control device 130 does not necessarily need to be in contact with the subject; for example, voice control or a posture sensor that captures the subject's gestures or movements can also serve to receive or issue commands.

It should be noted that, as shown in FIG. 3A, the control device 130 may be coupled to the processing device 120 directly or indirectly. For example, the control device 130 may connect directly to the processing device 120 in a wired manner (e.g., a signal cable or network cable) or a wireless manner (e.g., wireless network, Bluetooth, or infrared). Alternatively, as shown in FIG. 3B, the control device 130 may connect to the processing device 120 through the head-mounted device 110. In other words, after the control device 130 is connected to the head-mounted device 110, the signal of the control device 130 is provided to the processing device 120 via the head-mounted device 110. It should be understood, however, that the present invention is not limited to these examples.

In one embodiment, the control device 130 may include an eye-tracking module. Specifically, the control device 130 may be integrated with the head-mounted device 110, and the eye-tracking module may be disposed on the side of the housing 114 facing the subject to capture images of the subject's eyes or electrical and/or magnetic signals related to gaze direction. The eye-tracking module may, for example, track the subject's gaze direction using eye-tracking techniques such as electro-oculography, scleral search coils, video-oculography, or video-based combined pupil/corneal reflection.

The eye-tracking module can convert the trajectory of the subject's gaze projected onto the display range DA of the display 111 of the head-mounted device 110 into a control signal and provide it to the processing device 120 for subsequent processing. Specifically, when the subject directs the gaze at a preset location in the display range DA (such as the center or a corner), a control signal can be registered. For example, gazing at the center may indicate start, moving the gaze to a corner may indicate pause, or any other feasible command may be defined. In this way, the subject has an interface for providing feedback or control signals to the processing device 120 through gaze position and/or eye-movement trajectory, without any additional device or setup.
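As a concrete illustration of how gaze position could be mapped to control signals, the Python sketch below registers a command only when the gaze dwells inside a preset region of the display range DA. The display resolution, region radius, dwell threshold, and command names are assumptions made for illustration; the disclosure does not specify them.

```python
import math

# Hypothetical display-range geometry (pixels); the values are assumptions.
DA_WIDTH, DA_HEIGHT = 1920, 1080
CENTER = (DA_WIDTH / 2, DA_HEIGHT / 2)
CORNER = (DA_WIDTH, 0)          # top-right corner used here as the "pause" region
REGION_RADIUS = 80              # tolerance (pixels) around each preset location
DWELL_FRAMES = 30               # ~0.25 s at 120 Hz, so a passing glance is not a command

def gaze_to_command(gaze_samples):
    """Map recent gaze samples (x, y) to a control signal, or None.

    A command is issued only if the gaze stays inside one preset region
    for DWELL_FRAMES consecutive samples.
    """
    if len(gaze_samples) < DWELL_FRAMES:
        return None
    recent = gaze_samples[-DWELL_FRAMES:]

    def inside(region):
        return all(math.dist(region, s) <= REGION_RADIUS for s in recent)

    if inside(CENTER):
        return "START"
    if inside(CORNER):
        return "PAUSE"
    return None
```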

The program code stored in the storage element 121 causes the processor 122 to execute the response evaluation method of the present invention: at a first time t1, display a first object image I1 at a first position P1 in the display range DA of the display 111, and calculate the subject's first reaction time RT1 to the first object image I1; at a second time t2, display a second object image I2 at a second position P2 in the display range DA of the display 111, and calculate the subject's second reaction time RT2 to the second object image I2; and establish a spatial position-reaction time relationship based on the first reaction time RT1 and the second reaction time RT2.

More specifically, as shown in FIG. 4A, when the test starts, the subject's gaze may be at an initial position P0. The initial position P0 may be a central fixation point to which the subject adjusts the gaze before the test begins, or any position the gaze happens to occupy before the test starts. When the subject is ready, the test can be started by the operator issuing a start command or by the subject (for example, by inputting a control signal through the control device 130). When the first object image I1 is displayed at the first position P1 in the display range DA of the display 111 and the subject perceives the stimulus of the first object image I1 (for example, the subject's gaze moves from the initial position P0 toward the first position P1 where the first object image I1 is located, or the subject notices the first object image I1 through peripheral vision while keeping the gaze at the initial position P0), the subject can input a confirmation signal through the control device 130 to time the first reaction time RT1. For example, as soon as the subject sees or notices the first object image I1 (whether by direct gaze or peripheral vision), a control signal is generated by the control device 130 to mark the time.
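The reaction-time measurement itself reduces to differencing two timestamps: one taken when the stimulus is drawn and one taken when the control signal arrives. A minimal sketch follows; `draw_stimulus` and `wait_for_control_signal` are hypothetical callables standing in for whatever rendering and input APIs the system uses, and the timeout value is an assumption.

```python
import time

def measure_reaction_time(draw_stimulus, wait_for_control_signal, timeout_s=5.0):
    """Show one object image and return the reaction time in seconds, or None on timeout.

    draw_stimulus: callable that renders the object image at its position (assumed).
    wait_for_control_signal: callable that blocks until the control device fires and
        returns True, or returns False after timeout_s (assumed).
    """
    draw_stimulus()
    t_shown = time.perf_counter()               # t1 (or t2 for the second stimulus)
    if not wait_for_control_signal(timeout_s):  # button press, voice, gesture, or gaze event
        return None                             # no response within the allotted window
    return time.perf_counter() - t_shown        # RT1 / RT2
```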

Next, as shown in FIG. 4B, after the timing of the first reaction time RT1 is completed, or after a preset interval has elapsed since the first time t1, the second object image I2 is displayed at the second position P2 in the display range of the display (at this point the first object image I1 has disappeared). In other words, the first time t1 and the second time t2 are separated by a preset interval (which may be defined by the system or the operator), or the second time is approximately equal to the first time t1 plus the first reaction time RT1 (that is, the second object image I2 is generated immediately after the first reaction time RT1 is determined).

At the second time t2, after the second object image I2 is displayed at the second position P2 and the subject perceives its stimulus (for example, the subject's gaze moves from the current position P0' toward the second position P2 where the second object image I2 is located, or the subject notices the second object image I2 through peripheral vision while keeping the gaze at the current position P0'), the subject can input a confirmation signal through the control device 130, in the same way the first reaction time RT1 is measured. It should be noted that the current position P0' may be the first position P1 (including any position within the predetermined range A1), the initial position P0, or any position the subject's gaze occupies before the second object image I2 appears.

The relationship between the first position P1 and the second position P2 may be random or sequential. For example, the first object image I1 may be displayed at a randomly chosen first position P1 in the display range DA of the display 111, and the second object image I2 may then be displayed at a randomly chosen second position P2. Generally, the first position P1 and the second position P2 do not overlap (or may partially overlap), and preferably the second position P2 is a preset distance away from the first position P1. It should be noted that distances and/or positions within the display range DA may be expressed in units of display pixels of the display 111 (for example, a certain number of pixels apart). In one embodiment, the display range DA may be divided into four quadrants, Phase 1-4. When the first position P1 appears, in a preset or random manner, in one of the four quadrants (the second quadrant, Phase 2, in this embodiment), the second position may subsequently appear, in a preset or random manner, in one of the remaining three quadrants (the fourth quadrant, Phase 4, in this embodiment).
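One way to realize this quadrant rule is sketched below: each new position is drawn at random from a quadrant other than the previous one and re-drawn until it lies at least a preset pixel distance from the previous position. The resolution, edge margin, and minimum distance are illustrative assumptions, not values from the disclosure.

```python
import math
import random

DA_WIDTH, DA_HEIGHT = 1920, 1080   # assumed display-range resolution in pixels
MARGIN = 100                        # keep stimuli away from the edge (assumption)
MIN_DIST = 300                      # preset minimum distance between consecutive positions

def random_point_in_quadrant(q):
    """Screen coordinates (y grows downward): 1 = top-right, 2 = top-left,
    3 = bottom-left, 4 = bottom-right."""
    cx, cy = DA_WIDTH / 2, DA_HEIGHT / 2
    x_lo, x_hi = (cx, DA_WIDTH - MARGIN) if q in (1, 4) else (MARGIN, cx)
    y_lo, y_hi = (MARGIN, cy) if q in (1, 2) else (cy, DA_HEIGHT - MARGIN)
    return (random.uniform(x_lo, x_hi), random.uniform(y_lo, y_hi))

def next_position(prev_pos, prev_quadrant):
    """Pick a quadrant different from the previous one, then a point far enough from prev_pos."""
    q = random.choice([c for c in (1, 2, 3, 4) if c != prev_quadrant])
    pos = random_point_in_quadrant(q)
    while prev_pos is not None and math.dist(pos, prev_pos) < MIN_DIST:
        pos = random_point_in_quadrant(q)   # reject points too close to the previous stimulus
    return pos, q
```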

Alternatively, the first position P1 and/or the second position P2 may be selected under the control of the operator or the system (e.g., an artificial-intelligence component). For example, during the test the operator may decide where the next image will appear, or before the test the operator may preset a schedule of image positions. It should be noted that, to keep the figures uncluttered, this embodiment only illustrates the first position P1 and the second position P2; however, those of ordinary skill in the art will understand that the present invention may generate images at three or more positions. For example, a third position P3 displaying a third object image I3 may not overlap, or may only partially overlap, with the second position P2. In some cases, however, the third position P3 may overlap or partially overlap with the first position P1 (that is, the third object image I3 appearing at the third position P3 may overlap with or be close to the first object image I1).

In addition, referring to FIGS. 4A-4B, in the embodiment in which the control device 130 includes an eye-tracking module, the subject's first reaction time RT1 to the first object image I1 may be measured automatically by the eye-tracking module. For example, after the subject sees or notices the first object image I1, the eye-tracking module tracks the subject's gaze; when the gaze moves to within a predetermined range A1 of the first position P1, the first reaction time RT1 is determined automatically. It should be noted that the size of the predetermined range A1 may vary according to the test difficulty or other conditions. For example, when the predetermined range A1 is zero, the subject's gaze position must coincide exactly with the first position P1 to be judged as seen; conversely, when the predetermined range A1 is larger, the judgment is more lenient. Subsequently, at the second time t2, after the second object image I2 is displayed at the second position P2, the eye-tracking module can determine whether the subject's gaze has moved to within a predetermined range A2 of the second position P2, so as to record the second reaction time RT2. It should be noted that the predetermined range A1 and the predetermined range A2 may be the same or different.
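This automatic timing amounts to polling the tracker until the gaze falls inside the tolerance circle around the stimulus. A minimal sketch is given below; `get_gaze` is a hypothetical callable returning the current gaze point in display pixels, and the polling rate and timeout are assumptions.

```python
import math
import time

def auto_reaction_time(get_gaze, target_pos, tolerance_px, timeout_s=5.0, poll_hz=120):
    """Return the time for the gaze to enter the circle of radius tolerance_px
    around target_pos (P1 or P2), or None if it never does within timeout_s.

    get_gaze: callable returning the current gaze point (x, y) in display pixels (assumed).
    tolerance_px: the predetermined range A1/A2; 0 demands exact coincidence.
    """
    t0 = time.perf_counter()
    while time.perf_counter() - t0 < timeout_s:
        if math.dist(get_gaze(), target_pos) <= tolerance_px:
            return time.perf_counter() - t0   # automatically determined RT1 / RT2
        time.sleep(1.0 / poll_hz)             # wait for the next tracker sample
    return None
```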

On the other hand, the gaze-trajectory information L1, L2 captured by the eye-tracking module may also be provided to the operator through the display 124 of the processing device 120, and the operator can use the gaze-trajectory information L1, L2 shown on the display 124 to confirm the subject's gaze position and perform manual timing. For example, when the operator sees the subject's gaze move to within a predetermined range A1 of the first position P1, the operator immediately presses a confirmation key to mark the time. It should be noted that the above ways of timing the reaction times RT1 and RT2 are not intended to limit the present invention; any timing means known in the art falls within the scope of the present invention.

In one embodiment, referring to FIG. 5, object images may appear sequentially, in a random or non-random manner, across the display range DA of the display 111 until an object image has been displayed in every part of the display range DA. Specifically, as shown in FIG. 5, if the display range DA of the display 111 is divided into N cells, the object images appear sequentially in the N cells in the manner described above, and at least N sets of reaction-time data are generated. It should be noted that the N cells may be distributed within the display range DA of the display 111 in an overlapping, partially overlapping, and/or completely non-overlapping manner; the present invention is not limited to any particular way of partitioning the display range DA into cells. It should also be noted that the display range DA in FIG. 5 is circular, but the display range may also be rectangular or any other suitable shape, and the cell partition is not limited to the rectangles shown in FIG. 5.

On the other hand, the first object image I1 and the second object image I2 may be identical or different. For example, the first object image I1 and the second object image I2 may differ in size, shape, color, and/or brightness. The tester can adjust each object image according to the purpose of the test. When, for example, N object images need to be generated in total, the N images may all be identical or at least one of them may differ.

After the test ends, each of the N positions in the display range DA of the display 111 has corresponding reaction-time data (at least N sets of reaction-time data), from which the spatial position-reaction time relationship can be established. It should be noted that the reaction times in the spatial position-reaction time relationship may be the raw data stored during measurement, or data after normalization or weighting. For example, the subject's average reaction speed can be set as a reference value, and the reaction time at each position can be compared with this reference value to reveal the locations the subject tends to neglect. The reference value of the reaction time may also be the mean over all subjects or any other statistically meaningful value.
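The normalization mentioned here can be as simple as dividing each position's reaction time by a baseline, whether the subject's own mean, a population mean, or some other reference. A sketch under that assumption:

```python
from statistics import mean

def normalize_reaction_times(rt_by_position, baseline=None):
    """Return {position: RT / baseline}; values well above 1.0 flag positions
    the subject tends to neglect.

    rt_by_position: dict mapping (x, y) positions to measured reaction times (seconds).
    baseline: reference value; defaults to the subject's own mean reaction time.
    """
    if baseline is None:
        baseline = mean(rt_by_position.values())
    return {pos: rt / baseline for pos, rt in rt_by_position.items()}
```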

In one embodiment, as shown in FIG. 6, the response evaluation system 100 can generate a reaction-time spectrogram from the spatial position-reaction time relationship. Specifically, the plane formed by the X and Y axes of the reaction-time spectrogram corresponds to the subject's visual field, and each point on the map has a Z value corresponding to the subject's reaction speed. This Z value can be presented as a 2D image using a color scale (e.g., a blue-to-red color temperature map), gray levels, and/or brightness differences; 3D image information can also be presented using, for example, bar columns. It should be noted that the present invention is not limited to the form of the reaction-time spectrogram in this example.
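Such a reaction-time spectrogram is essentially a heat map over the visual field: X and Y span the display range and the color encodes reaction time. The sketch below uses matplotlib and assumes the spatial position-reaction time relationship is stored as a dict of pixel positions to seconds; the grid size and colormap are arbitrary choices, not requirements of the disclosure.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_rt_spectrogram(rt_by_position, da_width=1920, da_height=1080, grid=(8, 8)):
    """Render the spatial position-reaction time relationship as a 2D colour map.

    rt_by_position: dict mapping (x, y) pixel positions to reaction times in seconds.
    grid: number of (rows, cols) the display range is binned into for plotting.
    """
    rows, cols = grid
    z = np.full((rows, cols), np.nan)                 # cells never probed stay blank
    for (x, y), rt in rt_by_position.items():
        r = min(int(y / da_height * rows), rows - 1)  # screen coords: y grows downward
        c = min(int(x / da_width * cols), cols - 1)
        z[r, c] = rt if np.isnan(z[r, c]) else (z[r, c] + rt) / 2.0  # average repeats
    plt.imshow(z, cmap="coolwarm")                    # blue = fast, red = slow
    plt.colorbar(label="reaction time (s)")
    plt.xlabel("display-range column")
    plt.ylabel("display-range row")
    plt.title("Reaction-time map over the visual field")
    plt.show()
```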

In one embodiment, the processing device 120 may also correct the plurality of reaction times stored during the test. For example, during the test each of the N positions has corresponding reaction-time data. When the subject has been tested M times (M less than N), fatigue or weariness may cause a slight delay (e.g., several milliseconds) in the reaction times after the M-th trial. When the subject becomes fatigued or weary (as determined, for example, by the processing device 120, by a preset number of trials, and/or by other possible means), the reaction times are corrected. The correction may be incremental (growing with the number of trials), linear, or nonlinear. In one embodiment, the correction may refer to the reaction time at the same reference point. For example, the reaction time RT_r of the subject's gaze moving from a reference start point to a reference end point is recorded at the beginning of the test; after the M-th trial, the reaction time RT_Mr over the same reference start and end points is measured again, and the ratio, difference, or any other relationship between RT_r and RT_Mr is used to correct the reaction times from the early part of the test and those measured after the M trials. It should be noted that after several trials the subject's responses may also become faster as the subject grows familiar with the procedure, so the reaction-time correction may also be decreasing or performed in other ways.
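As one concrete reading of this reference-point correction, the ratio between the reference reaction time measured at the start of the session (RT_r) and the same reference measured again after M trials (RT_Mr) can be used to rescale later trials. The linear-in-trial-index interpolation below is an assumption; the disclosure also allows difference-based, incremental, or decreasing corrections.

```python
def correct_for_fatigue(raw_rts, rt_ref_start, rt_ref_after_m, m):
    """Rescale a list of raw reaction times using two reference measurements.

    raw_rts: reaction times in trial order.
    rt_ref_start: reference reaction time measured at the start (RT_r).
    rt_ref_after_m: the same reference re-measured after trial m (RT_Mr), m >= 1.
    The correction factor is interpolated linearly from 1.0 at trial 0 to
    rt_ref_start / rt_ref_after_m at trial m and held constant afterwards
    (an illustrative choice, not mandated by the disclosure).
    """
    end_factor = rt_ref_start / rt_ref_after_m   # < 1 if the subject has slowed down
    corrected = []
    for i, rt in enumerate(raw_rts):
        frac = min(i / m, 1.0)
        factor = 1.0 + frac * (end_factor - 1.0)
        corrected.append(rt * factor)             # shrink inflated late-session times
    return corrected
```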

FIG. 7 illustrates a simplified flow 700 for building distribution maps of reaction times and eye-movement trajectories at different spatial positions from the reaction-time data and eye-movement traces. First, an object is generated at spatial position 1; the subject then goes through the stages of seeing it, the brain responding, and completing the response. The system records the subject's eye-movement trajectory and reaction time and outputs them to the back end for computation. The same steps are repeated at spatial position 2, and so on. After spatial position N has been reached, N sets of eye-movement trajectories and N sets of reaction times are stored, each corresponding to its spatial position 1-N. Based on spatial positions 1-N, eye-movement trajectories 1-N, and reaction times 1-N, the distribution maps of reaction times and eye-movement trajectories at different spatial positions are then built.
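Flow 700 can be summarized as the loop below: for each of the N spatial positions, present the stimulus, record the eye-movement trace and the reaction time, and aggregate everything into the records from which the distribution maps are built. The `run_trial` callable is a hypothetical building block (it could be composed from the sketches above); its name does not come from the disclosure.

```python
def run_session(positions, run_trial):
    """Run one full test over N spatial positions (flow 700, simplified).

    positions: list of N (x, y) stimulus positions covering the display range.
    run_trial(pos): assumed callable that shows the object image at pos, records
        the subject's eye-movement trajectory, and returns (reaction_time, eye_trace).
    Returns per-position records from which the spatial position-reaction time
    and eye-trace distribution maps can be built.
    """
    records = []
    for n, pos in enumerate(positions, start=1):   # spatial positions 1..N
        reaction_time, eye_trace = run_trial(pos)  # subject sees, reacts, completes the response
        records.append({"index": n, "position": pos,
                        "reaction_time": reaction_time, "eye_trace": eye_trace})
    return records
```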

In one embodiment, as shown in FIG. 8, FIG. 8 illustrates a response evaluation method 800, including: step 810, providing an image to the subject through the display of the head-mounted device; step 820, at a first time, displaying a first object image at a first position in the display range of the display of the head-mounted device, and calculating the subject's first reaction time for providing a control signal via the control device in response to the first object image; step 830, at a second time, displaying a second object image at a second position in the display range, and calculating the subject's second reaction time for providing a control signal via the control device in response to the second object image; and step 840, establishing a spatial position-reaction time relationship based on the first position, the second position, the first reaction time, and the second reaction time.

In one embodiment, the method further includes: step 850, generating a reaction-time spectrogram from the spatial position-reaction time relationship.

In one embodiment, the method further includes: step 835, performing a correction process on at least one of the first reaction time and the second reaction time.

The previous description of the invention is provided to enable a person of ordinary skill in the art to make or practice the invention. Various modifications to the invention will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other variations without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the examples described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

10: subject
100: response evaluation system
110: head-mounted device
111: display
114: housing
116: audio output/input device
120: processing device
121: storage element
122: processor
124: display
126: external input
130: control device
700: flow
800: response evaluation method
810, 820, 830, 835, 840, 850: steps
A1, A2: predetermined ranges
DA: display range
I1: first object image
I2: second object image
t1: first time
t2: second time
RT1: first reaction time
RT2: second reaction time
L1, L2: gaze trajectories
P0: initial position
P0': current position
P1: first position
P2: second position

The accompanying drawings are presented to help describe various aspects of the present invention. To simplify the drawings and highlight what they are intended to show, known structures or elements may be drawn schematically or omitted. For example, the number of an element may be singular or plural. The drawings are provided merely to illustrate these aspects and not to limit them.

FIG. 1 is a diagram of an example scenario of using the response evaluation system according to an embodiment of the present invention.

FIG. 2A is a schematic diagram of the head-mounted device according to an embodiment of the present invention.

FIG. 2B is a schematic diagram and system block diagram of the processing device according to an embodiment of the present invention.

FIGS. 3A-3B are system block diagrams of the coupling between the processing device and the control device according to an embodiment of the present invention.

FIGS. 4A-4B are schematic diagrams of the relationship between object images and the display range according to an embodiment of the present invention.

FIG. 5 is a schematic diagram of the display range divided into N cells according to an embodiment of the present invention.

FIG. 6 is a schematic diagram of a reaction-time spectrogram according to an embodiment of the present invention.

FIG. 7 is a simplified flowchart of building distribution maps of reaction times and eye-movement trajectories at different spatial positions according to an embodiment of the present invention.

FIG. 8 is a flowchart of a response evaluation method according to an embodiment of the present invention.

10: subject

100: response evaluation system

110: head-mounted device

120: processing device

130: control device

Claims (12)

1. A response evaluation system, comprising:
a head-mounted device for providing image information to a subject;
a control device for providing a control signal; and
a processing device comprising a storage element and a processor, the storage element storing program code that causes the processor to:
at a first time, display a first object image at a first position in a display range of a display of the head-mounted device, and calculate a first reaction time taken by the subject to provide the control signal via the control device in response to the first object image;
at a second time, display a second object image at a second position in the display range, and calculate a second reaction time taken by the subject to provide the control signal via the control device in response to the second object image; and
establish a spatial position-reaction time relationship based on the first position, the second position, the first reaction time, and the second reaction time.
2. The response evaluation system of claim 1, wherein the control device further comprises an eye-tracking module that provides the control signal according to a gaze position of the subject.
3. The response evaluation system of claim 1, wherein the display range is divided into quadrants, the first position corresponds to a first quadrant of the display range, and the second position corresponds to any one of the remaining quadrants.
4. The response evaluation system of claim 1, wherein the second position appears randomly at a position in the display range that does not overlap the first position.
5. The response evaluation system of claim 1, wherein the program code further causes the processor to generate a reaction-time spectrogram from the spatial position-reaction time relationship.
6. The response evaluation system of claim 1, wherein the processing device performs a correction process on at least one of the first reaction time and the second reaction time.
7. A response evaluation method, comprising:
providing an image to a subject through a head-mounted device;
at a first time, displaying a first object image at a first position in a display range of a display of the head-mounted device, and calculating a first reaction time taken by the subject to provide a control signal via a control device in response to the first object image;
at a second time, displaying a second object image at a second position in the display range, and calculating a second reaction time taken by the subject to provide the control signal via the control device in response to the second object image; and
establishing a spatial position-reaction time relationship based on the first position, the second position, the first reaction time, and the second reaction time.
8. The response evaluation method of claim 7, wherein the control device further comprises an eye-tracking module that provides the control signal according to a gaze position of the subject.
9. The response evaluation method of claim 7, wherein the display range is divided into quadrants, the first position corresponds to a first quadrant of the display range, and the second position corresponds to any one of the remaining quadrants.
10. The response evaluation method of claim 7, wherein the second position appears randomly at a position in the display range that does not overlap the first position.
11. The response evaluation method of claim 7, further comprising: generating a reaction-time spectrogram from the spatial position-reaction time relationship.
12. The response evaluation method of claim 7, further comprising: performing a correction process on at least one of the first reaction time and the second reaction time.
TW111117927A 2022-05-12 2022-05-12 Visual spatial-specific response time evaluation system and method based on immersive virtual reality device TWI796222B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111117927A TWI796222B (en) 2022-05-12 2022-05-12 Visual spatial-specific response time evaluation system and method based on immersive virtual reality device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW111117927A TWI796222B (en) 2022-05-12 2022-05-12 Visual spatial-specific response time evaluation system and method based on immersive virtual reality device

Publications (2)

Publication Number Publication Date
TWI796222B true TWI796222B (en) 2023-03-11
TW202344225A TW202344225A (en) 2023-11-16

Family

ID=86692413

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111117927A TWI796222B (en) 2022-05-12 2022-05-12 Visual spatial-specific response time evaluation system and method based on immersive virtual reality device

Country Status (1)

Country Link
TW (1) TWI796222B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160213301A1 (en) * 2013-10-04 2016-07-28 Indiana University Research and Technology Corporation Eye movement monitoring of brain function
CN109152559A (en) * 2016-06-07 2019-01-04 脑部评估系统有限公司 For the method and system of visual movement neural response to be quantitatively evaluated
TW201944429A (en) * 2018-03-04 2019-11-16 美商阿奇力互動實驗室公司 Cognitive screens, monitor and cognitive treatments targeting immune-mediated and neuro-degenerative disorders
TW202128074A (en) * 2019-11-05 2021-08-01 盧森堡商阿斯貝克特拉公司 Augmented reality headset for medical imaging
CN113260300A (en) * 2018-11-07 2021-08-13 斯达克实验室公司 Fixed point gaze motion training system employing visual feedback and related methods
TW202141345A (en) * 2020-04-22 2021-11-01 宏達國際電子股份有限公司 Head mounted display and control method thereof
US20210401339A1 (en) * 2017-01-10 2021-12-30 Biostream Technologies, Llc Adaptive behavioral training, and training of associated physiological responses, with assessment and diagnostic functionality

Also Published As

Publication number Publication date
TW202344225A (en) 2023-11-16
