Dynamic virtual image plane adjusting device and method based on multi-image plane eye tracking
Technical Field
The invention belongs to the technical field of vision training, and particularly provides a dynamic virtual image plane adjusting device and method based on multi-image plane eye tracking.
Background
In recent years, Virtual Reality (VR) and Augmented Reality (AR) technologies have developed rapidly and have been widely used in fields such as medical treatment, education and entertainment. However, the display-related technical limitations of existing VR/AR devices remain significant, especially in focal plane adjustment, where a number of bottlenecks persist. Most VR/AR devices currently on the market adopt a fixed focal plane design, so the image plane watched by the user in the virtual environment is always at the same depth and its position cannot be dynamically adjusted as the user's gaze point changes. This fixed-focal-length design not only limits the realism of the immersive experience, but can also cause visual fatigue after long-term use, affecting the user's visual health.
Some devices offer a zoom function, but the adjustment usually depends on manual operation by the user, which makes the process cumbersome and real-time, intelligent focal length adjustment difficult to realize. For example, some zoom devices require the user to adjust the lens position with a mechanical knob or button, while the few devices with an electronic focusing function rely on preset focusing modes; these not only lack an active feedback mechanism but also have no dynamic adjustment capability, making it difficult to adapt to the individual requirements of users in different scenes.
Current eye tracking technology is mainly used in fields such as gaze position analysis and interactive control, and has not been applied to dynamic focal length adjustment. Mainstream eye tracking methods include pupil positioning based on infrared cameras and gaze prediction based on machine learning, both of which have made great progress in accuracy and real-time performance. However, these techniques have not yet been effectively combined and applied to optimize the visual experience of virtual reality devices; in particular, their intelligent application to focus adjustment remains in its infancy.
On the other hand, multi-image-plane display technology is a method for alleviating visual fatigue. Its core principle is to generate several virtual image planes at different depths so that the user obtains a visual experience closer to reality when watching virtual objects at different depths. However, implementations of this technology often rely on complex optical designs such as multi-layer displays, optical layering devices, or light-field-based dynamic reconstruction systems. The application of these schemes in existing equipment is limited mainly because the hardware structure is complex and the calculation cost is high; meanwhile, considerable room for optimization remains in synchronizing with changes of the user's line of sight and coordinating with dynamic image plane adjustment.
For the currently existing related patents, the analysis is as follows:
Patent CN209014753U (variable focus lens and VR equipment) adjusts the refractive power by changing the thickness of the liquid in an elastic optical surface cavity, thereby realizing a zoom function. However, this zooming mode still depends on passive manual adjustment: a fixed focal length must be preset and focusing performed through an analog circuit, and active feedback based on the user's real-time gaze information is absent.
Patent CN208823365U (variable-focus VR eye vision instrument) adjusts the distance between two lenses by motor drive to change the optical path between the virtual image plane and the human eye. Although this method can adjust between near vision and far vision to a certain extent, it requires manual operation or adjustment at fixed time intervals, and lacks adaptive adjustment to the user's real-time fixation requirements.
Patent CN114895793A (an active self-adaptive eye tracking method and AR glasses) uses eye tracking to capture the user's point of gaze and combines a TOF (Time-of-Flight) sensor to detect the distance of the user's actual observation point, so as to adjust the imaging position and focusing distance of the image module and avoid the visual discomfort caused by frequent changes of the virtual scene. However, this approach focuses mainly on adjusting the image projection position and does not optimize focus adjustment in combination with multi-image-plane display technology.
Most devices on the market today use a fixed focal plane display design, which easily causes visual fatigue and discomfort when used for long periods. The main reason is that existing VR/AR devices typically support only a single fixed focal plane and cannot dynamically adjust the image plane position according to the depth of the user's gaze point, or require the user to focus manually. As a result, the image plane the user looks at in the virtual scene is always at the same depth and the natural focusing requirement of the line of sight cannot be met; after long-time use with the lens kept in a fixed state, the visual burden increases, the user experience suffers, visual fatigue follows, and eye health is adversely affected.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a dynamic virtual image plane adjusting device and method based on multi-image plane eye tracking, which realize real-time tracking and dynamic adjustment of a user visual focus through multi-system cooperation and meet the accurate visual interaction requirement under a multi-depth scene, thereby dynamically adjusting the image plane position according to the change of a user fixation point.
The technical scheme adopted for solving the technical problems is as follows:
A dynamic virtual image plane adjusting device based on multi-image-plane eye tracking comprises a multi-image-plane eye tracking system, a lens-screen adjusting system, a control system and a display system,
The multi-image-plane eye tracking system captures eye images of a user by using an infrared camera, calculates through an image acquisition module, a pupil detection module, an ellipse fitting module, a calibration module and a polynomial fitting module, converts the eye images to obtain accurate gaze point coordinates, and transmits data to a control system;
The control system receives the gaze point information, analyzes the current required image plane depth, generates a lens adjustment instruction and sends the lens adjustment instruction to the lens-screen adjustment system;
The lens-screen adjusting system drives the lens and the screen to carry out physical adjustment according to the control instruction, and simultaneously monitors adjustment accuracy in real time by utilizing a sensing feedback mechanism to ensure that the virtual image position is matched with the user gazing focus;
The display system combines the depth information provided by the control system to dynamically adjust the animation picture to enable the animation picture to synchronously change with the gazing depth of the user, so that visual immersion is enhanced.
Further, in the multi-image-plane eye tracking system, the image acquisition module acquires pupil images of the user's eyes by calling the active infrared camera, the pupil detection module marks the pupil contour, the ellipse fitting module detects the pupil center position with the highest confidence, the calibration module performs coordinate conversion in combination with known reference points, and the polynomial fitting module optimizes the data and finally converts the pupil data into gaze point coordinates.
Preferably, the implementation flow of the image acquisition module is as follows:
Step 111, selecting a suitable 940nm short-focus camera and infrared lamp beads;
Step 112, connecting the 940nm short-focus camera with the infrared lamp beads to facilitate subsequent acquisition;
Step 113, adjusting the positions of the 940nm short-focus camera and the infrared lamp beads to ensure that the eyes are fully illuminated and pupil detection does not fail.
Still preferably, the implementation flow of the pupil detection module is as follows:
step 121, judging whether pupil detection fails according to a pupil threshold algorithm, if so, repeating step 113 to ensure success of pupil detection;
Step 122, converting the input video into video frames, and processing frame by frame:
The input original image is subjected to cropping and scaling operations to unify the image size and facilitate subsequent processing: the current aspect ratio current_ratio = current_width / current_height of the image is compared with the target ratio desire_ratio = width / height (preset, for example width = 580, height = 480). If current_ratio > desire_ratio, the image is too wide, so a new width new_width = int(current_height * desire_ratio) is calculated and the image is cropped in the width direction from the center offset loc_w = (current_width - new_width) // 2; if current_ratio < desire_ratio, the image is too tall, so a new height new_height = int(current_width / desire_ratio) is calculated and the image is cropped in the height direction from the center offset loc_h = (current_height - new_height) // 2. The cropped image cut_image is finally scaled to the target size (a code sketch follows step 125 below);
Step 123, generating and storing three binary images A, B and C covering three differently masked regions of the image:
converting the cropped and scaled image into a gray image; adding a first threshold increment to the darkest pixel value Darkest_value to obtain a threshold Threshold = Darkest_value + increment, binarizing the image with the cv2.threshold function so that pixels smaller than the threshold are set to white (255) and the rest to black (0) to obtain a strictly thresholded image Thresholded_image_1, creating a square mask of a first side length centered on the darkest point Darkest_point, setting the pixel values outside the square area to the background value, and applying the mask to Thresholded_image_1;
adding a second threshold increment to the darkest pixel value Darkest_value to obtain the corresponding Threshold, binarizing the gray image to obtain Thresholded_image_2, and masking it with a square of a second side length centered on the darkest point;
adding a third threshold increment to the darkest pixel value Darkest_value to obtain the corresponding Threshold, binarizing the gray image to obtain Thresholded_image_3, and masking it with a square of a third side length centered on the darkest point;
and storing Thresholded_image_1, Thresholded_image_2 and Thresholded_image_3 as the three binary images;
Step 124, detecting the color threshold ranges of the three regions A, B and C;
Step 125, selecting the image whose average color threshold fluctuates least as the most reliable pupil image.
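For illustration, a minimal Python/OpenCV sketch of the cropping of step 122 and the multi-threshold binarization with darkest-point masking of step 123 might look as follows. The 580x480 target size follows the example above, while the threshold increments, mask side lengths and the use of black as the masked background are assumptions, since the patent leaves these values unspecified.

```python
import cv2
import numpy as np

def center_crop_and_scale(image, width=580, height=480):
    """Crop the frame to the target aspect ratio about its centre, then scale (step 122)."""
    current_height, current_width = image.shape[:2]
    current_ratio = current_width / current_height
    desire_ratio = width / height
    if current_ratio > desire_ratio:          # image too wide: crop in the width direction
        new_width = int(current_height * desire_ratio)
        loc_w = (current_width - new_width) // 2
        image = image[:, loc_w:loc_w + new_width]
    elif current_ratio < desire_ratio:        # image too tall: crop in the height direction
        new_height = int(current_width / desire_ratio)
        loc_h = (current_height - new_height) // 2
        image = image[loc_h:loc_h + new_height, :]
    return cv2.resize(image, (width, height))

def multi_threshold_pupil_images(gray, increments=(5, 15, 25), sides=(80, 120, 160)):
    """Build three binary images A, B, C around the darkest point (step 123).
    `increments` and `sides` are illustrative; the patent does not fix them."""
    _, _, darkest_point, _ = cv2.minMaxLoc(gray)   # location of the darkest pixel
    darkest_value = int(gray.min())
    results = []
    for inc, side in zip(increments, sides):
        threshold = darkest_value + inc
        # pixels darker than the threshold become white (255), the rest black (0)
        _, thresholded = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)
        mask = np.zeros_like(gray)                 # assumed black background outside the square
        x, y = darkest_point
        half = side // 2
        mask[max(0, y - half):y + half, max(0, x - half):x + half] = 255
        results.append(cv2.bitwise_and(thresholded, mask))
    return results  # Thresholded_image_1/2/3

# typical usage: gray = cv2.cvtColor(center_crop_and_scale(frame), cv2.COLOR_BGR2GRAY)
```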
More preferably, the implementation flow of the ellipse fitting module is as follows:
Step 131, performing dilation processing on each binary image to enhance the target area:
For the image after multi-thresholding and masking, a morphological dilation operation (for example, the cv2.dilate function) is first used to enhance the contour features in the image; contours are then found with the cv2.findContours function and filtered: all contours are traversed and the area of each contour is calculated as area = cv2.contourArea(contour); if area >= pixel (e.g. pixel = 10), the width w and height h of the contour's bounding rectangle are further calculated (with the cv2.boundingRect function), and the aspect ratio is computed as length = max(w, h), width = min(w, h), current_ratio = max(length/width, width/length). If current_ratio <= ratio_thresh (e.g. ratio_thresh = 5), the contour is eligible, and the eligible contour with the largest area is returned (a code sketch follows step 137 below);
step 132, extracting an external contour in the expanded image;
step 133, screening the profile through area and number limitation, and returning to the maximum profile;
Step 134, not searching within D pixels of the image edge;
Step 135, checking the brightness every E pixels in the area;
Step 136, sampling once every F pixels along the X axis and the Y axis, ignoring the boundary;
Step 137, updating the pixel point with the largest threshold change over the 0-255 color range;
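A possible OpenCV sketch of steps 131-133 (dilation, contour extraction and area/aspect-ratio screening) and of the final ellipse fit is given below. The 3x3 kernel and the use of cv2.fitEllipse for the pupil center are illustrative assumptions; pixel = 10 and ratio_thresh = 5 follow the example values in the text.

```python
import cv2

def find_pupil_contour(binary_image, pixel=10, ratio_thresh=5):
    """Dilate, extract external contours, and keep the largest contour whose
    area and bounding-box aspect ratio satisfy the limits of step 131."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    dilated = cv2.dilate(binary_image, kernel, iterations=1)
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best, best_area = None, 0
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < pixel:                       # too small to be a pupil candidate
            continue
        _, _, w, h = cv2.boundingRect(contour)
        length, width = max(w, h), min(w, h)
        if width == 0 or max(length / width, width / length) > ratio_thresh:
            continue                           # too elongated to be a pupil
        if area > best_area:
            best, best_area = contour, area
    return best

def pupil_center_from_contour(contour):
    """Ellipse fit of the selected contour; cv2.fitEllipse needs at least 5 points."""
    if contour is None or len(contour) < 5:
        return None
    (cx, cy), _, _ = cv2.fitEllipse(contour)
    return cx, cy
```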
More preferably, the implementation flow of the calibration module is as follows:
Step 141, selecting a screen of suitable size as the calibration screen, inputting the screen parameters into the control system and projecting the picture into the human eye through the lens;
Step 142, selecting 9-12 points uniformly distributed over the screen as calibration points;
step 143, the user wears the device and adjusts the size of the eye image in the picture;
Step 144, performing calibration autonomously through a peripheral device such as a keyboard or handheld controller, to ensure that the point at which the user gazes in the current state is the calibration point:
In the calibration mode, images are acquired in real time through the camera and the pupil center coordinates (x_eye, y_eye) are obtained with the pupil detection algorithm above; the currently calibrated target point (displayed in red) is drawn on each frame, and when the user presses the space key the current pupil coordinates (x_eye, y_eye) and the corresponding screen coordinates (x_screen, y_screen) of the current target point are recorded into a calibration data dictionary calibration_data. After all calibration points have been recorded, calibration_data is saved to a file (such as calibration_data.json), as sketched after step 146 below;
step 145, adjusting the distance between the lens and the screen, thereby changing the distance between the image plane and the human eye, and repeating steps 142 to 144;
Step 146, saving the calibration weight information indexed by the different image plane depths.
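The recording of calibration data in step 144 could be sketched as follows. The function detect_pupil_center stands in for the pupil detection flow above, camera is an OpenCV capture object, and the key handling and file name are illustrative assumptions.

```python
import json
import cv2

def run_calibration(camera, calibration_points, detect_pupil_center,
                    out_path="calibration_data.json"):
    """Record (pupil centre, screen point) pairs while the user fixates each
    calibration target and confirms with the space key (step 144)."""
    calibration_data = {"eye_coordinates": [], "screen_coordinates": []}
    for (x_screen, y_screen) in calibration_points:
        while True:
            ok, frame = camera.read()
            if not ok:
                continue
            center = detect_pupil_center(frame)
            # In the device the current target point is drawn in red on screen;
            # here we only show the camera frame and wait for the confirmation key.
            cv2.imshow("calibration", frame)
            key = cv2.waitKey(1) & 0xFF
            if key == ord(' ') and center is not None:
                x_eye, y_eye = center
                calibration_data["eye_coordinates"].append([x_eye, y_eye])
                calibration_data["screen_coordinates"].append([x_screen, y_screen])
                break
    cv2.destroyAllWindows()
    with open(out_path, "w") as f:
        json.dump(calibration_data, f)
    return calibration_data
```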
Preferably, the implementation flow of the polynomial fitting module is as follows:
step 151, obtaining pixel point coordinates of 9-12 calibration points;
step 152, outputting the coordinates of the pixel point at the pupil center from the video frame;
Step 153, splitting the pixel point coordinates of step 151 and step 152 into separate one-dimensional x and y data;
Step 154, selecting a polynomial to perform one-dimensional data fitting on the coordinate points;
Step 155, fitting the polynomial coefficients of the x-direction mapping function;
Step 156, fitting the polynomial coefficients of the y-direction mapping function;
Step 157, saving the fitted coefficients of the one-dimensional polynomials of the x axis and the y axis:
Pupil coordinate data eye_coordinates and screen coordinate data screen_coordinates are loaded from the stored calibration data file. The x-component eye_x and y-component eye_y of the pupil coordinates are extracted, and polynomial fitting is performed on the pupil x-coordinates against the screen x-coordinates and on the pupil y-coordinates against the screen y-coordinates, respectively, to obtain the fitting coefficients coefficients_x and coefficients_y (a code sketch follows step 158 below);
Step 158, starting to predict the gaze point coordinates by inputting pupil center point coordinates;
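Steps 153-158 could be realized, for example, with numpy.polyfit as sketched below. The patent only states that a polynomial fitting function is used, so the choice of numpy and the polynomial degree are assumptions.

```python
import json
import numpy as np

def fit_gaze_mapping(calibration_file="calibration_data.json", degree=2):
    """Fit one-dimensional polynomials mapping pupil x -> screen x and
    pupil y -> screen y (steps 153-157)."""
    with open(calibration_file) as f:
        data = json.load(f)
    eye = np.asarray(data["eye_coordinates"], dtype=float)
    screen = np.asarray(data["screen_coordinates"], dtype=float)
    eye_x, eye_y = eye[:, 0], eye[:, 1]
    coefficients_x = np.polyfit(eye_x, screen[:, 0], degree)   # x-direction mapping
    coefficients_y = np.polyfit(eye_y, screen[:, 1], degree)   # y-direction mapping
    return coefficients_x, coefficients_y

def predict_gaze(pupil_center, coefficients_x, coefficients_y):
    """Step 158: convert a detected pupil centre into a predicted screen gaze point."""
    x_eye, y_eye = pupil_center
    return float(np.polyval(coefficients_x, x_eye)), float(np.polyval(coefficients_y, y_eye))
```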
Still further, the lens-screen adjusting system comprises a lens, a screen, a linear slide rail, a micro stepping motor and a driving circuit. The driving circuit is connected with the micro stepping motor, the action end of the micro stepping motor is linked with the screen, the screen is slidably mounted on the linear slide rail, and the screen and the lens are arranged on the front and rear sides of the linear slide rail respectively; all parts of the system are highly integrated. When the gaze point enters a target area, the driving circuit sends a control signal to the micro stepping motor, and the micro stepping motor drives the screen to perform stable and accurate linear displacement along the linear slide rail, thereby adjusting the geometric distance between the lens and the screen. In this scheme, the introduction of the micro stepping motor achieves a lightweight structural design while providing accurate feedback adjustment capability; after receiving data from the multi-image-plane eye tracking system, the system dynamically adjusts the lens-screen unit through the micro stepping motor to ensure real-time matching between the user's visual focus and the image plane depth.
The lens-screen adjusting system further comprises an adjusting module and a sensing feedback module, wherein the adjusting module is used for selecting the distance to be adjusted in real time according to the predicted gaze point coordinates, and the sensing feedback module is used for driving the micro stepping motor to adjust the distance between the lens and the screen.
Preferably, the implementation flow of the adjusting module is as follows:
Step 211, selecting a picture area according to the predicted gaze point coordinates output in step 158;
Step 212, when the gaze point falls within the marked area, adjusting the lens-screen distance according to the previously set travel range.
Still preferably, the implementation flow of the sensing feedback module is as follows:
step 221, selecting a miniature stepping motor with a stroke of 8 mm;
step 222, applying PWM waves to control the stroke of the micro stepping motor according to different gazing areas.
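A heavily simplified sketch of steps 221-222 is given below. The region-to-travel table, the steps-per-millimetre ratio and the driver interface (set_direction, pulse) are hypothetical, since the actual PWM drive depends on the motor and driver hardware used.

```python
# Illustrative only: maps a gaze region to a lens-screen travel within the 8 mm stroke
# and issues step pulses; values and the driver object are assumptions.
REGION_TRAVEL_MM = {"near": 0.0, "mid": 4.0, "far": 8.0}
STEPS_PER_MM = 100   # assumed motor/lead-screw ratio

def move_to_region(driver, current_mm, region):
    """Drive the micro stepping motor so the screen reaches the travel preset for `region`."""
    target_mm = REGION_TRAVEL_MM[region]
    delta_mm = target_mm - current_mm
    driver.set_direction("forward" if delta_mm >= 0 else "backward")
    for _ in range(int(abs(delta_mm) * STEPS_PER_MM)):
        driver.pulse()          # one PWM pulse advances the motor by one step
    return target_mm
```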
The control system comprises a data processing module, a calculation module and a control instruction module, wherein the data processing module is used for receiving the gaze point coordinates from the eye tracking system, the calculation module is used for analyzing the current required image plane depth, and the control instruction module is used for sending a motion instruction to the lens-screen adjusting system.
Preferably, the implementation flow of the data processing module is as follows:
step 311, receiving pupil center coordinates output by an eye tracking system;
step 312, receiving and analyzing the calibration fixation point coordinates;
step 313, performing coordinate correction and filtering based on the pupil center coordinates and the calibration gaze point coordinates, removing noise, and improving data precision;
step 314, storing the corrected gaze point coordinates and transmitting the data to a computing module;
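The correction and filtering of step 313 could, for instance, be a sliding-window median filter as sketched below; the window length and the choice of a median are illustrative, since the patent only requires noise removal.

```python
from collections import deque
import numpy as np

class GazeFilter:
    """Sliding-window median filter for gaze coordinates (step 313, illustrative)."""
    def __init__(self, window=5):
        self.xs, self.ys = deque(maxlen=window), deque(maxlen=window)

    def update(self, x, y):
        # append the newest sample and return the median of the recent window
        self.xs.append(x)
        self.ys.append(y)
        return float(np.median(self.xs)), float(np.median(self.ys))
```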
Still preferably, the implementation flow of the computing module is as follows:
Step 321, receiving gaze point coordinate information from a data processing module;
Step 322, calculating the current required image plane depth based on a gaze point-depth mapping model preset by the system;
Step 323, detecting abnormal value of the calculation result, and performing interpolation processing to optimize data smoothness;
Step 324, storing the calculated depth information and sending it to the control instruction module;
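Steps 322-323 might be sketched as follows, assuming the gaze point-depth mapping model is a table of screen regions with preset depths; the region table format, depth units and jump limit are illustrative assumptions.

```python
def gaze_to_depth(gaze_point, regions, previous_depths, jump_limit=2.0):
    """Map a gaze point to the preset image-plane depth of the region it falls
    into (step 322) and smooth implausible jumps (step 323).
    `regions` is a list of ((x0, y0, x1, y1), depth) entries."""
    x, y = gaze_point
    depth = None
    for (x0, y0, x1, y1), region_depth in regions:
        if x0 <= x <= x1 and y0 <= y <= y1:
            depth = region_depth
            break
    if depth is None:                      # gaze outside all preset regions: keep last depth
        return previous_depths[-1] if previous_depths else None
    if previous_depths and abs(depth - previous_depths[-1]) > jump_limit:
        # treat the jump as an outlier: interpolate halfway toward the new depth
        depth = 0.5 * (previous_depths[-1] + depth)
    previous_depths.append(depth)
    return depth
```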
More preferably, the implementation flow of the control instruction module is as follows:
Step 331, receiving image plane depth data transmitted by a computing module;
step 332, generating control instructions of the stepper motor according to the depth requirement, including forward, backward and stroke length;
step 333, controlling the moving step length and speed of the micro stepping motor by using a PWM (pulse width modulation) signal;
step 334, monitoring the state of the stepping motor in real time to ensure accurate adjustment of the lens position;
step 335, feeding back the execution result, and performing secondary adjustment if necessary to improve the system response accuracy.
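Step 332 could be expressed, for example, as a small command generator like the one below; the depth-to-travel lookup is a hypothetical stand-in for the device's calibrated mapping between image plane depth and screen travel.

```python
def make_motor_command(target_region, depth_to_travel_mm, current_travel_mm):
    """Turn the required image-plane depth (identified by its region) into a stepper
    command with direction and stroke length (step 332)."""
    target_travel = depth_to_travel_mm[target_region]
    delta = target_travel - current_travel_mm
    return {"direction": "forward" if delta >= 0 else "backward",
            "stroke_mm": abs(delta)}
```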
The display system comprises an animation preprocessing module and a fixation point sensing module, wherein the animation preprocessing module is used for setting virtual image depths corresponding to different areas in a picture in advance, and the fixation point sensing module is used for ensuring that picture updating and lens adjustment are synchronously carried out.
Preferably, the implementation flow of the animation preprocessing module is as follows:
step 411, setting depth information in the picture according to the depth information in real life;
step 412, presetting depth information for different regions of each frame of picture in the animation;
step 413, presetting different depth information for the pictures at different depths;
Step 414, transmitting the preset depth information to a feedback terminal;
Preferably, the implementation flow of the gaze point sensing module is as follows:
Step 421, setting a region threshold range;
step 422, presetting different area threshold ranges for the pictures at different depths;
step 423, when the gaze point is transferred to the region threshold, feedback information is generated, and the micro stepping motor of step 222 is driven to realize feeding.
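The region-threshold check of steps 421-423 might look like the following sketch; the region names and the rectangle format of the thresholds are illustrative.

```python
def gaze_triggers_region(gaze_point, region_thresholds):
    """Return the name of the preset depth region whose threshold rectangle
    contains the gaze point (steps 421-423), or None if no region is hit."""
    x, y = gaze_point
    for name, (x0, y0, x1, y1) in region_thresholds.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```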
A dynamic virtual image plane adjusting method based on multi-image plane eye tracking comprises the following steps:
Step 1, wearing equipment by a user, namely, correctly wearing eye tracking equipment by the user, ensuring that the equipment stably fits the face, adjusting the position, enabling a camera to clearly capture the eye area of the user;
step 2, eye movement calibration, namely entering a calibration mode, guiding a user to sequentially watch a plurality of preset calibration points to establish a user eye movement model, and precisely fitting personalized eye movement parameters of the user by calculating the mapping relation between the pupil center offset and screen coordinates;
step 3, gaze point prediction, namely continuously collecting eye movement data of a user, including pupil positions and sight line directions, under a normal operation mode, calculating and predicting current gaze point coordinates of the user in real time based on an established eye movement model, and determining a specific target area of the sight line of the user on a screen, wherein the prediction data is used for driving a follow-up self-adaptive vision adjustment mechanism;
Step 4, gaze point sensing, namely detecting micro movements of the user's eyeballs and dynamically adjusting the calculation accuracy of the gaze point so as to cope with the eye movement characteristics of different users;
Step 5, adjusting the lens-screen distance, namely, when the user's gaze point is confirmed to lie within the threshold range of the area set by the gaze point sensing module, the driving circuit generates a driving signal and transmits it to the micro stepping motor; the micro stepping motor accurately moves the screen along the linear slide rail to change the geometric distance between the screen and the lens, adjusting the visual focal length so as to match the user's current vision requirement.
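To show how steps 3-5 fit together, a hypothetical main loop is sketched below; all helper names (detect_pupil_center, predict_gaze, GazeFilter, gaze_triggers_region, make_motor_command, STEPS_PER_MM, driver) refer to the illustrative sketches above and are not part of the claimed device.

```python
def adjustment_loop(camera, coefficients_x, coefficients_y, region_thresholds,
                    depth_to_travel_mm, driver, detect_pupil_center):
    """Predict the gaze point, sense which preset region it enters, and drive the
    stepper to the matching lens-screen distance (steps 3-5, illustrative)."""
    current_travel = 0.0
    gaze_filter = GazeFilter()
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        center = detect_pupil_center(frame)
        if center is None:                 # pupil lost: skip this frame
            continue
        gaze = predict_gaze(center, coefficients_x, coefficients_y)
        gaze = gaze_filter.update(*gaze)   # noise filtering (step 4)
        region = gaze_triggers_region(gaze, region_thresholds)
        if region is None or depth_to_travel_mm[region] == current_travel:
            continue
        command = make_motor_command(region, depth_to_travel_mm, current_travel)
        driver.set_direction(command["direction"])
        for _ in range(int(command["stroke_mm"] * STEPS_PER_MM)):
            driver.pulse()                 # PWM pulses move the screen (step 5)
        current_travel = depth_to_travel_mm[region]
```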
The invention combines eye tracking technology with a multi-image-plane dynamic adjustment scheme to provide an intelligent, real-time-responsive virtual image plane adjusting device. The device dynamically adjusts the lens position according to the depth information of the user's gaze point, so that the virtual image plane can focus intelligently at different depths, thereby effectively relieving visual fatigue, improving the immersive experience and providing a more advanced solution for focal length adjustment of future VR/AR equipment. Compared with the prior art CN208823365U, the invention combines eye tracking with multi-image-plane dynamic adjustment, making the focal length adjustment process more natural and more intelligent and effectively relieving visual fatigue. Compared with the prior art CN114895793A, the invention further provides a virtual image plane dynamic adjustment scheme combining multi-image-plane eye tracking, so that the system can dynamically adjust the relative positions of the lens and the screen according to the depth information of the user's gaze point, realizing more accurate focal length adjustment.
The invention senses the sight line position of the user in real time through the multi-image-plane eye tracking module, combines the lens-screen adjusting module to dynamically adjust the light path length, and cooperates with the efficient data processing of the control system module and the depth image presentation of the display system module to achieve the effect of dynamically adjusting the depth of the virtual image plane, thereby providing more comfortable and natural visual experience for the user.
The beneficial effects of the invention are mainly shown in the following steps:
1) The invention discloses a dynamic virtual image plane adjusting device and method based on multi-image-plane eye tracking, which integrate eye tracking with the accommodation principle of the crystalline lens, skillfully and efficiently combining optics and ophthalmology according to ophthalmic medical requirements and basic optical knowledge.
2) Multi-image-plane eye tracking: the user retains the eye tracking function at different image plane depths, whereas traditional VR/AR equipment usually performs eye tracking only on a single image plane and cannot achieve this.
3) Setting the depth of the animation, namely setting the depth information corresponding to different positions in each frame of animation in advance, and realizing a feedback mechanism by combining the depth information with a multi-image-plane eye tracking system to help a user to realize a more real interaction scene.
4) Active feedback: by judging the gaze point position and changing the physical distance between the lens and the screen in real time, the distance between the virtual image plane and the human eye is changed without any manual adjustment.
Drawings
FIG. 1 is a schematic block diagram of a dynamic virtual image plane adjustment device based on multi-image plane eye tracking;
FIG. 2 is a flow chart of a dynamic virtual image plane adjustment method based on multi-image plane eye tracking;
FIG. 3 is a schematic diagram of the optical path of a multi-image-plane eye tracking system;
FIG. 4 is a schematic diagram of a preset depth of the display system;
FIG. 5 is a schematic diagram of a front view of a dynamic virtual image plane adjustment device based on multi-image plane eye tracking according to the present invention;
fig. 6 is a schematic diagram of a back structure of a dynamic virtual image plane adjustment device based on multi-image plane eye tracking in the present invention.
Wherein 101 is a screen, 102 is a lens, 103 is a human eye, 104 is an active infrared camera, 105 is a partition, 106 is a miniature stepper motor, 201 is a near-looking cylindrical obstacle, 202 is a far-looking cylindrical obstacle, 203 is the sun, 204 is a cloud, 301 is a shell, 302 is a linear slide rail, 303 is a lens frame, 304 is a clamping groove, 305 is an active infrared light source, and 401 is a screen frame.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1-6, a dynamic virtual image plane adjusting device combining a multi-image plane eye tracking technology detects the gaze point position of a user in real time by presetting depth information of different positions in a display animation, dynamically adjusts the distance between a lens and a screen, changes the position of the virtual image plane, realizes active feedback, does not need manual adjustment of the user, and therefore relieves visual fatigue and improves user experience.
Fig. 1 is a schematic block diagram of a dynamic virtual image plane adjusting device based on multi-image plane eye tracking in the present invention, and as shown in fig. 1, the dynamic virtual image plane adjusting device based on multi-image plane eye tracking includes a multi-image plane eye tracking system, a control system, a lens-screen adjusting system, and a display system.
The optical path of the multi-image-plane eye tracking system is shown in fig. 3: light emitted by the screen 101 passes through the lens 102 and enters the human eye 103, and eye images are collected by the active infrared camera 104 for multi-image-plane eye tracking. The partition 105 ensures that the views of the two eyes are separated, and the distance between the screen and the lens is changed by the micro stepping motor 106 according to the predicted gaze point position, thereby changing the distance between the image produced by the screen and the human eye and simulating a real scene more faithfully.
The multi-image-plane eye tracking system comprises an image acquisition module, a pupil detection module, an ellipse fitting module, a calibration module and a polynomial fitting module, wherein:
The image acquisition module is used for acquiring a high-resolution image of the eyes of a user so as to ensure the accuracy of subsequent processing, and the implementation flow is as follows:
step 111, selecting a suitable 940nm short-focus camera and an infrared light source as data acquisition equipment to enhance the contrast ratio of pupil detection;
112, correctly connecting the 940nm short-focus camera with an infrared light source to ensure the stable operation of the acquisition system;
Step 113, optimizing the positions of the camera and the infrared light source, so that the eye area is fully illuminated, pupil detection failure is avoided, and the quality of data acquisition is improved;
the pupil detection module is used for identifying a pupil area from the acquired image, extracting pupil edge information, and preliminarily determining the pupil center position, and the implementation flow is as follows:
Step 121, evaluating the validity of the current detection result based on a pupil threshold algorithm. If the detection fails, the camera and the light source are required to be adjusted (the environment adjustment step 113 is repeated) so as to ensure that a stable pupil image is obtained;
Step 122, converting the input video stream into independent video frames, and analyzing each frame by frame;
step 123, binarizing A, B, C different areas in the image, and storing the binary image to enhance pupil characteristics;
Step 124, detecting a color threshold range of A, B, C areas and analyzing the variation trend thereof;
Step 125, selecting a region with the minimum fluctuation of the color threshold value from the A, B, C region as the most reliable pupil image so as to improve the detection stability;
The ellipse fitting module is used for performing ellipse fitting on the pupil area based on the pupil detection result so as to optimize the positioning accuracy of the pupil center, and the implementation flow is as follows:
step 131, performing expansion processing on the binarized image to enhance the visibility of the target area;
step 132, extracting an external contour from the inflated image to obtain a candidate pupil region;
Step 133, screening out the maximum contour most conforming to the pupil shape based on the contour area and the number features;
step 134, performing no pupil search in the area within D pixels of the image edge to reduce noise interference;
Step 135, in the specific area, brightness detection is performed every E pixels to enhance the robustness of pupil detection;
Step 136, sampling every F pixels along the X axis and the Y axis, and ignoring boundary noise to improve the calculation efficiency;
Step 137, updating the pixel point with the largest threshold change in the 0-255 color interval to improve the pupil detection precision;
The calibration module is used for establishing a mapping relation between the pupil center and the screen fixation point, ensuring the accuracy of eye movement tracking, and the implementation flow is as follows:
Step 141, selecting the screen size as a calibration screen, and inputting parameters thereof into a control system to ensure that the lens projection imaging meets the visual requirement of a user;
Step 142, uniformly arranging 9-12 calibration points on a screen to ensure uniform data distribution so as to improve calibration accuracy;
Step 143, after the user wears the device, the size of the eye image on the screen is adjusted to meet the calibration requirement;
Step 144, the user performs calibration through external equipment (keyboard or handle) to ensure that the current gaze point corresponds to the calibration point correctly;
step 145, adjusting the distance between the lens and the screen, thereby changing the distance between the image plane and the human eye, and repeating steps 142 to 144;
Step 146, recording calibration weight information under different depth conditions, and providing reference data for subsequent eye tracking;
the polynomial fitting module is responsible for fitting the mathematical relationship between the pupil center coordinates and the screen fixation point, and constructing a mathematical model for predicting the user fixation position, and the implementation flow is as follows:
Step 151, obtaining the pixel point coordinates of the 9-12 calibration points;
Step 152, outputting the coordinates of the pixel point at the pupil center from the video frame;
Step 153, splitting the pixel point coordinates of step 151 and step 152 into separate one-dimensional x and y data;
Step 154, selecting polynomial orders and fitting the coordinate point data to construct an optimal mapping function;
Step 155, fitting the polynomial coefficients of the x-direction mapping based on the calibration data to ensure the conversion accuracy in the X-axis direction;
Step 156, fitting the polynomial coefficients of the y-direction mapping to ensure the mapping accuracy in the Y-axis direction;
Step 157, saving the fitted coefficients of the one-dimensional polynomials of the x axis and the y axis;
Step 158, in the actual running process, inputting pupil center coordinates detected in real time, and predicting gaze point coordinates on a screen by using a fitting function to realize high-precision eye tracking;
Fig. 4 is a schematic diagram of a preset depth image in the display system. The image simulates a road scene containing a near-looking cylindrical obstacle 201, a far-looking cylindrical obstacle 202, the sun 203 and a cloud 204. According to the geometric perspective relationship, the human eye spontaneously perceives the near-looking cylindrical obstacle 201 as closer than the far-looking cylindrical obstacle 202, while the sun 203 and the cloud 204 can be regarded as being at infinity. Therefore, when the predicted gaze point reaches one of the positions 201-204, a signal is sent to the screen-lens adjusting system and the distance between the screen and the lens is changed by the micro stepping motor.
The display system comprises an animation preprocessing module and a fixation point sensing module, wherein:
The animation preprocessing module is used for setting virtual image depths corresponding to different areas in a picture in advance, and the implementation flow is as follows:
Step 211, setting a depth parameter in the virtual picture according to the real-world depth information;
Step 212, allocating depth information of different areas for each frame of picture in the animation;
step 213, setting corresponding depth parameters for different depth ranges;
step 214, transmitting the calculated depth information to a feedback terminal for dynamic adjustment in visual display;
The gaze point sensing module is used for ensuring that the picture update and the lens adjustment are synchronously carried out, and the implementation flow is as follows:
Step 221, setting a region threshold range of a screen for gaze detection;
Step 222, presetting different area thresholds according to different depth ranges;
step 223, when the gaze point of the user enters the set region range, triggering feedback information to drive the micro stepping motor to adjust the focal length so as to optimize the viewing experience.
According to the flow chart shown in fig. 2, after start-up the tester wears the housing 301 shown in fig. 5 correctly on the head and can see the screen 101 shown in fig. 6 through the lens 102. The screen 101 is mounted in the screen frame 401, which is slidably mounted on one side of the linear slide rail 302; the lens 102 is fixedly mounted on the other side of the linear slide rail 302 through the lens frame 303, which is connected with the partition 105 through the clamping groove 304. While the user looks at the screen 101, the partition 105 separates the two eyes so that the images seen by the two eyes remain relatively independent. After adjustment is finished, the active infrared light source 305 in fig. 5 illuminates the pupils, and pupil images are captured by the active infrared camera 104 for eye movement calibration. After calibration, the micro stepping motor 106 in fig. 6 drives the screen frame 401 to move, thereby changing the distance between the screen and the lens; during this movement the partition 105 remains connected to the lens frame 303 through the clamping groove 304, so the views of the two eyes stay independent, and the cycle then continues according to the flow chart.
Further, the micro stepping motor 106 drives the screen frame 401 to move, which is accomplished by a lens-screen adjusting system.
The lens-screen adjusting system comprises the lens 102, the screen 101, the linear slide rail 302, the micro stepping motor 106 and a driving circuit. The driving circuit is connected with the micro stepping motor 106, the action end of the micro stepping motor 106 is linked with the screen 101, the screen 101 is slidably mounted on the linear slide rail 302, and the screen 101 and the lens 102 are arranged on the front and rear sides of the linear slide rail 302 respectively; all parts of the system are highly integrated. When the gaze point enters a target area, the driving circuit sends a control signal to the micro stepping motor 106, which drives the screen 101 to perform stable and accurate linear displacement along the linear slide rail 302, thereby adjusting the geometric distance between the lens 102 and the screen 101. In this scheme, the introduction of the micro stepping motor 106 achieves a lightweight structural design while providing accurate feedback adjustment capability; after receiving data from the multi-image-plane eye tracking system, the system dynamically adjusts the lens-screen unit through the micro stepping motor to ensure real-time matching between the user's visual focus and the image plane depth.
The lens-screen adjustment system includes an adjustment module and a sensory feedback module, wherein:
the adjusting module is used for selecting the distance to be adjusted in real time according to the predicted fixation point coordinates, and the implementation flow is as follows:
step 311, selecting a picture area according to the outputted predicted gaze point coordinates;
step 312, when the gaze point falls into the region, the system adjusts the focal length according to the preset travel range to match the corresponding visual requirement;
The sensing feedback module is used for driving the miniature stepping motor to adjust the distance between the lens and the screen, and the implementation flow is as follows:
step 321, adopting a high-precision miniature stepping motor with a stroke range of 8mm to ensure the accuracy and stability of adjustment;
step 322, according to the difference of the user's gazing areas, the system generates PWM (pulse width modulation) control signals to drive the micro stepping motor to execute corresponding displacement adjustment, so as to realize focal length optimization;
the circulation flow is realized by a control system, the control system comprises a data processing module, a calculation module and a control instruction module, wherein:
the data processing module is used for receiving the fixation point coordinates from the eye tracking system, and the implementation flow is as follows:
Step 411, receiving pupil center coordinates output by an eye tracking system;
Step 412, receiving and resolving the calibration gaze point coordinates;
step 413, performing coordinate correction and filtering based on the pupil center coordinate and the calibration gaze point coordinate, removing noise, and improving data precision;
step 414, storing the corrected gaze point coordinates and transmitting the data to the computing module;
The calculation module is used for analyzing the current required image plane depth, and the implementation flow is as follows:
step 421, receiving gaze point coordinate information from a data processing module;
step 422, calculating the current required image plane depth based on the gaze point-depth mapping model preset by the system;
Step 423, detecting an abnormal value of the calculation result, and performing interpolation processing to optimize the data smoothness;
step 424, storing the calculated depth information and sending it to the control instruction module;
the control instruction module is used for sending a motion instruction to the lens-screen adjusting system, and the implementation flow is as follows:
step 431, receiving the image plane depth data transmitted by the computing module;
Step 432, generating control instructions of the stepper motor according to the depth requirement, including forward, backward and stroke length;
Step 433, controlling the moving step length and speed of the micro stepping motor by adopting PWM (pulse width modulation) signals;
step 434, monitoring the state of the stepper motor in real time to ensure accurate adjustment of the lens position;
in step 435, the execution result is fed back, and secondary adjustment is performed if necessary, so as to improve the system response accuracy.
Referring to fig. 2, a dynamic virtual image plane adjustment method based on multi-image-plane eye tracking is provided. The system is started and initialized and all core subsystems are brought up; at this stage, all hardware is verified to work normally, and the software enters a standby state in preparation for subsequent operation;
the adjusting method comprises the following steps:
step 1, wearing equipment by a user, namely, correctly wearing the eye movement tracking equipment by the user, ensuring that the equipment stably fits the face, and adjusting the equipment to a proper position so that the camera can clearly capture the eye area of the user. The system self-checks whether the pupil of the user is completely visible or not, and adjusts the infrared illumination intensity so as to optimize the pupil imaging quality and ensure the accuracy of subsequent data acquisition;
Step 2, eye movement calibration, namely, the system enters a calibration mode, guides a user to sequentially watch a plurality of preset calibration points to establish a user eye movement model, and accurately fits the personalized eye movement parameters of the user by calculating the mapping relation between the pupil center offset and the screen coordinates to ensure higher precision of subsequent eye movement point prediction;
Step 3, gaze point prediction, in which under a normal operation mode, the system continuously collects eye movement data of a user, including pupil position, gaze direction and other characteristic information, based on an established eye movement model, the system calculates and predicts current gaze point coordinates of the user in real time, and determines a specific target area of the gaze of the user on a screen, wherein the prediction data is used for driving a subsequent self-adaptive vision adjustment mechanism;
Step 4, gaze point sensing, wherein the system further accurately measures the real-time gaze point of the user and carries out error correction on the predicted data; the system can detect the micro-movement of eyeballs of users and dynamically adjust the calculation precision of the fixation point so as to cope with the eye movement characteristics of different users;
Step 5, lens-screen distance adjustment, namely when the system confirms that the gaze point of the user is positioned in the threshold range of the region set by the gaze point sensing module, the driving circuit generates a driving signal and transmits the driving signal to the micro stepping motor, the micro stepping motor accurately moves the screen along the linear sliding rail to change the geometric distance between the screen and the lens so as to adjust the visual focal length to match the current vision requirement of the user;
The system releases hardware resources when exiting, saves personalized calibration parameters of the user, can be quickly loaded in subsequent use, and improves the response efficiency of the system;
The embodiments described in this specification are merely illustrative of the manner in which the inventive concepts may be implemented. The scope of the present invention should not be construed as being limited to the specific forms set forth in the embodiments, but also covers equivalents thereof as would occur to one skilled in the art based on the inventive concept.