
CN116421139A - Vision detection method and device, cloud server and storage medium

Info

Publication number: CN116421139A
Application number: CN202310459037.XA
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 黄亮 (Huang Liang)
Applicant and current assignee: Xiaolu Laisi Technology Dongguan Co., Ltd.
Legal status: Pending

Classifications

    • A61B 3/103: objective instruments for examining the eyes, independent of the patient's perceptions or reactions, for determining refraction, e.g. refractometers, skiascopes
    • A61B 3/0025: operational features of eye-examining apparatus characterised by electronic signal processing, e.g. eye models
    • A61B 3/112: objective instruments for measuring the diameter of pupils
    • A61B 3/14: arrangements specially adapted for eye photography
    • G06V 10/26: image preprocessing; segmentation of patterns in the image field, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/82: image or video recognition or understanding using neural networks
    • G06V 40/18: recognition of eye characteristics, e.g. of the iris
    • Y02B 20/40: energy-saving lighting control techniques, e.g. smart controller or presence detection

Abstract

The embodiment of the invention discloses a vision detection method and device, a cloud server and a storage medium, which are applied to the technical field of vision detection and can solve the problem of how to perform relatively accurate vision detection on a user conveniently and simply. The vision detection device acquires a target distance between the user and a camera module; it sequentially turns on at least one infrared lamp in an infrared scanning module and acquires a plurality of images to be detected through the camera module; it then sends the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected to the cloud server. The cloud server performs pupil extraction on the plurality of images to be detected, determines the pupil feature corresponding to each image to be detected, processes the target distance, the pupil features and the position information through a target neural network model to obtain a vision detection result, and sends the vision detection result back to the vision detection device.

Description

Vision detection method and device, cloud server and storage medium
Technical Field
The embodiment of the invention relates to the technical field of vision testing, in particular to a vision testing method and device, a cloud server and a storage medium.
Background
Common vision testing equipment currently on the market includes handheld vision screeners, refractometers and the like. A handheld vision screener can test a subject's vision contactlessly at a certain distance, whereas a refractometer requires the subject to stay closely pressed against the instrument without moving during the test. Both place high operational demands on the subject, and a professional examiner is needed to complete the test. Therefore, how to perform relatively accurate vision detection on a user conveniently and simply has become a problem to be solved.
Disclosure of Invention
The embodiment of the invention provides a vision detection method and device, a cloud server and a storage medium, which are used for solving the problem in the prior art of how to perform relatively accurate vision detection on a user conveniently and simply.
In a first aspect, there is provided a vision detection method applied to a vision detection device, the vision detection device comprising a camera module and an infrared scanning module, wherein a plurality of infrared lamps are regularly arranged in the infrared scanning module. The method comprises: acquiring a target distance between a user and the camera module;
sequentially turning on at least one infrared lamp in the infrared scanning module, and acquiring a plurality of images to be detected through the camera module, wherein the infrared lamps in the on state corresponding to each image to be detected are different, and each image to be detected includes both eyes of the user;
sending the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected to a cloud server, and receiving a vision detection result sent by the cloud server;
wherein the vision detection result is obtained by the cloud server performing detection according to the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the camera module includes a first camera and a second camera, where the first camera is a visible-light wide-angle camera and the second camera is a small-field-angle infrared-sensitive camera; before acquiring the target distance between the user and the camera module, the method further includes:
when it is detected through the first camera that the user is present within a first photographable range of the first camera, determining position information of both eyes of the user within the first photographable range;
determining an angle to be adjusted of the second camera according to the relative positional relationship between the first camera and the second camera and the position information;
performing angle adjustment on the second camera according to the angle to be adjusted, so that the user is present within a second photographable range of the second camera.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, a bidirectional rotating motor device is further provided in the vision detection device and is connected to the camera module; after the angle adjustment is performed on the second camera according to the angle to be adjusted, the method further includes:
when the user is detected to exist in the second shooting range, acquiring an initial image through the second camera;
detecting the initial image and determining initial positions of the eyes of the user in the initial image;
determining an offset distance and an offset direction according to the initial position and a center position, wherein the center position is the center position of the second photographable range;
determining an angle to be adjusted of the second camera according to the offset distance and the offset direction;
and carrying out angle adjustment on the second camera according to the angle to be adjusted through the bidirectional rotating motor device, so that the eyes of the user move to the center position.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, acquiring the target distance between the user and the camera module includes:
acquiring binocular images after automatic focusing through the second camera;
and determining the target distance between the user and the camera module according to the automatic focusing parameters and the binocular image.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, acquiring the target distance between the user and the camera module includes:
acquiring a first image through the first camera and acquiring a second image through the second camera;
and determining the target distance between the user and the camera module according to the sizes of the eyes in the first image, the sizes of the eyes in the second image and the relative position relationship between the first camera and the second camera.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the infrared scanning module includes a first infrared lamp group and a second infrared lamp group, the number of infrared lamps included in the first infrared lamp group is the same as the number included in the second infrared lamp group, and each of the two groups includes at least one infrared lamp; sequentially turning on at least one infrared lamp in the infrared scanning module and acquiring a plurality of images to be detected through the camera module includes:
When the first infrared lamp set is in an on state and the second infrared lamp set is in an off state, at least one first image to be detected is obtained through the camera module;
when the first infrared lamp set is in a closed state and the second infrared lamp set is in an open state, at least one second image to be detected is obtained through the camera module;
and determining the plurality of images to be detected according to the at least one first image to be detected and the at least one second image to be detected.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, determining the plurality of images to be detected according to the at least one first image to be detected and the at least one second image to be detected includes:
performing binocular recognition on the at least one first image to be detected and the at least one second image to be detected, respectively, to obtain recognition results;
determining all valid images, whose recognition results indicate that both eyes are present, as the images to be detected;
wherein at least one valid image exists among the at least one first image to be detected, and at least one valid image exists among the at least one second image to be detected.
In a second aspect, a vision testing method is provided and applied to a cloud server, and the method includes: receiving a target distance, a plurality of images to be detected and position information of an infrared lamp in an on state, which corresponds to each image to be detected, sent by a vision detection device, wherein the target distance is the distance between a user and a camera module, and the images to be detected comprise eyes of the user;
pupil extraction is carried out on the images to be detected, and pupil characteristics corresponding to each image to be detected are determined;
processing the target distance, the pupil characteristics and the position information through a target neural network model to obtain a vision detection result;
and sending the vision testing result to the vision testing device.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, performing pupil extraction on the plurality of images to be detected and determining the pupil feature corresponding to each image to be detected includes:
performing pupil recognition on the plurality of images to be detected to obtain the pupil image corresponding to each image to be detected;
extracting features from each pupil image to obtain the pupil feature corresponding to each image to be detected.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, extracting features from each pupil image to obtain the pupil feature corresponding to each image to be detected includes:
performing cross-section sampling on each pupil image to obtain at least one pupil cross-section sampling data set corresponding to each pupil image, wherein the number of pupil cross-section sampling data sets is the same as the number of infrared lamps in the on state corresponding to the pupil image;
expanding the number of samples included in each pupil cross-section sampling data set to a preset number, and arranging the at least one expanded pupil cross-section sampling data set to obtain the pupil feature.
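Purely as an illustration of this kind of cross-section sampling and expansion (not part of the patent; the sampling geometry, function names and resampling strategy are assumptions), the following sketch treats the pupil image as a 2-D brightness array and samples one brightness profile per lit infrared lamp:

```python
import numpy as np

def sample_pupil_sections(pupil_img, n_lamps, preset_len=64):
    """Sketch: sample one brightness profile (cross section) per lit lamp
    along lines through the pupil centre, then expand each profile to a
    preset number of samples and stack them as the pupil feature."""
    h, w = pupil_img.shape
    cy, cx = h // 2, w // 2
    radius = min(h, w) // 2
    sections = []
    for k in range(n_lamps):                     # one section per lit lamp
        theta = np.pi * k / n_lamps              # assumed sampling angles
        t = np.linspace(-radius, radius, 2 * radius)
        ys = np.clip((cy + t * np.sin(theta)).astype(int), 0, h - 1)
        xs = np.clip((cx + t * np.cos(theta)).astype(int), 0, w - 1)
        profile = pupil_img[ys, xs].astype(np.float32)
        # expand the profile to the preset number of samples
        idx = np.linspace(0, len(profile) - 1, preset_len)
        sections.append(np.interp(idx, np.arange(len(profile)), profile))
    return np.stack(sections)                    # arranged pupil feature
```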
As an optional implementation manner, in the second aspect of the embodiment of the present invention, performing pupil recognition on the plurality of images to be detected to obtain the pupil image corresponding to each image to be detected includes:
determining target feature points in each image to be detected;
superimposing the plurality of images to be detected according to the target feature points to obtain a superimposed picture set;
acquiring standard-deviation data over the intersection of the superimposed picture set to obtain a pupil mask image corresponding to the superimposed picture set;
determining the pupil image corresponding to each image to be detected according to the pupil mask image.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, performing pupil recognition on the plurality of images to be detected to obtain the pupil image corresponding to each image to be detected may alternatively include:
performing binocular recognition on the plurality of images to be detected through an image segmentation algorithm to obtain the binocular image corresponding to each image to be detected;
performing pupil recognition on each binocular image through the image segmentation algorithm to obtain the pupil image corresponding to each binocular image.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, processing the target distance, the pupil feature and the position information through the target neural network model to obtain a vision detection result includes:
performing convolution processing, pooling processing, data reshaping processing, full-connection processing and activation processing on the pupil feature through the target neural network model to obtain a processed pupil feature;
mixing the processed pupil feature, the target distance and the position information through the target neural network model to obtain mixed data;
performing activation processing and full-connection processing on the mixed data through the target neural network model to obtain the vision detection result.
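As a hedged illustration of the described pipeline (convolution, pooling, reshaping, full connection and activation, then mixing in the distance and position data), here is a minimal PyTorch sketch; the layer sizes, activation choices and single-value diopter output are assumptions, not values from the patent:

```python
import torch
import torch.nn as nn

class VisionNet(nn.Module):
    """Sketch of the described pipeline: conv -> pool -> reshape -> FC ->
    activation on the pupil feature, then mix in distance and lamp position,
    then activation + FC to the vision result (e.g. diopter)."""
    def __init__(self, pos_dim=2):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)  # convolution
        self.pool = nn.AdaptiveAvgPool2d((8, 8))               # pooling
        self.fc_feat = nn.Linear(8 * 8 * 8, 32)                # full connection
        self.fc_mix = nn.Linear(32 + 1 + pos_dim, 16)          # after mixing
        self.out = nn.Linear(16, 1)                            # vision result

    def forward(self, pupil_feat, distance, lamp_pos):
        x = torch.relu(self.conv(pupil_feat))                  # activation
        x = self.pool(x)
        x = x.flatten(1)                                       # data reshaping
        x = torch.relu(self.fc_feat(x))
        x = torch.cat([x, distance, lamp_pos], dim=1)          # mixing step
        x = torch.relu(x)                                      # activation
        return self.out(torch.relu(self.fc_mix(x)))            # full connection
```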
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the method further includes:
a plurality of test data sets are obtained in advance, wherein the test data sets comprise: pupil test characteristics, test distance, test position information;
performing vision labeling on each test data set to obtain a vision test result corresponding to each test data set;
inputting the multiple test data sets and vision test results corresponding to each test data set into an original neural network model for training to obtain the target neural network model.
In a third aspect, there is provided a vision detection device, comprising a camera module and an infrared scanning module, wherein a plurality of infrared lamps are regularly arranged in the infrared scanning module. The vision detection device includes: an acquisition module, used for acquiring a target distance between a user and the camera module;
a processing module, used for sequentially turning on at least one infrared lamp in the infrared scanning module and acquiring a plurality of images to be detected through the camera module, wherein the infrared lamps in the on state corresponding to each image to be detected are different;
a transceiver module, used for sending the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected to a cloud server, and receiving the vision detection result sent by the cloud server;
wherein the vision detection result is obtained by the cloud server performing detection according to the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected.
In a fourth aspect, there is provided a cloud server, comprising: a transceiver module, used for receiving the target distance, a plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected, sent by a vision detection device, wherein the target distance is the distance between a user and a camera module;
a processing module, used for performing pupil extraction on the plurality of images to be detected and determining the pupil feature corresponding to each image to be detected;
the processing module is further used for processing the target distance, the pupil features and the position information through a target neural network model to obtain a vision detection result;
the transceiver module is further used for sending the vision detection result to the vision detection device.
In a fifth aspect, there is provided a vision detection device, comprising a camera module and an infrared scanning module, wherein a plurality of infrared lamps are regularly arranged in the infrared scanning module. The vision detection device further comprises:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the vision testing method of the first aspect of the embodiment of the present invention.
In a sixth aspect, there is provided a cloud server, the cloud server comprising:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the vision testing method in the second aspect of the embodiments of the present invention.
In a seventh aspect, there is provided a computer-readable storage medium storing a computer program that causes a computer to execute the vision testing method in the first aspect of the embodiment of the present invention or to execute the vision testing method in the second aspect of the embodiment of the present invention. The computer readable storage medium includes ROM/RAM, magnetic disk or optical disk, etc.
In an eighth aspect, there is provided a computer program product for causing a computer to perform some or all of the steps of any one of the methods of the first aspect or to perform some or all of the steps of any one of the methods of the second aspect when the computer program product is run on the computer.
A ninth aspect provides an application publishing platform for publishing a computer program product, wherein the computer program product when run on a computer causes the computer to perform some or all of the steps of any of the methods of the first aspect or to perform some or all of the steps of any of the methods of the second aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in an embodiment of the present invention, the vision detection device includes a camera module and an infrared scanning module in which a plurality of infrared lamps are regularly arranged. The vision detection device can acquire the target distance between a user and the camera module; it sequentially turns on at least one infrared lamp in the infrared scanning module and acquires a plurality of images to be detected through the camera module, wherein the infrared lamps in the on state corresponding to each image to be detected are different; it then sends the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected to the cloud server. The cloud server performs pupil extraction on the images to be detected, determines the pupil feature corresponding to each image to be detected, processes the target distance, the pupil features and the position information through the target neural network model to obtain a vision detection result, and sends the vision detection result to the vision detection device. In this scheme, the vision detection device only needs to acquire the plurality of images to be detected and the target distance between the user and the camera module; the cloud server then calculates the user's vision detection result. No complex operation by the user is required, the procedure is simple and quick, and the detection time is greatly shortened. Moreover, the cloud server processes the relevant data through the target neural network model, which effectively improves the accuracy of the vision detection result; and since the cloud server calculates the vision detection result in place of the vision detection device, the workload of the vision detection device is also reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a first schematic structural diagram of a vision detection device according to an embodiment of the present invention;
Fig. 2 is a first schematic flow chart of a vision detection method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of an infrared scanning module according to an embodiment of the present invention;
Fig. 4 is a second schematic flow chart of a vision detection method according to an embodiment of the present invention;
Fig. 5 is a second schematic structural diagram of a vision detection device according to an embodiment of the present invention;
Fig. 6 is a first schematic diagram of image processing in a vision detection method according to an embodiment of the present invention;
Fig. 7 is a second schematic diagram of image processing in a vision detection method according to an embodiment of the present invention;
Fig. 8 is a third schematic diagram of image processing in a vision detection method according to an embodiment of the present invention;
Fig. 9 is a fourth schematic diagram of image processing in a vision detection method according to an embodiment of the present invention;
Fig. 10 is a fifth schematic diagram of image processing in a vision detection method according to an embodiment of the present invention;
Fig. 11 is a third schematic structural diagram of a vision detection device according to an embodiment of the present invention;
Fig. 12 is a first schematic structural diagram of a cloud server according to an embodiment of the present invention;
Fig. 13 is a fourth schematic structural diagram of a vision detection device according to an embodiment of the present invention;
Fig. 14 is a second schematic structural diagram of a cloud server according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms first and second and the like in the description and in the claims, are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order of the objects. For example, a first camera and a second camera, etc., are used to distinguish between different cameras, rather than to describe a particular order of cameras.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The vision detection device according to the embodiment of the invention may be a device dedicated to vision detection, an electronic device, or a functional module and/or functional entity in an electronic device capable of implementing the vision detection method; this can be determined according to actual use requirements, and the embodiment of the invention is not limited thereto. The electronic device may include a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), a netbook, a personal digital assistant (Personal Digital Assistant, PDA), or the like. The wearable device may be a smart watch, a smart bracelet, a watch phone, smart earphones, etc.; the embodiment of the invention is not limited thereto.
As shown in fig. 1, the vision detection device may be provided with a processor 10, a camera module 11 and an infrared scanning module 12. The processor 10 may include a microcontroller unit (Microcontroller Unit, MCU), a central processing unit (Central Processing Unit, CPU), an accelerated processor (Accelerated Processing Unit, APU), a digital signal processor (Digital Signal Processor, DSP), or the like; the steps executed by the vision detection device disclosed in the embodiment of the present invention may be executed by the processor 10. The processor 10 is connected to the camera module 11 to control the camera module 11, and is connected to the infrared scanning module 12 to control the infrared scanning module 12. The camera module 11 can be used to capture images; a plurality of infrared lamps may be regularly arranged in the infrared scanning module 12, and the processor 10 may individually control the on/off state of each infrared lamp.
The vision detection method disclosed by the embodiment of the invention can be realized through interaction between the vision detection device and the cloud server.
As shown in fig. 2, an embodiment of the present invention provides a vision testing method, which may include the steps of:
201. the vision detecting device obtains a target distance between a user and the camera module.
In the embodiment of the invention, when the user needs to perform vision testing, the user can sit beside the vision testing device and face the direction of the camera module in the vision testing device, so that the vision testing device can acquire the target distance between the user and the camera module, namely the distance between the user and the vision testing device.
202. The vision detecting device sequentially starts at least one infrared lamp in the infrared scanning module.
In the embodiment of the invention, at least one infrared lamp may be regularly arranged in the infrared scanning module, and the vision detection device can turn on the infrared lamps in sequence. The vision detection device may turn on one infrared lamp at a time, two infrared lamps at a time, or more, but the number of infrared lamps turned on each time needs to be the same.
Optionally, when turning on the infrared lamps, the vision detection device may select which infrared lamps to turn on according to their arrangement rule, or may select them randomly, but it must be ensured that every infrared lamp is turned on once.
Optionally, the vision detection device turns off the at least one currently lit infrared lamp before turning on the next at least one infrared lamp. If the at least one infrared lamp turned on each time is regarded as one group of infrared lamps, only one group is on at any time: the next group is turned on only after the current group has been turned off, and two groups are never on simultaneously.
As shown in fig. 1 and 3, for example, 48 infrared lamps 121 are provided in the infrared scanning module 12 and are divided into 6 parts, the infrared lamps 121 in each part being arranged in 3 rows of 3, 3 and 2 infrared lamps 121, respectively. Two infrared lamps 121 are labeled in fig. 3, namely infrared lamp A and infrared lamp B, taken here as an example: the vision detection device may first turn on infrared lamp A while keeping infrared lamp B off, and only after infrared lamp A has been turned off is infrared lamp B turned on. That is, only one of infrared lamp A and infrared lamp B can be on at a time; they are never on simultaneously.
203. The vision detecting device acquires a plurality of images to be detected through the camera module.
In the embodiment of the invention, the infrared lamps in the on state corresponding to each image to be detected are different, and each image to be detected contains both eyes of the user; that is, the camera module captures the images to be detected facing the user's face.
It should be noted that the vision detection device first turns on at least one infrared lamp and collects an image to be detected through the camera module; it then turns off that at least one infrared lamp, turns on another at least one infrared lamp, and collects the next image to be detected through the camera module.
For example, as shown in fig. 3, taking the two labeled infrared lamps 121, infrared lamp A and infrared lamp B, as an example: the vision detection device may first turn on infrared lamp A while keeping infrared lamp B off, and capture an image to be detected through the camera module; it then turns off infrared lamp A, turns on infrared lamp B, and captures another image to be detected through the camera module. That is, the vision detection device acquires two images to be detected, one corresponding to infrared lamp A being on and the other corresponding to infrared lamp B being on.
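A minimal control-loop sketch of this one-group-at-a-time scanning; the lamp_on, lamp_off and capture callbacks stand in for whatever hardware driver the device actually uses and are assumptions, not part of the patent:

```python
def scan_and_capture(lamp_groups, lamp_on, lamp_off, capture):
    """Sketch: light one lamp group at a time and grab one image per group;
    a group is fully switched off before the next one is switched on.
    lamp_on / lamp_off / capture are assumed hardware-driver callbacks."""
    results = []
    for group in lamp_groups:
        for lamp in group:
            lamp_on(lamp)                  # turn the whole group on
        image = capture()                  # one image to be detected
        results.append((group, image))     # keep lamp positions with image
        for lamp in group:
            lamp_off(lamp)                 # group off before the next group
    return results
```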
Alternatively, the camera module may be located in the middle of the infrared scanning module, as shown in fig. 3, and the camera module 11 may be disposed in a hexagonal area surrounded by the plurality of infrared lamps 121.
204. The vision detection device sends the target distance, a plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected to the cloud server; the cloud server receives the target distance sent by the vision detection device, a plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected.
In the embodiment of the invention, since the infrared lamps in the on state corresponding to each image to be detected are different, when the vision detecting device acquires a plurality of images to be detected, the position information of the infrared lamps in the on state corresponding to each image to be detected is recorded, and the position information is the angle information of the infrared lamps.
Optionally, a communication connection is established between the vision detection device and the cloud server. As shown in fig. 1, a communication module 13 may be provided in the vision detection device; the communication module 13 may use wireless local area network communication, Bluetooth communication, or the like. Similarly, a communication module may also be provided in the cloud server.
205. And the cloud server performs pupil extraction on the plurality of images to be detected, and determines pupil characteristics corresponding to each image to be detected.
In the embodiment of the invention, the plurality of images to be detected are images of both eyes of the user, so that the cloud server can extract pupils from each image to be detected, thereby obtaining the pupil characteristics corresponding to each image to be detected, and the pupil characteristics can be represented by pixel brightness values.
206. And the cloud server processes the target distance, the pupil characteristics and the position information through the target neural network model to obtain a vision detection result.
In the embodiment of the invention, the position information can be represented by the angle between the infrared lamp in the on state and the camera module, and the cloud server can process the data through a target neural network model trained in advance by deep learning, so as to calculate the diopter, i.e., the vision detection result.
Optionally, the target neural network model is obtained by performing deep learning on a large amount of measured data in advance, and the deep learning process specifically may include: the cloud server acquires a plurality of test data sets in advance, wherein the test data sets comprise: pupil test characteristics, test distance, test position information; the cloud server performs vision labeling on each test data set to obtain a vision test result corresponding to each test data set; and the cloud server inputs the multiple test data sets and vision test results corresponding to each test data set into the original neural network model for training so as to obtain a target neural network model.
In the implementation mode, since pupil characteristics, distances and positions of the infrared lamps all affect calculation results of diopters, the cloud server can perform vision labeling on a large number of pupil test characteristics, test distances and test position information in advance, a plurality of test data sets and vision test results corresponding to each test data set can be obtained, and then the test data sets and the vision test results are learned through an original neural network model, so that the target neural network model can be obtained.
It should be noted that, the test data set may include: pupil test characteristics, test distance, test position information; there is at least one data difference between any two test data sets.
It should be noted that, when performing deep learning, a large number of testers may be selected to perform vision testing, and the position of the vision testing device may be continuously adjusted to perform vision testing, so as to obtain a large number of pupil test features, test distances, test position information and vision test results.
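A minimal sketch of this supervised training step, reusing the hypothetical VisionNet class sketched earlier; the data set size, random placeholder tensors and hyperparameters are illustrative assumptions, not measured values:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Illustrative stand-in data: N labeled test data sets, each holding a pupil
# test feature, a test distance, test position information and a labeled
# vision test result.
N = 256
pupil_feats = torch.rand(N, 1, 32, 32)   # pupil test features
distances = torch.rand(N, 1)             # test distances (normalized)
lamp_pos = torch.rand(N, 2)              # test position information
labels = torch.rand(N, 1)                # labeled vision test results

loader = DataLoader(TensorDataset(pupil_feats, distances, lamp_pos, labels),
                    batch_size=32, shuffle=True)
model = VisionNet()                      # the sketch class defined earlier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()             # regression onto the vision result

for epoch in range(10):                  # train the original model into the
    for pf, d, pos, y in loader:         # target neural network model
        optimizer.zero_grad()
        loss = loss_fn(model(pf, d, pos), y)
        loss.backward()
        optimizer.step()
```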
207. The cloud server sends the vision detection result to the vision detection device; the vision testing device receives a vision testing result sent by the cloud server.
In the embodiment of the invention, the vision detection result is obtained by the cloud server performing detection according to the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected.
The embodiment of the invention provides a vision detection method. The vision detection device includes a camera module and an infrared scanning module in which a plurality of infrared lamps are regularly arranged. The vision detection device can acquire the target distance between a user and the camera module; it sequentially turns on at least one infrared lamp in the infrared scanning module and acquires a plurality of images to be detected through the camera module, wherein the infrared lamps in the on state corresponding to each image to be detected are different; it then sends the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected to the cloud server. The cloud server performs pupil extraction on the images to be detected, determines the pupil feature corresponding to each image to be detected, processes the target distance, the pupil features and the position information through the target neural network model to obtain a vision detection result, and sends the vision detection result to the vision detection device. In this scheme, the vision detection device only needs to acquire the plurality of images to be detected and the target distance between the user and the camera module; the cloud server then calculates the user's vision detection result. No complex operation by the user is required, the procedure is simple and quick, and the detection time is greatly shortened. Moreover, the cloud server processes the relevant data through the target neural network model, which effectively improves the accuracy of the vision detection result; and since the cloud server calculates the vision detection result in place of the vision detection device, the workload of the vision detection device is also reduced.
As shown in fig. 4, an embodiment of the present invention provides a vision testing method, which may further include the steps of:
401. when the vision detecting device detects that a user exists in a first shooting range of the first camera through the first camera, the vision detecting device determines position information of eyes of the user in the first shooting range.
In the embodiment of the invention, the camera module can comprise a first camera and a second camera, wherein the first camera is a visible light wide-angle camera, and the second camera is a small-field-angle infrared sensing camera.
As shown in fig. 5, the camera module may include a first camera 111 and a second camera 112, where the first camera 111 is a visible-light wide-angle camera (using a short-focus lens with a field angle greater than 60 degrees, giving a wider field of view) and the second camera 112 is a small-field-angle infrared-sensitive camera (using a long-focus lens with a field angle less than 20 degrees, giving a narrower field of view). The first camera 111 may be located above the infrared scanning module 12 so as to collect a wider field of view; the second camera 112 may be located in the middle of the infrared scanning module 12, i.e., in the middle of the plurality of infrared lamps 121.
Optionally, as shown in fig. 5, an auto-focusing module 51 may also be disposed in the second camera 112, so that the second camera 112 can capture a clear image of an object at 30 cm-80 cm.
Alternatively, as shown in fig. 5, the hexagonal area in the middle of the infrared scanning module 12 is a hollow area, and light can pass through the hollow area to enter the shooting range of the second camera 112.
Furthermore, a light-blocking wall of a certain height is arranged on the boundary of the hexagonal area; it shields the light emitted by the adjacent infrared lamps from entering the second camera directly, so that the infrared light can only reach the second camera after illuminating the fundus through the pupil and being reflected back.
In the embodiment of the invention, since the first camera is a visible-light wide-angle camera, whether a user is sitting beside the vision detection device can be determined through the first camera, i.e., whether the user is present within the first photographable range of the first camera. When the user is present, the user is beside the vision detection device; in order for the second camera to capture the user's eyes well, the first camera needs to locate the user's position relative to the vision detection device, i.e., the vision detection device determines the position information of the user's eyes within the first photographable range.
Optionally, the vision detecting device may specifically capture an image through the first camera, and determine, according to the image, positional information of both eyes of the user within the first photographable range.
402. The vision detecting device determines the angle to be adjusted of the second camera according to the relative position relation and the position information between the first camera and the second camera.
In the embodiment of the invention, after the vision detection device determines the position information of the user's eyes within the first photographable range, i.e., the positional relationship of the user's eyes relative to the first camera, the positional relationship of the user's eyes relative to the second camera can be calculated by combining the relative positional relationship between the first camera and the second camera. To bring the user's eyes into the field of view of the second camera, the vision detection device needs to adjust the angle of the second camera.
403. The vision detecting device adjusts the angle of the second camera according to the angle to be adjusted so that a user exists in a second shooting range of the second camera.
Optionally, the vision detecting device performs angle adjustment on the second camera according to the angle to be adjusted, which specifically may include: when the vision detecting device detects that a user exists in the second shooting range, the vision detecting device acquires an initial image through the second camera; the vision detection device detects the initial image and determines the initial positions of the eyes of the user in the initial image; the vision detecting device determines an offset distance and an offset direction according to the initial position and a central position, wherein the central position is the central position of the second shooting range; the vision detection device determines the angle to be adjusted of the second camera according to the offset distance and the offset direction; the vision detecting device adjusts the angle of the second camera according to the angle to be adjusted through the bidirectional rotating motor device, so that the eyes of the user move to the center position.
In this implementation, since the angle of the second camera is not fixed, the user's eyes may not initially be located at the center of the second camera's view, in which case the second camera needs an angle adjustment so that the user's eyes are located at the center. The vision detection device therefore acquires an initial image through the second camera, determines the initial position of the user's eyes in the initial image, then determines the offset distance and offset direction between the initial position and the center position, and finally adjusts the angle of the second camera according to the offset distance and offset direction so that the user's eyes end up at the center position.
It should be noted that the center position is the center of the second camera's photographable range; that is, the center position moves as the angle of the second camera is adjusted, while the user does not move. Moving the user's eyes to the center position therefore actually means moving the center position until it is aligned with the user's eyes.
Optionally, to ensure that the user's eyes end up at the center position, after the vision detection device adjusts the angle of the second camera according to the offset distance and offset direction, a test image can be captured through the second camera and the position of the user's eyes in the test image detected. If the user's eyes are located at the center position in the test image, the adjustment of the second camera is complete; if the user's eyes are still not at the center position, the angle of the second camera needs further adjustment.
Optionally, in this implementation, as shown in fig. 1, a bidirectional rotating motor device 14 may further be provided in the vision detection device, connected to the processor 10 and the second camera 112. As shown in fig. 1 and fig. 5, the bidirectional rotating motor device 14 may specifically include an X-axis rotating motor device 141 and a Y-axis rotating motor device 142: the X-axis rotating motor device 141 can control the up-down angle adjustment of the second camera 112, and the Y-axis rotating motor device 142 can control its left-right angle adjustment. The bidirectional rotating motor device 14 acts as a driving device; the second camera 112 is provided with a rotating shaft connected to the bidirectional rotating motor device 14, and when the bidirectional rotating motor device 14 rotates, it drives the shaft, and hence the second camera 112, to rotate synchronously, so that the user's eyes move to the center position.
It should be noted that after the vision detection device determines the offset distance and the offset direction, the angle to be adjusted of the second camera can be calculated; this angle compensates for the offset distance along the offset direction and may include components in both the X and Y directions. The bidirectional rotating motor device then rotates according to the angle to be adjusted.
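For illustration only, converting a pixel offset into X/Y angles to be adjusted might look like the following under a pinhole-camera assumption; the focal length in pixels and the function name are assumptions:

```python
import math

def angle_to_adjust(dx_px, dy_px, focal_px):
    """Sketch: convert the eyes' pixel offset from the image center
    (dx_px right, dy_px down) into left/right and up/down angles under
    a pinhole-camera assumption with a known focal length in pixels."""
    yaw = math.degrees(math.atan2(dx_px, focal_px))    # Y-axis motor
    pitch = math.degrees(math.atan2(dy_px, focal_px))  # X-axis motor
    return yaw, pitch

# e.g. eyes 80 px right of and 30 px below center, focal length 1200 px
print(angle_to_adjust(80, 30, 1200))  # roughly (3.8, 1.4) degrees
```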
404. The vision detecting device obtains a target distance between a user and the camera module.
In the embodiment of the present invention, for the description of step 404, please refer to the detailed description of step 201 in the above embodiment, and the description of the embodiment of the present invention is omitted.
Optionally, the manner in which the vision detecting device obtains the target distance between the user and the camera module may at least include, but is not limited to, the following optional implementation manners:
the implementation mode is as follows: the vision detection device acquires binocular images after automatic focusing through a second camera; the vision detecting device determines the target distance between the user and the camera module according to the automatic focusing parameters and the binocular image.
In this implementation, after the second camera performs autofocus through the autofocus module, a clear binocular image can be captured; the target distance between the user in the binocular image and the camera module can then be determined from the focusing (focal length) information of the second camera.
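The underlying optics here is the thin-lens relation 1/f = 1/u + 1/v; a small sketch, with illustrative numbers, of recovering the object distance u from the focal length f and the autofocus image distance v:

```python
def distance_from_focus(f_mm, v_mm):
    """Sketch: thin-lens equation 1/f = 1/u + 1/v solved for the object
    distance u, given the focal length f and the image distance v that
    autofocus settled on (all values in millimetres)."""
    return 1.0 / (1.0 / f_mm - 1.0 / v_mm)

# e.g. an assumed 25 mm lens focused with the sensor 26.4 mm behind it
print(distance_from_focus(25.0, 26.4))  # about 471 mm, inside 30-80 cm
```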
The implementation mode II is as follows: the vision detecting device acquires a first image through the first camera, and acquires a second image through the second camera; the vision detection device determines a target distance between a user and the camera module according to the sizes of the two eyes in the first image, the sizes of the two eyes in the second image and the relative position relationship between the first camera and the second camera.
In this implementation manner, since the ratio between the image captured by the camera and the actual scene is fixed, the vision detecting device may acquire images, i.e., the first image and the second image, respectively, through the two cameras, and then determine the sizes of both eyes in each image, and determine the target distance between the user and the camera module in combination with the relative positional relationship between the first camera and the second camera.
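A sketch of this size-based estimate under a pinhole-projection assumption, where the pixel size of a feature scales as focal length times real size divided by distance; the eye width, focal lengths and camera offset below are illustrative assumptions, not values from the patent:

```python
def distance_from_eye_size(eye_px, focal_px, eye_mm=30.0):
    """Sketch: pinhole projection, distance = f_px * real_size / pixel_size.
    eye_mm is an assumed typical eye width, not a value from the patent."""
    return focal_px * eye_mm / eye_px

# One estimate per camera, then fuse them using the cameras' relative
# position (here simplified to a depth offset along the optical axis).
d1 = distance_from_eye_size(eye_px=72, focal_px=1200)   # first camera
d2 = distance_from_eye_size(eye_px=150, focal_px=2600)  # second camera
depth_offset_mm = 10.0                                  # assumed offset
print((d1 + (d2 + depth_offset_mm)) / 2)                # fused distance
```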
405. When the first infrared lamp set is in an on state and the second infrared lamp set is in an off state, the vision detection device obtains at least one first image to be detected through the camera module.
In the embodiment of the invention, the vision detection device can control the infrared lamps to be turned on in sequence with the same number of lamps on each time, so each set of lamps to be turned on together can be regarded as one infrared lamp group. That is, the infrared scanning module may include a first infrared lamp group and a second infrared lamp group, the number of infrared lamps in the first infrared lamp group is the same as the number in the second infrared lamp group, and each group includes at least one infrared lamp.
406. When the first infrared lamp set is in an off state and the second infrared lamp set is in an on state, the vision detection device acquires at least one second image to be detected through the camera module.
407. The vision testing device determines a plurality of images to be tested according to at least one first image to be tested and at least one second image to be tested.
In the embodiment of the invention, the vision detecting device can determine a plurality of images to be detected after sequentially turning on all the infrared lamps and shooting the images.
Optionally, the vision detection device determining the plurality of images to be detected according to the at least one first image to be detected and the at least one second image to be detected may specifically include: the vision detection device performs binocular recognition on the at least one first image to be detected and the at least one second image to be detected, respectively, to obtain recognition results; the vision detection device then determines all valid images, whose recognition results indicate that both eyes are present, as the images to be detected.
At least one valid image exists among the at least one first image to be detected, and at least one valid image exists among the at least one second image to be detected.
In this implementation, the vision detection device screens the images captured by the camera module: it performs binocular recognition on the at least one first image to be detected and the at least one second image to be detected, i.e., it checks whether each image contains both eyes of the user. An image containing both eyes is a valid image; an image not containing both eyes is an invalid image. Finally, all valid images are determined as the images to be detected.
It should be noted that when images are captured through the camera module, the user's eyes may be closed, or light may be reflected, so some of the first and second images to be detected may fail to capture the user's eyes; the user's vision cannot be calculated from such images. The images that do not contain the user's eyes therefore need to be screened out, and only valid images containing the user's eyes are kept for calculation.
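One plausible way to implement this screening, sketched with OpenCV's stock Haar eye detector; the patent does not name a specific recognizer, so this detector is a stand-in:

```python
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def keep_valid_images(images):
    """Sketch: keep only images in which two eyes are detected."""
    valid = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                            minNeighbors=5)
        if len(eyes) >= 2:      # both eyes present, so the image is valid
            valid.append(img)
    return valid
```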
408. The vision detection device sends the target distance, a plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected to the cloud server; the cloud server receives the target distance sent by the vision detection device, a plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected.
In the embodiment of the present invention, for the description of step 408, please refer to the detailed description of step 204 in the above embodiment; details are not repeated here.
409. And the cloud server performs pupil identification on the plurality of images to be detected to obtain pupil images corresponding to each image to be detected.
In the embodiment of the invention, the cloud server can extract the pupil image in each image to be detected independently.
Optionally, the manner in which the cloud server performs pupil identification on the plurality of images to be detected to obtain the pupil image corresponding to each image to be detected includes, but is not limited to, the following optional implementations:
Implementation mode one: the cloud server determines target feature points in each image to be detected; the cloud server superimposes the plurality of images to be detected according to the target feature points to obtain a superimposed picture set; the cloud server computes standard deviation data over the intersection of the superimposed picture set to obtain a pupil mask image corresponding to the superimposed picture set; and the cloud server determines the pupil image corresponding to each image to be detected according to the pupil mask image.
In this implementation, the appearance of the human eye changes little during the short measurement time, while the appearance of the pupil changes as different infrared lamps are lit. The cloud server can superimpose the images to be detected according to the target feature points to obtain a superimposed picture set, as shown in fig. 6, turning the two-dimensional image data into three-dimensional data. Taking the intersection of all superimposed image areas (i.e., the data of the common region) gives a three-dimensional structure D = [N, X, Y], where N is the total number of pictures and (X, Y) is the picture size. Because the pupil features change periodically with the lighting, their standard deviation is far higher than that of non-pupil areas, so the cloud server can compute the standard deviation along the 0th dimension of D, that is, for each position within the (X, Y) range, compute the standard deviation over the N values, obtaining S = [X, Y] and reducing the data back to two dimensions, as shown in fig. 7. Adaptive binarization can then be applied to the standard-deviation result, followed by noise filtering, to obtain a pupil mask image, as shown in fig. 8; finally, the pupil image is extracted from the original image to be detected according to the pupil mask image. This method can reach pixel-level image segmentation accuracy, and such ultra-high segmentation accuracy helps subsequent algorithms obtain better calculation results.
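The superposition-and-standard-deviation procedure of implementation mode one can be sketched in a few lines of NumPy/OpenCV. The sketch assumes the N images have already been aligned on the target feature points and cropped to their common intersection region; the threshold block size and morphology kernel size are illustrative assumptions.

```python
import numpy as np
import cv2

def pupil_mask_from_stack(aligned):
    """aligned: uint8 array of shape (N, X, Y), feature-aligned images."""
    # Standard deviation over the 0th (N) dimension: pupil pixels vary
    # as different infrared lamps light up, so their deviation is high.
    s = np.std(aligned.astype(np.float32), axis=0)        # S = [X, Y]
    s_u8 = cv2.normalize(s, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Adaptive binarization of the standard-deviation map.
    mask = cv2.adaptiveThreshold(s_u8, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, -5)
    # Morphological opening filters out isolated noise points.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```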
Implementation mode two: the cloud server performs binocular identification on the plurality of images to be detected through an image segmentation algorithm to obtain a binocular image corresponding to each image to be detected; the cloud server then performs pupil identification on each binocular image through the image segmentation algorithm to obtain the pupil image corresponding to each binocular image.
In this implementation manner, the cloud server may process the images to be detected with a common image segmentation algorithm: first determine a binocular image from each image to be detected, as shown in fig. 9; then, on the basis of the binocular image, apply the image segmentation algorithm again to extract the pupil image, as shown in fig. 10.
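A sketch of implementation mode two follows, with Haar-cascade eye detection standing in for the first segmentation pass and inverse Otsu thresholding for the second; both choices are assumptions, since the embodiment leaves the segmentation algorithm open.

```python
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_pupil_images(image_gray):
    pupils = []
    # First pass: locate the eye regions (binocular image, fig. 9).
    for (x, y, w, h) in eye_cascade.detectMultiScale(image_gray, 1.1, 5):
        eye = image_gray[y:y + h, x:x + w]
        # Second pass: the pupil is the darkest compact region of the
        # eye crop, so inverse Otsu thresholding isolates it (fig. 10).
        _, mask = cv2.threshold(eye, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        pupils.append(cv2.bitwise_and(eye, eye, mask=mask))
    return pupils
```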
410. And the cloud server performs feature extraction on each pupil image to obtain pupil features corresponding to each image to be detected.
Optionally, the feature extraction of each pupil image by the cloud server to obtain pupil features corresponding to each image to be detected may specifically include: the cloud server performs section sampling on each pupil image to obtain at least one pupil section sampling data set corresponding to each pupil image; the cloud server expands the quantity of the sampling data included in each pupil section sampling data set to a preset quantity, and arranges at least one expanded pupil section sampling data set to obtain pupil characteristics.
In this implementation manner, since at least one infrared lamp is on when the camera module acquires an image to be detected, the acquired image contains a reflection point of each lit infrared lamp in the pupil. When sampling each pupil image, the cloud server can take a section through each reflection point, obtaining at least one pupil section sampling data set; that is, the number of pupil section sampling data sets equals the number of infrared lamps in the on state corresponding to that pupil image. Since the number of sampling data in each pupil section sampling data set may differ, for convenience of subsequent calculation the number of sampling data in each set can be expanded to the same number, and the expanded sets are arranged to obtain the pupil feature.
It should be noted that sample data expansion is a processing method for ensuring a consistent input size for the neural network. The number of expanded sampling data can be a preset number, and the expansion may be interpolation expansion or fixed-value expansion; the embodiment of the invention does not specifically limit this.
For example, assuming that a pupil section sampling data set includes sampling data of 128 sampling points, the 128 sampling data may be interpolated up to 256 sampling data, or 64 sampling data may be appended on each side of the 128 sampling data so that the total reaches 256 sampling points, with the appended values being zero. Both strategies are sketched below.
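A minimal NumPy sketch of both expansion strategies from the example; the function name and the preset length of 256 are illustrative.

```python
import numpy as np

def expand_profile(samples, target=256, mode="interp"):
    """Expand one pupil-section sampling data set to a fixed length."""
    samples = np.asarray(samples, dtype=np.float32)
    n = len(samples)
    if mode == "interp":
        # Interpolation expansion: resample the n points onto `target`.
        return np.interp(np.linspace(0.0, n - 1, target),
                         np.arange(n), samples)
    # Fixed-value expansion: pad zeros symmetrically up to `target`.
    pad = target - n
    return np.pad(samples, (pad // 2, pad - pad // 2), constant_values=0)

profile = np.random.rand(128)            # e.g. 128 sampled points
assert expand_profile(profile).shape == (256,)
assert expand_profile(profile, mode="pad").shape == (256,)
```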
411. And the cloud server carries out convolution processing, pooling processing, data shaping processing, full connection processing and activation processing on the pupil characteristics through the target neural network model to obtain the processed pupil characteristics.
In the embodiment of the invention, the cloud server can process the pupil feature through the trained target neural network model, with the following specific steps: pupil feature → one-dimensional convolution (nn.Conv1d) for feature extraction → one-dimensional pooling (nn.MaxPool1d) to reduce parameters and increase translation invariance → data shaping (Reshape) to align with the subsequent fully connected network → fully connected layer (nn.Linear) → activation function (nn.Tanh) to introduce nonlinearity, thereby obtaining the processed pupil feature.
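A PyTorch sketch of this branch is given below. Only the layer types (nn.Conv1d, nn.MaxPool1d, Reshape, nn.Linear, nn.Tanh) come from the text; channel counts, kernel size and output width are assumptions.

```python
import torch
import torch.nn as nn

class PupilFeatureBranch(nn.Module):
    """Illustrative sketch of step 411; layer sizes are assumed."""
    def __init__(self, n_sections=2, length=256, out_dim=64):
        super().__init__()
        self.conv = nn.Conv1d(n_sections, 16, kernel_size=5)  # feature extraction
        self.pool = nn.MaxPool1d(2)     # fewer parameters, translation tolerance
        self.fc = nn.Linear(16 * ((length - 4) // 2), out_dim)
        self.act = nn.Tanh()            # introduces nonlinearity

    def forward(self, x):               # x: (batch, n_sections, length)
        x = self.pool(self.conv(x))
        x = x.reshape(x.size(0), -1)    # data shaping for the FC layer
        return self.act(self.fc(x))     # processed pupil feature
```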
412. And the cloud server mixes the processed pupil characteristics, the target distance and the position information through the target neural network model to obtain mixed data.
In the embodiment of the invention, the cloud server can mix the processed pupil characteristics, the target distance and the position information to obtain mixed data.
413. And the cloud server performs activation processing and full connection processing on the mixed data through the target neural network model to obtain a vision detection result.
In the embodiment of the invention, the cloud server can process the mixed data through the trained target neural network model, with the following specific steps: mixed data → fully connected layer (nn.Linear) → activation function (nn.Tanh) to introduce nonlinearity → fully connected layer (nn.Linear) → diopter, thereby obtaining the vision detection result.
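Steps 412 and 413 can be sketched as a small regression head. The hidden width and input dimensions are assumptions, and "mixing" is realized here as simple concatenation, which the text implies but does not state.

```python
import torch
import torch.nn as nn

class DiopterHead(nn.Module):
    """Illustrative sketch of steps 412-413; widths are assumed."""
    def __init__(self, feat_dim=64, pos_dim=8):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim + 1 + pos_dim, 32)
        self.act = nn.Tanh()            # introduces nonlinearity
        self.fc2 = nn.Linear(32, 1)     # output: diopter

    def forward(self, feat, distance, position):
        # Step 412: mix processed pupil feature, target distance (batch, 1)
        # and lamp-position vector (batch, pos_dim) into one tensor.
        mixed = torch.cat([feat, distance, position], dim=1)
        # Step 413: full connection -> activation -> full connection.
        return self.fc2(self.act(self.fc1(mixed)))
```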
Optionally, investigation shows that the vision detection result, namely the diopter of the pupil, is related to the following features: the pupil feature (p), the position information of the infrared lamp (θ), the target distance between the pupil and the camera module (D), and the pupil size (d). The target neural network model can then be expressed as:

diopter = Net(p, θ, D, d)

where θ can be re-encoded into a vector v, so the target neural network model may be changed to:

diopter = Net(p, v, D, d)

Since the pupil size d is already included in the pupil feature p, the target neural network model may be further reduced to:

diopter = Net(p, v, D)
414. The cloud server sends the vision detection result to the vision detection device; the vision detection device receives the vision detection result sent by the cloud server.
In the embodiment of the present invention, for the description of step 414, please refer to the detailed description of step 207 in the above embodiment; details are not repeated here.
The embodiment of the invention provides a vision detection method. The vision detection device may first adjust the position of the camera module to ensure that the user can be photographed, then acquire the target distance between the user and the camera module, and sequentially control the infrared lamp groups to turn on and off so as to acquire a plurality of images to be detected. The cloud server may then extract pupil images from the images to be detected to determine pupil features, perform convolution, pooling, data shaping, full connection and activation processing on the pupil features through the target neural network model, and then combine the target distance and the position information for further activation and full connection processing to calculate the vision detection result. The vision detection device requires no cooperation from the user, is simple and quick to operate, and greatly shortens the detection time. Moreover, the cloud server processes the relevant data through the pre-trained target neural network model, which effectively improves the accuracy of the vision detection result, and calculating the result on the cloud server instead of the vision detection device reduces the workload of the vision detection device.
As shown in fig. 11, an embodiment of the present invention provides a vision detection device comprising a camera module and an infrared scanning module, a plurality of infrared lamps being regularly arranged in the infrared scanning module. The vision detection device includes:
an obtaining module 1101, configured to obtain a target distance between a user and the camera module;
the processing module 1102 is configured to sequentially turn on at least one infrared lamp in the infrared scanning module, and obtain a plurality of images to be detected through the camera module, where the infrared lamps in the turned-on state corresponding to each image to be detected are different, and each image to be detected includes both eyes of a user;
the transceiver module 1103 is configured to send the target distance, the plurality of images to be tested, and the position information of the infrared lamp corresponding to each image to be tested, which is in an on state, to the cloud server, and receive the vision detection result sent by the cloud server;
the vision detection result is obtained by detecting the cloud server according to the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected.
Optionally, the camera module comprises a first camera and a second camera, the first camera is a visible-light wide-angle camera, the second camera is a small-field-angle infrared camera,
The processing module 1102 is further configured to determine, when a user is detected by the first camera to exist in a first photographable range of the first camera, position information of both eyes of the user in the first photographable range;
the processing module 1102 is further configured to determine an angle to be adjusted of the second camera according to the relative positional relationship between the first camera and the second camera and the positional information;
the processing module 1102 is further configured to perform angle adjustment on the second camera according to the angle to be adjusted, so that a user exists in a second photographable range of the second camera.
Optionally, the vision detection device is further provided with a bidirectional rotating motor device connected with the camera module,
the obtaining module 1101 is further configured to obtain, when it is detected that a user exists in the second photographable range, an initial image through the second camera;
the processing module 1102 is further configured to detect an initial image, and determine initial positions of both eyes of the user in the initial image;
the processing module 1102 is further configured to determine an offset distance and an offset direction according to the initial position and a center position, where the center position is a center position of the second photographable range;
the processing module 1102 is further configured to determine an angle to be adjusted of the second camera according to the offset distance and the offset direction;
The processing module 1102 is further configured to perform angle adjustment on the second camera according to the angle to be adjusted through the bi-directional rotating motor device, so that both eyes of the user move to the center position.
Optionally, the obtaining module 1101 is specifically configured to obtain, through the second camera, an auto-focused binocular image;
the processing module 1102 is specifically configured to determine a target distance between the user and the camera module according to the auto-focus parameter and the binocular image.
Optionally, the acquiring module 1101 is specifically configured to acquire a first image through the first camera and acquire a second image through the second camera;
the processing module 1102 is specifically configured to determine a target distance between the user and the camera module according to a size of both eyes in the first image, a size of both eyes in the second image, and a relative positional relationship between the first camera and the second camera.
Optionally, the infrared scanning module comprises a first infrared lamp group and a second infrared lamp group, the number of the infrared lamps included in the first infrared lamp group is the same as that of the infrared lamps included in the second infrared lamp group, the first infrared lamp group and the second infrared lamp group both comprise at least one infrared lamp,
the obtaining module 1101 is specifically configured to obtain, through the camera module, at least one first image to be detected when the first infrared lamp group is in an on state and the second infrared lamp group is in an off state;
the obtaining module 1101 is specifically configured to obtain, through the camera module, at least one second image to be detected when the first infrared lamp group is in an off state and the second infrared lamp group is in an on state;
the processing module 1102 is specifically configured to determine a plurality of images to be detected according to the at least one first image to be detected and the at least one second image to be detected.
Optionally, the processing module 1102 is specifically configured to perform binocular identification on the at least one first image to be detected and the at least one second image to be detected respectively, to obtain identification results;
the processing module 1102 is specifically configured to determine, as the images to be detected, all effective images whose identification results indicate that both eyes are present;
at least one effective image exists in the at least one first image to be detected, and at least one effective image exists in the at least one second image to be detected.
In the embodiment of the present invention, each module may implement the vision detection method provided in the above embodiment of the method, and may achieve the same technical effects, so that repetition is avoided, and details are not repeated here.
As shown in fig. 12, an embodiment of the present invention provides a cloud server, which includes:
the receiving and transmitting module 1201 is configured to receive a target distance sent by the vision detection device, a plurality of images to be detected, and position information of an infrared lamp in an on state corresponding to each image to be detected, where the target distance is a distance between a user and the camera module, and the plurality of images to be detected include both eyes of the user;
The processing module 1202 is configured to perform pupil extraction on a plurality of images to be detected, and determine pupil features corresponding to each image to be detected;
the processing module 1202 is further configured to process the target distance, the pupil feature and the position information through the target neural network model to obtain a vision detection result;
the transceiver module 1201 is further configured to send the vision testing result to the vision testing device.
Optionally, the processing module 1202 is specifically configured to perform pupil identification on a plurality of images to be detected, so as to obtain a pupil image corresponding to each image to be detected;
the processing module 1202 is specifically configured to perform feature extraction on each pupil image to obtain pupil features corresponding to each image to be detected.
Optionally, the processing module 1202 is specifically configured to perform profile sampling on each pupil image to obtain at least one pupil profile sampling data set corresponding to each pupil image, where the number of pupil profile sampling data sets is the same as the number of infrared lamps in an on state corresponding to the pupil image;
the processing module 1202 is specifically configured to expand the number of sampling data included in each pupil section sampling data set to a preset number, and arrange at least one expanded pupil section sampling data set to obtain pupil characteristics.
Optionally, the processing module 1202 is specifically configured to determine a target feature point in each image to be measured;
the processing module 1202 is specifically configured to superimpose a plurality of images to be detected according to the target feature points to obtain a superimposed image set;
the processing module 1202 is specifically configured to obtain standard deviation data for an intersection set of the superimposed image sets, so as to obtain a pupil mask image corresponding to the superimposed image sets;
the processing module 1202 is specifically configured to determine, according to the pupil mask images, a pupil image corresponding to each image to be measured.
Optionally, the processing module 1202 is specifically configured to perform binocular recognition on the multiple images to be detected through an image segmentation algorithm, so as to obtain a binocular image corresponding to each image to be detected;
the processing module 1202 is specifically configured to perform pupil recognition on each binocular image through an image segmentation algorithm, so as to obtain a pupil image corresponding to each binocular image.
Optionally, the processing module 1202 is specifically configured to perform convolution processing, pooling processing, data shaping processing, full connection processing, and activation processing on pupil features through the target neural network model, to obtain processed pupil features;
the processing module 1202 is specifically configured to mix the processed pupil feature, the target distance and the position information through the target neural network model to obtain mixed data;
The processing module 1202 is specifically configured to perform activation processing and full connection processing on the mixed data through the target neural network model, so as to obtain a vision detection result.
Optionally, the cloud server further includes:
the acquiring module 1203 is configured to acquire a plurality of test data sets in advance, where the test data sets include: pupil test characteristics, test distance, test position information;
the processing module 1202 is further configured to perform vision labeling on each test data set to obtain a vision test result corresponding to each test data set;
the processing module 1202 is further configured to input a plurality of test data sets and vision test results corresponding to each test data set into the original neural network model for training, so as to obtain a target neural network model.
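A minimal training sketch under the above description, assuming the target model combines a feature branch and a regression head as sketched earlier and that a data loader yields the labelled tuples; the optimizer, loss function and hyperparameters are assumptions, since the embodiment does not specify them.

```python
import torch
import torch.nn as nn

def train_target_model(model, loader, epochs=50, lr=1e-3):
    """loader yields (pupil_feature, distance, position, labelled_diopter)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()              # regression on the labelled diopter
    for _ in range(epochs):
        for feat, dist, pos, diopter in loader:
            opt.zero_grad()
            loss = loss_fn(model(feat, dist, pos), diopter)
            loss.backward()
            opt.step()
    return model
```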
In the embodiment of the present invention, each module may implement the vision detection method provided in the above embodiment of the method, and may achieve the same technical effects, so that repetition is avoided, and details are not repeated here.
As shown in fig. 13, an embodiment of the present invention further provides a vision detection device comprising a camera module and an infrared scanning module, a plurality of infrared lamps being regularly arranged in the infrared scanning module. The vision detection device may include:
A memory 1301 storing executable program code;
a processor 1302 coupled to the memory 1301;
the processor 1302 calls executable program codes stored in the memory 1301 to execute the vision detecting method executed by the vision detecting apparatus in the above-described method embodiments.
As shown in fig. 14, an embodiment of the present invention further provides a cloud server, where the cloud server may include:
a memory 1401 storing executable program code;
a processor 1402 coupled to the memory 1401;
the processor 1402 calls the executable program code stored in the memory 1401 to execute the vision detection method executed by the cloud server in the above-described method embodiments.
The present invention provides a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention also provide a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of the method as in the method embodiments above.
The embodiment of the invention also provides an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and the computer program product, when running on a computer, causes the computer to execute part or all of the steps of the method as in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art will also appreciate that the embodiments described in the specification are alternative embodiments and that the acts and modules referred to are not necessarily required for the present invention. The above embodiments are not necessarily independent embodiments, and the separation into the embodiments is merely used to highlight different technical features in different embodiments, and those skilled in the art should appreciate that the above embodiments may be combined arbitrarily.
In various embodiments of the present invention, it should be understood that the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not be construed as limiting the implementation of the embodiments of the present invention.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on this understanding, the technical solution of the present invention, in essence or as the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc., and in particular may be a processor in a computer device) to execute some or all of the steps of the above-mentioned methods of the various embodiments of the present invention.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable storage medium, including read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage, tape storage, or any other medium that can be used to carry or store computer-readable data.

Claims (19)

1. A vision detection method, applied to a vision detection device, the vision detection device comprising: a camera module and an infrared scanning module, wherein a plurality of infrared lamps are regularly arranged in the infrared scanning module, the method comprising:
Acquiring a target distance between a user and the camera module;
sequentially starting at least one infrared lamp in the infrared scanning module, acquiring a plurality of images to be detected through the camera module, wherein the infrared lamps in the starting state corresponding to each image to be detected are different, and each image to be detected comprises eyes of the user;
transmitting the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected to a cloud server, and receiving a vision detection result transmitted by the cloud server;
the vision detection result is obtained by detecting the cloud server according to the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected.
2. The method of claim 1, wherein the camera module comprises a first camera and a second camera, the first camera is a visible-light wide-angle camera and the second camera is a small-field-angle infrared camera, and before the acquiring of the target distance between the user and the camera module, the method further comprises:
when it is detected through the first camera that the user exists in a first photographable range of the first camera, determining position information of both eyes of the user in the first photographable range;
determining an angle to be adjusted of the second camera according to the relative position relation between the first camera and the second camera and the position information;
and according to the angle to be adjusted, performing angle adjustment on the second camera so that the user exists in a second shooting range of the second camera.
3. The method according to claim 2, wherein the vision detection device is further provided with a bidirectional rotating motor device, the bidirectional rotating motor device is connected with the camera module, and after the second camera is angularly adjusted according to the angle to be adjusted, the method further comprises:
when the user is detected to exist in the second shooting range, acquiring an initial image through the second camera;
detecting the initial image and determining initial positions of the eyes of the user in the initial image;
determining an offset distance and an offset direction according to the initial position and a center position, wherein the center position is the center position of the second photographable range;
Determining an angle to be adjusted of the second camera according to the offset distance and the offset direction;
and carrying out angle adjustment on the second camera according to the angle to be adjusted through the bidirectional rotating motor device, so that the eyes of the user move to the center position.
4. The method of claim 3, wherein the obtaining the target distance between the user and the camera module comprises:
acquiring binocular images after automatic focusing through the second camera;
and determining the target distance between the user and the camera module according to the automatic focusing parameters and the binocular image.
5. The method of claim 3, wherein the obtaining the target distance between the user and the camera module comprises:
acquiring a first image through the first camera and acquiring a second image through the second camera;
and determining the target distance between the user and the camera module according to the sizes of the eyes in the first image, the sizes of the eyes in the second image and the relative position relationship between the first camera and the second camera.
6. The method of claim 1, wherein the infrared scanning module comprises a first infrared lamp group and a second infrared lamp group, the number of infrared lamps included in the first infrared lamp group is the same as the number of infrared lamps included in the second infrared lamp group, the first infrared lamp group and the second infrared lamp group each comprise at least one infrared lamp, and the sequentially turning on of at least one infrared lamp in the infrared scanning module and acquiring of a plurality of images to be detected through the camera module comprises:
when the first infrared lamp group is in an on state and the second infrared lamp group is in an off state, acquiring at least one first image to be detected through the camera module;
when the first infrared lamp group is in an off state and the second infrared lamp group is in an on state, acquiring at least one second image to be detected through the camera module;
and determining the plurality of images to be detected according to the at least one first image to be detected and the at least one second image to be detected.
7. The method of claim 6, wherein the determining of the plurality of images to be detected according to the at least one first image to be detected and the at least one second image to be detected comprises:
respectively performing binocular identification on the at least one first image to be detected and the at least one second image to be detected to obtain identification results;
determining all effective images whose identification results indicate that both eyes are present as the images to be detected;
wherein at least one effective image exists in the at least one first image to be detected, and at least one effective image exists in the at least one second image to be detected.
8. A vision detection method, applied to a cloud server, comprising:
receiving a target distance, a plurality of images to be detected and position information of an infrared lamp in an on state, which corresponds to each image to be detected, sent by a vision detection device, wherein the target distance is the distance between a user and a camera module, and the images to be detected comprise eyes of the user;
pupil extraction is carried out on the images to be detected, and pupil characteristics corresponding to each image to be detected are determined;
processing the target distance, the pupil characteristics and the position information through a target neural network model to obtain a vision detection result;
and sending the vision detection result to the vision detection device.
9. The method of claim 8, wherein the performing pupil extraction on the plurality of images to be detected to determine the pupil feature corresponding to each image to be detected comprises:
pupil identification is carried out on the images to be detected, and pupil images corresponding to the images to be detected are obtained;
and extracting the characteristics of each pupil image to obtain the pupil characteristics corresponding to each image to be detected.
10. The method of claim 9, wherein the performing feature extraction on each pupil image to obtain the pupil feature corresponding to each image to be detected includes:
performing section sampling on each pupil image to obtain at least one pupil section sampling data set corresponding to each pupil image, wherein the number of the pupil section sampling data sets is the same as the number of the infrared lamps in an on state corresponding to the pupil images;
expanding the number of the sampling data included in each pupil section sampling data set to a preset number, and arranging at least one expanded pupil section sampling data set to obtain the pupil characteristics.
11. The method of claim 9, wherein the performing pupil identification on the plurality of images to be detected to obtain the pupil image corresponding to each image to be detected comprises:
Determining target feature points in each image to be detected;
superposing the plurality of images to be detected according to the target feature points to obtain a superposed picture set;
standard deviation data is acquired for the intersection of the superimposed picture sets, and pupil mask images corresponding to the superimposed picture sets are obtained;
and determining pupil images corresponding to each image to be detected according to the pupil mask images.
12. The method of claim 9, wherein the performing pupil identification on the plurality of images to be detected to obtain the pupil image corresponding to each image to be detected comprises:
performing binocular identification on the plurality of images to be detected through an image segmentation algorithm to obtain binocular images corresponding to each image to be detected;
and performing pupil identification on each binocular image through the image segmentation algorithm to obtain pupil images corresponding to each binocular image.
13. The method of claim 8, wherein the processing of the target distance, the pupil feature and the position information through the target neural network model to obtain the vision detection result comprises:
performing convolution processing, pooling processing, data shaping processing, full connection processing and activation processing on the pupil characteristics through the target neural network model to obtain processed pupil characteristics;
Mixing the processed pupil characteristics, the target distance and the position information through the target neural network model to obtain mixed data;
and performing activation processing and full connection processing on the mixed data through the target neural network model to obtain the vision detection result.
14. The method of claim 13, wherein the method further comprises:
a plurality of test data sets are obtained in advance, wherein the test data sets comprise: pupil test characteristics, test distance, test position information;
performing vision labeling on each test data set to obtain a vision test result corresponding to each test data set;
inputting the multiple test data sets and vision test results corresponding to each test data set into an original neural network model for training to obtain the target neural network model.
15. A vision detection device, comprising: a camera module and an infrared scanning module, wherein a plurality of infrared lamps are regularly arranged in the infrared scanning module, the vision detection device further comprising:
the acquisition module is used for acquiring the target distance between the user and the camera module;
The processing module is used for sequentially starting at least one infrared lamp in the infrared scanning module, acquiring a plurality of images to be detected through the camera module, wherein the infrared lamps in the starting state corresponding to each image to be detected are different, and each image to be detected comprises eyes of the user;
the receiving and transmitting module is used for transmitting the target distance, the plurality of images to be detected and the position information of the infrared lamp which corresponds to each image to be detected and is in an on state to the cloud server and receiving the vision detection result transmitted by the cloud server;
the vision detection result is obtained by detecting the cloud server according to the target distance, the plurality of images to be detected and the position information of the infrared lamp in the on state corresponding to each image to be detected.
16. A cloud server, the cloud server comprising:
the receiving and transmitting module is used for receiving a target distance sent by the vision detection device, a plurality of images to be detected and position information of the infrared lamp in an on state corresponding to each image to be detected, wherein the target distance is the distance between a user and the camera module, and the images to be detected comprise eyes of the user;
The processing module is used for extracting pupils of the images to be detected and determining the pupil characteristics corresponding to each image to be detected;
the processing module is further used for processing the target distance, the pupil characteristics and the position information through a target neural network model so as to obtain a vision detection result;
the receiving and transmitting module is further used for transmitting the vision detection result to the vision detection device.
17. A vision detection device, comprising: a camera module and an infrared scanning module, wherein a plurality of infrared lamps are regularly arranged in the infrared scanning module, the vision detection device further comprising:
a memory storing executable program code;
and a processor coupled to the memory;
the processor invokes the executable program code stored in the memory for performing the vision detection method of any one of claims 1 to 7.
18. A cloud server, the cloud server comprising:
a memory storing executable program code;
and a processor coupled to the memory;
the processor invokes the executable program code stored in the memory for performing the vision detection method of any one of claims 8 to 14.
19. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the vision detection method of any one of claims 1 to 7 or the vision detection method of any one of claims 8 to 14.