CN120599536A - A rat identification method, system, device and storage medium under infrared night vision - Google Patents
Publication number: CN120599536A (application CN202510701314.2A, China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a mouse identification method, system, device and storage medium under infrared night vision, applicable to the technical field of computer vision. A plurality of continuous frame images to be detected are acquired, and each frame image is preprocessed to obtain a plurality of continuous processed images. Based on these processed images, target detection is performed on the moving area using the LK optical flow method and a target detection model to obtain a target area. Threshold segmentation and a morphological opening operation are applied to the target area to determine a plurality of mouse-eye candidate positions, and each candidate position is matched one by one with each detection frame to obtain a plurality of mouse detection frames and their corresponding eye feature point sets. Tracking then continues with the optical flow method, improving the stability and accuracy of target tracking.
Description
Technical Field
The invention relates to the technical field of computer vision, in particular to a method, a system, equipment and a storage medium for recognizing mice under infrared night vision.
Background
In many environments, mice are a persistent nuisance. At night in particular, conventional monitoring systems find it difficult to effectively detect and recognize mice because of insufficient light. Although monitoring devices currently on the market can provide night monitoring, they still generally lack effective means of identifying specific targets and cannot accurately detect signs of mouse activity. Furthermore, existing monitoring systems often rely on manual observation, which is not only time consuming and labor intensive, but also performs poorly at night.
In recent years, with the development of image processing technology and computer vision, more and more intelligent monitoring systems have been developed to improve monitoring efficiency and accuracy. In particular, under infrared night vision, a monitoring camera can capture weak heat-source signals in the environment. However, the gray-level images displayed by a monitoring system in infrared night-vision mode carry relatively little feature information, so identifying mice in them remains difficult.
Disclosure of Invention
In order to solve the technical problems above, an embodiment of the invention provides a method, a system, a device and a storage medium for recognizing mice under infrared night vision, which address the technical problem that mice cannot be accurately recognized under infrared night-vision conditions.
A first aspect of an embodiment of the present invention provides a method for identifying a mouse under infrared night vision, including:
acquiring a plurality of continuous frame images to be detected;
preprocessing each frame image to be detected to obtain a plurality of continuous processed images, and calculating with the LK optical flow method based on the plurality of continuous processed images to obtain a motion area;
invoking a target detection model to perform target detection on the motion area to obtain a target area, performing threshold segmentation according to each detection frame in the target area to obtain corresponding binary images, and performing an opening operation on each binary image to obtain a plurality of mouse-eye candidate positions;
and matching each mouse-eye candidate position with each detection frame one by one to obtain a plurality of mouse detection frames and the eye feature point sets corresponding to the mouse detection frames, continuing to process frame images after a preset time with the LK optical flow method and the target detection model to obtain updated eye feature point sets, obtaining the position change of the mice from the eye feature point sets and the updated eye feature point sets, and obtaining a mouse activity heat map from the position change of the mice.
In a possible implementation manner of the first aspect, the calculating using the LK optical flow method based on the continuous plurality of processed images to obtain the motion region includes:
calculating global optical flow according to the neighborhood of the target pixel point in the processed image, and judging that the camera moves if the global optical flow is larger than a preset threshold value;
if the global optical flow is smaller than the preset threshold, the camera is judged to be stationary, optical flow fields of a plurality of continuous processed images are overlapped to obtain an overlapped result, and the processed images are divided according to the overlapped result to obtain a motion area.
In a possible implementation manner of the first aspect, invoking a target detection model to perform target detection on a motion area to obtain a target area, and further includes:
Inputting the motion region into a feature extraction network in a target detection model to perform feature extraction to obtain a first feature extraction result, wherein the feature extraction network is formed based on CSPDARKNET network;
inputting the first feature extraction result into a neck network for feature fusion to obtain a fused feature result;
And inputting the fused characteristic result into a detection head to perform target detection, so as to obtain a target area.
In a possible implementation manner of the first aspect, performing a threshold segmentation process according to each detection frame in the target area to obtain a corresponding binary image, including:
according to each detection frame in the target area, extracting the infrared image area corresponding to the detection frame, calculating a gray-level histogram of each infrared image area, and determining an adaptive threshold from the histogram;
and dividing each infrared image area according to the self-adaptive threshold value to obtain a corresponding binary image.
In a possible implementation manner of the first aspect, the processing the frame image after the preset time by using the LK optical flow method and the object detection model to obtain the updated eye feature point set includes:
Acquiring a plurality of continuous frame images to be detected after a preset time to obtain updated frame images to be detected;
preprocessing each updated frame image to be detected to obtain a plurality of continuous updated processed images, and calculating by using an LK optical flow method based on the plurality of continuous updated processed images to obtain updated motion areas;
invoking a target detection model to perform target detection on the updated motion region to obtain an updated target region, performing threshold segmentation processing according to each detection frame in the updated target region to obtain a corresponding updated binary image, and performing opening operation on each updated binary image to obtain a plurality of updated mouse eye candidate positions;
And matching each updated mouse eye candidate position with each detection frame one by one to obtain a plurality of updated mouse detection frames and corresponding updated eye feature point sets.
To solve the same technical problem, a second aspect of the embodiments of the present invention provides a mouse recognition system under infrared night vision, including:
The acquisition module is used for acquiring a plurality of continuous frame images to be detected;
The gray processing module is used for preprocessing each frame image to be detected to obtain a plurality of continuous processed images, and calculating by using an LK optical flow method based on the plurality of continuous processed images to obtain a motion area;
the detection module is used for calling the target detection model to carry out target detection on the motion area to obtain a target area, carrying out threshold segmentation processing according to each detection frame in the target area to obtain corresponding binary images, and carrying out opening operation on each binary image to obtain a plurality of candidate mouse eye positions;
the tracking module is used for matching the eye candidate positions of the mice with the detection frames one by one to obtain a plurality of mouse detection frames and eye characteristic point sets corresponding to the mouse detection frames, continuously processing frame images after preset time by using an LK optical flow method and a target detection model to obtain an updated eye characteristic point set, obtaining the position change of the mice according to the eye characteristic point set and the updated eye characteristic point set, and obtaining the mouse activity heat map according to the position change of the mice.
In a possible implementation manner of the second aspect, the gray processing module includes a global optical flow calculation unit and a dividing unit, wherein:
the global optical flow calculation unit is used for calculating global optical flow according to the neighborhood of the target pixel point in the processed image, and if the global optical flow is larger than a preset threshold value, the camera is judged to move;
the dividing unit is used for superposing optical flow fields of a plurality of continuous processed images to obtain a superposition result if the global optical flow is smaller than a preset threshold value, and dividing the processed images to obtain a motion area according to the superposition result.
A third aspect of an embodiment of the present invention provides a computer apparatus, comprising:
a memory for storing a computer program;
A processor for performing the steps of the method for recognizing a mouse under infrared night vision as in the first aspect when executing a computer program.
A fourth aspect of the embodiments of the present invention provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for identifying mice under infrared night vision as in the first aspect.
The technical scheme of the invention has the following advantages:
According to the mouse identification method under infrared night vision, a plurality of continuous frame images to be detected are collected and preprocessed to obtain a plurality of continuous processed images, and efficient detection and identification of mice is achieved on these images by combining an optical flow method with a purpose-built target detection model. Specifically, the high reflectivity of mouse eyes in infrared images is exploited: the optical flow method and the target detection model yield a target area, threshold segmentation then detects the mouse-eye candidate positions, and the optical flow method tracks them, improving the stability and accuracy of target tracking. In addition, the target detection model supports multi-scale, multi-target detection, is suitable for mice of different sizes and numbers, and further improves identification accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for recognizing a mouse under infrared night vision according to an embodiment of the present invention;
FIG. 2 is a flow chart of a single-frame mouse detection model of a mouse identification method under infrared night vision according to an embodiment of the invention;
FIG. 3 is a flowchart showing the whole process of mouse detection and tracking by the method for recognizing mice under infrared night vision according to the embodiment of the present invention;
FIG. 4 is a system block diagram of a mouse recognition system under infrared night vision according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in FIG. 1, FIG. 1 is a flowchart of a method for identifying mice under infrared night vision, which is provided by the embodiment of the invention, and comprises steps S101-S104, wherein the steps are as follows:
s101, acquiring a plurality of continuous frame images to be detected.
In this embodiment, a monitoring camera with an infrared night-vision function automatically turns on its infrared illumination mode at night to capture continuous images of the environment. The captured images should have high resolution and a high signal-to-noise ratio to ensure image quality.
It should be noted that the number of frame images to be detected can be chosen according to the actual situation, and a frame image to be detected can be understood as a single static frame of the surveillance video shot by the monitoring camera.
S102, preprocessing each frame image to be detected to obtain a plurality of continuous processed images, and calculating by using an LK optical flow method based on the plurality of continuous processed images to obtain a motion area.
In this embodiment, the acquired frame image to be detected is preprocessed, so as to improve the image quality and the accuracy of subsequent processing. Specifically, the acquired frame image to be detected is a gray image, and the image quality can be improved by using noise removal, contrast enhancement and other pretreatment methods.
Then the optical flow method is used to analyze pixel changes between consecutive frames and detect moving targets in the image: the LK optical flow method forms a sparse optical flow field, and thresholds on the intensity and direction of the flow field distinguish moving pixels from static ones, dividing the image into a motion area and a static area.
In one embodiment, based on a plurality of processed images in succession, the calculation is performed using the LK optical flow method to obtain a motion region, including:
calculating global optical flow according to the neighborhood of the target pixel point in the processed image, and judging that the camera moves if the global optical flow is larger than a preset threshold value;
if the global optical flow is smaller than the preset threshold, the camera is judged to be stationary, optical flow fields of a plurality of continuous processed images are overlapped to obtain an overlapped result, and the processed images are divided according to the overlapped result to obtain a motion area.
In this embodiment, the LK optical flow method assumes that every pixel in the neighborhood of a target pixel m has the same optical flow vector. After solving for the global optical flow, the method checks whether a large global flow exists; if so, the camera is judged to be moving and the dominant flow is treated as a background vector, and if not, the camera is judged to be stationary and analysis continues. The superposed optical flow fields of several continuous processed images are then computed, moving and static pixels are distinguished by a preset threshold, and the image is divided into a motion area and a static area. Connected regions are extracted from the motion area to form candidate moving targets, and static pixels are assigned to the background region.
Specifically, a target pixel point m is selected from a plurality of processed images, all pixels in the neighborhood of the point m are assumed to have the same optical flow, the optical flow field is solved by minimizing gray errors of the pixels in the neighborhood, then the global optical flow is calculated, the operation of the camera is judged according to the global optical flow, and if the global optical flow is larger than a preset threshold, the camera is judged to move. If the global optical flow is smaller than the preset threshold, the camera is judged to be stationary, and then moving target extraction under the stationary camera is carried out.
When extracting the moving target, the optical flow fields of several continuous processed images are obtained and superposed to yield a superposition result, which is divided according to a set threshold: pixels whose superposition result is larger than the set threshold are determined to be the motion area, and pixels whose superposition result is smaller than the set threshold are determined to be the static area.
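As a concrete illustration of the camera-motion check and motion-region division above, the following sketch assumes the per-frame optical flow fields have already been estimated (for example by an LK tracker); the array shapes and threshold values are illustrative only, not taken from the patent.

```python
import numpy as np

def camera_is_moving(flow, global_threshold=2.0):
    """Judge camera motion from the mean optical-flow magnitude.

    flow: (H, W, 2) array of per-pixel displacement vectors.
    If the global (mean) flow magnitude exceeds the threshold,
    the camera itself is judged to be moving.
    """
    magnitude = np.linalg.norm(flow, axis=-1)
    return float(magnitude.mean()) > global_threshold

def motion_region_mask(flows, pixel_threshold=1.0):
    """Superpose consecutive optical flow fields and threshold the result.

    flows: list of (H, W, 2) flow fields from consecutive frame pairs.
    Returns a boolean mask: True for the motion area, False for the
    static area.
    """
    accumulated = np.zeros(flows[0].shape[:2])
    for flow in flows:
        accumulated += np.linalg.norm(flow, axis=-1)
    return accumulated > pixel_threshold

# Synthetic example: a small patch moves while the rest of the scene is still.
flow = np.zeros((32, 32, 2))
flow[10:14, 10:14] = [3.0, 0.0]          # the moving patch
camera_moving = camera_is_moving(flow)   # mean flow is tiny -> camera static
mask = motion_region_mask([flow, flow])  # patch pixels exceed the threshold
```

The global threshold separates camera motion from object motion, and the per-pixel threshold on the superposed magnitudes carves out the motion area from the static area.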
S103, invoking a target detection model to perform target detection on the motion area to obtain a target area, performing threshold segmentation processing according to each detection frame in the target area to obtain corresponding binary images, and performing opening operation on each binary image to obtain a plurality of mouse eye candidate positions.
In this embodiment, a target detection model built on a deep learning network is used to detect mice within the detected movement region. Specifically, the target detection model is built on an improved YOLOv model, and the movement area is scanned with it to identify mice; the mice in the image are determined as the target area. The target area comprises n detection frames. For each detection frame, an adaptive threshold is set using histogram analysis, and the infrared image of each detection-frame region is divided according to that threshold into a foreground area that may contain the eye region and a background area, yielding n binary images.
Then, a morphological opening operation is applied to the n binary images to remove small noise points, circles in the images are found through the Hough circle transform, and each circle center is taken as a mouse-eye candidate position.
It should be noted that, for the n detection frames of each frame image, at most 2n mouse-eye candidate positions can be obtained. The target detection model is trained on sample data. As shown in FIG. 2, mouse images covering different spatial backgrounds, multiple postures and multiple time periods are collected and organized into a dataset, and a single-frame mouse detection model is trained for the detection task. Before training, several surveillance videos in infrared night-vision mode, each containing multiple mice, are collected to form the original dataset. Valid images containing mice are sampled from the original dataset to form a first dataset. Images in the first dataset are then screened for clearly visible mouse eyes, and those with visible eyes form a second dataset. The mouse images in the second dataset are cropped, extracted and augmented, then pasted onto other background images to form a third dataset. The improved YOLOv model is trained on the third dataset until convergence, yielding the target detection model.
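The threshold segmentation and opening steps can be sketched in plain NumPy as follows, with connected-component centroids standing in for the Hough circle transform as a simplified way of picking candidate eye positions; the blob sizes and gray values in the example are illustrative assumptions.

```python
import numpy as np

def otsu_threshold(gray):
    """Adaptive threshold from the gray-level histogram (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                      # cumulative pixel counts
    cum_m = np.cumsum(hist * np.arange(256))     # cumulative intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, 255):
        w0, w1 = cum_w[t], total - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t] / w0
        m1 = (cum_m[-1] - cum_m[t]) / w1
        var = w0 * w1 * (m0 - m1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binary_open(mask):
    """3x3 morphological opening (erosion then dilation) on a boolean mask."""
    def neighborhoods(m):
        p = np.pad(m, 1)
        return np.stack([p[i:i + m.shape[0], j:j + m.shape[1]]
                         for i in range(3) for j in range(3)])
    eroded = neighborhoods(mask).all(axis=0)     # removes isolated noise pixels
    return neighborhoods(eroded).any(axis=0)     # restores surviving blobs

def eye_candidates(region):
    """Centroids of bright blobs, as candidate mouse-eye positions."""
    fg = binary_open(region > otsu_threshold(region))
    labels = np.zeros(fg.shape, dtype=int)
    centroids, current = [], 0
    for sy, sx in zip(*np.nonzero(fg)):          # flood fill, 4-connectivity
        if labels[sy, sx]:
            continue
        current += 1
        labels[sy, sx] = current
        stack, pixels = [(sy, sx)], []
        while stack:
            y, x = stack.pop()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < fg.shape[0] and 0 <= nx < fg.shape[1]
                        and fg[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    stack.append((ny, nx))
        ys, xs = zip(*pixels)
        centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

# Synthetic detection-frame region: two bright "eyes" plus one noise pixel.
region = np.full((40, 40), 20, dtype=np.uint8)
region[5:9, 5:9] = 220       # first eye blob
region[20:24, 28:32] = 220   # second eye blob
region[35, 2] = 220          # isolated noise pixel, removed by the opening
cands = eye_candidates(region)
```

The opening discards the single noise pixel while the two blobs survive, leaving exactly two candidate positions per the at-most-2n bound noted above.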
In an embodiment, invoking the target detection model to perform target detection on the motion area to obtain a target area, and further includes:
Inputting the motion region into a feature extraction network in a target detection model to perform feature extraction to obtain a first feature extraction result, wherein the feature extraction network is formed based on CSPDARKNET network;
inputting the first feature extraction result into a neck network for feature fusion to obtain a fused feature result;
And inputting the fused characteristic result into a detection head to perform target detection, so as to obtain a target area.
In this embodiment, when the improved YOLOv model detects the motion area, the backbone uses the CSPDARKNET feature extraction network, extracting multi-level features of the motion area (the first feature extraction result) through multiple convolution layers, residual blocks and cross-stage partial (CSP) structures. A neck network then fuses the extracted multi-level features to obtain a fused feature result; during fusion, the neck uses structures such as SPP modules and PANet, and multi-scale feature fusion improves the model's ability to detect targets of different sizes. The fused feature result is input into the detection head for target detection, yielding the target area.
In addition, SENet is introduced into the backbone network: the global context is used to recalibrate the weights of different channels and adjust channel dependencies, strengthening the model's attention to key features. The default activation function SiLU is also replaced by GELU, improving the model's nonlinear expressive capacity and convergence rate.
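The channel recalibration performed by the SE block can be illustrated with a minimal NumPy sketch; the reduction ratio, weight shapes and random initialization below are assumptions for demonstration, not the patent's configuration.

```python
import numpy as np

def se_block(features, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel recalibration, NumPy sketch.

    features: (C, H, W) feature map from the backbone.
    w1: (C // r, C) and w2: (C, C // r) are the two FC-layer weights.
    """
    squeeze = features.mean(axis=(1, 2))               # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze + b1, 0.0)        # FC + ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))  # FC + sigmoid -> channel weights
    return features * scale[:, None, None]             # reweight each channel

# Toy feature map with C=8 channels and an assumed reduction ratio r=2.
rng = np.random.default_rng(0)
c, r = 8, 2
x = rng.standard_normal((c, 16, 16))
w1, b1 = 0.1 * rng.standard_normal((c // r, c)), np.zeros(c // r)
w2, b2 = 0.1 * rng.standard_normal((c, c // r)), np.zeros(c)
y = se_block(x, w1, b1, w2, b2)
```

Because the sigmoid output lies in (0, 1), each channel is attenuated according to the global context, which is exactly the recalibration effect described above.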
In one embodiment, performing threshold segmentation processing according to each detection frame in the target area to obtain a corresponding binary image, including:
Extracting infrared image areas corresponding to the detection frames according to each detection frame in the target area, calculating gray histograms of each infrared image area, and determining a self-adaptive threshold according to the gray histograms;
and dividing each infrared image area according to the self-adaptive threshold value to obtain a corresponding binary image.
In this embodiment, according to the n detection frames in the target area acquired in S103, the infrared image area corresponding to each detection frame is cut out, the gray distribution of each infrared image area is counted, a corresponding histogram is obtained, and the histogram can be smoothed, so that the noise influence is reduced. An adaptive threshold is then set based on the histogram.
An adaptive threshold is applied to divide each infrared image area into a foreground area, which may contain the eye region, and a background area, yielding n binary images. Specifically, each infrared image region is binarized against its adaptive threshold to obtain a mask M_i:

M_i(x, y) = 1 if R_i(x, y) > T_i, and M_i(x, y) = 0 otherwise,

where R_i is the i-th infrared image region and T_i is its adaptive threshold.

The binary image is then B_i = M_i · R_i, i.e. the infrared image region with only the foreground retained.
It should be noted that, when the adaptive threshold is determined, the adaptive threshold may be calculated according to a method commonly used at present in combination with an actual situation, which is not described herein.
S104, matching each mouse-eye candidate position with each detection frame one by one to obtain a plurality of mouse detection frames and the eye feature point sets corresponding to them; continuing to process frame images after a preset time with the LK optical flow method and the target detection model to obtain updated eye feature point sets; obtaining the position change of the mice from the eye feature point sets and the updated eye feature point sets; and obtaining a mouse activity heat map from the position change of the mice.
In this embodiment, the n detection frames are numbered, and all mouse-eye candidate positions are matched one by one to their corresponding detection frames, yielding n mouse detection frames and the corresponding eye feature point set P_t. The optical flow method is then used to track the position change of the mouse eyes: the next preprocessed frame image is read, the LK optical flow method predicts the optical flow field, and the feature point set is updated to P_t+1. In other words, tracking combines the mouse positions with their eye features, and subsequent video frames are read continuously to form a mouse motion trajectory from the eye positions. Specifically, a detection interval Δt is set; at time t+Δt, the target detection model is run again, and the Hungarian algorithm matches the new detections to the original detection frames, keeping the trajectory continuous over the whole process.
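The re-detection matching at time t+Δt can be illustrated with the following sketch, which uses a greedy nearest-center assignment as a simplified stand-in for the Hungarian algorithm; the (x, y, w, h) box format and the distance gate are illustrative assumptions.

```python
import numpy as np

def box_center(box):
    """Center point of an (x, y, w, h) detection frame."""
    x, y, w, h = box
    return np.array([x + w / 2.0, y + h / 2.0])

def match_detections(tracks, detections, max_dist=50.0):
    """Greedy nearest-center matching between tracks and new detections.

    A simplified stand-in for the Hungarian algorithm: repeatedly take
    the globally smallest remaining center distance, gated by max_dist.
    Returns (track_index, detection_index) pairs.
    """
    cost = np.array([[np.linalg.norm(box_center(t) - box_center(d))
                      for d in detections] for t in tracks])
    pairs, used_t, used_d = [], set(), set()
    for flat in np.argsort(cost, axis=None):
        ti, di = divmod(int(flat), cost.shape[1])
        if ti in used_t or di in used_d or cost[ti, di] > max_dist:
            continue
        pairs.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    return pairs

# Two tracked mice re-detected at time t + delta-t, reported in swapped order.
tracks = [(10, 10, 20, 20), (100, 100, 20, 20)]
detections = [(102, 98, 20, 20), (12, 11, 20, 20)]
pairs = match_detections(tracks, detections)
```

Greedy assignment is not globally optimal like the Hungarian algorithm, but for well-separated mice the results coincide, and the distance gate keeps spurious matches out of the trajectory.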
In general, a region with high brightness is detected by a histogram analysis method, morphological operation is used to detect candidate positions of mouse eyes, then, the mouse in a corresponding detection frame to which the eyes belong is judged by combining the detected mouse positions and the characteristics of the eyes, and in the subsequent detection, the position change of the mouse eyes is tracked by an optical flow method, so that the position change of the mouse is obtained.
In one embodiment, the processing of the frame image after the preset time to obtain the updated eye feature point set using the LK optical flow method and the object detection model includes:
Acquiring a plurality of continuous frame images to be detected after a preset time to obtain updated frame images to be detected;
preprocessing each updated frame image to be detected to obtain a plurality of continuous updated processed images, and calculating by using an LK optical flow method based on the plurality of continuous updated processed images to obtain updated motion areas;
invoking a target detection model to perform target detection on the updated motion region to obtain an updated target region, performing threshold segmentation processing according to each detection frame in the updated target region to obtain a corresponding updated binary image, and performing opening operation on each updated binary image to obtain a plurality of updated mouse eye candidate positions;
And matching each updated mouse eye candidate position with each detection frame one by one to obtain a plurality of updated mouse detection frames and corresponding updated eye feature point sets.
In this embodiment, after n mouse detecting frames and the corresponding eye feature point sets P t are obtained, the next frame of image is continuously processed. It is understood that, assuming that the current time is t, the time t+Δt is reached after the time interval Δt. And at the new moment, re-acquiring a plurality of continuous frame images to be detected, obtaining a plurality of continuous updated frame images, preprocessing, and calculating the plurality of continuous updated processed frame images by using an LK optical flow method to obtain an updated motion region. Performing target detection on the updated movement region by using a target detection model to obtain an updated target region, then performing threshold segmentation processing according to each detection frame in the updated target region to obtain a corresponding updated binary image, performing opening operation on each updated binary image to obtain a plurality of updated mouse eye candidate positions, and performing one-by-one matching on each updated mouse eye candidate position and each detection frame to obtain a plurality of updated mouse detection frames and a corresponding updated eye feature point set P t+1.
Then, from the eye feature point sets obtained by processing the continuous frame images at each moment, statistics such as the number of mice, their activity time and activity area in a specific scene are computed, enabling functions such as rodent-prevention early warning for each site.
An activity heat map of the mice is generated from the recorded activity-track information, and detected activity data such as activity time and activity position are stored at a designated location.
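Turning the recorded mouse positions into an activity heat map amounts to accumulating them into a 2D histogram over the image; the frame size and bin counts below are illustrative assumptions.

```python
import numpy as np

def activity_heatmap(positions, frame_shape, bins=(8, 8)):
    """Accumulate tracked (x, y) mouse positions into an activity heat map.

    positions: iterable of (x, y) image coordinates over time.
    frame_shape: (height, width) of the monitored image.
    Returns heat[row_bin, col_bin] = number of visits to that cell.
    """
    h, w = frame_shape
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    heat, _, _ = np.histogram2d(ys, xs, bins=bins, range=[[0, h], [0, w]])
    return heat

# Tracked positions clustered near the top-left corner of a 480x640 frame,
# plus one visit near the bottom-right.
positions = [(12, 15), (14, 18), (13, 16), (600, 400)]
heat = activity_heatmap(positions, (480, 640))
```

Cells with high counts mark the most frequented areas, which is the information the rodent-prevention early warning draws on.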
As shown in FIG. 3, an infrared night vision gray level image in the environment is obtained through image acquisition, image preprocessing is performed to improve quality, then an LK optical flow method is adopted to detect a moving target and classify the moving target into a static pixel and a moving pixel, an improved YOLOv model is used for accurately detecting mice, the eyes of the mice are detected through threshold segmentation and morphological operation, and the positions of the mice are continuously tracked through an optical flow method to generate a movement track of the mice.
Specifically, by combining the optical flow method with the target detection model, the presence of mice can be detected efficiently and accurately with an infrared night-vision camera at night. Conventional target tracking methods generally depend on the overall appearance of the target and easily lose it against complex backgrounds or under occlusion; here, the optical flow method effectively detects moving targets, the target detection model accurately identifies mice, and their combination significantly improves detection efficiency and accuracy. Conventional single-frame target detection does not exploit video information and is prone to false and missed detections in complex scenes, whereas the target detection model supports multi-scale, multi-target detection and can detect mice of different sizes and numbers at night. Conventional methods also often cannot handle complex scenes and struggle to detect mice under different illumination conditions, shooting distances and angles.
Moreover, by using the high reflectivity of mouse eyes for target tracking, the movement trace of a mouse can be captured and tracked more accurately at night. Mouse eyes appear to 'glow' in infrared images, with high brightness, and can therefore be reliably picked out by threshold segmentation. Using the eye information effectively handles special cases such as recognition during rapid movement, while providing reliable feature points for target tracking and accurate localization of the mice. In conventional target detection for general scenes, the detected target usually has no such distinctive image attribute; the lack of reliable feature points during rapid movement easily causes tracking failure or false detection, so a comparable combination of an optical flow method and a neural network struggles to reach good detection precision. The present method significantly improves the stability and accuracy of target tracking, so mice can be detected and tracked more effectively under infrared night vision at night.
As shown in fig. 4, fig. 4 is a system block diagram of a mouse recognition system 400 under infrared night vision, which includes:
an acquisition module 401, configured to acquire a plurality of continuous frame images to be detected;
The gray processing module 402 is configured to perform preprocessing on each frame image to be detected to obtain a plurality of continuous processed images, and calculate by using an LK optical flow method based on the plurality of continuous processed images to obtain a motion region;
The detection module 403 is configured to invoke a target detection model to perform target detection on the motion region, obtain a target region, perform threshold segmentation processing according to each detection frame in the target region, obtain a corresponding binary image, and perform an opening operation on each binary image to obtain a plurality of candidate mouse eye positions;
the tracking module 404 is configured to match each candidate position of the mouse eye with each detection frame one by one to obtain a plurality of mouse detection frames and eye feature point sets corresponding to the mouse detection frames, and process frame images after a preset time by continuously using an LK optical flow method and a target detection model to obtain an updated eye feature point set, obtain a position change of the mouse according to the eye feature point set and the updated eye feature point set, and obtain a mouse activity heat map according to the position change of the mouse.
The gray processing module 402 includes a global optical flow calculation unit and a division unit:
the global optical flow calculation unit is used for calculating global optical flow according to the neighborhood of the target pixel point in the processed image, and if the global optical flow is larger than a preset threshold value, the camera is judged to move;
the dividing unit is used for superposing optical flow fields of a plurality of continuous processed images to obtain a superposition result if the global optical flow is smaller than a preset threshold value, and dividing the processed images to obtain a motion area according to the superposition result.
The specific embodiment of the mouse identification system under infrared night vision is substantially the same as that of the mouse identification method under infrared night vision, and will not be described herein again.
In one embodiment of the present application, a computer device is provided, where the computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the above steps when executing the computer program.
In one embodiment of the present application, a computer readable storage medium is provided, on which a computer program is stored, which when executed by a processor, implements the above steps, and the implementation principle and technical effects of the computer readable storage medium provided in this embodiment are similar to those of the above method embodiments, and are not described herein again.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing embodiments have been provided for the purpose of illustrating the general principles of the present invention, and are not to be construed as limiting the scope of the invention. It should be noted that any modifications, equivalent substitutions, improvements, etc. made by those skilled in the art without departing from the spirit and principles of the present invention are intended to be included in the scope of the present invention.
Claims (9)
1. A method for identifying mice under infrared night vision, comprising:
Acquiring a plurality of continuous frame images to be detected;
Preprocessing each frame image to be detected to obtain a plurality of continuous processed images, and calculating by using an LK optical flow method based on the plurality of continuous processed images to obtain a motion region;
Invoking a target detection model to perform target detection on the motion region to obtain a target region, performing threshold segmentation processing according to each detection frame in the target region to obtain corresponding binary images, and performing opening operation on each binary image to obtain a plurality of mouse eye candidate positions;
and matching the candidate positions of the eyes of the mice with the detection frames one by one to obtain a plurality of mouse detection frames and eye feature point sets corresponding to the mouse detection frames, continuously using the LK optical flow method and the target detection model to process frame images after preset time to obtain updated eye feature point sets, obtaining the position change of the mice according to the eye feature point sets and the updated eye feature point sets, and obtaining a mouse activity heat map according to the position change of the mice.
2. The method for recognizing a mouse under infrared night vision according to claim 1, wherein the calculating using the LK optical flow method based on the plurality of continuous processed images to obtain the motion region comprises:
Calculating global optical flow according to the neighborhood of the target pixel point in the processed image, and judging that the camera moves if the global optical flow is larger than a preset threshold value;
If the global optical flow is smaller than the preset threshold, the camera is judged to be stationary, optical flow fields of a plurality of continuous processed images are overlapped to obtain an overlapping result, and the processed images are divided according to the overlapping result to obtain a motion area.
3. The method for recognizing a mouse under infrared night vision according to claim 1, wherein the invoking the target detection model to perform target detection on the motion region to obtain a target region further comprises:
inputting the motion region into a feature extraction network in the target detection model to perform feature extraction to obtain a first feature extraction result, wherein the feature extraction network is formed based on a CSPDarknet network;
inputting the first feature extraction result into a neck network to perform feature fusion to obtain a fused feature result;
and inputting the fused characteristic result into a detection head to carry out target detection, so as to obtain a target area.
4. The method for recognizing a mouse under infrared night vision according to claim 1, wherein the performing a threshold segmentation process according to each detection frame in the target area to obtain a corresponding binary image comprises:
extracting infrared image areas corresponding to the detection frames according to each detection frame in the target area, calculating gray histograms of the infrared image areas, and determining an adaptive threshold according to the gray histograms;
And dividing each infrared image area according to the self-adaptive threshold value to obtain a corresponding binary image.
5. The method for recognizing a mouse under infrared night vision according to claim 1, wherein the processing the frame images after a preset time using the LK optical flow method and the target detection model to obtain the updated eye feature point set comprises:
Acquiring a plurality of continuous frame images to be detected after a preset time to obtain updated frame images to be detected;
preprocessing each updated frame image to be detected to obtain a plurality of continuous updated processed images, and calculating by using an LK optical flow method based on the plurality of continuous updated processed images to obtain an updated motion region;
Invoking a target detection model to perform target detection on the updated motion region to obtain an updated target region, performing threshold segmentation processing according to each detection frame in the updated target region to obtain a corresponding updated binary image, and performing opening operation on each updated binary image to obtain a plurality of updated mouse eye candidate positions;
and matching each updated mouse eye candidate position with each detection frame one by one to obtain a plurality of updated mouse detection frames and corresponding updated eye feature point sets.
6. An infrared night vision rat identification system, comprising:
The acquisition module is used for acquiring a plurality of continuous frame images to be detected;
the gray processing module is used for preprocessing each frame image to be detected to obtain a plurality of continuous processed images, and calculating by using an LK optical flow method based on the plurality of continuous processed images to obtain a motion area;
The detection module is used for calling a target detection model to carry out target detection on the motion area to obtain a target area, carrying out threshold segmentation processing according to each detection frame in the target area to obtain corresponding binary images, and carrying out opening operation on each binary image to obtain a plurality of mouse eye candidate positions;
The tracking module is used for matching the candidate positions of the eyes of the mice with the detection frames one by one to obtain a plurality of mouse detection frames and eye characteristic point sets corresponding to the mouse detection frames, continuously using the LK optical flow method and the target detection model to process frame images after preset time to obtain updated eye characteristic point sets, obtaining the position change of the mice according to the eye characteristic point sets and the updated eye characteristic point sets, and obtaining the mouse activity heat map according to the position change of the mice.
7. The infrared night vision mouse recognition system of claim 6, wherein the gray scale processing module comprises a global optical flow calculation unit and a division unit:
The global optical flow calculation unit is used for calculating global optical flow according to the neighborhood of the target pixel point in the processed image, and if the global optical flow is larger than a preset threshold value, the camera is judged to move;
And the dividing unit is used for superposing optical flow fields of a plurality of continuous processed images to obtain a superposition result if the global optical flow is smaller than the preset threshold value, and dividing the processed images to obtain a motion area according to the superposition result.
8. A computer device, comprising:
a memory for storing a computer program;
a processor for implementing the method for recognizing mice under infrared night vision according to any one of claims 1 to 5 when executing the computer program.
9. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for mouse identification under infrared night vision as claimed in any one of claims 1 to 5.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510701314.2A CN120599536A (en) | 2025-05-28 | 2025-05-28 | A rat identification method, system, device and storage medium under infrared night vision |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510701314.2A CN120599536A (en) | 2025-05-28 | 2025-05-28 | A rat identification method, system, device and storage medium under infrared night vision |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120599536A true CN120599536A (en) | 2025-09-05 |
Family
ID=96898554
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510701314.2A Pending CN120599536A (en) | 2025-05-28 | 2025-05-28 | A rat identification method, system, device and storage medium under infrared night vision |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120599536A (en) |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN103971386B (en) | A kind of foreground detection method under dynamic background scene | |
| KR101731243B1 (en) | A video surveillance apparatus for identification and tracking multiple moving objects with similar colors and method thereof | |
| CN111582126B (en) | Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion | |
| Ali et al. | Vehicle detection and tracking in UAV imagery via YOLOv3 and Kalman filter | |
| CN108197604A (en) | Fast face positioning and tracing method based on embedded device | |
| Naufal et al. | Preprocessed mask RCNN for parking space detection in smart parking systems | |
| CN115205655B (en) | Infrared dark spot target detection system under dynamic background and detection method thereof | |
| CN110298297A (en) | Flame identification method and device | |
| EP3376438B1 (en) | A system and method for detecting change using ontology based saliency | |
| CN111814690B (en) | Target re-identification method, device and computer readable storage medium | |
| CN119672613B (en) | A surveillance video information intelligent processing system based on cloud computing | |
| CN118864537B (en) | A method, device and equipment for tracking moving targets in video surveillance | |
| JP2024516642A (en) | Behavior detection method, electronic device and computer-readable storage medium | |
| CN115049954A (en) | Target identification method, device, electronic equipment and medium | |
| CN112541403B (en) | Indoor personnel falling detection method by utilizing infrared camera | |
| CN114821441B (en) | A deep learning-based method for identifying moving targets at airports combined with ADS-B information | |
| KR101243294B1 (en) | Method and apparatus for extracting and tracking moving objects | |
| CN112465854A (en) | Unmanned aerial vehicle tracking method based on anchor-free detection algorithm | |
| CN119131364A (en) | A method for detecting small targets in drones based on unsupervised adversarial learning | |
| CN116665015B (en) | A method for detecting weak and small targets in infrared sequence images based on YOLOv5 | |
| KR101690050B1 (en) | Intelligent video security system | |
| CN111275733A (en) | Method for realizing rapid tracking processing of multiple ships based on deep learning target detection technology | |
| Kapoor | A video surveillance detection of moving object using deep learning | |
| CN116824641B (en) | Gesture classification method, device, equipment and computer storage medium | |
| CN119442145A (en) | A multi-target detection method for complex traffic scenes |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||