
CN111210464A - System and method for alarming people falling into water based on convolutional neural network and image fusion - Google Patents


Info

Publication number
CN111210464A
CN111210464A
Authority
CN
China
Prior art keywords
image
module
neural network
visible light
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911398640.1A
Other languages
Chinese (zh)
Inventor
周文闻
周航
黄滔
陈冬梅
张婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Marine Diesel Engine Research Institute
Original Assignee
Shanghai Marine Diesel Engine Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Marine Diesel Engine Research Institute filed Critical Shanghai Marine Diesel Engine Research Institute
Priority to CN201911398640.1A
Publication of CN111210464A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a person-overboard alarm system and method based on a convolutional neural network and image fusion, comprising: an image acquisition module that collects visible light and infrared image data; an image storage module that saves the visible light and infrared images acquired by the image acquisition module in real time and compresses them into a video stream; an image registration module that calibrates the spatial information of the visible light and infrared images; an image fusion module that fuses the visible light and infrared images using multi-scale transforms and fusion rules; a convolutional neural network module that performs target detection and judges whether a person-overboard emergency has occurred; a control alarm module that receives the judgment result of the convolutional neural network module and, when an emergency occurs, activates the alarm and stores the overboard information; and an image display module that displays the output of the convolutional neural network module and the data of the image storage module. In complex navigation environments, the invention achieves a higher recognition rate for persons in the water than traditional target detection algorithms.

Figure 201911398640



Description

System and method for alarming people falling into water based on convolutional neural network and image fusion
Technical Field
The invention relates to the technical field of target detection, and in particular to a system and a method for giving an alarm when a person falls into the water, based on a convolutional neural network and image fusion.
Background
During ship navigation, the waters around the vessel must be monitored day and night for the risk of people falling overboard. To obtain real-time information about areas where an emergency may occur, a visible light camera is deployed for video monitoring, and a thermal infrared imager is deployed to obtain thermal video images at night, in dense fog, and in similar conditions. Having crew members watch the resulting video in real time wastes time and labor: human concentration is limited, the efficiency is very low, and an overboard emergency is easily missed. Automatically detecting whether an image contains a person by means of a target detection method is therefore the development trend of video monitoring.
Many security monitoring devices on the market have basic target identification functions, but their accuracy presupposes sufficient light in the monitored area and a fixed background. A ship under way cannot carry a supplementary light source at all times, and backlit scenes often appear as the heading changes. The navigation environment also brings severe weather such as fog, rain, and snow, so the image background is highly variable, and the form and posture of a person in the water are not fixed. Devices with only simple identification functions therefore do not meet the requirements of the shipping industry. In recent years, target detection has shifted from traditional hand-crafted feature extraction to feature extraction based on convolutional neural networks, and deep learning models have gradually replaced traditional machine vision methods to become the mainstream algorithms in the field of target detection.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a personnel overboard alarm system and a personnel overboard alarm method based on a convolutional neural network and image fusion.
The invention provides a personnel overboard alarm system based on a convolutional neural network and image fusion, which comprises:
an image acquisition module: the image acquisition module acquires data information of visible light images and infrared images;
an image storage module: the image storage module stores the visible light image and the infrared image acquired by the image acquisition module in real time and compresses the visible light image and the infrared image into a video stream;
an image registration module: the image registration module calibrates the spatial information of the visible light image and the infrared image;
an image fusion module: fusing the visible light image and the infrared image calibrated by the image registration module by utilizing a multi-scale transformation and fusion rule;
a convolutional neural network module: the convolutional neural network module performs target detection by using the fused visible light image and infrared image, and judges whether a dangerous case of people falling into water occurs;
the control alarm module: feeding back a judgment result of the convolutional neural network module to a control alarm module, starting an alarm when a dangerous case occurs, and storing water falling information;
an image display module: and the image display module displays the output result of the convolutional neural network module and the data of the image storage module.
Preferably, the image registration module comprises: the image registration module carries out smooth denoising processing on the visible light image and the infrared image, and calibrates the visible light image and the infrared image at the same moment according to the field angle range of the image resolution, so that the spatial areas of the images are the same.
Preferably, the convolutional neural network module includes: the method comprises the steps that a deep learning model trained by visible light image data and infrared image data is utilized, real-time image data are input into the deep learning model, whether a current image contains people falling into water or not is calculated according to network weights obtained by training, and if the current image contains people falling into water, marking is carried out;
the convolutional neural network model takes a residual error network as a main body, and a neural network layer comprises a convolutional layer and a pooling layer;
the deep learning model is a convolutional neural network model;
when the deep learning model is trained by using the visible light image data and the infrared image data, the visible light image data and the infrared image data comprise image data of people who fall into water and image data of people who do not fall into water, and the quantity of the image data of the people who fall into water is equivalent to that of the image data of the people who do not fall into water.
Preferably, the control alarm module comprises: an alarm unit and an alarm information storage unit;
when the control alarm module obtains the dangerous case information, the alarm unit starts an alarm; the alarm information storage unit stores the drowning information;
the drowning information comprises a drowning time and a drowning position.
Preferably, the image display module includes: displaying different image data according to different output results of the convolutional neural network module, and when an emergency occurs, displaying data of an emergency area and marking people falling into water; when no dangerous case occurs, displaying real-time images of each monitoring area;
the image display module displays the data of the image storage module, and the video is read from the image storage module in a selected time period.
The invention provides a personnel overboard alarm method based on a convolutional neural network and image fusion, which comprises the following steps:
an image acquisition step: the image acquisition module acquires data information of visible light images and infrared images;
an image storage step: the image storage module stores the visible light image and the infrared image acquired in the image acquisition step in real time and compresses the visible light image and the infrared image into a video stream;
an image registration step: the image registration module calibrates the spatial information of the visible light image and the infrared image;
an image fusion step: fusing the visible light image and the infrared image calibrated in the image registration step by utilizing a multi-scale transformation and fusion rule;
a convolution neural network step: the convolutional neural network module performs target detection by using the fused visible light image and infrared image, and judges whether a dangerous case of people falling into water occurs;
controlling and alarming: feeding back a judgment result of the convolutional neural network module to a control alarm module, starting an alarm when a dangerous case occurs, and storing water falling information;
an image display step: the image display module displays the output result of the convolutional neural network module and the data of the image storage module.
Preferably, the image registration step comprises: the image registration module carries out smooth denoising processing on the visible light image and the infrared image, and calibrates the visible light image and the infrared image at the same moment according to the field angle range of the image resolution, so that the spatial areas of the images are the same.
Preferably, the convolutional neural network step comprises: the method comprises the steps that a deep learning model trained by visible light image data and infrared image data is utilized, real-time image data are input into the deep learning model, whether a current image contains people falling into water or not is calculated according to network weights obtained by training, and if the current image contains people falling into water, marking is carried out;
the convolutional neural network model takes a residual error network as a main body, and a neural network layer comprises a convolutional layer and a pooling layer;
the deep learning model is a convolutional neural network model;
when the deep learning model is trained by using the visible light image data and the infrared image data, the visible light image data and the infrared image data comprise image data of people who fall into water and image data of people who do not fall into water, and the quantity of the image data of the people who fall into water is equivalent to that of the image data of the people who do not fall into water.
Preferably, the controlling and alarming step includes: the control alarm module comprises an alarm unit and an alarm information storage unit; when the control alarm module obtains the dangerous case information, the alarm unit starts an alarm; the alarm information storage unit stores the drowning information;
the drowning information comprises a drowning time and a drowning position.
Preferably, the image displaying step includes: displaying different image data according to different output results of the convolutional neural network module, and when an emergency occurs, displaying data of an emergency area and marking people falling into water; when no dangerous case occurs, displaying real-time images of each monitoring area;
the image display module displays the data of the image storage module, and the video is read from the image storage module in a selected time period.
Compared with the prior art, the invention has the following beneficial effects:
1. the high resolution of the visible light camera (strong detail resolution, with sensors reaching millions of pixels) and the all-weather monitoring capability of the infrared imager are exploited, and a data fusion algorithm is designed to obtain a fused image from the two devices;
2. establishing navigation environment databases of two imaging devices, analyzing image characteristics of navigation environments, designing a neural network structure, establishing a target detection model, and realizing end-to-end training on acquired navigation pictures, wherein the obtained deep learning model can overcome the defects of the traditional target detection algorithm, has better flexibility and more accurate result;
3. when the dangerous case that people fall into water is detected, automatic alarm is realized, workers are reminded to intervene in time, and the falling water information is stored in a database so as to carry out detailed analysis and history tracing.
4. a higher recognition rate for persons in the water is obtained in complex navigation environments than with traditional target detection algorithms.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a block diagram of a personnel overboard alarm system based on a deep learning network and image fusion according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications that would be obvious to those skilled in the art can be made without departing from the spirit of the invention, and all of them fall within the scope of the present invention.
Aiming at the technical problems in the prior art, the method provided by the invention processes the visible light image and the infrared image data based on a deep learning method so as to realize personnel target identification, and simultaneously develops an image fusion algorithm by utilizing the complementary characteristics of two devices, enhances the image characteristic information and improves the identification accuracy.
The invention provides a personnel overboard alarm system based on a convolutional neural network and image fusion, which, as shown in fig. 1, comprises:
an image acquisition module: the image acquisition module acquires data information of visible light images and infrared images;
the image acquisition module is divided into a visible light acquisition device and an infrared image acquisition device, and image data output by different image acquisition devices are subjected to interframe synchronization. The image storage module compresses the collected image data into a video data stream in an H.264 format and stores the video data stream in a disk array;
the image acquisition module is respectively connected with the image storage module and the image registration module, and the image storage module stores real-time image data as a video stream in an H.264 format.
An image storage module: the image storage module stores the visible light image and the infrared image acquired by the image acquisition module in real time and compresses the visible light image and the infrared image into a video stream;
an image registration module: the image registration module calibrates the spatial information of the visible light image and the infrared image;
specifically, the image registration module comprises: the image registration module carries out smooth denoising processing on the visible light image and the infrared image, and calibrates the visible light image and the infrared image at the same moment according to the field angle range of the image resolution, so that the spatial areas of the images are the same.
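The registration described above — smoothing both frames, then matching the two sensors so that the images cover the same spatial area — can be sketched as follows. This is an illustrative sketch only: the patent names neither a specific smoothing filter nor the cameras' field-of-view values, so the box filter, the centred-crop geometry, and all parameter names below are assumptions.

```python
import numpy as np

def smooth(img, k=3):
    """Box-filter smoothing as a simple denoising step (a stand-in for
    whatever smoothing filter the real module uses)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def register_to_visible(ir_img, ir_fov_deg, vis_fov_deg, vis_shape):
    """Crop the infrared frame to the (narrower) visible field of view,
    then resample it to the visible resolution, so both same-moment
    images cover the same spatial area pixel for pixel."""
    h, w = ir_img.shape
    scale = vis_fov_deg / ir_fov_deg          # fraction of the IR frame shared with the visible one
    ch, cw = int(h * scale), int(w * scale)   # crop size, centred on the optical axis
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = ir_img[y0:y0 + ch, x0:x0 + cw]
    # nearest-neighbour resample onto the visible-image grid
    ys = (np.arange(vis_shape[0]) * ch / vis_shape[0]).astype(int)
    xs = (np.arange(vis_shape[1]) * cw / vis_shape[1]).astype(int)
    return crop[np.ix_(ys, xs)]
```

A real system would also correct for the parallax between the two lenses (e.g. with a calibrated homography); the centred crop assumes the optical axes coincide.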
An image fusion module: the visible light image and the infrared image which are calibrated by the image registration module are fused by utilizing a multi-scale transformation and fusion rule to realize pixel-level fusion; in the embodiment, the visible light and infrared image fusion method is mainly applied to the collection of images under the condition of variable environment, so that the identification degree of the fused images is higher;
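The pixel-level multi-scale fusion above can be illustrated with a minimal Laplacian-pyramid scheme: detail (high-pass) bands from the two sources are combined with a max-absolute rule and the coarsest band is averaged. The patent does not specify the transform or the fusion rule, so both choices here (and the two-level mean-pool pyramid, which assumes image sides divisible by 2**levels) are assumptions.

```python
import numpy as np

def down(img):   # 2x2 mean-pool downsample
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):     # nearest-neighbour upsample back to double size
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fuse(vis, ir, levels=2):
    """Two-source Laplacian-pyramid fusion: max-absolute rule on the
    detail bands, simple averaging on the coarsest (base) band."""
    vis, ir = vis.astype(float), ir.astype(float)
    details = []
    for _ in range(levels):
        v_low, i_low = down(vis), down(ir)
        dv, di = vis - up(v_low), ir - up(i_low)              # Laplacian bands
        details.append(np.where(np.abs(dv) >= np.abs(di), dv, di))
        vis, ir = v_low, i_low
    fused = 0.5 * (vis + ir)                                  # average the base band
    for d in reversed(details):                               # reconstruct coarse-to-fine
        fused = up(fused) + d
    return fused
```

The max-absolute rule keeps whichever sensor carries the stronger edge at each scale — e.g. a warm body outline from the infrared band at night, rigging detail from the visible band by day.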
a convolutional neural network module: the convolutional neural network module performs target detection by using the fused visible light image and infrared image, and judges whether a dangerous case of people falling into water occurs;
specifically, the convolutional neural network module includes a deep learning model trained with visible light and infrared image data. In practice, visible light and infrared cameras are deployed to collect pictures of person-overboard scenes, and on this data set a neural network structure for target detection is designed, establishing a binary-classification deep learning model suited to the ship navigation environment.
The convolutional neural network module realizes the feature extraction of the fused image, outputs a target detection result in a specific layer after multi-layer convolution and activation function, and feeds the result back to the control alarm module;
inputting real-time image data into a deep learning model, calculating whether a current image contains a person falling into water or not according to the network weight obtained by training, and if the current image contains the person falling into water, marking;
the convolutional neural network model takes a residual error network as a main body, and a neural network layer comprises a convolutional layer and a pooling layer;
the deep learning model is a convolutional neural network model;
when the deep learning model is trained by using the visible light image data and the infrared image data, the visible light image data and the infrared image data comprise image data of people who fall into water and image data of people who do not fall into water, and the quantity of the image data of the people who fall into water is equivalent to that of the image data of the people who do not fall into water.
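The requirement that the overboard and non-overboard image sets be roughly equal in number can be met, for example, by undersampling the larger class. The patent does not say how the balance is achieved, so the following is a sketch under that assumption:

```python
import numpy as np

def balance(overboard_imgs, normal_imgs, seed=0):
    """Undersample the larger class so the number of person-overboard
    images and no-person images used for training is roughly equal.
    A real pipeline might instead augment the smaller class."""
    rng = np.random.default_rng(seed)
    n = min(len(overboard_imgs), len(normal_imgs))
    pos_idx = rng.choice(len(overboard_imgs), size=n, replace=False)
    neg_idx = rng.choice(len(normal_imgs), size=n, replace=False)
    return ([overboard_imgs[i] for i in pos_idx],
            [normal_imgs[i] for i in neg_idx])
```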
In the present embodiment, the convolutional neural network is mainly composed of a residual network, and is composed of a plurality of network layers, including convolutional layers, pooling layers, and the like, and adopts a full convolutional layer structure, and the size of an input image is variable;
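The structure described here — convolutional layers with a residual (identity-shortcut) connection, fully convolutional so the input size may vary — can be illustrated with a toy single-channel sketch. The real detection network is far larger and learned; the 3x3 kernels, the global-average-pooled score, and the random weights below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(x, w):
    """'Same'-padded 3x3 convolution on a single-channel map (a toy
    stand-in for the convolutional layers of the detection network)."""
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            out += w[dy, dx] * p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out

def residual_block(x, w1, w2):
    """conv -> ReLU -> conv, plus the identity shortcut that defines a
    residual (ResNet-style) block."""
    y = np.maximum(conv3x3(x, w1), 0)
    return x + conv3x3(y, w2)

def detect_score(img, w1, w2):
    """Fully convolutional, so any input size works; global average
    pooling collapses the feature map to one 'person overboard' score
    that could then be thresholded to trigger the alarm."""
    feat = residual_block(img.astype(float), w1, w2)
    return float(feat.mean())

# illustrative random weights; a trained model would load learned ones
w1, w2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
```

The identity shortcut is what lets deep stacks of such blocks train stably, and the absence of fully connected layers is what makes the input size variable, as the embodiment states.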
the control alarm module: the judgment result of the convolutional neural network module is fed back to the control alarm module, when a dangerous case occurs, the background alarm is controlled to be turned on or off, and the water falling information is stored so as to facilitate history tracing;
specifically, the control alarm module comprises an alarm unit and an alarm information storage unit;
when the control alarm module obtains the emergency information, the alarm unit activates an alarm to remind the crew to intervene, and the alarm information storage unit stores the overboard information in an SQLite database. The crew can then call up real-time images from the image acquisition module to intervene in time, call up playback video segments from the video storage module, and deploy rescue actions after detailed judgment. The historical data can also be conveniently traced;
the drowning information comprises a drowning time and a drowning position.
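Storing the overboard information in an SQLite database for later tracing, as described above, might look like the following sketch. The table name, the column layout, and the video-reference field are assumptions; the patent only states that the drowning time and position are stored.

```python
import sqlite3

def open_alarm_db(path=":memory:"):
    """Create (if needed) the overboard-alarm log table; the schema is
    a hypothetical one covering the time and position the patent says
    are stored, plus a pointer back to the recorded video."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS overboard_alarms (
                      id        INTEGER PRIMARY KEY AUTOINCREMENT,
                      event_utc TEXT NOT NULL,
                      latitude  REAL,
                      longitude REAL,
                      video_ref TEXT)""")
    return db

def log_alarm(db, event_utc, lat, lon, video_ref):
    # parameterised insert; commit so the record survives a crash
    db.execute("INSERT INTO overboard_alarms(event_utc, latitude, longitude, video_ref) "
               "VALUES (?, ?, ?, ?)", (event_utc, lat, lon, video_ref))
    db.commit()

db = open_alarm_db()
log_alarm(db, "2024-01-01T12:00:00Z", 31.23, 121.47, "cam1/20240101.h264")
```

With the events logged this way, the history-tracing functions mentioned above reduce to ordinary SQL queries over the time and position columns.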
An image display module: and the image display module displays the output result of the convolutional neural network module and the historical data of the image storage module.
Specifically, the image display module includes: displaying different image data according to different output results of the convolutional neural network module, and when an emergency occurs, displaying data of an emergency area and marking people falling into water; when no dangerous case occurs, displaying real-time images of each monitoring area;
meanwhile, a history playback function is realized, the image display module displays data of the image storage module, and a worker can read videos from the image storage module at a selected time period.
The invention provides a personnel overboard alarm method based on a convolutional neural network and image fusion, which, as shown in fig. 1, comprises the following steps:
an image acquisition step: the image acquisition module acquires data information of visible light images and infrared images;
the image acquisition step is divided into a visible light acquisition device and an infrared image acquisition device, and image data output by different image acquisition devices are subjected to interframe synchronization. The image storage module compresses the collected image data into a video data stream in an H.264 format and stores the video data stream in a disk array;
the image acquisition module is respectively connected with the image storage module and the image registration module, and the image storage module stores real-time image data as a video stream in an H.264 format.
An image storage step: the image storage module stores the visible light image and the infrared image acquired by the image acquisition module in real time and compresses the visible light image and the infrared image into a video stream;
an image registration step: the image registration module calibrates the spatial information of the visible light image and the infrared image;
specifically, the image registration step includes: the image registration module carries out smooth denoising processing on the visible light image and the infrared image, and calibrates the visible light image and the infrared image at the same moment according to the field angle range of the image resolution, so that the spatial areas of the images are the same.
An image fusion step: the visible light image and the infrared image which are calibrated by the image registration module are fused by utilizing a multi-scale transformation and fusion rule to realize pixel-level fusion; in the embodiment, the visible light and infrared image fusion method is mainly applied to the collection of images under the condition of variable environment, so that the identification degree of the fused images is higher;
a convolution neural network step: the convolutional neural network module performs target detection by using the fused visible light image and infrared image, and judges whether a dangerous case of people falling into water occurs;
specifically, the convolutional neural network step uses a deep learning model trained with visible light and infrared image data. In practice, visible light and infrared cameras are deployed to collect pictures of person-overboard scenes, and on this data set a neural network structure for target detection is designed, establishing a binary-classification deep learning model suited to the ship navigation environment.
The convolution neural network step realizes the feature extraction of the fused image, outputs a target detection result in a specific layer after multi-layer convolution and activation function, and feeds the result back to the control alarm module;
inputting real-time image data into a deep learning model, calculating whether a current image contains a person falling into water or not according to the network weight obtained by training, and if the current image contains the person falling into water, marking;
the convolutional neural network model takes a residual error network as a main body, and a neural network layer comprises a convolutional layer and a pooling layer;
the deep learning model is a convolutional neural network model;
when the deep learning model is trained by using the visible light image data and the infrared image data, the visible light image data and the infrared image data comprise image data of people who fall into water and image data of people who do not fall into water, and the quantity of the image data of the people who fall into water is equivalent to that of the image data of the people who do not fall into water.
In the present embodiment, the convolutional neural network is mainly composed of a residual network, and is composed of a plurality of network layers, including convolutional layers, pooling layers, and the like, and adopts a full convolutional layer structure, and the size of an input image is variable;
controlling and alarming: the judgment result of the convolutional neural network module is fed back to the control alarm module, when a dangerous case occurs, the background alarm is controlled to be turned on or off, and the water falling information is stored so as to facilitate history tracing;
specifically, the controlling and alarming step uses the control alarm module, which comprises an alarm unit and an alarm information storage unit;
when the control alarm module obtains the emergency information, the alarm unit activates an alarm to remind the crew to intervene, and the alarm information storage unit stores the overboard information in an SQLite database. The crew can then call up real-time images from the image acquisition module to intervene in time, call up playback video segments from the video storage module, and deploy rescue actions after detailed judgment. The historical data can also be conveniently traced;
the drowning information comprises a drowning time and a drowning position.
An image display step: and the image display module displays the output result of the convolutional neural network module and the historical data of the image storage module.
Specifically, the image displaying step includes: displaying different image data according to different output results of the convolutional neural network module, and when an emergency occurs, displaying data of an emergency area and marking people falling into water; when no dangerous case occurs, displaying real-time images of each monitoring area;
meanwhile, a history playback function is realized, the image display module displays data of the image storage module, and a worker can read videos from the image storage module at a selected time period.
It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are merely illustrative of the principles of the invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing has described specific embodiments of the present invention. It is to be understood that the invention is not limited to the specific embodiments described above; those skilled in the art may make various changes or modifications within the scope of the appended claims without departing from the spirit of the invention. In the absence of conflict, the embodiments of the present application and the features of those embodiments may be combined with one another in any manner.

Claims (10)

1. A personnel overboard alarm system based on a convolutional neural network and image fusion, characterized by comprising:
an image acquisition module: the image acquisition module acquires visible light image and infrared image data;
an image storage module: the image storage module stores the visible light images and infrared images acquired by the image acquisition module in real time and compresses them into a video stream;
an image registration module: the image registration module calibrates the spatial information of the visible light images and infrared images;
an image fusion module: the image fusion module fuses the visible light images and infrared images calibrated by the image registration module using multi-scale transformation and fusion rules;
a convolutional neural network module: the convolutional neural network module performs target detection on the fused visible light and infrared images and judges whether a person has fallen into the water;
a control alarm module: the judgment result of the convolutional neural network module is fed back to the control alarm module, which starts an alarm and stores the overboard information when an emergency occurs;
an image display module: the image display module displays the output of the convolutional neural network module and the data of the image storage module.
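The image fusion module above combines the two calibrated modalities with a multi-scale transformation and a fusion rule, but the claim does not fix the transform. As an illustrative stand-in only, a minimal two-scale version (box-filter base layer, averaged bases, max-absolute-detail rule) might look like:

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box filter (edge-padded); serves as the coarse scale."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    kern = np.ones(k) / k
    # Filter rows, then columns; 'valid' convolution undoes the padding.
    p = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, p)

def fuse(vis, ir, k=5):
    """Two-scale fusion: average the base (low-frequency) layers and keep
    the stronger detail (high-frequency) coefficient at each pixel."""
    bv, bi = box_blur(vis, k), box_blur(ir, k)
    dv, di = vis - bv, ir - bi
    base = (bv + bi) / 2.0
    detail = np.where(np.abs(dv) >= np.abs(di), dv, di)
    return base + detail
```

A production system would more likely use a Laplacian pyramid or wavelet transform with several scales, but the structure — decompose, fuse per band with a rule, recombine — is the same.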
2. The system of claim 1, wherein the image registration module: performs smoothing and denoising on the visible light and infrared images, and registers the visible light image and the infrared image captured at the same moment according to their resolutions and field-of-view ranges, so that the images cover the same spatial region.
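Registering by resolution and field of view, as claim 2 describes, can be reduced to a crop-and-resample when one assumes a pinhole model with aligned optical axes and a narrower infrared field of view. The following sketch makes exactly those assumptions (all parameter names are illustrative):

```python
import math
import numpy as np

def register_by_fov(visible, ir_shape, fov_vis_deg, fov_ir_deg):
    """Crop the visible image to the infrared field of view and resample it
    (nearest neighbour) to the infrared resolution, so both images cover
    the same spatial region on the same pixel grid."""
    Hv, Wv = visible.shape[:2]
    Hi, Wi = ir_shape
    # Pinhole model: the IR field of view occupies this fraction of the
    # visible frame (ratio of tangents of the half-angles).
    frac = math.tan(math.radians(fov_ir_deg) / 2) / math.tan(math.radians(fov_vis_deg) / 2)
    ch, cw = max(1, round(Hv * frac)), max(1, round(Wv * frac))
    y0, x0 = (Hv - ch) // 2, (Wv - cw) // 2
    crop = visible[y0:y0 + ch, x0:x0 + cw]
    ys = np.arange(Hi) * ch // Hi   # nearest-neighbour row indices
    xs = np.arange(Wi) * cw // Wi   # nearest-neighbour column indices
    return crop[np.ix_(ys, xs)]
```

The smoothing/denoising pass the claim mentions is omitted here; in practice a Gaussian or median filter would precede the crop, and a full system would also correct lens distortion and any offset between the two optical axes.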
3. The system of claim 1, wherein the convolutional neural network module: a deep learning model is trained with visible light image data and infrared image data; real-time image data are input into the trained model, which computes from the network weights obtained by training whether the current image contains a person in the water and, if so, marks the person;
the deep learning model is a convolutional neural network model; the model takes a residual network as its backbone, and its neural network layers comprise convolutional layers and pooling layers;
when the deep learning model is trained with the visible light and infrared image data, the training data include image data both with and without a person in the water, in approximately equal quantities.
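The residual-network backbone named in claim 3 rests on one core idea: a skip connection adds the input back onto a stack of convolutions, so the layers only need to learn a residual. A plain-NumPy sketch of a single-channel residual block (untrained weights, illustrative only — not the disclosed architecture) is:

```python
import numpy as np

def conv3x3(x, w):
    """'Same' 3x3 correlation on a single-channel image with zero padding."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros((H, W))
    for i in range(3):
        for j in range(3):
            out += w[i, j] * xp[i:i + H, j:j + W]
    return out

def residual_block(x, w1, w2):
    """y = ReLU(x + conv(ReLU(conv(x)))): the skip connection means the
    convolutions learn only a residual, which eases training of the deep
    detection networks used for target detection."""
    h = np.maximum(conv3x3(x, w1), 0.0)
    return np.maximum(x + conv3x3(h, w2), 0.0)
```

With the identity kernel (centre weight 1, others 0) and a non-negative input, the block simply doubles the input, which makes its forward pass easy to check by hand; a real detector stacks many such blocks over multi-channel feature maps and interleaves the pooling layers the claim mentions.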
4. The system of claim 1, wherein the control alarm module comprises an alarm unit and an alarm information storage unit;
when the control alarm module receives emergency information, the alarm unit starts an alarm, and the alarm information storage unit stores the overboard information;
the overboard information comprises the time and position at which the person fell into the water.
5. The system of claim 1, wherein the image display module: displays different image data according to the output of the convolutional neural network module; when an emergency occurs, the image of the emergency area is displayed and the person in the water is marked; when no emergency occurs, real-time images of each monitored area are displayed;
the image display module also displays the data of the image storage module, and video for a selected time period can be read from the image storage module.
6. A personnel overboard alarm method based on a convolutional neural network and image fusion, characterized by comprising:
an image acquisition step: an image acquisition module acquires visible light image and infrared image data;
an image storage step: an image storage module stores the visible light images and infrared images acquired in the image acquisition step in real time and compresses them into a video stream;
an image registration step: an image registration module calibrates the spatial information of the visible light images and infrared images;
an image fusion step: the visible light images and infrared images calibrated in the image registration step are fused using multi-scale transformation and fusion rules;
a convolutional neural network step: a convolutional neural network module performs target detection on the fused visible light and infrared images and judges whether a person has fallen into the water;
a control and alarm step: the judgment result of the convolutional neural network module is fed back to a control alarm module, which starts an alarm and stores the overboard information when an emergency occurs;
an image display step: an image display module displays the output of the convolutional neural network module and the data of the image storage module.
7. The method of claim 6, wherein the image registration step comprises: the image registration module performs smoothing and denoising on the visible light and infrared images, and registers the visible light image and the infrared image captured at the same moment according to their resolutions and field-of-view ranges, so that the images cover the same spatial region.
8. The method of claim 6, wherein the convolutional neural network step comprises: a deep learning model trained with visible light image data and infrared image data is used; real-time image data are input into the model, which computes from the network weights obtained by training whether the current image contains a person in the water and, if so, marks the person;
the deep learning model is a convolutional neural network model; the model takes a residual network as its backbone, and its neural network layers comprise convolutional layers and pooling layers;
when the deep learning model is trained with the visible light and infrared image data, the training data include image data both with and without a person in the water, in approximately equal quantities.
9. The method of claim 6, wherein the control and alarm step comprises: the control alarm module comprises an alarm unit and an alarm information storage unit; when the control alarm module receives emergency information, the alarm unit starts an alarm, and the alarm information storage unit stores the overboard information;
the overboard information comprises the time and position at which the person fell into the water.
10. The method of claim 6, wherein the image display step comprises: displaying different image data according to the output of the convolutional neural network module; when an emergency occurs, the image of the emergency area is displayed and the person in the water is marked; when no emergency occurs, real-time images of each monitored area are displayed;
the image display module displays the data of the image storage module, and video for a selected time period can be read from the image storage module.
CN201911398640.1A 2019-12-30 2019-12-30 System and method for alarming people falling into water based on convolutional neural network and image fusion Pending CN111210464A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911398640.1A CN111210464A (en) 2019-12-30 2019-12-30 System and method for alarming people falling into water based on convolutional neural network and image fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911398640.1A CN111210464A (en) 2019-12-30 2019-12-30 System and method for alarming people falling into water based on convolutional neural network and image fusion

Publications (1)

Publication Number Publication Date
CN111210464A true CN111210464A (en) 2020-05-29

Family

ID=70786519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911398640.1A Pending CN111210464A (en) 2019-12-30 2019-12-30 System and method for alarming people falling into water based on convolutional neural network and image fusion

Country Status (1)

Country Link
CN (1) CN111210464A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682565A (en) * 2012-03-23 2012-09-19 合肥博构元丰信息技术有限公司 Fire protection and security integrated intelligent video monitoring system suitable for open space
US20120314066A1 (en) * 2011-06-10 2012-12-13 Lee Yeu Yong Fire monitoring system and method using composite camera
CN108710910A (en) * 2018-05-18 2018-10-26 中国科学院光电研究院 A kind of target identification method and system based on convolutional neural networks
CN110148283A (en) * 2019-05-16 2019-08-20 安徽天帆智能科技有限责任公司 It is a kind of to fall water monitoring system in real time based on convolutional neural networks
CN110569772A (en) * 2019-08-30 2019-12-13 北京科技大学 A method for detecting the state of people in a swimming pool
CN110588973A (en) * 2019-09-27 2019-12-20 江苏科技大学 A youth drowning prevention and rescue platform and method based on amphibious unmanned aerial vehicle


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986240A (en) * 2020-09-01 2020-11-24 交通运输部水运科学研究所 Drowning person detection method and system based on visible light and thermal imaging data fusion
CN112418181A (en) * 2020-12-13 2021-02-26 西北工业大学 Personnel overboard detection method based on convolutional neural network
CN112418181B (en) * 2020-12-13 2023-05-02 西北工业大学 Personnel falling water detection method based on convolutional neural network
CN115100826A (en) * 2022-05-20 2022-09-23 交通运输部水运科学研究所 A real-time monitoring-based alarm method and system for ship dangerous goods falling into the water
CN115100826B (en) * 2022-05-20 2023-11-07 交通运输部水运科学研究所 Real-time monitoring-based ship dangerous cargo falling water alarm method and system

Similar Documents

Publication Publication Date Title
CN109508688B (en) Skeleton-based behavior detection method, terminal equipment and computer storage medium
US10366509B2 (en) Setting different background model sensitivities by user defined regions and background filters
CN111210464A (en) System and method for alarming people falling into water based on convolutional neural network and image fusion
CN106951849B (en) Monitoring method and system for preventing children from accidents
CN110569772A (en) A method for detecting the state of people in a swimming pool
CN105898107B (en) A kind of target object grasp shoot method and system
KR102353724B1 (en) Apparatus and method for monitoring city condition
CN106097346A (en) A kind of video fire hazard detection method of self study
CN112270253A (en) High-altitude parabolic detection method and device
CN116846059A (en) Edge detection system for power grid inspection and monitoring
CN109584213A (en) A kind of selected tracking of multiple target number
CN108052865A (en) A kind of flame detecting method based on convolutional neural networks and support vector machines
CN110067274A (en) Apparatus control method and excavator
CN103152558B (en) Based on the intrusion detection method of scene Recognition
US20240046701A1 (en) Image-based pose estimation and action detection method and apparatus
CN115115713A (en) Unified space-time fusion all-around aerial view perception method
CN115761618A (en) Key site security monitoring image identification method
CN112927214A (en) Building defect positioning method, system and storage medium
CN112613359B (en) Construction method of neural network for detecting abnormal behaviors of personnel
CN112434827A (en) Safety protection identification unit in 5T fortune dimension
CN112434828A (en) Intelligent identification method for safety protection in 5T operation and maintenance
CN116152745A (en) Smoking behavior detection method, device, equipment and storage medium
CN114020043A (en) UAV construction engineering supervision system, method, electronic device and storage medium
Zaman et al. Human detection from drone using you only look once (YOLOv5) for search and rescue operation
Ali et al. Real-time safety monitoring vision system for linemen in buckets using spatio-temporal inference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200529