WO2019237567A1 - Fall detection method based on a convolutional neural network - Google Patents
Fall detection method based on a convolutional neural network
- Publication number
- WO2019237567A1 (PCT/CN2018/107975)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- image
- neural network
- convolutional neural
- gaussian
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
- G08B21/0407—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
- G08B21/043—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis detecting an emergency event, e.g. a fall
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
- G08B21/0438—Sensor means for detecting
- G08B21/0476—Cameras to detect unsafe condition, e.g. video cameras
Definitions
- the invention relates to a fall detection method, in particular to a fall detection method based on a convolutional neural network.
- current fall detection systems fall into two main categories: the first is a wearable, sensor-based detection system; the other is a video-based detection system.
- the vision-based detection system uses one or more cameras to capture the movement of the target, and uses specific image processing algorithms to determine the image characteristics of the fall, thereby distinguishing the fall from daily activities.
- the commonly used vision-based fall detection algorithms are mainly the threshold method and intelligent algorithms.
- the threshold method usually detects the head position or the center of gravity of the human body.
- Rougier et al. determine whether a fall has occurred by locating the head, estimating its position in the next frame with a particle filter, computing its horizontal and vertical velocities, and comparing them with a threshold.
- these methods are simple to implement, but their accuracy is easily affected by external factors such as the environment.
- machine-learning-based methods mainly extract the person from the image, manually extract features, and then input the obtained features into a model to detect and recognize fall behavior. Manual feature extraction requires a huge engineering effort, and most such methods address only binary classification. Considering that the requirements of smart homes will keep rising, recognition of the various poses of the human body has become indispensable.
- a fall detection method based on a convolutional neural network includes:
- Training a convolutional neural network specifically includes:
- Pre-process each acquired image frame, where the pre-processing comprises, in order, foreground extraction, normalization, and whitening;
- In the step "pre-process each acquired image frame, where the pre-processing comprises, in order, foreground extraction, normalization, and whitening", the processed images are put into the pre-trained model for model training, and the parameters of the model are updated; and
- the method for extracting the foreground includes:
- the above fall detection method applies convolutional-neural-network-based classification to fall detection.
- an improved foreground detection method is used.
- the processed images are put into a convolutional neural network for model training.
- in the step "the pre-processing comprises, in order, foreground extraction, normalization, and whitening", each image frame is acquired by reading a video file.
- training the convolutional neural network further includes displaying a detection-effect diagram for each frame and visualizing the convolution kernels of the model.
- the detection-effect diagram for each frame is displayed on the MATLAB platform, where the convolution-kernel visualization of the model is also implemented.
- the step "processing the image using the background difference method;” specifically includes:
- N t (x, y)
- the step "processing the image using a hybrid Gaussian model;” specifically includes:
- the probability density function of a pixel value can be expressed as:
- w i, t represents the weight of the Gaussian model
- probability density function of the Gaussian model is expressed as:
- the K Gaussian mixture models are sorted according to the quotient of the weight divided by the standard deviation, and then the previous B Gaussian models are selected for distinguishing, and the value of B is expressed as:
- the pixel is determined as the background. If the above conditions are not satisfied in the B Gaussian models, the pixel is determined to belong to the foreground.
- the matched Gaussian model is updated as: $\mu_{i,t}=(1-\rho)\,\mu_{i,t-1}+\rho X_{t}$
- $\Sigma_{i,t}=(1-\rho)\,\Sigma_{i,t-1}+\rho\,(X_{t}-\mu_{i,t})(X_{t}-\mu_{i,t})^{T}$
- the threshold $T$, the learning rate $\alpha$, and the parameter $\rho$ are constants specified in advance.
- in the step "input the test set into the trained model and use the test set to check the accuracy of the model", the test set is from the UR Fall Detection Dataset.
- a computer device includes a memory, a processor, and a computer program stored on the memory and executable on the processor. When the processor executes the program, the steps of any one of the methods are implemented.
- a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of any one of the methods.
- a processor is configured to run a program, and when the program runs, the method according to any one of the methods is executed.
- FIG. 1 is a schematic flowchart of a fall detection method based on a convolutional neural network according to an embodiment of the present application.
- FIG. 2 is a schematic diagram of a residual learning construction module in a fall detection method based on a convolutional neural network according to an embodiment of the present application.
- FIG. 3 is a schematic diagram of a function of a loss value in a fall detection method based on a convolutional neural network according to an embodiment of the present application.
- FIG. 4 is a flowchart of testing a model in a fall detection method based on a convolutional neural network provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of the effect of the background difference method in a fall detection method based on a convolutional neural network provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of the effect of the Gaussian mixture background model in a fall detection method based on a convolutional neural network according to an embodiment of the present application.
- FIG. 7 is a schematic view showing the effect of an improved foreground detection method in a fall detection method based on a convolutional neural network provided by an embodiment of the present application.
- FIG. 8 is an RGB image of foreground extraction in a fall detection method based on a convolutional neural network provided by an embodiment of the present application.
- FIG. 9 is a visualization of a convolution kernel in a fall detection method based on a convolutional neural network provided by an embodiment of the present application.
- FIG. 10 is a first-layer feature map of a fall detection method based on a convolutional neural network according to an embodiment of the present application.
- FIG. 11 is a second-layer feature map of a fall detection method based on a convolutional neural network provided by an embodiment of the present application.
- an improved foreground detection method is used to extract a person's foreground.
- the main foreground extraction methods include the inter-frame difference method, the background difference method, the optical flow method, and the Gaussian mixture model.
- in the background difference method, the current frame $f_t(x,y)$ is compared with a background model $B(x,y)$, and the foreground mask is obtained by thresholding the absolute difference (a code sketch follows below):

$$N_t(x,y)=\begin{cases}1, & \lvert f_t(x,y)-B(x,y)\rvert > T\\ 0, & \text{otherwise}\end{cases}$$
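A minimal sketch of this thresholded difference, assuming grayscale NumPy arrays and OpenCV (the patent does not name an implementation library):

```python
import cv2

def background_difference(frame, background, T=30):
    """Binarize |f_t(x,y) - B(x,y)| with threshold T; inputs are grayscale images."""
    diff = cv2.absdiff(frame, background)                   # |f_t - B|
    _, mask = cv2.threshold(diff, T, 255, cv2.THRESH_BINARY)
    return mask                                             # 255 = foreground, 0 = background
```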
- the Gaussian mixture model is an adaptive background extraction method based on background modeling, proposed by Stauffer et al.
- a Gaussian mixture model is used to model the background: the values of each pixel in the image sequence are modeled with K Gaussian models. Therefore, at time t, the probability density function of a pixel value can be expressed as: $P(X_t)=\sum_{i=1}^{K} w_{i,t}\,\eta(X_t,\mu_{i,t},\Sigma_{i,t})$
- where $w_{i,t}$ represents the weight of the $i$-th Gaussian model at time $t$
- the probability density function of a single Gaussian model can be expressed as: $\eta(X_t,\mu_{i,t},\Sigma_{i,t})=\frac{1}{(2\pi)^{n/2}\lvert\Sigma_{i,t}\rvert^{1/2}}\exp\left(-\frac{1}{2}(X_t-\mu_{i,t})^{T}\Sigma_{i,t}^{-1}(X_t-\mu_{i,t})\right)$, where $n$ is the dimension of the pixel vector
- the K Gaussian components are sorted by the quotient of the weight divided by the standard deviation, and then the first B Gaussian models are selected for distinguishing, where the value of B is expressed as: $B=\arg\min_b\left(\sum_{i=1}^{b} w_{i,t} > T\right)$
- if a pixel value matches one of these B Gaussian models, the pixel is determined to be background; if the matching condition is not satisfied in any of the B Gaussian models, the pixel is determined to belong to the foreground.
- the matched Gaussian model is updated as: $\mu_{i,t}=(1-\rho)\,\mu_{i,t-1}+\rho X_{t}$
- $\Sigma_{i,t}=(1-\rho)\,\Sigma_{i,t-1}+\rho\,(X_{t}-\mu_{i,t})(X_{t}-\mu_{i,t})^{T}$
- if no Gaussian model matches the pixel, an initial Gaussian model is used to replace the Gaussian model with the smallest weight.
- the threshold $T$, the learning rate $\alpha$, and the parameter $\rho$ are constants specified in advance.
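In practice, an off-the-shelf implementation of this Stauffer-Grimson-style mixture is available in OpenCV; the sketch below uses it as a stand-in for the model above (the parameter names are OpenCV's and the input file name is hypothetical, neither comes from the patent):

```python
import cv2

# MOG2 maintains a per-pixel Gaussian mixture and updates it online,
# in the spirit of the weight/mean/variance updates described above.
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

cap = cv2.VideoCapture("fall_video.avi")       # hypothetical video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mog_mask = mog.apply(frame)                # 255 = foreground pixels
cap.release()
```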
- although the background difference method is simple and computationally cheap, it causes a "ghost image" phenomenon.
- the Gaussian mixture model models not only the background but also the foreground, so it is very sensitive to sudden changes in global brightness.
- the present invention therefore proposes an improved foreground detection method: performing an AND operation on the outputs of the two methods, which resolves both the "ghost image" problem and the sensitivity to light.
- the output of the background difference method is denoted $D(x,y)$
- the output of the improved foreground detection method, obtained by ANDing $D(x,y)$ with the Gaussian mixture foreground mask, is $R(x,y)$.
- the mask is then processed with morphological opening and closing, and the maximum connected area is selected to determine the position of the person's foreground.
- the corresponding region is cropped from the original frame to obtain the RGB image of the person's foreground, as sketched below.
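A minimal sketch of this combined pipeline, assuming the two masks from the previous steps and OpenCV (function and variable names are illustrative, not from the patent):

```python
import cv2
import numpy as np

def extract_person(frame, diff_mask, mog_mask):
    """AND the two masks, clean up morphologically, and crop the largest blob."""
    mask = cv2.bitwise_and(diff_mask, mog_mask)                # R(x,y) = D(x,y) AND MOG mask
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # opening removes speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)     # closing fills small holes
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:                                                  # label 0 is the background
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])       # maximum connected area
    x, y, w, h = stats[largest, :4]
    return frame[y:y + h, x:x + w]                             # RGB crop of the person
```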
- ResNet, the winning model of the 2015 ImageNet competition (ILSVRC), is adopted as the network model of the present invention.
- the ResNet network effectively solves the problem that, as network depth increases, the accuracy of the algorithm tends to saturate and then degrade rapidly.
- it has fewer parameters than VGGNet, and its effect is very significant.
- the training speed is also greatly improved. This is mainly due to the residual building module it uses, shown in Figure 2 and sketched below.
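The residual module computes $y = F(x) + x$: the stacked layers learn only the residual $F(x)$, while the identity shortcut carries $x$ forward. A minimal NumPy illustration (dense layers stand in for the module's convolutions; the weights are hypothetical):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = F(x) + x, with F(x) made of two weight layers (cf. Figure 2)."""
    out = relu(W1 @ x)        # first weight layer + ReLU
    out = W2 @ out            # second weight layer
    return relu(out + x)      # identity shortcut, then ReLU
```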
- ResNet's network architecture (the standard ResNet-34 configuration) is shown in the following table:

| Layer name | Output size | ResNet-34 |
|---|---|---|
| conv1 | 112×112 | 7×7, 64, stride 2 |
| conv2_x | 56×56 | 3×3 max pool, stride 2; [3×3, 64; 3×3, 64] × 3 |
| conv3_x | 28×28 | [3×3, 128; 3×3, 128] × 4 |
| conv4_x | 14×14 | [3×3, 256; 3×3, 256] × 6 |
| conv5_x | 7×7 | [3×3, 512; 3×3, 512] × 3 |
| pool/fc | 1×1 | global average pool, fully connected layer, softmax |
- ImageNet is used to pre-train the parameters of the convolutional neural network.
- the role of the pre-trained model is to initialize the network parameters to those obtained by training on ImageNet.
- ImageNet is divided into 1,000 categories, while the present invention only needs to divide images into two categories, so the fully connected layer must be modified: the value of num_output is changed from the original 1000 to 2, and the name of the fully connected layer is changed so that its weights are reinitialized instead of being loaded from the pre-trained model.
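A sketch of what the modified layer in train_val.prototxt might look like; the layer name "fc_fall" and the bottom blob "pool5" are assumptions for illustration, not names from the patent:

```protobuf
layer {
  name: "fc_fall"            # renamed so the 1000-way ImageNet weights are not copied in
  type: "InnerProduct"
  bottom: "pool5"            # assumed name of the preceding global-pooling blob
  top: "fc_fall"
  inner_product_param {
    num_output: 2            # changed from 1000 to 2 (fall / non-fall)
  }
}
```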
- the training process of the network is completed based on the Caffe platform.
- Caffe is built around one core assumption about neural networks: all computation is expressed in the form of layers, and each layer acts on its input data to produce its computed output.
- for example, a convolution layer takes an image as input, performs a convolution operation with the parameters of this layer, and outputs the result of the convolution.
- Each layer requires two operations: 1) the forward pass, which computes the output data from the input data; 2) the backward pass, which computes the gradient with respect to the input from the gradient passed down by the layer above.
- the function of the network is to calculate the expected output based on the input data (image, voice, or other information forms).
- during training, the loss function and the gradients of the model output can be calculated against the known labels, and the network parameters can then be updated from the gradient values, as illustrated below.
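A toy illustration of this layer contract in NumPy (not Caffe's actual C++ API): forward turns input data into output data, and backward turns the gradient from above into the gradient with respect to the input.

```python
import numpy as np

class ReLULayer:
    """Minimal stand-in for Caffe's layer abstraction."""

    def forward(self, bottom):
        self.mask = bottom > 0           # remember which inputs were active
        return bottom * self.mask        # forward pass: output from input

    def backward(self, top_grad):
        return top_grad * self.mask      # backward pass: input gradient from the gradient above
```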
- the ResNet-34 model is built by defining train_val.prototxt, which defines the specific network structure of ResNet.
- the file is organized as a sequence of layer definitions, each of which includes many parameters.
- the bottom parameter specifies the layer's input
- the top parameter specifies the result output to the next layer
- the layer's parameter block holds this layer's settings, including num_output, the number of filters; kernel_size, the size of the filter; and stride, the step size. An illustrative layer definition follows.
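A sketch of such a layer definition; the blob names "data" and "conv1" are illustrative (in Caffe, convolution settings live in the convolution_param block):

```protobuf
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"             # input blob
  top: "conv1"               # output blob passed to the next layer
  convolution_param {
    num_output: 64           # number of filters
    kernel_size: 7           # filter size
    stride: 2                # step size
  }
}
```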
- ResNet's training configuration is written in the file solver.prototxt. Each line in this file sets one training parameter. Common parameters include the net parameter, which specifies the model definition to be used, the max_iter parameter, which sets the maximum number of iterations, and the snapshot_prefix parameter, which gives the prefix under which the model is saved. A sketch of such a file follows.
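A minimal hypothetical solver.prototxt illustrating the parameters named above (the values are placeholders, not the patent's settings):

```protobuf
net: "train_val.prototxt"                    # model definition to train
base_lr: 0.001                               # illustrative learning rate
max_iter: 10000                              # maximum number of iterations
snapshot: 1000                               # save a snapshot every 1000 iterations
snapshot_prefix: "snapshots/resnet34_fall"   # prefix of saved model files
solver_mode: GPU
```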
- the loss function curve during model training can also be drawn on the MATLAB platform.
- the loss value is plotted every iteration, and the accuracy is plotted every 100 iterations, as shown in Figure 3. An equivalent plotting sketch is given below.
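The patent plots these curves in MATLAB; a hedged Python equivalent that scrapes loss values from a Caffe training log is sketched below (the log file name is hypothetical, and the exact log format varies between Caffe versions):

```python
import re
import matplotlib.pyplot as plt

iters, losses = [], []
with open("caffe_train.log") as f:                    # hypothetical log file
    for line in f:
        m = re.search(r"Iteration (\d+).*loss = ([\d.eE+-]+)", line)
        if m:
            iters.append(int(m.group(1)))
            losses.append(float(m.group(2)))

plt.plot(iters, losses)                               # loss curve over iterations
plt.xlabel("iteration")
plt.ylabel("training loss")
plt.show()
```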
- each convolutional layer can be seen as a stack of two-dimensional images, each of which is called a feature map. If the input layer is a grayscale image, there is only one feature map; if the input layer is a color image, there are generally three feature maps (red, green, and blue). Between layers there are many convolution kernels; each feature map of the previous layer is convolved with each convolution kernel, and the results are combined into a feature map of the next layer, as in the sketch below.
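A small NumPy sketch of this mapping: three input feature maps (a color image) and 16 kernels produce 16 output feature maps, each output summing the per-map convolutions (valid cross-correlation, for simplicity):

```python
import numpy as np

def conv_layer(feature_maps, kernels):
    """feature_maps: (C_in, H, W); kernels: (C_out, C_in, k, k) -> (C_out, H-k+1, W-k+1)."""
    c_in, H, W = feature_maps.shape
    c_out, _, k, _ = kernels.shape
    out = np.zeros((c_out, H - k + 1, W - k + 1))
    for o in range(c_out):                                 # one output map per kernel set
        for y in range(H - k + 1):
            for x in range(W - k + 1):
                patch = feature_maps[:, y:y + k, x:x + k]  # every input map contributes
                out[o, y, x] = np.sum(patch * kernels[o])
    return out

rgb = np.random.rand(3, 32, 32)                            # 3 feature maps of a color image
maps = conv_layer(rgb, np.random.rand(16, 3, 3, 3))        # 16 feature maps in the next layer
```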
- the training set has 7,381 images and the test set has 1,326 images. All images in the dataset first undergo foreground extraction, yielding images such as those shown in FIG. 8, which are then put into the network for model training.
- the ResNet network used in the present invention is pre-trained on the ImageNet dataset to obtain a pre-trained model.
- the preprocessed images of the training set (after foreground detection, normalization, and whitening, sketched below) are input to the network for training.
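A minimal sketch of the normalization and whitening steps under one common interpretation (the patent does not spell out its exact formulas, so per-image standardization is assumed here):

```python
import numpy as np

def preprocess(img):
    """Scale to [0, 1] (normalization), then zero-mean, unit-variance (whitening)."""
    img = img.astype(np.float32) / 255.0       # normalization
    mean, std = img.mean(), img.std()
    return (img - mean) / max(std, 1e-8)       # per-image whitening / standardization
```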
- MATLAB is used to visualize the feature maps, as shown in Figure 10 and Figure 11.
- the lower-layer convolution kernels mainly extract basic features such as the outline of the person.
- the training set and test set of the present invention are both from the UR Fall Detection Dataset.
- model training and accuracy testing are performed on the Caffe platform, accelerated with cuDNN.
- the final accuracy reaches 96.7%, and the processing time is 49 ms.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Psychiatry (AREA)
- Multimedia (AREA)
- Social Psychology (AREA)
- Gerontology & Geriatric Medicine (AREA)
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Psychology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a fall detection method based on a convolutional neural network, comprising the following steps: training a convolutional neural network, where training the convolutional neural network specifically comprises pre-processing each acquired image frame, the pre-processing comprising, in order, foreground extraction, normalization, and whitening; and first pre-training a ResNet network on the ImageNet dataset to obtain a pre-trained model. A classification method based on a convolutional neural network is applied to the fall detection method. At the same time, in order to improve the accuracy of the system and reduce its computational complexity, an improved foreground detection method is used to extract the person from a complex background, and the processed images are then put into the convolutional neural network for model training.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810614024.4A CN108961675A (zh) | 2018-06-14 | 2018-06-14 | Fall detection method based on a convolutional neural network |
| CN201810614024.4 | 2018-06-14 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019237567A1 (fr) | 2019-12-19 |
Family
ID=64488772
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2018/107975 Ceased WO2019237567A1 (fr) | 2018-06-14 | 2018-09-27 | Procédé de détection de chute fondé sur un réseau neuronal à convolution |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN108961675A (fr) |
| WO (1) | WO2019237567A1 (fr) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109871788A (zh) * | 2019-01-30 | 2019-06-11 | 云南电网有限责任公司电力科学研究院 | Natural disaster image recognition method for power transmission corridors |
| CN112489368A (zh) * | 2020-11-30 | 2021-03-12 | 安徽国广数字科技有限公司 | Intelligent fall recognition and detection alarm method and system |
| CN113269105A (zh) * | 2021-05-28 | 2021-08-17 | 西安交通大学 | Real-time fainting detection method, apparatus, device and medium for elevator scenes |
| CN113435306B (zh) * | 2021-06-24 | 2022-07-19 | 三峡大学 | Fall detection method and apparatus based on hybrid cascaded convolution |
| CN114299012B (zh) * | 2021-12-28 | 2025-02-28 | 以萨技术股份有限公司 | Object surface defect detection method and system based on a convolutional neural network |
| CN116958663A (zh) * | 2023-07-10 | 2023-10-27 | 北京理工大学 | Tableware classification method based on image processing |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104134068B (zh) * | 2014-08-12 | 2017-11-14 | 江苏理工学院 | Surveillance vehicle feature representation and classification method based on sparse coding |
| CN108124119A (zh) * | 2016-11-28 | 2018-06-05 | 天津市军联科技有限公司 | Intelligent video surveillance system based on embedded Linux |
- 2018
  - 2018-06-14: CN application CN201810614024.4A filed; published as CN108961675A (active, pending)
  - 2018-09-27: PCT application PCT/CN2018/107975 filed; published as WO2019237567A1 (not active, ceased)
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107220604A (zh) * | 2017-05-18 | 2017-09-29 | 清华大学深圳研究生院 | Video-based fall detection method |
| CN108154113A (zh) * | 2017-12-22 | 2018-06-12 | 重庆邮电大学 | Fall event detection method based on fully convolutional network heat maps |
| CN108090458A (zh) * | 2017-12-29 | 2018-05-29 | 南京阿凡达机器人科技有限公司 | Human fall detection method and device |
Cited By (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111209848A (zh) * | 2020-01-03 | 2020-05-29 | 北京工业大学 | Real-time fall detection method based on deep learning |
| CN111353394A (zh) * | 2020-02-20 | 2020-06-30 | 中山大学 | Video action recognition method based on a three-dimensional alternating update network |
| CN111353394B (zh) | 2020-02-20 | 2023-05-23 | 中山大学 | Video action recognition method based on a three-dimensional alternating update network |
| CN111523492A (zh) * | 2020-04-26 | 2020-08-11 | 安徽皖仪科技股份有限公司 | Method for detecting black-smoke vehicles |
| CN111523492B (zh) | 2020-04-26 | 2023-04-18 | 安徽皖仪科技股份有限公司 | Method for detecting black-smoke vehicles |
| CN111598042A (zh) * | 2020-05-25 | 2020-08-28 | 西安科技大学 | Visual counting method for underground drill pipes |
| CN111598042B (zh) | 2020-05-25 | 2023-04-07 | 西安科技大学 | Visual counting method for underground drill pipes |
| CN111680614B (zh) | 2020-06-03 | 2023-04-14 | 安徽大学 | Abnormal behavior detection method based on video surveillance |
| CN111680614A (zh) * | 2020-06-03 | 2020-09-18 | 安徽大学 | Abnormal behavior detection method based on video surveillance |
| CN111782857A (zh) * | 2020-07-22 | 2020-10-16 | 安徽大学 | Footprint image retrieval method based on a hybrid-attention dense network |
| CN111782857B (zh) | 2020-07-22 | 2023-11-03 | 安徽大学 | Footprint image retrieval method based on a hybrid-attention dense network |
| CN112541403A (zh) * | 2020-11-20 | 2021-03-23 | 中科芯集成电路有限公司 | Indoor fall detection method using an infrared camera |
| CN112541403B (zh) | 2020-11-20 | 2023-09-22 | 中科芯集成电路有限公司 | Indoor fall detection method using an infrared camera |
| CN112528775A (zh) * | 2020-11-28 | 2021-03-19 | 西北工业大学 | Underwater target classification method |
| CN113379614A (zh) * | 2021-03-31 | 2021-09-10 | 西安理工大学 | Computational ghost imaging reconstruction and recovery method based on a ResNet network |
| CN113947612A (zh) * | 2021-09-28 | 2022-01-18 | 西安电子科技大学广州研究院 | Video anomaly detection method based on foreground-background separation |
| CN113947612B (zh) | 2021-09-28 | 2024-03-29 | 西安电子科技大学广州研究院 | Video anomaly detection method based on foreground-background separation |
| CN114049585A (zh) * | 2021-10-12 | 2022-02-15 | 北京控制与电子技术研究所 | Mobile-phone-use action detection method based on moving foreground extraction |
| CN114049585B (zh) | 2021-10-12 | 2024-04-02 | 北京控制与电子技术研究所 | Mobile-phone-use action detection method based on moving foreground extraction |
| CN116469132B (zh) | 2023-06-20 | 2023-09-05 | 济南瑞泉电子有限公司 | Fall detection method, system, device and medium based on two-stream feature extraction |
| CN116469132A (zh) * | 2023-06-20 | 2023-07-21 | 济南瑞泉电子有限公司 | Fall detection method, system, device and medium based on two-stream feature extraction |
| CN119714703A (zh) * | 2024-12-19 | 2025-03-28 | 南京电力设计研究院有限公司 | Multi-parameter monitoring system and method based on a GIL anti-vibration support |
Also Published As
| Publication number | Publication date |
|---|---|
| CN108961675A (zh) | 2018-12-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2019237567A1 (fr) | Fall detection method based on a convolutional neural network | |
| CN110569756B (zh) | Face recognition model construction method, recognition method, device, and storage medium | |
| US11830230B2 (en) | Living body detection method based on facial recognition, and electronic device and storage medium | |
| CN109584248B (zh) | Infrared surface target instance segmentation method based on feature fusion and densely connected networks | |
| US20210264144A1 (en) | Human pose analysis system and method | |
| CN111091109B (zh) | Method, system, and device for age and gender prediction based on face images | |
| CN107516316B (zh) | Method for segmenting static human body images by introducing a focusing mechanism into an FCN | |
| WO2021042547A1 (fr) | Behavior recognition method, device, and computer-readable storage medium | |
| CN104077579B (zh) | Facial expression image recognition method based on an expert system | |
| WO2022022154A1 (fr) | Facial image processing method and apparatus, device, and storage medium | |
| WO2021143101A1 (fr) | Face recognition method and face recognition device | |
| JP6351243B2 (ja) | Image processing apparatus and image processing method | |
| WO2018188453A1 (fr) | Method for determining a human face region, storage medium, and computer device | |
| CN108229330A (zh) | Face fusion recognition method and apparatus, electronic device, and storage medium | |
| CN108647625A (zh) | Expression recognition method and apparatus | |
| CN106960202A (zh) | Smiling face recognition method based on fusion of visible-light and infrared images | |
| CN112508991B (zh) | Panda photo cartoonization method with foreground and background separated | |
| CN108062543A (zh) | Facial recognition method and apparatus | |
| CN111666813B (zh) | Subcutaneous sweat gland extraction method using a three-dimensional convolutional neural network based on non-local information | |
| CN111797709A (zh) | Real-time dynamic gesture trajectory recognition method based on regression detection | |
| CN112818899A (zh) | Face image processing method and apparatus, computer device, and storage medium | |
| WO2021139167A1 (fr) | Facial recognition method and apparatus, electronic device, and computer-readable storage medium | |
| CN106557750A (zh) | Face detection method based on skin color and a deep binary feature tree | |
| CN106951826A (zh) | Face detection method and device | |
| CN113302619B (zh) | System and method for target region evaluation and feature point evaluation | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18922581; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18922581; Country of ref document: EP; Kind code of ref document: A1 |