WO2020237644A1 - Real-time depth-of-field synthesis algorithm and system based on an FPGA architecture - Google Patents
Real-time depth-of-field synthesis algorithm and system based on an FPGA architecture
- Publication number
- WO2020237644A1 (PCT/CN2019/089585)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- window
- standard deviation
- module
- fpga
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- The invention relates to the field of depth-of-field synthesis technology, and in particular to a real-time depth-of-field synthesis algorithm and system based on an FPGA architecture.
- Depth-of-field synthesis is a digital image processing technique that uses the same image sensor to capture images at different depths of field and then merges the multiple images according to the in-focus characteristics of each image.
- The present invention proposes a real-time depth-of-field synthesis algorithm and system based on an FPGA architecture, which exploits the parallel processing capability of the FPGA to map an algorithm that originally required a high-performance processor onto the FPGA, thereby improving real-time processing capability and reducing product cost.
- A real-time depth-of-field synthesis algorithm based on an FPGA architecture, specifically comprising the following steps:
- S1: a new image is collected;
- S2: the FPGA calculates the standard deviation of the above image, using an n*n filter kernel window, and performs the filtering; n is an integer;
- S3: the FPGA calculates the mean of the image standard deviation using the n*n filter kernel window, performing mean filtering to obtain the focus coefficient of the image; n is an integer;
- S4: the FPGA compares the focus coefficient of the new image with that of the previous composite image, controls the iteration of the new and old image data, updates the pixel data of the composite image and maintains the iterated image focus coefficient; if the focus coefficient of the new image at a given pixel is greater than that of the previous composite image, the corresponding pixel of the composite image is replaced with the pixel of the new image, otherwise it is not replaced; and the larger focus coefficient is stored as the iterated image focus coefficient;
- S5: the FPGA repeats the above steps S1-S4, exploiting its parallel processing capability, until the program terminates.
- In step S1, a camera or a photosensitive component collects the new image.
- Step S2 specifically includes the following steps, in which:
- N represents the number of pixels in the n*n window;
- x_i represents the value of each pixel in the window;
- S_N represents the standard deviation value at the pixel where the window is located.
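- The formula itself is not reproduced in this text. A per-window standard deviation consistent with these definitions (written here in the population form; the published formula may differ, for example by using the sample form with N-1) would be:

$$S_N = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^{2}}, \qquad \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$$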
- Step S3 specifically includes the following steps, in which:
- n represents the area of the n*n window;
- x_i represents each value in the window.
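- The mean filtering formula is likewise not reproduced here. A window mean consistent with these definitions, giving the focus coefficient M at the pixel where the window is located (the symbol M is introduced only for this illustration), would be:

$$M = \frac{1}{n}\sum_{i=1}^{n} x_i$$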
- A real-time depth-of-field synthesis system based on an FPGA architecture, comprising:
- an image acquisition module, used to acquire new image data, convert the acquired image into video stream data, and transmit it to the image standard deviation filtering module;
- the image standard deviation filtering module is used to perform standard deviation filtering on the received video stream data, generating new video stream data and image standard deviation data and transmitting them to the image mean filtering module;
- the image mean filtering module is used to perform mean filtering on the received image standard deviation data to obtain the image focus coefficient, align it with the corresponding video stream data, and transmit both to the image synthesis module;
- the image synthesis module reads the image data and image focus coefficient of the previous iteration from the DDR controller module, compares the focus coefficient derived from the new image standard deviation data with the image focus coefficient of the previous iteration, controls the iteration of the new and old image data, updates the pixel data of the composite image and updates the iterated image focus coefficient; if the focus coefficient of the new image at a given pixel is greater than that of the previous composite image, the corresponding pixel of the composite image is replaced with the pixel of the new image, otherwise it is not replaced; and the larger focus coefficient is stored as the iterated image focus coefficient;
- the DDR controller module is used to receive instructions from the image synthesis module and to retrieve and maintain data in the DDR.
- In the formula for standard deviation filtering:
- N represents the number of pixels in the n*n window;
- x_i represents the value of each pixel in the window;
- S_N represents the standard deviation value at the pixel where the window is located;
- In the formula for mean filtering:
- n represents the area of the n*n window;
- x_i represents each value in the window.
- it also includes
- the image display/transmission module is used to display or transmit the image synthesized by the image synthesis module.
- it also includes
- a parameter control module, used to transmit different commands to each module, for controlling the switches and window size parameters of the different filter modules.
- The beneficial effects of the present invention are: the parallel processing capability of the FPGA is used to map an algorithm that originally required a high-performance processor onto the FPGA, improving real-time processing capability and reducing product cost.
- Figure 1 is a flow chart of the real-time depth-of-field synthesis algorithm based on an FPGA architecture according to the present invention;
- Figure 2 is a schematic block diagram of the real-time depth-of-field synthesis system based on an FPGA architecture according to the present invention.
- The present invention proposes a real-time depth-of-field synthesis algorithm based on an FPGA architecture, which specifically includes the following steps:
- Step S1: a new image is collected. In step S1, a camera or a photosensitive component collects the new image.
- Step S2: the FPGA calculates the standard deviation of the above image, using an n*n filter kernel window, and performs the filtering; n is an integer. The collected image is transmitted to the FPGA device, and the standard deviation of the image is calculated inside the FPGA.
- An n*n processing window is used in the image processing, and this window is used to filter the image.
- N represents the number of pixels in the n*n window;
- x_i represents the value of each pixel in the window;
- S_N represents the standard deviation value at the pixel where the window is located.
- Step S3: an n*n filter kernel window is used to calculate the mean of the image standard deviation, performing mean filtering to obtain the focus coefficient of the image; n is an integer. The image data is received from the previous step, and the image standard deviation is mean-filtered in this step. The advantage of this processing is that it smooths excessive standard deviation values and avoids noise interference and edge aliasing during the synthesis process.
- The mean filtering in the FPGA also uses an n*n filter window and calculates the average of all the values within that window.
- n represents the area of the n*n window;
- x_i represents each value in the window.
- Each pixel has a focus coefficient. If the focus coefficient at a given point is large, it means that the point is close to being in focus within the image series; this is the image data used for depth-of-field synthesis.
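- As an illustration only (not the FPGA implementation), steps S2-S3 can be modeled in software roughly as follows. The sketch assumes NumPy/SciPy, a grayscale input, and the population standard deviation; all names are chosen for this sketch rather than taken from the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def focus_coefficient(image, n=31):
    """Behavioral sketch of steps S2-S3: an n*n windowed standard deviation
    followed by an n*n mean filter of that standard deviation map."""
    img = image.astype(np.float64)
    # Windowed standard deviation via sqrt(E[x^2] - E[x]^2) over each n*n window.
    mean = uniform_filter(img, size=n, mode="reflect")
    mean_sq = uniform_filter(img * img, size=n, mode="reflect")
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    # Mean-filtering the standard deviation smooths it, reducing noise
    # interference and edge aliasing during synthesis (step S3).
    return uniform_filter(std, size=n, mode="reflect")
```

- On a grayscale frame, a larger value of focus_coefficient(frame) at a pixel indicates that the frame is locally sharper there.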
- The amount of computation in the above two steps increases geometrically as the filter window becomes larger.
- The larger the window, the longer the calculation time.
- When the window reaches 31*31, it can satisfy depth-of-field synthesis in most cases. If the image resolution is relatively large, most computers cannot achieve real-time processing, but on the FPGA architecture all of the image data can be processed in parallel, so it can all be processed in real time.
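- As a rough, illustrative estimate (these figures are not from the patent): a 31*31 window contains 961 pixels, so for a 1920*1080 image a naive implementation of each of the two filtering steps touches on the order of 1920 * 1080 * 961 ≈ 2 × 10^9 pixel values per frame; at 30 frames per second the two steps together approach 10^11 pixel accesses per second, which is difficult for a general-purpose processor, whereas the FPGA pipelines and parallelizes the per-pixel window computations on the streaming data.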
- Step S4: the FPGA compares the focus coefficient of the new image with that of the previous composite image, controls the iteration of the new and old image data, updates the pixel data of the composite image and maintains the iterated image focus coefficient. If the focus coefficient of the new image at a given pixel is greater than that of the previous composite image, the corresponding pixel of the composite image is replaced with the pixel of the new image; otherwise it is not replaced. The larger focus coefficient is stored as the iterated image focus coefficient.
- Step S5: the FPGA repeats the above steps S1-S4, exploiting its parallel processing capability, until the program terminates.
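- As a software reference for steps S4-S5 only (on the FPGA the same comparison runs in parallel over the pixel stream), the per-pixel selection can be sketched as follows. Grayscale frames are assumed and the focus coefficients are taken as precomputed inputs (for example from a helper like the focus_coefficient sketch above); all names are chosen for this sketch:

```python
import numpy as np

def update_composite(composite, composite_focus, new_image, new_focus):
    """Step S4 sketch: where the new image has the larger focus coefficient,
    take its pixel and keep that coefficient; otherwise keep the previous
    composite pixel and coefficient."""
    sharper = new_focus > composite_focus
    composite = np.where(sharper, new_image, composite)
    composite_focus = np.where(sharper, new_focus, composite_focus)
    return composite, composite_focus

def synthesize(frames_with_focus):
    """Step S5 sketch: iterate over (image, focus_coefficient) pairs,
    updating the composite until the sequence ends."""
    frames_with_focus = iter(frames_with_focus)
    composite, composite_focus = next(frames_with_focus)
    for new_image, new_focus in frames_with_focus:
        composite, composite_focus = update_composite(
            composite, composite_focus, new_image, new_focus)
    return composite
```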
- The present invention also proposes a real-time depth-of-field synthesis system based on an FPGA architecture, comprising:
- an image acquisition module, used to acquire new image data, convert the acquired image into video stream data (data stream 1), and transmit it to the image standard deviation filtering module;
- the image standard deviation filtering module, used to perform standard deviation filtering on the received video stream data to generate new video stream data (data stream 1) and image standard deviation data (data stream 2) and transmit them to the image mean filtering module. This calculation module implements part of the image focus function.
- The module mainly consists of an n*n standard deviation filter kernel, and the window size of the filter kernel can be controlled by the parameter control module.
- the image mean filtering module, used to perform mean filtering on the received image standard deviation data (data stream 2) to obtain the image focus coefficient, align it with the corresponding video stream data (data stream 1), and transmit both to the image synthesis module.
- The main function of this module is to apply a mean blur to the image standard deviation, thereby reducing noise interference and image edge jaggedness during image synthesis.
- This module is also part of calculating the image focus function.
- The module receives the image standard deviation data (data stream 2) sent by the image standard deviation filtering module, performs mean filtering on it, and then generates new image focus coefficient data (data stream 3) for use by the image synthesis module.
- This module mainly consists of an n*n mean filter kernel, and the window size of the filter kernel can be controlled by the parameter control module.
- the image synthesis module reads the image data and image focus coefficient of the previous iteration from the DDR controller module, compares the focus coefficient derived from the new image standard deviation data with the image focus coefficient of the previous iteration, controls the iteration of the new and old image data, updates the pixel data of the composite image, and updates the iterated image focus coefficient. If the focus coefficient of the new image at a given pixel is greater than that of the previous composite image, the corresponding pixel of the composite image is replaced with the pixel of the new image; otherwise it is not replaced. The larger focus coefficient is stored as the iterated image focus coefficient.
- This module reads the previously synthesized image data (data stream 4) and the previously saved image focus coefficient (data stream 5) from the DDR controller.
- It compares the new image focus coefficient (data stream 3) with the previous image focus coefficient (data stream 5) point by point. If the new image focus coefficient at a given pixel is greater than the focus coefficient of the previous image, the new image is closer to being in focus at that point, and its video stream data (data stream 1) and image focus coefficient (data stream 3) replace the previously synthesized image data (data stream 4) and the previously saved image focus coefficient (data stream 5) at that point. Otherwise, the new image is farther from focus at that point, the image data at that point is not replaced, and the already-synthesized image data is retained. Finally, the module generates new composite video stream data (data stream 6) and a new image focus coefficient (data stream 7) and sends them to the DDR controller module, and it sends the new video stream data to the image display/transmission module for display or output, realizing depth-of-field synthesis;
- the DDR controller module is used to receive instructions from the image synthesis module and to retrieve and maintain data in the DDR.
- In the formula for standard deviation filtering:
- N represents the number of pixels in the n*n window;
- x_i represents the value of each pixel in the window;
- S_N represents the standard deviation value at the pixel where the window is located;
- In the formula for mean filtering:
- n represents the area of the n*n window;
- x_i represents each value in the window.
- the image display/transmission module is used to display or transmit the image synthesized by the image synthesis module.
- the parameter control module is used to transmit different commands to each module, for controlling the switches and window size parameters of the different filter modules.
- The implementation framework of the present invention consists of 7 modules: the DDR controller module, image acquisition module, image standard deviation filter module, image mean filter module, image synthesis module, image display/transmission module and parameter control module.
- Each module can work independently and in parallel, so that the parallelism of the depth-of-field synthesis algorithm is maximized; as a result, real-time depth-of-field synthesis can be achieved for images of different resolutions.
- The present invention uses one specific, concrete algorithm to demonstrate the advantage of the FPGA for depth-of-field synthesis.
- Other algorithms can also be implemented for depth-of-field synthesis by modifying the existing implementation modules.
- Using an FPGA architecture to realize real-time depth-of-field synthesis maximizes the parallelism of the algorithm and greatly shortens the calculation time, so as to meet the requirements of real-time depth-of-field synthesis for images at different resolutions.
- The FPGA architecture can also be used as an accelerator card for different devices, to meet those devices' real-time depth-of-field synthesis requirements.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a real-time depth-of-field synthesis algorithm and system based on an FPGA architecture. Specifically, the algorithm comprises the following steps: collecting a new image; performing a standard deviation calculation on the image by means of an FPGA, calculating a standard deviation of the image using an n × n filter kernel window, and performing filtering, n being an integer; calculating a mean value of the standard deviation of the image using the n × n filter kernel window, performing mean filtering, and obtaining a focus coefficient of the image, n being an integer; comparing the focus coefficients of the new image and of a previously synthesized image by means of the FPGA, controlling the iteration of the new and old image data, updating the pixel data of the synthesized image and retaining an image focus coefficient after iteration; and repeating the steps by means of the FPGA on the basis of its parallel processing capability until the program terminates. According to the method, the parallel processing capability of the FPGA is used, an algorithm that originally required a high-performance processor is mapped onto the FPGA, real-time processing capability is improved and product costs are reduced.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201980096049.2A CN113892126B (zh) | 2019-05-31 | 2019-05-31 | 一种基于fpga架构的实时景深合成算法和系统 |
| PCT/CN2019/089585 WO2020237644A1 (fr) | 2019-05-31 | 2019-05-31 | Algorithme et système de synthèse de profondeur de champ en temps réel reposant sur une architecture fpga |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2019/089585 WO2020237644A1 (fr) | 2019-05-31 | 2019-05-31 | Algorithme et système de synthèse de profondeur de champ en temps réel reposant sur une architecture fpga |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020237644A1 true WO2020237644A1 (fr) | 2020-12-03 |
Family
ID=73552479
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2019/089585 Ceased WO2020237644A1 (fr) | 2019-05-31 | 2019-05-31 | Algorithme et système de synthèse de profondeur de champ en temps réel reposant sur une architecture fpga |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN113892126B (fr) |
| WO (1) | WO2020237644A1 (fr) |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101375829B1 (ko) * | 2009-11-03 | 2014-03-17 | 삼성테크윈 주식회사 | 감시 카메라의 제어 방법 및 이를 사용한 감시 카메라 |
| KR20170101532A (ko) * | 2016-02-29 | 2017-09-06 | (주)나모인터랙티브 | 이미지 융합 방법 및 이를 위한 컴퓨터 프로그램, 그 기록매체 |
| CN106154690B (zh) * | 2016-09-20 | 2018-09-25 | 照旷科技(上海)有限公司 | 一种控制平台控制相机快速自动调焦的方法 |
| CN106485678A (zh) * | 2016-10-11 | 2017-03-08 | 东南大学 | 一种基于时空滤波的景深时空一致性及精度增强的方法 |
| WO2018100414A1 (fr) * | 2016-12-01 | 2018-06-07 | Synaptive Medical (Barbados) Inc. | Système d'appareil photo permettant de fournir des images avec une résolution élevée et une grande profondeur de champ simultanées |
| CN107277381A (zh) * | 2017-08-18 | 2017-10-20 | 成都市极米科技有限公司 | 摄像头对焦方法和装置 |
| CN108492320B (zh) * | 2018-03-14 | 2022-04-12 | 四川长九光电科技有限责任公司 | 一种基于并行处理的红外弱小目标检测方法 |
| CN109782414B (zh) * | 2019-03-01 | 2021-05-18 | 广州医软智能科技有限公司 | 一种基于无参考结构清晰度的自动调焦方法 |
2019
- 2019-05-31 CN CN201980096049.2A patent/CN113892126B/zh active Active
- 2019-05-31 WO PCT/CN2019/089585 patent/WO2020237644A1/fr not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101742107A (zh) * | 2008-11-25 | 2010-06-16 | 索尼株式会社 | 成像装置和成像方法 |
| CN104781846A (zh) * | 2012-06-20 | 2015-07-15 | 前视红外系统股份公司 | 用于补偿与振动相关的动态模糊的方法 |
| US20150348239A1 (en) * | 2014-06-02 | 2015-12-03 | Oscar Nestares | Image refocusing for camera arrays |
| CN106993130A (zh) * | 2017-03-09 | 2017-07-28 | 北京小米移动软件有限公司 | 采集图像的方法、装置及移动设备 |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116380140A (zh) * | 2023-06-07 | 2023-07-04 | 山东省科学院激光研究所 | 基于均值滤波技术的分布式声波传感系统及其测量方法 |
| CN116380140B (zh) * | 2023-06-07 | 2023-11-03 | 山东省科学院激光研究所 | 基于均值滤波技术的分布式声波传感系统及其测量方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113892126B (zh) | 2024-11-15 |
| CN113892126A (zh) | 2022-01-04 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5909540B2 (ja) | 画像処理表示装置 | |
| JP6087671B2 (ja) | 撮像装置およびその制御方法 | |
| CN102959586B (zh) | 深度推测装置以及深度推测方法 | |
| CN111986106B (zh) | 一种基于神经网络的高动态图像重建方法 | |
| Yu et al. | Reconfigisp: Reconfigurable camera image processing pipeline | |
| CN110958363B (zh) | 图像处理方法及装置、计算机可读介质和电子设备 | |
| WO2023236508A1 (fr) | Procédé et système d'assemblage d'images basés sur une caméra ayant un réseau d'un milliard de pixels | |
| KR20110056098A (ko) | P s f를 추정하기 위한 장치 및 방법 | |
| CN103348667A (zh) | 摄像装置、摄像方法及程序 | |
| CN112529813B (zh) | 图像去雾处理方法、装置及计算机存储介质 | |
| US12200361B2 (en) | Imaging support device that performs registration control for setting a detected subject image position, imaging support system, imaging system, imaging support method, and program | |
| CN115546043B (zh) | 视频处理方法及其相关设备 | |
| CN102763405A (zh) | 包括目标跟踪功能的成像设备 | |
| WO2021104394A1 (fr) | Procédé et appareil de traitement d'image, dispositif électronique et support de stockage | |
| CN107613216A (zh) | 对焦方法、装置、计算机可读存储介质和电子设备 | |
| WO2024072250A1 (fr) | Procédé et appareil d'entraînement de réseau de neurones artificiels, ainsi que procédé et appareil de traitement d'image | |
| CN114298942A (zh) | 图像去模糊方法及装置、计算机可读介质和电子设备 | |
| CN107730472A (zh) | 一种基于暗原色先验的图像去雾优化算法 | |
| CN113391644B (zh) | 一种基于图像信息熵的无人机拍摄距离半自动寻优方法 | |
| WO2020237644A1 (fr) | Algorithme et système de synthèse de profondeur de champ en temps réel reposant sur une architecture fpga | |
| CN111371987A (zh) | 图像处理方法和装置、电子设备、计算机可读存储介质 | |
| CN111968039A (zh) | 基于硅传感器相机的昼夜通用图像处理方法、装置及设备 | |
| CN107295261A (zh) | 图像去雾处理方法、装置、存储介质和移动终端 | |
| EP3605450B1 (fr) | Appareil de traitement d'images, appareil de capture d'images, procédé de commande de l'appareil de traitement d'images et programme informatique | |
| CN101600049A (zh) | 图像处理装置与方法 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19930553 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 19930553 Country of ref document: EP Kind code of ref document: A1 |