
WO2025206438A1 - Image processing device and method for analyzing a hazardous object on a road - Google Patents

Image processing device and method for analyzing a hazardous object on a road

Info

Publication number
WO2025206438A1
WO2025206438A1 PCT/KR2024/004061 KR2024004061W WO2025206438A1 WO 2025206438 A1 WO2025206438 A1 WO 2025206438A1 KR 2024004061 W KR2024004061 W KR 2024004061W WO 2025206438 A1 WO2025206438 A1 WO 2025206438A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
road surface
image processing
processing device
preprocessing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/KR2024/004061
Other languages
English (en)
Korean (ko)
Inventor
김용훈
신동하
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dareesoft Inc
Original Assignee
Dareesoft Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dareesoft Inc filed Critical Dareesoft Inc
Priority to PCT/KR2024/004061 priority Critical patent/WO2025206438A1/fr
Publication of WO2025206438A1 publication Critical patent/WO2025206438A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/40 - Image enhancement or restoration using histogram techniques
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/60 - Image enhancement or restoration using machine learning, e.g. neural networks
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/30 - Noise filtering

Definitions

  • This specification relates to computer vision technology, and more particularly, to a device, a method, and a recording medium storing the method, for processing an image captured of a space in which a vehicle is driving, such as a road, before the space is analyzed for the presence of a hazardous object.
  • Korean Patent No. 10-2147540, "Information Sharing Server for Enabling Sharing of Road Condition Information Based on Vehicle Location Information and Driving Condition Information and Operating Method Thereof," introduces a technology for sharing road condition information based on vehicle location information and driving condition information.
  • Image preprocessing can draw on a wide variety of element technologies depending on the purpose of the image processing.
  • In particular, image processing suited to road-image analysis is required when the goal is to recognize specific objects, such as hazardous objects, within road images acquired from a moving vehicle.
  • Accordingly, technologies are needed that match the application area or environment of the technology (the road space) and the characteristics of the target image (the road image).
  • The technical problems that the embodiments of the present specification seek to solve are: resolving the difficulty of analyzing road surface images with an artificial intelligence model under certain road environments or weather conditions when real-time road images are acquired through a camera installed in a driving vehicle; overcoming the limitation that image quality improvement technologies aimed at general subjects do not properly reflect the image characteristics of road surfaces; and resolving the weakness that batch-correcting a large number of continuously input images actually introduces errors into road surface image analysis.
  • A method for processing an image for object analysis by an image processing device having at least one processor includes the steps of: receiving, by the image processing device, at least one image of a space; determining, by the image processing device, whether preprocessing is necessary in consideration of image characteristics of a road surface included in the input image; improving, by the image processing device, image deterioration of the road surface included in a set target image according to a result of the determination of whether preprocessing is necessary; and outputting, by the image processing device, one or more images including the improved target image to an artificial intelligence model for object analysis.
  • The step of determining whether preprocessing is necessary may include the steps of: generating a histogram representing the distribution of each pixel value of the input image; counting the number of pixels having an illuminance value smaller than an illuminance threshold for a road surface image on the histogram; and setting the image as a target image for preprocessing when the ratio of the number of counted pixels to the total number of pixels of the image is greater than a ratio threshold.
  • The step of improving image degradation may include a step of reducing noise of the road surface included in a target image set to require preprocessing by referring to surrounding pixel values, and a step of non-linearly adjusting the brightness level of the filtered target image to improve the contrast of the road surface included in the target image.
  • Also provided is a computer-readable recording medium having recorded thereon a program for executing the image processing method described above on a computer.
  • The image characteristics of the road surface may be determined from the correlation between the variables and the detection performance when the detection performance of the artificial intelligence model for the road surface itself, or for a hazardous object or damage on the road surface, changes due to at least one variable among the moving speed of the image processing device, the shutter speed of an image sensor provided in the image processing device, the illuminance, and whether it is day or night.
  • The software code may include a command for generating a histogram representing the distribution of each pixel value of the input image, counting the number of pixels having an illuminance value smaller than an illuminance threshold for a road surface image on the histogram, and setting the image as a target image for preprocessing when the ratio of the number of counted pixels to the total number of pixels of the image is greater than a ratio threshold.
  • The software code may include instructions for reducing noise of the road surface included in a target image set to require preprocessing by referring to surrounding pixel values, and for non-linearly adjusting the brightness level of the filtered target image to enhance the contrast of the road surface included in the target image.
  • Embodiments of the present specification determine whether preprocessing is necessary by considering the image characteristics of the road surface included in the image, comprehensively weighing the illuminance value of each pixel and the distribution ratio of pixels unsuitable for analysis within the image area. Preprocessing is therefore performed on only some of the many images continuously input in real time, which reduces the burden on the system while improving the performance of road surface image analysis.
  • In addition, noise is reduced while preserving image boundaries by referring to surrounding pixel values, and contrast is improved by non-linearly adjusting the brightness level of the image, which significantly improves analysis performance for road surface areas where environmental and weather conditions change. A hedged code sketch of such an improvement step is given below.
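The specification does not fix a particular filter or brightness curve for this step; the sketch below uses a bilateral filter as the edge-preserving, neighborhood-referencing noise reduction and a gamma lookup table as the non-linear brightness adjustment. Both choices, and the parameter values, are illustrative assumptions rather than the patented implementation.

```python
import cv2
import numpy as np

def improve_road_image(bgr: np.ndarray, gamma: float = 1.8) -> np.ndarray:
    """Illustrative degradation-improvement step for a flagged road frame."""
    # Edge-preserving noise reduction: each output pixel is a weighted average of
    # surrounding pixels, so road-surface boundaries (cracks, object edges) survive.
    filtered = cv2.bilateralFilter(bgr, d=9, sigmaColor=50, sigmaSpace=50)

    # Non-linear brightness adjustment: a gamma lookup table lifts dark
    # road-surface tones more than bright ones, improving night-time contrast.
    lut = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                   dtype=np.uint8)
    return cv2.LUT(filtered, lut)
```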
  • Figure 1 is an exemplary diagram illustrating the analysis of a hazard detected in a road environment in which embodiments of the present specification are implemented.
  • FIG. 2 is a diagram showing a schematic process for processing images for road hazard analysis proposed by embodiments of the present specification.
  • FIG. 3 is a flowchart illustrating a method for processing an image for object analysis according to one embodiment of the present specification.
  • FIG. 4 is a flowchart illustrating in more detail the process of determining whether preprocessing is necessary in the embodiment of FIG. 3 for processing an image for object analysis.
  • Figures 5 and 6 are diagrams illustrating comparison of histogram analysis results for road images captured during the day and at night, respectively.
  • FIG. 7 is a flowchart illustrating in more detail the process of improving image degradation in the embodiment of FIG. 3 for processing images for object analysis.
  • FIG. 8 is a drawing illustrating a process for reducing noise on a road surface in an image processing method according to one embodiment of the present specification.
  • FIG. 9 is a drawing for explaining a process for improving the contrast of a road surface in an image processing method according to one embodiment of the present specification.
  • FIG. 10 and FIG. 11 are diagrams for explaining a process of optimizing a control factor used in a preprocessing process in an image processing method according to one embodiment of the present specification.
  • FIG. 12 is a block diagram illustrating an image processing device for object analysis according to one embodiment of the present specification.
  • Figure 1 is an exemplary diagram illustrating the analysis of a hazard detected in a road environment in which embodiments of the present disclosure are implemented. It illustrates a vehicle driving on a road at night, with streetlights only partially present within the space. The reference vehicle is equipped with an image sensor (camera) that captures road images, thereby acquiring continuous images (e.g., video) of the vehicle's surroundings (e.g., the area ahead).
  • FIG. 2 is a diagram illustrating a schematic process for processing images for road hazard analysis proposed by embodiments of the present disclosure.
  • Road images are acquired through an image sensor (camera) mounted on a vehicle, and processing such as ISP correction and format conversion (e.g., to YUV or RGB) can be performed.
  • This can be appropriately configured depending on the implementation environment or requirements, and is not limited to the embodiments of the present disclosure.
  • The embodiments of the present disclosure assume that a large number of continuous images are input in real time, and aim to perform preprocessing (210) on only some of these images before inputting them into an artificial intelligence model. This requires a standard for determining which images are subject to preprocessing (210), as well as a decision on which technical means will be used to manipulate the images that meet that standard.
  • The embodiments of the present specification analyze road surface images. The detection performance of an artificial intelligence model for the road surface itself, or for hazards or damage on the road surface, therefore varies depending on at least one variable among the moving speed of the image processing device, the shutter speed of the image sensor provided in the image processing device, the level of illumination, and whether it is day or night. A correlation appears between these variables and the detection performance, and preprocessing must be performed with a thorough understanding of these characteristics.
  • The embodiments of the present specification establish criteria for selecting preprocessing targets by considering these characteristics of road images, and also specify a preprocessing method that enhances the processing performance of the artificial intelligence model.
  • Road condition monitoring requires rapid processing of video consisting of multiple frames input in real time while a vehicle is driving on the road. For example, uniformly preprocessing approximately 30 images per second for a fast-moving vehicle in bad weather or low-light/night conditions still places a burden on hardware resources. It is therefore necessary to quickly determine whether preprocessing is needed for each of the continuously input images, and this determination can be made according to the judgment criteria established in the embodiments of the present specification.
  • In other words, a target image requiring preprocessing (210) is selected from among a plurality of RGB images and is then improved through the preprocessing process so that it is suitable for road analysis.
  • Here, "improvement" does not mean changing the quality of the image to make it more pleasing to the eye; it means processing the image so that an artificial intelligence model can analyze the road or road surface image.
  • FIG. 3 is a flowchart illustrating a method for processing an image for object analysis according to one embodiment of the present specification, wherein an image processing device having at least one processor can perform the following series of processing steps.
  • The image processing device may be a device installed in a moving object (e.g., a moving vehicle) to receive and analyze images of the surroundings (e.g., the front) of the vehicle acquired through an image sensor (e.g., a camera).
  • However, the image processing device does not necessarily have to be physically connected to the image sensor or located in the vehicle; it is sufficient that it is connected to the vehicle via a wired or wireless communication means and can analyze road images transmitted from the vehicle in real time.
  • In step S310, the image processing device receives at least one image of a space.
  • In step S330, the image processing device determines whether preprocessing is necessary by considering the image characteristics of the road surface included in the image input through step S310.
  • Here, the image characteristics of the road surface can be determined from the correlation between the variables and the detection performance when the detection performance of the artificial intelligence model for the road surface itself, or for hazardous objects or damage on the road surface, changes due to at least one variable among the moving speed of the image processing device, the shutter speed of the image sensor provided in the image processing device, the illuminance, and whether it is day or night.
  • The determination of whether preprocessing is necessary is basically performed by referring to the histogram of the image.
  • The histogram represents the distribution of each pixel value in the image and provides a basis for judgment in tasks such as adjusting the brightness or contrast of the image. Because the embodiments of the present specification analyze road images (particularly road surfaces), they face the difficulty of recognizing cracks or damage within asphalt, which consists of monotonous colors based on gray or black. In addition, road images suffer from an uneven distribution of pixel values and a lack of clear contrast. Considering these image characteristics of road surfaces, the necessity of preprocessing can be determined by how clearly the weaknesses that make road image analysis difficult are revealed.
  • Specifically, the embodiments of the present specification determine whether preprocessing is necessary by comprehensively considering the illuminance value of each pixel and the distribution status (distribution ratio) of unsuitable pixels within the image area. A more specific implementation is described later with reference to FIG. 4.
  • In step S350, the image processing device improves the image deterioration of the road surface included in the set target image based on the result of the judgment as to whether preprocessing is necessary.
  • That is, the image determined to require preprocessing in step S330 is set as the target image, and only the target image is selectively preprocessed among the plurality of images continuously input in real time.
  • This has the advantage of preprocessing only the frames that truly require it, reducing the burden on system resources in the context of road condition monitoring.
  • Moreover, the deviations between scenes that appear during road driving may actually have a negative impact on the analysis when input to the artificial intelligence model. It is therefore very important to identify only the images unsuitable for road surface analysis among the plurality of images and to preprocess them selectively, as in the pipeline sketch below.
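The following is a minimal sketch of that selective pipeline. The helper callables (needs_preprocessing, improve, analyze) are hypothetical stand-ins for the histogram-based check, the degradation-improvement step, and the hazard-detection AI model; none of these names come from the specification.

```python
from typing import Callable, Iterable, List

def process_stream(frames: Iterable,
                   needs_preprocessing: Callable,
                   improve: Callable,
                   analyze: Callable) -> List:
    """Run the AI model on every frame, preprocessing only the flagged ones."""
    results = []
    for frame in frames:                  # frames arrive continuously in real time
        if needs_preprocessing(frame):    # histogram-based low-light check
            frame = improve(frame)        # edge-preserving filter + non-linear brightness
        results.append(analyze(frame))    # hazard-detection AI model sees every frame
    return results
```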
  • In short, the purpose of the preprocessing proposed by the embodiments of the present specification is to detect target images that are unsuitable for the analysis task of the artificial intelligence model.
  • The improvement of image deterioration carried out in the preprocessing process likewise serves to make the target image suitable for the analysis task of the artificial intelligence model. Accordingly, the control factors involved in both selecting the preprocessing targets and executing the preprocessing are all values set to suit the artificial intelligence model to be used afterwards.
  • Here, being suitable for the artificial intelligence model means that the model can analyze the road image well and accurately recognize the hazards or road surface damage contained in the image.
  • For example, the success rate or accuracy of any one of detection, tracking, recognition, or classification of an object by the artificial intelligence model can be set as the judgment index for optimizing the control factors for the model's analysis.
  • FIG. 4 is a flowchart illustrating in more detail the process (step S330) of determining whether preprocessing is necessary in the embodiment of FIG. 3 for processing an image for object analysis.
  • In step S331, a histogram representing the distribution of each pixel value of the input image is generated.
  • In step S333, the number of pixels having an illuminance value lower than the illuminance threshold for a road surface image is counted on the histogram. A pixel whose illuminance value is lower than the preset illuminance threshold is not suitable for road surface image analysis, so the number of pixels below the threshold is counted to determine how such pixels are distributed within the entire image area.
  • In step S335, if the ratio of the number of counted pixels to the total number of pixels in the image is greater than a ratio threshold, the image can be set as a target image for preprocessing.
  • A large number of pixels below the threshold indicates that the image is unsuitable for road surface image analysis using the artificial intelligence model, which is why the ratio of unsuitable pixels is calculated. If the calculated ratio of unsuitable pixels is greater than the preset ratio threshold, the image is determined to be a target image requiring preprocessing.
  • The illuminance threshold and the ratio threshold can be set so that the detection performance of the artificial intelligence model for a hazardous object included in a road surface image exceeds a specific performance standard value; a minimal code sketch of this decision logic follows.
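Below is a hedged sketch of steps S331-S335 in Python with OpenCV. The concrete threshold values (an illuminance threshold of 60 and a ratio threshold of 0.5) are placeholders chosen for illustration; the specification ties both thresholds to the detection performance of the downstream AI model rather than fixing them.

```python
import cv2
import numpy as np

def needs_preprocessing(bgr: np.ndarray,
                        illuminance_threshold: int = 60,
                        ratio_threshold: float = 0.5) -> bool:
    """Flag a road frame for preprocessing based on its dark-pixel ratio."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()   # S331: histogram
    dark_pixels = hist[:illuminance_threshold].sum()                  # S333: count dark pixels
    dark_ratio = dark_pixels / gray.size                              # S335: ratio check
    return dark_ratio > ratio_threshold
```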
  • Figures 5 and 6 are diagrams illustrating a comparison of histogram analysis results for road images captured during the day and at night, respectively.
  • The daytime image has sufficient light, so the captured image quality is suitable for analysis by the AI model even at high vehicle speeds or with increased camera shutter speeds.
  • The histogram illustrated in Figure 5 shows a wide range of brightness values, a uniform distribution of pixel intensities, strong mid-tone intensities, and clear image contrast.
  • In contrast, in Fig. 6, which shows a night or low-light image, the brightness distribution is uneven, the road surface area lacks sufficient contrast, and light reflections cause partial adverse effects.
  • The criteria for determining which images should be preprocessed can be confirmed through the difference between these histograms.
  • As described above, the embodiments of the present specification comprehensively consider the illuminance value of each pixel and the distribution status (distribution ratio) of unsuitable pixels within the image area to determine whether preprocessing is necessary.
  • Compared with a low-illuminance classification method that simply uses the average of all pixel values, the method for determining whether preprocessing is necessary proposed by the embodiments of the present specification responds relatively well to uneven brightness distributions and therefore enables robust determination.
  • In addition, the proposed method can selectively adjust the brightness threshold regarded as low illuminance and the required ratio of low-illuminance pixels to all pixels. This adjustability allows more precise control across different lighting conditions and environments, increasing the flexibility and accuracy of low-light judgment; a toy comparison with the mean-based approach follows below.
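The synthetic example below (not from the specification; the 60/0.5 thresholds and the image contents are arbitrary) shows why the ratio-based check is more robust: a small patch of streetlight glare can pull the global mean above a low-light cut-off even though most of the road surface remains too dark to analyze.

```python
import numpy as np

frame = np.full((100, 100), 30, dtype=np.uint8)   # mostly dark road surface
frame[:40, :40] = 255                             # bright glare from a streetlight

mean_value = frame.mean()                         # 66.0 -> a mean-based check (cut-off 60) misses the problem
dark_ratio = (frame < 60).mean()                  # 0.84 -> the ratio-based check (cut-off 0.5) flags the frame

print(f"mean={mean_value:.1f}, dark_ratio={dark_ratio:.2f}")
```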
  • FIG. 7 is a flowchart illustrating in more detail the process of improving image degradation (step S350) in the embodiment of FIG. 3 for processing images for object analysis.
  • Control factors can be adjusted to maximize the performance of the AI model for object detection, for example by using optimization algorithms such as genetic algorithms or gradient descent, thereby maximizing the detection performance for road hazards.
  • The optimization process is performed repeatedly over a large number of sample images, which improves the generalization ability of the model and ensures reliable hazard detection in actual driving environments.
  • In other words, the image processing technology proposed in the embodiments of this specification focuses on designing a pipeline for image preprocessing in low-light environments and on optimizing its control factors, thereby aiming to significantly improve road hazard detection performance in night-time and low-light environments.
  • To this end, a dataset of images captured and labeled under various road conditions is first input into the preprocessing pipeline.
  • This pipeline has the control factors alpha, beta, mu, and gamma, and each individual (candidate solution) is assigned a unique combination of control factor values.
  • The preprocessed image dataset is then input into the hazard detection AI model, and the accuracy achieved with each individual is analyzed based on the inference results. Among the evaluation results for the multiple individuals, the top N/2 individuals with relatively high accuracy can be selected.
  • Next, new combinations of control factors can be generated through crossover and mutation operations. These new combinations can be fed back into the preprocessing pipeline, and the series of performance evaluations can be repeated until a preset termination condition is reached.
  • This process aims to observe how each combination of control factors changes the performance of the AI model for hazard detection on image data obtained from real-world road conditions, and to continuously adjust the control factors based on the observed changes.
  • This genetic-algorithm-based control factor optimization can enable the AI model to achieve more robust detection capabilities across a variety of road conditions; a simplified sketch of the loop is given below.
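The sketch below follows the select-top-half / crossover / mutation structure described above. The fitness function (the detection accuracy of the hazard-detection model on the labeled dataset after preprocessing with a given control-factor set), the parameter ranges, the population size, and the mutation settings are all illustrative assumptions.

```python
import random

FACTORS = ("alpha", "beta", "mu", "gamma")

def random_individual():
    return {k: random.uniform(0.0, 3.0) for k in FACTORS}   # assumed value range

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in FACTORS}

def mutate(ind, rate=0.2, scale=0.3):
    return {k: v + random.gauss(0, scale) if random.random() < rate else v
            for k, v in ind.items()}

def optimize(fitness, n=8, generations=20):
    """fitness(individual) -> detection accuracy of the AI model after
    preprocessing the labeled dataset with that control-factor combination."""
    population = [random_individual() for _ in range(n)]
    for _ in range(generations):                       # preset termination condition
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: n // 2]                      # keep the top N/2 individuals
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(n - len(parents))]
        population = parents + children                 # feed back into the pipeline
    return max(population, key=fitness)
```

In practice the fitness evaluation would wrap the full preprocessing pipeline and one of the judgment indices mentioned above (detection, tracking, recognition, or classification accuracy).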
  • FIG. 12 is a block diagram illustrating an image processing device for object analysis according to one embodiment of the present specification, which reconstructs the image processing method of FIG. 3 from the perspective of hardware configuration. Therefore, to avoid redundant explanation, only an outline of the operation and function of each component is briefly described herein.
  • An image processing device (90) includes a memory (10) that stores software code for processing images for object analysis, and a processor (30) that executes the software code.
  • The software code includes commands for receiving at least one image of a space, determining whether preprocessing is necessary by considering the image characteristics of a road surface included in the input image, improving image deterioration of the road surface included in a set target image according to the result of that determination, and outputting one or more images including the improved target image to an artificial intelligence model for object analysis.
  • The image characteristics of the road surface can be determined from the correlation between the variables and the detection performance when the detection performance of the artificial intelligence model for the road surface itself, or for a hazardous object or damage on the road surface, changes due to at least one variable among the moving speed of the image processing device, the shutter speed of an image sensor provided in the image processing device, the illuminance, and whether it is day or night.
  • The image sensor (50) does not have to be included in the image processing device (90), but the image acquired through the image sensor (50) naturally has to be provided as input.
  • Of course, the image sensor (50) may be configured as an integral assembly with the image processing device (90) depending on implementation needs.
  • The software code may include a command to generate a histogram representing the distribution of each pixel value of the input image, count the number of pixels having an illuminance value smaller than an illuminance threshold for a road surface image on the histogram, and set the image as a target image for preprocessing when the ratio of the number of counted pixels to the total number of pixels of the image is greater than a ratio threshold.
  • The software code may also include instructions for reducing noise of the road surface included in a target image set to require preprocessing by referring to surrounding pixel values, and for non-linearly adjusting the brightness level of the filtered target image to enhance the contrast of the road surface included in the target image.
  • Embodiments according to the present specification may be implemented by various means, for example, hardware, firmware, software, or a combination thereof.
  • In the case of a hardware implementation, an embodiment of the present specification may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and the like.
  • In the case of a firmware or software implementation, an embodiment of the present specification may be implemented in the form of a module, procedure, function, or the like that performs the functions or operations described above.
  • Software code may be stored in a memory and executed by a processor.
  • The memory may be located inside or outside the processor and may exchange data with the processor by various known means.
  • Computer-readable recording media include all types of recording devices that store data that can be read by a computer system. Examples of computer-readable recording media include ROMs, RAMs, CD-ROMs, magnetic tapes, floppy disks, optical data storage devices, etc.
  • The computer-readable recording media can also be distributed across network-connected computer systems so that the computer-readable code is stored and executed in a distributed manner.
  • Functional programs, code, and code segments for implementing the embodiments can be readily inferred by programmers in the technical field to which the present specification pertains.
  • One or more non-transitory computer-readable media according to one embodiment store one or more instructions executable by one or more processors, wherein the one or more instructions process an image for object analysis: at least one image of a space is input; whether preprocessing is necessary is determined in consideration of the image characteristics of a road surface included in the input image; according to the result of that determination, image deterioration of the road surface included in a set target image is improved; and one or more images including the improved target image are output to an artificial intelligence model for object analysis.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to computer vision technology. A method by which an image processing device processes an image for object analysis comprises: receiving at least one image of a space as input; determining, in consideration of the image characteristics of a road surface included in the input image, whether preprocessing is required; improving image degradation of the road surface included in a target image set according to the result of the determination that preprocessing is required; and outputting one or more images including the improved target image to an artificial intelligence model for object analysis.
PCT/KR2024/004061 2024-03-29 2024-03-29 Image processing device and method for analyzing a hazardous object on a road Pending WO2025206438A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2024/004061 WO2025206438A1 (fr) 2024-03-29 2024-03-29 Image processing device and method for analyzing a hazardous object on a road

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2024/004061 WO2025206438A1 (fr) 2024-03-29 2024-03-29 Image processing device and method for analyzing a hazardous object on a road

Publications (1)

Publication Number Publication Date
WO2025206438A1 (fr) 2025-10-02

Family

ID=97217886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2024/004061 Pending WO2025206438A1 (fr) 2024-03-29 2024-03-29 Image processing device and method for analyzing a hazardous object on a road

Country Status (1)

Country Link
WO (1) WO2025206438A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070000982 * 2005-06-28 2007-01-03 LG.Philips LCD Co., Ltd. Median filtering method and apparatus
KR20190103508 * 2018-02-12 2019-09-05 Kyungpook National University Industry-Academic Cooperation Foundation Lane detection method, and apparatus and recording medium for performing the same
KR102103770 * 2018-04-02 2020-04-24 Dongguk University Industry-Academic Cooperation Foundation Apparatus and method for pedestrian detection
KR20230048429 * 2020-10-21 2023-04-11 Continental Autonomous Mobility Germany GmbH System for preventing accidents caused by wild animal crossings at dusk and at night
CN114735026 * 2022-04-13 2022-07-12 Sun Yat-sen University Lateral and longitudinal decision control method for unmanned transport vehicles in special scenarios

Similar Documents

Publication Publication Date Title
CN101882034B (zh) Touch pen color recognition device and method for a touch device
EP1918872B1 (fr) Image segmentation method and system
WO2020171305A1 (fr) Apparatus and method for capturing and blending multiple images for high-quality flash photography using a mobile electronic device
JP2022509034A (ja) Bright spot removal using a neural network
CN110346116B (zh) Scene illuminance calculation method based on image acquisition
JP4985394B2 (ja) Image processing device and method, program, and recording medium
WO2013114803A1 (fr) Image processing device and method, computer program, and image processing system
JP5814799B2 (ja) Image processing device and image processing method
WO2016024680A1 (fr) Vehicle black box capable of real-time recognition of the license plate of a moving vehicle
CN115019340A (zh) Night-time pedestrian detection algorithm based on deep learning
CN108040244A (zh) Snapshot method and device based on a light-field video stream, and storage medium
CN111726543B (zh) Method and camera for increasing image dynamic range
CN112183158B (zh) Grain type recognition method for a grain cooking appliance, and grain cooking appliance
CN111833376B (zh) Target tracking system and method
CN112884805A (zh) Light-field imaging method with cross-scale adaptive mapping
WO2025206438A1 (fr) Image processing device and method for analyzing a hazardous object on a road
WO2023219466A1 (fr) Methods and systems for enhancing a low-light frame in a multi-camera system
CN117475421A (zh) License plate exposure compensation method and apparatus, and electronic device
KR20250145873A (ko) Image processing device and method for road hazard analysis
US11631183B2 (en) Method and system for motion segmentation
CN113808117B (zh) Lamp detection method, apparatus, device, and storage medium
CN113947602B (zh) Image brightness detection method and apparatus
JP7475830B2 (ja) Imaging control device and imaging control method
JP6525723B2 (ja) Imaging device, control method therefor, program, and storage medium
CN119653246B (zh) Exposure parameter adjustment method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24931402

Country of ref document: EP

Kind code of ref document: A1