EP3981163A1 - Cascaded video analytics for edge computing - Google Patents
Info
- Publication number
- EP3981163A1 (application EP20727423.4A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- processing
- devices
- edge
- cloud
- cameras
- Prior art date
- 2019-06-04
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/71—Indexing; Data structures therefor; Storage structures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/738—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5044—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/5088—Techniques for rebalancing the load in a distributed system involving task migration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/647—Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
- H04N21/64784—Data processing by the network
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/508—Monitor
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
Definitions
- the description generally relates to techniques for performing video analytics.
- One example includes a system that includes a processor and a storage memory storing computer-readable instructions, which when executed by the processor, cause the processor to receive a video query regarding a live video stream, determine resources available to the system and a defined threshold confidence value associated with the video query, select a configuration for processing the video query based at least on the determined resources, allocate processing between one or more cameras and one or more edge devices according to the selected configuration, and adjust the selected configuration to include processing among one or more cloud devices when processing results from the one or more cameras and the one or more edge devices do not meet the defined threshold confidence value.
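- The escalation step of this summary can be illustrated with a short, hedged sketch; the names, placeholder confidences, and the `run_local`/`run_cloud` stubs below are hypothetical, not the patent's implementation, and configuration selection and profiling are sketched further below.

```python
# Minimal sketch of the cascaded escalation logic, assuming placeholder detectors.
import random
from dataclasses import dataclass

@dataclass
class Result:
    label: str
    confidence: float
    where: str

def run_local(frame, target):
    # stands in for decoding + background subtraction + a lightweight DNN on camera/edge
    return Result(target, random.uniform(0.4, 0.9), "edge")

def run_cloud(frame, target):
    # stands in for the heavier, more accurate cloud-side model
    return Result(target, random.uniform(0.85, 0.99), "cloud")

def answer_query(frame, target="vehicle", threshold=0.75):
    result = run_local(frame, target)
    if result.confidence >= threshold:
        return result                   # local result meets the confidence bar; stop here
    return run_cloud(frame, target)     # otherwise escalate to cloud processing

print(answer_query(frame=None))
```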
- FIG. 5 illustrates an example method or technique that is consistent with some implementations of the present concepts.
- query processing can instead be performed locally at the edge, either by the IoT devices or at edge processing units, such as a server associated with a cluster of IoT devices.
- overall processing costs can be reduced by efficiently managing processing between both edge devices and cloud devices.
- a video analytics system can lower computational resource utilization and produce results with higher accuracy, while also avoiding potential downfalls of a cloud-only system, such as network unavailability or downtime.
- in some instances there may be no need to pass information on to edge processing module 114. For example, if a given video analytics query is only concerned with detecting any possible movement in frames, the threshold confidence value can be set low, and the results from background subtraction module 112 may be sufficient to achieve these goals, thereby obviating the need to involve any additional processing up the pipeline.
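- A hedged illustration of such a background-subtraction gate follows, assuming OpenCV is available; the fraction-of-changed-pixels heuristic and its threshold are assumptions for illustration, not the patent's confidence measure.

```python
# Drop frames with no apparent motion before any DNN work, using OpenCV's MOG2 subtractor.
import cv2

def motion_gate(video_path, min_changed_fraction=0.01):
    """Yield only frames whose foreground mask suggests movement."""
    subtractor = cv2.createBackgroundSubtractorMOG2()
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                 # foreground mask for this frame
        changed = cv2.countNonZero(mask) / mask.size   # fraction of pixels that changed
        if changed >= min_changed_fraction:
            yield frame, changed                       # enough change: pass frame up the pipeline
        # otherwise the frame is dropped here, avoiding any further processing
    capture.release()
```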
- edge processing module 114 may invoke a lightweight DNN model, such as tiny Yolo, to confirm that an object of interest pertaining to the query (e.g., a vehicle) is located within the frame. If edge processing module 114 does not determine a result within a threshold confidence value, then the pipeline can invoke cloud processing module 116 on cloud device 106. Cloud processing module 116 can invoke a "heavy" model (i.e., a computationally expensive model which may be more expensive than the lightweight model), such as full YoloV3, which can provide a greater amount of accuracy in object detection.
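- The two-tier detector cascade can be sketched as follows; `light_detect` and `heavy_detect` are hypothetical callables standing in for a tiny Yolo model at the edge and a full YoloV3 model in the cloud, and the detection dictionaries are an assumed format.

```python
# Escalate from the lightweight edge model to the heavy cloud model only when needed.
def cascaded_detect(frame, target_class, threshold, light_detect, heavy_detect):
    def best_of(detections):
        matches = [d for d in detections if d["class"] == target_class]
        return max(matches, key=lambda d: d["score"], default=None)

    best = best_of(light_detect(frame))            # runs on the edge device
    if best is not None and best["score"] >= threshold:
        return best, "edge"                        # confident enough locally; no WAN traffic
    return best_of(heavy_detect(frame)), "cloud"   # escalate to the heavier cloud model
```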
- the various processing performed by decoding module 110, background subtraction module 112, and edge processing module 114 can be viewed as local processing 118, as the processing can all be performed locally, distributed between smart cameras 102 and edge device 104. Moreover, in certain instances, the processing may be performed solely by smart cameras 102, or solely by edge device 104, depending on potential unavailability of any of the devices. As such, a lightweight DNN model could potentially be run on smart cameras 102 in the event that edge device 104 is unavailable. If the results of local processing 118 do not meet the threshold confidence value, then data can be sent to cloud device 106 for processing through, for example, WAN 108, but the pipeline may seek to rely on local processing results as much as possible.
- FIGS. 2A-2D depict an example scenario of processing video stream data according to the pipeline depicted in FIG. 1.
- frame data 202 is depicted as resulting from processing of a live video stream by decoding module 110.
- Frame data 202 depicts a roadway having a number of objects within the field of vision, such as vehicles 204A and 204B, and an oil spill 206.
- the query involved seeks to identify moving vehicles in the field of view, with a threshold confidence value of 75%.
- Edge processing module 114 may invoke, for example, a lightweight DNN model on the frame data, resulting in processed frame 212 depicted in FIG. 2C.
- the lightweight DNN model correctly excluded oil spill 206, but had difficulty determining that two vehicles were moving, as the lightweight DNN model grouped both vehicles into a single detected change 214.
- the lightweight processing module may not meet the 75% threshold confidence value for a number of reasons discussed in further detail with regard to FIG. 3, such as where the frame data resolution was too low due to a selected processing configuration.
- the pipeline may turn to cloud processing by invoking cloud processing module 116.
- profiler 304 can perform resource accuracy profiling, which can estimate the total resource requirements of the query and can take into account the threshold confidence value.
- profiler 304 may select from a plurality of different resource configurations that are to be utilized for the video analytics. These configurations can represent adjustable attributes or settings that are applied to the analytical pipeline, which can impact query accuracy and resource demands.
- the configurations can be multi-dimensional and can include choices such as frame resolution, frame rate, and what DNN model to use (i.e., either the lightweight model or heavy model, or in some instances, both models). While configurations such as higher resolution or higher frame rate can improve detection, these configurations can also overburden available resources or bandwidth capabilities.
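- A hedged sketch of how such a multi-dimensional configuration might be chosen follows; the configuration table, accuracy estimates, and the (cores, bandwidth) resource model are illustrative assumptions rather than values from the patent.

```python
# Pick the profiled configuration with the best expected accuracy that still fits the resources.
CONFIGS = [
    {"res": "1080p", "fps": 30, "model": "heavy", "acc": 0.95, "cores": 8, "mbps": 8.0},
    {"res": "720p",  "fps": 15, "model": "light", "acc": 0.82, "cores": 2, "mbps": 2.0},
    {"res": "480p",  "fps": 5,  "model": "light", "acc": 0.70, "cores": 1, "mbps": 0.5},
]

def select_configuration(avail_cores, avail_mbps, threshold):
    feasible = [c for c in CONFIGS
                if c["cores"] <= avail_cores and c["mbps"] <= avail_mbps]
    # prefer configurations expected to meet the threshold confidence value, if any do
    preferred = [c for c in feasible if c["acc"] >= threshold] or feasible
    return max(preferred, key=lambda c: c["acc"], default=None)

print(select_configuration(avail_cores=2, avail_mbps=3.0, threshold=0.75))
```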
- profiler 304 can access a database of video processing configurations, which can then be evaluated against resources available between the edge devices and the cloud devices to produce a resource quality dataset 306, depicted in FIG. 3 in a graph form.
- Resource quality dataset 306 can be developed by, for example, recording a small amount of video at the given configuration. The recorded video can then be tested against the resource capabilities of the devices using the various data processing models, such as lightweight models and heavy models, to determine appropriate processing times and resource consumption. Based on this testing, a number of data plots can be established that define a certain accuracy level based on the configuration, such as frame resolution, frame rate, bandwidth rate, and/or processing cores available to a given device. Furthermore, the testing can be repeated based on changing conditions, such as network availability or bandwidth, to ensure that a new query can be handled in the most efficient manner.
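- One plausible way to populate such a dataset is to time each model over a short recorded clip, as in the sketch below; the timing-only measurement is an assumption, and accuracy scoring against labeled frames is omitted for brevity.

```python
# Measure per-frame processing cost of a model/configuration over a short recorded clip.
import time

def profile_configuration(frames, model, name):
    start = time.perf_counter()
    for frame in frames:
        model(frame)                                   # run the candidate model on each frame
    elapsed = time.perf_counter() - start
    return {"config": name,
            "seconds_per_frame": elapsed / max(len(frames), 1)}

# toy usage with a dummy "model" that just burns some CPU
print(profile_configuration(frames=list(range(30)),
                            model=lambda f: sum(range(50000)),
                            name="480p / lightweight"))
```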
- the system may perform periodic polling of resource availability between the edge and cloud devices. While the initial configuration may attempt to achieve a maximized accuracy, changing system and network conditions can affect the ability to achieve this efficient processing. Therefore, a periodic polling loop may operate, whereby the various conditions associated with the devices are checked, and when resource availability has changed, the allocation of processing between the edge and cloud devices can be modified to reflect the change in resources.
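- The polling loop might look like the following sketch; `poll_resources` and `reallocate` are hypothetical hooks, and the 30-second interval is an arbitrary assumption.

```python
# Periodically re-check resource availability and rebalance edge/cloud allocation on change.
import time

def resource_monitor(poll_resources, reallocate, interval_seconds=30):
    last = poll_resources()
    while True:
        time.sleep(interval_seconds)
        current = poll_resources()
        if current != last:           # availability (cores, bandwidth, devices) has changed
            reallocate(current)       # shift processing between edge and cloud accordingly
            last = current
```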
- Objects can be clustered based on feature vectors into object clusters, and a top-K index can be created which maps each class to a set of object clusters.
- the top-K ingest index provides a mapping between object classes and the clusters. Then, at query time, such as when a user queries for a certain class X, matching clusters can be retrieved from the top-K index, and the centroids of the clusters are run through a ground truth CNN model to filter out potential frames that do not contain object class X.
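- A simplified, hedged sketch of such an ingest index follows, assuming scikit-learn is available; the use of k-means, the feature vectors, and the `ground_truth_model` callable are illustrative assumptions, and retrieval of individual frames after centroid filtering is omitted.

```python
# Build a class -> clusters index at ingest time; re-check only centroids at query time.
from collections import defaultdict
import numpy as np
from sklearn.cluster import KMeans

def build_index(features, labels, k=8):
    features = np.asarray(features)
    kmeans = KMeans(n_clusters=k, n_init=10).fit(features)
    index = defaultdict(set)
    for label, cluster in zip(labels, kmeans.labels_):
        index[label].add(int(cluster))                 # each class maps to clusters containing it
    return kmeans, index

def query_index(kmeans, index, target_class, ground_truth_model):
    candidate_clusters = sorted(index.get(target_class, set()))
    # run only the centroid of each candidate cluster through the expensive model
    return [c for c in candidate_clusters
            if ground_truth_model(kmeans.cluster_centers_[c]) == target_class]
```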
- One example includes a system comprising a processor and a storage memory storing computer-readable instructions, which when executed by the processor, cause the processor to: receive a video query regarding a live video stream, determine resources available to the system and a defined threshold confidence value associated with the video query, select a configuration for processing the video query based at least on the determined resources, allocate processing between one or more cameras and one or more edge devices according to the selected configuration, and adjust the selected configuration to include processing among one or more cloud devices when processing results from the one or more cameras and the one or more edge devices do not meet the defined threshold confidence value.
- Another example can include any of the above and/or below examples where the method further comprises dynamically modifying the selected configuration upon determining that the resource availability has changed.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Signal Processing (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Library & Information Science (AREA)
- General Health & Medical Sciences (AREA)
- Computer Networks & Wireless Communication (AREA)
- Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Security & Cryptography (AREA)
- Medical Informatics (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/431,305 US20200387539A1 (en) | 2019-06-04 | 2019-06-04 | Cascaded video analytics for edge computing |
| PCT/US2020/029424 WO2020247101A1 (en) | 2019-06-04 | 2020-04-23 | Cascaded video analytics for edge computing |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP3981163A1 (en) | 2022-04-13 |
Family
ID=70779855
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP20727423.4A Ceased EP3981163A1 (en) | 2019-06-04 | 2020-04-23 | Cascaded video analytics for edge computing |
Country Status (3)
| Country | Link |
|---|---|
| US (2) | US20200387539A1 (en) |
| EP (1) | EP3981163A1 (en) |
| WO (1) | WO2020247101A1 (en) |
Families Citing this family (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11551099B1 (en) * | 2020-06-27 | 2023-01-10 | Unicorn Labs Llc | Smart sensor |
| US11461591B2 (en) | 2020-12-16 | 2022-10-04 | Microsoft Technology Licensing, Llc | Allocating computing resources during continuous retraining |
| US20220255988A1 (en) | 2021-02-05 | 2022-08-11 | Veea Inc. | Systems and Methods for Collaborative Edge Computing |
| WO2022170156A1 (en) * | 2021-02-05 | 2022-08-11 | Salmasi Allen | Systems and methods for collaborative edge computing |
| CN112799823B (en) * | 2021-03-31 | 2021-07-23 | 中国人民解放军国防科技大学 | Online dispatching and scheduling method and system for edge computing tasks |
| WO2023036436A1 (en) * | 2021-09-10 | 2023-03-16 | Nokia Technologies Oy | Apparatus, methods, and computer programs |
| US20230079908A1 (en) * | 2021-09-16 | 2023-03-16 | Dell Products, L.P. | Distributed Fault Detection |
| US11503101B1 (en) * | 2021-12-15 | 2022-11-15 | Motorola Solutions, Inc. | Device and method for assigning video analytics tasks to computing devices |
| CN116567363A (en) * | 2022-01-27 | 2023-08-08 | 中国移动通信有限公司研究院 | Video analysis method and device |
| US12010035B2 (en) * | 2022-05-12 | 2024-06-11 | At&T Intellectual Property I, L.P. | Apparatuses and methods for facilitating an identification and scheduling of resources for reduced capability devices |
| CN114972550B (en) * | 2022-06-16 | 2023-03-24 | 慧之安信息技术股份有限公司 | Edge computing method for real-time video stream analysis |
| CN115695332B (en) * | 2022-09-07 | 2025-10-21 | 天翼视联科技有限公司 | Camera operation resource allocation method, device, electronic device and storage medium |
| CN115620188B (en) * | 2022-09-16 | 2025-09-12 | 浪潮通信技术有限公司 | Video inference method, device, electronic device and storage medium |
| CN115761865A (en) * | 2022-12-08 | 2023-03-07 | 中电信数智科技有限公司 | Face recognition method and system for edge equipment |
| CN116866352B (en) * | 2023-08-31 | 2023-11-14 | 清华大学 | Cloud-edge-coordinated intelligent camera system |
Family Cites Families (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6381628B1 (en) * | 1998-10-02 | 2002-04-30 | Microsoft Corporation | Summarized application profiling and quick network profiling |
| US20030046396A1 (en) * | 2000-03-03 | 2003-03-06 | Richter Roger K. | Systems and methods for managing resource utilization in information management environments |
| US20110016214A1 (en) * | 2009-07-15 | 2011-01-20 | Cluster Resources, Inc. | System and method of brokering cloud computing resources |
| US8352868B2 (en) * | 2008-06-27 | 2013-01-08 | Google Inc. | Computing with local and remote resources including user mode control |
| US20130086590A1 (en) * | 2011-09-30 | 2013-04-04 | John Mark Morris | Managing capacity of computing environments and systems that include a database |
| US20140068621A1 (en) * | 2012-08-30 | 2014-03-06 | Sriram Sitaraman | Dynamic storage-aware job scheduling |
| US20140095695A1 (en) * | 2012-09-28 | 2014-04-03 | Ren Wang | Cloud aware computing distribution to improve performance and energy for mobile devices |
| US10216549B2 (en) * | 2013-06-17 | 2019-02-26 | Seven Networks, Llc | Methods and systems for providing application programming interfaces and application programming interface extensions to third party applications for optimizing and minimizing application traffic |
| US20180374022A1 (en) * | 2017-06-26 | 2018-12-27 | Midea Group Co., Ltd. | Methods and systems for improved quality inspection |
| US11657316B2 (en) * | 2017-07-10 | 2023-05-23 | General Electric Company | Self-feeding deep learning method and system |
| US20200280761A1 (en) * | 2019-03-01 | 2020-09-03 | Pelco, Inc. | Automated measurement of end-to-end latency of video streams |
- 2019
- 2019-06-04 US US16/431,305 patent/US20200387539A1/en not_active Abandoned
- 2020
- 2020-04-23 EP EP20727423.4A patent/EP3981163A1/en not_active Ceased
- 2020-04-23 WO PCT/US2020/029424 patent/WO2020247101A1/en not_active Ceased
- 2023
- 2023-12-12 US US18/537,291 patent/US20240119089A1/en active Pending
Non-Patent Citations (4)
| Title |
|---|
| KIRYONG HA ET AL: "Towards wearable cognitive assistance", MOBILE SYSTEMS, APPLICATIONS, AND SERVICES, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 2 June 2014 (2014-06-02), pages 68 - 81, XP058049620, ISBN: 978-1-4503-2793-0, DOI: 10.1145/2594368.2594383 * |
| LU ZONGQING ET AL: "CrowdVision: A Computing Platform for Video Crowdprocessing Using Deep Learning", IEEE TRANSACTIONS ON MOBILE COMPUTING, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 18, no. 7, 31 May 2019 (2019-05-31), pages 1513 - 1526, XP011728651, ISSN: 1536-1233, [retrieved on 20190603], DOI: 10.1109/TMC.2018.2864212 * |
| See also references of WO2020247101A1 * |
| XUKAN RAN ET AL: "Delivering Deep Learning to Mobile Devices via Offloading", VIRTUAL REALITY AND AUGMENTED REALITY NETWORK, ACM, 2 PENN PLAZA, SUITE 701NEW YORKNY10121-0701USA, 11 August 2017 (2017-08-11), pages 42 - 47, XP058370558, ISBN: 978-1-4503-5055-6, DOI: 10.1145/3097895.3097903 * |
Also Published As
| Publication number | Publication date |
|---|---|
| US20240119089A1 (en) | 2024-04-11 |
| WO2020247101A1 (en) | 2020-12-10 |
| US20200387539A1 (en) | 2020-12-10 |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| US20240119089A1 (en) | Cascaded video analytics for edge computing | |
| Ananthanarayanan et al. | Real-time video analytics: The killer app for edge computing | |
| Li et al. | Reducto: On-camera filtering for resource-efficient real-time video analytics | |
| US11064162B2 (en) | Intelligent video analysis system and method | |
| US10776665B2 (en) | Systems and methods for object detection | |
| AU2020278660B2 (en) | Neural network and classifier selection systems and methods | |
| US20210149731A1 (en) | Atomic Pool Manager | |
| US20200151482A1 (en) | Client terminal for performing hybrid machine vision and method thereof | |
| AU2021269911B2 (en) | Optimized deployment of analytic models in an edge topology | |
| KR20170068312A (en) | Image analysis system and integrated control system capable of performing effective image searching/analyzing and operating method thereof | |
| CN111737371B (en) | Data flow detection classification method and device capable of dynamically predicting | |
| US20220138893A9 (en) | Distributed image analysis method and system, and storage medium | |
| CN112925741B (en) | Heterogeneous computing method and system | |
| Wang et al. | Gecko: Resource-efficient and accurate queries in real-time video streams at the edge | |
| EP4471718B1 (en) | Automatic efficient small model selection for monocular depth estimation | |
| Constantinou et al. | A crowd-based image learning framework using edge computing for smart city applications | |
| Liu et al. | Criticality-based data segmentation and resource allocation in machine inference pipelines | |
| CN119847713A (en) | Parallel computing system based on multi-mode perception fusion and computing unit partition | |
| CN119006986A (en) | Real-time video stream analysis method based on edge calculation | |
| Wolfrath et al. | Leveraging multi-modal data for efficient edge inference serving | |
| US20240135705A1 (en) | Utilizing machine learning models to classify vehicle trajectories and collect road use data in real-time | |
| Liang et al. | SplitStream: Distributed and workload-adaptive video analytics at the edge | |
| Kim et al. | Resource-Efficient Design and Implementation of Real-Time Parking Monitoring System with Edge Device | |
| Czyżewski et al. | Massive surveillance data processing with supercomputing cluster | |
| Li et al. | E2EC: Edge-to-Edge Collaboration for Efficient Real-Time Video Surveillance Inference |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
| 17P | Request for examination filed |
Effective date: 20211111 |
|
| AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
| DAV | Request for validation of the european patent (deleted) | ||
| DAX | Request for extension of the european patent (deleted) | ||
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
| 17Q | First examination report despatched |
Effective date: 20230920 |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R003 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
| 18R | Application refused |
Effective date: 20250423 |