CVF: Cross-Video Filtration on the Edge
Paper in proceedings, 2024

Many edge applications rely on expensive Deep Neural Network (DNN) inference-based video analytics. Typically, a single instance of an inference service analyzes multiple real-time camera streams concurrently. In many cases, only a fraction of these streams contain objects of interest at a given time, so processing all frames from all cameras with the DNNs wastes computational resources. On-camera filtration of frames has been suggested as a way to improve system efficiency and reduce resource wastage. However, many cameras lack on-camera processing or filtering capabilities. In addition, filtration can be enhanced if frames across the different feeds are selected and prioritized for processing based on the system load and the available resource capacity. This paper introduces CVF, a Cross-Video Filtration framework designed around video content and resource constraints. The CVF pipeline leverages compressed-domain data from encoded video formats, lightweight binary classification models, and an efficient prioritization algorithm. This enables effective filtering of frames across cameras, so that only a fraction of frames are processed by resource-intensive DNN models. Our experiments show that CVF reduces the overall response time of video analytics pipelines by up to 50% compared to state-of-the-art solutions while increasing throughput by up to 120%.
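To make the cross-camera prioritization idea concrete, below is a minimal illustrative sketch, not the authors' actual algorithm: frames from multiple cameras are scored by some lightweight relevance signal, and only the highest-priority frames, up to a resource budget, are forwarded to the expensive DNN. All names here (FrameCandidate, select_frames, budget) are hypothetical.

# Illustrative sketch only; the CVF paper's real prioritization algorithm
# is not reproduced here. All identifiers are hypothetical.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FrameCandidate:
    neg_priority: float                    # negated score: heapq pops smallest, so highest score comes out first
    camera_id: str = field(compare=False)  # excluded from ordering
    frame_id: int = field(compare=False)

def select_frames(candidates, budget):
    # candidates: iterable of (score, camera_id, frame_id) tuples.
    # Returns the 'budget' highest-scoring frames across all cameras.
    heap = [FrameCandidate(-score, cam, fid) for score, cam, fid in candidates]
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(min(budget, len(heap)))]

# Example: scores might come from a lightweight binary classifier over
# compressed-domain features (assumed here for illustration), and the budget
# from the measured load on the shared inference service.
scored = [(0.91, "cam-1", 17), (0.12, "cam-2", 17), (0.66, "cam-3", 17)]
for f in select_frames(scored, budget=2):
    print(f.camera_id, f.frame_id, -f.neg_priority)

In this sketch, a single priority queue spans all camera feeds, which is what lets low-relevance frames from one camera yield capacity to high-relevance frames from another, in the spirit of the cross-video filtration the abstract describes.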

Video Analytics

Codecs

Edge

Video Filtration

Authors

Ali Rahmanian

Umeå universitet

Siddharth Amin

Student at Chalmers

Harald Gustafsson

Ericsson AB

Ahmed Ali-Eldin Hassan

Networks and Systems

MMSys 2024 - Proceedings of the 2024 ACM Multimedia Systems Conference

231-242
9798400704123 (ISBN)

15th ACM Multimedia Systems Conference, MMSys 2024
Bari, Italy

Subject categories

Computer Systems

DOI

10.1145/3625468.3647627

More information

Last updated

2024-07-02