On-demand Memory Compression of Stream Aggregates through Reinforcement Learning
Paper in proceedings, 2025
Stream Aggregates are crucial in digital infrastructures for transforming continuous data streams into actionable insights. However, state-of-the-art Stream Processing Engines lack mechanisms to effectively balance performance with memory consumption, a capability that is especially crucial in environments with fluctuating computational resources and data-intensive workloads. This paper tackles this gap by introducing a novel on-demand adaptive memory compression scheme for stream Aggregates. Our approach uses Reinforcement Learning (RL) to dynamically adapt how a stream Aggregate compresses its state, balancing performance and memory utilization under a given processing latency threshold. We develop a model that incorporates the application- and data-specific nuances of stream Aggregates and create a framework to train RL Agents to adjust memory compression levels in real time. Additionally, we shed light on a trade-off between the timeliness of an RL Agent's training and its resulting behavior, defining several policies to account for this trade-off. Through extensive evaluation, we show that the proposed RL Agent effectively supports on-demand memory compression. We also study the effects of our policies, providing guidance on their role in RL applied to stream Aggregates, and show that our framework supports lean execution of such RL jobs.
reinforcement learning
stream aggregates
memory compression
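To make the formulation in the abstract concrete, the following is a minimal, self-contained sketch of the kind of RL setup it describes: an agent picks a compression level for an Aggregate's state, and the reward trades memory footprint against a processing latency threshold. This is not the paper's framework; the environment (`ToyAggregateEnv`), all constants, the cost model, and the choice of tabular Q-learning as the RL algorithm are illustrative assumptions.

```python
import random

# Hypothetical sketch -- names, constants, and cost model are assumptions,
# not taken from the paper.
# Action: a compression level (0 = none, 3 = heaviest).
# Reward: penalizes memory use, with a large extra penalty when processing
# latency exceeds the configured threshold, mirroring the trade-off the
# abstract describes.

COMPRESSION_LEVELS = 4
LATENCY_THRESHOLD_MS = 50.0  # assumed processing latency budget

class ToyAggregateEnv:
    """Toy stand-in for a stream Aggregate whose state can be compressed."""

    def __init__(self):
        self.base_memory_mb = 100.0
        self.base_latency_ms = 10.0

    def step(self, level):
        # Heavier compression shrinks the state but adds (de)compression cost.
        memory = self.base_memory_mb / (1 + level) + random.uniform(-5, 5)
        latency = self.base_latency_ms * (1 + 1.5 * level) + random.uniform(-2, 2)
        reward = -memory / 100.0
        if latency > LATENCY_THRESHOLD_MS:
            reward -= 10.0  # hard penalty for violating the latency threshold
        # Discretize observations into coarse buckets for the tabular agent.
        state = (min(int(memory // 25), 5), min(int(latency // 10), 5))
        return state, reward

def train(steps=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning over the toy environment (continuing task)."""
    env = ToyAggregateEnv()
    q = {}  # (state, action) -> estimated value
    state, _ = env.step(0)
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.randrange(COMPRESSION_LEVELS)
        else:
            action = max(range(COMPRESSION_LEVELS),
                         key=lambda a: q.get((state, a), 0.0))
        next_state, reward = env.step(action)
        best_next = max(q.get((next_state, a), 0.0)
                        for a in range(COMPRESSION_LEVELS))
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = next_state
    return q

if __name__ == "__main__":
    q_table = train()
    print(f"learned {len(q_table)} state-action values")
```

With these assumed costs, the heaviest compression level breaches the latency budget, so the agent should converge toward the strongest level that still respects the threshold, which is the on-demand balance between memory and performance the abstract refers to.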