Explainable Artificial Intelligence-Guided Optimization of ML-Based Traffic Prediction
Poster (conference), 2024
Traffic prediction is an evergreen research topic in networking, with modern allocation algorithms often utilizing forecasts for optimized decisions. However, the employed machine learning (ML) models are usually operated as black boxes, without any insight into their internal operations. Such an approach creates a risk of using excessive input features and unnecessarily expanding model complexity. In this work, we extract insights into the operation of traffic prediction models using explainable artificial intelligence (XAI) tools. We explore the impact of literature-proposed features across various traffic types, sampling rates, and ML algorithms. We identify common trends and dependencies regarding the most relevant features as a function of traffic fluctuation level and aggregation type. We show that only a subset of inputs contributes meaningfully to the final model decision, in contrast to the conventional approach of analyzing only the resulting prediction quality after adding new features. We demonstrate that training and inference times can be significantly reduced by exploiting the obtained knowledge, without degrading prediction quality or bandwidth blocking.
Explainable Artificial Intelligence
Machine Learning
Traffic Prediction
Feature Selection
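
As an illustration of the XAI-guided feature-selection workflow outlined in the abstract, the sketch below trains a traffic predictor on a full feature set, ranks the inputs by explanation-based importance, and retrains on the reduced subset. The choice of SHAP as the XAI tool, the gradient-boosted regressor, and the synthetic lagged-traffic features are illustrative assumptions, not the exact setup used in the poster.

```python
# Minimal sketch: XAI-guided feature pruning for a traffic prediction model.
# SHAP, GradientBoostingRegressor, and the synthetic data are assumptions
# for illustration; the poster does not specify these choices.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "traffic" dataset: lagged samples plus hand-crafted features,
# only some of which are actually informative for the target.
n = 2000
feature_names = ["lag_1", "lag_2", "lag_24", "rolling_mean", "hour", "noise_a", "noise_b"]
X = rng.normal(size=(n, len(feature_names)))
y = 0.6 * X[:, 0] + 0.3 * X[:, 2] + 0.2 * X[:, 3] + 0.05 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# 1) Train a black-box predictor on the full feature set.
full_model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# 2) Explain it: mean |SHAP value| per feature approximates how much each
#    input contributes to the model's decisions.
explainer = shap.TreeExplainer(full_model)
shap_values = explainer.shap_values(X_te)
importance = np.abs(shap_values).mean(axis=0)
ranking = np.argsort(importance)[::-1]

# 3) Keep only the features that contribute meaningfully and retrain a
#    smaller, faster model on the reduced input set.
top_k = 3
keep = ranking[:top_k]
pruned_model = GradientBoostingRegressor(random_state=0).fit(X_tr[:, keep], y_tr)

print("feature ranking: ", [feature_names[i] for i in ranking])
print("full-model MAE:  ", mean_absolute_error(y_te, full_model.predict(X_te)))
print("pruned-model MAE:", mean_absolute_error(y_te, pruned_model.predict(X_te[:, keep])))
```

In this hypothetical setup, the pruned model matches the full model's error while using fewer inputs, mirroring the abstract's claim that training and inference costs can be cut without degrading prediction quality.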