Online Learning for Energy Efficient Navigation in Stochastic Transport Networks
Licentiate thesis, 2021

Reducing the dependence on fossil fuels in the transport sector is crucial for having a realistic chance of halting climate change. The automotive industry is therefore transitioning towards an electrified future at an unprecedented pace. However, for electric vehicles to be an attractive alternative to conventional vehicles, some issues, like range anxiety, need to be mitigated. One way to address these problems is to develop more accurate and robust navigation systems for electric vehicles. Furthermore, with highly stochastic and changing traffic conditions, it is useful to continuously update prior knowledge about the traffic environment by gathering data. Passively collecting energy consumption data from vehicles in the traffic network may yield insufficient information about parts of the network where few vehicles travel. Hence, in this thesis, we study the possibility of adapting the routes presented by the navigation system to adequately explore the road network and properly learn the underlying energy model.
The first part of the thesis introduces an online machine learning framework for navigation of electric vehicles, with the objective of adaptively and efficiently navigating the vehicle in a stochastic traffic environment. We assume that the road-specific probability distributions of vehicle energy consumption are unknown, and thus we need to learn their parameters through observations. Furthermore, we take a Bayesian approach and assign prior beliefs to the parameters based on longitudinal vehicle dynamics. We view the task as a combinatorial multi-armed bandit problem and utilize Bayesian bandit algorithms, such as Thompson Sampling, to address it. We establish theoretical performance guarantees for Thompson Sampling, in the form of upper bounds on the Bayesian regret, for single-agent, multi-agent, and batched feedback variants of the problem. To demonstrate the effectiveness of the framework, we perform simulation experiments on various real-life road networks.
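The Thompson Sampling loop for this kind of online shortest path problem can be sketched as follows. This is a minimal illustration, not the thesis implementation: the toy network, the Gaussian priors, and the known observation noise are all simplifying assumptions made here for brevity.

```python
import heapq
import random
from collections import defaultdict

# Toy road network (illustrative): directed edges with true, unknown mean energy costs.
EDGES = {("s", "a"): 1.0, ("s", "b"): 1.2, ("a", "t"): 1.0, ("b", "t"): 0.6}

# Conjugate Gaussian model per edge: normal prior on the mean, known observation noise.
NOISE_VAR = 0.04
prior_mean = {e: 1.0 for e in EDGES}   # prior belief (e.g. from vehicle dynamics)
prior_var = {e: 0.25 for e in EDGES}

def dijkstra(weights, source="s", target="t"):
    """Return the cheapest source-target path (as a list of edges) under `weights`."""
    adj = defaultdict(list)
    for (u, v) in weights:
        adj[u].append(v)
    dist, prev = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in adj[u]:
            nd = d + weights[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], target
    while node != source:
        path.append((prev[node], node))
        node = prev[node]
    return list(reversed(path))

random.seed(0)
for _ in range(200):
    # 1. Sample a plausible mean cost per edge from its posterior (clamped positive).
    sampled = {e: max(0.01, random.gauss(prior_mean[e], prior_var[e] ** 0.5))
               for e in EDGES}
    # 2. Act greedily with respect to the sample: solve the shortest path problem.
    path = dijkstra(sampled)
    # 3. Semi-bandit feedback: observe noisy consumption on every traversed edge.
    for e in path:
        obs = random.gauss(EDGES[e], NOISE_VAR ** 0.5)
        # 4. Conjugate Gaussian posterior update for that edge.
        precision = 1.0 / prior_var[e] + 1.0 / NOISE_VAR
        prior_mean[e] = (prior_mean[e] / prior_var[e] + obs / NOISE_VAR) / precision
        prior_var[e] = 1.0 / precision

print(dijkstra(prior_mean))  # route recommended after 200 learning rounds
```

Sampling from the posterior (rather than using the posterior mean directly) is what drives exploration: edges with high remaining uncertainty occasionally look cheap and get traversed, shrinking their posterior variance.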
In the second part of the thesis, we extend the online learning framework to find paths that minimize or avoid bottlenecks. Solutions to the online minimax path problem represent risk-averse behavior, since they avoid road segments with high cost variance. We derive upper bounds on the Bayesian regret of Thompson Sampling adapted to this problem, by carefully handling the non-linear path cost function. We identify computational tractability issues with the original problem formulation, and propose an alternative approximate objective with an associated algorithm based on Thompson Sampling. Finally, we conduct several experimental studies to evaluate the performance of the approximate algorithm.
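The minimax path objective is non-linear: a path's cost is its largest edge cost rather than the sum. The inner optimization can still be solved exactly by replacing addition with max in Dijkstra's relaxation, which is the oracle a Thompson Sampling loop like the one above would call. The sketch below is illustrative; the demo graph is an assumption, not taken from the thesis.

```python
import heapq
from collections import defaultdict

def minimax_path_value(weights, source="s", target="t"):
    """Bottleneck variant of Dijkstra: the cost of a path is its LARGEST
    edge weight, so the relaxation uses max() instead of addition."""
    adj = defaultdict(list)
    for (u, v), w in weights.items():
        adj[u].append((v, w))
    best = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > best.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = max(d, w)  # path cost = worst edge seen so far
            if nd < best.get(v, float("inf")):
                best[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Two routes from s to t: via a (worst edge 3.0) or via b (worst edge 2.0).
DEMO = {("s", "a"): 3.0, ("a", "t"): 1.0, ("s", "b"): 2.0, ("b", "t"): 2.0}
print(minimax_path_value(DEMO))  # 2.0
```

Note that the total cost via b (4.0) exceeds the total via a (also 4.0 here), yet b wins under the minimax objective because its worst segment is milder, which is exactly the risk-averse behavior described above.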

Online Shortest Path Problem

Energy Efficient Navigation

Online Learning

Combinatorial Semi-Bandits

Machine Learning

Thompson Sampling

Online Minimax Path Problem

Multi-Armed Bandits

Room 8103, EDIT Building, Rännvägen 6. Zoom (password request: caremil@chalmers.se)
Opponent: Prof. Joakim Jaldén, Department of Intelligent Systems, KTH Royal Institute of Technology

Author

Niklas Åkerblom

Data Science and AI

Åkerblom, N., Chen, Y., Chehreghani, M. H. Online Learning of Energy Consumption for Navigation of Electric Vehicles

Åkerblom, N., Hoseini, F. S., Chehreghani, M. H. Online Learning of Network Bottlenecks via Minimax Paths

EENE: Energieffektiv Navigering för Elfordon (Energy Efficient Navigation for Electric Vehicles)

FFI - Fordonsstrategisk forskning och innovation (Strategic Vehicle Research and Innovation) (2018-01937), 2019-01-01 -- 2022-12-31.

Subject categories

Other Computer and Information Science

Computer Science

Computer Vision and Robotics (Autonomous Systems)

Areas of Advance

Information and Communication Technology

Transport

Energy

Publisher

Chalmers tekniska högskola


More information

Last updated

2021-11-17