Citation

BibTeX format

@inproceedings{Thomas:2024,
author = {Thomas, D and Jiang, J and Kori, A and Russo, A and Winkler, S and Sale, S and McMillan, J and Belardinelli, F and Rago, A},
publisher = {ACM},
title = {Explainable reinforcement learning for Formula One race strategy},
url = {http://hdl.handle.net/10044/1/116251},
year = {2024}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB  - In Formula One, teams compete to develop their cars to achieve the highest possible finishing position in each race. During a race, however, teams are unable to alter the car, so they must improve their cars’ finishing positions via race strategy, i.e. optimising their selection of which tyre compounds to put on the car and when to do so. In this work, we introduce a reinforcement learning model, RSRL (Race Strategy Reinforcement Learning), to control race strategies in simulations, offering a faster alternative to the industry standard of hard-coded and Monte Carlo-based race strategies. Controlling cars with a pace equating to an expected finishing position of P5.5 (where P1 represents first place and P20 is last place), RSRL achieves an average finishing position of P5.33 on our test race, the 2023 Bahrain Grand Prix, outperforming the best baseline of P5.63. We then demonstrate, in a generalisability study, how performance for one track or multiple tracks can be prioritised via training. Further, we supplement model predictions with feature importance, decision tree-based surrogate models, and decision tree counterfactuals towards improving user trust in the model. Finally, we provide illustrations which exemplify our approach in real-world situations, drawing parallels between simulations and reality.
AU  - Thomas, D
AU  - Jiang, J
AU  - Kori, A
AU  - Russo, A
AU  - Winkler, S
AU  - Sale, S
AU  - McMillan, J
AU  - Belardinelli, F
AU  - Rago, A
PB  - ACM
PY  - 2024///
TI  - Explainable reinforcement learning for Formula One race strategy
UR  - http://hdl.handle.net/10044/1/116251
ER -