Reframing Offline Reinforcement Learning as a Regression Problem

Date
2024-01-21
Authors
Koirala, Prajwal
Fleming, Cody
Publisher
arXiv
Abstract
The study proposes reformulating offline reinforcement learning as a regression problem that can be solved with decision trees. The agent predicts actions from input states, return-to-go (RTG), and timestep information; with gradient-boosted trees, both training and inference are very fast, with training taking less than a minute. Despite the simplification inherent in this reformulated problem, the agent demonstrates performance that is at least on par with established methods, which we validate on standard datasets associated with the D4RL Gym-MuJoCo tasks. We further discuss the agent's ability to generalize by testing it on two extreme cases: how it learns to model the return distributions effectively even with highly skewed expert datasets, and how it exhibits robust performance in scenarios with sparse/delayed rewards.
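To make the regression formulation described in the abstract concrete, the sketch below regresses actions from concatenated (state, return-to-go, timestep) features using gradient-boosted trees. The library choice (scikit-learn), the hyperparameters, and the synthetic placeholder data are illustrative assumptions, not the authors' exact setup; in the paper the data would come from D4RL Gym-MuJoCo datasets.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

# Placeholder offline dataset; in practice these arrays would be extracted
# from a D4RL Gym-MuJoCo dataset (states, actions, returns-to-go, timesteps).
n_transitions, state_dim, action_dim = 5000, 17, 6
states = rng.normal(size=(n_transitions, state_dim))
actions = rng.uniform(-1.0, 1.0, size=(n_transitions, action_dim))
returns_to_go = rng.uniform(0.0, 1000.0, size=(n_transitions, 1))
timesteps = rng.integers(0, 1000, size=(n_transitions, 1)).astype(float)

# Inputs: state concatenated with RTG and timestep; regression targets: actions.
X = np.hstack([states, returns_to_go, timesteps])
y = actions

# One gradient-boosted regressor per action dimension.
model = MultiOutputRegressor(
    GradientBoostingRegressor(n_estimators=100, max_depth=3, learning_rate=0.1)
)
model.fit(X, y)

# Inference: condition on the current state, a target return-to-go, and timestep.
query = np.hstack([states[:1], [[800.0]], [[0.0]]])
predicted_action = model.predict(query)
print(predicted_action.shape)  # (1, action_dim)

At test time the agent is conditioned on a desired return-to-go, so the same fitted model can be queried with different target returns to trade off conservatism against performance.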
Type
Preprint
Comments
This is a preprint of Koirala, Prajwal, and Cody Fleming. "Reframing Offline Reinforcement Learning as a Regression Problem." arXiv preprint arXiv:2401.11630 (2024). doi: https://doi.org/10.48550/arXiv.2401.11630. © January 2024 P. Koirala & C. Fleming.
Rights Statement
Copyright