Addressing stale gradients in asynchronous federated deep reinforcement learning

Date: 2023-08
Author: Stanley, Justin
Major Professor: Jannesari, Ali
Committee Members: Quinn, Christopher; Tian, Jin; Huai, Mengdi
Department: Computer Science
Abstract
Advancements in reinforcement learning (RL) via deep neural networks have enabled its application to a variety of real-world problems. However, these applications often suffer from long training times. While attempts to distribute training have succeeded in controlled settings, they face challenges in heterogeneous-capacity, unstable, and privacy-critical environments. This work applies concepts from federated learning (FL) to distributed RL, specifically addressing the stale gradient problem. A deterministic framework for asynchronous federated RL is used to explore dynamic methods for handling stale gradient updates in the Arcade Learning Environment. Experimental results from applying these methods to two Atari 2600 games demonstrate a relative speedup of up to 95% compared to plain A3C in large and unstable federations.
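The stale-gradient handling summarized in the abstract can be illustrated with a generic staleness-discounted update rule, a common baseline in asynchronous SGD: a gradient computed against an old copy of the global model is down-weighted in proportion to how many global updates it lags behind. This is only a minimal sketch; the function names and the polynomial-decay exponent `alpha` are illustrative assumptions, not the specific dynamic methods developed in the thesis.

```python
def staleness_weight(staleness, alpha=0.5):
    # Hypothetical polynomial decay: a fresh gradient (staleness 0) gets
    # full weight 1.0; the weight shrinks as the worker's model copy lags
    # further behind the current global parameters.
    return 1.0 / (1.0 + staleness) ** alpha

def apply_stale_update(params, grad, staleness, lr=0.01):
    # Scale the learning rate by the staleness weight before applying the
    # (possibly outdated) gradient to the global parameter vector.
    w = staleness_weight(staleness)
    return [p - lr * w * g for p, g in zip(params, grad)]
```

Dynamic schemes of this kind trade off throughput against update quality: stale contributions still make progress, but cannot drag the global model far off course.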