Selector-Actor-Critic and Tuner-Actor-Critic Algorithms for Reinforcement Learning

dc.contributor.author Wang, Zhengdao
dc.contributor.author Kamal, Ahmed
dc.contributor.author Masadeh, Ala’eddin
dc.contributor.department Electrical and Computer Engineering
dc.date 2020-11-20T20:46:36.000
dc.date.accessioned 2021-02-25T17:07:58Z
dc.date.available 2021-02-25T17:07:58Z
dc.date.copyright Tue Jan 01 00:00:00 UTC 2019
dc.date.embargo 2018-01-01
dc.date.issued 2019-01-01
dc.description.abstract This work presents two reinforcement learning (RL) architectures that mimic the way rational humans analyze available information and make decisions. The proposed algorithms are called selector-actor-critic (SAC) and tuner-actor-critic (TAC). They are obtained by modifying the well-known actor-critic (AC) algorithm. SAC is equipped with an actor, a critic, and a selector. The role of the selector is to determine the most promising action at the current state based on the latest estimate from the critic. TAC is model-based and consists of a tuner, a model-learner, an actor, and a critic. After receiving the approximated value of the current state-action pair from the critic and the learned model from the model-learner, the tuner uses the Bellman equation to tune the value of the current state-action pair. This tuned value is then used by the actor to optimize the policy. We investigate the performance of the proposed algorithms and compare them with the AC algorithm in numerical simulations to show their advantages.
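To make the abstract's description concrete, below is a minimal tabular sketch of the two key steps. The function names, the Q array, and the deterministic model dictionary are illustrative assumptions, not the paper's implementation: the selector step follows the abstract's description of choosing the most promising action from the critic's latest estimates, and the tuner step is a one-step Bellman backup through a learned model.

    import numpy as np

    def select_action(Q, state):
        # SAC selector (illustrative): pick the most promising action at
        # `state` according to the critic's latest estimates Q[state, :].
        return int(np.argmax(Q[state]))

    def tune_value(Q, model, state, action, gamma=0.99):
        # TAC tuner (illustrative): refine the critic's estimate of the
        # current state-action pair with a one-step Bellman backup through
        # the learned model. Here `model[state][action]` is assumed to
        # return a deterministic (reward, next_state) pair for brevity;
        # the actor would then use this tuned value to update its policy.
        reward, next_state = model[state][action]
        return reward + gamma * np.max(Q[next_state])

    # Example setup: Q = np.zeros((n_states, n_actions)), with `model`
    # learned from observed transitions during interaction.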
dc.description.comments This is a manuscript of a proceedings paper published as Masadeh, Ala'eddin, Zhengdao Wang, and Ahmed E. Kamal. "Selector-Actor-Critic and Tuner-Actor-Critic Algorithms for Reinforcement Learning." In 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP). DOI: https://doi.org/10.1109/WCSP.2019.8928124. Posted with permission.
dc.format.mimetype application/pdf
dc.identifier archive/lib.dr.iastate.edu/ece_conf/104/
dc.identifier.articleid 1105
dc.identifier.contextkey 20254644
dc.identifier.s3bucket isulib-bepress-aws-west
dc.identifier.submissionpath ece_conf/104
dc.identifier.uri https://dr.lib.iastate.edu/handle/20.500.12876/93893
dc.language.iso en
dc.source.bitstream archive/lib.dr.iastate.edu/ece_conf/104/2019_WangZhengdao_SelectorActor.pdf (Fri Jan 14 18:20:29 UTC 2022)
dc.source.uri https://doi.org/10.1109/WCSP.2019.8928124
dc.subject.disciplines Signal Processing
dc.subject.disciplines Systems and Communications
dc.subject.keywords Reinforcement learning
dc.subject.keywords model-based learning
dc.subject.keywords model-free learning
dc.subject.keywords actor-critic
dc.title Selector-Actor-Critic and Tuner-Actor-Critic Algorithms for Reinforcement Learning
dc.type article
dc.type.genre conference
dspace.entity.type Publication
relation.isAuthorOfPublication b7a82e2a-7fd1-4c26-85f1-64be3e645430
relation.isAuthorOfPublication 8b78cd8b-fbd3-47b1-aa96-113e1b2b159e
relation.isOrgUnitOfPublication a75a044c-d11e-44cd-af4f-dab1d83339ff
File
Original bundle
Name: 2019_WangZhengdao_SelectorActor.pdf
Size: 1.59 MB
Format: Adobe Portable Document Format