Stochastic Conservative Contextual Linear Bandits

dc.contributor.author Lin, Jiabin
dc.contributor.author Lee, Xian Yeow
dc.contributor.author Jubery, Talukder
dc.contributor.author Moothedath, Shana
dc.contributor.author Sarkar, Soumik
dc.contributor.author Ganapathysubramanian, Baskar
dc.contributor.department Mechanical Engineering
dc.contributor.department Electrical and Computer Engineering
dc.contributor.department Plant Sciences Institute
dc.date.accessioned 2022-04-05T22:55:16Z
dc.date.available 2022-04-05T22:55:16Z
dc.date.issued 2022
dc.description.abstract Many physical systems have underlying safety considerations that require the deployed strategy to satisfy a set of constraints. Further, we often have only partial information about the state of the system. We study the problem of safe real-time decision making under uncertainty. In this paper, we formulate a conservative stochastic contextual bandit problem for real-time decision making in which an adversary chooses a distribution over the set of possible contexts and the learner is subject to certain safety/performance constraints. The learner observes only the context distribution, while the exact context is unknown, and the goal is to develop an algorithm that selects a sequence of optimal actions to maximize the cumulative reward without violating the safety constraints at any time step. Leveraging the UCB algorithm for this setting, we propose a conservative linear UCB algorithm for stochastic bandits with context distribution. We prove an upper bound on the regret of the algorithm and show that it can be decomposed into three terms: (i) an upper bound for the regret of the standard linear UCB algorithm, (ii) a constant term (independent of the time horizon) that accounts for the loss of being conservative in order to satisfy the safety constraint, and (iii) a constant term (independent of the time horizon) that accounts for the loss due to the exact contexts being unknown and only their distribution being known. To validate the performance of our approach, we perform extensive simulations on synthetic data and on real-world maize data collected through the Genomes to Fields (G2F) initiative.
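The abstract describes a conservative variant of linear UCB that acts on a context distribution rather than an exact context, and that falls back to a known baseline action whenever a pessimistic estimate of cumulative reward would violate the performance constraint. Below is a minimal, self-contained sketch of that idea; the variable names (psi, baseline_arm, alpha_cons, beta), the sampled-context model, and the specific form of the safety check are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a conservative LinUCB acting on context *distributions*.
# Names and modeling choices below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

d = 5              # feature dimension
n_arms = 4
T = 2000
lam = 1.0          # ridge regularization
beta = 1.0         # exploration width (tuned/derived in the paper's analysis)
alpha_cons = 0.1   # allowed fraction of baseline performance loss
theta_star = rng.normal(size=d) / np.sqrt(d)   # unknown parameter (simulation only)

A = lam * np.eye(d)   # Gram matrix of played features
b = np.zeros(d)       # reward-weighted feature sum

baseline_arm = 0      # assumed known safe baseline policy
cum_lower_bound = 0.0 # pessimistic estimate of the learner's reward so far
cum_baseline = 0.0    # estimated cumulative reward of the baseline

def sample_context_distribution():
    """Adversary picks a distribution over contexts; the learner only sees
    draws from it, so it works with the empirical mean feature vector."""
    mean = rng.normal(size=d)
    return mean + 0.1 * rng.normal(size=(32, d))   # 32 i.i.d. context draws

for t in range(T):
    samples = sample_context_distribution()
    # Expected feature vector per arm under the context distribution
    # (here: a simple per-arm rotation of the mean context, for illustration).
    psi = np.stack([np.roll(samples.mean(axis=0), a) for a in range(n_arms)])

    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    widths = beta * np.sqrt(np.einsum('ad,dk,ak->a', psi, A_inv, psi))
    ucb = psi @ theta_hat + widths
    lcb = psi @ theta_hat - widths

    opt_arm = int(np.argmax(ucb))
    r_baseline_est = float(psi[baseline_arm] @ theta_hat)

    # Conservative (safety) check: deviate from the baseline only if the
    # pessimistic total-reward estimate stays above the required fraction
    # of the baseline's cumulative reward.
    if cum_lower_bound + lcb[opt_arm] >= (1 - alpha_cons) * (cum_baseline + r_baseline_est):
        arm = opt_arm
    else:
        arm = baseline_arm

    # Environment feedback (simulation): reward comes from one true context draw.
    true_context = samples[rng.integers(len(samples))]
    phi = np.roll(true_context, arm)
    reward = float(phi @ theta_star) + 0.05 * rng.normal()

    # Bookkeeping for the conservative constraint and the ridge estimator.
    cum_lower_bound += lcb[arm]
    cum_baseline += r_baseline_est
    A += np.outer(psi[arm], psi[arm])
    b += reward * psi[arm]
```

The key design point this sketch tries to convey is the decoupling of exploration (UCB over expected features) from safety (a running lower-bound check against a baseline), which mirrors the three-way regret decomposition stated in the abstract.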
dc.description.comments This is a pre-print of the article Lin, Jiabin, Xian Yeow Lee, Talukder Jubery, Shana Moothedath, Soumik Sarkar, and Baskar Ganapathysubramanian. "Stochastic Conservative Contextual Linear Bandits." arXiv preprint arXiv:2203.15629 (2022). DOI: 10.48550/arXiv.2203.15629. Copyright 2022 The Author(s). Posted with permission.
dc.identifier.uri https://dr.lib.iastate.edu/handle/20.500.12876/JwjbJ7yw
dc.language.iso en
dc.publisher arXiv
dc.source.uri https://doi.org/10.48550/arXiv.2203.15629
dc.title Stochastic Conservative Contextual Linear Bandits
dc.type Preprint
dspace.entity.type Publication
relation.isAuthorOfPublication da41682a-ff6f-466a-b99c-703b9d7a78ef
relation.isOrgUnitOfPublication 6d38ab0f-8cc2-4ad3-90b1-67a60c5a6f59
relation.isOrgUnitOfPublication a75a044c-d11e-44cd-af4f-dab1d83339ff
relation.isOrgUnitOfPublication 4b65823e-e153-42b2-8a61-9d65df8a816e
File
Name: 2022-GanapathysubramanianBaskar-StochasticConservative.pdf
Size: 6.87 MB
Format: Adobe Portable Document Format