2022-06-28-ariu22a.md
---
title: Thresholded Lasso Bandit
booktitle: Proceedings of the 39th International Conference on Machine Learning
abstract: In this paper, we revisit the regret minimization problem in sparse stochastic contextual linear bandits, where feature vectors may be of large dimension $d$, but where the reward function depends on a few, say $s_0\ll d$, of these features only. We present Thresholded Lasso bandit, an algorithm that (i) estimates the vector defining the reward function as well as its sparse support, i.e., significant feature elements, using the Lasso framework with thresholding, and (ii) selects an arm greedily according to this estimate projected on its support. The algorithm does not require prior knowledge of the sparsity index $s_0$ and can be parameter-free under some symmetric assumptions. For this simple algorithm, we establish non-asymptotic regret upper bounds scaling as $\mathcal{O}( \log d + \sqrt{T} )$ in general, and as $\mathcal{O}( \log d + \log T)$ under the so-called margin condition (a probabilistic condition on the separation of the arm rewards). The regret of previous algorithms scales as $\mathcal{O}( \log d + \sqrt{T \log (d T)})$ and $\mathcal{O}( \log T \log d)$ in the two settings, respectively. Through numerical experiments, we confirm that our algorithm outperforms existing methods.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: ariu22a
month: 0
tex_title: Thresholded Lasso Bandit
firstpage: 878
lastpage: 928
page: 878-928
order: 878
cycles: false
bibtex_author: Ariu, Kaito and Abe, Kenshi and Proutiere, Alexandre
author:
- given: Kaito
  family: Ariu
- given: Kenshi
  family: Abe
- given: Alexandre
  family: Proutiere
date: 2022-06-28
address:
container-title: Proceedings of the 39th International Conference on Machine Learning
volume: 162
genre: inproceedings
issued:
  date-parts:
  - 2022
  - 6
  - 28
pdf:
extras:
---
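For readers skimming this entry, here is a minimal sketch of the two-step procedure the abstract describes: (i) a Lasso estimate whose small coefficients are thresholded away to recover the sparse support, and (ii) greedy arm selection using the estimate projected on that support. The synthetic environment, the regularization schedule `lam`, and all hyperparameters below are illustrative assumptions, not the paper's exact choices.

```python
# Sketch of the Thresholded Lasso bandit loop from the abstract.
# Environment, lambda schedule, and threshold are assumed placeholders.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, s0, K, T = 100, 5, 10, 2000               # ambient dim, sparsity, arms, horizon
theta = np.zeros(d)
theta[rng.choice(d, s0, replace=False)] = 1.0  # sparse reward parameter

X_hist, r_hist = [], []
theta_hat, support = np.zeros(d), np.arange(d)

for t in range(1, T + 1):
    arms = rng.normal(size=(K, d))           # feature vectors for round t
    # (ii) greedy selection via the estimate projected on its support
    a = int(np.argmax(arms[:, support] @ theta_hat[support]))
    reward = arms[a] @ theta + rng.normal(scale=0.1)
    X_hist.append(arms[a])
    r_hist.append(reward)

    # (i) Lasso estimate, then threshold small coefficients to get the
    # support; lam ~ sqrt(log(d) * log(t) / t) is an assumed schedule
    lam = 0.05 * np.sqrt(np.log(d) * np.log(t + 1) / t)
    lasso = Lasso(alpha=lam, max_iter=5000).fit(np.array(X_hist), np.array(r_hist))
    support = np.flatnonzero(np.abs(lasso.coef_) > lam)
    if support.size == 0:                    # fall back to full dimension
        support = np.arange(d)
    theta_hat = lasso.coef_
```

The threshold level and the $\lambda_t$ schedule are what drive the support-recovery and regret guarantees in the paper; here they are only plausible placeholders to make the two steps concrete.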