Optimization as estimation with Gaussian processes in bandit settings
[Abstract] Optimizing an expensive unknown function is an important problem addressed by Bayesian optimization. Motivated by the challenge of parameter tuning in both machine learning and robotic planning problems, we study the maximization of a black-box function under the assumption that the function is drawn from a Gaussian process (GP) with a known prior. We propose an optimization strategy that directly uses a maximum a posteriori (MAP) estimate of the argmax of the function. This strategy offers both practical and theoretical advantages: no tradeoff parameter needs to be selected, and, moreover, we establish close connections to the popular GP-UCB and GP-PI strategies. GP-UCB and GP-PI may be viewed as special cases of MAP estimation; conversely, the MAP criterion can be understood as automatically and adaptively trading off exploration and exploitation in GP-UCB and GP-PI. We illustrate the effects of this adaptive tuning both theoretically and empirically. We establish tighter regret bounds than previous methods, as well as an upper bound on the number of steps necessary to achieve low regret. In our experiments, we present an extensive empirical evaluation on robotics and vision tasks, demonstrating the robustness of this strategy across a range of performance criteria.
[Publication date] [Publishing institution] Massachusetts Institute of Technology
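The abstract contrasts the proposed MAP-of-argmax criterion with GP-UCB, which requires a hand-chosen tradeoff parameter. As a minimal sketch of the baseline being compared against, the following implements GP regression with a squared-exponential kernel and the GP-UCB selection rule on a 1-D grid; the kernel, lengthscale, and `beta` values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel on 1-D inputs (illustrative choice).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # Standard GP regression posterior mean and variance at query points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train)
    Kss_diag = np.ones(len(x_query))  # k(x, x) = 1 for the RBF kernel
    alpha = np.linalg.solve(K, y_train)
    mu = Ks @ alpha
    v = np.linalg.solve(K, Ks.T)
    var = Kss_diag - np.einsum('ij,ji->i', Ks, v)
    return mu, np.maximum(var, 0.0)

def gp_ucb_pick(x_train, y_train, x_query, beta=2.0):
    # GP-UCB: query the point maximizing mu + sqrt(beta) * sigma.
    # beta is the exploration-exploitation tradeoff parameter that the
    # MAP criterion in the abstract avoids having to select by hand.
    mu, var = gp_posterior(x_train, y_train, x_query)
    return x_query[np.argmax(mu + np.sqrt(beta) * np.sqrt(var))]

# Hypothetical usage: three observations, candidate grid on [0, 1].
x_train = np.array([0.1, 0.5, 0.9])
y_train = np.array([0.2, 1.0, 0.1])
x_query = np.linspace(0.0, 1.0, 101)
next_x = gp_ucb_pick(x_train, y_train, x_query)
```

A larger `beta` biases `next_x` toward high-variance (unexplored) regions, a smaller one toward the current posterior maximum; the abstract's point is that the MAP criterion performs this tuning adaptively rather than via a fixed `beta`.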