Multi-Armed Bandits: Theory and Applications to Online Learning in Networks

Morgan & Claypool | English | 2020 | ISBN-10: 1681736373 | 147 Pages | PDF | 1.21 MB

by Qing Zhao (Author), R. Srikant (Editor)

Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments.

Since the first bandit problem was posed by Thompson in 1933 in the context of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent developments on both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview of the history of bandit problems, contrasting the two schools of approaches, Bayesian and frequentist, and highlighting foundational results and key applications. Chapters 2 and 4 cover, respectively, the canonical Bayesian and frequentist bandit models. In Chapters 3 and 5, we discuss major variants of the canonical bandit models that lead to new directions, bring in new techniques, and broaden the applications of this classical problem. In Chapter 6, we present several representative application examples in communication networks and social-economic systems, aiming to illuminate the connections between the Bayesian and the frequentist formulations of bandit problems and how structural results pertaining to one may be leveraged to obtain solutions under the other.
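To give a flavor of the frequentist formulation the description mentions, here is a minimal sketch of the classic UCB1 index policy (Auer, Cesa-Bianchi, and Fischer, 2002), one of the foundational frequentist bandit algorithms the book covers. This example is not taken from the book; the two-armed Bernoulli setup and all function names are illustrative.

```python
import math
import random

def ucb1(reward_fns, horizon, seed=0):
    """Run the UCB1 index policy on a set of arms.

    reward_fns: list of zero-argument callables returning rewards in [0, 1].
    Returns (pull counts, empirical mean rewards) after `horizon` pulls.
    """
    random.seed(seed)
    n_arms = len(reward_fns)
    counts = [0] * n_arms   # number of pulls per arm
    means = [0.0] * n_arms  # empirical mean reward per arm

    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # pull each arm once to initialize
        else:
            # UCB index: empirical mean plus an exploration bonus
            arm = max(range(n_arms),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        r = reward_fns[arm]()
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update
    return counts, means

# Illustrative setup: two Bernoulli arms with success probabilities 0.3 and 0.7.
counts, means = ucb1([lambda: float(random.random() < 0.3),
                      lambda: float(random.random() < 0.7)], horizon=2000)
```

Over the horizon, the policy concentrates its pulls on the better arm while still sampling the other often enough to control estimation error; this exploration-exploitation trade-off is the central theme of the book.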

About the Author
Qing Zhao is the Joseph C. Ford Professor of Engineering at Cornell University. Prior to that, she was a Professor in the ECE Department at the University of California, Davis. She received a Ph.D. in Electrical Engineering from Cornell in 2001. Her research interests include sequential decision theory, stochastic optimization, machine learning, and algorithmic theory with applications in infrastructure, communications, and social-economic networks. She is a Fellow of the IEEE, a Distinguished Lecturer of the IEEE Signal Processing Society, a Marie Sklodowska-Curie Fellow of the European Union Research and Innovation program, and was a Jubilee Chair Professor at Chalmers University during her 2018-2019 sabbatical leave. She received the 2010 IEEE Signal Processing Magazine Best Paper Award and the 2000 Young Author Best Paper Award from the IEEE Signal Processing Society.

R. Srikant received his B.Tech. from the Indian Institute of Technology, Madras in 1985, his M.S. and Ph.D. from the University of Illinois at Urbana-Champaign in 1988 and 1991, respectively, all in Electrical Engineering. He was a Member of Technical Staff at AT&T Bell Laboratories from 1991 to 1995. He is currently with the University of Illinois at Urbana-Champaign, where he is the Fredric G. and Elizabeth H. Nearing Endowed Professor of Electrical and Computer Engineering and a Professor in the Coordinated Science Lab.

Download:

http://longfiles.com/60t8iojpso09/Multi-Armed_Bandits_Theory_and_Applications_to_Online_Learning_in_Networks.pdf.html



Related eBooks:
The Testing Network: An Integral Approach to Test Activities in Large Software Projects
Guide to Convolutional Neural Networks: A Practical Application to Traffic-Sign Detection and Classi
Information Systems Security
Listen and Talk: Full-duplex Cognitive Radio Networks
High-Resolution and High-Speed Integrated CMOS AD Converters for Low-Power Applications
Energy-Efficient Spectrum Management for Cognitive Radio Sensor Networks
Intelligent Information Processing VIII
Advances in Ubiquitous Networking 2
Millimeter-Wave Low Noise Amplifiers
HCNA Networking Study Guide
Channel Codes: Classical and Modern
Low-cost Smart Antennas
Copyright Disclaimer:
This site does not store any files on its server. We only index and link to content provided by other sites. If any content infringes a copyright, please contact the content providers to have it deleted and email us; we will remove the relevant links or content immediately.