
Multi-armed Bandit Allocation Indices, 2nd Edition

ISBN: 978-0-470-67002-6
Hardcover
312 pages
March 2011
List Price: US $132.00
Government Price: US $88.92
This is a print-on-demand title: it will be printed specifically to fill your order. Please allow an additional 10-15 days for delivery. The book is not returnable.

Foreword.

Foreword to the first edition. 

Preface.

Preface to the first edition.

1 Introduction or Exploration.

Exercises.

2 Main Ideas: Gittins Index.

2.1 Introduction.

2.2 Decision processes.

2.3 Simple families of alternative bandit processes.

2.4 Dynamic programming.

2.5 Gittins index theorem.

2.6 Gittins index.

2.7 Proof of the index theorem by interchanging bandit portions.

2.8 Continuous-time bandit processes.

2.9 Proof of the index theorem by induction and interchange argument.

2.10 Calculation of Gittins indices.

2.11 Monotonicity conditions.

2.12 History of the index theorem.

2.13 Some decision process theory.

Exercises.

3 Necessary Assumptions for Indices.

3.1 Introduction.

3.2 Jobs.

3.3 Continuous-time jobs.

3.4 Necessary assumptions.

3.5 Beyond the necessary assumptions.

Exercises.

4 Superprocesses, Precedence Constraints and Arrivals.

4.1 Introduction.

4.2 Bandit superprocesses.

4.3 The index theorem for superprocesses.

4.4 Stoppable bandit processes.

4.5 Proof of the index theorem by freezing and promotion rules.

4.6 The index theorem for jobs with precedence constraints.

4.7 Precedence constraints forming an out-forest.

4.8 Bandit processes with arrivals.

4.9 Tax problems.

4.10 Near optimality of nearly index policies.

Exercises.

5 The Achievable Region Methodology.

5.1 Introduction.

5.2 A simple example.

5.3 Proof of the index theorem by greedy algorithm.

5.4 Generalized conservation laws and indexable systems.

5.5 Performance bounds for policies for branching bandits.

5.6 Job selection and scheduling problems.

5.7 Multi-armed bandits on parallel machines.

Exercises.

6 Restless Bandits and Lagrangian Relaxation.

6.1 Introduction.

6.2 Restless bandits.

6.3 Whittle indices for restless bandits.

6.4 Asymptotic optimality.

6.5 Monotone policies and simple proofs of indexability.

6.6 Applications to multi-class queuing systems.

6.7 Performance bounds for the Whittle index policy.

6.8 Indices for more general resource configurations.

Exercises.

7 Multi-Population Random Sampling (Theory).

7.1 Introduction.

7.2 Jobs and targets.

7.3 Use of monotonicity properties.

7.4 General methods of calculation: use of invariance properties.

7.5 Random sampling times.

7.6 Brownian reward processes.

7.7 Asymptotically normal reward processes.

7.8 Diffusion bandits.

Exercises.

8 Multi-Population Random Sampling (Calculations).

8.1 Introduction.

8.2 Normal reward processes (known variance).

8.3 Normal reward processes (mean and variance both unknown).

8.4 Bernoulli reward processes.

8.5 Exponential reward processes.

8.6 Exponential target process.

8.7 Bernoulli/exponential target process.

Exercises.

9 Further Exploitation.

9.1 Introduction.

9.2 Website morphing.

9.3 Economics.

9.4 Value of information.

9.5 More on job-scheduling problems.

9.6 Military applications.

References.

Tables.

Index.
