Bayesian Bandit Explorer
This tool lets you watch the performance of Thompson Sampling for solving the Multi-Armed Bandit Problem: play N slot machines ("one-armed bandits"), each with an unknown probability of winning, and try to maximize your total winnings by learning which machines pay off best.
Inspired by Ted Dunning's blog and Cam Davidson-Pilon's Bayesian Methods for Hackers. A few notes are here.
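The idea behind the visualization can be sketched in a few lines. This is a minimal Beta-Bernoulli Thompson Sampling simulation, not the page's actual implementation: each arm keeps a Beta(wins+1, losses+1) posterior over its win probability, and on every round we play the arm whose posterior sample is highest. The four true win probabilities below are made-up illustration values.

```python
import random

def thompson_sampling(true_probs, n_rounds, seed=0):
    """Simulate Beta-Bernoulli Thompson Sampling over the given bandits."""
    rng = random.Random(seed)
    n = len(true_probs)
    wins = [0] * n
    losses = [0] * n
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one sample from each arm's Beta(wins+1, losses+1) posterior.
        samples = [rng.betavariate(wins[i] + 1, losses[i] + 1) for i in range(n)]
        # Play the arm with the highest sampled win probability.
        arm = max(range(n), key=samples.__getitem__)
        # Pull the arm: Bernoulli reward with the arm's (unknown) true probability.
        reward = 1 if rng.random() < true_probs[arm] else 0
        wins[arm] += reward
        losses[arm] += 1 - reward
        total_reward += reward
    return wins, losses, total_reward

# Four hypothetical bandits with unknown (to the algorithm) win probabilities.
wins, losses, reward = thompson_sampling([0.2, 0.4, 0.6, 0.8], 2000)
pulls = [w + l for w, l in zip(wins, losses)]
```

After a couple of thousand rounds, the pull counts concentrate on the best arm: exploration happens automatically because uncertain arms still occasionally produce the highest posterior sample.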
B.F. Lyon Visualizations
[Interactive chart: Bandit 1, Bandit 2, Bandit 3, Bandit 4 — horizontal axis: Probability of Success]