A Markov Chain Analysis of the Three Strikes Game

INTRODUCTION

The Price is Right has been a daytime television staple for decades. Many college students have missed a class or two to watch the show. This popularity among college students makes it a convenient example to get students excited about an aspect of mathematics.

In 1998, in an attempt to increase the likelihood of winning, The Price is Right changed the rules of one of its pricing games, the Three Strikes game [1]. Is a contestant today better off than one who played the game before the change in the rules? To address this question, one can model the two versions of the game using Markov chains.

The Three Strikes game is always played for a car. The five single-digit numbers in the price of the car are written on tokens and placed in a bag, along with one token bearing a strike (X). The contestant draws one token at a time. If a number is drawn, the contestant guesses which position that number occupies in the price of the car. If the guess is incorrect, the number is returned to the bag; if the guess is correct, the number stays out of the bag. If the contestant draws the strike token, it is returned to the bag. To win the car, the contestant must draw all five numbers and correctly place them in the price of the car before drawing the strike three times. Prior to the change in 1998, the Three Strikes game began with three strike tokens in the bag; however, a drawn strike was not returned to the bag.
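Before setting up the chains, it can help to see the two rule sets side by side in code. The following Python sketch (not from the article) estimates the win probability of each version by Monte Carlo simulation, assuming the contestant guesses a digit's position uniformly at random among the still-open positions; the assumption is formalized in the next section. The function names and parameters are illustrative choices.

```python
import random

def play_three_strikes(strike_tokens, replace_strike, rng):
    """Simulate one game; return True if the contestant wins.

    Assumes each position guess is uniform over the open positions,
    so with k numbers left a guess is correct with probability 1/k.
    """
    numbers_left = 5                    # digits not yet correctly placed
    strikes_in_bag = strike_tokens
    strikes_drawn = 0
    while numbers_left > 0 and strikes_drawn < 3:
        total = numbers_left + strikes_in_bag
        if rng.random() < strikes_in_bag / total:
            strikes_drawn += 1          # drew a strike token
            if not replace_strike:
                strikes_in_bag -= 1     # pre-1998: the strike stays out
        elif rng.random() < 1 / numbers_left:
            numbers_left -= 1           # drew a digit and placed it correctly
        # an incorrectly placed digit is returned; the state is unchanged
    return numbers_left == 0

def estimate_win_probability(strike_tokens, replace_strike,
                             trials=100_000, seed=1):
    rng = random.Random(seed)
    wins = sum(play_three_strikes(strike_tokens, replace_strike, rng)
               for _ in range(trials))
    return wins / trials

p_new = estimate_win_probability(strike_tokens=1, replace_strike=True)   # post-1998
p_old = estimate_win_probability(strike_tokens=3, replace_strike=False)  # pre-1998
```

Running the simulation suggests the post-1998 contestant is indeed better off under the random-guessing assumption; the Markov chain analysis below makes this precise.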

ASSUMPTIONS

For the Markov chains, we need two key assumptions. The first is that the contestant does not use logic when guessing the position of a digit in the price of the car; placements are guessed at random. In a Markov chain, the probability that the system is in state j on the next observation depends only on the current state i (which may be the same state) and not on the history of the process prior to the current observation. For example, a contestant could guess 0 as the first digit, or a contestant could attempt to place a particular digit incorrectly in the same position more than once. The second assumption is that the five numbers in the price of the car are distinct, as is the case on the show.

In addition to the mathematical assumptions, students who wish to analyze the two versions of the Three Strikes game, or other games amenable to similar analyses (e.g., Money Game, Pathfinder), must have the appropriate mathematical background. At a minimum, students need to be familiar with matrices and their operations and with elementary probability. If students have not been exposed to Markov chains in their coursework, then they would also need the ability to learn this material independently. A student who has completed linear algebra and a course in probability would have a good foundation. A modeling course or other upper-level mathematics course that includes Markov chains would also suffice in preparing a student for this type of analysis.

MARKOV CHAIN SET-UP

Recall that a Markov chain can be used to provide information about the probabilities of events that are described in terms of states or sets of states. In addition to well-defined states, a key ingredient in using Markov chains is the transition matrix. An n x n transition matrix consists of the one-step transition probabilities p_ij, each denoting the conditional probability that, given the system is in state i on one observation, it will be in state j on the next observation, where 1 ≤ i, j ≤ n. These probabilities do not depend on the observation number.
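As a small concrete illustration (not from the article), the sketch below builds a toy transition matrix in Python, checks that each row sums to 1, and computes a two-step transition probability by matrix multiplication, since the k-step transition matrix is the kth power of the one-step matrix. The 3-state chain here is arbitrary.

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A toy 3-state chain: P[i][j] is the probability of moving from
# state i to state j in one step; each row must sum to 1.
P = [[0.5, 0.5, 0.0],
     [0.2, 0.3, 0.5],
     [0.0, 0.0, 1.0]]   # state 2 is absorbing

assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)

P2 = mat_mult(P, P)   # two-step transition probabilities
# P2[0][2] is the probability of reaching state 2 from state 0
# in exactly two steps: 0.5 * 0.0 + 0.5 * 0.5 + 0.0 * 1.0 = 0.25
```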

The Three Strikes game can be thought of as occurring in states. There are seventeen states. The initial state of a contestant is having zero strikes and zero numbers correctly placed. State fifteen is the winning state, having five correctly placed numbers and two strikes or fewer. State sixteen is the losing state, having three strikes and four or fewer correctly placed numbers. …
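The seventeen-state set-up above can be sketched in code for the post-1998 version of the game. In the sketch below (mine, not the article's), the fifteen transient states are indexed as pairs (c, s) of correctly placed numbers and strikes drawn, and the winning and losing states are absorbing; this indexing is an illustrative choice and need not match the article's numbering. Iterating the chain from the initial state (zero correct, zero strikes) gives the probability of eventually being absorbed in the winning state.

```python
# States: (c, s) with c = 0..4 correctly placed numbers and s = 0..2
# strikes drawn (15 transient states), plus WIN and LOSE: 17 states.
transient = [(c, s) for c in range(5) for s in range(3)]
index = {state: i for i, state in enumerate(transient)}
WIN, LOSE = 15, 16
N = 17

P = [[0.0] * N for _ in range(N)]
for (c, s) in transient:
    i = index[(c, s)]
    k = 5 - c                      # number tokens still in the bag
    total = k + 1                  # plus the single strike token
    # Drew the strike token (it is returned to the bag):
    j = LOSE if s == 2 else index[(c, s + 1)]
    P[i][j] += 1 / total
    # Drew a number and guessed its position correctly (prob 1/k):
    j = WIN if c == 4 else index[(c + 1, s)]
    P[i][j] += (k / total) * (1 / k)
    # Drew a number but guessed incorrectly; the token is returned:
    P[i][i] += (k / total) * ((k - 1) / k)
P[WIN][WIN] = P[LOSE][LOSE] = 1.0

# Iterate from the initial state until the transient mass is negligible:
v = [0.0] * N
v[index[(0, 0)]] = 1.0
for _ in range(2000):
    v = [sum(v[i] * P[i][j] for i in range(N)) for j in range(N)]
p_win = v[WIN]   # probability of absorption in the winning state
```

Note that from any transient state the strike and the correct placement each occur with probability 1/(k + 1), so conditional on the state changing, the two outcomes are equally likely; this is why the win probability does not depend on which digits remain, only on how many placements and strikes have occurred.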