# MATH3901 Probability and stochastic processes

#### Superbox

##### New Member
A slot machine works by inserting a $1 coin. If the player wins, the coin is returned with an additional $1 coin; otherwise the original coin is lost. The probability of winning is 1/2, unless the previous play resulted in a win, in which case the probability is p < 1/2. If the cost of maintaining the machine averages $c per play (with c < 1/3), give conditions on the value of p that the owner of the machine must arrange in order to make a profit in the long run.

Not sure how to start this. Is this a Markov chain (gambler's ruin)?

#### InteGrand

##### Well-Known Member
Imagine repeatedly inserting \$1 coins and consider a sequence of random variables $X_1, X_2, X_3, \ldots$ defined by $X_j = I(A_j)$, for $j = 1, 2, 3, \ldots$, where $A_j$ is the event that the player wins on the $j$-th play. ($I$ denotes an indicator function.)

Note that for $j \geq 1$, if $X_j = 1$ (player won on the $j$-th play), then $X_{j+1}$ is distributed as $\mathrm{Bernoulli}(p)$. If $X_j = 0$ (player lost on the $j$-th play), then $X_{j+1}$ is distributed as $\mathrm{Bernoulli}\left(\frac{1}{2}\right)$. So the distribution of $X_{j+1}$ is conditionally independent of the past, given the present (i.e. $X_j$). So the $X_j$'s form a Markov chain.

The (random) profit made by the owner of the machine on the $j$-th play is $P_j = -2X_j + 1$ (that is, he loses \$1 (gains $-\$1$) if $X_j = 1$, i.e. if the player wins the round, and he gains \$1 if $X_j = 0$, i.e. if the player loses the round). The state space of the Markov chain is $\mathcal{S} = \{0, 1\}$. Try to find the stationary distribution of this Markov chain $(X_j)_{j \geq 1}$. This will allow you to work out the limiting behaviour of the profit $P_j$ made by the owner in a round (i.e. in the long run). From this, you should be able to see what the condition on $p$ should be.
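For reference, the computation the hint points to can be sketched as follows (using the transition probabilities stated above):

```latex
% Stationary distribution (\pi_0, \pi_1) of the two-state chain:
% from state 1 the chain moves to 1 with probability p,
% from state 0 it moves to 1 with probability 1/2.
\pi_1 = p\,\pi_1 + \tfrac{1}{2}\,\pi_0, \qquad \pi_0 + \pi_1 = 1
\quad\Longrightarrow\quad
\pi_1 = \frac{1}{3 - 2p}, \qquad \pi_0 = \frac{2 - 2p}{3 - 2p}.
% Long-run expected profit per play for the owner, net of maintenance:
\lim_{j\to\infty} \mathbb{E}[P_j] - c
= \pi_0 - \pi_1 - c
= \frac{1 - 2p}{3 - 2p} - c > 0.
```

Rearranging the final inequality gives the required condition on $p$.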


#### Superbox

##### New Member
Thanks for the detailed response. Is the answer $p < \frac{1 - 3c}{2(1 - c)}$? (No idea how to make LaTeX work.)
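For what it's worth, a bound like that can be sanity-checked numerically. A minimal Monte Carlo sketch (the values p = 0.3, c = 0.1 and the helper name `simulate_profit` are just illustrative choices, not part of the problem):

```python
import random

def simulate_profit(p, c, n_plays=200_000, seed=0):
    """Monte Carlo estimate of the owner's average profit per play.

    State: whether the previous play was a win. Win probability is p
    after a win, 1/2 otherwise; the owner gains $1 on a player loss,
    loses $1 on a player win, and pays $c maintenance every play.
    """
    rng = random.Random(seed)
    prev_win = False
    profit = 0.0
    for _ in range(n_plays):
        win_prob = p if prev_win else 0.5
        win = rng.random() < win_prob
        profit += (-1.0 if win else 1.0) - c
        prev_win = win
    return profit / n_plays

p, c = 0.3, 0.1
threshold = (1 - 3 * c) / (2 * (1 - c))   # proposed bound on p
# Analytic long-run profit per play, using the stationary win
# frequency pi_1 = 1 / (3 - 2p) of the two-state chain:
analytic = 1 - 2 / (3 - 2 * p) - c
print(f"threshold on p: {threshold:.4f}")
print(f"analytic profit/play: {analytic:.4f}, "
      f"simulated: {simulate_profit(p, c):.4f}")
```

With these values p sits below the threshold (about 0.389), and both the analytic and simulated per-play profits come out positive, consistent with the bound.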


#### Superbox

##### New Member
A flea hops on the vertices A, B, and C of a triangle. Each hop takes it from one vertex to the next, and the times between successive hops are independent random variables, each with an exponential distribution with mean 1/λ. Each hop is equally likely to be in the clockwise direction or in the anticlockwise direction. Find the probability that the flea is at vertex A at a given time t > 0, starting from A at time t = 0.

(Hint: Write Kolmogorov's backward equations and solve for the transition probability function of interest. The solution of y′(x) = a + b·y(x) is y(x) = c·e^(bx) − a/b, for some constant c.)

Have another question. I wrote out the instantaneous transition rate matrix, and there are nine backward equations to solve. Not sure how to solve these equations using the hint.
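One way to tame the nine equations, sketched under the assumption that symmetry arguments are allowed: by the symmetry of the triangle, p_BA(t) = p_CA(t) = (1 − p_AA(t))/2, so the backward equation for p_AA collapses to the single ODE p_AA′(t) = λ/2 − (3λ/2)·p_AA(t), which is exactly the form in the hint with a = λ/2 and b = −3λ/2. With p_AA(0) = 1 this gives the candidate answer p_AA(t) = 1/3 + (2/3)·e^(−3λt/2). A quick numerical cross-check of that candidate (pure Python, truncated power series for e^(Qt); λ = 1 and t = 0.7 are arbitrary test values):

```python
import math

lam, t = 1.0, 0.7  # arbitrary test values

# Generator: leave each vertex at rate lam, split evenly between the
# two neighbouring vertices (clockwise/anticlockwise with prob. 1/2).
Q = [[-lam, lam/2, lam/2],
     [lam/2, -lam, lam/2],
     [lam/2, lam/2, -lam]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def expm(M, s, terms=60):
    """Truncated power series for exp(M s); fine for small ||M s||."""
    Ms = [[x * s for x in row] for row in M]
    ident = [[float(i == j) for j in range(3)] for i in range(3)]
    result, term = ident, ident
    for n in range(1, terms):
        term = [[x / n for x in row] for row in mat_mul(term, Ms)]
        result = [[result[i][j] + term[i][j] for j in range(3)]
                  for i in range(3)]
    return result

P = expm(Q, t)  # transition matrix P(t) = exp(Qt)
closed_form = 1/3 + (2/3) * math.exp(-1.5 * lam * t)
print(f"numeric p_AA(t) = {P[0][0]:.6f}, closed form = {closed_form:.6f}")
```

The two numbers agree to many decimal places, which is good evidence the symmetry reduction (and hence the closed form) is right.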