Musings

A random collection

Archive for the ‘probability-problems’ Category

PROB: Rules of Probability

  1. Trivial cases:
      P(\emptyset) = 0, P(\Omega) = 1
      P\left(\cup_i A_i\right) = \sum_i P(A_i), \text{ where } A_i \cap A_j = \emptyset \text{ for } i \neq j
  2. E or F (general case):
      P(E \text{ or } F) = P(E \cup F) = P(E) + P(F) - P(E \cap F)
    • If E and F are mutually exclusive (or E and F are disjoint sets), then
        P(E \cup F) = P(E) + P(F)
  3. Complement:
      P(E^c) = P(\bar{E}) = 1 - P(E)
  4. Conditional Probability:
      P(E \vert F) = \frac{P(E \cap F)}{P(F)} = \frac{P(F\vert E)P(E)}{P(F)}

    If E and F are independent events, then P(E|F) = P(E)

  5. E and F (general case):
      P(E \text{ and } F) = P(E \cap F) = P(E \vert F) P(F) = P(F \vert E) P(E)
    • If E and F are independent events, then
        P(E \cap F) = P(E) P(F)
  6. Important formula
    Given pairwise disjoint subsets B_i that partition the sample space, i.e., \cup B_i = \Omega and B_i \cap B_j = \emptyset for i \neq j, then

      P(A) = \sum P(A \cap B_i) = \sum P(A\vert B_i)P(B_i)
  7. Bayes’ formula
      P(A \vert B) = \frac{P(A \text{ and } B)}{P(B)} = \frac{P(B \vert A) P(A)}{P(A \text{ and } B) + P(\text{not } A \text{ and } B)} = \frac{P(B \vert A) P(A)}{P(B \vert A) P(A) + P(B \vert \text{not } A) P(\text{not } A)}
  8. Random Variable – a function from the sample space to the set of real numbers
      X \colon \Omega \to \mathbb{R}

    Types:

    • Discrete Random Variables – can take only discrete values from a set of real numbers \{ x_1, x_2, \cdots \}. The set of discrete values may be finite or countably infinite.
    • Continuous Random Variables
  9. Distribution Function – the probability that the random variable takes a value less than or equal to x
      F_X(x) = P(\omega \in \Omega \colon X(\omega) \leq x) = F(x) = P(X \leq x)
      and
      \bar{F}(x) = 1 - F(x) = P(X > x)

    It is easy to see that F(-\infty) = 0, F(\infty) = 1, and

      P(a < X \leq b) = F(b) - F(a)

    Distribution Functions for Discrete and Continuous variables

    • Discrete, let p_i = P(X = x_i) be the probability that X is x_i
        F(x) = P(X \leq x) = \sum_{x_i \leq x} p_i
    • Continuous, let f(x) = \frac{d F(x)}{d x} be the probability density function
        F(x) = P(X \leq x) = \int_{-\infty}^x f(u) du

    Note:

      f(x) = \lim_{h \to 0} \frac{P(x < X \leq x+h)}{h} = \lim_{h \to 0} \frac{F(x+h)-F(x)}{h} = \frac{d F(x)}{d x}

    and, for a continuous random variable, at any single point x

      P(X = x) = \lim_{h \to 0^+} \left[ F(x) - F(x-h) \right] = 0
  10. Indicator Function of a set A
    \mathbf{1}_A(x) = \begin{cases} 1 & \text{if } x \in A \\ 0 & \text{otherwise} \end{cases}
  11. Stieltjes Notation
    Discrete Random Variables: dF(x) = p_x
    Continuous Random Variables: dF(x) = f(x)dx
  12. Expected Value
      E[X] = \int_{-\infty}^{\infty} x dF(x)
      and
      E[g(X)] = \int_{-\infty}^{\infty} g(x) dF(x)

    Linear function of random variables a+bX

      E[a+bX] = a+b E[X]

    Expected value of Indicator function

      E[\mathbf{1}_A(X)] = P(X \in A)
  13. Variance
      \text{Var}[X] = E[(X-\mu)^2] = E[X^2] - \mu^2, \text{ where } \mu = E[X]

    Linear function of random variable

      \text{Var}[a+bX] = b^2 \text{Var}[X]
  14. Skewness – measure of asymmetry
      s[X] = \frac{E[(X-\mu)^3]}{(\text{Var}[X])^{3/2}}

    Linear function of random variable

      s[a+bX] = s[X]
  15. Kurtosis – measure of tail heaviness (peakedness)
      \kappa[X] = \frac{E[(X-\mu)^4]}{(\text{Var}[X])^{2}}

    Linear function of random variable

      \kappa[a+bX] = \kappa[X]

    For a normal distribution, kurtosis is 3.

  16. Moment Generating Function
      M_X(t) = E[e^{tX}], M_X(0) = 1

    Properties of MGF

    1. Uniqueness: M_{X_1}(t) = M_{X_2}(t) \implies F_{X_1}(x) = F_{X_2}(x)
      If two random variables have identical MGFs, they have the same distribution function.
    2. Linear function of a random variable: M_{Y}(t) = M_{a+bX}(t) = e^{at}M_X(bt)
    3. Derivative of the MGF: \frac{d M_X(t)}{dt} = \int x e^{tx} dF(x) = E[Xe^{tX}]
    4. First-order derivative at t=0: \left.\frac{d M_X(t)}{dt}\right|_{t=0} = E[X]
    5. Higher-order derivatives at t=0: \left.\frac{d^k M_X(t)}{dt^k}\right|_{t=0} = E[X^k]
  17. Laplace Transform
      L_X(t) = E[e^{-tX}] = M_X(-t)
  18. Characteristic Function
      \phi_X(t) = E[e^{itX}] = M_X(it)

  19. Change of variable
    Given Y = y(X), where y is a continuous, monotonic and differentiable function, its inverse x(\cdot) = y^{-1}(\cdot) exists and is also continuously differentiable, so X = x(Y).

    Let f_X(x) be the density function of the random variable X. Then

      f_Y(y) = f_X(x(y)) \left\vert \frac{d}{dy}x(y) \right\vert

    Hint: If y'(x) > 0, then F_Y(y) = P(Y \leq y) = P(X \leq x(y)) = F_X(x(y)).

    Otherwise, if y'(x) < 0, then F_Y(y) = P(Y \leq y) = P(X \geq x(y)) = 1 - P(X < x(y)) = 1 - F_X(x(y)) for a continuous X.

    Now evaluate f_Y(y) = \frac{d F_Y(y)}{dy}.

  20. Conditional Distributions
    Goal: What is the distribution of X conditioned on X \in A? Example: X \vert X > a, a loss conditioned on the loss being bigger than a.

      F_{X \vert X > a}(x) = P(X \leq x \vert X > a) = \frac{P(a < X \leq x)}{P(X > a)} = \frac{F(x) - F(a)}{1 - F(a)}
      The density is f_{X\vert X > a}(x) = \begin{cases} 0 & x \leq a, \\ \frac{f(x)}{1 - F(a)} & x > a \end{cases}
  21. Quantiles
    The q-quantile of a distribution function F is \pi_q(F)

      \pi_q(F) = \inf\{x \colon F(x) \geq q\} = F^{-1}(q)

    That is, the value of x, say \pi_q, for which F(\pi_q) = q (when F is continuous and strictly increasing).

  22. Inverse distribution functions
    We define a generalized inverse distribution function as

      F^{-1}(x) = \inf\{u \colon F(u) \geq x \}

    This definition works even when F is not continuous or strictly increasing (see the sketch below).
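
The last two items are the basis of inverse-transform sampling: draw U from Uniform(0,1) and return F^{-1}(U). Below is a minimal Python sketch for a discrete distribution (a fair die), where F is a step function and only the generalized inverse of item 22 makes sense. The helper names and the round-off guard are illustrative choices of this sketch, not part of the notes above.

    import bisect
    import random

    # Generalized inverse F^{-1}(q) = inf{x : F(x) >= q} for a discrete
    # distribution, plus inverse-transform sampling X = F^{-1}(U).
    values = [1, 2, 3, 4, 5, 6]   # x_i
    probs = [1 / 6] * 6           # p_i = P(X = x_i), a fair die

    # F(x_i) = sum of p_j over x_j <= x_i (a step function)
    cdf = []
    total = 0.0
    for p in probs:
        total += p
        cdf.append(total)

    def quantile(q):
        """pi_q = inf{x : F(x) >= q}, the generalized inverse at q."""
        i = bisect.bisect_left(cdf, q)          # first index with F(x_i) >= q
        return values[min(i, len(values) - 1)]  # guard against round-off near q = 1

    def sample():
        """Inverse-transform sampling: plug a uniform draw into F^{-1}."""
        return quantile(random.random())

    print(quantile(0.3), quantile(0.95))             # 2 6
    draws = [sample() for _ in range(60_000)]
    print(sum(d == 6 for d in draws) / len(draws))   # close to 1/6

Because F has jumps here, an ordinary inverse does not exist, which is exactly why the infimum-based definition in item 22 is needed.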

Written by curious

February 3, 2010 at 10:57 pm

PROB: Rare disease

A disease affects 1 out of every 10,000 people.

A pharmaceutical company comes out with a test that gives a positive result 99% of the time if somebody has the disease (and misses it 1% of the time).

It also gives false positives 2.5% of the time: when a person does not have the disease, the test might still come out positive by error.

What are the chances of a person having the disease if that person tested positive?

Solution:

A – The event that somebody contracts the disease

B – The event that the test turns out positive

Given

P(A) = 1/10000 = 0.0001, P(not A) = 1 - P(A) = 0.9999

P(B | A) = 0.99 : given a person has the disease (A), the probability that the test will be positive (B)

P(B | not A) = 0.025 : given a person does not have the disease (not A), the probability that the test will still come out positive (a false positive)

         A                          not A                                  sum
B        P(A and B) = P(B|A)P(A)    P(not A and B) = P(B|not A)P(not A)    P(B)
not B    P(A and not B)             P(not A and not B)                     P(not B)
total    P(A) = 0.0001              P(not A) = 0.9999                      1

Goal: To find P(A | B) : given that the test is positive (B), the probability that somebody will have the disease (A)

P(A | B) = P (A and B)/P(B) = P(B | A) P(A)/P(B)

We do not know P(B). How can we calculate it? Note: B = (B and A) or (B and not A)

P(B) = P(B and A) + P(B and not A) = P(B|A)P(A) + P(B|not A)P(not A)

P(B) = 0.99 x 0.0001 + 0.025 x 0.9999 = 0.0250965

Therefore, P(A|B) = 0.99 x 0.0001/P(B) = 0.0039447731
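
For readers who want to verify the arithmetic, here is a short Python check of the numbers above (the variable names are just mnemonics for the events A and B):

    # Numerical check of the rare-disease calculation (values from the post)
    p_A = 1 / 10_000         # P(A): prevalence of the disease
    p_not_A = 1 - p_A        # P(not A)
    p_B_given_A = 0.99       # P(B | A): true positive rate
    p_B_given_not_A = 0.025  # P(B | not A): false positive rate

    # Total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
    p_B = p_B_given_A * p_A + p_B_given_not_A * p_not_A

    # Bayes' formula: P(A|B) = P(B|A)P(A) / P(B)
    p_A_given_B = p_B_given_A * p_A / p_B

    print(p_B)          # 0.0250965
    print(p_A_given_B)  # ~0.003945

Even with a positive test, the chance of actually having the disease is below 0.4%, because the disease is so rare that false positives dominate.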

Written by curious

February 3, 2010 at 10:50 pm

PROB: At least 1 double six in 24 rolls of a ‘pair’ of dice

What is the probability of rolling 1 or more double sixes in 24 throws of a pair of dice, P(A)?

Probability of not throwing a double six in a single throw of a pair of dice, P(B) = 35/36

Probability of not throwing any double six in 24 throws of a pair of dice, P(NOT A) = \left(\frac{35}{36}\right)^{24}

Probability of rolling 1 or more double sixes in 24 throws of a pair of dice, P(A) = 1 – P(NOT A)

P(A) = 1 - P(\text{NOT A}) = 1 - P(B)^{24} = 1 - \left(\frac{35}{36}\right)^{24}
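A quick Python check of this closed-form answer:

    # P(at least one double six in 24 throws of a pair of dice)
    p_no_double_six = (35 / 36) ** 24   # P(NOT A)
    print(1 - p_no_double_six)          # ~0.4914, slightly less than 1/2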

Written by curious

February 3, 2010 at 10:37 pm

PROB: At least 1 six in 4 throws of a die

What is the probability of getting at least 1 six in 4 rolls of a die?
Let us define the events as

A        Rolling 1 or more sixes in 4 rolls of a die
NOT A    Rolling no sixes in 4 rolls of a die
B        No six in a single roll of a die
P(B)     5/6

P(A) = 1 - P(NOT A) = 1 - P(B)^{4} = 1 - \left(\frac{5}{6}\right)^4
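
The closed-form answer, 1 - (5/6)^4 ≈ 0.5177, can also be checked by a small Monte Carlo simulation in Python (the trial count below is arbitrary):

    import random

    # Estimate P(at least one six in 4 rolls of a fair die) by simulation
    trials = 200_000
    hits = sum(
        any(random.randint(1, 6) == 6 for _ in range(4))
        for _ in range(trials)
    )

    print(hits / trials)     # simulated estimate
    print(1 - (5 / 6) ** 4)  # exact value, ~0.5177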

Written by curious

February 3, 2010 at 10:28 pm

PROB: Game of dice

Let’s consider a game of rolling a standard, fair, six-sided die at most five times. You may stop whenever you want and receive as a reward the number of dollars corresponding to the number of dots shown on the die at the time you stop. The values at each roll will be 1, 2, 3, 4, 5, or 6, and the probability of each number on each roll is one-sixth. The objective is to find the stopping rule that will maximize the number of dollars you can expect to win on average.

If you always stop with the first roll, for example, the winnable amount is simply the expected value of a random variable that takes the values 1, 2, 3, 4, 5, and 6 with probability 1/6 each. That is, one-sixth of the time you will win 1, one-sixth of the time you will win 2, and so on, which yields the expected value 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6) = 7/2. Thus if you always quit on the first roll, you expect to win 3.5 dollars on the average.

But clearly it is not optimal to stop on the first roll if it is a 1, and it is always optimal to stop with a 6, so already you know part of the optimal stopping rule. Should you stop with a 5 on the first roll?

Clearly it is optimal to stop on the first roll if the value seen on the first roll is greater than the amount expected if you do not stop—that is, if you continue to roll after rejecting the first roll. That would put you in a new game where you are only allowed four rolls, the expected value of which is also unknown at the outset. The optimal strategy in a four-roll problem, in turn, is to stop at the first roll if that value is greater than the amount you expect to win if you continue in a three-roll problem, and so on. Working down, you eventually arrive at a problem whose strategy you do know. In a one-roll problem there is only one strategy, namely to stop, and the expected reward is the expected value of one roll of a fair die, which we saw is 3.5. That information now yields the optimal strategy in a two-roll problem—stop on the first roll if the value is more than you expect to win if you continue, that is, more than 3.5. So now we know the optimal strategy for a two-roll problem—stop at the first roll if it is a 4, 5, or 6, and otherwise continue—and that allows us to calculate the expected reward of the strategy.

In a two-roll problem, you win 4, 5, or 6 on the very first roll, with probability 1/6 each, and stop. Otherwise (the half of the time that the first roll was a 1, 2, or 3) you continue, in which case you expect to win 3.5 on the average. Thus the expected reward for the two-roll problem is 4(1/6) + 5(1/6) + 6(1/6) + (1/2)(3.5) = 4.25. This now gives you the optimal strategy for a three-roll problem—namely, stop if the first roll is a 5 or 6 (that is, more than 4.25), otherwise continue and stop only if the second roll is a 4, 5, or 6, and otherwise proceed with the final third roll. Knowing this expected reward for three rolls in turn yields the optimal strategy for a four-roll problem, and so forth. Working backwards, this yields the optimal strategy in the original five-roll problem: stop on the first roll only if it is a 5 or 6, stop on the second roll if it is a 5 or 6, on the third roll if it is a 5 or 6, on the fourth roll if it is a 4, 5, or 6, and otherwise continue to the last roll. This strategy guarantees that you will win about 5.13 dollars on average, and no other strategy is better. (So, in a six-roll game you should stop with the initial roll only if it is a 6.)

Total number of rolls    Stop if initial roll is in    Average optimal expected reward
1                        {1,2,3,4,5,6}                 3.5
2                        {4,5,6}                       (4+5+6)/6 + 3.5/2 = 4.25
3                        {5,6}                         (5+6)/6 + 4.25*4/6 = 4.67
4                        {5,6}                         (5+6)/6 + 4.67*4/6 = 4.94
5                        {5,6}                         (5+6)/6 + 4.94*4/6 = 5.13
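
The backward induction described above is short enough to code directly. A Python sketch (the function name is my own) that reproduces the thresholds and expected rewards in the table:

    # value[n] = expected reward with n rolls remaining under optimal play.
    # With n rolls left, keep the current roll x only if x > value[n-1].
    def expected_values(max_rolls):
        values = [0.0]  # with 0 rolls left the reward is 0
        for n in range(1, max_rolls + 1):
            cont = values[n - 1]  # value of rejecting the roll and continuing
            values.append(sum(max(x, cont) for x in range(1, 7)) / 6)
        return values

    vals = expected_values(5)
    for n in range(1, 6):
        stop_on = [x for x in range(1, 7) if x > vals[n - 1]]
        print(n, stop_on, round(vals[n], 4))
    # n=1: stop on any roll, 3.5      n=2: {4,5,6}, 4.25
    # n=3: {5,6}, 4.6667              n=4: {5,6}, 4.9444
    # n=5: {5,6}, 5.1296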

Written by curious

January 30, 2010 at 12:42 pm