(9/26/05)
MAJOR PROJECT: The project is due November 11th.
1.2.2: (a) S = {aaa,aaf,afa,aff,faa,faf,ffa,fff} (all orders). (b) Z_F = {aaf,aff,faf,fff} (ends in f); X_A = {aaa,aaf,afa,aff} (starts with a). (c) The intersection is {aaf,aff}, which is not empty, so the events are not mutually exclusive. (d)
1.3.2: S = {HF, HW, MF, MW}.
Fill in the probabilities given and calculated: S =
{HF=0.2, HW=0.4, MF=0.3, MW=0.1}. Now easy to see (a) P[W]=0.4+0.1=0.5;
(b) P[MF]=0.3; (c) P[H]=0.2+0.4=0.6.
1.4.1: P[H_0]=0.1+0.4=0.5;
P[B]=0.4+0.1+0.1=0.6; P[L or H_2]=P[L]+P[B and H_2]=0.1+0.1+0.2 + 0.1=0.5.
1.4.2: (a) P[L] = 1 - P[3 or
less minutes] = 1 - P[B_1] - P[B_2] - P[B_3] = 1 - a - a(1-a) - a(1-a)^2 =
(1-a)^3 = 0.57; (b) P[9 or less minutes] = sum_{n=1 to 9} a(1-a)^{n-1} = 1 - (1-a)^9
= 1 - ((1-a)^3)^3 = 1 - (0.57)^3 = 0.815
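A quick numeric check of both parts (a Python sketch; the variable names are mine, and the value of a is backed out of the solution's (1-a)^3 = 0.57):

```python
# Check of 1.4.2: geometric call durations, P[exactly n minutes] = a(1-a)^(n-1).
a = 1 - 0.57 ** (1 / 3)

# (a) P[L] = P[more than 3 minutes] = (1-a)^3
p_long = (1 - a) ** 3

# (b) P[9 or less minutes] = sum of the first nine terms = 1 - (1-a)^9
p_nine_or_less = sum(a * (1 - a) ** (n - 1) for n in range(1, 10))
```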
1.4.6: There are six unknowns
but only four facts, so there are infinitely many solutions in part (a). In part
(b), with 2 more facts, we get the single solution: S = {FH0=1/4, FH1=1/6,
FH2=0, VH0=1/12, VH1=1/6, VH2=1/3}.
1.5.5: The sample space is S =
{234,243,324,342,423,432} each with 1/6 probability. The rest is just counting.
1.5.6: Given P[L]=0.16 and P[H]=0.10 and (read carefully) P[L and H | L
or H] = 0.10. Set up equations and solve for P[L and
H] = 0.0236. Then P[H|L] = 0.0236/0.16 = 0.1475. Details of
(a): Since (L and H) is a subset of (L or H), we have P[L
and H|L or H] = P[(L and H) and (L or H)]/P[L or H] = P[L and H]/P[L or H] =
0.10. Therefore, P[L and H] = 0.10*P[L or H] =
0.10*(P[L] + P[H] - P[L and H]). Solving for P[L and
H] = 0.10*(0.16 + 0.10)/1.1 = 0.0236.
1.6.4: (a) P[A
and B] = 0 so 5/8 = P[A or B] = P[A] + P[B] - 0 = 3/8 + P[B] thus P[B] = 1/4.
Note that A is a subset of B^c so P[A
and B^c] = P[A] = 3/8. Similarly, P[A
or B^c] = P[B^c] = 3/4. (b)
P[A and B] = 0 while P[A]*P[B] = (3/8)(1/4) = 3/32 <> 0, so
not independent. (c) Since C and D are independent, P[C and D] = P[C]*P[D] and thus P[D] = P[C and D]/P[C] = (1/3)/(1/2) = 2/3.
Note that C = (C and D) or (C and D^c) is a disjoint
division of C. Thus, P[C and D^c] = P[C] - P[C and D]
= 1/2 - 1/3 = 1/6. Next, P[C^c and D^c] = P[(C or D)^c] = 1 - P[C or
D] = 1 - {P[C] + P[D] - P[C and D]} = 1 - {1/2 + 2/3 - 1/3} = 1/6. Finally,
since C and D independent, P[C|D] = P[C] = 1/2. (d) We
have P[C or D] = 5/6 already. P[C or D^c] = P[C] + P[D^c] - P[C and D^c] = 1/2 + (1 - 2/3) - 1/6 = 2/3. (e) They are
independent since P[C and D^c] = 1/6 = (1/2)*(1/3) =
P[C]*P[D^c].
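Parts (c)-(e) reduce to a short chain of arithmetic, sketched here in Python (variable names are mine):

```python
# Check of 1.6.4(c)-(e) under the given independence of C and D.
p_C = 1 / 2
p_CD = 1 / 3
p_D = p_CD / p_C                           # 2/3
p_C_and_Dc = p_C - p_CD                    # 1/6
p_C_or_D = p_C + p_D - p_CD                # 5/6
p_Cc_and_Dc = 1 - p_C_or_D                 # 1/6, De Morgan
p_C_or_Dc = p_C + (1 - p_D) - p_C_and_Dc   # 2/3
# (e) C and D^c independent?
e_independent = abs(p_C_and_Dc - p_C * (1 - p_D)) < 1e-12
```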
1.7.6: Let 1A and 1D stand for
the first detector being acceptable or defective and 2A and 2D for the second
similarly. From the description, the sample space - with probabilities - is S =
{1A2A = (3/5)*(4/5) = 12/25, 1A2D = (3/5)*(1/5) = 3/25, 1D2A = (2/5)*(2/5) =
4/25, 1D2D = (2/5)*(3/5) = 6/25}. Now all we have to do is add: (a) P[exactly one A] = P[1A2D] + P[1D2A] = 3/25 + 4/25 = 7/25.
(b) P[both D] = P[1D2D] = 6/25.
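The four sample-space probabilities and the two answers can be checked directly (Python sketch):

```python
# Check of 1.7.6: the four-outcome sample space for the two detectors.
space = {
    "1A2A": (3 / 5) * (4 / 5),   # 12/25
    "1A2D": (3 / 5) * (1 / 5),   # 3/25
    "1D2A": (2 / 5) * (2 / 5),   # 4/25
    "1D2D": (2 / 5) * (3 / 5),   # 6/25
}
total = sum(space.values())                       # must be 1
p_exactly_one_A = space["1A2D"] + space["1D2A"]   # 3/25 + 4/25 = 7/25
p_both_D = space["1D2D"]                          # 6/25
```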
1.7.7: The sample space with
probabilities is: S = {A1H1H2 = (1/2)*(1/4)*(3/4) = 3/32,
..., A1T1H2 = (1/2)*(3/4)*(3/4) = 9/32, ..., B1T1T2 = (1/2)*(1/4)*(3/4)
= 3/32} (calculate the others similarly). Note that the second toss is with the
other coin! There are two cases with H1H2 and P[H1H2]
= 3/32 + 3/32 = 6/32. There are four cases of H1 and P[H1]
= 3/32 + 1/32 + 3/32 + 9/32 = 1/2. There are four cases of H2 and P[H2] = 3/32 + 9/32 + 3/32 + 1/32 = 1/2. Since P[H1H2] <> P[H1]*P[H2], they are not independent.
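Enumerating the eight outcomes confirms the dependence (a Python sketch; the coin biases P[H]=1/4 for A and P[H]=3/4 for B are read off the probabilities above):

```python
# Check of 1.7.7: pick a coin at random, toss it, then toss the *other* coin.
p_head = {"A": 1 / 4, "B": 3 / 4}
other = {"A": "B", "B": "A"}

probs = {}
for first in ("A", "B"):
    for t1 in ("H", "T"):
        for t2 in ("H", "T"):
            p1 = p_head[first] if t1 == "H" else 1 - p_head[first]
            second = other[first]
            p2 = p_head[second] if t2 == "H" else 1 - p_head[second]
            probs[(first, t1, t2)] = 0.5 * p1 * p2

p_H1H2 = sum(p for (c, t1, t2), p in probs.items() if t1 == t2 == "H")
p_H1 = sum(p for (c, t1, t2), p in probs.items() if t1 == "H")
p_H2 = sum(p for (c, t1, t2), p in probs.items() if t2 == "H")
```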
1.8.3: (a) First card: 52
choices; second card: 51 remaining cards available. Thus
52*51 = 2652 outcomes. (b) First card: 52 choices; second card: 3
remaining of same type (rank). Thus 52*3 = 156 outcomes.
(c) Probability obviously 156/2652 = 1/17 (about 0.0588 but the fraction is
better). (d) If order is not important, then one-half of the two values found
in (a) and (b). The probability remains the same, however.
1.9.2:
P[8 straight] = (0.32)^8 = 0.00011. P[10
in 11] = C(11,10)*(0.32)^10*(0.68)^1 = 0.0000842.
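Both binomial-style calculations check out numerically (Python sketch):

```python
from math import comb

# Check of 1.9.2 with per-trial success probability p = 0.32.
p = 0.32
p_eight_straight = p ** 8                           # about 0.00011
p_ten_of_eleven = comb(11, 10) * p ** 10 * (1 - p)  # about 0.0000842
```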
2.2.3: (a) c(1
+ 4 + 9 + 16) = 30c = 1 so c = 1/30. (b) P[V a square]
= P[1] + P[4] = (1/30) + (16/30) = 17/30. (c) P[V
even] = P[2] + P[4] = (4/30) + (16/30) = 20/30 = 2/3. (d) P[V
> 2] = P[3] + P[4] = (9/30) + (16/30) = 25/30 = 5/6.
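All four parts are quick to verify once the PMF is in hand (Python sketch):

```python
# Check of 2.2.3: P_V(v) = c * v^2 for v = 1, 2, 3, 4.
c = 1 / 30
pmf = {v: c * v ** 2 for v in (1, 2, 3, 4)}
total = sum(pmf.values())       # must be 1, fixing c = 1/30
p_square = pmf[1] + pmf[4]      # 17/30
p_even = pmf[2] + pmf[4]        # 2/3
p_gt2 = pmf[3] + pmf[4]         # 5/6
```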
2.2.9: (b) Like the geometric
distribution (time of the first success): P_K(k) = (1-p)^{k-1}p for k=1..5; but for
k=6 we don't care if it responds or not so P_K(6) = (1-p)^5p + (1-p)^6 =
(1-p)^5. (c) Busy signal if 6 failures: (1-p)^6. (d)
Want (1-p)^n <= 0.02 with p=0.9 or 0.1^n <=
0.02; take logs and get n*log(0.1) <= log(0.02) or n >=
log(0.02)/log(0.10) = 1.7 so 2 {I don't think the question is correct}.
2.3.7: T/5
buses in T minutes means 1/5 per minute. (a) Poisson P_B(b) = (T/5)^b e^{-T/5}/b!. (b) T=2, b=3 so P_B(3) = (2/5)^3 e^{-2/5}/3! = 0.00715. (c) T=10, b=0 so P_B(0) = 2^0 e^{-10/5}/0! = e^{-2} =
0.135. (d) P[B >= 1] = 1 - P[B=0] = 1 - e^{-T/5} >= 0.99 or e^{-T/5}
<= 0.01; take logs and get -T/5 <= ln(0.01) = -ln(100) so T/5 >= ln(100) and
we need T >= 5ln(100) = 23.
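The Poisson arithmetic for parts (b)-(d) checks out (Python sketch; function and variable names are mine):

```python
from math import exp, factorial, log

# Check of 2.3.7: Poisson arrivals at rate 1/5 bus per minute.
def p_buses(b, T):
    lam = T / 5
    return lam ** b * exp(-lam) / factorial(b)

p_part_b = p_buses(3, 2)     # about 0.00715
p_part_c = p_buses(0, 10)    # e^-2, about 0.135
T_needed = 5 * log(100)      # part (d), about 23 minutes
```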
2.3.10: (a) Pascal(6,0.75) =
P_N(n) = C(n-1,5)(0.75)^6(0.25)^{n-6}. (b) P_N(10) =
C(9,5)(0.75)^6(0.25)^4 = 0.0876. (c) P[N >= 9] = 1 - P[N < 9] = 1 -
(P[6]+P[7]+P[8]) = 1 - (0.75)^6 * [1 + 6(0.25) + 21(0.25)^2] = 0.3215.
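The Pascal PMF and both numerical answers can be confirmed directly (Python sketch):

```python
from math import comb

# Check of 2.3.10: Pascal (negative binomial) PMF with k = 6, p = 0.75.
def pascal_pmf(n, k=6, p=0.75):
    return comb(n - 1, k - 1) * p ** k * (1 - p) ** (n - k)

p_n10 = pascal_pmf(10)                                     # about 0.0876
p_at_least_9 = 1 - sum(pascal_pmf(n) for n in (6, 7, 8))   # about 0.3215
```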
2.4.3: (b) P_X(x) = 0.4 at
x=-3, 0.4 at x=5, 0.2 at x=7, and zero elsewhere.
2.4.8: See 2.2.9 above for the
PMF. If p = 1/2, we have P_N(n) = (1/2)^n for n=1..5
while = (1/2)^5 for n=6 (zero elsewhere, of course). CDF now obvious: F_N(n) = 0 for n<1, = 1/2 for 1 <= n < 2, = 3/4 for 2
<= n < 3, ..., = 31/32 for 5 <= n < 6, and 1 for n >= 6.
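Tabulating the PMF confirms it sums to 1 and reproduces the CDF steps (Python sketch):

```python
# Check of 2.4.8: the truncated-geometric PMF of N with p = 1/2.
pmf = {n: (1 / 2) ** n for n in range(1, 6)}
pmf[6] = (1 / 2) ** 5          # probability the first five tries all fail

def cdf(n):
    return sum(p for k, p in pmf.items() if k <= n)

total = sum(pmf.values())      # must be 1
```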
2.5.2:
(a) Obviously, P_C(20) = 0.6 and P_C(30) = 0.4 and
zero elsewhere using cents as units. (b) E[C] = 20(.6)+30(.4)
= 24.
2.5.5:
From 2.4.3 above, E[X] = (-3)(.4) + (5)(.4) + (7)(.2)
= 2.2.
2.6.3:
(a) From 2.4.3 above and W = -X, P_W(w) = 0.4 at w=3,
0.4 at w=-5, 0.2 at w=-7, and zero elsewhere. (b) F_W(w)
= 0 for w < -7, 0.2 for -7 <= w < -5, 0.6 for -5 <= w < 3, and 1
for w >= 3. (c) If we use the PMF for W we calculate E[W]
= (-7)(.2) + (-5)(.4) + (3)(.4) = -2.2. However, Theorem 2.12 tells us
immediately that E[W] = E[-X] = -E[X] = -2.2.
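Both routes to E[W] agree, as a quick check shows (Python sketch):

```python
# Check of 2.6.3: W = -X with the PMF of X from 2.4.3.
pmf_x = {-3: 0.4, 5: 0.4, 7: 0.2}
pmf_w = {-x: p for x, p in pmf_x.items()}

E_X = sum(x * p for x, p in pmf_x.items())   # 2.2
E_W = sum(w * p for w, p in pmf_w.items())   # -2.2, i.e. -E[X]
```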
2.6.6:
For a geometric RV such as M, P_M(m) = (1-p)^{m-1}p
for m=1,2,3,... and zero otherwise. [Note: you talk until you first hang up.]
Since there is a flat fee, the monthly cost is at least $20, so P_C(c) = 0 for c
< 20. Next, P_C(20) = P[M <= 30] = sum_{m=1..30} (1-p)^{m-1}p = 1 -
(1-p)^30, summing the finite series. For M > 30, C = 20 + (M-30)/2 since each
minute over 30 costs half a dollar. Solving, we get M = 2C - 10. Thus P_C(c) =
P_M(2c-10) for c = 20.5, 21, 21.5, ... (these are
the only possible charges). Finally, P_C(c) = the values above for c < 20 and c = 20,
and = (1-p)^{2c-11}p for the other (half-step) values
with c > 20. (Yes, p = 1/30.)
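A sanity check that the pieces of this PMF sum to 1 (Python sketch; the infinite tail is truncated at a point where the remainder is negligible):

```python
# Check of 2.6.6 with p = 1/30: the derived PMF of the monthly cost C
# should sum to 1 over c = 20, 20.5, 21, 21.5, ...
p = 1 / 30
p_c_20 = 1 - (1 - p) ** 30                    # P[C = 20] = P[M <= 30]
tail = sum((1 - p) ** (2 * c - 11) * p        # P_C(c) = P_M(2c - 10)
           for c in (20 + 0.5 * k for k in range(1, 2000)))
total = p_c_20 + tail
```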
2.7.6:
Now we have no free minutes but less flat fee. Thus, C = 15 + M and for c >=
16, P_C(c) = P_M(c-15) = (1-p)^{c-16}p. By 2.12, E[C]
= E[15 + M] = 15 + E[M] = 15 + 1/p.
[We
didn't do 2.7.5, but its answer is E[C] = 20 + (1-p)^30/(2p)
after lots of algebra and Math Fact B7! The new plan is a bargain if 15 + 1/p is
less than that. It turns out to be for p greater than about 0.2.]
2.8.4:
We have the PMF of X and its expected value. The second moment is E[X^2] = (-3)^2(0.4) + 5^2(0.4) + 7^2(0.2) = 23.4. The variance is Var[X] = E[X^2] - (E[X])^2 =
23.4 - (2.2)^2 = 18.56.
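The moment calculation is easy to confirm (Python sketch, reusing the PMF from 2.4.3):

```python
# Check of 2.8.4 using the PMF of X from 2.4.3.
pmf_x = {-3: 0.4, 5: 0.4, 7: 0.2}
E_X = sum(x * p for x, p in pmf_x.items())         # 2.2
E_X2 = sum(x ** 2 * p for x, p in pmf_x.items())   # 23.4
var_X = E_X2 - E_X ** 2                            # 18.56
```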
2.8.9:
In 2.6.5 we transmit until successful (Geometric) so P_X(x) = (1-p)^{x-1}p = q^{x-1}(1-q) if we use q as the probability of
failure. The time to send a packet and acknowledgement for X transmissions
gives us T = 2X-1 milliseconds until packet is correctly received. By Theorem
2.5, Var[X] = (1-p)/p^2 or q/(1-q)^2.
Using Theorem 2.12, Var[T] = 2^2*Var[X]
= 4q/(1-q)^2, so the maximum "jitter", or standard deviation, is 2*SQRT(q)/(1-q)
= 2. Algebra gives us the equation q^2 - 3q + 1 = 0, which we solve getting q =
(3 +/- SQRT(5))/2. The positive sign gives a
value outside (0,1), necessary for a probability, so
only the negative sign makes sense. Thus q = (3 - SQRT(5))/2
= 0.382 is the maximum error rate that assures jitter less than 2
milliseconds.
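Plugging the chosen root back into the jitter formula confirms it hits 2 ms exactly (Python sketch):

```python
from math import sqrt

# Check of 2.8.9: for the root q = (3 - sqrt(5))/2 of q^2 - 3q + 1 = 0,
# the jitter 2*sqrt(q)/(1-q) should come out to 2 milliseconds.
q = (3 - sqrt(5)) / 2
sigma_T = 2 * sqrt(q) / (1 - q)
```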
2.9.3:
2.9.6: