Wednesday, June 07, 2006

06/07/06

1. Difficult - So p hat is an estimate of p just as X bar is an estimate of mu? (And both are estimates because they come from samples rather than the entire population... right?)
2. Reflective - We definitely dealt with "p hat" in high school statistics (I was in regular, not AP). We also calculated standard error... except the teacher just gave us the equation and some numbers, and we plugged and chugged.
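A quick Python sketch of that plug-and-chug standard error, using the usual formula SE = sqrt(p hat * (1 - p hat) / n). The true proportion and the sample size here are made up for illustration:

```python
import math
import random

random.seed(0)

# Made-up setup: true proportion p (unknown in practice) and sample size n
p_true = 0.3
n = 500

# Draw n Bernoulli trials and compute the sample proportion p hat
sample = [1 if random.random() < p_true else 0 for _ in range(n)]
p_hat = sum(sample) / n

# The plug-and-chug standard error: SE = sqrt(p_hat * (1 - p_hat) / n)
se = math.sqrt(p_hat * (1 - p_hat) / n)
print(p_hat, se)
```

The sample proportion p hat wanders around the true p from sample to sample, and the SE shrinks as n grows, just like with X bar and mu.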

Sunday, June 04, 2006

06/05/06

12.7.2 (part)

1. Difficult - Why is Xn considered the "unbiased" estimator of mu? I understand how it can get close to mu, but why is it "unbiased"? If we had bad luck, couldn't we have randomly picked a very skewed sample... one which doesn't accurately represent the population? What would be "biased" then?

2. Reflective - We could probably use the standard error in Chemistry. (Though we didn't use this exact calculation in labs... we did do the whole "5.433 +/- 0.001" thing.)
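On the "unbiased" question above: a single sample can indeed be badly off, but "unbiased" means the sample mean is right on average over many repeated samples. A small Python simulation, with made-up values for mu, sigma, and the sample size:

```python
import random

random.seed(1)

# Made-up population: mean mu = 10, standard deviation sigma = 2
mu, sigma = 10.0, 2.0
n = 5   # small sample: any single sample mean can miss mu badly

# Repeat the sampling experiment many times and average the sample means
num_experiments = 20000
sample_means = []
for _ in range(num_experiments):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    sample_means.append(sum(sample) / n)

avg_of_means = sum(sample_means) / num_experiments
print(sample_means[0])   # one "unlucky" sample mean may be far from mu
print(avg_of_means)      # the long-run average of sample means sits near mu
```

So an unlucky skewed sample is possible; unbiasedness only says the estimator has no systematic tendency to over- or under-shoot mu.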

Friday, June 02, 2006

06/02/06

1. Difficult - It is difficult to find anything to say because this section is pretty straightforward. I will note something interesting, though: I never really knew probability and statistics were intertwined. (In high school they seem very separate, most likely because the mathematics isn't advanced enough yet to explain how they connect.)... I think I can tell from the probability function graphs and the normal distribution graphs that those are what probability uses.

Oh, the sub-k notations are there to denote each RV, right? x1, x2, x3 (as used in Examples 1 and 2 in the summations).
2. Reflective - I think I have seen the S sub n notation before in stats class, for variance... I think.

Tuesday, May 30, 2006

05/31/06

12.6.2 Reading

1. Difficult - So, is the CLT basically saying that as the sample gets larger, the approximation to F(x) gets better? Isn't this the same as the "Weak Law of Large Numbers"? Or wait, it says that F(x) is the dist. func. of the SND, so how is it saying that the sample mean approaches the SND? How do we know that (I know it says it is not proven here)? Couldn't there theoretically be a skewed graph with the same mean and variance as a normally distributed graph? Isn't a skewed graph not SND?

2. Reflective - The approximating "function" of the CLT reminds me of... was it Poisson?... one of the distributions we learned about was used for approximating... I think it also depended on a very large n.
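A Python sketch of the CLT question above: start from a clearly skewed population (Exponential with mean 1 and variance 1) and check that the standardized sample means still behave like the standard normal distribution. The sample size and trial count are arbitrary:

```python
import math
import random

random.seed(2)

# Skewed population: Exponential(1) has mean 1 and variance 1
n, trials = 200, 5000

# Standardize each sample mean: Z = (Xbar - mu) * sqrt(n) / sigma
zs = []
for _ in range(trials):
    xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
    zs.append((xbar - 1.0) * math.sqrt(n))

# If the CLT holds, roughly 95% of the Z values land in (-1.96, 1.96)
frac = sum(1 for z in zs if -1.96 < z < 1.96) / trials
print(frac)
```

This is also the answer to the skewness worry: a single skewed distribution is not the SND, but the distribution of its standardized sample mean still gets pushed toward the SND as n grows.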

Thursday, May 25, 2006

05/26/06

1. Difficult - I don't understand the difference between the "arithmetic average" and the expectation. They are essentially the same thing, are they not?
2. Reflective - 12.6.1 is pretty intuitive. It makes sense that the larger the sample, the more reliable the estimate of the expectation, since it is "harder" for big samples to be wrong (there are more individual data points that would all need to be off... it is less likely that 10 pieces of data are all off than 5, for example).
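The difference in the "Difficult" question can be seen in a few lines of Python: the expectation is a fixed number computed from the distribution (3.5 for a fair die), while the arithmetic average is computed from data and is itself random; it only settles near 3.5 as the sample grows:

```python
import random

random.seed(3)

# Fair die: the expectation E[X] = 3.5 is a fixed number from the
# distribution; the arithmetic average below is computed from data.
def average_of_rolls(n):
    return sum(random.randint(1, 6) for _ in range(n)) / n

# The average drifts toward 3.5 as the sample grows (law of large numbers)
for n in (10, 100, 10000):
    print(n, average_of_rolls(n))
```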

Sunday, May 21, 2006

05/24/06

1. Difficult - I don't understand the book's explanation of the "hazard rate". Perhaps you could give us a graphical example?
2. Reflective - Too bad our lives cannot be predicted with a function... too many variables/environmental factors, I think... (I'm looking at the aging example)... otherwise we would be better able to schedule what we should and shouldn't do before we die.
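On the hazard-rate question: the hazard rate is h(t) = f(t) / (1 - F(t)), the instantaneous failure rate given survival up to time t. A Python sketch comparing the exponential (constant hazard, no aging) with a Weibull (increasing hazard, a crude aging model); the parameter values here are arbitrary:

```python
import math

# Hazard rate: h(t) = f(t) / (1 - F(t)), the instantaneous failure
# rate given survival up to time t.

def exp_hazard(t, lam=1.0):
    # Exponential: f(t) = lam * e^{-lam t} and 1 - F(t) = e^{-lam t},
    # so h(t) = lam, a constant: no "aging" at all
    return (lam * math.exp(-lam * t)) / math.exp(-lam * t)

def weibull_hazard(t, k=2.0, lam=1.0):
    # Weibull with shape k > 1: h(t) = (k / lam) * (t / lam)**(k - 1)
    # grows with t, a crude model of wearing out
    return (k / lam) * (t / lam) ** (k - 1)

for t in (0.5, 1.0, 2.0):
    print(t, exp_hazard(t), weibull_hazard(t))
```

Plotting both h(t) curves would give the graphical example asked for: a flat line for the exponential versus a rising line for the Weibull.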

05/22/06

1. Difficult - For Example 18, why is the notation P(F(X) <= u) instead of P(X = x)... isn't it a density? Or does it signify that you can use many different functions for these problems?
2. Reflective - Why is the exponential distribution a "poor lifetime model"? Is it because in nature distributions are more erratic?
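On the P(F(X) <= u) notation: for a continuous X with CDF F, the random variable F(X) is itself Uniform(0, 1), so P(F(X) <= u) = u for any u in [0, 1], whatever distribution X has. A quick Python check using the exponential; the choice of u and the trial count are arbitrary:

```python
import math
import random

random.seed(4)

# Check P(F(X) <= u) = u with X ~ Exponential(1), F(x) = 1 - e^{-x}
trials = 20000
u = 0.3
count = 0
for _ in range(trials):
    x = random.expovariate(1.0)
    if 1.0 - math.exp(-x) <= u:
        count += 1

frac = count / trials
print(frac)   # should land near u = 0.3
```

That universality is why the statement is written in terms of F(X) rather than one particular density.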

Tuesday, May 16, 2006

05/17/06

12.5.3
1. Difficult - Is there an intuitive explanation for the 12 in the variance formula (b-a)^2/12? The squaring makes sense, but the 12 doesn't really (the calculations do make sense, though).
2. Reflective - Are there other special distributions like the Uniform Distribution (for example, one that forms a triangle? or an arc?)?
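Where the 12 comes from, worked out in the comments and checked numerically in Python (the endpoints a and b are made up):

```python
import random

random.seed(5)

# Where the 12 comes from, for X ~ Uniform(a, b):
#   E[X]   = (a + b) / 2
#   E[X^2] = (b^3 - a^3) / (3 * (b - a)) = (a^2 + a*b + b^2) / 3
#   Var(X) = E[X^2] - E[X]^2
#          = (a^2 + a*b + b^2) / 3 - (a + b)^2 / 4
#          = (4a^2 + 4ab + 4b^2 - 3a^2 - 6ab - 3b^2) / 12
#          = (b - a)^2 / 12        <- the 12 is just the 3 * 4 of the algebra

a, b = 2.0, 7.0   # made-up endpoints
n = 200000
xs = [random.uniform(a, b) for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(var, (b - a) ** 2 / 12)   # the two values should be close
```

So the 12 is not deep; it is the common denominator of the 1/3 from integrating x^2 and the 1/4 from squaring the mean.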