10 August, 19:14

To assess the precision of a laboratory scale, we measure a block known to have a mass of 1 gram. We measure the block n times and record the mean x̄ of the measurements. Suppose the scale readings are normally distributed with unknown mean µ and standard deviation σ = 0.001 g. How large should n be so that a 95% confidence interval for µ has a margin of error of ±0.0001 g?

Answers (1)
  1. 10 August, 20:25
    n = 385

    The formula for the margin of error E of a confidence interval is E = z*σ/sqrt(n), where z is the value from the standard normal table for the desired confidence level, σ is the standard deviation, and n is the number of measurements.

    Since we want a 95% confidence interval, divide 95 by 2 to get 47.5. Looking up 0.475 in a standard normal table gives z = 1.96.

    We want the margin of error to be ±0.0001 g, so set z*σ/sqrt(n) = 0.0001. Substituting z = 1.96 and σ = 0.001:

    0.0001 = 1.96*0.001/sqrt(n)
    0.0001 = 0.00196/sqrt(n)

    Solve for n:

    sqrt(n) = 0.00196/0.0001 = 19.6
    n = 19.6^2 = 384.16

    Since n must be a whole number, round up: n should be at least 385 for a 95% confidence interval in which the measured mean is within 0.0001 g of the true mass.
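    A minimal sketch of the same calculation in Python, assuming SciPy is available (the hard-coded table value 1.96 would work just as well; variable names are illustrative):

        import math
        from scipy.stats import norm

        sigma = 0.001       # known standard deviation of the readings, in grams
        E = 0.0001          # desired margin of error, in grams
        confidence = 0.95

        # two-sided critical value: z such that P(-z < Z < z) = 0.95
        z = norm.ppf(1 - (1 - confidence) / 2)   # ~1.95996

        # margin of error E = z * sigma / sqrt(n)  ->  n = (z * sigma / E)**2
        n = (z * sigma / E) ** 2                  # ~384.15
        print(math.ceil(n))                       # 385: round up to a whole measurement

    Running it prints 385, matching the hand calculation above.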