Quiz Over Chapter 9 of the
6th Edition
Statistical Inferences Concerning
Bivariate Correlation Coefficients
Introduction
 (T/F) The notions of "sample" & "population"
are irrelevant when dealing inferentially with correlation.
 (T/F) Researchers usually build CIs around their sample values
of r rather than deal with null hypotheses.
Statistical Tests Involving a Single Correlation Coefficient
 Do researchers very often make inferences based upon a single correlation
that comes from a single sample?
 (T/F) The "direction" of the researcher's inference
moves from the population to the sample.
 What pinpoint number is usually in the null hypothesis?
 Does the typical researcher indicate, either in words or symbols,
the null hypothesis that he/she tested?
 Look at Excerpt 9.5. Express in symbols the null hypothesis that
most likely was tested . . . and rejected.
 (T/F) The sample value of r, computed from the collected data,
typically serves as the calculated value.
 If r is compared against a critical value, Ho will be
rejected if the former is __ (smaller/larger) than the latter.
 (Yes/No) Do researchers usually include the critical value
in their research report?
 Can tests on correlation coefficients be conducted in a one-tailed
fashion?
 (T/F) Inferential tests can be done on Pearson & Spearman
correlations but not on biserial or point-biserial correlations.
 (T/F) If a researcher tests a correlation coefficient but does not
indicate the type of correlation, you should guess that it was
Pearson's r.
Tests on Many Correlation Coefficients (Each of Which is Treated Separately)
 Do researchers very often set up and test more than 1 correlational
null hypothesis in the same study?
 (T/F) If two or more rs are tested in the same study, the null
hypothesis will likely be the same in all tests.
 Look at Excerpt 9.12. If there had been 8 coping strategies rather than 6,
how many null hypotheses would have been tested in this excerpt's table?
 If a researcher computes all possible bivariate correlations among
5 variables, what would the Bonferroni-corrected alpha level be if
the researcher wants to keep Type I error risk at 5% for the set of
tests being conducted?
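The arithmetic behind that question can be sketched in a few lines of Python (a minimal illustration, not part of the quiz itself): the number of pairwise correlations among k variables is k-choose-2, and the Bonferroni correction divides the familywise alpha by that count.

```python
from math import comb

def bonferroni_alpha(n_vars: int, familywise_alpha: float = 0.05) -> float:
    """Per-test alpha after a Bonferroni correction covering all
    possible bivariate correlations among n_vars variables."""
    n_tests = comb(n_vars, 2)  # number of distinct variable pairs
    return familywise_alpha / n_tests

# With 5 variables there are 10 correlations, so .05 / 10 = .005
print(bonferroni_alpha(5))
```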
Tests on Reliability and Validity Coefficients
 Can a researcher set up and test a null hypothesis concerning a reliability
or validity coefficient?
 Look at Excerpt 9.16. In order for the GMAT to have accounted for 80% of the variability
among the final MBA GPAs, how high would the correlation have needed to be?
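As a reminder of the relationship that question rests on: r squared gives the proportion of variance accounted for, so the correlation needed for a given proportion is its square root. A quick sketch:

```python
from math import sqrt

# r-squared is the proportion of variance accounted for,
# so the r needed to account for 80% is the square root of .80
r_needed = sqrt(0.80)
print(round(r_needed, 3))  # about .894
```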
Statistically Comparing Two Correlation Coefficients
 (T/F) If a researcher inferentially compares two correlations, there
might be 1 group involved or 2 groups.
 If the correlation between height and weight for a sample of men
is compared with the correlation between height and weight in a sample
of women, how many inferences would be made to the 2 populations?
The Use of Confidence Intervals Around Correlation Coefficients
 Which is more popular: setting up and testing a correlational Ho
or building a CI around the sample value of r?
 If a researcher discovered that r = .13 and that the 95% CI ran from .06 to .20,
would he/she claim p < .05?
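One way to see the link between a CI that excludes zero and p < .05 is the Fisher-z interval for a correlation. The sketch below assumes bivariate normality, and the sample size is a hypothetical value chosen only so that the interval lands near .06 to .20:

```python
from math import atanh, tanh, sqrt

def r_confidence_interval(r: float, n: int, z_crit: float = 1.96):
    """Approximate 95% CI for a correlation via Fisher's z transform
    (assumes bivariate normality; n is the sample size)."""
    z = atanh(r)               # transform r to Fisher's z
    se = 1 / sqrt(n - 3)       # standard error of z
    lo = tanh(z - z_crit * se) # transform the limits back to r units
    hi = tanh(z + z_crit * se)
    return lo, hi

# n = 760 is hypothetical; with r = .13 it yields roughly .06 to .20.
# Because the interval excludes zero, a two-tailed test gives p < .05.
print(r_confidence_interval(0.13, 760))
```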
Cautions
 Can a researcher's r turn out to be close to zero and yet still be
significantly different from zero?
 Which of these would constitute better evidence that there is, in
the population, a strong relationship between the two variables
that a researcher has measured and found to be significantly related:
 p < .0001
 r² = .70
 n = 10,000
 Do many researchers concern themselves with the notions of "power"
and "effect size" when testing their correlations?
 If a correlation coefficient is found to be statistically significant,
and if a power analysis is then conducted, is it fair to assume that
a "strong" relationship exists if the statistical test is shown to have
had "high" power?
 Do many folks concern themselves with the notions of "linearity"
& "homoscedasticity" when testing their correlations?
 Which of these terms is a fairly good synonym for the term "homoscedasticity"?
 Equal means
 Equal variances
 Equal correlations
 Equal variables
 (T/F) If a bivariate correlation coefficient has been found
to be statistically significant with p<.05 (or better yet, with p<.01),
the researcher can legitimately infer that a causal relationship exists
between the 2 variables.
 Attenuation causes r to ____ (underestimate/overestimate) ρ,
the magnitude of the correlation in the population.
 What causes attenuation?
 an n that's too small
 measurement errors
 inadequate statistical power
 Who is connected to the procedure that goes by the name "r-to-z transformation"?
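For reference, the transformation named in that question is z = 0.5 · ln((1 + r) / (1 − r)), which a few lines of Python can compute (math.atanh is the same function and serves as a cross-check):

```python
from math import atanh, log

def fisher_r_to_z(r: float) -> float:
    """Fisher's r-to-z transformation: z = 0.5 * ln((1+r)/(1-r))."""
    return 0.5 * log((1 + r) / (1 - r))

print(round(fisher_r_to_z(0.5), 4))  # ~0.5493, same as atanh(0.5)
```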
These Questions are Supposed to be a Bit More Challenging
 Look at Excerpt 9.18. If the value of the second r had
turned out equal to .26 (rather than equal to .22), how many of the
3 results would have been significant?
 It's a good guess to think that the study from which Excerpt 9.3
was taken involved a ___ (small/large) sample size.
 In Excerpt 9.4, an r of .13 turned out to be nonsignificant. In
Excerpt 9.12, an r of .14 turned
out to be significant (with p < .001). Assuming that both of these
rs were of the Pearson product-moment variety, how could the results be so
different when the sample correlation coefficients are almost identical?
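The usual resolution involves the sample size, since the test statistic for a Pearson r grows with n. A sketch of that relationship, using hypothetical sample sizes for illustration:

```python
from math import sqrt

def t_for_r(r: float, n: int) -> float:
    """t statistic for testing H0: rho = 0 with a Pearson r."""
    return r * sqrt(n - 2) / sqrt(1 - r**2)

# Nearly identical rs can differ sharply in significance when the
# sample sizes differ (both ns below are hypothetical):
print(round(t_for_r(0.13, 100), 2))   # small n -> modest t, about 1.3
print(round(t_for_r(0.14, 2000), 2))  # large n -> much larger t
```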
