OUTLINE FOR CHAPTER 9
Statistical Inferences Concerning Bivariate Correlation Coefficients
- Introduction
- The distinction between descriptive and inferential concerns about an r
- Inferences on r and the relative popularity of hypothesis testing vs. confidence intervals
- Statistical Tests Involving a Single Correlation Coefficient
- The inferential purpose
- The null hypothesis:
- The usual situation, Ho: ρ = 0
- Other possibilities
- Deciding if r is statistically significant
- Comparing r against the critical value
- Comparing p against alpha
- One-tailed and two-tailed tests on r
- Tests on specific kinds of correlations (e.g., r, rs, rpb, etc.)
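The single-coefficient test named above is usually carried out by converting r to a t statistic, t = r√(n − 2)/√(1 − r²) with n − 2 degrees of freedom. A minimal sketch (the function name and the example values r = .50, n = 30 are illustrative, not from the chapter):

```python
import math
from scipy import stats

def r_significance(r, n):
    """Two-tailed test of Ho: rho = 0 for a sample Pearson r."""
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)   # t statistic, df = n - 2
    p = 2 * stats.t.sf(abs(t), df=n - 2)             # two-tailed p-value
    return t, p

t, p = r_significance(0.50, 30)  # t ≈ 3.06; p falls below alpha = .05
```

The decision rule in the outline then follows directly: reject Ho if p < alpha, or equivalently if |t| exceeds the critical value for df = n − 2.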
- Tests on Many Correlation Coefficients (Each Treated Separately)
- Tests on the entries of a correlation matrix
- Tests on many correlation coefficients, with results presented in a passage of text
- The Bonferroni adjustment technique
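The Bonferroni adjustment guards against an inflated familywise Type I error rate when many correlations are tested separately: each of the k tests is evaluated against alpha/k rather than alpha. A minimal sketch (the p-values below are made up for illustration):

```python
def bonferroni(p_values, alpha=0.05):
    """Flag each test as significant only if p < alpha / (number of tests)."""
    k = len(p_values)
    return [p < alpha / k for p in p_values]

# Six correlations tested with a familywise alpha of .05:
# each must beat .05 / 6 ≈ .0083 to be declared significant.
flags = bonferroni([0.001, 0.020, 0.049, 0.300, 0.008, 0.075])
# → [True, False, False, False, True, False]
```

Note that p = .020 and p = .049, which would each be "significant" in isolation, fail the adjusted criterion.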
- Tests on Reliability and Validity Coefficients
- Statistically Comparing Two Correlation Coefficients
- Comparing the correlation coefficients from two different samples against each other
- Comparing rxz and ryz in one sample where there are 3 variables (X, Y, & Z)
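For the two-sample comparison above, the standard approach converts each r to Fisher's z and compares the transformed values with a normal-curve test. A minimal sketch for the independent-samples case (the example values are illustrative; the one-sample comparison of rxz and ryz requires a different, dependent-samples test because the two coefficients share the variable Z):

```python
import math
from scipy import stats

def compare_independent_rs(r1, n1, r2, n2):
    """Two-tailed z test of Ho: rho1 = rho2 for rs from two independent samples."""
    z1 = math.atanh(r1)                              # Fisher's r-to-z transformation
    z2 = math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))      # standard error of z1 - z2
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))                    # two-tailed p-value
    return z, p

z, p = compare_independent_rs(0.60, 50, 0.20, 50)  # z ≈ 2.38, p < .05
```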
- The Use of Confidence Intervals Around Correlation Coefficients
- Two possible reasons to build a CI around a sample r, only one of which involves an Ho
- The "rule" for determining whether Ho: r
= 0 should be rejected if its evaluated via a CI
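A CI around r is typically built on the Fisher-transformed scale and then converted back, since the sampling distribution of r itself is skewed for nonzero ρ. A minimal sketch (the example values r = .50, n = 30 are illustrative):

```python
import math
from scipy import stats

def r_confidence_interval(r, n, level=0.95):
    """CI for rho: transform r to Fisher's z, build the interval, transform back."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)                        # standard error on the z scale
    crit = stats.norm.ppf(1 - (1 - level) / 2)       # e.g., 1.96 for a 95% CI
    lo, hi = z - crit * se, z + crit * se
    return math.tanh(lo), math.tanh(hi)              # back to the r scale

lo, hi = r_confidence_interval(0.50, 30)  # ≈ (.17, .73)
```

The "rule" mentioned in the outline then reads off the interval: Ho: ρ = 0 is rejected at the .05 level exactly when the 95% CI excludes zero, as it does here.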
- Cautions
- Relationship strength, effect size, and power
- Two underlying assumptions:
- The notions of "linearity" and "homoscedasticity"
- Assessing the plausibility of these assumptions
- Causality and correlation
- Attenuation:
- What causes it
- Correcting for it
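Attenuation refers to the way measurement unreliability in X and/or Y pulls the observed r toward zero; the classical correction divides the observed r by the square root of the product of the two reliability coefficients. A minimal sketch (the example values are illustrative):

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Classical correction for attenuation due to measurement unreliability."""
    return r_xy / math.sqrt(rel_x * rel_y)

# An observed r of .40 with reliabilities of .80 (X) and .70 (Y)
r_corrected = disattenuate(0.40, 0.80, 0.70)  # ≈ .53
```

The corrected value estimates what the correlation would be if both variables were measured without error; it can exceed the observed r substantially when reliabilities are low.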