Statistical Inferences Concerning Bivariate Correlation Coefficients

  1. Introduction
    1. The distinction between descriptive/inferential concerns about an r
    2. Inferences on r and the relative popularity of hypothesis testing vs. confidence intervals
  2. Statistical Tests Involving a Single Correlation Coefficient
    1. The inferential purpose
    2. The null hypothesis:
      1. The usual situation, Ho: ρ = 0
      2. Other possibilities
    3. Deciding if r is statistically significant
      1. Comparing r against the critical value
      2. Comparing p against alpha
    4. One-tailed and two-tailed tests on r
    5. Tests on specific kinds of correlations (e.g., Pearson's r, Spearman's rs, the point-biserial rpb)
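The usual test in item 2 converts the sample r into a t statistic with n − 2 degrees of freedom, which is then compared against a tabled critical value. A minimal Python sketch (the function name and the example numbers are illustrative, not from the text):

```python
import math

def t_for_r(r, n):
    """t statistic for testing Ho: rho = 0, with df = n - 2."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Example: r = .50 computed on n = 20 pairs of scores
t = t_for_r(0.50, 20)   # t is about 2.45 with df = 18
# The two-tailed critical t for df = 18 at alpha = .05 is 2.101,
# so this r would be declared statistically significant.
```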
  3. Tests on Many Correlation Coefficients (Each Treated Separately)
    1. Tests on the entries of a correlation matrix
    2. Tests on many correlation coefficients, with results presented in a passage of text
    3. The Bonferroni adjustment technique
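The Bonferroni adjustment in item 3 simply divides the desired overall alpha by the number of separate tests, so each correlation must clear a stricter per-test cutoff. A small sketch, with illustrative p-values that are not from the text:

```python
def bonferroni_decisions(p_values, alpha=0.05):
    """Declare a test significant only if its p beats alpha / k,
    where k is the number of separate tests being run."""
    cutoff = alpha / len(p_values)
    return [p < cutoff for p in p_values]

# Three correlations tested with an overall alpha of .05:
# each needs p < .05 / 3 (about .0167) to be significant.
decisions = bonferroni_decisions([0.004, 0.030, 0.200])
```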
  4. Tests on Reliability and Validity Coefficients
  5. Statistically Comparing Two Correlation Coefficients
    1. Comparing the correlation coefficients from two different samples against each other
    2. Comparing rxz and ryz in one sample where there are 3 variables (X, Y, & Z)
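For the two-sample comparison in item 1, a standard approach (assumed here; the outline does not name it) applies Fisher's r-to-z transformation to each sample r and forms a z test. Function name and numbers are illustrative:

```python
import math

def z_for_two_rs(r1, n1, r2, n2):
    """z test for Ho: rho1 = rho2 in two independent samples,
    using Fisher's r-to-z transformation (atanh)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

# r = .60 (n = 50) in one sample vs. r = .30 (n = 50) in another
z = z_for_two_rs(0.60, 50, 0.30, 50)   # about 1.86, short of 1.96
```

Comparing rxz and ryz within a single sample (item 2) requires a different, dependent-samples procedure, since the two rs share the variable Z.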
  6. The Use of Confidence Intervals Around Correlation Coefficients
    1. Two possible reasons to build a CI around a sample r, only one of which involves an Ho
    2. The "rule" for determining whether Ho: r = 0 should be rejected if its evaluated via a CI
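A common way to build the interval in item 1 (assumed here, not stated in the outline) again uses Fisher's r-to-z transformation: form the interval on the z scale, then convert the endpoints back to r. Illustrative sketch:

```python
import math

def ci_for_r(r, n, z_crit=1.96):
    """Approximate 95% CI for rho via Fisher's r-to-z transformation."""
    z = math.atanh(r)
    half = z_crit / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)

lo, hi = ci_for_r(0.50, 30)   # roughly (.17, .73)
# Because 0 lies outside the interval, Ho: rho = 0 would be rejected,
# illustrating the "rule" referred to in item 2.
```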
  7. Cautions
    1. Relationship strength, effect size, and power
    2. Two underlying assumptions:
      1. The notions of "linearity" and "homoscedasticity"
      2. Assessing the plausibility of these assumptions
    3. Causality and correlation
    4. Attenuation:
      1. What causes it
      2. Correcting for it
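The standard correction for attenuation (Spearman's formula, assumed here as the method item 2 refers to) divides the observed correlation by the square root of the product of the two measures' reliabilities. Numbers are illustrative:

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation: estimates the correlation
    that would be observed if X and Y were measured without error."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Observed r = .40 with reliabilities of .70 and .80
r_true = disattenuate(0.40, 0.70, 0.80)   # about .53
```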

Copyright © 2012 Schuyler W. Huck
All rights reserved.


Site URL: www.readingstats.com
