Methodological Considerations for Quantitative Content Analysis of Online Interactions

other empirical results that provide corroborating evidence for the construct we are measuring. For example, in the earlier section (Table 6), we presented an example of Zhu's (2006) study, which correlated network properties with the results from content analysis.

Reliability

Besides validity, another key concept of measurement is the reliability of scores. Following classical test theory, or measurement theory, reliability is the extent to which observed scores are close to the true scores. Although it is operationally impossible to know the true scores, we do know that reliability is inversely related to the errors of measurement. Operationally, when we make several measurements (e.g., repeated measurements), the higher the consistency among the scores, the lower the error. Thus, reliability coefficients are often used to quantify the "consistency (or inconsistency) among several error-prone measurements" (Feldt & Brennan, p. 105).

Krippendorff (2004) suggests checking for coder stability before establishing intercoder reliability. He further specified three conditions for generating reliability data: provide specific coding instructions, set criteria for selecting coders, and ensure that the coders work independently. Rourke, Anderson, Garrison, and Archer (2001) suggest that beyond intracoder reliability and intercoder reliability, the reliability of coding schemes could be established through replicability across research studies. However, most research studies report only intercoder reliability, which indicates the degree of agreement or correspondence between two or more coders. There are two main methods of deriving intercoder reliability: agreement or covariation.
In essence, the agreement method determines whether the coders assign the same values to a variable, whereas the covariation method determines whether there is correspondence between the scores assigned by the coders (that is, whether the scores go up or down together). Table 7 summarizes the applications of various intercoder reliability indices.

Table 7. Intercoder reliability indices

Percent agreement (agreement)
· Data / range: nominal data; range from .00 to 1.00
· Strengths: easy to compute
· Limitations: crude agreement; measures agreement only to the exact assigned values; fails to account for chance agreement; can over-estimate reliability
· Criteria: above 70% agreement is reliable (source: Frey, Botan, & Kreps, 2000)

Cohen's kappa (agreement, corrected for chance agreement)
· Data / range: nominal data; range from .00 to 1.00
· Strengths: an improvement over Scott's pi, taking into account how the coders distribute scores across the coding categories
· Limitations: same as Scott's pi; too conservative; limited to two coders
· Criteria: above .75, excellent; .40 to .75, fair; below .40, poor (source: Banerjee et al., 1999)

Pearson correlation coefficient, r (covariation)
· Data / range: interval or ratio data; range from -1.00 to 1.00
· Strengths: does not require the precise agreement needed by the indices above; works for interval or ratio data
· Limitations: inherently standardizes the scores
· Criteria: not available
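To make the distinction between the agreement and covariation methods concrete, the following is a minimal sketch in plain Python, using hypothetical data (the coder labels, category names, and scores are invented for illustration, not drawn from any study). Two coders assign nominal codes to the same ten messages; percent agreement and Cohen's kappa illustrate the agreement method, while Pearson's r, computed on hypothetical interval ratings, illustrates the covariation method.

```python
# Hypothetical example of the intercoder reliability indices in Table 7.
from collections import Counter
import math

# Two coders assign one of three nominal codes to each of ten messages.
coder_a = ["social", "cognitive", "cognitive", "social", "teaching",
           "cognitive", "social", "cognitive", "teaching", "cognitive"]
coder_b = ["social", "cognitive", "teaching", "social", "teaching",
           "cognitive", "social", "social", "teaching", "cognitive"]
n = len(coder_a)

# Percent agreement: the share of units coded identically.
p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Cohen's kappa corrects observed agreement for the agreement expected
# by chance, based on each coder's own distribution over the categories.
dist_a, dist_b = Counter(coder_a), Counter(coder_b)
p_e = sum((dist_a[c] / n) * (dist_b[c] / n) for c in dist_a)
kappa = (p_o - p_e) / (1 - p_e)

# Pearson's r: covariation between interval scores (e.g., each coder
# rates each message on a 1-5 scale). A high r only requires that the
# scores rise and fall together, not that they match exactly -- here
# coder B scores consistently about one point higher than coder A.
scores_a = [1, 4, 3, 2, 5, 4, 1, 3, 5, 4]
scores_b = [2, 5, 4, 3, 5, 5, 2, 4, 5, 5]
mean_a, mean_b = sum(scores_a) / n, sum(scores_b) / n
cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(scores_a, scores_b))
sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in scores_a))
sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in scores_b))
r = cov / (sd_a * sd_b)

print(f"percent agreement = {p_o:.2f}")  # 0.80
print(f"Cohen's kappa     = {kappa:.2f}")  # 0.70
print(f"Pearson's r       = {r:.2f}")  # 0.97
```

Note how the three indices diverge on the same data: the two coders agree exactly on 80% of the nominal codes, kappa drops to about .70 once chance agreement is removed, and the interval ratings correlate at about .97 even though coder B's scores are systematically higher, illustrating why covariation does not require precise agreement.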