Ridit and exponential type scores for estimating the kappa statistic

Authors

  • Ayfer E. Yilmaz
  • Serpil Aktas, Hacettepe University

Keywords

Cohen’s kappa, exponential scores, ordinal, ridit type scores, weights.

Abstract

Cohen's kappa coefficient is a commonly used method for estimating interrater agreement for nominal and/or ordinal data; it adjusts the observed agreement for the agreement expected by chance. The weighted kappa statistic is used as an agreement index for ordinal data, where the weights quantify the degree of discrepancy between categories. The choice of this particular set of weights affects the value of kappa. The most common choices are the Cicchetti-Allison and Fleiss-Cohen weights. In this article, we discuss the use of ridit-type and exponential scores to compute kappa statistics in general.
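
Since the abstract names the linear (Cicchetti-Allison) and quadratic (Fleiss-Cohen) weighting schemes alongside ridit-type and exponential scores, a minimal sketch may help fix ideas. The code below is not the authors' implementation: the 4x4 agreement table is invented, and both the exact form of the exponential scores (here, powers of two) and the way the scores enter the weights (replacing the integer category indices in the usual linear and quadratic formulas) are assumptions made for illustration.

    import numpy as np

    def weighted_kappa(table, weights):
        """Weighted kappa for a square agreement table and a matching
        agreement-weight matrix (w_ii = 1, 0 <= w_ij <= 1)."""
        p = table / table.sum()                    # cell proportions
        row, col = p.sum(axis=1), p.sum(axis=0)    # marginal proportions
        po = (weights * p).sum()                   # weighted observed agreement
        pe = (weights * np.outer(row, col)).sum()  # weighted chance agreement
        return (po - pe) / (1.0 - pe)

    def score_weights(scores, kind="linear"):
        """Cicchetti-Allison (linear) or Fleiss-Cohen (quadratic) weights
        built from an arbitrary vector of category scores (assumption:
        scores substitute for the integer indices in the usual formulas)."""
        a = np.asarray(scores, dtype=float)
        d = np.abs(a[:, None] - a[None, :])
        if kind == "linear":
            return 1.0 - d / d.max()               # Cicchetti-Allison form
        return 1.0 - (d / d.max()) ** 2            # Fleiss-Cohen form

    def ridit_scores(table):
        """Ridit scores (Bross, 1958): cumulative proportion below a
        category plus half the proportion within it. Pooling the two
        raters' marginals by averaging is an assumption."""
        p = table / table.sum()
        m = (p.sum(axis=0) + p.sum(axis=1)) / 2.0
        return np.cumsum(m) - m / 2.0

    # Hypothetical 4x4 agreement table between two raters.
    t = np.array([[20, 5, 1, 0],
                  [4, 15, 6, 1],
                  [1, 5, 18, 4],
                  [0, 2, 5, 13]], dtype=float)

    integer = np.arange(1, 5)        # usual integer category scores
    ridit = ridit_scores(t)          # ridit-type scores
    expo = 2.0 ** np.arange(4)       # one possible exponential-type score set

    for name, s in [("integer", integer), ("ridit", ridit), ("exponential", expo)]:
        kl = weighted_kappa(t, score_weights(s, "linear"))
        kq = weighted_kappa(t, score_weights(s, "quadratic"))
        print(f"{name:12s} linear: {kl:.3f}  quadratic: {kq:.3f}")

Because the weights depend on the scores only through scaled pairwise distances, any strictly increasing score set yields a valid weight matrix; what varies, and what the article examines, is how the choice of scores moves the value of kappa.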

References

Agresti, A. (1988). A model for agreement between ratings on an ordinal scale. Biometrics, 44(2):539–548.

Bagheban, A.A. & Zayeri, F. (2010). A generalization of the uniform association model for assessing rater agreement in ordinal scales. Journal of Applied Statistics, 37(8):1265–1273, DOI: 10.1080/02664760903012666.

Bross, I.D.J. (1958). How to use ridit analysis. Biometrics, 14:18–38.

Cicchetti, D. & Allison, T. (1971). A new procedure for assessing reliability of scoring EEG sleep recordings. American Journal of EEG Technology, 11:101–109.

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37–46.

Cohen, J. (1968). Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213–220.

Fleiss, J.L. & Cohen, J. (1973). The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement, 33:613–619.

Fleiss, J.L., Cohen, J. & Everitt, B.S. (1969). Large sample standard errors of kappa and weighted kappa. Psychological Bulletin, 72:323–327.

Iki, K., Tahata, K. & Tomizawa, S. (2009). Ridit score type quasi-symmetry and decomposition of symmetry for square contingency tables with ordered categories. Austrian Journal of Statistics, 38(3):183–192.

Landis, J.R. & Koch, G.G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174.

Saberi, Z. & Ganjali, M.A. (2013). A conditional Bayesian approach for testing independence in two-way contingency tables. Kuwait Journal of Science, 40(2):103–113.

Shoukri, M.M. (2004). Measures of Interobserver Agreement. Chapman & Hall/CRC, Florida.

Tanner, M.A. & Young, M.A. (1985). Modeling agreement among raters. Journal of the American Statistical Association, 80(389):175–180.

Warrens, M.J. (2012). Cohen's quadratically weighted kappa is higher than linearly weighted kappa for tridiagonal agreement tables. Statistical Methodology, 9:440–444, DOI: 10.1016/j.stamet.2011.08.006.

Warrens, M.J. (2013). Weighted kappas for 3×3 tables. Journal of Probability and Statistics, DOI: 10.1155/2013/325831.

Yang, J. (2007). Measure of agreement for categorical data. Ph.D. thesis, The Pennsylvania State University, The Graduate School, Department of Statistics, USA.

Published

24-01-2018