Comparisons for agreement
Let’s say we have only rank-order data from two or more evaluators (people, algorithms, etc.) and we want to determine whether the evaluators agree.
Agreement here means that the results from one evaluator and another are in accord, or concordant. This is typically assessed with the non-parametric method described here when there are three or more evaluators. For a comparison of just two evaluators, consider Cohen’s Kappa or Spearman’s correlation coefficient instead, as they are more appropriate.
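For the two-evaluator case, a minimal sketch of the two named alternatives, assuming SciPy and scikit-learn are available and using made-up evaluator scores for illustration:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Two evaluators rank the same eight items (1 = best); illustrative data only.
ranks_a = np.array([1, 2, 3, 4, 5, 6, 7, 8])
ranks_b = np.array([2, 1, 4, 3, 5, 7, 6, 8])

# Spearman's correlation coefficient: how similarly the two evaluators
# order the items (+1 = identical ordering, -1 = reversed ordering).
rho, p_value = spearmanr(ranks_a, ranks_b)
print(f"Spearman's rho = {rho:.3f} (p = {p_value:.3f})")

# Cohen's Kappa: chance-corrected agreement when the two evaluators assign
# items to categories rather than ranking them.
labels_a = ["pass", "pass", "fail", "pass", "fail", "pass"]
labels_b = ["pass", "fail", "fail", "pass", "fail", "pass"]
kappa = cohen_kappa_score(labels_a, labels_b)
print(f"Cohen's kappa = {kappa:.3f}")
```

Spearman’s coefficient is the natural choice when both evaluators produce rankings; Cohen’s Kappa fits when they assign items to categories and you want agreement corrected for chance.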