This function generates an nEvents x k scoring matrix of simulated ratings.
Usage
simulateRatingMatrix(
  nLevels,
  k,
  k_per_event = 2,
  agree,
  nEvents = 100,
  response.probs = rep(1/nLevels, nLevels)
)
Arguments
- nLevels: the number of possible outcomes for each rating.
- k: the total number of available raters.
- k_per_event: the number of raters who score each scoring event (see the sketch after this list).
- agree: the proportion of time the raters agree (e.g. 0.6 for 60% agreement). Note that the actual agreement of the simulated matrix will vary from this value (see sample).
- nEvents: the number of rating events within each matrix.
- response.probs: probability weights for the distribution of scores. By default, each level has an equal probability of being selected. This allows for situations where some responses are more common than others (e.g. 50% of students get a 3, 30% get a 2, and 20% get a 1); see the final example below. This is independent of the agree parameter.
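A minimal sketch, not taken from the package's documentation, of how k and k_per_event interact: when more raters are available than are assigned to each event, each row of the returned matrix should contain scores from only k_per_event raters, with the unassigned raters presumably recorded as NA. The values below are purely illustrative.

set.seed(2112)
m <- simulateRatingMatrix(nLevels = 3, k = 6, k_per_event = 2,
                          agree = 0.6, nEvents = 10)
dim(m)              # 10 events (rows) by 6 raters (columns)
rowSums(!is.na(m))  # assumption: each event is scored by exactly k_per_event raters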
Examples
test <- simulateRatingMatrix(nLevels = 3, k = 2, agree = 0.6, nEvents = 100)
psych::ICC(test)
#> boundary (singular) fit: see help('isSingular')
#> Call: psych::ICC(x = test)
#> 
#> Intraclass correlation coefficients 
#>                          type  ICC F df1 df2       p lower bound upper bound
#> Single_raters_absolute   ICC1 0.66 5  99 100 1.4e-14        0.54        0.76
#> Single_random_raters     ICC2 0.66 5  99  99 1.8e-14        0.54        0.76
#> Single_fixed_raters      ICC3 0.66 5  99  99 1.8e-14        0.54        0.76
#> Average_raters_absolute ICC1k 0.80 5  99 100 1.4e-14        0.70        0.86
#> Average_random_raters   ICC2k 0.80 5  99  99 1.8e-14        0.70        0.86
#> Average_fixed_raters    ICC3k 0.80 5  99  99 1.8e-14        0.70        0.86
#> 
#>  Number of subjects = 100     Number of Judges =  2
#> See the help file for a discussion of the other 4 McGraw and Wong estimates,
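A further sketch, assuming the arguments behave as described above, showing unequal response.probs and that the observed agreement only approximates the requested agree value; the exact numbers will differ from run to run.

set.seed(2112)
test2 <- simulateRatingMatrix(nLevels = 3, k = 2, agree = 0.6, nEvents = 100,
                              response.probs = c(0.2, 0.3, 0.5))
mean(test2[, 1] == test2[, 2])    # observed agreement; varies around the requested 0.6
prop.table(table(unlist(test2)))  # empirical score distribution, roughly c(0.2, 0.3, 0.5)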