Calculates intraclass correlations (ICC) for simulated samples of raters and evaluations.
Source: R/simulateIRR.R
This function wraps [IRRsim::simulateRatingMatrix()] to estimate inter-rater reliability statistics across many simulated rating matrices. It returns a `list` with all the runs, which can be converted to a data frame using the `as.data.frame()` function. By default the simulations run in parallel using the `parallel` package, with one less than the number of available cores. Set `parallel = FALSE` to run the simulations on a single thread.
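For instance, a minimal sketch of the two execution modes (argument values are illustrative, and the data-frame conversion is assumed to yield one row per simulation):

library(IRRsim)
# Default: runs in parallel on parallel::detectCores() - 1 cores
sims <- simulateIRR(nRaters = 3, nSamples = 10)
# Force a single-threaded run (needed for set.seed() reproducibility; see Details)
sims <- simulateIRR(nRaters = 3, nSamples = 10, parallel = FALSE)
sims.df <- as.data.frame(sims)  # expected: one row of statistics per simulated matrix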
Usage
simulateIRR(
  nRaters = c(2),
  nRatersPerEvent = nRaters,
  nLevels = 4,
  nEvents = 100,
  nSamples = 100,
  agreements = seq(0.1, 0.9, by = 0.1),
  response.probs = rep(1/nLevels, nLevels),
  showShinyProgress = FALSE,
  showTextProgress = !showShinyProgress,
  numCores = (parallel::detectCores() - 1),
  parallel = (numCores > 1),
  ...
)
Arguments
- nRaters
the number of available raters.
- nRatersPerEvent
the number of ratings for each scoring event.
- nLevels
the number of possible outcomes for each rating.
- nEvents
the number of rating events within each matrix.
- nSamples
the number of sample matrices to estimate at each agreement level.
- agreements
vector of percent agreements to simulate.
- response.probs
probability weights for the distribution of scores. See [IRRsim::simulateRatingMatrix()] for more information; a usage sketch follows this argument list.
- showShinyProgress
show a Shiny progress bar as simulations are generated.
- showTextProgress
show a text progress bar as simulations are generated.
- numCores
number of cores to use if the simulation is run in parallel.
- parallel
whether to simulate the data using multiple cores.
- ...
currently not used.
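As a sketch of the `response.probs` argument described above, the following weights the four score levels unevenly (the specific weights are illustrative):

# Draw scores from a skewed distribution over nLevels = 4
sims.skewed <- simulateIRR(nLevels = 4,
                           nRaters = 3,
                           nSamples = 10,
                           response.probs = c(0.6, 0.2, 0.1, 0.1),
                           parallel = FALSE,
                           showTextProgress = FALSE)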
Value
a list of length `nSamples * length(nRaters) * length(agreements)`.
Each element of the list represents one simulation with the following values (a brief access sketch follows the list):
- k
the number of raters used in the simulation.
- simAgreement
the calculated percent agreement from the sample.
- agreement
the specified percent agreement used for drawing the random sample.
- skewness
skewness of all responses.
- kurtosis
kurtosis of all responses.
- MaxResponseDiff
the difference between the most and least frequent responses.
- ICC1
ICC1 as described in Shrout and Fleiss (1979).
- ICC2
ICC2 as described in Shrout and Fleiss (1979).
- ICC3
ICC3 as described in Shrout and Fleiss (1979).
- ICC1k
ICC1k as described in Shrout and Fleiss (1979).
- ICC2k
ICC2k as described in Shrout and Fleiss (1979).
- ICC3k
ICC3k as described in Shrout and Fleiss (1979).
- Fleiss_Kappa
Fleiss' Kappa for m raters as described in Fleiss (1971).
- Cohen_Kappa
Cohen's Kappa as calculated by psych::cohen.kappa. Note that this is calculated for all data sets even though it is only appropriate for two raters.
- data
the simulated rating matrix.
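A brief access sketch, assuming `sims` is an object returned by `simulateIRR()` (element names follow the list above):

sims[[1]]$simAgreement  # realized percent agreement in the first sample
sims[[1]]$ICC1          # ICC1 for the first sample
head(sims[[1]]$data)    # the simulated rating matrix itself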
Details
For reproducibility using the [base::set.seed()] function, be sure to set `parallel = FALSE`; a seed set in the main session does not control the random-number streams of parallel workers.
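For instance, a sketch of a reproducible run under that constraint (the seed value is arbitrary):

set.seed(2112)
sims.a <- simulateIRR(nRaters = 2, nSamples = 10,
                      parallel = FALSE, showTextProgress = FALSE)
set.seed(2112)
sims.b <- simulateIRR(nRaters = 2, nSamples = 10,
                      parallel = FALSE, showTextProgress = FALSE)
identical(as.data.frame(sims.a), as.data.frame(sims.b))  # expected: TRUE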
Examples
icctest <- simulateIRR(nLevels = 3,
                       nRaters = 2,
                       nSamples = 10,
                       parallel = FALSE,
                       showTextProgress = FALSE)
summary(icctest)
#> Prediction table for 2 raters.
#> Agreement skewness kurtosis MaxResponseDiff ICC1 ICC2
#> 1 0.1 NA NA NA NA NA
#> 2 0.2 NA NA NA NA NA
#> 3 0.3 NA NA NA NA NA
#> 4 0.4 0.001220400 -1.501746 0.04146644 0.09974772 0.1000455
#> 5 0.5 0.032119335 -1.509715 0.03813980 0.24919501 0.2499546
#> 6 0.6 0.021263320 -1.513135 0.04069260 0.39011432 0.3899606
#> 7 0.7 -0.002677723 -1.487019 0.04617624 0.53965679 0.5394767
#> 8 0.8 -0.015539897 -1.460998 0.05761178 0.69967584 0.6998141
#> 9 0.9 0.017262929 -1.436328 0.06952314 0.84634494 0.8464201
#> ICC3 ICC1k ICC2k ICC3k Cohen_Kappa
#> 1 NA NA NA NA NA
#> 2 NA NA NA NA NA
#> 3 NA NA NA NA NA
#> 4 0.1003667 0.1630092 0.1631078 0.1636692 0.1013587
#> 5 0.2501391 0.3920801 0.3933090 0.3936315 0.2498210
#> 6 0.3896946 0.5585984 0.5584192 0.5581995 0.3982911
#> 7 0.5391603 0.6982505 0.6980749 0.6977855 0.5482248
#> 8 0.7000514 0.8218083 0.8219379 0.8221577 0.6982775
#> 9 0.8464353 0.9173147 0.9173782 0.9174450 0.8484651
#>
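As noted in the description, the list returned above can be converted to a data frame for further analysis (output not shown; the columns correspond to the per-simulation values listed under Value):

icctest.df <- as.data.frame(icctest)
head(icctest.df)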