The Science Journal of the American Association for Respiratory Care

2012 OPEN FORUM Abstracts

DEVELOPMENT OF AN INTER-RATER RELIABILITY TRAINING TOOL.

Jose D. Rojas, Jon O. Nilsestuen; Department of Respiratory Care, School of Health Professions, University of Texas Medical Branch, Galveston, TX

Background: The burdens of accreditation for a respiratory care program are not trivial. One hurdle that programs struggle with is CoARC accreditation Standard 3.11. CoARC revised the interpretive guidelines to state: “this process must include a comparison of student evaluations completed by clinical instructors in order to identify variability among evaluators. Statistical analysis can be used but is not required. When variability is identified, the program must have a plan of action which includes remediation, timeline, and follow-up...”. This interpretation reduced the burden, but many programs remain at a loss for how best to meet the standard. For several years we have videotaped all of our students performing competencies in a pre-clinical setting. The videos give students feedback on areas of strength and weakness and are also used to train our clinical instructors. We describe the use of these videos to measure, collect, and assess rater agreement. Agreement among raters is assessed with the intraclass correlation coefficient (ICC), and we demonstrate how that analysis can be accomplished with SPSS or Microsoft Excel.

Method: We have developed a videotape library of students performing clinical competencies in the pre-clinical setting. Raters are shown the videos and evaluate performance with a modified nine-question evaluation form that uses a five-point Likert scale. Assessment data are collected with either Blackboard (individual training) or an audience response system (group training). Once collected, the data are organized for import into SPSS or Excel. SPSS has a straightforward procedure that returns values for both the ICC single measures and the ICC average measures. In Excel, the data are analyzed with a two-way ANOVA, and the resulting mean squares are entered into a relatively simple formula for the ICC single and average measures (an illustrative sketch of this calculation follows the abstract).

Results: We implemented the video library for training clinical preceptors. Data from the training sessions were used to assess the consistency of raters (ICC) as determined by two different means: SPSS and Microsoft Excel. When we identified variability, discussion with the raters helped determine whether the raters needed further training, the grading rubric needed to be modified, or both.

Conclusion: This process allowed for the development and implementation of action plans for program improvement. We believe that this process and analysis will satisfy CoARC Standard 3.11.

Sponsored Research - None
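For readers who want to reproduce the Excel calculation described in the Method, the sketch below shows one way to compute the ICC single and average measures directly from the two-way ANOVA mean squares. It is an illustration, not the authors' tool: it assumes the two-way random-effects, absolute-agreement model (Shrout and Fleiss ICC(2,1) and ICC(2,k)); the abstract does not specify which ICC model the program uses, and the function name and example scores are hypothetical.

import numpy as np

def icc_two_way(ratings):
    # ratings: n x k matrix, one row per videotaped performance,
    # one column per rater (Likert scores 1-5).
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()

    # Two-way ANOVA sums of squares (rows = performances, columns = raters)
    ss_rows = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand_mean) ** 2).sum()
    ss_error = ((ratings - grand_mean) ** 2).sum() - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)                 # between-performances mean square
    ms_cols = ss_cols / (k - 1)                 # between-raters mean square
    ms_error = ss_error / ((n - 1) * (k - 1))   # residual mean square

    # ICC(2,1): reliability of a single rater's scores
    single = (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)
    # ICC(2,k): reliability of the k-rater average
    average = (ms_rows - ms_error) / (ms_rows + (ms_cols - ms_error) / n)
    return single, average

# Hypothetical example: 4 performances scored by 3 raters
scores = [[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 3, 2]]
print(icc_two_way(scores))

The same three mean squares can be read off the output of Excel's Analysis ToolPak tool "Anova: Two-Factor Without Replication" (which fits the one-score-per-rater-per-video layout described above) and entered into the two formulas by hand.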