The Science Journal of the American Association for Respiratory Care

2009 OPEN FORUM Abstracts

A NOVEL APPROACH TO MEASURING INTERRATER RELIABILITY AMONG CLINICAL INSTRUCTORS USING EVALUATION DATA FROM AN ONLINE STUDENT RECORD DATABASE

Jonathan B. Waugh1, Stephen P. Fracek2; 1Clinical and Diagnostic Sciences, University of Alabama at Birmingham, Birmingham, AL; 2DataArc, LLC, League City, TX

INTRODUCTION: Accreditation guidelines of the Committee on Accreditation for Respiratory Care (CoARC) specify that programs should be able to demonstrate interrater reliability among the individuals who evaluate students, including in the clinical setting. The online student record database DataArc includes a clinical affective evaluation (CAE) used to survey student performance during clinical rotations and a post-graduate employer survey (ES), both compatible with the CoARC specifications. The CAE and ES share 7 nearly identical questions, allowing assessments of students made by multiple clinical instructors to be compared with an ES completed by an employer approximately six months after graduation. The purpose of our investigation was to examine how clinical instructor ratings for these affective qualities compared with employer ratings.

METHODS: De-identified survey data for 17 students were extracted from our clinical record database and formatted for comparative analysis using Pearson correlation and a two-tailed t-test. The ES for each student (10 students had 1 ES, 7 had 2) was compared with that student's CAEs (15 to 20 CAEs per student).

RESULTS: The table below shows that several of the seven question pairs have moderately positive correlations (0.4-0.7).

CONCLUSIONS: This comparison may offer another way to document interrater reliability for accreditation purposes. While performance of procedural details may reasonably be expected to change over a period of months to a year, affective characteristics are typically more stable; comparing ratings of student affective behaviors during the program with ratings of their performance after graduation could therefore help validate the clinical ratings. This validation is strengthened by the fact that the two groups of raters have different vested interests with respect to the students (instructors versus employers). Sponsored Research - None

[Table: Pearson correlations for the seven paired CAE/ES questions; values not reproduced.] *Correlation is significant at the 0.05 alpha level (2-tailed).
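The abstract does not include the analysis code; as a rough sketch of the comparison described in METHODS, the per-question Pearson correlations and two-tailed paired t-tests could be computed along the following lines. The data layout, variable names, and simulated scores here are hypothetical, not the study data.

    import numpy as np
    from scipy.stats import pearsonr, ttest_rel

    # Hypothetical layout: one mean CAE score per student per question
    # (averaged over that student's 15-20 instructor evaluations) and one
    # ES score per student per question (averaged when a student has 2 ES).
    rng = np.random.default_rng(0)
    n_students, n_questions = 17, 7
    cae = rng.uniform(3.0, 5.0, size=(n_students, n_questions))
    es = np.clip(cae + rng.normal(0.0, 0.4, size=cae.shape), 1.0, 5.0)

    for q in range(n_questions):
        r, r_p = pearsonr(cae[:, q], es[:, q])   # Pearson r with two-tailed p
        t, t_p = ttest_rel(cae[:, q], es[:, q])  # paired two-tailed t-test
        flag = "*" if r_p < 0.05 else ""
        print(f"Q{q + 1}: r = {r:+.2f}{flag}, t = {t:+.2f} (p = {t_p:.3f})")

Averaging each student's 15-20 instructor evaluations before correlating, as assumed above, is one plausible reading of the comparison; the abstract itself does not state how multiple CAEs per student were aggregated.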
