2001 OPEN FORUM Abstracts
THE RELIABILITY OF THE WRITTEN REGISTRY SELF-ASSESSMENT EXAMINATION FOR A STUDENT POPULATION
Linda Van Scoder, EdD, RRT, Deborah Cullen, EdD, RRT, FAARC, Indiana University; Krzysztof Podgorski, PhD, Derek Elmerick, MS, Purdue University, Indianapolis, Indiana
Background Reliability, the degree of dependability with which an instrument measures an attribute, is a necessary component of test validity. Unreliable instruments introduce error into the testing process and may result in invalid conclusions being drawn from test scores. Many respiratory therapy educational programs use the self-assessment examinations (SAEs) offered by the National Board for Respiratory Care as a summative measure of their students' competence prior to program graduation. The purpose of this study was to establish the reliability of the written registry (WR) SAE, a 100-question multiple-choice test, for a student population.
Method We selected a convenience sample of 58 advanced-level respiratory therapy students enrolled in the last semester of programs in four different states. The sample comprised 17 (29.3%) baccalaureate degree students and 41 (70.7%) associate degree students. Each student completed the web-based Form B of the WR SAE while monitored by an instructor. Scores were collected and analyzed using S-Plus statistical software. We computed Cronbach's alpha, an index of internal reliability, for the WR SAE as a whole by splitting the 100 questions into 17 groups, each containing a proportional representation of questions from the subsections of the test (Clinical Data, 17 questions; Equipment, 20 questions; Therapeutic Procedures, 63 questions). This step was necessary because subsection items are not evenly distributed throughout the test, and it preserved the parallel structure of the part-scores used to calculate the coefficient. The reliability of each subsection was computed using the split-half (even-odd) technique, which was appropriate because each subsection presumably covered the same content. The institutional review board approved the study.
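The two reliability indices described above are straightforward to compute. The sketch below is our own illustration in Python/NumPy rather than the study's S-Plus code; the part-score grouping and any data passed in are assumptions, not the study's data. Cronbach's alpha is computed from an examinee-by-part score matrix, and the split-half estimate correlates odd- and even-item half-scores with the Spearman-Brown correction.

```python
import numpy as np

def cronbach_alpha(part_scores):
    """Cronbach's alpha from an (n_examinees, k_parts) score matrix.

    alpha = k/(k-1) * (1 - sum of part variances / variance of total score)
    Each column is one of the k parallel parts (e.g., 17 item groups).
    """
    part_scores = np.asarray(part_scores, dtype=float)
    k = part_scores.shape[1]
    part_vars = part_scores.var(axis=0, ddof=1)          # variance of each part
    total_var = part_scores.sum(axis=1).var(ddof=1)      # variance of total score
    return k / (k - 1) * (1.0 - part_vars.sum() / total_var)

def split_half_reliability(item_scores):
    """Odd-even split-half reliability with the Spearman-Brown correction.

    item_scores: (n_examinees, n_items) matrix of item scores (e.g., 0/1).
    """
    item_scores = np.asarray(item_scores, dtype=float)
    half_a = item_scores[:, 0::2].sum(axis=1)            # odd-numbered items
    half_b = item_scores[:, 1::2].sum(axis=1)            # even-numbered items
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2.0 * r / (1.0 + r)                           # Spearman-Brown step-up
```

For example, `cronbach_alpha` would be applied to a 58 × 17 matrix of part-scores for the whole test, and `split_half_reliability` to the item matrix of each subsection separately.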
Results The alpha coefficient for the test as a whole was 0.79. The alpha coefficients for the three subsections were: Clinical Data, 0.32; Equipment, 0.20; and Therapeutic Procedures, 0.72.
Discussion Alpha coefficients of 0.85 or higher are usually considered to be evidence of good reliability for competence tests.1 The reliability of the WR SAE for the student group approaches that standard. Using the traditional method for computing Cronbach's alpha (i.e., not splitting the items into 17 parallel subgroups) would have resulted in a slightly lower coefficient (0.77). Our findings are limited to one form of the WR SAE, and the results from our convenience sample may not be generalizable to the entire student population.
Conclusion The WR SAE approaches good reliability for a student sample. This indicates that the test may meet one of the criteria for validity.
1 Swanson DB, Norcini JJ, Gross LJ. Assessment of clinical competence. Assessment & Evaluation in Higher Education. 1987;12(3):220-46.