The Science Journal of the American Association for Respiratory Care

2001 OPEN FORUM Abstracts


Deborah Cullen EdD RRT FAARC, Linda Van Scoder EdD RRT, Indiana University; Krzysztof Podgorski PhD, Derek Elmerick MS, Purdue University, Indianapolis, Indiana

The reliability of a test is important to document since it demonstrates the extent to which variations in scores represent systematic differences among examinees rather than testing error.1 Reliability for credentialing exams is calculated to determine the reproducibility of the test.

Background: The reliability quotient for the respiratory Clinical Simulation Examination (CSE) has not been reported. The CSE contains information gathering (IG) and decision-making (DM) sections. Historically, the CSE is a unique form of testing with a high re-test rate as well as small differences between first-time and repeater pass rates. We questioned whether the CSE is stable and free from testing error. The purpose of this study was to determine the reliability of a representative CSE.

Methods: A sample of 56 advanced-level respiratory therapy students was administered the 1999 web version of the self-assessment examination (SAE) CSE form A. These subjects were in their last month of program study and consisted of 30.4% baccalaureate and 69.6% associate degree students from 4 different states. Student examinees were monitored by an instructor to assure simulated test conditions and to assist with technical test-taking issues if necessary. All scores were collected and analyzed for Cronbach's alpha, a measure of internal consistency and intercorrelation among test items. Numerical computation and analysis were performed with the S-Plus statistical software. Institutional review board approval was obtained for this study.
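For readers unfamiliar with the statistic, Cronbach's alpha can be computed from the standard formula alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). The sketch below illustrates the calculation in Python; the score matrix is hypothetical and not the study data, which were analyzed in S-Plus.

```python
# Cronbach's alpha: internal consistency among k test items.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)

def cronbach_alpha(scores):
    """scores: one row per examinee, one column per item."""
    k = len(scores[0])  # number of items

    def var(xs):
        # sample variance (denominator n - 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-examinee x 4-item score matrix (illustration only)
scores = [
    [3, 4, 3, 4],
    [2, 3, 3, 3],
    [4, 4, 5, 4],
    [1, 2, 2, 3],
    [3, 3, 4, 4],
]
print(round(cronbach_alpha(scores), 3))  # -> 0.923
```

Higher values indicate that items vary together across examinees, i.e. they appear to measure a common construct.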

Results: The reliability for the SAE CSE A was .756 for our sample. The information gathering section had a reliability of .724, while the decision-making section revealed an alpha coefficient of .636.

Discussion: The reliability of the decision-making section was lower than that of information gathering, demonstrating greater variability in decision-making scores. This may account for the high re-test rate among CSE examinees. Similarly, higher information-related history and physical scores were previously demonstrated for the Internal Medicine CSE, which was discontinued in the mid-1980s.1 Alpha coefficients in the range of .85-1.0 are considered good evidence of reliability.1 The Clinical Laboratory Scientist credential reports alpha coefficients of .90 and higher for its multiple-choice test. An alpha coefficient of .75 is a moderate demonstration of reliability and may account for greater variability in CSE scores. Since the reliability quotient represents the reproducibility or consistency of scores, it is important to demonstrate strong reliability in order to support test validity. However, we caution that these results apply only to the SAE CSE A and may be limited by the student sample or sample size.

Conclusions: The reliability coefficient of .756 for the SAE CSE A demonstrated that an estimated 25% of the observed variance in scores was due to measurement error. This error was greater for the decision-making section of the test. These results are limited to the SAE CSE A. We recommend replication of this study so that generalizability can be further extended to the respiratory community. We further recommend that reliability coefficients for all CSE administrations be published.
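The error estimate above follows from classical test theory, where 1 - alpha approximates the proportion of observed score variance attributable to measurement error. Using the reported coefficients:

```python
# 1 - alpha estimates the proportion of score variance due to
# measurement error (classical test theory interpretation).
alpha_total = 0.756  # reported alpha for the full SAE CSE A
alpha_dm = 0.636     # reported alpha for the decision-making section

error_total = 1 - alpha_total  # ~0.244, roughly 25% of variance
error_dm = 1 - alpha_dm        # ~0.364, larger error for DM section

print(f"total: {error_total:.1%}, decision-making: {error_dm:.1%}")
```

This makes explicit why the decision-making section, with its lower alpha, contributes disproportionately to measurement error.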

1. Swanson DB, Norcini JJ, Gross LJ. Assessment of clinical competence. Assessment & Evaluation in Higher Education. 1987;12(3):220-46.