Evaluating an instrument for testout in a business communication course

David, Carol

Enrollment in college business communication courses has increased in the last decade. As a result, departments have had to develop procedures to allow better students to test out of such courses. At Iowa State University a two-phase procedure was developed that uses an initial objective test to screen students for the second phase, a written essay. The screening instrument must discriminate between students who can write at the level of B or better in the course and those who cannot.

This study evaluated the test for its ability to predict a student's writing ability. Furthermore, placement errors of two types were calculated for a chosen testout score: (1) screening out students who, though able to write well, perform unsatisfactorily on the objective test; (2) allowing students who cannot write well to proceed to the second, essay phase. The first error is unfair to students; the second is time-inefficient for departments. Finally, this study evaluated the content of the test to determine whether it is representative of the course.

A prediction instrument must correlate well with a criterion that represents writing ability. Both grade on an essay and grade in the business communication course were chosen as criterion measures for these correlations. The test was administered to thirteen sections both at the beginning (pretest) and end (posttest) of the course, and the procedure was replicated the following quarter. Sets of essays were collected at both the beginning and end of the course and correlated respectively with pretest and posttest scores. For further evidence of the predictive power of the test, the posttest scores were correlated with course grade. All calculated correlations were comparable in magnitude to analogous correlations deemed adequate in the literature.

In examining the content of the test, the study correlated the subtests with the same criteria of written essay and course grade.
Gain scores from pretest to posttest also were calculated for the subtests to test whether the content areas were integrated into the teaching and learning process of the course. Results of the study showed that the content, although representative of the course, included a disproportionate number of usage items. Although items of usage correlated adequately with criterion measures, the subtest on style and tone showed stronger predictive power and greater gains. Increased representation of style and tone items not only would be consistent with course objectives but also might improve the predictive power of the test.
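The two placement errors described in the abstract amount to counting, at a chosen cutoff, able writers whose objective-test scores fall below it and weak writers whose scores clear it. A minimal sketch of that calculation follows; the scores, "writes at B or better" labels, and cutoff are invented for illustration and are not data from the study:

```python
# Hypothetical testout scores and instructor judgments of whether each
# student writes at the level of B or better (data invented for illustration).
scores = [72, 85, 90, 60, 78, 88, 55, 95, 84, 82]
writes_well = [True, True, True, False, True, True, False, True, False, True]
cutoff = 80  # chosen testout score

# Error 1: able writers screened out by the objective test (unfair to students).
screened_out = sum(1 for s, w in zip(scores, writes_well) if w and s < cutoff)

# Error 2: weak writers allowed to proceed to the essay phase
# (time-inefficient for the department).
passed_through = sum(1 for s, w in zip(scores, writes_well) if not w and s >= cutoff)

print(screened_out, passed_through)  # -> 2 1
```

Sweeping the cutoff over the score range and recomputing both counts shows the trade-off the study had to balance: lowering the cutoff reduces the first error while inflating the second.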