Placement testing

From Wikipedia, the free encyclopedia

Placement testing is a practice that many colleges and universities use to assess college readiness and determine which classes a student should initially take. Since most two-year colleges have open, non-competitive admissions policies, many students are admitted without college-level academic qualifications. Placement exams or placement tests assess abilities in English, mathematics and reading; they may also be used in other disciplines such as foreign languages, computer and internet technologies, health and natural sciences. The goal is to offer low-scoring students remedial coursework (or other remediation) to prepare them for regular coursework.[1]

Historically, placement tests also served additional purposes such as providing individual instructors a prediction of each student's likely academic success, sorting students into homogeneous skill groups within the same course level and introducing students to course material.[citation needed] Placement testing can also serve a gatekeeper function, keeping academically challenged students from progressing into college programs, particularly in competitive admissions programs such as nursing within otherwise open-entry colleges.[citation needed]

Secondary schooling

A placement exam is a test designed to evaluate a person's knowledge of a subject and thus determine the level most suitable for the person to begin coursework on that subject. It is not unusual for students to take a placement exam in a subject such as mathematics upon admission to a school or university to determine what level of classes they should take. Scores on such exams as the Advanced Placement, International Baccalaureate, SAT Subject Tests, and British Advanced Level exams can serve as placement tests for students in certain subjects, where a high score would enable them to get into a more advanced class than what a freshman would normally take.[2][3]

Test validity

In the construction of a test, subject matter experts (SMEs) construct questions that assess skills typically required of students for that content area. "Cut scores" are the minimum scores used to divide students into higher and lower level courses. SMEs sort test items into categories of appropriate difficulty, or correlate item difficulty to course levels. "Performance level descriptors" define the required skills for remedial and standard courses.[4]
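The mechanics of cut-score placement can be sketched in a few lines. The course names and thresholds below are invented for illustration; actual cut scores are set by each institution or state.

```python
# Hypothetical cut scores: the minimum score for each course level,
# checked from highest to lowest. Names and numbers are illustrative only.
CUT_SCORES = [
    (86, "College Algebra"),
    (60, "Intermediate Algebra (developmental)"),
    (0, "Basic Mathematics (developmental)"),
]

def place(score):
    """Return the highest-level course whose cut score the student meets."""
    for minimum, course in CUT_SCORES:
        if score >= minimum:
            return course
    raise ValueError("score below all cut scores")
```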

Once in use, placement tests are assessed for the degree to which they predict the achievements of students once they have been assigned to remedial or standard classes. Since grades serve as a common indirect measure of student learning, in the customary analysis a binary logistic regression is run using the test score as the independent variable and course grades as the dependent variable. Typically, grades of A, B or C are counted as successful, while grades of D and F are counted as unsuccessful. Grades of I (for an unconverted Incomplete) and W (a Withdrawal) may be considered unsuccessful or may be excluded from the analysis.[citation needed]
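The validity analysis described above can be sketched as follows. The grade coding matches the paragraph (A–C successful, D/F unsuccessful, W excluded), while the data and the plain gradient-descent fit are purely illustrative; real studies would use a statistics package.

```python
import math

# Grade coding from the analysis above: A-C succeed, D/F fail; W is excluded.
SUCCESS = {"A": 1, "B": 1, "C": 1, "D": 0, "F": 0}

def fit_logistic(scores, grades, lr=0.1, steps=2000):
    """Fit P(success) = 1/(1+exp(-(a + b*z))) on standardized scores z."""
    pairs = [(s, SUCCESS[g]) for s, g in zip(scores, grades) if g in SUCCESS]
    mean = sum(s for s, _ in pairs) / len(pairs)
    sd = (sum((s - mean) ** 2 for s, _ in pairs) / len(pairs)) ** 0.5
    xs = [(s - mean) / sd for s, _ in pairs]
    ys = [y for _, y in pairs]
    a = b = 0.0
    for _ in range(steps):  # batch gradient descent on the log-loss
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(a + b * x)))
            ga += p - y
            gb += (p - y) * x
        a -= lr * ga / len(xs)
        b -= lr * gb / len(xs)
    return a, b

# Invented example data; the W grade is dropped from the analysis.
scores = [30, 45, 50, 55, 62, 70, 75, 80, 85, 90, 40]
grades = ["F", "D", "C", "B", "C", "B", "F", "B", "A", "A", "W"]
a, b = fit_logistic(scores, grades)
```

A positive slope `b` indicates that higher placement scores predict higher odds of passing the course, which is the predictive relationship the cut-score review examines.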

Test scores are interpreted based on a proposed use and assessed in that context, rather than simply by establishing a predictive relationship between scores and grades. Since placement tests are designed to predict student learning in college courses, by extension they predict the need for developmental education. However, the efficacy of developmental education has been questioned in recent research studies, such as those by Bettinger and Long;[5] Calcagno and Long;[6] Martorell and McFarlin[7] and Attewell, Lavin, Domina and Levey.[8]

One study found that one-quarter of students assigned to math remediation and one-third of students assigned to English remediation in the US would have passed regular university courses with a grade of at least a B without any additional support.[9]

The placement testing process

Upon enrollment, a student will be recommended or required to take placement tests, usually in English or writing, mathematics, and reading. Testing may also include a computer-scored essay or an English-as-a-second-language assessment. Students with disabilities may take an adapted version, such as an audio or braille format compliant with the Americans with Disabilities Act (ADA).

Advisors interpret the scores and discuss course placement with the student. As a result of the placement, students may take multiple developmental courses before qualifying for college level courses. Students with the most developmental courses have the lowest odds of completing the developmental sequence or passing gatekeeper college courses such as Expository Writing or College Algebra.[10] Adelman has shown that this is not necessarily a result of developmental education itself.[11]

Student acceptance

Many students do not understand the high-stakes nature of placement testing. Lack of preparation is also cited as a problem.[citation needed] According to a study by Rosenbaum, Schuetz and Foran, roughly three quarters of students surveyed say that they did not prepare for their placement tests.[12]

Once students receive their placement, they may, or in some cases must, begin taking developmental classes as prerequisites to credit-bearing college-level classes that count toward their degree. Most students are unaware that developmental courses do not count toward a degree.[13] Some institutions prevent students from taking college-level classes until they finish their developmental sequence(s), while others apply course prerequisites. For example, a psychology course may carry a reading prerequisite, so that a student placing into developmental reading may not enroll in psychology until the developmental reading requirement is complete.

Federal Student Aid programs pay for up to 30 hours of developmental coursework. Under some placement regimens and at some community colleges, low-scoring students may require more than 30 hours of such classes.

History

Placement testing has its roots in remedial education, which has always been part of American higher education. Informal assessments in Latin were given at Harvard as early as the mid-1600s. The Massachusetts Law of 1647, also known as the "Old Deluder Satan Law," called for grammar schools to be set up with the purpose of "being able to instruct youth so far as they shall be fitted for the university."[14] Predictably, many incoming students lacked sufficient fluency in Latin and got by with the help of tutors who had graduated as early as 1642.[15]

In 1849 the University of Wisconsin established the country's first in-house preparatory department. Late in the century, Harvard introduced a mandatory expository writing course, and by the end of the 19th century most colleges and universities had instituted both preparatory departments and mandatory expository writing programs.

According to John Willson,[16]

The chief function of the placement examination is prognosis. It is expected to yield results which will enable the administrator to predict with fair accuracy the character of work which a given individual is likely to do. It should afford a reasonable basis for sectioning a class into homogeneous groups in each of which all individuals would be expected to make somewhat the same progress. It should afford the instructor a useful device for establishing academic relations with his class at the first meeting of the group. It should indicate to the student something of the preparation he is assumed to have made for the work upon which he is entering and introduce him to the nature of the material of the course.

Historically, the view that colleges can remediate abilities that may be lacking was not universal. Hammond and Stoddard wrote in 1928: "Since, as has been amply demonstrated, scholastic ability is, in general, a quite permanent quality, any instrument that measures factors contributing to success in the freshman year will also be indicative of success in later years of the curriculum."[17]

Entrance examinations began with the purpose of predicting college grades by assessing general achievement or intelligence. In 1914, T. L. Kelley published the results of his course-specific high school examinations designed to predict "the capacity of the student to carry a prospective high school course."[18] The courses were algebra, English, geometry and history, with correlations ranging from R = .31 (history) to .44 (English).

Entrance examinations and the College Entrance Examination Board (now the College Board) allowed colleges and universities to formalize entrance requirements and shift the burden of remedial education to junior colleges in the early 20th century and later to community and technical colleges.[19]

Policies

Required placement testing and remediation was not always considered desirable. According to Robert McCabe, former president of Miami-Dade Community College, at one time "community colleges embraced a completely open policy. They believed that students know best what they could and could not do and that no barriers should restrict them....This openness, however, came with a price....By the early 1970s, it became apparent that this unrestricted approach was a failure."[20]

Examples of state or college placement testing policies:

  • Placement testing using state approved tests is required (or encouraged) for all students (or all students taking classes for credit, or all new students taking classes for credit)
  • Students must meet approved cut scores to gain access to specific courses
  • Placement testing waived for students demonstrating college readiness via admissions tests (typically high scores on the ACT or SAT, such as roughly 21 or above in relevant ACT subjects and roughly 500 or above in relevant SAT sections), other approved placement tests, or previous college coursework in math and English
  • Students allowed/required to retest after/within a certain length of time (sometimes for a fee).
  • Students must begin remedial coursework within a specified time period.
  • Before testing/retesting students are encouraged/required to review study guides or complete a review course.
  • Cut score levels, roles and reviews are described.
  • Remedial students encouraged/required to take diagnostic assessments before/during their coursework
  • Integration of criteria beyond test scores into remediation decision-making.
  • Students may not register for college level classes until they have completed all (or certain) prescribed remedial courses
  • Defining remedial prerequisites such as placement test score or remedial coursework for specific courses.
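Policies like the waiver rules above amount to simple decision logic. A hypothetical sketch, using the rough ACT 21 / SAT 500 figures mentioned in the list; these thresholds are illustrative, not any state's actual policy.

```python
# Hypothetical waiver logic; the ACT 21 and SAT 500 thresholds follow the
# rough figures in the policy list above and are illustrative only.
def placement_required(act_subject=None, sat_subject=None,
                       prior_college_credit=False):
    """Return True if the student must still take a placement test."""
    if prior_college_credit:
        return False          # previous college coursework waives testing
    if act_subject is not None and act_subject >= 21:
        return False          # ACT subject score demonstrates readiness
    if sat_subject is not None and sat_subject >= 500:
        return False          # SAT section score demonstrates readiness
    return True
```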

Alternatives

Testing other elements of student ability

Conley recommends adding assessments of contextual skills and awareness, academic behaviors, and key cognitive strategies to the traditional math, reading, and writing tests.[1] Boylan proposes examining affective factors such as "motivation, attitudes toward learning, autonomy, or anxiety."[21]

Alternative test formats

In 1988, Ward predicted that computer adaptive testing would evolve to cover more advanced and varied item types, including simulations of problem situations, assessments of conceptual understanding, textual responses and essays.[22]: 6–8  Tests now being developed incorporate conceptual questions in multiple choice format (for example, by presenting a student with a problem and the correct answer and then asking why that answer is correct), and computer-scored essays such as e-Write and WritePlacer.[citation needed]
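The adaptive-selection idea behind such tests can be illustrated with a one-parameter (Rasch) model: present the unanswered item whose difficulty is closest to the current ability estimate, then nudge the estimate after each response. The item bank and the crude fixed-step update below are invented for illustration; operational tests use far larger banks and maximum-likelihood or Bayesian scoring.

```python
import math

# Invented three-item bank: item id -> Rasch difficulty.
ITEM_BANK = {"easy": -1.0, "medium": 0.1, "hard": 2.0}

def p_correct(ability, difficulty):
    """Rasch model: probability of a correct response."""
    return 1 / (1 + math.exp(-(ability - difficulty)))

def next_item(ability, asked, bank):
    """Most informative Rasch item: difficulty closest to current ability."""
    remaining = {i: d for i, d in bank.items() if i not in asked}
    return min(remaining, key=lambda i: abs(remaining[i] - ability))

def update(ability, correct, step=0.5):
    """Crude fixed-step ability update after each response."""
    return ability + step if correct else ability - step

first = next_item(0.0, set(), ITEM_BANK)  # "medium" is closest to 0.0
```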

In a Request for Information on a centralized assessment system, the California Community Colleges System asked for "questions that require students to type in responses (e.g. a mathematical equation)" and for questions where "Students can annotate/highlight on the screen in the reading test."[23] Some massive open online courses, such as those run by edX or Udacity, automatically assess user-written computer code for correctness.[24]
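The automatic assessment of user-written code mentioned above can be sketched as running a submission against reference test cases. This toy grader is illustrative only; real MOOC graders sandbox the submitted code and enforce time and resource limits.

```python
# Reference cases for a toy assignment: implement add(a, b).
REFERENCE_CASES = [((2, 3), 5), ((0, 0), 0), ((-1, 4), 3)]

def grade(submission_source, func_name="add"):
    """Run a submission and return the fraction of reference cases passed."""
    namespace = {}
    exec(submission_source, namespace)  # real graders sandbox this step
    func = namespace[func_name]
    passed = sum(func(*args) == expected
                 for args, expected in REFERENCE_CASES)
    return passed / len(REFERENCE_CASES)

score = grade("def add(a, b):\n    return a + b\n")  # 1.0: all cases pass
```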

Diagnostic placement testing

Placement testing focuses on a holistic score to decide placement into various levels, but is not designed for more specific diagnoses. Increasing diagnostic precision could involve changes to both scoring and test design, as well as better-targeted remediation programs in which students focus on areas of demonstrated weakness within a broader subject.[citation needed]

"The ideal diagnostic test would incorporate a theory of knowledge and a theory of instruction. The theory of knowledge would identify the student's skills and the theory of instruction would suggest remedies for the student's weaknesses. Moreover, the test would be, in a different sense of the word from what we have previously employed, adaptive. That is, it would not subject students to detailed examinations of skills in which they have acceptable overall competence or in which a student has important strengths and weaknesses—areas where an overall score is not an adequate representation of the individual's status."[22]: 5 
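The subscore idea behind diagnostic testing can be sketched as tagging each item with a skill and reporting the skills scored below a mastery threshold. The item-to-skill map and the threshold here are invented for illustration.

```python
# Invented item-to-skill tags for a short math test.
ITEM_SKILLS = {
    1: "fractions", 2: "fractions",
    3: "linear equations", 4: "linear equations",
    5: "word problems", 6: "word problems",
}

def weak_skills(responses, threshold=0.5):
    """responses: item id -> correct? Returns skills below the threshold."""
    totals, correct = {}, {}
    for item, ok in responses.items():
        skill = ITEM_SKILLS[item]
        totals[skill] = totals.get(skill, 0) + 1
        correct[skill] = correct.get(skill, 0) + int(ok)
    return sorted(s for s in totals if correct[s] / totals[s] < threshold)

weak = weak_skills({1: True, 2: True, 3: False, 4: False, 5: True, 6: False})
# -> ["linear equations"]: only that skill falls below 50% correct
```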

Test preparation

A controversy exists over the value of test preparation and review. Test publishers maintain that their assessments should be taken without preparation, and that such preparation will not yield significantly higher scores. Test preparation organizations claim the opposite. Some schools have begun to support test preparation.

The publishers' claims are partly based on the theory that any test a student can prepare for does not measure general proficiency. Institutional test preparation programs are also said to risk washback, the tendency for test content to dictate the prior curriculum, or "teaching to the test".[25] Various test preparation methods have shown effectiveness: test-taking tips and training, familiarity with the answer-sheet format, and strategies that mitigate test anxiety.[26]

Some studies offer partial support for the test publishers' claims. For example, several studies concluded that for admissions tests, coaching produces only modest, if statistically significant, score gains.[27][28] Other studies, and claims by companies in the test preparation business, were more positive.[29] Other research has shown that students score higher with tutoring, with practice using cognitive and metacognitive strategies, and under certain test parameters, such as when allowed to review answers before final submission, something that most computer adaptive tests do not allow.[30][31][32]

Other research indicates that reviewing for placement tests may raise scores by helping students become comfortable with the test format and item types. It may also refresh skills that have simply grown rusty: placement tests often cover subjects and skills that students have not studied since elementary or middle school, and for older adults there might be many years between high school and college. In addition, students who attach a consequence to test results, and therefore take placement tests more seriously, are likely to achieve higher scores.[33]

According to a 2010 California community college study, about 56% of colleges did not provide practice placement tests, and for those that did, many students were not made aware of them. In addition, their students "did not think they should prepare, or thought that preparation would not change their placement."[34]

By 2011, at least three state community college systems (California, Florida, and North Carolina) had asked publishers to bid to create customized placement tests with integrated test reviews and practice tests. Meanwhile, some individual colleges have created online review courses complete with instructional videos and practice tests.

Simulations

In "Using Microcomputers for Adaptive Testing," Ward predicted the computerization of branching simulation problems, such as those used in professional licensing exams.[22]

Secondary/tertiary alignment

Since placement testing is done to measure college readiness, and high schools in part prepare students for college, there is a strong case for aligning K-12 and higher education curricula. Such alignment could take many forms, including K-12 changes, collegiate changes, or collaboration between the two levels. Various education reform efforts have taken up this challenge, such as the national K-12 Common Core State Standards Initiative in the United States, the Smarter Balanced Assessment Consortium (SBAC), and the Partnership for Assessment of Readiness for College and Careers (PARCC).

As of 2012, neither kind of alignment had progressed to the point of close coordination of curriculum, assessments, or learning methodologies between public school systems and systems of higher education. Several state legislatures (including those of California, Florida, and Connecticut) have since passed mandates to redefine developmental curricula, in response to declining four-year college graduation rates.

Succeeding in a college course requires students to fulfill a multitude of tasks in order to demonstrate their competency in a given class. Federick Ngo's study of multiple measures further criticized the use of placement tests, finding that "college readiness is a function of several academic and non-academic factors that placement tests do not adequately capture".[35] Furthermore, Belfield and Crosta's 2012 study established a "positive but weak association between placement test scores and college GPA".[35] The key skills and attributes that lead to college success cannot simply be extrapolated from performance on a single placement test.

Scott-Clayton claims that "it is easier to distinguish between those likely to do very well and everyone else than it is to distinguish between those likely to do very poorly and everyone else".[36] In other words, students who score well on a placement test have a high probability of succeeding in college-level coursework, while low scores predict outcomes much less reliably. Nevertheless, students who start in remediation often remain stuck there.

References

  1. ^ a b Conley, David. "Replacing Remediation with Readiness" (working paper). Prepared for the NCPR Developmental Education Conference: What Policies and Practices Work for Students? September 23–24, 2010, Teachers College, Columbia University, p. 12.
  2. ^ National Education Association. "History of Standardized Testing in the United States | NEA". www.nea.org. Retrieved 2022-07-04.
  3. ^ Reichard, Gary; Keirn, Tim (1999). "The Advanced Placement Exam in History: Growth, Controversies, and New Perspectives". The History Teacher. 32 (2): 169–173. doi:10.2307/494438. ISSN 0018-2745.
  4. ^ Morgan, Deanna. "Best Practices for Setting Placement Cut Scores in Postsecondary Education" (working paper). Prepared for the NCPR Developmental Education Conference: What Policies and Practices Work for Students? September 23–24, 2010, Teachers College, Columbia University, p. 12.
  5. ^ Bettinger, E., and Long, B. T. "Remediation at the Community College: Student Participation and Outcomes." In C. A. Kozeracki (ed.), Responding to the Challenges of Developmental Education. New Directions for Community Colleges, no. 129. San Francisco: Jossey-Bass, 2005.
  6. ^ Calcagno, J. C., and Long, B. T. "The Impact of Postsecondary Remediation Using a Regression Discontinuity Approach: Addressing Endogenous Sorting and Noncompliance." New York: National Center for Postsecondary Research, 2008.
  7. ^ Martorell, P., and McFarlin, I. "Help or Hindrance? The Effects of College Remediation on Academic and Labor Market Outcomes." Dallas: University of Texas at Dallas, 2007.
  8. ^ Attewell, P., Lavin, D., Domina, T., and Levey, T. "New Evidence on College Remediation." Journal of Higher Education 2006, 77(5), pp 886–924.
  9. ^ Judith Scott-Clayton (20 April 2012). "Are College Entrants Overdiagnosed as Underprepared?". NYTimes.com. Retrieved 2012-04-24.
  10. ^ Bailey, T., Jeong, D. W., & Cho, S. (2010). Referral, enrollment, and completion in developmental education sequences in community colleges. Economics of Education Review, 29, 255-270.
  11. ^ Adelman, Clifford (2006). "The toolbox revisited: Paths to degree completion from high school through college." U.S. Department of Education. [1]
  12. ^ Rosenbaum, James E., Schuetz, Pam & Foran, Amy. "How students make college plans and ways schools and colleges could help." (working paper, Institute for Policy Research, Northwestern University, July 15, 2010).
  13. ^ Rosenbaum, J., Deil-Amen, R., & Person, A. (2006). After admission: From college access to college success. New York: Russell Sage Foundation.
  14. ^ Massachusetts Trial Court Law Libraries. http://www.lawlib.state.ma.us/docs/DeluderSatan.pdf
  15. ^ Wright, Thomas Goddard (1920). Literary culture in early New England, 1620-1730. New Haven, CT: Yale UP, Ch. 6, p. 99. https://web.archive.org/web/20051025080258/http://www.dinsdoc.com/wright-1-6.htm
  16. ^ Willson, J.M. (1931). A study of an objective placement examination for sectioning college physics classes. Thesis submitted to the faculty of the School of Mines and Metallurgy of the University of Missouri, p. 5. http://scholarsmine.mst.edu/thesis/pdf/Willson_1931_09007dcc8073add4.pdf
  17. ^ "A Study of Placement Examinations." University of Iowa Studies in Education. Charles L. Robbins, Editor. Volume 4(7) Published by UIA, Iowa City, p9.
  18. ^ Kelley, T. Educational Guidance: An Experimental Study in the Analysis and Prediction of High School Pupils. Teachers College, Columbia University, Contributions to Education. No. 71.
  19. ^ Boylan, 1988
  20. ^ McCabe, Robert H. (2000). No One to Waste: A Report to Public Decision-Makers and Community College Leaders. Washington, DC: Community College Press, p. 42.
  21. ^ Saxon, Patrick; Levine-Brown, Patti; & Boylan, Hunter. "Affective Assessment for Developmental Students, Parts 1 & 2." Research in Developmental Education, 22(1&2), 2008, p. 1.
  22. ^ a b c Ward, William C. "Using Microcomputers for Adaptive Testing," in Computerized adaptive testing: The state of the art in assessment at three community colleges." League for Innovation in the Community College, Laguna Hills, CA, 1988
  23. ^ "CCCAssess Proof of Concept Report 2011: Centralizing Assessment in the California Community Colleges." California Community Colleges Chancellor's Office, Telecommunications and Technology Division, Sacramento, CA, 2011, pp. 30, 33.
  24. ^ "Free Online Courses. Advance your College Education & Career". Udacity. Retrieved 2012-11-22.
  25. ^ Robb, Thomas N., & Ercanbrack, Jay. (1999). "A Study of the Effect of Direct Test preparation on the TOEIC Scores of Japanese University Students." TESL-EJ, 3(4).
  26. ^ Perlman, Carole L. (2003). "Practice Tests and Study Guides: Do They Help? Are They Ethical? What Is Ethical Test Preparation Practice?" Measuring Up: Assessment Issues for Teachers, Counselors, and Administrators, ERIC, 12 pages.
  27. ^ Briggs, Derek C. (2001). "Are standardized test coaching programs effective? The effect of admissions test preparation: Evidence from NELS:88." Chance, 14(1), pp. 10–21.
  28. ^ Scholes, Roberta J., & Lain, M. Margaret. (1997). "The Effects of Test Preparation Activities on ACT Assessment Scores." Paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL. March 24–28, 22 pages.
  29. ^ Buchmann, C., Condron, D. J., & Roscigno, V. J. (2010). "Shadow Education, American Style: Test Preparation, the SAT and College Enrollment." Social Forces, 89(2), 435-461.
  30. ^ Rothman, Terri, & Henderson, Mary. (2011). "Do School-Based Tutoring Programs Significantly Improve Student Performance on Standardized Tests?" Research in Middle Level Education Online, 34 (6), p1-10.
  31. ^ Shokrpour, N., Zareii, E., Zahedi, S. S., & Rafatbakhsh, M. M. (2011). "The Impact of Cognitive and Meta-cognitive Strategies on Test Anxiety and Students' Educational Performance." European Journal of Social Science, 21(1), 177-188.
  32. ^ Papanastasiou, E. C. (2005). "Item Review and the Rearrangement Procedure: Its process and its results." Educational Research And Evaluation, 11(4), 303-321.
  33. ^ Napoli, Anthony R., & Raymond, Lanette A. (2004). "How Reliable Are Our Assessment Data?: A Comparison of the Reliability of Data Produced in Graded and Un-Graded Conditions." Research in Higher Education, 45(8), 921-929.
  34. ^ Venezia, A., Bracco, K. R., & Nodine, T. (2010). One-shot deal? Students' perceptions of assessment and course placement in California's community colleges. San Francisco: WestEd. http://www.wested.org/online_pubs/OneShotDeal.pdf
  35. ^ a b Ngo, Federick; Kwon, William W. (2015-08-01). "Using Multiple Measures to Make Math Placement Decisions: Implications for Access and Success in Community Colleges". Research in Higher Education. 56 (5): 442–470. doi:10.1007/s11162-014-9352-9. ISSN 1573-188X.
  36. ^ Scott-Clayton, Judith (February 2012). Do High-Stakes Placement Exams Predict College Success? CCRC Working Paper No. 41. Community College Research Center.