
Placement testing



Placement testing is used at most two-year colleges and some universities to assess college readiness and to place students into their initial course levels. Since most two-year colleges have open, non-competitive admissions policies, many students are admitted who are not considered college ready. Placement tests gauge students’ abilities in English, math, and reading, and may also cover other disciplines such as foreign languages, English as a second language, computer and internet technologies, health, and the natural sciences. The goal is to bring low-scoring students up to college level by providing remedial coursework. The most commonly administered tests are the College Board’s ACCUPLACER and ACT’s COMPASS, both of which are online, computer-adaptive, multiple-choice tests. In addition, some colleges give computer-scored essay writing tests, including ACCUPLACER’s WritePlacer and COMPASS’s e-Write. Recently, McCann Associates ended its association as a vendor for the ACCUPLACER program and became a third major placement testing company.

Purposes

Underlying contemporary placement testing is the assumption that not all students begin college with the same level of academic readiness. Colleges therefore use placement testing to determine students’ “key content knowledge” in core subject areas, especially English, math, and reading.[1] Consistent with the open-door philosophy of community colleges, the goal is to provide whatever remediation (or development) each individual needs to be a successful college student. Students may be placed into a range of remedial situations, from Adult Basic Education, through various levels of developmental college courses, to college-level English, math, and other courses with developmental prerequisites. Historically, placement tests also served additional purposes, such as giving individual instructors a prediction of each student’s likely academic success, sorting students into homogeneous skill groups within the same course level, and introducing students to course material. Placement testing can also serve an unintended gatekeeper function, keeping academically challenged students from progressing into college programs. For certain competitive admissions programs within otherwise open-entry colleges, such as nursing, gatekeeping may be an intended function.

Test validity

Content Validity. In the construction of a test, subject matter experts (SMEs) provide Content Validity by creating items (test questions) that assess skills typically required of students in that content area. When initial cut scores are set, that is, the minimum scores used to place students into a higher-level course, SMEs may sort items into categories of appropriate difficulty, or correlate item difficulty with course levels, creating Performance Level Descriptors that “define the rigor or expectations associated with the categories, or performance levels, into which examinees are to be classified.”[2]

Once in use, placement tests are assessed for their Predictive Validity, on the assumption that such tests should indicate how well a student will learn in a college class. Since course grades serve as a common indirect measure of student learning, the customary analysis is a binary logistic regression with the test score as the independent variable and course success or failure as the dependent condition. Typically, grades of A, B, or C are counted as successful, while grades of D and F are counted as unsuccessful. Grades of I (an unconverted Incomplete) and W (a Withdrawal) may be considered unsuccessful or may be left out of the analysis altogether. In practice, there is usually a clear positive relationship between students’ placement test scores and their initial course grades. However, placement tests usually predict less than 10% of the variance in college course success.

Practicing instructors may note the lack of Face Validity in placement tests. For this kind of validity, test items should match the tasks students will face in the college classroom. Instead, the tests have traditionally been limited to multiple-choice questions and extemporaneous essays written to one of several standard prompts.

Another kind of validity is Argument Validity, or Consequence Validity. Test scores are interpreted based on a proposed use and assessed in that context, rather than simply by establishing a predictive relationship between scores and grades. Since placement tests are designed to predict student learning in college courses, by extension they predict the need for developmental education. However, the efficacy of developmental education has been questioned in recent high-quality statewide and national research studies, such as those by Bettinger and Long;[3] Calcagno and Long;[4] Martorell and McFarlin;[5] and Attewell, Lavin, Domina, and Levey.[6] If placement tests are designed to measure a student’s ability to learn at a given college level, a correlation between test scores and course success may not be sufficient to establish the test as a valid measure. For example, if students were paying test administrators for high scores, and also paying course instructors for successful grades, a traditional study would show a strong predictive relationship between score and grade, but that relationship would only validate a cheating scheme, not the test constructs themselves. According to proponents of Argument Validity, even when test scores are highly predictive of course success, the nature and effectiveness of the courses themselves must come into play. On this basis, predictively valid tests have been questioned as the learning value of developmental courses has been called into question. The argument is that if the consequences of using a test in the prescribed fashion are found to be invalid, the validity of the test may also be questioned.
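As an illustration of the predictive-validity analysis described above, the following is a minimal sketch of the customary binary logistic regression, not any publisher's actual procedure. The scores, grades, and sample size are invented placeholders; the pseudo-R-squared is printed only to echo the point that such models typically explain little of the variance in course success.

```python
# Minimal sketch of a predictive-validity analysis (hypothetical data).
# Grades of A, B, or C are coded as success (1); D or F as failure (0);
# W and I grades would simply be dropped before this step.
import numpy as np
import statsmodels.api as sm

scores = np.array([34, 41, 48, 52, 55, 61, 63, 70, 74, 78, 82, 90], dtype=float)
grades = ["F", "D", "C", "D", "C", "B", "C", "B", "A", "C", "A", "B"]
success = np.array([1 if g in ("A", "B", "C") else 0 for g in grades])

X = sm.add_constant(scores)                   # intercept plus test score as predictor
model = sm.Logit(success, X).fit(disp=False)  # binary logistic regression

print(model.params)     # intercept and slope; a positive slope is the usual finding
print(model.prsquared)  # McFadden pseudo-R^2, typically small for placement tests
```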

The placement testing process

Upon enrollment in a community college (and in some other two and four year institutions of higher learning), a student will be recommended or required to take placement tests, usually in English or writing, in math, and in reading. With ACCUPLACER this will most likely be Arithmetic, Elementary Algebra, Reading Comprehension and Sentence Skills. With COMPASS this will probably include Math, Reading Skills and Writing Skills. Testing may also include a computer-scored essay, or an ESL assessment. Some colleges use ASSET, ACT’s paper and pencil test. Students with disabilities may take an adaptive version, such as in an audio or braille format that is ADA compliant.

Advisors then interpret the resulting scores for the students and discuss the mandatory or recommended course placements. As a result of the placement, students may face a number of developmental levels before being allowed to take college-level courses. Students placing into the greatest number of developmental levels have the lowest odds of completing the developmental sequence or passing gatekeeper college courses such as Expository Writing or College Algebra.[7] While Adelman[8] has shown that this is not necessarily a result of the developmental education itself, the question is often asked whether developmental placements represent good advice.

Throughout the history of placement testing and enrollment practices, the pendulum has swung slowly back and forth between more and less prescriptive practices. If students are not required to take placement tests, they tend to avoid them. If they are not required to immediately enroll in the developmental classes they’ve placed into, they will often delay or avoid taking those as well. The validity of studies examining placement testing and developmental courses will necessarily suffer to the extent that students avoid testing and the subsequent course placements.

Beyond avoidance, many students do understand the high-stakes nature of placement testing. Lack of preparation is also cited as a problem. According to a study by Rosenbaum, Schuetz, and Foran,[9] roughly three quarters of students surveyed said that they did not prepare for the tests. This lack of preparation may result in the over-diagnosis of remedial need.

Consequently, many colleges supply their students with study guides and practice tests, and a small but growing practice is to require online or face-to-face review sessions before allowing students to test or retest.

Once students receive their placement, they may, or in some cases must, begin taking developmental classes as prerequisites to the credit-bearing college-level classes that will count toward their degree. Most students are unaware that developmental courses do not count toward a degree.[10] Some institutions prevent students from taking any college-level classes until they finish their developmental sequence or sequences, while others let this process happen more naturally through a system of course prerequisites. For example, a psychology course may carry a reading prerequisite, so that a student placing into developmental reading may not sign up for psychology until the developmental reading requirements are complete. In this way, developmental placements can be seen as enhancing college readiness while simultaneously delaying or even preventing college completion.

Federal financial aid programs will pay for up to 30 attempted hours of developmental coursework. Under some placement regimens and at some community colleges, low-scoring students may place into more than 30 hours of these non-credit classes. In addition to finances, this process also affects learning, as instructors often consider the presumed reading and writing abilities of their students when choosing textbooks or instructional methods.

History

Placement testing has its roots in remedial education, which has always been part of American higher education. While formal and standardized assessments came later, informal assessments were given at Harvard as early as 1649 in the subject of Latin. Two years earlier, the Massachusetts Law of 1647, also known as the “Old Deluder Satan Act,” had called for grammar schools to be set up with the purpose of “being able to instruct youth so far as they shall be fitted for the university.”[11] Predictably, many incoming students lacked sufficient fluency in Latin and got by with the help of tutors who had graduated as early as 1642.[12]

Over the years, the pendulum has continued to swing between institutions’ desire to promote college readiness and provide the necessary remediation, and their impulse to use placement testing as a gatekeeper and force students to remediate elsewhere, or even to let students decide for themselves whether they need more help.

According to John Willson:[13]

“The chief function of the placement examination is prognosis. It is expected to yield results which will enable the administrator to predict with fair accuracy the character of work which a given individual is likely to do. It should afford a reasonable basis for sectioning a class into homogeneous groups in each of which all individuals would be expected to make somewhat the same progress. It should afford the instructor a useful device for establishing academic relations with his class at the first meeting of the group. It should indicate to the student something of the preparation he is assumed to have made for the work upon which he is entering and introduce him to the nature of the material of the course.”


While prognosis is always part of the purpose of placement testing, the current theory is that tests predict a student’s performance so that colleges can remediate abilities that may be lacking. Historically, this view was not universal. Hammond and Stoddard [14] wrote in 1928 that “Since, as has been amply demonstrated, scholastic ability is, in general, a quite permanent quality, any instrument that measures factors contributing to success in the freshman year will also be indicative of success in later years of the curriculum.”

While entrance examinations began with the purpose of predicting college grades by assessing general achievement or intelligence, in 1914 T. L. Kelley published the results of his creation and use of course-specific high school examinations designed to predict “the capacity of the student to carry a prospective high school course.”[15] The courses were algebra, English, geometry, and history, with correlations ranging from history (R = .31) to English (R = .44).


Still, placement testing within the broad category of entrance assessments has long been coupled with remedial education as a solution for the universal phenomenon of students entering colleges without meeting the academic expectations of college officials. Formal remedial education continued with the establishment of preparatory schools, right through the 1849 establishment of the country’s first preparatory department at the University of Wisconsin. Late in the century, Harvard introduced a mandatory expository writing course, and by the end of the 19th century, most colleges and universities had instituted both preparatory departments and mandatory expository writing programs.

The widespread use of entrance examinations and in fact the creation of the College Entrance Examination Board (now the College Board) allowed colleges and universities to raise entrance requirements and shift the burden of remedial education to junior colleges in the early 20th century, and later in the second half of the 20th century, to community and technical colleges (Boylan, 1988).


Policies

Placement testing policies usually center on optional or mandatory placement testing, but may include a host of related policies. Some experts consider testing requirements to be important because, as community college and student engagement expert Kay McClenney puts it, “Students don’t do optional.” Required placement testing and remediation has not always been considered desirable. According to Robert McCabe, former president of Miami-Dade Community College, at one time “community colleges embraced a completely open policy. They believed that students know best what they could and could not do and that no barriers should restrict them. . . . This openness, however, came with a price. . . . By the early 1970s, it became apparent that this unrestricted approach was a failure.”[16] The push toward mandatory policies has gathered momentum more recently. In 2002, just 5 states had statewide standard placement test cut scores. By 2009, that number had jumped to 20, and it is still growing. In 2002, 17 states had statewide remedial placement policies. By 2005, that number had risen to 24, and it is still growing. Clearly, the trend is toward states standardizing, controlling, and mandating the placement testing experience for students in community colleges.

Examples of placement testing policies:

- Placement testing using state (or college) approved tests is required (or encouraged) for all students (or all students taking classes for credit, or all new students taking classes for credit)
- Students must meet state (or college) approved cut scores to gain access to standardized and articulated (or local) college-level (and often various remedial-level) courses (see the illustrative sketch after this list)
- Placement testing will be waived for students demonstrating college readiness via admissions tests (typically high scores on ACT or SAT tests, such as 21 plus or minus a few points in relevant subjects on the ACT, and 500 plus or minus a few tens of points in relevant subject areas on the SAT), other approved placement tests, or previous college coursework in math and English
- Students are allowed (or required) to retest after or within a certain length of time (sometimes for a fee); evidence for test score expiration dates is weak, but the practice is commonplace
- Students placing into remedial coursework must begin that coursework within a specified time period
- Before testing (or retesting), students are encouraged (or required) to review study guides or complete an online (or face-to-face) review course
- Cut scores will be validated and set (as required or recommended) by the (state) college system (or each local college)
- Students placing into remedial courses are encouraged (or required) to take diagnostic assessments before (or as part of) their prescribed remedial coursework
- Placement policies and cut scores will be examined periodically to assess their impact on student success
- Initial cut scores (levels indicating likely success or content mastery) for new assessments will be set by state (or college) subject matter experts, typically faculty teaching the remedial and college-level courses
- Students (in California especially) will be placed using multiple measures, not just a placement test cut score
- Students may not register for college-level classes until they have completed all (or certain) prescribed remedial courses
- (All or some) college-level courses will have remedial prerequisites that students must meet (by placement test score or remedial coursework) before registering for those courses
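The cut-score, waiver, and multiple-measures mechanics that recur throughout these policies can be illustrated with a short sketch. Everything in the example below is hypothetical: the subject, the cut scores, the ACT/SAT waiver thresholds, and the GPA adjustment are invented placeholder values and do not reflect any state's or vendor's actual rules.

```python
# Illustrative sketch only: cut scores, waiver thresholds, and the GPA rule
# below are invented placeholders, not actual state or vendor values.

# Placement levels for a hypothetical math sequence, lowest cut score first.
MATH_CUT_SCORES = [
    (20, "Adult Basic Education / pre-developmental math"),
    (40, "Developmental math I"),
    (60, "Developmental math II"),
    (80, "College-level math (e.g., College Algebra)"),
]

ACT_MATH_WAIVER = 21    # admissions-test scores at or above these thresholds
SAT_MATH_WAIVER = 500   # waive placement testing entirely (hypothetical values)


def place_student(placement_score, act_math=None, sat_math=None, hs_gpa=None):
    """Return a course placement from a placement score plus optional
    multiple measures (admissions tests, high school GPA)."""
    # Waiver: sufficiently high admissions-test scores skip placement testing.
    if (act_math is not None and act_math >= ACT_MATH_WAIVER) or \
       (sat_math is not None and sat_math >= SAT_MATH_WAIVER):
        return "College-level math (placement test waived)"

    # Multiple measures: a strong high school GPA nudges the score upward
    # (one simple way a college might combine measures; purely hypothetical).
    adjusted = placement_score + (5 if hs_gpa is not None and hs_gpa >= 3.0 else 0)

    # Highest level whose cut score the adjusted score meets.
    placement = MATH_CUT_SCORES[0][1]
    for cut, level in MATH_CUT_SCORES:
        if adjusted >= cut:
            placement = level
    return placement


print(place_student(62))              # -> Developmental math II
print(place_student(77, hs_gpa=3.4))  # GPA bump lifts the placement to college level
print(place_student(30, act_math=23)) # waived by ACT score
```

Real policies differ chiefly in which measures may waive or adjust a placement and in who sets and validates the cut scores.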

Future

The future of placement testing will be shaped by attempts to address current shortcomings, along with the challenges of a changing higher education landscape. Placement tests are designed to predict what a student can learn at a college or university, but they do so by assessing a student’s current content knowledge. Therefore, one area to be addressed will be rounding out the existing picture of college readiness by adding other factors. These multiple measures could directly assess other factors identified by David Conley, such as contextual skills and awareness, academic behaviors, and key cognitive strategies,[17] and by Hunter Boylan: affective factors such as “motivation, attitudes toward learning, autonomy, or anxiety.”[18] Other typical non-cognitive factors identified are students’ educational expectations and their feelings of self-efficacy. Still other factors that affect a student’s readiness to learn in college include social and financial support.

Instead of directly assessing these factors, high school GPA could serve as a proxy for them. GPA is itself a multiple measure, in that a student achieves high grades not simply by knowing a lot, but also by attending consistently, having good study habits, staying positive, and working hard. Individual subject grades might be slightly more representative of key content knowledge than of non-cognitive factors, and end-of-course or end-of-grade tests would represent key content knowledge even more specifically.

An important shortcoming of traditional placement tests today is the predominance of a single question format: multiple choice. This limits the face validity of such tests. The importance of this goes beyond establishing credibility with instructors and test users; it raises the issue of content validity. Do these tests directly or adequately predict the student’s ability to learn the tasks that will actually be assigned to them in the college curriculum? After all, not every learning experience will involve the multiple choice questions predominant in traditional placement tests.

In 1988, William C. Ward wrote that the future of computer-adaptive testing would involve more advanced and varied item types, including computerized simulations of problem situations, questions that get at expert conceptual understanding, questions requesting freely written responses, and computer-scored essays.[19] Tests now being developed incorporate conceptual questions in the multiple-choice format (for example, by presenting a student with a problem and the correct answer and then asking why that answer is correct), and computer-scored essays such as e-Write and WritePlacer (the latter of which incorporates critical thinking as an assessed component) have proven to be as valid and reliable as expert-scored essays. Free-response items and simulations are likely to follow. In its Request for Information on a centralized assessment system, the California Community Colleges system has asked for “questions that require students to type in responses (e.g. a mathematical equation)” and for questions where “Students can annotate/highlight on the screen in the reading test.”[20]

Diagnostic placement testing

Another shortcoming of current placement testing practices is the use of a single holistic score for placement. A single holistic score allows for placement into various levels, but not into and out of specific subject-area sub-domains. Testing that can do so is known as diagnostic testing, which will likely become a more integral element of future placement testing. This is especially important to the degree that remedial education programs move to a modular format, in which students remediate only in domains of demonstrated weakness within a broader subject.

“The ideal diagnostic test would incorporate a theory of knowledge and a theory of instruction. The theory of knowledge would identify the student's skills and the theory of instruction would suggest remedies for the student's weaknesses. Moreover, the test would be, in a different sense of the word from what we have previously employed, adaptive. That is, it would not subject students to detailed examinations of skills in which they have acceptable overall competence or in which a student has important strengths and weaknesses—areas where an overall score is not an adequate representation of the individual’s status.” [21]

To the student, diagnostic placement testing may look and feel like traditional placement testing, but it will not rely on a single holistic score. Instead, the student will place below, into, or above various remedial modules within the traditional remedial subjects. Modular remedial education is typically mastery based, so the goal is not to find the student’s approximate level of ability on a single continuum, but to determine a student’s ability in several different domains. Because they must investigate knowledge in all of the relevant domains more fully, diagnostic placement tests, although computer adaptive, can take up to twice as long as traditional placement tests. There is some debate among test designers as to how far adaptive technology can be pushed to limit total testing time and still produce valid and reliable results. For example, one question is whether a student who scores a certain number of right or wrong answers in a row in a given domain can justifiably be moved on to a different domain.
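A toy version of the stopping rule raised in the question above can be sketched as follows. This is not any publisher's algorithm: the domains, item cap, and the "N answers in a row" threshold are assumptions chosen only to show how a per-domain rule might shorten a diagnostic test while still producing a score profile rather than a single holistic score.

```python
import random

# Toy sketch of a per-domain stopping rule for a diagnostic placement test.
# Domains, item counts, and the streak threshold are invented for illustration;
# operational diagnostic tests use item response theory and richer stopping rules.

DOMAINS = ["whole numbers", "fractions", "proportions", "linear equations"]
MAX_ITEMS_PER_DOMAIN = 12   # upper bound on items asked in one domain
STREAK_TO_STOP = 4          # N right (or wrong) in a row ends the domain early


def administer_domain(ask_item):
    """Ask items in one domain until a streak or the item cap is reached.
    `ask_item` returns True for a correct response, False otherwise."""
    responses = []
    run_value, run_length = None, 0
    for _ in range(MAX_ITEMS_PER_DOMAIN):
        correct = ask_item()
        responses.append(correct)
        if correct == run_value:
            run_length += 1
        else:
            run_value, run_length = correct, 1
        if run_length >= STREAK_TO_STOP:
            break   # apparent mastery (all correct) or clear weakness (all wrong)
    return responses


def diagnostic_test(student_skill):
    """Run every domain and report the share correct in each: a per-domain
    profile instead of the single holistic score of a traditional test."""
    profile = {}
    for domain in DOMAINS:
        # Simulated examinee: correct with probability student_skill[domain].
        responses = administer_domain(lambda: random.random() < student_skill[domain])
        profile[domain] = sum(responses) / len(responses)
    return profile


print(diagnostic_test({"whole numbers": 0.95, "fractions": 0.6,
                       "proportions": 0.4, "linear equations": 0.1}))
```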


Test preparation

Test publishers have maintained that their assessments should be taken without preparation, and that such preparation will not yield significantly higher scores. Partly this is based on the theory that any test a student can prepare for does not actually measure general proficiency. Institutional test preparation programs are also said to risk washback, the tendency for test content to dictate the prior curriculum, that is, teaching to the test.[22] Nevertheless, various test preparation methods have shown effectiveness: test-taking tips and training, familiarity with the answer sheet format, and strategies that mitigate test anxiety.[23]

Some studies have partly supported these discouraging claims by test publishers. For example, several studies have concluded that, for admissions tests, coaching produces only modest, if statistically significant, score gains.[24][25] Other studies, and claims by companies in the business of providing admissions test coaching, have been more positive.[26] Research has also shown that, in general, for various kinds of academic testing, students score higher with tutoring, with practice using cognitive and metacognitive strategies, and under certain test parameters, such as when allowed to review answers before final submission, something that most computer-adaptive tests do not allow.[27][28][29]

Community college administrators have come to regard test preparation as a critical aid in boosting the accuracy of the placement test and thereby helping students to avoid unnecessary remediation. Test review has been found to increase scores for students who retest, and help students to place out of one or more remedial levels, without undermining the academic performance of those students who advance through retesting. The impact of test preparation before initial placement testing is less clear.

There is a growing belief that test preparation can help community college students place out of unnecessary remediation. Yet according to a recent California community college study, about 56% of colleges did not provide practice placement tests, and where colleges did provide them, many students were not made aware of them. In addition, students “did not think they should prepare, or thought that preparation would not change their placement.”[30] Recent research questioning the benefits of remedial education has provided further impetus for the argument that the remedial process should be streamlined and that students should be helped to avoid remedial coursework where possible.

By 2011, at least three state community college systems (California, Florida, and North Carolina) had asked publishers to bid on the right to create new, custom placement tests, including test reviews and practice tests to accompany them. Meanwhile, some individual colleges have created online review courses, complete with instructional videos and practice tests using items that correspond as closely as possible to actual test items.

Reviewing for placement tests may raise scores by helping students to become comfortable with the test format and item types. It may also serve to refresh skills that have simply grown rusty: placement tests often involve subjects and skills that students have not studied since elementary or middle school, and for older adults there may be many years between high school and college. In addition, students who attach a consequence to test results, and therefore take placement tests more seriously, are likely to achieve higher scores.[31]

Simulations

In “Using Microcomputers for Adaptive Testing,” Ward predicted the computerization of branching simulation problems, such as those used in professional licensing exams. As the power of artificial intelligence technology grows, more authentic test items will become available. Authentic items would reflect the real-world and real-classroom skills, tasks, and projects that students will face. Over time, standard multiple-choice questions may well make up a decreasing proportion of placement test items.

Alignment

Since placement testing measures college readiness and high schools prepare students for college, aligning the K-12 and higher education curricula is a natural step. This may involve state adoption of the national K-12 Common Core standards, or alignment of those standards with gateway courses at community colleges and universities. It could mean testing students while they are still in K-12, using assessments that reflect aligned curriculum standards, and remediating those students while they are still in high school. It might involve the creation of a federal Department of Education Race to the Top grant-sponsored consortium of states, such as the Smarter Balanced Assessment Consortium (SBAC), with a mission of “working to develop next-generation assessments that are aligned to the Common Core State Standards and that accurately measure student progress toward college and career readiness,” or the Partnership for Assessment of Readiness for College and Careers (PARCC), which seeks “to create an assessment system and supporting tools that will help states dramatically increase the number of students who graduate high school ready for college and careers and provide students, parents, teachers and policymakers with the tools they need to help students - from grade three through high school - stay on track to graduate prepared. The Partnership will also develop formative tools for grades K-2.”

Alignment may also involve coordinating the curriculum between remedial and college level courses, so that placement tests can more accurately place students between or among those course levels.

So far, neither kind of alignment has progressed to the point of close coordination of curriculum, assessments, or learning methodologies between public school systems and systems of higher education.

Notes

  1. ^ Conley, David. “Replacing Remediation with Readiness” (working paper). Prepared for the NCPR Developmental Education Conference: What Policies and Practices Work for Students? September 23–24, 2010, Teachers College, Columbia University, p. 12.
  2. ^ Morgan, Deanna. “Best Practices for Setting Placement Cut Scores in Postsecondary Education” (working paper). Prepared for the NCPR Developmental Education Conference: What Policies and Practices Work for Students? September 23–24, 2010, Teachers College, Columbia University, p. 12.
  3. ^ Bettinger, E., and Long, B. T. “Remediation at the Community College: Student Participation and Outcomes.” In C. A. Kozeracki (ed.), ‘‘Responding to the Challenges of Developmental Education.’’ New Directions for Community Colleges, no. 129. San Francisco: Jossey-Bass, 2005.
  4. ^ Calcagno, J. C., and Long, B. T. “The Impact of Postsecondary Remediation Using a Regression Discontinuity Approach: Addressing Endogenous Sorting and Noncompliance.” New York: National Center for Postsecondary Research, 2008.
  5. ^ Martorell, P., and McFarlin, I. “Help or Hindrance? The Effects of College Remediation on Academic and Labor Market Outcomes.” Dallas: University of Texas at Dallas, 2007.
  6. ^ Attewell, P., Lavin, D., Domina, T., and Levey, T. “New Evidence on College Remediation.” ‘‘Journal of Higher Education’’ 2006, 77(5), pp 886–924.
  7. ^ Bailey, T., Jeong, D. W., & Cho, S. (2010). Referral, enrollment, and completion in developmental education sequences in community colleges. Economics of Education Review, 29, 255-270.
  8. ^ Adelman, Clifford (2006). “The toolbox revisited: Paths to degree completion from high school through college.” U.S. Department of Education. http://www.ed.gov/rschstat/research/pubs/toolboxrevisit/toolbox.pdf
  9. ^ Rosenbaum, James E., Schuetz, Pam & Foran, Amy. “How students make college plans and ways schools and colleges could help.” (working paper, Institute for Policy Research, Northwestern University, July 15, 2010).
  10. ^ Rosenbaum, J., Deil-Amen, R., & Person, A. (2006). After admission: From college access to college success. New York: Russell Sage Foundation.
  11. ^ Massachusetts Trial Court Law Libraries. http://www.lawlib.state.ma.us/docs/DeluderSatan.pdf
  12. ^ Wright, Thomas Goddard (1920). Literary culture in early New England, 1620-1730. New Haven, CT: Yale UP, Ch. 6, p. 99. http://webcache.googleusercontent.com/search?hl=en&sig=uzEPFyUtLVmbclssA9MzGHYT5YY&q=cache:SO77jcVLMN4J:http://www.dinsdoc.com/wright-1-6.htm+1649+earliest+latin+tutors+massachusetts&ct=clnk
  13. ^ Willson, J.M. (1931). A study of an objective placement examination for sectioning college physics classes. Thesis submitted to the faculty of the School of Mines and Metallurgy of the University of Missouri, p. 5. http://scholarsmine.mst.edu/thesis/pdf/Willson_1931_09007dcc8073add4.pdf
  14. ^ “A Study of Placement Examinations.” University of Iowa Studies in Education. Charles L. Robbins, Editor. Volume 4(7) Published by UIA, Iowa City, p9.
  15. ^ Kelley, T. L. Educational Guidance: An Experimental Study in the Analysis and Prediction of High School Pupils. Teachers College, Columbia University, Contributions to Education, No. 71.
  16. ^ McCabe, Robert H. (2000). No One to Waste: A Report to Public Decision-Makers and Community College Leaders. Washington, DC: Community College Press, p. 42.
  17. ^ Conley, David. “Replacing Remediation with Readiness” (working paper). Prepared for the NCPR Developmental Education Conference: What Policies and Practices Work for Students? September 23–24, 2010, Teachers College, Columbia University, p. 12.
  18. ^ Saxon, Patrick; Levine-Brown, Patti; & Boylan, Hunter. “Affective Assessment for Developmental Students, Parts 1 & 2.” Research in Developmental Education, 22(1&2), 2008, p. 1.
  19. ^ Ward, William C. “Using Microcomputers for Adaptive Testing,” in Computerized Adaptive Testing: The State of the Art in Assessment at Three Community Colleges. League for Innovation in the Community College, Laguna Hills, CA, 1988, pp. 6-8.
  20. ^ “CCCAssess Proof of Concept Report 2011: Centralizing Assessment in the California Community Colleges.” California Community Colleges Chancellor’s Office, Telecommunications and Technology Division, Sacramento, CA, 2011, pp. 30, 33.
  21. ^ Ward, William C. “Using Microcomputers for Adaptive Testing,” in Computerized Adaptive Testing: The State of the Art in Assessment at Three Community Colleges. League for Innovation in the Community College, Laguna Hills, CA, 1988, p. 5.
  22. ^ Robb, Thomas N., & Ercanbrack, Jay. (1999). “A Study of the Effect of Direct Test preparation on the TOEIC Scores of Japanese University Students.” TESL-EJ, 3(4).
  23. ^ Perlman, Carole L. (2003). “Practice Tests and Study Guides: Do They Help? Are They Ethical? What Is Ethical Test Preparation Practice?” Measuring Up: Assessment Issues for Teachers, Counselors, and Administrators, ERIC, 12 pages.
  24. ^ Briggs, Derek C. (2001). “Are standardized test coaching programs effective? The effect of admissions test preparation: Evidence from NELS:88.” Chance, 14(1), pp. 10-21.
  25. ^ Scholes, Roberta J., & Lain, M. Margaret. (1997). “The Effects of Test Preparation Activities on ACT Assessment Scores.” Paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL. March 24-28, 22 pages.
  26. ^ Buchmann, C., Condron, D. J., & Roscigno, V. J. (2010). “Shadow Education, American Style: Test Preparation, the SAT and College Enrollment.” Social Forces, 89(2), 435-461.
  27. ^ Rothman, Terri, & Henderson, Mary. (2011). “Do School-Based Tutoring Programs Significantly Improve Student Performance on Standardized Tests?” Research in Middle Level Education Online, 34 (6), p1-10.
  28. ^ Shokrpour, N., Zareii, E., Zahedi, S. S., & Rafatbakhsh, M. M. (2011). “The Impact of Cognitive and Meta-cognitive Strategies on Test Anxiety and Students' Educational Performance.” European Journal Of Social Science, 21(1), 177-188.
  29. ^ Papanastasiou, E. C. (2005). “Item Review and the Rearrangement Procedure: Its process and its results.” Educational Research And Evaluation, 11(4), 303-321.
  30. ^ Venezia, A., Bracco, K. R., & Nodine, T. (2010). One-shot deal? Students’ perceptions of assessment and course placement in California’s community colleges. San Francisco: WestEd. http://www.wested.org/online_pubs/OneShotDeal.pdf
  31. ^ Napoli, Anthony R., & Raymond, Lanette A. (2004). “How Reliable Are Our Assessment Data?: A Comparison of the Reliability of Data Produced in Graded and Un-Graded Conditions.” Research in Higher Education, 45(8), 921-929.

