Test (student assessment)

A test or examination (or "exam") is an assessment, often administered on paper or on a computer, intended to measure a test-taker's or respondent's (often a student's) knowledge, skills, aptitudes, or classification in many other topics (e.g., beliefs). Tests are often used in education, professional certification, counseling, psychology (e.g., the MMPI), the military, and many other fields. The measurement that is the goal of testing is called a test score, and is "a summary of the evidence contained in an examinee's responses to the items of a test that are related to the construct or constructs being measured." [Thissen, D., & Wainer, H. (2001). Test Scoring. Mahwah, NJ: Erlbaum. Page 1, sentence 1.] Test scores are interpreted with regard to a norm or criterion, or occasionally both. The norm may be established independently, or by statistical analysis of a large number of subjects.

A standardized test is one that is administered and scored in a consistent manner to ensure legal defensibility. [North Central Regional Educational Laboratory [http://www.ncrel.org/sdrs/areas/issues/students/earlycld/ea5lk3.htm] ] A large proportion of formal testing is standardized. A standardized test with important consequences for the individual examinee is referred to as a high-stakes test.

The basic component of a test is an "item". Items are often colloquially referred to as "questions," but not every item is phrased as a question; an item may be, for example, a true/false statement or a task that must be performed (in a performance test).

History

The earliest known standardized tests (which included both practical and written components) were the Chinese Imperial Examinations, which began in 587. [Feng, Y. (1994). From the Imperial Examination to the National College Entrance Examination: the Dynamics of Political Centralism in China's Educational Enterprise. ASHE Annual Meeting Paper. [http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/13/64/3b.pdf] ]

In Europe, school examinations were traditionally conducted orally: students had to answer questions posed by teachers in Latin, and teachers graded them on their answers. The first written exams in Europe were held at Cambridge University, England, in 1792 by professors who were paid a piece rate and realized that written exams would earn them more money.

Types of items

Many possible item formats are available for test construction. These include multiple-choice, free response, performance or simulation, true/false, and Likert-type items. There is no "best" format; applicability depends on the purpose and content of the test. For example, a test on a complex psychomotor task would be better served by a performance or simulation item than by a true/false item.

Multiple-choice items

A common type of test item is the multiple-choice question, in which the author of the test provides several possible answers (usually four or five) from which the test subject must choose. [Haladyna, T. (2004). Developing and Validating Multiple-Choice Test Items. Erlbaum.] There is one right answer, usually represented by a single answer option, though it is sometimes divided into two or more options, all of which the subject must identify correctly. Such a question may look like this:

The number of right angles in a square is:
a) 2
b) 3
c) 4
d) 5

Test authors generally create incorrect response options, often referred to as distracters, which correspond with likely errors. [Kehoe, Jerard (1995). Writing multiple-choice test items. Practical Assessment, Research & Evaluation, 4(9). Retrieved February 26, 2008 from http://PAREonline.net/getvn.asp?v=4&n=9 ] For example, distracters may represent common misconceptions that arise during the developmental process. Constructing effective distracters is a key challenge in writing multiple-choice items with strong psychometric properties. Well-designed distracters, considered in combination, can attract considerably more than 25% of the weakest students, thereby reducing the effects of guessing on total scores. Writing such items may therefore require considerable skill and experience on the part of the item developer.

A graph depicting the functioning of a multiple-choice question is shown in Figure 1. The x-axis represents an ability continuum and the y-axis the probability of any given choice being selected by an examinee with a given level of ability. The y-axis runs from 0 to 1, while the x-axis represents standardized scores with a mean of 0 and a standard deviation of 1, which can be based on either the items or the examinees.

The grey line maps ability to the probability of a correct response according to the Rasch model, a psychometric model used to analyse test data. The correct response in the example shown in Figure 1 is E. The proportion of students along the ability continuum who chose the correct response is highlighted in pink. The graph also shows the proportion of students opting for each of the other choices along the ability continuum, as indicated in the legend. The proportion of students at about −1.5 on the scale (i.e., of very low ability) who responded correctly to this item is approximately 0.1, which is below the proportion expected under pure guessing (0.2 with five options).
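
Under the Rasch model, the probability of a correct response is a logistic function of the difference between the examinee's ability and the item's difficulty. A minimal sketch, assuming a hypothetical item difficulty of 0.0 on the standardized scale (not a value taken from Figure 1):

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Rasch model: P(correct) = exp(a - d) / (1 + exp(a - d))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Probability of a correct response at three points on the ability continuum.
for theta in (-1.5, 0.0, 1.5):
    print(f"ability {theta:+.1f}: P(correct) = {rasch_probability(theta, 0.0):.2f}")
# ability -1.5: P(correct) = 0.18
# ability +0.0: P(correct) = 0.50
# ability +1.5: P(correct) = 0.82
```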

An attractive feature of multiple-choice questions is that they are particularly easy to score. [Test Item Writing - From the University of Alabama at Birmingham [http://www.uab.edu/uasomume/cdm/test.htm] ] Scoring by machine (such as a Scantron reader) or by software for computer-based tests can be done automatically and instantly, which is particularly valuable when there are not enough graders available for a large class or a large-scale standardized test. Multiple-choice tests are also valuable when the test sponsor wants immediate score reporting available to the examinee; it is impossible to provide a score at the end of the test if the items are not actually scored until several weeks later.
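
As a rough illustration of why such scoring is fast, a minimal sketch of number-correct scoring against a fixed answer key; the key and responses here are hypothetical, and real systems read them from scanned answer sheets or test-delivery software:

```python
# Hypothetical answer key for a five-item multiple-choice test.
ANSWER_KEY = ["C", "A", "D", "B", "C"]

def raw_score(responses):
    """Raw number-correct score: count the responses that match the key."""
    return sum(given == key for given, key in zip(responses, ANSWER_KEY))

print(raw_score(["C", "A", "B", "B", "C"]))  # 4 of 5 correct
```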

This format is not, however, appropriate for assessing all types of skills and abilities. Poorly written multiple-choice questions often overemphasize simple memorization and de-emphasize processes and comprehension. They also leave no room for disagreement or alternative interpretation, making them particularly unsuitable for the humanities, such as literature and philosophy.

Free response items

[Image: University of Vienna, June 2005]
Free-response questions do not pose as much of a challenge to the test author as multiple-choice items, but evaluating the responses is a different matter. Effective scoring involves reading the answer carefully and looking for specific features, such as clarity and logic, which the item is designed to assess. Often, the best results are achieved by awarding scores according to explicit ordered categories that reflect increasing quality of response. Doing so may involve the construction of marking criteria and support materials, such as training materials for markers and samples of work that exemplify categories of responses. Typically, these questions are scored according to a uniform grading rubric for greater consistency and reliability.
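
Where such ordered categories are made explicit, scoring reduces to mapping each response to one category. A minimal sketch, assuming a hypothetical four-category rubric (the descriptors below are illustrative, not from any published rubric):

```python
# Hypothetical ordered rubric: higher scores reflect increasing quality
# of response, as described above.
RUBRIC = {
    0: "no relevant response",
    1: "identifies the issue but offers no support",
    2: "identifies the issue with partial supporting evidence",
    3: "clear, logical argument with full supporting evidence",
}

def record_mark(category: int) -> int:
    """Accept a marker's judgement only if it falls in a defined category."""
    if category not in RUBRIC:
        raise ValueError(f"{category} is not a defined rubric category")
    return category

print(record_mark(2), "-", RUBRIC[2])
```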

At the other end of the spectrum, scores may be awarded according to superficial qualities of the response, such as the presence of certain important terms. In this case, it is easy for test subjects to fool scorers by writing a stream of generalizations or non sequiturs that incorporate the terms that the scorers are looking for. This, along with other factors that limit their reliability and cost/measurement ratio, has caused the usefulness of this item type to be questioned. [Hollingworth, L., Beard, J.J., & Proctor, T.P. (2005). An Investigation of Item Type in a Standards-Based Assessment. "Practical Assessment, Research, and Evaluation, 12"(18). [http://pareonline.net/pdf/v12n18.pdf] ]

While free-response items have disadvantages, they are able to offer more differentiating power between examinees. [Vale, C.D., & Weiss, D.J. (1977). A Comparison of Information Functions of Multiple-Choice and Free-Response Vocabulary Items. Technical Report, University of Minnesota Psychometric Methods Laboratory. [http://stinet.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA039255] ] However, this might be offset by the length of the item; if a free-response item provides twice as much measurement information as a multiple-choice item, but takes as long to complete as three multiple-choice items, is it worth it?
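
One rough way to frame that question is information gained per unit of testing time. A back-of-the-envelope sketch using the hypothetical figures from the paragraph above (the numbers are illustrative, not empirical):

```python
# Baseline multiple-choice item: 1 unit of information in 1 unit of time.
mc_info, mc_time = 1.0, 1.0
# Free-response item from the example: twice the information, triple the time.
fr_info, fr_time = 2.0, 3.0

print(mc_info / mc_time)            # 1.0 information per unit of time
print(round(fr_info / fr_time, 2))  # 0.67: less efficient despite more information
```

Under this framing, the free-response item measures less efficiently per unit of testing time, even though it provides more information overall.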

Performance test or practical examination

Knowledge of "how to do" something does not lend itself well to either free-response or multiple-choice questions; it can be demonstrated directly only by a performance test. [Performance Testing Council - Why Performance Testing? [http://www.performancetest.org/whytest.html] ] Art, music, and language fall into this category, as do non-academic disciplines such as sports and driving. Students of engineering are often required to present an original design or computer program developed over the course of days or even months.

A practical examination may be administered by an examiner in person (in which case it may be called an "audition" or a "tryout") or by means of an audio or video recording. It may be administered on its own or in combination with other types of questions; for instance, many driving tests in the United States include a practical examination as well as a multiple-choice section regarding traffic laws.

Tests of the sciences may include laboratory experiments (practicals/laboratory sessions) to make sure that the student has learned not only the body of knowledge comprising the science but also the experimental methods through which it has been developed. Again, the use of explicit criteria is generally beneficial in the marking of practical examinations or performances.

Criticism

General aptitude tests, such as the SAT in the United States, are used in certain countries as a basis for entrance into colleges and universities. One criticism of this use is that such tests are known to be subject to practice effects and do not necessarily assess the accumulated learning of students during their schooling years. However, the goal of these tests is "not" to assess accumulated learning; they are designed to measure aptitude, not achievement.

Similarly, college entrance exams are criticized for not predicting first-year university grade point average (GPA) as accurately as high school GPA does. [FairTest criticism of the SAT [http://www.fairtest.org/sat-i-faulty-instrument-predicting-college-success] ] However, the intent is for test scores to be used "along with" other measures in university selection; large-scale test scores are only one aspect of the university selection process, and universities are free to place more emphasis on high school GPA or extracurricular activities. Any criticism might be better directed at a university than at the test itself, which most people consider fair. [Domino, G., & Domino, M.L. (2006). "Psychological Testing: An Introduction". Cambridge University Press. p. 342. Preview available at [http://books.google.com/] ]

The content of the exam might not correspond with its intended use or representation. For example, an exam might contain geometry, calculus, and number theory questions in proportions quite different from those found in the domain for which the exam is intended to serve as a predictor of future performance. As an extreme and unrealistic example, a mathematics exam may ask solely about the names, birthdates, and countries of origin of various mathematicians, when such knowledge is of little importance in a mathematics curriculum. This need for a test to be valid for its use is AERA and [http://www.ncme.org NCME] Standard 1.1 for educational and psychological testing. [Standard 1.1 - American Educational Research Association, American Psychological Association, and the National Council on Measurement in Education. "Standards for Educational and Psychological Testing". Washington, DC: American Educational Research Association.] If a test is used for other than its intended purpose, the burden of proof of validity rests upon its user. [Standard 1.4 - American Educational Research Association, American Psychological Association, and the National Council on Measurement in Education. "Standards for Educational and Psychological Testing". Washington, DC: American Educational Research Association.]

People vary in their susceptibility to stress. Some are virtually unaffected and excel on tests, while in extreme cases individuals can become very nervous and forget large portions of the exam material. To counterbalance this, teachers and professors often do not grade their students on tests alone, placing considerable weight on homework, attendance, in-class discussion, and laboratory investigations (where applicable). Conversely, in some high-stakes testing cases, the pressure induces examinees to rise to meet the exam's high expectations.

Through specialized training on material and techniques specifically created to suit the test, students can be "coached" on the test to increase their scores without actually significantly increasing knowledge of the subject matter. However, research on the effects of coaching remains inconclusive, and the increase might be simply due to practice effects. [Domino, G., & Domino, M.L. (2006). "Psychological Testing: An Introduction". Cambridge University Press. page 340 Preview available at [http://books.google.com/] ]

Although test organizers attempt to prevent it and impose strict penalties for it, academic dishonesty (cheating) can be used to obtain an advantage over other test-takers. On a multiple-choice test, lists of answers may be obtained beforehand. On a free-response test, the questions may be obtained beforehand, or the subject may write an answer that creates the illusion of knowledge. If students sit in proximity to one another, it is also possible to copy answers from other students, especially from one who is known to understand the material well. Despite such issues, tests are arguably less susceptible to cheating than other tools of learning evaluation: laboratory results can be fabricated, and homework can be done by one student and copied by rote by others. The presence of a responsible test administrator, in a controlled environment, helps to guard against cheating.

See also

* Academic dishonesty
* Patterns in multiple-choice tests
* Blue book exam, used in free-response exams
* List of standardized tests in the United States
* Aptitude Battery
* Exam Stress
* High-stakes testing

External links

* [http://www.testpublishers.org/faq.htm Association of Test Publishers FAQs]
* [http://www.ncme.org National Council of Measurement in Education]
* [http://www.apa.org/science/jctpweb.html Joint Committee on Testing Practices]

International exams

* GCSE and A-level — used in the UK except Scotland
* Standard Grade, Higher Grade, and Advanced Higher — used in Scotland
* Abitur — used in Germany
* Matura/Maturita — used in Austria, Bosnia and Herzegovina, Bulgaria, Croatia, Italy, Liechtenstein, Hungary, Macedonia, Montenegro, Poland, Serbia, Slovenia, Switzerland and Ukraine; previously used in Albania
* International Baccalaureate Diploma Programme — international exam
* Internationella prov — used in Sweden
* Matura Shtetërore — used in Albania
* International General Certificate of Secondary Education (IGCSE) — international exam
* Junior Certificate and Leaving Certificate — used in the Republic of Ireland

Further reading

* Airasian, P. (1994) "Classroom Assessment," Second Edition, NY: McGraw-Hill.
* Cangelosi, J. (1990) "Designing Tests for Evaluating Student Achievement." NY: Addison-Wesley.
* Gronlund, N. (1993) "How to make achievement tests and assessments," 5th edition, NY: Allyn and Bacon.
* Haladyna, T.M. & Downing, S.M. (1989) Validity of a Taxonomy of Multiple-Choice Item-Writing Rules. "Applied Measurement in Education," 2(1), 51-78.
* Monahan, T. (1998) [http://torinmonahan.com/papers/testing.pdf The Rise of Standardized Educational Testing in the U.S. – A Bibliographic Overview] .
* Wilson, N. (1997) Educational standards and the problem of error. "Education Policy Analysis Archives," vol. 6, no. 10. http://olam.ed.asu.edu
