LSAT - Historical Range of Correct Answers for Targeted Scores
Students preparing for the LSAT often wonder exactly how many correct answers are required to obtain a particular score. The chart below lists the number of correct answers needed to achieve scores from 150 to 180 (in five-point increments) on every released test over the last eight years.
*** Indicates that there was no raw score capable of producing that scaled score for this test.
One of the more noticeable trends in the above chart is that, depending on the test year, different raw scores translate into equivalent scaled scores. This apparent discrepancy arises because the LSAT varies slightly in difficulty from one administration to the next. To account for these variances in test "toughness," the test makers adjust the Scoring Conversion Chart for each LSAT so that similar scores from different tests mean the same thing. For example, the LSAT offered in June of a given year may be somewhat harder than the LSAT offered in December, but by making the June conversion scale "looser" than the December scale, a 160 on each test represents the same level of performance.
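The conversion-chart idea above can be sketched in a few lines of Python. The raw-score cutoffs below are purely hypothetical (the actual LSAT charts are published by LSAC per test), but they show how the same raw score can map to different scaled scores on two administrations of different difficulty:

```python
# Hypothetical scoring conversion charts: scaled score -> minimum raw score.
# A "looser" scale (harder test) needs fewer correct answers for each score.
JUNE_CHART = {150: 56, 155: 64, 160: 72, 165: 79}      # harder test
DECEMBER_CHART = {150: 59, 155: 67, 160: 75, 165: 82}  # easier test

def scaled_from_raw(raw_correct, chart):
    """Return the highest scaled score whose raw-score cutoff is met."""
    earned = [scaled for scaled, cutoff in chart.items() if raw_correct >= cutoff]
    return max(earned) if earned else None

# The same 73 correct answers earn a 160 on the harder June test
# but only a 155 on the easier December test.
print(scaled_from_raw(73, JUNE_CHART))      # 160
print(scaled_from_raw(73, DECEMBER_CHART))  # 155
```

Again, the cutoff numbers here are illustrative only; the point is the mechanism, not the specific values.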
Test takers can draw important conclusions about their own performance from both the average raw scores and the standard deviations. For instance, though the average raw score corresponding to a scaled 160 is 73.08, the standard deviation shows that a majority of the raw scores fall within ±1.82 of this number, or from roughly 71 to 75. A student aiming for a 160 on an upcoming test can therefore expect, with reasonable confidence, that correctly answering somewhere between 71 and 75 questions would result in that score. Similar reasoning applies for a score of 170, where, with the standard deviation adjustment, a raw score between roughly 86 and 90 is likely needed.
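The ±1 standard deviation band described above reduces to simple arithmetic, shown here with the mean (73.08) and standard deviation (1.82) quoted for a scaled 160:

```python
import math

def raw_score_band(mean, sd):
    """Integer raw-score range covering mean minus one SD to mean plus one SD,
    widened outward to whole questions."""
    return math.floor(mean - sd), math.ceil(mean + sd)

# 73.08 - 1.82 = 71.26 and 73.08 + 1.82 = 74.90, i.e. roughly 71 to 75.
print(raw_score_band(73.08, 1.82))  # (71, 75)
```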
Perhaps most important
of all for the potential test taker is to realize that achieving a desired
score does not require perfect performance. Each of the raw scores above is the number correct out of 99, 100, or 101 questions (the October 1997 test had one question withdrawn after the test was administered, the October 2004 test had only 100 questions, and the December 2004 test had only 100 questions and then one question was withdrawn after the test was administered), so it is clear that missed questions, within reason, are acceptable regardless of the desired score. Even perfect scores usually allow for two or three
incorrect answers. Again, the averages and standard deviations
listed are useful tools in determining an acceptable number of missed
questions, whether setting pre-test objectives or evaluating your performance
in the week following the LSAT when scores may still be cancelled.
1. Also worth noting is that, while raw scores tend to be relatively similar around the 150 range (the 50th percentile range), variation in test difficulty becomes much more apparent as you move away from the adjusted median. The standard deviation line below the chart can be used to gauge these variations: a single standard deviation interval (±) represents the range around the average within which the majority of the raw scores fall. A larger standard deviation means greater spread in the raw scores and, consequently, larger variation from the given mean. Thus, the standard deviation of 1.7 corresponding to a scaled 170 shows roughly a 40% increase in spread compared to the standard deviation of 1.2 for a 150; that is, the range of raw scores that typically produced a 170 is about 40% wider than the range that produced a 150. This is because questions of high difficulty are what tend to distinguish one LSAT from another: while each test yields a relatively equivalent number of correctly answered easy and medium-difficulty questions, the number of correctly answered high-difficulty questions varies greatly. A test on which few high-difficulty questions are answered correctly therefore requires fewer correct answers to reach a particular scaled score than does a test on which a larger percentage of difficult questions are answered correctly (compare the more difficult June 2002 LSAT scale with the somewhat easier October 1997 LSAT scale). These principles, as well as percentile ranking charts, are discussed in further detail here.
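The percentage comparison in the note is a straightforward relative-increase calculation on the two quoted standard deviations:

```python
# Relative increase in standard deviation from a scaled 150 (SD 1.2)
# to a scaled 170 (SD 1.7), using the values quoted in the note above.
sd_150, sd_170 = 1.2, 1.7
increase = (sd_170 - sd_150) / sd_150  # (1.7 - 1.2) / 1.2 = 0.4166...
print(f"{increase:.0%}")  # 42%
```

The exact figure is about 42%, which is why the note rounds it to "roughly 40%."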