LSAT - Historical Range of Correct Answers for Targeted Scores

Students preparing for the LSAT often wonder exactly how many correct answers are required to obtain a particular score. The chart below lists the number of correct answers needed to achieve scaled scores from 150 to 180 (in five-point increments) on every released test from December 1996 through June 2005.

LSAT                   150   155   160   165   170   175   180
December 1996           55    64    72    80    87    93    98
February 1997           54    62    70    79    86    93    98
June 1997               56    65    74    82    89    94    99
October 1997            55    63    71    78    84    90    96
December 1997           56    64    72    80    87    93    99
June 1998               56    65    73    82    89    94    99
September 1998          56    64    72    80    87    93    98
December 1998           54    62    70    78    85    91    97
June 1999               54    63    71    80    88    94    99
October 1999            56    65    73    82    89    94    99
December 1999           55    63    72    79    87    93    98
June 2000               56    65    73    81    87    93    98
October 2000            55    64    72    80    87    93    98
December 2000           54    63    72    80    87    93    98
June 2001               57    66    74    81    88    93    98
October 2001            57    66    74    82    89   ***    99
December 2001           57    66    74    82    89    95    99
June 2002               57    67    76    84    91    96   100
October 2002            57    66    74    82    88    93    98
December 2002           55    63    71    80    88    94    99
June 2003               58    67    76    84    91   ***    99
October 2003            58    67    75    83    89    94    99
December 2003           56    66    75    83    90    95    99
June 2004               58    67    75    82    89    94    99
October 2004            60    68    76    84    90   ***    98
December 2004           56    65    73    81    87    92    97
June 2005               61    69    77    84    90    94    98

Averages             56.26 65.00 73.22 81.22 88.07 93.38 98.37
Standard Deviation¹   1.72  1.84  1.91  1.78  1.69  1.24  0.84
*** Indicates that there was no raw score capable of producing that scaled score for this test.

One of the more noticeable trends in the above chart is that, depending on the administration, different raw scores translate into the same scaled score. The reason for this apparent discrepancy is that the LSAT varies slightly in difficulty from one administration to the next. To account for these variations in test "toughness," the test makers adjust the Scoring Conversion Chart for each LSAT so that a given scaled score means the same thing no matter which test produced it. For example, the LSAT offered in June of a given year may be more difficult than the LSAT offered in December, but by making the June scale "looser" than the December scale (that is, by requiring fewer correct answers for each scaled score), a 160 on either test represents the same level of performance.
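
To make the idea concrete, here is a minimal Python sketch that looks up the highest five-point scaled score a given raw score reaches on two tests from the chart. It is illustrative only: `SCALED_STEPS`, `THRESHOLDS`, and `at_least` are names invented for this example, and a real conversion chart assigns a scaled score to every raw score, while only the chart's five-point checkpoints are used here.

```python
from bisect import bisect_right

# Raw-score checkpoints from the chart above for scaled scores
# 150, 155, ..., 180.  Real conversion charts cover every raw score;
# these five-point checkpoints are enough to show the equating idea.
SCALED_STEPS = [150, 155, 160, 165, 170, 175, 180]
THRESHOLDS = {
    "June 2002":    [57, 67, 76, 84, 91, 96, 100],  # tighter scale
    "October 1997": [55, 63, 71, 78, 84, 90, 96],   # looser scale
}

def at_least(test: str, raw: int) -> int | None:
    """Highest five-point scaled score that `raw` correct answers
    reaches on the given test, or None if below the 150 checkpoint."""
    i = bisect_right(THRESHOLDS[test], raw)
    return SCALED_STEPS[i - 1] if i else None

# The same 84 correct answers clears the 170 checkpoint on the harder
# October 1997 test but only the 165 checkpoint on June 2002.
print(at_least("October 1997", 84))  # 170
print(at_least("June 2002", 84))     # 165
```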

Test takers can draw important conclusions about their own performance from both the average raw scores and the standard deviations. For instance, the average raw score corresponding to a scaled 160 is 73.22, and the standard deviation shows that the majority of those raw scores fall within ±1.91 of that average, or roughly from 71 to 75. A student aiming for a 160 on an upcoming test can therefore expect, with a high degree of confidence, that correctly answering somewhere between 71 and 75 questions would produce that score. Similar reasoning applies to a score of 170, where, after the same one-standard-deviation adjustment, a raw score between roughly 86 and 90 is likely needed.
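
These figures can be checked directly against the 160 column of the chart. The short sketch below uses Python's standard `statistics` module to recompute the average, the sample standard deviation (the statistic reported in the chart), and the one-standard-deviation band; the variable names are ours, and the values are simply the 27 entries of the 160 column in chart order.

```python
from statistics import mean, stdev

# Raw scores needed for a scaled 160, December 1996 through June 2005,
# taken from the 160 column of the chart above.
raw_160 = [72, 70, 74, 71, 72, 73, 72, 70, 71, 73, 72, 73, 72, 72,
           74, 74, 74, 76, 74, 71, 76, 75, 75, 75, 76, 73, 77]

avg = mean(raw_160)   # 73.22
sd = stdev(raw_160)   # 1.91 (sample standard deviation)

print(f"average: {avg:.2f}, standard deviation: {sd:.2f}")
print(f"one-standard-deviation band: {avg - sd:.1f} to {avg + sd:.1f}")
# -> average: 73.22, standard deviation: 1.91
# -> one-standard-deviation band: 71.3 to 75.1
```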

Perhaps most important of all for the potential test taker is to realize that achieving a desired score does not require perfect performance. Each of the raw scores above is the number correct out of 99, 100, or 101 questions (the October 1997 test had one question withdrawn after it was administered, the October 2004 test had only 100 questions, and the December 2004 test had only 100 questions, one of which was withdrawn after the test was administered), so missed questions, within reason, are acceptable regardless of the desired score. Even a perfect 180 typically allows for two or three missed questions. Again, the averages and standard deviations listed above are useful tools for determining an acceptable number of missed questions, whether you are setting pre-test objectives or evaluating your performance in the week after the LSAT, while scores may still be cancelled.
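
As a rough back-of-the-envelope sketch, the chart's averages and standard deviations can be turned into an estimate of how many questions can be missed while still reaching a target score. The snippet below assumes a 101-question test (the usual length in this period; a few administrations had 99 or 100 scored questions), and `allowable_misses` is an illustrative helper, not an official formula.

```python
# Averages and standard deviations from the bottom of the chart above.
AVG_RAW = {150: 56.26, 155: 65.00, 160: 73.22, 165: 81.22,
           170: 88.07, 175: 93.38, 180: 98.37}
SD_RAW = {150: 1.72, 155: 1.84, 160: 1.91, 165: 1.78,
          170: 1.69, 175: 1.24, 180: 0.84}
TOTAL_QUESTIONS = 101  # assumed; a few tests had 99 or 100 questions

def allowable_misses(target: int) -> tuple[int, int]:
    """Approximate range of missed questions consistent with `target`,
    using the average raw score plus or minus one standard deviation."""
    need_low = AVG_RAW[target] - SD_RAW[target]
    need_high = AVG_RAW[target] + SD_RAW[target]
    return (round(TOTAL_QUESTIONS - need_high),
            round(TOTAL_QUESTIONS - need_low))

print(allowable_misses(170))  # (11, 15) -- roughly 11-15 misses
print(allowable_misses(180))  # (2, 3)  -- even a 180 tolerates a few
```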


Footnote:
1. Also worth noting: while the raw score needed for a 150 (roughly the 50th percentile) stays fairly similar from test to test, the variation grows as you move up toward the 155-160 range. The standard deviation row at the bottom of the chart measures this variation, since a single standard deviation interval (±) marks the range around the average within which the majority of the raw scores fall; a larger standard deviation therefore means a wider spread of raw scores from one administration to the next. Thus, the standard deviation of 1.91 for a scaled 160 is roughly 11% larger than the 1.72 for a 150, so the range of correct answers that typically produced a 160 is correspondingly wider than the range that produced a 150. (The much smaller standard deviations at 175 and 180 simply reflect the ceiling of the scale, where there is little room left for the cutoff to move.) This variation arises because questions of high difficulty are what tend to distinguish one LSAT from another: while each test yields a relatively equivalent number of correctly answered easy and medium-difficulty questions, the number of correctly answered high-difficulty questions varies considerably. A test on which relatively few high-difficulty questions are answered correctly therefore requires fewer correct answers to reach a particular scaled score than one on which a larger share of difficult questions are answered correctly (compare the demanding June 2002 scale, where 91 correct answers were needed for a 170, with the more lenient October 1997 scale, where 84 sufficed). These principles, as well as percentile ranking charts, are discussed in further detail here.
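
For reference, the comparison in this footnote can be read straight off the chart's standard deviation row. The sketch below (values copied from that row) expresses each column's spread relative to the 150 column, showing that the spread peaks around 160 and narrows sharply at 175 and 180.

```python
# Standard deviations from the chart's bottom row, keyed by scaled score.
SD_ROW = {150: 1.72, 155: 1.84, 160: 1.91, 165: 1.78,
          170: 1.69, 175: 1.24, 180: 0.84}

for score, sd in SD_ROW.items():
    print(f"{score}: {sd:.2f} ({sd / SD_ROW[150] - 1:+.0%} vs. 150)")
# 160 shows the widest spread (+11% vs. 150); the spread collapses at
# 175 (-28%) and 180 (-51%) as the scale approaches its ceiling.
```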