Article 3 – The Mismatch Hypotheses in Law School Admissions – By Gregory Camilli & Darrell D. Jackson

The Mismatch Hypotheses in Law School Admissions

Gregory Camilli* & Darrell D. Jackson



I. Introduction

To fully understand the mismatch hypotheses, analysis must proceed in two key ways.  First, one must examine the strength of the evidence pertaining to the benefits and detriments of attending an elite law school, how an evaluation of student-school match can help shape a broader perspective on admission practices, and the connection between student-school match and those practices.  Second, one must determine the real benefits of attending an elite law school.[1] If highly qualified students truly receive an academic boost from attending an elite school, then academic admission criteria are arguably strongly related to the empirical, if not the intended, outcomes of such attendance.  The same holds if marginally qualified individuals are harmed by elite school attendance.  For instance, students admitted to elite law schools under affirmative action criteria have, on average, academic credentials below those of students not receiving such preferences.[2] The potential boost or harm resulting from this mismatch can be used to evaluate the effects of such admission policies on student performance, while in the absence of preferential admission, the match effect assesses the value added to student outcomes by elite school attendance.[3] Considering the significant focus on the possibility of negative match effects from affirmative action in law school and undergraduate education, whether positive effects exist is also important to evaluate.[4]

Studies of student-school match in higher education are loosely related to two well-known types of research in K-12 education.  First, in studies of student tracking, the focus is on student performance in mixed-ability groups compared to tracked groups of similar ability.[5] The question raised in this comparison is whether students learn more effectively in separate groups that are more homogeneous in ability.[6] Issues of particular importance are: how to determine the most effective educational context for gifted students; how under-prepared individuals might perform in a gifted track; and how gifted students might perform in regular tracks.[7] A second stream of research focuses on promotion from kindergarten to first grade.[8] This research asks whether an underperforming student benefits from retention to complete an additional year of kindergarten.[9] Not surprisingly, the above areas of research are hotly contested and involve issues of race, ethnicity, and educational effectiveness.[10] Mentioning these topics is intended only to bridge the related issues in law school admissions for policymakers and researchers familiar with the tracking and retention literature.

In higher education, initial student-school match has been defined as the gap between the strength of a student’s entering credentials at a particular school (A) and that of the typical student’s credentials at that school (B).[11] The match effect is the potential increase or decrease in developed ability attributable to the difference between A and B.  According to the negative match hypothesis, a negative gap (A < B) will result in less learning than if the underqualified student had hypothetically attended a less elite school where credentials match (A ≈ B).[12] Under the positive match hypothesis, a good match (A ≈ B) will result in more learning than if a highly qualified student had hypothetically attended a less elite school (A > B).[13]

Herein, the match effects for five different student populations (Native American, Asian, Black, Hispanic, and White)[14] are gathered and analyzed to determine whether these effects provide support for the match hypotheses above with respect to law school grades, graduation, and bar passage.  In conclusion, the implications of the match effects when choosing admission criteria are evaluated.

Other classifications of students could also be used to investigate the match hypotheses.  If the negative match hypothesis is true, then the resulting harm is not limited to students of color, but extends to any student who is mismatched for any reason.  The match hypotheses are not fundamentally about race or ethnicity, nor do they require explicit preferential selection.[15] Match hypotheses are most accurately described as expectations about the prerequisites for, and contexts of, student achievement.[16] A number of plausible scenarios have been offered about how mismatch might translate into diminished student outcomes.[17] White and non-White students can be mismatched based on factors such as legacy status, residential preferences, and athletic abilities.[18] Yet, preferential admissions based on race and ethnicity from affirmative action programs have been implicitly assumed to be the only way minority students can be mismatched.[19] Clearly, affirmative action policies have led to a more diverse population of lawyers.[20] Moreover, studying the effects of racial or ethnic preferences is important because colorblind admissions policies may reduce this diversity.[21] To estimate value-added effects, this study compares match effects for minority students with those for White students, the largest reference category.

The following discussion begins with a description of the potential outcomes model.[22] This model is used as a methodological framework, providing a context for both the ensuing literature synthesis and the statistical methods section.[23] Researchers examining match effects in both law school and undergraduate admissions have adopted the potential outcomes model.[24] When applying this model to law school admissions, the core element is the counterfactual question: would a student attending an elite law school via preferential admission have learned more at a non-elite law school?  Furthermore, would a student with adequate academic credentials for an elite law school have learned less at a non-elite law school?  The former question has attained prominence in legal education research, but a number of other studies address the negative match hypothesis more broadly.  Our research has not found any studies focusing on the positive match effect and its associated implications for admission criteria.

Two law school admissions studies have been purported to decisively refute the negative match hypothesis.  Ho carried out an exact multivariate matching analysis to estimate match effects.[25] The current analysis differs in three important ways.  First, Ho reported match effects solely for Black and White students.[26] The following study also considers match effects for Hispanic and Asian students.  Second, Ho obtained match effects only for adjacent tiers of law schools.[27] This methodological choice results in a severe restriction of range for the quantitative variables, including law school eliteness (measured by tier), and a reduced sample size.[28] Third, Ho matched students within each category of race using three variables: sex, Law School Admission Test (LSAT) score, and undergraduate grade point average (GPA).[29] In the following paper, nonequivalence is controlled by using at least ten matching variables.  In addition, the full elite-school sample size is retained through multiple imputation for missing data.[30] As noted by Sander, Rothstein and Yoon also performed an analysis of mismatch, but they estimated the treatment effect using only the upper four quintiles of students on the academic index, which is a combination of LSAT scores and undergraduate GPA.[31] This choice had the effect of retaining only about 25% of the Black sample.[32] For purposes of the following study, elite-school students within all racial or ethnic groups were retained, which substantially mitigates the problems raised above.

The following discussion maintains an important semantic distinction between a ‘match hypothesis’ and a ‘match effect.’  The phrase ‘mismatch effect’ is avoided because it encourages the presumption that the negative match hypothesis is correct.  Following the presentation of the methodological framework and a brief literature review, new analyses addressing the positive match hypothesis using propensity score matching are reported.

II. Methodological Framework

How would a legal education be viewed if affirmative action policies were abrogated nationwide?  More specifically, “What would have happened to minorities receiving racial preferences had the preferences not existed?”[33] This counterfactual question has been posed in a number of different variations and generates implications for admission practices.[34] An example of a negative match hypothesis is: “If one is at risk of not doing well academically at a particular school, one is better off attending a less elite school and getting decent grades.”[35] This statement, a form of the negative match hypothesis, fits well into the framework of the potential outcomes model.  The potential outcomes model, described below, is a well-accepted methodology for estimating a causal effect, in this case the match effect.

Law school performance partially depends on entering credentials.[36] However, once a student enters law school, the proposed match effect is determined by the net gain or loss relative to her hypothetical performance had she attended a less elite institution.[37] This notion is similar to a value-added (or value-subtracted) effect.[38] A simple or naïve comparison of statistics across more elite and less elite schools is distorted by differences in incoming credentials.[39] Students at elite schools generally have higher incoming credentials, which creates tougher competition for grades and results in a selection bias.[40] The term ‘selection bias’ refers to the initial difference in qualifications between students enrolling in higher and lower tier schools, whereas the term ‘selectivity effect’ refers to the net gain resulting from attending a more selective school.  The following study refers to the selectivity effect as the ‘elite-non-elite effect.’  This distinction is a practical, rather than ideal, way of describing the effect of attending a school with more, as opposed to less, stringent admission criteria.  When the elite-non-elite effect is estimated for students with qualifications well below those of their particular school’s average student, the effect can also be interpreted as a measure of student-school match.

In any situation with a treatment group (T) and a control group (C), two potentially different outcomes exist for each individual depending on whether she receives the treatment or is part of the control group.  In this study, attending an elite law school is akin to the treatment condition, and attending a non- or less-elite school is akin to the control condition.  Let the outcomes for individual i in these two cases be denoted as Y_i(T) for T and Y_i(C) for C, so the treatment effect can be expressed as Δ_i = Y_i(T) − Y_i(C).  The average treatment effect (ATE)[41] is then defined as the average Δ_i in the population of students in question.

The fundamental problem with estimating the ATE is that for individual i, the only possibility is to measure the outcome under one condition, either the treatment or the control condition, but not both.  The ATE is the “expected what-if difference in achievement that would be observed if we could educate a randomly selected student” in both an elite and non-elite school.[42] In the context of this study, the ATE is the average effect defined with respect to the population of students similar to those in the sample.[43] In addition, because individuals are observed in only one group, an approximation of the treatment effect (i.e., the elite-non-elite effect) might be calculated as the simple difference in group means, which will be referred to as the naïve or uncontrolled estimate (NTE).[44] In the context of this study, the NTE is the unadjusted achievement difference between students at elite and non-elite schools, which is strongly affected by the differential qualifications of these students.  Without random assignment to comparison conditions, different kinds of systematic bias can affect the NTE.  This systematic bias must be eliminated or reduced by using a set of matching or conditioning variables to control for initial differences between the compared groups.  When conditioning is successfully accomplished, the ceteris paribus[45] qualification for creating a valid comparison across treatment and control is satisfied.  Recognizing that estimators of treatment effects require regions of common support is critical: for conditioning to succeed, individuals from both the treatment and control groups with similar values of the control variables must be present.  The theoretical ideal is a truly randomized experiment, in which common support is built in by design.  This standard is approximated more or less closely in observational studies, depending on the quality of the control variables.
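The contrast between the ATE and the naïve estimate can be illustrated with a minimal simulation.  The numbers and the selection rule below are entirely hypothetical and are used only to show how selection on credentials inflates the NTE relative to the true treatment effect:

```python
import random

random.seed(0)

def simulate_student():
    # Hypothetical setup: a latent credential level raises both potential
    # outcomes and the chance of attending an elite school.
    credential = random.gauss(0.0, 1.0)
    y_control = credential          # potential outcome at a non-elite school
    y_treated = credential + 0.10   # potential outcome at an elite school
    attends_elite = credential > 0.5  # selection on credentials, not random
    return y_treated, y_control, attends_elite

students = [simulate_student() for _ in range(100_000)]

# True ATE: average individual treatment effect (0.10 by construction).
ate = sum(yt - yc for yt, yc, _ in students) / len(students)

# Naive estimate (NTE): difference in observed group means, contaminated
# by selection bias because elite attendees had higher credentials anyway.
elite = [yt for yt, _, e in students if e]
non_elite = [yc for _, yc, e in students if not e]
nte = sum(elite) / len(elite) - sum(non_elite) / len(non_elite)

print(round(ate, 2))  # close to the designed effect of 0.10
print(round(nte, 2))  # far larger than 0.10 because of selection bias
```

The gap between the two printed values is exactly the systematic bias that matching or conditioning variables are meant to remove.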

A second, more relevant estimator from the potential outcomes model is the average treatment effect for the treated, or ATT.[46] In this study, the ATT estimate represents the expected impact of the treatment on a student actually assigned to the treatment group, as opposed to a randomly selected student.  The corresponding counterfactual question is the “what-if difference” in achievement that would be observed if a student attending an elite school had attended a non-elite school instead.  The ATT is interpreted relative to students’ incoming academic qualifications in relation to the admitting school.  If the students are underqualified, the ATT addresses what the difference in performance would have been had a less qualified student attended a less elite law school.  If the students are adequately qualified, the ATT addresses what the difference in performance would have been had a highly qualified student attended a less elite law school.  The counterfactual represented by the ATT is more closely related to these questions than that of the ATE.  Consequently, the ATT is the parameter of interest in the present study.
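The logic of the ATT under exact matching can be sketched in a few lines.  The records and credential bands below are fabricated for illustration and are not drawn from the Bar Passage data; the point is only the mechanics: each treated (elite-school) student is compared to control students with the same observed credentials, and the differences are averaged over the treated group only:

```python
# Toy records: (credential_band, outcome, attended_elite)
records = [
    ("high", 3.6, True), ("high", 3.4, True), ("mid", 3.0, True),
    ("high", 3.3, False), ("mid", 2.9, False), ("mid", 2.7, False),
    ("low", 2.5, False),
]

def att_by_exact_matching(records):
    """Average, over treated students, of (own outcome - mean outcome of
    exactly matched controls). Treated students with no matching control
    (no common support) are skipped."""
    diffs = []
    for band, y, treated in records:
        if not treated:
            continue
        controls = [yc for b, yc, t in records if not t and b == band]
        if not controls:
            continue  # no common support in this band
        diffs.append(y - sum(controls) / len(controls))
    return sum(diffs) / len(diffs)

print(att_by_exact_matching(records))  # 0.2 for these toy records
```

Note that the average runs over treated students only; averaging over all students with common support would instead approximate the ATE.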

III. Literature Review

Within the framework of the potential outcomes model, a number of researchers have studied the effect on minority students of attending elite versus non-elite schools.[47] The common theme of these studies is that outcomes for less qualified minority students who attended elite postsecondary institutions were compared to those for similar students who attended institutions where their credentials were closer to those of the typical student (i.e., students at less elite institutions).[48] The match effect in these studies has been operationalized according to the potential outcomes model.

In the following discussion, the terms ‘match effect’ and ‘elite-non-elite effect’ are used interchangeably.  However, a negative elite-non-elite effect can only be taken as evidence in support of the negative match hypothesis if the average student in a particular group falls below the institutional average on qualification criteria.  If the examined student population is adequately qualified, the match effect can be described as a value added effect of elite law schools.

A. Undergraduate Studies of Student-School Match

Some skepticism regarding the negative match hypothesis in law school may be justified on the basis of findings from undergraduate education.  Fischer and Massey found small positive student-school match effects at the individual level for Black and Hispanic undergraduates in a sample of students attending elite colleges and universities in terms of GPA, leaving school, and perception of college success.[49] Using the same data set with a broader student sample, Massey and Mooney found no student-school match effects for retention or hours studied.[50] However, they did find a small negative effect for GPA in a group of legacy students.[51] Brand and Halaby determined that elite college attendance yielded occupational status benefits in terms of the ATE, but not the ATT estimator.[52] Alon and Tienda estimated the elite-non-elite effect for Asian, Black, Hispanic and White students on a six-year graduation rate.[53] Using an econometric modeling approach, their study found that all students tended to benefit from attending elite schools.[54] Specifically, most estimated match effects were significantly positive with a few near-zero, depending on grouping and modeling variations.[55] In sum, no support for the mismatch hypothesis was found.[56]

In a widely cited and influential study, Dale and Krueger compared life outcomes for students who were accepted and rejected by comparable schools.[57] Some of these students eventually attended more or less elite schools; thus, Dale and Krueger argued that their methodology controls for variables that may be observable to admission committees, but not statisticians.[58] No effect was found for increasing eliteness for the general population of students.  Students scoring relatively lower on the Scholastic Aptitude Test (SAT), compared to students with higher SATs, appeared to do no worse in salary over the course of their careers.[59] With the relatively small sample of Black students, the study also concluded that Black students benefited from attending elite schools just as much as other students in terms of subsequent earnings. Dale and Krueger reported a small negative match effect for GPA, but noted that GPA is not comparable across schools.[60]

Generally, none of these studies found compelling evidence in favor of the negative match hypothesis.  However, Alon and Tienda reported some evidence supporting the positive match hypothesis.[61] Regardless, extrapolating the lack of negative match effects to law school admissions would be unpersuasive due to the lack of comparability in the economic, social, and cognitive pressures faced by law and undergraduate students, as well as the contentious nature of affirmative action research.  The following studies address the negative match hypothesis in law school more directly.

B. Law School Studies of the Negative Match Hypothesis

Richard Sander brought the implications of affirmative action into clear focus.  In his article, A Systemic Analysis of Affirmative Action in American Law Schools, he conducted an extensive investigation of the negative match hypothesis in law school.[62] He found that Black applicants to law school had the same probability of admission as White students with much lower admission credentials.[63] Furthermore, he argued that if academic qualifications are the strongest determinants of law school grades, and if law school grades are the strongest determinants of bar passage, then preferential admission—concurrent with lower qualifications—to more elite schools translates into lower bar passage rates.[64]

The model in Figure 1 illustrates the reasoning implicitly suggested by Rothstein and Yoon in a path diagram.[65] This model provides only two qualification covariates (UGPA[66] and LSAT) for the sake of simplicity.  The coefficients are interpreted as follows: α is the increase to a Black student’s probability of acceptance, relative to Whites, due to admission preference; γ is the influence of the elite schools’ standards on grades (LGPA); β is the elite schools’ direct effect on bar passage; and δ is the effect of LGPA, which is arguably related to student learning or achievement, on bar passage.


Figure 1

An illustration of multiple effects in the negative match hypothesis.

Note: In this conceptual diagram, a positive effect is denoted green and a negative effect red.  Other effects are not considered.  Racial preference (green) multiplies through a positive path for the effect of Tier on bar passage (green), which results in a combined positive effect (the product αβ).  Racial preference has a negative effect when multiplied through the Tier effect on LGPA (red) and the ensuing LGPA effect on bar passage, which results in a negative combined effect (the product αγδ).  Note that the product of two positive numbers is positive, and the product of two positives and a negative is negative.[67]


The overall total effect also includes terms that are due to the exogenous association of race with qualifications.  The coefficient α can be interpreted generally as the degree of preference independent of student qualifications.  The original version of the negative match hypothesis offered by Sander[68] is implied in the model above.  In fact, this model’s argument takes the rudimentary form that if δ > β and δ > λ (where λ denotes the effect of entering qualifications, as reflected in the LSAT, on bar passage), then lower grades dominate both qualifications and academic learning in determining bar passage.[69]

According to Ho, this argument conflates the total effect of tier on bar passage (β + γδ) with the coefficient β, as shown in Figure 1 above.[70] To illustrate using Sander’s reported standardized coefficients, the indirect causal effect (γδ) of tier on bar passage (through LGPA) is -.195, while the direct effect β is .122; so the total effect is β + γδ = .122 - .195 = -.073.[71] As Ho noted, the direct and indirect effects of tier generally appear to cancel.[72] Ho found no evidence of a negative match effect for either White or Black students in a propensity score analysis—with the limitations noted above.[73] In addition, Ho observed that Sander did not appear to have a control group consistent with the potential outcomes model, or his stated counterfactuals.[74] Instead, the comparison was between “‘treatment’ Blacks (who generally receive preferences) and ‘control’ Whites (who generally do not).”[75]
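The decomposition behind Ho's point is simple path arithmetic.  The sketch below uses the standardized coefficients reported in the text; the preference coefficient α is a hypothetical positive value added purely for illustration:

```python
alpha = 0.5           # hypothetical preference coefficient (illustration only)
beta = 0.122          # direct effect: Tier -> bar passage (reported)
indirect = -0.195     # indirect effect: Tier -> LGPA -> bar passage (gamma * delta, reported)

# The total tier effect is what must not be conflated with beta alone.
total_tier_effect = beta + indirect
preference_effect = alpha * total_tier_effect

print(round(total_tier_effect, 3))  # -0.073: direct and indirect nearly cancel
print(round(preference_effect, 3))  # small and negative for any positive alpha
```

Because the two paths nearly cancel, the sign and size of the combined preference effect are far more sensitive to estimation error than either path alone.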

Ayres and Brooks followed a strategy similar to Dale and Krueger’s[76] approach and found mixed support for the negative match hypothesis.[77] In their most compelling analysis, two groups of students were identified: students who attended their first-choice school, and students who were accepted into their first-choice school but chose to attend a presumably lower-tiered school.[78] This strategy was motivated by Dale and Krueger’s[79] method and assumes that using only students accepted by their first-choice school helps control for unobserved variables contributing to student success, thus decreasing selection bias.[80] They concluded that results for first-year grades and first-attempt bar passage rates lent marginal support to the negative match hypothesis; however, their analyses of other outcomes with farther-reaching significance—ultimate bar passage and graduation rates—did not.[81] Ayres and Brooks were the first researchers to use a statistical model to estimate a parameter representing the match effect, but this effect was not the ATT.[82]

Rothstein and Yoon proposed the following methodology: the parameters for estimating match effects were constructed with two different statistical models—ordinary least squares (OLS) regression, and instrumental variables (IV)—that would likely “understate” and “overstate” the match effect, respectively.[83] Under both models, the match effects for law school graduation and bar passage were not significantly different from zero.[84] Similarly, the effects for post-graduation employment outcomes were positive under both models.[85] Although evidence for positive match effects was found, no discussion described how these results related to the elite law schools’ missions or the articulation of admission criteria and academic outcomes.  Disconfirmation of the negative match hypothesis was the sole focus.[86]

Rothstein and Yoon argued that the consistent effect across both models supports disconfirmation of the negative match hypothesis.[87] Significant negative effects were initially found in one model for law school outcomes.[88] Yet, these effects diminished to near zero only after eliminating about 75% of the Black sample to avoid potential “sample selection bias” effects.[89] Rothstein and Yoon acknowledged a substantial negative effect for graduation and bar passage among Black students in the academic index’s bottom quintile; however, they also observed that for this group match effects cannot be distinguished from sample selection bias.[90]

Barnes carried out a logistic regression analysis in which the logit of graduation was not assumed to be linearly related to credentials as represented by LSAT and UGPA.[91] Independent variables were limited to “student credentials, type of school, race, race by school type interactions, and student credentials school type interactions.”[92] According to Barnes, the degree of mismatch can be determined by comparing Black and Hispanic students to similar White students at mid-range law schools.[93] She found positive match effects for elite schools ranging from 4-5% for graduation rate[94] and 1-3% for bar passage.[95]

Though no evidence for the negative match hypothesis was found,[96] Barnes’ analysis clearly does not conform to the potential outcomes model, similar to the OLS regressions of Rothstein and Yoon, Sander, and Ayres and Brooks.[97] In these studies, Black students are compared to similar White students, and race is considered a treatment.  To the contrary, Holland argued that “causes are only those things that could, in principle, be treatments in experiments.”[98] In theory, students cannot be assigned to race categories; thus, the effects derived from a comparison among racial categories cannot usefully be described as causes.  In other words, “[a]n attribute cannot be a cause in an experiment, because the notion of potential exposability does not apply to it.  The only way for an attribute to change its value is for the unit to change in some way and no longer be the same unit.”[99] Moreover, the limited set of independent variables used by Barnes[100] and the studies cited above are highly unlikely to be adequate for estimating an effect when selectivity factors—such as motivation—play a significant role in ultimate success.

Williams provided a ‘distance’ framework for understanding mismatch similar to that of Fischer and Massey and Massey and Mooney.[101] Additionally, he reviewed the methodology of Rothstein and Yoon, Barnes, and Ayres and Brooks, which were all conducted with the Bar Passage data.[102] Williams carried out an OLS and IV analysis—roughly similar to the analyses of Rothstein and Yoon—for law school graduation and ultimate bar passage.[103] However, he also constructed new outcome measures targeted to learning.[104] A number of statistically significant and negative match effects were found, though only after omitting the middle two tiers from the analysis.[105] In particular, an 8-12% mismatch effect was found for first bar passage, and a 5-10% mismatch effect for final bar passage.[106] Williams also implemented analyses comparable to Ayres and Brooks with a similar result—no effect for ultimate bar passage was found.[107] However, he found that the results of Barnes’ analysis could not be replicated.[108] Finally, Williams concluded that evidence from the Bar Passage data most likely supports the negative match hypothesis.[109]

C. Summary and Conclusions

Currently, minimal support exists in the literature for the negative match hypothesis in law school admission.  However, several studies purporting to estimate the match effects described above are not consistent with the potential outcomes model.  This inconsistency has resulted in reports of match effects that are of incommensurate scale and that estimate different parameters.  To address some gaps in the literature pertaining to initial student-school match, an extended set of match effects is reported below, obtained with multivariate matching and maintaining a counterfactual consistent with the ATT estimator.  In addition, various other studies are only critiques of Sander and do not address estimation issues.[110] No attempt was made to review material from critique studies that did not provide original analyses.

The review by Williams is a different matter.  His review offers empirical criticism of several previous studies and presents a number of negative match effects based on OLS regression that are consistent with the negative match hypothesis.[111] In particular, several new outcome constructs were developed, motivated by the desire to assess human capital acquisition rather than labor market access.[112] Three main methodological arguments were made.  First, the study asserted that for bar passage outcomes, an analysis should be conducted only for students attempting the bar examination.[113] Presumably, the concern behind this assertion is that the unconditional approach makes no distinction among students not taking the bar examination.  For example, some students not taking the bar may be in good standing whereas others may have had low academic performance.  Second, a new learning measure was constructed in which the bar examination was coded as zero for failing students and 1/n for passing students (where n is the number of attempts).[114] Third, an attempt was made to control for the region where the bar examination was taken, due to differential passing standards.[115] Finally, Williams observed that the match estimators based on the Bar Passage data may be biased due to minority students who are not mismatched at elite law schools.[116] Each of these innovations is considered in this study.

However, student-school mismatch is only an initial description of the outcome of an admissions process.  Consequently, match effects should not be interpreted as an average attribute of students, but as the interaction of initial student qualities with enduring institutional qualities.  If mismatch occurs, it arises from both the characteristics of students and those of schools.  Similarly, if affirmative action is utilized, it may also be an enduring operational aspect of law school rather than an opportunity limited to an admissions decision.  Thus, the terms racial preference and affirmative action should not be conflated.

IV. Data

Wightman created the LSAC National Longitudinal Bar Passage Study, which provides the data used here to examine the student-school match effect in law school.[117] To protect anonymity and safeguard against sensitivity issues pertaining to the data’s use, the identity of individual law schools was omitted from the public-use data set, and cumulative grades were reported after standardization within school.[118] To allow other researchers to study the relationship between school characteristics and student outcomes, the law schools were empirically clustered on the basis of seven variables: median Law School Admission Test (LSAT) score, median undergraduate grade point average (UGPA), tuition and fees, total enrollment, selectivity, minority percentage, and faculty-to-student ratio.[119]

The study also included a cluster analysis, which “identified six naturally occurring clusters or groups of law schools, numbered 1 to 6.”[120] Barnes described the six clusters as follows: (1) “Small Top 30 law schools;” (2) “Large Top 30 law schools;” (3) “Mid-range Public law school[s];” (4) “Mid-range Private law schools;” (5) “Lower Ranked law schools;” and (6) “Historically Black law schools.”[121] The Bar Passage Study data were also used to estimate match effects in the studies of Sander, Ayres and Brooks, Ho, Barnes, and Rothstein and Yoon.[122] Although this data set is over a decade old, it remains the only national data set for examining the match hypotheses.

A. Outcome Variables

Law school GPA (LGPA) provides a potential proxy for achievement in law school, but these measures are not comparable across schools.  Grades are assigned normatively within a school, such as when students are graded on a curve, rather than on a criterion-based scale.  The typical grade assigned within a school is likely to be strongly influenced by the school’s academic standards.  However, raw LGPAs are not available in the Bar Passage data because these measures were standardized within schools.[123] Due to this limitation, these measures are useful only for describing relative class rank; they do not provide measures of achievement.  Nonetheless, examining the impact of elite schools on class rank is useful; thus, these variables are retained in the current study.

Other outcomes examined in this study include three dichotomous variables: graduation from law school; success on the first bar examination attempt; and success on the final bar examination attempt.  Both bar passage variables indicate whether a student passed the bar, and both are defined over all students in the sample, not just the subgroup of students who attempted the examination.  In addition, a new variable, adjusted final bar passage, is derived.  Adjusted final bar passage takes on the values 0 and 1/n, where 0 indicates bar examination failure and n is the number of attempts required to pass.[124] The results for LGPA variables are reported in the effect size (dσ) or standard deviation metric.
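The adjusted final bar passage variable reduces to a one-line rule.  The following is an illustrative sketch in Python (the study’s analyses were conducted in R); the argument names `passed` and `attempts` are assumptions for illustration only:

```python
def adjusted_final_bar(passed, attempts):
    """Adjusted final bar passage: 0 for failure, 1/n for passing on attempt n."""
    return 1.0 / attempts if passed else 0.0
```

Under this rule, a student passing on the first attempt scores 1.0, a student passing on the second attempt scores 0.5, and a student who never passes scores 0.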

Although working rules for the interpretation of this metric vary by field and topic, many researchers use the starting points provided by Cohen: dσ = .2 is small; dσ = .5 is moderate; and dσ = .8 is large.[125] Results for the bar passage variables are interpreted in the proportion difference metric, which ranges from -1.0 to 1.0.  This effect size is interpreted as the difference between the proportion of elite-school students and the proportion of nonelite-school students passing the bar examination.  A starting point for interpreting effects in this metric is especially challenging.  However, the working rule here is that an absolute value greater than dp = .05 puts an effect on the radar, while dp = .10 and dp = .15 can be taken as cut-off values for moderate and large effects.  In addition, effect size in any metric should be interpreted relative to its standard error and its substantive context.  Thus, the above guidelines are not rigid.
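Both metrics can be computed directly from group data.  The Python sketch below is an illustration rather than the study’s code; it implements dσ as a standardized mean difference with a pooled standard deviation, and dp as a simple difference in pass proportions:

```python
from math import sqrt
from statistics import mean, variance

def cohens_d(treatment, control):
    """Standardized mean difference (the d_sigma metric) using a pooled SD."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * variance(treatment) +
                  (n2 - 1) * variance(control)) / (n1 + n2 - 2)
    return (mean(treatment) - mean(control)) / sqrt(pooled_var)

def proportion_difference(treatment_pass, control_pass):
    """The d_p metric: difference in pass proportions, ranging from -1.0 to 1.0."""
    return mean(treatment_pass) - mean(control_pass)
```

With 0/1 pass indicators, `proportion_difference` returns exactly the elite-minus-nonelite passage gap described above.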

B. Matching variables

In this study, the matching variables include the following: sex; LSAT score; UGPA; age; socio-economic status; the number of lawyers in a student’s family; the number of law school applications; and the student-reported importance of loans, housing, and the presence of minority students at a law school.  Once these variables were considered, additional variables (such as the family’s social capital, the number of children, and the children’s gender) contributed little of practical importance to group differences.  In fact, some of the eleven matching variables did not predict group differences at all, but were still included to describe the quality of the match.

Squared and cubic terms for UGPA and LSAT, and their interactions, also did not contribute significantly to group differences.  Except for the sex and socio-economic status (SES) variables, each of the primary covariates was statistically significant for at least one demographic group.  Several of these covariates may be understood as proxies for unobserved variables.  For instance, the number of applications a student submits (Napps) may reflect motivation as well as savvy in navigating a complex institutional process.  The number of lawyers in the family (LawFam) may reflect unmeasured cultural capital supporting law school study.  Other proxies for unobserved variables may include the importance of school reputation (ImpRep), of housing (ImpHousing), and of the number of minority students at a school (ImpMin).

A key issue in determining match estimates is that admissions committees have access to variables unobservable to researchers, and students admitted to elite schools are likely to be favored on these unobserved, success-related variables.  If these variables cannot be controlled, the performance of elite school students will probably be overestimated, which leads to positive bias and a lower likelihood of detecting negative match effects if they exist.

C. Missing data

The Bar Passage Study data set contains some incomplete student records.  The study may provide a student’s first-year LGPA, for example, but no cumulative LGPA for that student.[126] Rather than deleting these cases, as all previous studies had, a value was imputed for each missing value to retain the maximum possible sample size.[127] Missing values in this study were imputed using the EMB (EM with bootstrapping) algorithm as implemented in the R package Amelia II.[128] Statistical estimates and standard errors were obtained using five imputations.  An examination of the results revealed that the imputation process increased the error variance by about 30%.
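Combining estimates across several imputed data sets is conventionally done with Rubin’s rules, which the following minimal Python sketch illustrates (an illustration only, assuming each of the m imputed data sets yields one point estimate and one standard error):

```python
from math import sqrt
from statistics import mean, variance

def combine_imputations(estimates, std_errors):
    """Pool a point estimate and standard error across m imputed data sets
    (Rubin's rules): total variance = within + (1 + 1/m) * between."""
    m = len(estimates)
    q_bar = mean(estimates)                      # pooled point estimate
    within = mean(se ** 2 for se in std_errors)  # mean within-imputation variance
    between = variance(estimates)                # between-imputation variance
    return q_bar, sqrt(within + (1 + 1 / m) * between)
```

The between-imputation variance term is what inflates the pooled standard errors, consistent with the roughly 30% increase in error variance noted above.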

When studying admissions, missing data is a complex issue with respect to measuring initial or final bar passage because not all students attempt the examination.  In such cases, the “missing” value is structurally missing: the information does not exist, rather than merely having gone uncollected.  Students not attempting the bar examination may differ in important ways from students who take the examination and fail.[129] For a dichotomous indicator, all of these scenarios are treated as a “zero;” however, with missing value imputation, both structural and nonstructural zeros are predicted from the other background variables present for a student with missing values.

Distinctions can be made between imputations.  The structural bar passage zeros for low performing students are likely to be imputed nearer to zero based on entering credentials, the need for loans, and other variables.  The corresponding zeros for students leaving law school in good standing are likely to be imputed nearer to one.  While such distinctions should be made, simply deleting students not attempting the bar examination ignores the possibility that the indicator of whether a student attempted the examination is itself missing.  This study analyzes the possibly missing data of students known to be law school graduates and to have attempted the bar examination.  Imputation is also performed in this study, but far fewer values are missing: on average, less than .5 values per case.  Given the imputation procedure, conditional pass rates are not examined.  Instead, the full available sample of students in each race category is included.

V. Methods

A number of matching algorithms are available, including exact, nearest neighbor, optimal, kernel, and genetic matching.[130] These algorithms are used to find control individuals closely resembling a set of treatment individuals.  In propensity score matching, logistic regression is typically used to estimate the probability (or propensity) that individual i, with a given set of controls (or matching variables), attends an elite school, where tiers 1-2 are elite and tiers 3-6 nonelite.[131] Students with nearly identical a priori propensities of attending an elite school are considered comparable in the potential outcomes model framework.[132]
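The simplest of these algorithms, nearest neighbor matching on an estimated propensity score, can be sketched as follows.  This is an illustration in Python, not the study’s R code; the `treated` and `controls` dictionaries, mapping hypothetical student identifiers to estimated propensities, are assumptions for the example:

```python
def nearest_neighbor_match(treated, controls):
    """Match each treated unit to the control unit with the closest
    estimated propensity, with replacement (controls may be reused)."""
    return {t: min(controls, key=lambda c: abs(controls[c] - p))
            for t, p in treated.items()}
```

For example, with `treated = {'s1': 0.80, 's2': 0.30}` and `controls = {'c1': 0.75, 'c2': 0.35, 'c3': 0.10}`, each elite-school student is paired with the nonelite student whose propensity is closest.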

In the present study, genetic matching was performed using a single match with replacement, as implemented in the function Match of the R Matching package.[133] According to Sekhon, genetic matching employs an automatic search technique to find a set of weights for the matching variables.[134] When applied to the Mahalanobis distance, genetic matching provides an optimal balance between treatment and control groups by minimizing distributional differences.[135] This study did not choose among matching techniques for theoretical reasons.  Instead, the choice of algorithm was guided pragmatically by the goal of achieving the best match, and the genetic matching technique was clearly superior to the standard propensity technique in this respect.
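The distance that genetic matching searches over generalizes the Mahalanobis distance by weighting each covariate.  The Python sketch below shows a simplified diagonal version of this idea, an assumption made for brevity: each covariate difference is standardized by a scale value rather than transformed by the full covariance matrix, and the weights are taken as given rather than found by the genetic search:

```python
from math import sqrt

def weighted_distance(x, y, scale, weights):
    """Diagonal simplification of the weighted Mahalanobis-style distance used
    in genetic matching: standardize each covariate difference, then weight it."""
    return sqrt(sum(w * ((a - b) / s) ** 2
                    for a, b, s, w in zip(x, y, scale, weights)))
```

Raising the weight on a covariate (say, LSAT) makes differences on that covariate count more heavily when selecting matches, which is how the search trades off balance across covariates.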

A. Estimating the Match Effect

ATTs were estimated within race/ethnic groups according to the potential outcomes model.  In addition, ATT estimates were obtained for elite students within a race/ethnic group with θ ≥ P75 and with θ < P75, where P75 designates the 75th percentile of the linear propensity θ of the overall sample within the race category.  The matching analysis was repeated for students in each of these propensity regions.  Regions of common support at low propensity values are especially interesting because the negative match hypothesis concerns students who are unlikely to be admitted to elite schools without preferences.[136]
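Once matched pairs are in hand, the ATT is simply the average treated-minus-control outcome difference over treated units.  A minimal Python sketch (illustrative only; the `outcomes` and `matches` dictionaries are hypothetical) is:

```python
from statistics import mean

def estimate_att(outcomes, matches):
    """ATT: average treated-minus-matched-control outcome over treated units."""
    return mean(outcomes[t] - outcomes[c] for t, c in matches.items())
```

With a 0/1 outcome such as bar passage, this average difference is exactly the proportion difference dp described earlier.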

VI. Results

This section sets forth three aspects of the analysis used to obtain ATT estimates.  First, logistic regression coefficients are given for each group from a single imputation.  Second, balance statistics are given for each group of students in the lower propensity region.  Third, the estimated ATTs are given for each racial/ethnic group and for each of the three propensity regions.  The lower propensity region is especially relevant to the negative match hypothesis.  This region was designated by the rule θ < P75 because the 75th percentile on the linear propensity roughly balanced the elite students within the four groups studied across the lower and upper propensity categories.  All estimates were based on combining statistics across imputations.

A. Logit Regression Propensity Scores

Logit coefficients and standard errors are provided in Table 1 below.  The two strongest predictors of elite enrollment were LSAT and UGPA, as illustrated by a comparison of estimates to their standard errors.  The importance of loans, the number of applications a student submitted, and the number of lawyers in the student’s family were weaker predictors of elite enrollment and had coefficients that changed signs across racial/ethnic groups.  The reported importance of loans was highest for Black students, but was slightly negative for White and Asian students.  The number of applications a student submitted was most strongly related to elite school admission for Black students, and the number of lawyers in the family was surprisingly negative for all groups, except Asian students, for whom it was slightly positive.


Table 1

Logistic Regression Coefficients

[Coefficient estimates and standard errors are not reproduced.]
1 Number of lawyers in the family, 2 Law school paid for with loans, 3 Number of law schools to which a student applied, 4 Importance of a school’s academic reputation, 5 Importance of housing availability, 6 Importance of the number of minority students at a school.

Logistic regression coefficients obtained for one imputation.

a p<.001, b p<.01, c p<.05


Across all groups, the coefficients for LSAT, UGPA, age, number of applications, and importance of school reputation were consistently positive.  Other covariates, except SES, had at least one negative value across groups.  Generally, the same pattern of coefficients was evident for all groups.  As shown in Table 1, the linear correlations of the 11 covariate slopes for Asian, Black, and Hispanic students were r = .98, .93, and .96, respectively.  This finding suggests that the same factors tend to affect admission to elite schools in each of the four demographic groupings examined.  It also demonstrates that the empirical selection rule involves more than LSAT and UGPA; statistical procedures may therefore produce biased estimates of potential match effects if the complexity of this rule is not taken into account.

These results indicate that little, if any, bias reduction is obtained by balancing on sex.  In comparison, Ho essentially matched on only two variables (LSAT and UGPA)[137] and Williams on three (bar examination region, LSAT, and UGPA).[138] Both ignored the role of other potentially predictive variables, including age, the importance of loans, law school reputation, and the number of law school applications.  These variables are important because they serve as proxies for unmeasured attributes, such as maturity, motivation, commitment to social justice, and financial stability, which may relate to success in law school.

B. Balance

Overall, the post-matching means for elite and matched nonelite students were similar.  In Table 2, empirical CDF (eCDF) differences, i.e., standardized quantile-quantile differences, are reported for students in the lower propensity region.  These eCDF statistics measure the dissimilarity of the elite and nonelite distributions on the matching variables and can be interpreted as z-score or effect size differences.  For LSAT, the average difference was no more than .033 standard deviation units for Asian, Black, and Hispanic students.  The balance for White students was notably better on most variables due to the large pool of students attending nonelite schools.  Still, virtually none of the observed covariate differences yielded an average significance level below the nominal .05 level.

The results in Table 2 demonstrate that the matching process was generally successful within each group.  White students obtained low p-values for socio-economic status (SES) and for the importance of reputation and housing, but this statistical significance can be attributed to the large sample size.  A more practical measure of significance is given by the z-like eCDF statistic, and as Table 2 shows, the eCDF statistics are generally much smaller for White students than for other groups.
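A rough analogue of this averaged eCDF balance statistic can be sketched in Python as the mean absolute difference between two empirical CDFs evaluated at the pooled sample points (an illustration only; the bootstrapped Kolmogorov-Smirnov p-values reported in Table 2 are not reproduced here):

```python
def ecdf_statistic(sample_a, sample_b):
    """Mean absolute difference between two empirical CDFs, evaluated at the
    pooled sample points (the KS statistic would take the max instead)."""
    pooled = sorted(set(sample_a) | set(sample_b))
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    diffs = [abs(ecdf(sample_a, x) - ecdf(sample_b, x)) for x in pooled]
    return sum(diffs) / len(diffs)
```

Identical elite and nonelite distributions yield a statistic of zero; larger values indicate worse balance on that matching variable.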


Table 2

eCDF1 Statistics (p-Values) for Students with Propensities θ < P75.



              Asian         Black         Hispanic      White
Sex2          .004 (.553)   .007 (.75)    .010 (.71)    .001 (.53)
LSAT          .028 (.26)    .019 (.78)    .015 (.81)    .010 (.21)
UGPA          .021 (.45)    .020 (.61)    .017 (.81)    .009 (.35)
Age           .017 (.49)    .018 (.71)    .018 (.70)    .004 (.47)
SES           .019 (.59)    .028 (.48)    .019 (.79)    .015 (.05)
LawFam        .012 (.59)    .009 (.91)    .010 (.80)    .004 (.32)
Loans         .018 (.69)    .021 (.83)    .017 (.73)    .007 (.65)
Napps         .028 (.43)    .023 (.70)    .021 (.52)    .007 (.65)
ImpRep        .007 (.84)    .009 (.81)    .007 (.90)    .005 (.05)
ImpHousing    .033 (.20)    .023 (.56)    .019 (.56)    .020 (.04)
ImpMin        .013 (.79)    .026 (.65)    .023 (.54)    .009 (.31)
Propensity    .033 (.14)    .022 (.65)    .024 (.65)    .007 (.39)

1 eCDF = empirical Cumulative Distribution Function (or standardized eQQ).  All statistics averaged across 10 imputed data sets.

2 For the variable SEX, the average raw eQQ difference is given along with the approximate t probability for equality of means.

3 p-values in parentheses are Kolmogorov-Smirnov bootstrapped probabilities describing the equivalence of group distributions.  The idea is not to reject the hypothesis that the distributions of matching variables are the same for elite and nonelite groups.  Thus, large probabilities are desirable.


One remaining concern in the matching procedures is that, although the matched group differences are small, a consistent average imbalance persists: students attending elite schools tend to have more lawyers in their families.

C. Match Estimates

Table 3 reports the match estimates for Asian, Black, Hispanic, and White student groups for the total group of students, for students with θ ≥ P75, and for students with θ < P75.[139] Parallel results are provided in Table 4 based on the sample of students attempting the bar examination.  Additionally, Table 4 reports the new adjusted measure advanced by Williams,[140] and an additional new measure based on the difference between first-year and final LGPA.  Both of these measures are targeted at learning.

According to the match estimates provided in Table 3, Asian students have very small negative effects for first-year and cumulative LGPA.  This finding indicates that Asian students tend to maintain about the same class rank in elite and nonelite schools.  The overall effects for bar passage are nonsignificant and generally close to zero.  In the lower propensity range, however, a negative effect exists for the first bar result (-6.3%).  In the upper propensity range, all estimates are close to zero.  Effects classified by gender were nonsignificant: negative match effects of less than 2% were observed for males, and of about 5% for females.  In addition, 46% of students in the upper propensity range attended elite schools, while 54% in the lower propensity range did.


Table 3

Full Sample Match Estimates with Elite Group ns.  LGPA results are reported as effect sizes.a

                                Total            θ ≥ P75          θ < P75
Group     Outcome               ES      SE       ES      SE       ES      SE

[Values for the Asian, Black, and Hispanic segments, and the elite-group ns for all segments, are not reproduced.]

White     1st yr LGPA          -.164c  .032     -.231c  .023     -.119c  .017
          3rd yr LGPA          -.159c  .030     -.241c  .023     -.123c  .016
          First bar resultd    -.014c  .005     -.012c  .004     -.026c  .007
          Final bar resultd    -.009c  .003      .005   .003     -.019c  .005
          Graduation            .002   .004     -.005   .004      .015b  .006


a Positive effect sizes signal an advantage to elite-school attendance.  Negative effect sizes, which are predicted by the mismatch hypothesis, signal a disadvantage.

b p < .05

c p < .01

d Results for dichotomous variables at the individual level are interpreted as differences between proportions.  The effect sizes for grades are interpreted in the standard deviation metric.

For the upper propensity group of Black students in the second segment of Table 3, moderate (dσ = -.61, -.67), significant negative effects exist for first-year and cumulative LGPA.  This effect indicates that Black students tend to have lower class ranks at elite schools than at nonelite schools.  This result could be interpreted as measuring the degree of affirmative action in admission, rather than as a consequence of affirmative action.  In the lower propensity group, however, a smaller effect on class ranking exists.  Also in the lower region, negative effects for bar passage and a positive effect for graduation both exist, which is consistent with the OLS results of Rothstein and Yoon.[141] For both the upper and lower propensity regions, negative effects for first bar results exist (-4.6% and -5%).

When broken down by gender, positive match effects of about 6.5% were observed for females in the upper propensity range, and negative effects of about 5% for females in the lower range.  For males, negative effects of 7-10% were observed in the upper propensity range, and negative effects of about 1-2% in the lower range.  All of these effects were statistically nonsignificant.

Another mild indication that a negative match effect exists for students in the low propensity range is that the effect increases from first-year LGPA to cumulative LGPA.  Though these elite-school students are approximately .22 standard deviations lower than their non-elite peers at the end of the first year, they are approximately .29 standard deviations lower at the end of law school.  For Black students, 61% of students in the upper propensity range attended elite schools, while only 39% in the lower range attended elite schools.

In the third segment of Table 3, no significant effects exist for Hispanic students other than for LGPA.  However, both upper and lower propensity groups show positive effects on first bar attempts.  In addition, students in the lower propensity range appear to gain in class rank, relative to their non-elite peers, from the first to the last year of law school.  These results suggest a positive match effect for Hispanic students.  Also, 49% of students in the upper propensity range attended elite schools, while 51% in the lower range did.  For Hispanic females, a positive effect of about 10% is observed in the upper propensity range, with mixed results in the lower range.  For Hispanic males, negative effects of 2-4% were observed in the upper propensity range, and positive effects of 7-13% in the lower range.  Only the 13% effect for adjusted final pass rate is significant (α = .05).

In the last segment of Table 3, statistically significant effects were observed for White students.  For the lower propensity group, a negative effect was evident for first bar results, and a positive effect for graduation.  Similar to Asian students, but in contrast to Black students, these ATTs for White students provide less a measure of match than a measure of the value added by elite law schools for these outcomes.  These short-term effects are very small in absolute terms; however, evidence exists of a negative match effect for White students in the lower propensity range.  For White students, 56% in the upper propensity range attended elite schools, while 44% in the lower range did.  When broken down by gender, small nonsignificant negative effects of approximately 1.5% were observed for females in the lower propensity range and for males in the upper propensity range.


Table 4

Match Estimates with Elite Group ns for Students Who Attempted the Bar Examination.

                                Total            θ ≥ P75          θ < P75
Group     Outcome               ES      SE       ES      SE       ES      SE

[Values for the Asian, Black, and Hispanic segments, and the elite-group ns for all segments, are not reproduced.]

White     1st yr LGPA          -.198c  .027     -.258c  .045     -.114c  .029
          3rd yr LGPA          -.194c  .028     -.271c  .047     -.093c  .032
          Differencea           .004   .015     -.013   .023      .021   .017
          First bar result     -.017b  .007     -.011   .008     -.016   .010
          Final bar result     -.005   .004      .003   .006     -.009   .007
          Adjusted Finalb      -.012b  .005     -.004   .007     -.013   .008


a This variable is the difference of 3rd year LGPA minus 1st year LGPA.

b This variable is the final bar result adjusted by the number of attempts (0 for failure, 1/n for passing on attempt n).


The Table 4 results are based on students who attempted the bar examination and differ from the Table 3 results in several respects.  First, the results of Table 4 indicate stronger negative match effects, but only in the lower propensity range for Asian and Black students.  Of the students in those groups, approximately 50% are at risk of passing the bar examination at about a 5% lower rate than similar students in less elite schools.  This conclusion does not hold for Hispanic students; in both propensity regions, marginal support exists for the positive match hypothesis.  In addition, very little change is demonstrated between first and third year LGPA, even though small positive effects exist for all students in the lower propensity range.  The results for White students are similar in both tables, but the standard errors are smaller in Table 3; consequently, the small negative effects register there at a higher level of significance.  Finally, the adjusted final bar result in Table 4 shows very little practical difference from the unadjusted result.

VII. Discussion

Some evidence was found supporting the negative match hypothesis for Black and Asian law school students in the lower propensity range.  Yet, the match effects for bar passage in the upper range were much lower than those Sander reported, and did not approach statistical significance at α = .05.[142] For first and second bar passage rates for Black students, Sander reported estimates of dp = ‑14.9% and dp = -3.5%.[143] The comparable results from this study were ‑5% and ‑1.4%, respectively.  For students in the low propensity regions, the comparable figures were ‑7.8% and -8.1%.  This latter result is similar to Rothstein and Yoon’s results, which suggested that a negative effect existed only for the lowest 20% of Black students as determined by the academic index.[144] Estimating effects for Black students below the 75th percentile of the propensity score was not possible because of inadequate sample size.  Yet, at most 40% of Black students fell below this threshold, even though Rothstein and Yoon’s negative effect pertained to the lowest 20%.[145] Given the White students’ larger sample size, estimating match effects for the lowest quintile on the propensity score was possible.  For the 333 students in the lowest quintile attending elite law schools, the pass rates for first and final bar attempts were about 1.5% lower than for similar students attending non-elite law schools.  From this baseline and using the adjusted bar passage outcome, Black and Asian students in the lower propensity range appear to have about a 5% lower chance of passing the bar examination.  No negative match effects for graduation are apparent.  Thus, the difference in bar passage rates seems very modest relative to the substantial social networking advantages of elite school attendance.

Regarding this study’s estimated propensity function, a high degree of student mixing between elite and non-elite schools is present within each race category.  Only about 50% of students in the upper propensity range actually attend elite schools.  The implication is that elite schools admit about 50% of the students in the lower propensity range.  As a case in point, 333 White students in the lowest propensity quintile attended elite schools.  Thus, a substantial amount of mixing in student propensity exists within elite schools, independent of preferential admission.  To some degree, this outcome is due to the variability of schools within tiers and is a weakness of Wightman’s crude classification of law schools into six categories.[146] Yet, the preponderance of the mixing is unlikely to result from measurement error inherent in the classification scheme; Wightman’s cluster dendrogram shows a high degree of separation between elite and non-elite schools.[147]

Considering the large sample size and the relatively high quality of matching, the White student results suggest that the value added by elite law schools should be examined with respect to law school graduation and bar passage.  For White students, the match effects of this study are largely unconfounded with preferential admission.  The effects in Table 3 for White students may not be practically significant; attending an elite school, as opposed to a lower tier school, makes virtually no difference in terms of graduation or bar passage.  The single exception is that low propensity elite school students had slightly lower chances of bar passage than their nonelite peers.  Overall, elite schools do not appear to add value in these outcome domains.  This result is consistent with Dale and Krueger’s findings for elite undergraduate institutions.[148]

The value of elite school access extends well beyond these early career outcomes.  Based on a survey of 874 Rhodes scholars, Youn and Arnold examined correlates of public leadership.[149] The variables most strongly associated with higher levels of leadership were, in the following order: attainment of a bachelor’s (B.A.) degree at Harvard, Yale, or Princeton; attainment of a law degree; and attainment of a B.A. degree at one of the Cartter Classification A universities.[150] Even among this highly select group of individuals, a law degree was a statistically significant predictor of public leadership.  Therefore, the fact that 77% of the eventual lawyers in the sample of Rhodes Scholars attended Harvard, Yale, or Stanford should not be surprising.[151]

Consistent with the purpose of the Rhodes scholarship program, Gray and LeBlanc recognized an additional role of elite law schools: training students beyond blackletter law and doctrine.[152]

The goal of an elite law school education is . . . to train future lawyers to confront the tough normative questions about law and to challenge the bounds of law so that it is more equitable, administrable, useful, and just.  Accordingly, a law school could not meet its public mission by accepting students solely on the basis of their past achievements; it also needs to consider their future potential to contribute to the advancement of law and society.[153]

The implication is that leadership potential, which may fall outside the bounds of traditional admission criteria, is more suitably described as a benefit to society.[154] In addition, the potential for elite school graduates to fulfill the cultural capital prerequisites for careers in politics, industry, and public sector leadership is obtainable beyond short-term outcomes.[155] According to Youn and Arnold, “[a]ttending Harvard or Yale law school mean[s] acceptance into a status group opening rich peer networks and access to influential institutions and individuals.”[156] Elite law school attendance brought students into contact with top legal scholars and government officials.[157] For example, “[c]lerking for eminent judges, interning in the U.S. Department of Justice, and working in political campaigns were reported as career-defining experiences by Rhodes Scholars.”[158] Moreover, Wilkins and Gulati found that about 70% of the partners at five large firms in top legal markets had graduated from elite law schools, whereas only 30% came from non-elite schools.[159] Even Sander noted that he did not consider “perhaps the single greatest benefit of affirmative action in law school: its role in building the long-term careers of [B]lack lawyers and giving them a place in the most elite ranks of the profession and American society.”[160]

Several methodological comments should be addressed before closing.  First, inferences about the elite/non-elite effect are based on tiers as reported in the Bar Passage Study data, rather than on individual schools.[161] A number of researchers have pointed out the limitations of this indicator for assessing match effects.  While the match effect may plausibly vary across schools or types of schools, no evidence is currently available for fine-tuning such intuitions.  An analysis incorporating a school identifier could possibly alter this study’s central findings; however, the current research literature offers no particular reason for such speculation.  Future research should consider whether the identity of individual law schools is necessary for understanding admission preferences.  The present study suggests that the anonymity risks associated with such a data collection may not outweigh its potential benefits.

Second, this study has shown that regression analyses of the kind conducted by Sander are incapable of producing credible estimates of causal effects.[162] Aside from the strong assumptions of linearity, no account is taken of selectivity effects.[163] Rothstein and Yoon demonstrated a more credible approach in terms of parametric modeling, and their results are consistent with the current results based on nonparametric modeling.[164]

Third, studies that do not employ missing data imputation risk reporting biased estimates.  When uncertainty due to missing values is addressed directly, the precision of estimates decreases.  In particular, the interpretation of effects suggesting mismatch for Black students is limited by the corresponding high standard errors.

Fourth, a threshold appears to exist, below which students suffer from mismatch; however, estimating this threshold adequately is difficult given the small sample sizes.[165] Several variables are necessary for bias adjustment, and studies that fail to collect an adequate set of matching variables risk reporting biased estimates of match effects.  Future studies not adequately addressing these methodological issues risk producing misinformation and inaccurate guidance for admission policies.

Fifth, match effects can vary by gender and propensity range.  Moreover, negative match effects were unexpectedly more frequent in the upper propensity range.  Given the small sample sizes and the limitations of the Bar Passage data, these results should be regarded as tentative.  Yet, they indicate that substantial uncertainty exists about the individuals at risk of negative match effects, and the processes contributing to the positive or negative match outcomes for students and their institutions.

Finally, analyses have identified three variables related to LGPA improvement from the first to final year of law school.  The strongest indicator was undergraduate GPA, followed by age (older was better) and gender (female was better).  Further examination of students who improved their class rank over the course of law school may be helpful to identify additional predictors of success for all students.

The research contained in this article was partially supported by the Law School Admission Council (LSAC). The results and conclusions expressed herein are strictly those of the authors and do not represent those of LSAC.

* Gregory Camilli is Professor at the University of Colorado Boulder. In addition to teaching classes in statistics, measurement, and meta-analysis, his research has focused on test fairness, equity, preschool interventions, school factors in mathematics achievement, and multilevel IRT models. Prof. Camilli recently completed a term as Co-Editor of Educational Researcher and as Associate Editor of the Journal of Educational and Behavioral Statistics.
 Darrell D. Jackson, JD, is a doctoral candidate in the University of Colorado (Boulder) School of Education. Immediately prior, he served the George Mason University School of Law (GMUSL) as an Assistant Dean
and Director of Diversity Services. Prior to joining GMUSL, he practiced law as an Assistant United States Attorney in the District of Columbia and as an Assistant County Attorney in Fairfax County, Virginia. Prior to joining the County Attorney's office, he served as judicial law clerk to The Honorable L.M. Brinkema in the United States
District Court for the Eastern District of Virginia and to The Honorable Marcus D. Williams in the Nineteenth Judicial Circuit of Virginia. He received his JD from GMUSL where he co-founded the George Mason University Civil Rights Law Journal.

[1] Elite law schools are typically defined as those with the lowest selection rates, the highest average undergraduate GPA and LSAT scores, and the highest peer (e.g., law school deans, lawyers, and judges) evaluations. See Theodore P. Seto, Understanding the U.S. News Law School Rankings, 60 SMU L. Rev. 493, 496-505 (2007) (discussing the variables used by U.S. News to determine rank).  Such schools tend to be ranked highly in research studies and in media reports. For example, U.S. News & World Report provided an overall ranking of law schools in 2010 with the top 10 schools ranging from Yale and Harvard to the University of Michigan—Ann Arbor and the University of Virginia. Best Law Schools, U.S. News & World Report, (last visited Mar. 25, 2011).  While rankings vary substantially by specialty area, the more common use of the term “elite” refers to schools at the top of the global ranking. See Healthcare Law: Best Law Schools, U.S. News & World Report, (last visited Mar. 25, 2011).  Other approaches to ranking may give highly dissimilar results. E.g., 2009 Raw Data Law School Rankings: School (Ascending), Internet Legal Research Group, (last visited Mar. 25, 2011).

[2] See Thomas Sowell, Are Quotas Good for Blacks?, 65 Comment. 39, 41 (1978).

[3] Mismatch refers to the degree of dissimilarity between a student’s qualifications and the average level of qualifications at a particular school. Id.  According to critics of affirmative action, mismatch hurts underprepared Blacks through admittance to elite schools. Id.; see also Clyde W. Summers, Preferential Admissions: An Unreal Solution to a Real Problem, 2 U. Tol. L. Rev. 377, 395-97 (1970).

[4] See, e.g., Stacy Berg Dale & Alan B. Krueger, Estimating the Payoff to Attending a More Selective College: An Application of Selection on Observables and Unobservables, 117 Q. J. Econ. 1491, 1524 (2002) (arguing that there is potential for attending a selective school to hurt some students and help others, and that, if known, students may attend a less selective school in order to find the best “fit” for them); Richard H. Sander, Systemic Analysis of Affirmative Action in American Law Schools, 57 Stan. L. Rev. 367, 368 (2004) [hereinafter Systemic Analysis of Affirmative Action] (indicating that the reason for support of affirmative action is the perceived benefit it has on minorities, but that there has never been a “comprehensive attempt to assess the relative costs and benefits of racial preferences in any field of higher education”); Richard H. Sander, Reply: A Reply to Critics, 57 Stan. L. Rev. 1963, 1964-65, 1971 (2005) [hereinafter A Reply to Critics] (evaluating significant negative differences for black law students when compared to white law students regarding graduation and bar passage, and concluding that going to an elite school through affirmative action is “mildly negative or neutral” for the student).

[5] E.g., Carol Corbett Burris, Ed Wiley, Kevin Welner & John Murphy, Accountability, Rigor, and Detracking: Achievement Effects of Embracing a Challenging Curriculum as a Universal Good for All Students, 110 Tchrs. C. Rec. 571, 577 (2008) (identifying different opinions that create the “modern tracking debate”); Tom Loveless, The Tracking Wars: State Reform Meets School Policy 12-19 (1999) (discussing how tracking began as well as analyzing the effect of tracking on “[a]chievement . . . , [e]quity, . . . ” and the effect of these two things at untracked schools).

[6] See Burris et al., supra note 5, at 577.

[7] See id.

[8] See, e.g., Guanglei Hong & Stephen W. Raudenbush, Effects of Kindergarten Retention Policy on Children’s Cognitive Growth in Reading and Mathematics, 27 Educ. Evaluation & Pol’y Analysis 205, 205-06 (2005) (explaining opposing views of grade retention and social promotion with a focus on “evaluating the causal effects of the kindergarten retention policy on children’s cognitive growth in reading and mathematics”); Guanglei Hong & Bing Yu, Effects of Kindergarten Retention on Children’s Social-Emotional Development: An Application of Propensity Score Method to Multivariate Multi-Level Data, 44 Developmental Psychology 407, 407 (2008) (explaining that there is no definitive answer yet as to “whether kindergarten retention is beneficial to the retained students’ social-emotional development over their elementary years”, but there are recent studies indicating that retention of kindergartners is not as beneficial as advancing to first grade).

[9] Hong & Raudenbush, Effects of Kindergarten Retention Policy on Children’s Cognitive Growth in Reading and Mathematics, supra note 8, at 206.

[10] See Burris, et al., supra note 5, at 577-78 (stating that “detracking with a high-track curriculum” would solve racial and socio-economic differences associated with tracking and increase educational achievement at all levels).

[11] Sander, A Reply to Critics, supra note 4, at 1966.

[12] See, e.g., Jesse Rothstein & Albert H. Yoon, Affirmative Action in Law School Admissions: What Do Racial Preferences Do?, 75 U. Chi. L. Rev. 649, 659-60 (2008) [hereinafter What Do Racial Preferences Do?].

[13] See, e.g., id.

[14] See Lisa Anthony Stilwell, Lynda M. Reese & Peter J. Pashley, Analysis of Differential Prediction of Law School Performance by Racial/Ethnic Subgroups, LSAC Research Report Series, March 1998, at 2.  Racial/ethnic grouping is based on self-report and is used solely for the analytic purpose of examining the match hypotheses, and the unconditional existence of this grouping is neither assumed nor implied. In this paper, the adjectives Asian, Black, Hispanic, and White are used to describe student groups. Though many students are American citizens, this paper declines to make that presumption.

[15] Katherine Y. Barnes, Is Affirmative Action Responsible for the Achievement Gap Between Black and White Law Students?, 101 Nw. U. L. Rev. 1759, 1767 (2007).

[16] See id. (explaining that “[a]ny preference based on something other than student credentials can potentially lead to mismatch”).

[17] See, e.g., Rothstein & Yoon, What Do Racial Preferences Do?, supra note 12, at 651-52 (summarizing methods in the article used to determine how the mismatch hypothesis affects Black students’ admission to law school).

[18] Barnes, supra note 15, at 1767.

[19] Id. at 1769.

[20] David B. Wilkins, A Systematic Response to Systemic Disadvantage: A Response to Sander, 57 Stan. L. Rev. 1915, 1960 (2005); see also, Rothstein & Yoon, What Do Racial Preferences Do?, supra note 12, at 652.

[21] Wilkins, supra note 20, at 1960-61.

[22] See Paul W. Holland, Statistics and Causal Inference (With Discussion), 81 J. Am. Stat. Ass’n. 945, 945-46, 959 (1986).

[23] Id.

[24] E.g., Rothstein & Yoon, What Do Racial Preferences Do?, supra note 12, at 677-79 (discussing Black-White gaps in average outcomes and the importance of focusing on “the causal effects of schools of different types on their students”); Sigal Alon & Marta Tienda, Assessing the “Mismatch” Hypothesis: Differences in College Graduate Rates by Institutional Selectivity, 78 Soc. Educ. 294, 296-300 (2005).

[25] Daniel E. Ho, Why Affirmative Action Does Not Cause Black Students to Fail the Bar, 114 Yale L.J. 1997, 2002 (2005) [hereinafter Why Affirmative Action] (a propensity score analysis was not completed).

[26] Id. at 2003.

[27] Id. at 2003 (Ho did not report full sample sizes, but instead reported the sizes of overlapping sets, for example law school tier 1 v. 2, or tier 2 v. 3, etc.).

[28] Id. at 2001. “An extension of my analysis that controls for a wider range of variables, thereby making this assumption more believable, further indicates that there is no evidence for the Sander hypothesis.” Id.

[29] Id. at 2002.

[30] Contra Daniel E. Ho, Evaluating Affirmative Action in American Law Schools: Does Attending a Better Law School Cause Black Students to Fail the Bar? (Mar. 9, 2005), available at [hereinafter Evaluating Affirmative Action].  The analysis in this paper was performed on a single imputed data set in two steps, taking into account missing data.  First, Ho obtained a matched data set with propensity scores, estimated using 180 covariates (Table 2 at 11).  Next, Ho performed a logistic regression analysis (at 9-10), carried out within each tier, to determine the ATT tier effect.  Only results for the Black students were provided. Estimated effects are graphed with confidence intervals, rather than reported, and causal effects are only reported for a single bar passage outcome (final bar passage).  The effectiveness of matching was shown for all students, rather than Black students only.  In Ho’s report, Why Affirmative Action Does Not Cause Black Students to Fail the Bar, the matching analysis is reported in just two pages and consequently provides few details of the analysis. 114 Yale L.J. 1997, 2003-04 (2005).

[31] Jesse Rothstein & Albert H. Yoon, Mismatch in Law School 17-18 (May 2009), available at (follow “Manuscript” under Working Papers) [hereinafter Mismatch in Law School]; see Richard H. Sander, Do Elite Schools Avoid the Mismatch Effect, Empirical Legal Studies Blog (Sept. 23, 2006, 9:23 PM, Sept. 24, 2006, 5:10 PM), [hereinafter Do Elite Schools Avoid the Mismatch Effect].

[32] Sander, Do Elite Schools Avoid the Mismatch Effect (Sept. 24, 2006), supra note 31.

[33] Sander, Systemic Analysis of Affirmative Action, supra note 4, at 368.

[34] Id. at 445.

[35] Id.

[36] See id. at 423.

[37] See Wilkins, supra note 20, at 1916 (explaining that bad grades are only important if they produce bad outcomes).

[38] See Sander, Systemic Analysis of Affirmative Action, supra note 4, at 424 (noting that “blacks tend to underperform in law school relative to their numbers”); Wilkins, supra note 20, at 1916 (the value of how well a student does depends on that student’s ultimate outcome).

[39] Ho, Why Affirmative Action, supra note 25, at 2000-01.

[40] Id.

[41] James J. Heckman, The Scientific Model of Causality, 35 Soc. Methodology 1, 18 (2006) (“The conventional parameter of interest, and the focus of many investigations in economics and statistics, is the average treatment effect (ATE)”).

[42] Stephen L. Morgan & Christopher Winship, Counterfactuals and Causal Inference 43 (2007).

[43] An important assumption of the potential outcomes model is known as the stable unit treatment value assumption (SUTVA), which posits that the treatment effect for any given individual is independent of the treatment assignment of other individuals. The SUTVA is also known as the assumption of no macro or system-level effect. See Heckman, supra note 41.

[44] Morgan & Winship, supra note 42, at 46-47 (discussing how the naïve estimator can relate to the ATE).

[45] Heckman, supra note 41, at 28-29 (explaining ceteris paribus as the need to rule out variants which could otherwise interfere with a scientific or statistical formula).

[46] Morgan & Winship, supra note 42, at 42 (explaining that ATT “signif[ies] the average treatment effect for the treated”).

[47] See supra note 24.

[48] Id.

[49] Mary J. Fischer & Douglas S. Massey, The Effects of Affirmative Action in Higher Education, 38 Soc. Sci. Research 531, 532, 545-46 (2007).

[50] Douglas S. Massey & Margarita Mooney, The Effects of America’s Three Affirmative Action Programs on Academic Performance, 54 Soc. Probs. 99, 109 (2007).

[51] Id.

[52] See Jennie E. Brand & Charles N. Halaby, Regression and Matching Estimates of the Effects of Elite College Attendance on Educational and Career Achievement, 35 Soc. Sci. Res. 749, 766 (2006).

[53] Alon & Tienda, supra note 24, at 302.

[54] Id. at 301-03.

[55] Id. at 307.

[56] Id. at 303.

[57] Dale & Krueger, supra note 4, at 1492.

[58] Id. at 1492-93, 1495.

[59] See id. at 1511-12.

[60] See id. at 1512.

[61] Alon & Tienda, supra note 24, at 307.

[62] Sander, Systemic Analysis of Affirmative Action, supra note 4, at 369-370, 372-73.

[63] Id. at 414, 416-17 (stating that the Black-White credentials gap narrows from top-tier to lower-tier law schools; thus, Blacks receive a larger admissions boost at the top law schools than at the lower-tier schools, where the credentials gap between Whites and Blacks is smaller).

[64] Id. at 447 (explaining that Blacks have lower bar passage rates because they have lower law school GPAs, which in turn reflects admission to more elite schools via racial preferences rather than academic qualifications).

[65] Rothstein & Yoon, What Do Racial Preferences Do?, supra note 12, at 686 (describing their method of “implementing the [B]lack-[W]hite comparison”).

[66] Undergraduate Grade Point Average

[67] See, e.g., Richard H. Sander, Mismeasuring the Mismatch: A Response to Ho, 114 Yale L. J. 2005, 2008 (2005) [hereinafter Mismeasuring the Mismatch] (for an example of the original path diagram illustrating the negative match hypothesis).

[68] See e.g., id.

[69] The fourth component of Sander’s argument against affirmative action in Systemic Analysis, which is not considered herein, was that admission policies based solely on credentials (i.e., “colorblind” admission procedures) would substantially reduce mismatch and would therefore result in a greater flow of minority lawyers from American law schools. Note that a negative match effect is necessary, but not sufficient, for this argument to be true. See Sander, Systemic Analysis of Affirmative Action, supra note 4, at 367.

[70] Ho, Why Affirmative Action, supra note 25, at 2000-01.

[71] See Sander, Mismeasuring the Mismatch, supra note 67, at 2008; see also Otis Dudley Duncan, Introduction to Structural Equation Models 31-43 (1975) (Sewell Wright’s multiplication rules provide more information on direct, indirect, and total effects).

[72] Ho, Why Affirmative Action, supra note 25, at 2004.

[73] Id.

[74] Sander, Mismeasuring the Mismatch, supra note 67, at 2006; Sander, A Reply to Critics, supra note 4, at 2005.

[75] Sander, Mismeasuring the Mismatch, supra note 67, at 2006; see Rothstein & Yoon, What Do Racial Preferences Do?, supra note 12 (the idea of “control Whites” is also used).

[76] Dale & Krueger, supra note 4, at 1492, 1500.

[77] Ian Ayres & Richard Brooks, Does Affirmative Action Reduce the Number of Black Lawyers?, 57 Stan. L. Rev. 1807, 1853 (2005).

[78] Ayres and Brooks, supra note 77, at 1831.

[79] Id. (noting that “Dale and Krueger matched students who reported that they were accepted by similar-quality schools”); Dale & Krueger, supra note 4, at 1492-93, 1498.

[80] See Ayres and Brooks, supra note 77, at 1831-32.

[81] Id. at 1837-38.

[82] See generally Ayres & Brooks, supra note 77. Ayres and Brooks made two separate statistical arguments. The results mentioned in this paper derive from a logistic regression model with control variables.

[83] Rothstein & Yoon, Mismatch in Law School, supra note 31, at 2.  The two models employed were ordinary least squares (OLS) regression, and instrumental variables (IV) using race as the instrument. According to Rothstein and Yoon, the OLS estimates (argued to be ATTs) should be biased positively, while the IV estimates (argued to be LATEs) should be biased negatively. The full mathematical complexity of this argument is beyond the scope of this paper.  However, the LATE is based on a comparison of Black students who comply with affirmative action preference by attending selective schools, and White students with the same credentials who do not. Because these White students are taken as the counterfactuals, the LATE in this case is not consistent with the potential outcomes model. Id. at 2, 20-21.
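
For readers unfamiliar with instrumental variables, the basic logic with a binary instrument can be conveyed by the Wald estimator: the change in the mean outcome induced by the instrument, divided by the change in treatment take-up it induces. The sketch below is a generic Python illustration with toy data, not Rothstein and Yoon's specification:

```python
def wald_iv(z, d, y):
    """Wald/IV estimate with binary instrument z, treatment d, outcome y.

    effect = (E[y | z=1] - E[y | z=0]) / (E[d | z=1] - E[d | z=0])
    """
    def mean(xs):
        return sum(xs) / len(xs)
    y1 = mean([yi for zi, yi in zip(z, y) if zi == 1])
    y0 = mean([yi for zi, yi in zip(z, y) if zi == 0])
    d1 = mean([di for zi, di in zip(z, d) if zi == 1])
    d0 = mean([di for zi, di in zip(z, d) if zi == 0])
    return (y1 - y0) / (d1 - d0)

# Toy data: the instrument shifts treatment take-up from 25% to 75%,
# and each unit of treatment raises the outcome by 2.
z = [1, 1, 1, 1, 0, 0, 0, 0]
d = [1, 1, 1, 0, 1, 0, 0, 0]
y = [2, 2, 2, 0, 2, 0, 0, 0]
effect = wald_iv(z, d, y)
```

As the footnote notes, the IV estimand is a local average treatment effect for "compliers" — here, those whose treatment status is moved by the instrument — which is why it need not coincide with the ATT under the potential outcomes model.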

[84] Id. at 20.

[85] Id. at 22.

[86] Id.

[87] See id. at 2.

[88] Id. at 17.

[89] See id. at 3-4.  Rothstein and Yoon were concerned that without preferential admission policies, many low-scoring Black students would not be admitted to any law school. They noted that such students might have lower unobserved qualifications than White students with similar observed qualifications.  Id. at 23.  In order to eliminate this potential source of selection bias, they reanalyzed the top four quintiles of students based on the academic index, which in turn eliminated 80% of the Black Bar Passage student sample. Id. at 20; see Rothstein and Yoon, What Do Racial Preferences Do?, supra note 12, at 674-75, 705.

[90] Rothstein and Yoon, Mismatch in Law School, supra note 31, at 23.

[91] Barnes, supra note 15, at 1774.

[92] Barnes, supra note 15, at 1776 n.60.

[93] See Tables 1A, 2A, and 3A in Barnes, supra note 15, at 1777, 1781, 1783.

[94] Barnes, supra note 15, at 1778.

[95] Barnes, supra note 15, at 1781 (Table 2A).

[96] Barnes, supra note 15, at 1807.

[97] See generally Barnes, supra note 15 (all the tables in the entire article refer to her use of “logistic regression” analysis); see also Rothstein & Yoon, What Do Racial Preferences Do?, supra note 12, at 694-95; Sander, Systemic Analysis of Affirmative Action, supra note 4, at 373; Sander, A Reply to Critics, supra note 4, at 1971; Ayres & Brooks, supra note 77, at 1810 (discussing how Sander used regression analysis comparable to their own use of regression analysis).

[98] Holland, supra note 22, at 954.

[99] Id. at 955.

[100] Barnes, supra note 15, at 1775. Barnes uses three variables: race, school type, and credentials.

[101] Doug Williams, Does Affirmative Action Create Educational Mismatches in Law Schools? 10 (Apr. 13, 2009); see Fischer & Massey, supra note 49, at 544 (the authors used the mismatch and stereotype threat hypotheses); Massey & Mooney, supra note 50, at 102-05 (the authors examined affirmative action, mismatch, and stereotype theories).

[102] Rothstein & Yoon, What Do Racial Preferences Do?, supra note 12, at 650-51 (discussing the mismatch hypothesis); Barnes, supra note 15, at 1765 (using the mismatch theory and race-based barriers theory); Ayres & Brooks, supra note 77, at 1853 (analyzing the mismatch theory).

[103] Williams, supra note 101, at 24, 36, 38.

[104] Williams, supra, note 101, at 6-7.

[105] Id. at 30.  The rationale for this methodological choice was that classifying the first four tiers as non-elite creates measurement error, which reduces the size of estimates.  By eliminating the two middle tiers, this problem is ameliorated.  Yet the middle two tiers most likely provide the closest-matching counterfactual controls for elite school attendance.  Thus, it could be argued that this choice is just as likely to create bias as it is to reduce the effects of measurement error.  Williams also carried out a “binary selective IV” analysis similar to that of Rothstein and Yoon. Williams, supra note 101, at 20; Rothstein and Yoon, Mismatch in Law School, supra note 31, at 23.  It appears that the matching analysis is intended to parallel this procedure, but it is not clear which groups were matched together.  There does not appear to be an analysis similar to Rothstein and Yoon’s IV analysis with latent selectivity.  Williams also carried out other analyses. He used a matching procedure with region of bar examination, UGPA, and LSAT as controls to corroborate the results of the IV analysis. Williams, supra note 101, at 14.  Williams computed effects only for those students who attempted the bar examination.

[106] Williams, supra note 101, at 30.

[107] See Ayres & Brooks, supra note 77, at 1853 (stating that “ending affirmative action will not cure the bar passage deficit”).

[108] Williams, supra note 101, at 9; see Barnes, supra note 15, at 1807.  Barnes concluded that “ending affirmative action would result in a 22.6% decrease in the number of new black law graduates, a 13.4% decrease in the number of new black lawyers and a 23% decrease in the number of black law graduates who obtain well paying jobs.” Id.

[109] Williams, supra note 101, at 40.

[110] Sander, A Reply to Critics, supra note 4, at 1965 (scholars critiquing Sander’s work include Michele Dauber, Ayres and Brooks, David Chambers, and David Wilkins); see Richard O. Lempert, William C. Kidder, Timothy T. Clydesdale & David L. Chambers, Affirmative Action in American Law Schools: A Critical Response to Richard Sander’s “A Reply to Critics”, U. Mich. John M. Olin Center for Law & Econ., Working Paper No. 60, at 5 (2006) (only mentioning that different data was used for the different estimates); Michele L. Dauber, The Big Muddy, 57 Stan. L. Rev. 1899, 1907 (2005) (stating that she could not “directly test Sander’s evidence”).

[111] Williams, supra note 101, at 3-9.

[112] Id. at 3.

[113] Id. at 14.

[114] Id. at 14.

[115] Id.

[116] Id. at 39-40.

[117] See generally Linda F. Wightman, LSAC National Longitudinal Bar Passage Study (Law Sch. Admission Council 1998).

[118] Id. at 5 (anonymity of the schools is implied).

[119] Wightman, supra note 117, at 5.

[120] Id. at 9.

[121] Barnes, supra note 15, at 1772-73; see Wightman, supra note 117, at 9 n.20 (the illustration shows how each cluster rates with each variable); see also Wightman, supra note 117, at 33-34 (describing the differences between the clusters of schools).

[122] See Rothstein & Yoon, What Do Racial Preferences Do?, supra note 12, at 664 n.55; Barnes, supra note 15, at 1773 n.51; Ayres & Brooks, supra note 77, at 1808 n.4; Ho, Why Affirmative Action, supra note 25, at 1997 (stating that he based his study off the data used by Sander, who used the Bar Passage Study); Sander, Systemic Analysis of Affirmative Action, supra note 4, at 414 n.133.

[123] See generally Wightman, supra note 117, at 5.

[124] See Williams, supra note 101, at 14 (stating that “‘[a]djusted pass bar ever’ incorporates information about the number of attempts required to pass the bar; this variable takes on the value ‘1/n’ if the test taker passed the bar on the nth try and ‘0’ if the test taker never passed”).
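
Williams's "adjusted pass bar ever" variable can be transcribed directly from the definition quoted above. The function below is an illustrative sketch, not Williams's own code:

```python
def adjusted_pass_bar_ever(passed, attempts):
    """Return 1/n if the taker passed the bar on the nth attempt, 0 if never.

    This rewards earlier passage: a first-try passer scores 1.0,
    a third-try passer 1/3, and a never-passer 0.
    """
    if not passed:
        return 0.0
    return 1.0 / attempts

values = [
    adjusted_pass_bar_ever(True, 1),   # passed on first try
    adjusted_pass_bar_ever(True, 3),   # passed on third try
    adjusted_pass_bar_ever(False, 2),  # never passed
]
```

The transformation converts a binary pass/fail outcome into a graded one, which is why the variable carries more information about the number of attempts than "pass bar ever" alone.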

[125] Jacob Cohen, A Power Primer, 112 Psychol. Bull. 155, 157 (1992).

[126] Wightman, supra note 117, at 5.

[127] Id. at 8.

[128] See generally James Honaker, Gary King & Matthew Blackwell, Amelia II: A Program for Missing Data, (last visited April 23, 2011).

[129] Williams, supra note 101, at 13 (stating that students who choose not to take the bar examination and those who fail the bar examination are often grouped together, but “it is doubtful that these two [groups of] individuals have achieved the same amount of learning”).

[130] See generally Jasjeet S. Sekhon, Multivariate and Propensity Score Matching Software with Automated Balance Optimization: The Matching Package for R, J. Stat. Software 1, 6 (2011), available at; Daniel E. Ho, Kosuke Imai, Gary King, & Elizabeth A. Stuart, Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference, 15 Political Analysis 199, 219-20, 219 n.14 (2007) (mentioning the use of nearest neighbor and optimal matching in the MatchIt software); see Daniel E. Ho, et al., MatchIt: Nonparametric Preprocessing for Parametric Causal Inference, 171 J. Stat. Software 481 (2008).

[131]Paul R. Rosenbaum & Donald B. Rubin, Constructing a Control Group Using Multivariate Matched Sampling Methods that Incorporate the Propensity Score, 39 Am. Statistician 33, 37 (1985) (providing an example of how regressions are used in propensity score matching).

[132] This approach reduces the dimensionality to a single matching criterion, which efficiently balances—albeit indirectly—on the K individual x variables.  Given a dichotomous measure of eliteness (D = 0, 1), the logistic propensity is given by

e(x) = Pr(D = 1 | x) = 1 / (1 + exp[−(β0 + β1x1 + · · · + βKxK)]),

where the linear propensity is obtained as

ℓ(x) = log[e(x) / (1 − e(x))] = β0 + β1x1 + · · · + βKxK.

In the present study, Mahalanobis distance was the similarity measure, based on a combination of the covariates and the linear propensity. As noted by a number of authors, propensity score matching minimizes group differences on a general factor described by the propensity score, while Mahalanobis matching is effective at minimizing group differences on individual covariates orthogonal to the propensity score. Rosenbaum & Rubin, supra note 131, at 37.  Both types of bias reduction are obtained by using the combination.
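
A rough Python sketch of this combined criterion, using simulated data (not the Bar Passage data) and a plain gradient fit in place of the study's statistical software: estimate the logistic propensity, append its linear (logit) form to the covariates, and compute Mahalanobis distances over the augmented vector. Because the linear propensity is an exact linear combination of the covariates, the covariance matrix is singular and a pseudo-inverse is used to keep the sketch runnable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated covariates and a dichotomous "eliteness" indicator D (illustration only).
n, k = 200, 3
X = rng.normal(size=(n, k))
true_beta = np.array([0.8, -0.5, 0.3])
D = (rng.uniform(size=n) < 1 / (1 + np.exp(-(X @ true_beta)))).astype(float)

# Fit the logistic propensity e(x) by gradient ascent on the log-likelihood.
beta = np.zeros(k)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ beta)))
    beta += 0.01 * X.T @ (D - p) / n

linear_propensity = X @ beta            # logit scale, as in the footnote
augmented = np.column_stack([X, linear_propensity])

# Mahalanobis distance over covariates plus linear propensity; pinv handles
# the singular covariance matrix created by the collinear propensity column.
S_inv = np.linalg.pinv(np.cov(augmented, rowvar=False))

def mahalanobis(u, v):
    d = u - v
    return float(np.sqrt(max(d @ S_inv @ d, 0.0)))  # clamp tiny round-off

# Nearest control (D == 0) for the first treated unit.
treated = augmented[D == 1]
controls = augmented[D == 0]
dists = [mahalanobis(treated[0], c) for c in controls]
best = int(np.argmin(dists))
```

The combination works because the Mahalanobis metric weights the covariates by their joint covariance while the appended logit pulls matches toward similar overall propensity, yielding both kinds of bias reduction noted in the footnote.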

[133] Sekhon, supra note 130, at 6-7 (discussing how genetic matching is used in the R package Matching software).

[134] Id. at 1.

[135] Id. at 6. A number of matching algorithms are available including exact, nearest neighbor, optimal, and kernel matching. These methods use different algorithms to find a control individual that is close to a treatment individual. All methods can be used with or without replacement. Use of replacement tends to decrease bias in the treatment effect, while no replacement tends to increase precision. Morgan & Winship, supra note 42. We used nearest neighbor matching with a single match (with replacement) as implemented in Matching.
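
The estimator described in this footnote — single nearest-neighbor matching with replacement, with the ATT computed as the mean treated-minus-matched-control outcome difference — can be sketched in a few lines. The example uses hypothetical scalar propensities and outcomes, not the Matching package itself:

```python
def att_nearest_neighbor(treated, controls):
    """ATT via single nearest-neighbor matching with replacement.

    `treated` and `controls` are lists of (propensity, outcome) pairs.
    Each treated unit is matched to the closest control on the propensity;
    because matching is with replacement, one control may serve several
    treated units (lower bias, at some cost in precision).
    """
    diffs = []
    for p_t, y_t in treated:
        p_c, y_c = min(controls, key=lambda c: abs(c[0] - p_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

treated = [(0.8, 3.0), (0.6, 2.5)]
controls = [(0.79, 2.0), (0.30, 1.0)]
# Both treated units match the 0.79 control (replacement permits reuse):
# ATT = ((3.0 - 2.0) + (2.5 - 2.0)) / 2
att = att_nearest_neighbor(treated, controls)
```

Matching without replacement would have forced the second treated unit onto the distant 0.30 control, illustrating the bias-precision trade-off the footnote describes.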

[136] According to the negative match hypothesis, a negative gap (A < B) will result in less learning than if an under-qualified student had hypothetically attended a less elite school (A ≈ B).

[137] Ho, Why Affirmative Action, supra note 25, at 2002.

[138] Williams, supra note 101, at 14, 16. Williams’ matching analysis was intended to provide additional support for his IV estimates.

[139] The validity of matching estimates may be affected by hidden variables, and evaluating their potential impact is useful.  One approach for determining how a hidden variable may change the magnitude of match estimates is to alter the set of covariates and rerun the analysis.  Sometimes this approach is accomplished by adding interaction terms or, alternatively, by deleting covariates and examining the resultant change in match estimates.  For this reason, sensitivity analyses were conducted by adding four select dummy indicators of region where final bar examinations were taken (Far West, Great Lakes, Northeast, and South Central).  This approach had a negligible effect on match estimates.

[140] Williams, supra note 101, at 48.

[141] Rothstein & Yoon, What Do Racial Preferences Do?, supra note 12, at 689-90.

[142] Sander, Systemic Analysis of Affirmative Action, supra note 4, at 374, 473.

[143] See id. at 473-74.

[144] Rothstein & Yoon, What Do Racial Preferences Do?, supra note 12, at 675 (Figure 3).  Rothstein and Yoon illustrated this effect with a diagram (Figures 4.3 and 4.4), but did not provide an estimate of effect.  Id.

[145] Id.

[146] Wightman, supra note 117, at 25.

[147] Id. at 29 (Figure 4).

[148] Dale and Krueger, supra note 4, at 1491, 1512, 1523.

[149] Ted I. K. Youn & Karen D. Arnold, Chosen to Lead: Generations of Rhodes Scholars in American Meritocracy 3 (2008).

[150] Id. at 7-8.

[151] Id. at 14.

[152] Danielle C. Gray & Travis LeBlanc, Integrating Elite Law Schools and the Legal Profession: A View from the Black Law Students Associations of Harvard, Stanford, and Yale Law Schools, 19 Harv. BlackLetter L.J. 43, 48 (2003).

[153] Id.

[154] See id.

[155] Wilkins, supra note 20, at 1932.

[156] Youn & Arnold, supra note 149, at 15.

[157] Id.

[158] Id.

[159] David B. Wilkins & Mitu G. Gulati, Why Are There So Few Black Lawyers in Corporate Law Firms?: An Institutional Analysis, 84 Calif. L. Rev. 493, 741 (1996) (see Table 5); see Wilkins, supra note 20, at 1933.  For additional quantitative estimates of the effect of elite law school attendance, see Tables 7.3 and 7.4 in Sander, Systemic Analysis of Affirmative Action, supra note 4, at 463-64.

[160] Sander, A Reply to Critics, supra note 4, at 2004-05.

[161] Wightman, supra note 117, at 28 (discussing law school cluster comparisons).

[162] Sander, Systemic Analysis of Affirmative Action, supra note 4, at 372-74; Sander, A Reply to Critics, supra note 4, at 1967-68 (explaining what his regressions show in his opinion).

[163] See Sander, Systemic Analysis of Affirmative Action, supra note 4, at 460 (Sander only mentions selectivity in Table 7.2, but this table does not address selectivity effects specifically).

[164] See Rothstein & Yoon, supra note 12, at 710-14 (discussing the results of their study).

[165] See Sander, A Reply to Critics, supra note 4, at 1993 (explaining that “[s]maller sample sizes make it more likely that real performance differences will not show up as statistically significant”).