[Imc-cfl] Statistical fascism?
dwight.hines at gmail.com
Fri Aug 4 10:08:52 PDT 2006
Empirical Legal Studies and Statistical Fascism
St. Augustine, Florida USA
August 2, 2006
Dwight E. Hines
TRAC (Transactional Records Access Clearinghouse,
<http://trac.syr.edu>), an independent organization that states it is
"Your source for comprehensive, independent, and nonpartisan
information about federal enforcement, staffing and spending", has a
number of specialty areas, one of which is immigration. This week,
TRAC released its most recent study on immigration: "Immigration
Judges".
I have a number of concerns about the data collection methods, the
types of analyses, and the inferences drawn from the data by TRAC.
The "Immigration Judges" research exemplifies a number of problems
that encourage people to perceive Empirical Legal Studies as just
another way for liars to figure out how to win unfairly. There is no
doubt that
the basic assumption of the research shown in "Immigration Judges"
misses a major and fundamental concern in the judicial system of the
United States, equal application of the laws. Although consistency
in judicial decisions across different judges is a superficially
appealing goal, consistency is not the proper focus of any
empirical research that purports to be concerned with fairness. For
fairness, true justice is inconsistent because no two people are
alike, no two cases are alike.
I am not arguing that the judicial system is not amenable to empirical
research. What I am arguing is that consistency is a terrible
indicator of how well a judicial system is functioning to achieve the
stated goals of a culture, in decisions about immigration or any other
area of controversy. Think of the wonderful consistencies that were
possessed by the courts of the Third Reich. The use of numbers of
the types reported by TRAC must be examined closely to be sure our
assumptions, methods and inferences are as sound as we can make them.
I admire what TRAC is doing and have not had a problem or criticism of
their work in other areas. I hope that TRAC reads my concerns and
criticisms as one who supports their mission and their work, as well
as believing the TRAC staff is professional.
Of course, I am disappointed that EOIR (Executive Office for
Immigration Review), and DOJ (Department of Justice) both refused to
make comments on the research by TRAC. The "Immigration Judges"
research provides an excellent opportunity for DOJ to gain awareness
for the importance of the Department of Justice programs, programs
that will help address a few of my criticisms (Global Justice XML Data
Model, <http://www.it.ojp.gov/topic.jsp?topic_id=43>). I am also
disappointed that DEA, ATF, FBI, IRS, & DHS have not made comments
about the mistakes that TRAC made with "Immigration Judges" that might
show TRAC can make mistakes in areas related to a specific agency
based not so much on facts but on methods, assumptions and inferences.
First, a focus on any single component of any of the current judicial
systems functioning in the United States applies a model that has
worked well in parts of physics and parts of biology, but not so well
in sociology. Indeed, ecological approaches to the development of
complex systems, complemented by modern visualization techniques,
offer great unexplored areas of progress, not just in justice but in
healthcare, because they assume multivariate interdependencies.
Given that TRAC emphasizes data, and that there are numerous charts
and graphs and tables in the report, I was astounded to read: "In the
last decade, judges have disposed of "on their merits" somewhere
between 25,000 and 35,000 requests each year." So, in an unpaginated
paper with unnumbered sections, you, the reader will have to search
for that sentence to verify I quoted it accurately, and you know now
that you are dealing with a range of 10,000 cases a year. That is
unacceptably sloppy writing. Multiplying that 10,000-case uncertainty
over ten years gives a cumulative uncertainty of 100,000 cases. One
hundred thousand is too large even for newspaper
reporting. We need specific numbers and this shows the ecology of
judges in the "Immigration Judges" research is not likely to be
ecologically valid because if the core numbers are not known within a
reasonable range, it is not possible to know what you are talking
about. I am not arguing for precision here, but meaningful accuracy.
Ecological validity simply means that the analysis of the data makes
sense in situ, where the data were collected. Ecological validity is
important because without it, one can do analyses of judges and leave
out where they judge, who makes the judging possible, and who and what
they are judging.
First, go into any state or federal courtroom or federal
administrative court and you see that Judges are not alone. A Judge
without sufficient quantitative and qualitative clerical support does
exist too often, but to compare judges who have adequate clerical
support to judges without adequate clerical support is simply wrong.
Go into any court in the United States, from the United States
District Court in Philadelphia, PA, to the Clay County Court, in Green
Cove Springs, FL, and you will see massive paper flows. I've spent
hundreds of hours in different courts and my jaw still drops when I
try to follow what paper flows when, and to, and from whom, and what
paper stays in the same place. I do know that a ripe area for court
research is comparing the rate of response to jury summons to the
quantity and quality of the jury master, and his or her staff, and it
has been ripe for some years, especially in the south of the United
States. We need to know those types of support effects and how they
relate or even dictate judicial outcomes.
Second, comparing Judges on binary decision outcomes is too pure even
for TRAC's limited use of statistics in "Immigration Judges".
Simplifying what dependent variables to measure is often done in
statistics and is necessary in studying judges because TRAC defines
what a judge does extremely narrowly: "In general, the key moment in
each asylum hearing comes when the judge, after considering the
evidence and listening to direct and cross examination, decides to (1)
deny the asylum application (opening up the probability of
deportation) or (2) grant it, on either an absolute or conditional
basis." Using binary outcomes, TRAC cannot tell us how often the
Judge denies asylum because of inadequate paperwork, or because an
applicant does not know how to remedy a deficiency, or because
clerical support misfiled an affidavit, or allows for a continuance
for corrections, or how many applicants have been initially refused
and are filing an amended paper. Most of the missing variables are
shortcomings that learned counsel know how to rectify, but are not
included in the analyses.
These missing variables are more determinative of outcome than the
judge. TRAC notes that 93% of those denied asylum were pro se
applicants. The extremely high failure rate of pro se applicants is
the opposite of results found in the criminal courts, where pro se
defendants have a higher rate of success than represented defendants
(Hashimoto, Erica J., "Defending the Right to Self-Representation: An
Empirical Look at the Pro Se Felony Defendant". North Carolina Law
Review, forthcoming. Available at SSRN:
http://ssrn.com/abstract=901610). Thus, one way of accounting
for the 'inconsistency' in judges' decisions is to measure how helpful
clerical staff are to pro se applicants, how efficient and how
accurate clerical staff are in all the cases, and even the
availability of untrained, informal legal advisors who know the
culture and language of those applying for citizenship.
Third, comparing Judges, without equating for simple, fundamental
variables relating to personal security in the courtroom is not
ecologically valid. The threats of attacks on judges are not to be
minimized, especially if you have seen the people who are so full of
hate and anger and are so limited in flexible thinking that you know
they are real threats. Comparing judges' decisions in San Francisco,
with Judges' decisions in Miami, Florida, without some attempt at
least in knowing that along with an overall elevated security
consciousness in Miami, there are day to day increases in tension that
can be additive or multiplicative depending on the proximity or
severity of other events, would account for some of the
'inconsistency' of decisions due to not only the impact on the judge,
but to the clerical and security staff. Think for a moment, you can
choose to be a Courtroom marshal in San Francisco or Miami, Florida.
I know, a no brainer.
Fourth, collecting data on a specific variable that has limited
ecological validity, like the binary grant/no-grant decision, needs
to rest on a random or nearly random sample, and Posner is cited as
stating the need for random sampling. 400 F.3d 530 (7th Circuit 2005).
Nonrandom samples can be valuable, but such samples can be misleading,
and you cannot compute meaningful confidence intervals from them.
Taking the decisions of the judges nonrandomly on a binary measure
and ranking them by number of 1s or 0s makes little sense because the
binary measure itself is inappropriate. Compounding the use of an
inadequate, nonrandom
outcome variable is to take the different extremes of the ranks and to
compare the extremes on other variables. So, the analysis is
comparing cases decided across judges, not corrected for number of
cases actually heard by each judge, using a most likely nonrandom
sampling system TRAC was assured was random. It stuns me that TRAC
did not report how cases are randomly assigned to judges.
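The point about confidence intervals can be made concrete. Assuming,
hypothetically, a judge with 120 grants in 400 decisions, a Wilson
score interval shows the uncertainty a truly random sample would
support; with a nonrandom sample the same arithmetic can be run, but
the resulting interval has no defined coverage, which is the problem.
All numbers here are invented for illustration.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion.

    Valid only if the n decisions are a random sample; for a
    nonrandom (convenience) sample the interval has no defined
    coverage probability.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical judge: 120 grants out of 400 decisions (invented numbers).
lo, hi = wilson_interval(120, 400)
print(f"observed grant rate 0.30, 95% CI ({lo:.3f}, {hi:.3f})")
```

Even under the random-sampling assumption, the interval spans several
percentage points, which is worth keeping in mind before ranking
judges on point estimates alone.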
Given that a good attorney is not only street wise but court wise, it
is difficult to accept that random assignment of cases in New York
among 34 judges is random — despite a greater than ten-fold difference
in the granting of asylum among the judges. There are problems here.
TRAC is going to have to argue not only that attorneys are ignorant
of the discrepancy in judges' decisions, but also that attorneys
don't find legal ways to increase the probability of going before the
judge who would be most favorable to their client. I would
like to see data on what attorneys are arguing before the different
judges. Attorneys are not ignorant of probability. It would be of
value to examine amended applications to see how the applications
change in their amended versions, after the judge is randomly
assigned, of course.
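One way to probe whether a greater-than-ten-fold spread could arise by
chance is a small simulation: give every judge the same true grant
rate, assign cases at random, and see how far apart the highest and
lowest observed rates fall. The 34 judges match the New York example
above; the per-judge caseload and the true grant rate are invented
assumptions.

```python
import random

random.seed(1)

def simulate_spread(n_judges=34, cases_per_judge=200, true_rate=0.25,
                    trials=200):
    """Median ratio of highest to lowest observed grant rate across
    identical judges under truly random case assignment.

    All parameter values are illustrative assumptions, not figures
    from the TRAC report.
    """
    ratios = []
    for _ in range(trials):
        rates = []
        for _ in range(n_judges):
            grants = sum(random.random() < true_rate
                         for _ in range(cases_per_judge))
            rates.append(grants / cases_per_judge)
        lo, hi = min(rates), max(rates)
        ratios.append(hi / lo if lo > 0 else float("inf"))
    ratios.sort()
    return ratios[len(ratios) // 2]  # median max/min ratio

print("median max/min grant-rate ratio by chance alone:",
      simulate_spread())
```

Under these invented assumptions the chance spread is typically well
under two-fold, far below ten-fold, which is exactly why the observed
discrepancy demands an explanation such as attorney behavior or
nonrandom assignment rather than sampling noise.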
Fifth, I would argue, and there are likely a whole gaggle of
immigration judges who would agree with me, that the key decision in
an immigration application is who a person chooses as an attorney.
Attorneys are ignored in the TRAC discussion of inconsistent
decisions.
Sixth, I am not discussing judges' salaries or benefits as variables
that should be included, but one has to wonder if some indicator of
salary discrepancy between what the judges are earning and what
private practice attorneys are earning who have less experience and
skills, could account for a small but significant amount of the
variance in 'inconsistency in immigration judges' decisions. (GAO,
Human Capital: Trends in Executive and Judicial Pay, June 2006.)
Similarly, budgets of the courts need to be included, at least in any
initial analyses, to be sure all the courts are equated on the dollars
spent. I do not know for sure, but I would submit that the budget per
case, in actual costs, differs between San Francisco and Miami. To
argue that such differences are not important to the inconsistencies
in outcome decisions is to strip all decisions of their ecology.
Seventh, finally, there is absolutely no discussion of the quality of
the judges' decisions. Was the person qualified, was the person
granted or not granted asylum properly? A two-by-two table could be
constructed: qualified (yes or no) by grant (right or wrong). The
table would be useful for exploring what the factors above,
and others, might contribute to the distribution within TRAC's Figure
1. Such a table would be fun to use at the immigration judges'
workshops because it would allow judges to validate, or not, other
judges' decisions. Over several sessions, with judges making the
ratings, tables large enough for legitimate statistical tests could be
built. One side effect of such exercises is that judges would become
more similar in their decisions. Yet, given the large, often
overwhelming gap between judicial decisions based on information rich,
hot courts compared to cold, near sterile appellate paper review, the
results might be surprising.
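The proposed two-by-two table can be tallied directly. The case list
below is entirely invented; in a workshop, each row would come from a
judge independently rating whether the applicant was qualified,
paired with the original grant decision.

```python
from collections import Counter

# Hypothetical rater data: (reviewer's qualified judgment, judge's
# original decision). Every entry is invented for illustration.
cases = [
    ("yes", "granted"), ("yes", "denied"), ("no", "denied"),
    ("no", "granted"), ("yes", "granted"), ("no", "denied"),
    ("yes", "denied"), ("no", "denied"), ("yes", "granted"),
]

table = Counter(cases)
for qualified in ("yes", "no"):
    row = [table[(qualified, g)] for g in ("granted", "denied")]
    print(f"qualified={qualified}: granted={row[0]} denied={row[1]}")

# The off-diagonal cells are the candidate errors: qualified
# applicants who were denied, and unqualified applicants granted.
errors = table[("yes", "denied")] + table[("no", "granted")]
print("apparent error count:", errors)
```

With enough rated cases accumulated over several sessions, the same
table supports standard tests of association, which is what would
make the workshop exercise statistically legitimate.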
Similar criticisms of the empirical approaches to studying judges'
decisions are possible: Schneider, Daniel M., "Using the Social
Background Model to Explain Who Wins Federal Appellate Tax Decisions:
Do Less Traditional Judges Favor the Taxpayer?". Virginia Tax Review,
Vol. 25, p. 201, 2005 <http://ssrn.com/abstract=843464>. It's ironic
that the studies of judges and judicial decisions appear to be
consistent in that they take the judge out of the court and put her or
him into a tiny, environmentally controlled, empty room for making all
decisions. Comparing judges and juries does ameliorate some of the
problems of ecological validity (Eisenberg, T., Hannaford, P., Heise,
M., LaFountain, N. Munsterman, G., Ostrom, B., Wells, M. "Juries,
Judges, and Punitive Damages: Empirical Analyses Using the Civil
Justice Survey of State Courts 1992, 1996, and 2001 Data". Journal of
Empirical Legal Studies, 3(2), 263-295, July 2006).
Quoting Judge Posner and Attorney General Gonzales about there being
specific and general problems with immigration cases is a solid reason
to conduct serious analyses of court data. However, to focus on the
judges as making the key decision is not going to help us improve the
system, it will simply delay addressing all of the issues. Thus,
cutting the extreme judges from the courtroom, be they in San
Francisco or Miami, is not going to solve the problems and might make
things worse by creating justice formulas that are shams. If there is
single beauty to justice, it is that each case, each presentation to
the Court, is individual. Thus, apparent inconsistency across cases
may be desirable. Because our understanding not only of statistics,
but of human nature, still needs development, let's pray that the
courts ignore this TRAC study and join with the Federal Judicial
Center to find out what, if anything, is going on with the immigration
Courts (Courts with a capital C).
I emphasize that I don't think that TRAC is attempting a hatchet job
on immigration judges. I see too many 'studies' like this one, and
worse, that fail to take into account the complexity of a system, and
become frustrated that yearly attempts to improve it or make