
UVA, Implicit Bias, and Statistics

Polarization has always been a part of the American political system. It affects our ability to react honestly to incidents and to debate ideas critically, dividing us into tribes that have conflicting interests and are constantly at war with each other. In this kind of environment, it is crucial that both sides can agree on a common definition of facts and truths: if we are going to disagree on subjective questions like rights, the role of government, and morality, we need at least to be able to agree on the nature and definitions of the subjects of our debates.

If you spend time following politics on the internet, you will know that this is becoming harder and harder with each political storm on platforms like Twitter, and I think it is partly due to a change in how we have come to define “the truth”. We live in a time of unbelievable technological and economic change, all of it happening amid unprecedented interaction between cultures and people from different parts of the world (the European migrant crisis of the last few years is the clearest example). In this context of instability and cultural flux, when traditional definitions and behavioral truths are either being challenged or decimated, the only definition of truth we have left in the West is the “objective” or observable truth.

This is an incomplete view of the truth, one that ignores the ancient behavioral truths of human beings across cultures (most visibly embodied in religious moral systems), but despite its flaws it is a good start in our attempt to define the nature of the relationships and circumstances we are dealing with. However, it puts us in an interesting dilemma when it comes to communicating with each other.

If the observable universe is enough to inform you about the complexities of human behavior, then measurement and statistics attain the power of religious language, essentially becoming irrefutable “evidence” for whatever side you are arguing for. Therefore, if you find a headline saying that a study proves what you think you know about the world, your confirmation bias will immediately make you accept the validity and accuracy of the study and all of its assumptions, data, and conclusions.

Perhaps no scientific study has been weaponized in this manner more than the Implicit Association Test (IAT), which is supposed to demonstrate to participants how they are implicitly prejudiced toward other groups of people. The University of Virginia, my alma mater, recently introduced this test as mandatory training for some incoming first-year students, mainly as a response to the violence in Charlottesville in August 2017.

But UVA’s introduction of the test is a good example of the purpose this test, and studies like it, serve in today’s polarized political climate. As Jesse Singal points out in this probing article in NY Magazine, the main draw of the IAT is that “its notion, and the data surrounding it, have fed into a very neat narrative explaining bias and racial justice in modern America”.

Singal explains further that while most people agree explicit racism and prejudice have decreased dramatically in America over the last few decades, things like the persistent economic disparities between groups create a need for a deeper form of prejudice, one that is hidden and secretive and lies beneath the surface even of people who are not explicitly racist or prejudiced. This is where the IAT comes in, offering people like university administrators the ability to act strongly after a racist or prejudiced incident (like the one in Charlottesville in August 2017), pacifying those who want a visible reaction against such actions while also being able to look good, decent people (who are not prejudiced) in the eye and tell them that this implicit bias is so deeply hidden within them that it is beyond even their own knowledge.

This is a bad solution for all sorts of ethical and philosophical reasons, but an underreported flaw of the IAT is its scientific and statistical weakness. As Singal again writes in this must-read NY Mag piece, there has been a great deal of scholarly work, mostly ignored by the media, finding that the IAT falls short of the usual standards of academic rigor in fields like psychology.

He continues, saying that the “IAT is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior — even the test’s creators have now admitted as such”. He claims that one can easily see from the history of the test that it was “released to the public and excitedly publicized” long before it could have gone through the required scientific steps of repetition and challenge by other academics to establish its reliability (a study is reliable if it produces similar results under consistent conditions). Before academics could work through the scientific method, the IAT and its insights were already the talk of cocktail circuits on both of America’s coasts, and they were therefore lost to the rigor the scientific method demands.
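To make the reliability point concrete, here is a minimal sketch in Python of what test-retest reliability measures: give the same people the same test twice and correlate the two sets of scores. The simulated data, the noise level, and the rough 0.7 rule of thumb mentioned in the comments are illustrative assumptions, not figures from the IAT literature.

```python
import numpy as np

# Illustrative simulation only (not real IAT data).
rng = np.random.default_rng(0)
n_participants = 500

# Assume each participant has a stable "true" attitude, and each test
# session adds independent measurement noise (the noise level is made up).
true_attitude = rng.normal(loc=0.0, scale=1.0, size=n_participants)
session_1 = true_attitude + rng.normal(scale=0.5, size=n_participants)
session_2 = true_attitude + rng.normal(scale=0.5, size=n_participants)

# Test-retest reliability is commonly summarized by the correlation
# between the two administrations; psychometrics texts often treat
# roughly 0.7 or higher as acceptable for judging individuals.
reliability = np.corrcoef(session_1, session_2)[0, 1]
print(f"test-retest correlation: {reliability:.2f}")
```

The larger the session-to-session noise relative to the stable signal, the lower this correlation falls, which is what critics mean when they call a measure “noisy” and “unreliable”.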

In today’s political climate, most people are either ignorant of this part of the IAT’s history or have chosen to ignore it in favor of the “good” the test can do in reducing prejudice in the world. For many of its advocates, even questioning the scientific accuracy of the IAT can be read as a sign of prejudice. Combine an unreliable test like the IAT with the power statistics now hold, and you get a terrible marriage.

To demonstrate, let us look at a generic conversation between a liberal student and a conservative student at a nameless university. Our liberal student (student A) will make a claim about a certain group of people and will offer a study in support of his claim, and our conservative student (student B) might do the same for her counterclaim.

This interaction is perfectly fine if both A and B understand the statistics of the studies they are dealing with – the assumptions, the data, the methodology, the analysis, and the conclusions – and are willing and educated enough to talk about these details.

And of course, they also need to understand the limitations of the studies – whether they are externally and internally valid and whether they are reliable. Unless the two students also talk about these things when they introduce a statistical study into the argument, I believe they are committing the logical fallacy of “appealing to authority”, with “science” and “statistics” playing the infallible authority in this case. This was a common occurrence among many of the people I discussed politics with at UVA: I would push back, or at least ask about a study’s details, and the other person would respond glibly, “Are you questioning science and the experts?”

In the cases where both sides understood the statistics they were talking about, the conversation could be enlightening and productive. In a world where science and statistics are seen as the only acceptable definition of the truth, I believe it is essential that we get this part of our discussions right and avoid the trap of this logical fallacy.

For this to happen, we need to understand that simply mentioning a study and its findings does not win us the argument against our ideological opponents; the flaws and limitations of a study are just as important as, if not more important than, its findings. If both sides of the political aisle accept this level of rigor in talking about statistical studies, I believe such studies can play an important role in our political climate. Only then will we be able to fight the onslaught of bad scientific tests like the IAT, and the severely damaging impact they could have on our politics and discourse, and begin to use statistics in a scientifically demanding and accurate way.
