Fake news is in the headlines, and already a phalanx of tech-savvy students has come to our rescue by creating apps to sort verified stories from unverified ones. The effort is commendable, and the technology impressive. Unfortunately, the problem is harder to solve than that. In fact, it may be unsolvable if the standard is a perfect flow of information and perfectly informed decision making. But there are ways to improve on both counts, and that is where our focus should lie.

Though fake news is a problem, it is not necessarily THE problem. The problem is the lack of objectivity among both creators and users of information, a lack which manifests itself on a sliding scale that looks something like this:

[Image: a sliding scale of reportage, from objective reporting at one end to fake news at the other]

What Are We Up Against?

On one end of the spectrum is the gold standard of objective reporting, in which a detached, impartial reporter presents a completely neutral recitation of illuminating facts from which a fully rational readership may choose the best course of action. This isn’t possible, of course, but it serves as a model to which people should aspire.

Less desirable than that perfect world is one in which authors, reporters, and publishers bring varying degrees of bias toward their respective ideologies and arguments. In an ideal world they would be up front about their biases, but often that is not the case. Biased reporting may come in the form of words or images that emotionally manipulate readers and viewers. Evaluating sources for bias can be taught, but people should also be aware that bias can be masked by false balance and framing.

False balance occurs when reporters adopt a tone of neutrality by treating two sides of a controversy or argument as equally valid, even when the underlying evidence weighs heavily toward one of them. For instance, when the vast preponderance of evidence shows that global warming is happening and indicates with very high probability that human activity is the cause, sources that give equal time and word count to the few who deny it obscure the true picture of the scientific consensus.

Sources may also express bias by framing the discussion. Just as a window allows people to view only part of the landscape outside, framing limits how information is perceived. For instance, an event, decision, or policy may be discussed and evaluated solely in terms of one particular party’s interests, as when an editorial critiques an Environmental Protection Agency report for its impact on loggers while ignoring its broader environmental implications.

On the far end of the spectrum is fake news, which, to be completely clear, is not news. It is fabricated but often truthful-seeming nonsense created for the sole purpose of generating ad revenue for websites. The unfortunate by-product is that readers of those websites are misinformed and are destined to become even more misinformed as they keep returning to similar sites and having their biases reinforced with untruths.


Why Don’t We Just Fix It?

Because of the complicated gradations between these points on the spectrum, it is sometimes difficult to discern where one kind of information ends and another begins. In fact, there is a great debate within journalism itself about the conflicting impulses of the profession and the exact meaning of terms such as objectivity, fairness, balance, and neutrality. Brent Cunningham captures that debate in the Columbia Journalism Review, and his piece is a good place to start for a better understanding of the theory surrounding this discussion.

It is also often difficult to unpack why sources take the positions they do. Absent obvious conflicts of interest or economic motives, who can say why someone would respond to a high-profile sexual assault on a college campus by penning an article about the problem of underage drinking, all but ignoring the role of the actual assailant? As long as there are dollars to be made by favoring one side over another, reportage will be biased. Government mechanisms for dealing with this, particularly the FCC, have been quite timid in their approach to biased news and utterly silent regarding fake news. Though it would be useful to have an official, impartial mechanism to root out every instance of abuse, it would be unwise to pursue that aim to an extreme.


Top-Down and Bottom-Up Information Systems

In the Top-Down model of information, those in authority exert most of the control over what is printed, who has access to it, and even, to a certain extent, how it is interpreted by those who do have access. This would be an excellent environment for gathering and disseminating objective information if and only if those in authority were benevolent, wise, impartial, and infallible. In every other instance the results could range from less than desirable to catastrophic.

In the 1800s a number of breakthroughs in mechanical press technology opened the door to mass production of print, which gave more people, regardless of wealth or social prestige, access to printing. This model of information might best be called Bottom-Up: many people outside the ruling class or social system (or otherwise lacking authority) can disseminate their own information. Though the printing press guaranteed that more things could be put into print, it could not guarantee authority or accuracy. Even so, the freedom to read and write what people want to read and write is a fundamental principle of democracy and self-governance. To establish freedom of the press and to decry censorship is to accept that some truths will, de facto, be decided by consensus rather than by a single, objective measure. At this point it is academic to ask which model is superior in which circumstances. There are plenty of reasons to fear a central controller of information, just as there are reasons to fear the deluge of fake news sites. The internet is a billion Gutenbergs, and neither model can guarantee accuracy.


The Consumer

Further complicating the matter is the fact that information consumers are subject to their own preconceived notions and cultural, economic, and even biological biases. People seek out sources they know they will agree with because A) it feels good to be validated by others, B) reading others’ arguments helps people clarify and solidify their own, and C) people avoid feelings of cognitive dissonance like the plague.


Political scientist Arthur Lupia and others have documented a pre-conscious “fight or flight” response people experience when they encounter information with which they disagree: they either disengage from the article and seek out more agreeable sources, or they prepare to fight it with counterarguments of their own. It should come as no surprise that people will work very hard not to accept something they don’t want to believe. In short, people are going to write what they are going to write and click on the links they want to click on, and as long as there are incentives for both, this state of affairs will persist.


Steps to Take

These concerns about information creation and consumption are terrifying stuff for librarians. If people cannot assume the impartiality and accuracy of information sources, it is incumbent upon them to evaluate the information themselves. Yet if information consumers themselves cannot guarantee their own rationality and open-mindedness, humanity in the information age would seem to be at an impasse.

Fortunately, there are some steps consumers of information can take to protect themselves. The first step for the information-literate individual is to recognize that all people have biases and that present company is no exception. Simply by being aware of their own biases, people can start to check themselves. Because people often have what is known as a bias blind spot, meaning they are unaware of their own biases, it might be wise to use a tool such as the Harvard Implicit Bias Checker to discover hidden biases. Though some question the validity of the tool, anything that makes people question their own biases is probably a step in the right direction.

It is also recommended that people develop and internalize a rubric for evaluating sources. At the very least, the rubric should formalize the process of evaluating every source for accuracy, bias, and timeliness, and those criteria should be applied evenly to articles people find agreeable and to those they find disagreeable. LIS101 suggests this one.

A final step is to triangulate information sources. Comparing how a topic is discussed in public/government sources, private/free-market sources, and academic/scholarly sources will very likely reveal discrepancies in framing, depth of coverage, and scope. By recognizing what kind of authority each source brings to the discussion, people can develop a much more nuanced and informed perspective on the subject at hand.