Module 5: Information Online


Since the dawn of the information age, people have been concerned about information overload, the stress people feel when they are exposed to so much information that it becomes impossible to think about it clearly. The internet is simply astonishing in its breadth and depth, yet for reasons born of technology, law, and user predilection, not everything is on the internet, and what is there is often inaccessible. A brief history will help readers understand some of the central issues of the internet, including cost, access, and privacy. Likewise, some early myths will be explored. Finally, a discussion of the internet dovetails quite naturally with some of the problems of academic publishing.

Political Aspects of the Internet

The internet was started for military purposes by Larry Roberts, a project manager at the Department of Defense, whose team created ARPANET, an early network that allowed computers to connect with one another in order to exchange data. McChesney (2013) documented the close ties between the military and internet technology, arguing, in fact, that, “Military spending on research and development is such a central part of American capitalism that it is almost impossible to imagine the system existing without it” (101). Nevertheless, early internet pioneers, such as Apple’s Steve Wozniak, saw the internet as a tool of social justice, collaboration, and cooperation, and the ethos of the computing culture of the 60s and 70s was very much anti-commercial (101). In the early 80s, the US Postal Service sought to establish an e-mail service for businesses and citizens, but companies such as AT&T lobbied the Reagan administration to keep it from establishing a foothold, and a commercial internet was all but guaranteed. McChesney speculated that had the US Postal Service been allowed a greater hand in shaping the internet at that point, the end result would have been more as the early egalitarian pioneers had envisioned it in the 60s and 70s (103).

The world wide web as people know it today came much later, in the early 1990s, and was the product of software layered over the skeleton of the internet, including innovations like graphical user interfaces (GUIs) and hypertext links. Just to be clear, the internet is the hundreds of millions of computers that are all linked together; the web is the software that makes them navigable. Infomercials for the early web were often more embarrassing than they were informative about what information people might find and how they might find it.

Indeed, there was a certain giddiness about the internet. In the popular imagination it was the thing that would solve all problems. Students would learn more than ever before, unpopular kids would make friends all across the globe, businesses would increase revenues, governments would become transparent, and the oppressed would overthrow their oppressors. There were two basic assumptions about the internet that many people had and that, unbelievably, people still hold onto today:

The first is that the world wide web would put all the world’s knowledge at everyone’s fingertips.

The second is that the world wide web would lead to a worldwide revolution of truth and understanding.

These two claims, though related, need to be addressed one at a time to show how each is often right but also often wrong. This will lay important groundwork for understanding the kinds of sources one might encounter (and might not!) while searching the internet.

All the World’s Knowledge
There is an idea among certain university and college administrators that colleges no longer need libraries because “everything is on Google.” The first thing that lets librarians know these people are not informed enough to make that kind of decision is that they think Google is something that things are on. In case they are reading: Google is a search engine, and though Google, Inc. offers a wide variety of services, it is not the host of all the internet’s information. The search results one retrieves through the Google search engine are not ON the search engine; they are merely indexed and listed within its database. The articles and webpages are on different servers all over the world, and search engines merely direct people to them if they search for the right words.

But there is, to be sure, an awful lot of information on the internet, and the information spans the breadth of human interest. One statistic that is often bandied about the web is that every two days people create and upload upwards of 5 billion gigabytes of data. For those trying to put that into analog terms, that is approximately 323,910,000,000,000 individual pages of text. Every two days.

But there is a big caveat about this volume of information. The EXACT information one may need from the exact source may not be on the web at all. In fact, if someone were looking for copyrighted print material, such as a novel or a textbook, there is a very good chance that it is not on the web, unless it is there illegally. That is to say, though one may find a page full of algebra problems to practice on, he is unlikely to find the exact problems the instructor has assigned. Thus, it is not exactly true to say that everything is on the internet.

Truth Revolution
Another disappointing realization about the internet is that, though it has the power to be informationally transformative, it rarely is. If the early idealism were mapped out to the present reality it would receive mixed scores. Are people more informed, institutions more effective, government more transparent, education less expensive, and people worldwide free from tyranny?

People can access trillions of pages of information online, businesses can harness social media to help with branding, Khan Academy and other MOOCs (Massive Open Online Courses) offer free education, and Twitter helped Iranian youth organize protests against a possibly corrupt election. Unfortunately, the protests mostly fizzled out after the West prematurely dubbed them the “Twitter Revolution” (Morozov 1-4). So strangely, even with all the good that has been accomplished because of the web, much of the world has not benefited in the way originally envisioned. The reasons take a bit of explaining.

One reason is that, no matter how freely information is flowing, the power structures that were in place before the internet are still in place. Another reason is that the free-flow of information does not guarantee the free-flow of truth. These reasons are inextricably intertwined. Consider two comparisons:

1) After Julian Assange freely gave the public secrets about our government and business elites, an international warrant was issued for him; after Mark Zuckerberg sold the information of the public to our business and government elite, he became Time’s person of the year.

2) In China, censorship of the internet occurs because of a government-mandated “Great Firewall” that restricts the public’s view of the whole internet to a sizable but carefully monitored bubble, and because Chinese internet companies self-censor and monitor their pages and users to make sure that nothing too politically volatile happens within the bubble (MacKinnon 34-37); in America, censorship of the internet occurs because government exerts too little control over the internet. Deregulation of the internet in the 1980s effectively created monopolies or duopolies for most consumers, meaning that a handful of corporate players determine who has access to information and how much it costs each month (McChesney 109-13). The result today is that American internet access is much more expensive and much slower than in many other first-world countries that have less laissez-faire telecommunications policies.

The end result of all four of the above contingencies is the disenfranchisement of everyday people. These seemingly separate outcomes are the product of one system which reigns regardless of governance: the interests of the elite are valued over the interests of the public. Even more optimistic voices in the debate, such as Cass Sunstein, author of Republic.com 2.0, who sees information as an invaluable public commodity worthy of protection (106-07), offer very little in the way of ideas for protecting it. Though it is inarguably better to have the American problem with the internet rather than the nightmare documented by Reed (2012) of authoritarian governments using malware to spy on dissidents and would-be revolutionaries (125), it should be stated that neither offers an information freedom tonic.

Even if the cause-effect relationship in these matters is not as clear as the above comparisons would suggest, it is no secret that those who have power have more control over the internet than those who do not. More money means better websites, more bandwidth, more advertising, and a much farther reach. Meanwhile the digital divide, a phrase librarians began using in the 1990s to describe the gap between those who have access to information and communication technologies and those who do not, is just as much a concern today as it always has been. The well off, the urban, and the well-connected have more access to digital technology than the poor, the rural, and the politically disenfranchised.

Information freedom seems to depend on the tension between private and public power, the availability of ethical information from multiple points of view, and the accompanying ability to access, compare, and synthesize those disparate sources into an informed and usable knowledge. If there are no safeguards to ensure that information is ethically created and disseminated, it becomes that much harder for the public to access reliable information.

Internet Searching: Exploring Limitations of Design and Content

Limitations of Searching

In the early days of what is now known as the internet, there were millions of pages online, but it was exceedingly difficult for people to find what they were looking for. Some of the earliest search engines, such as Archie and VLib, required quite a bit of learning before newcomers to computing could use them effectively. Natural language searching and metadata searching didn’t exist yet, so queries had to be rather circuitous. For instance, if someone wanted to find a picture of a lamb to use on a webpage, he couldn’t simply search for “picture of a lamb.” A search for church AND *.gif would likely retrieve a list of churches that had pictures on their websites, and clicking through each page might turn up a lamb. The process was as frustrating as it sounds.

The 1990s brought several advancements in the area of search engines, and some of the early services were AltaVista, Excite, HotBot, and WebCrawler. Yahoo and Google both appeared in the mid 90s and became the dominant players for the next few years, with Google continuing its dominance today. Over time, both the ease and the range of searching improved. Metadata and natural language searching became more sophisticated, and Google began indexing file types that had not been accessible before.

Google works by sending webcrawlers out to scan and record what is on the pages they encounter. This is called “indexing.” Google keeps on its internal servers a large index of keywords along with the addresses of the pages where those keywords can be found. When end users do a search, the words entered into the search engine are compared with the words in Google’s index, and through an algorithm known only to Google, it lists the pages it thinks the user wants.
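The idea of an index that maps keywords to page locations can be sketched in a few lines of Python. This toy inverted index is purely illustrative (the pages and URLs below are invented), and it bears no resemblance to the scale or sophistication of Google's actual systems:

```python
# A toy inverted index: the core data structure behind keyword search.
# Illustrative only; the pages and URLs are invented.
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of page URLs that contain it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Return pages containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return results

pages = {
    "example.com/a": "algebra problems and practice drills",
    "example.com/b": "pictures of a lamb at a church",
    "example.com/c": "lamb recipes and cooking tips",
}
index = build_index(pages)
print(search(index, "lamb church"))  # only the page containing both words
```

Note that intersecting the sets of matching pages is an implicit AND: a page must contain every query word to be returned, which is roughly how multi-word keyword queries behave by default.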

The URLs in the results list are ordered by relevancy. Google uses a trademarked algorithm called PageRank that assigns each web page a relevancy score based on signals such as the number of times the keywords are mentioned on the page, how many other pages link to that page, and how long the page has existed.
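PageRank's core intuition, that a page linked to by many other pages deserves a higher score, can be illustrated with a small sketch. The link graph below is invented, and real PageRank handles complications (dangling pages, spam resistance, many additional signals) that this toy version ignores:

```python
# A minimal PageRank iteration over a tiny, invented link graph.
# d is the damping factor, 0.85 as in the original PageRank paper.
def pagerank(links, iterations=50, d=0.85):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Each page that links to p passes along a share of its own rank.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - d) / len(pages) + d * incoming
        rank = new_rank
    return rank

# Page C is linked to by both A and B, so it earns the highest score.
links = {"A": {"C"}, "B": {"C"}, "C": {"A"}}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "C"
```

The design choice worth noticing is that a page's score depends on the scores of the pages linking to it, which is why the computation is iterative rather than a one-pass count of inbound links.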

Another feature of Google is that it uses something called natural language searching. If someone types in a sentence or question that describes the information being sought, Google’s database uses programmed logic to determine the keywords in the sentence. It searches for those words first and the other words second, and then it displays search results based on an algorithm of how often certain keywords appear on the page.
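A crude approximation of that first step, pulling the substantive keywords out of a conversational query, might look like the sketch below. The stopword list is an invented, tiny stand-in for the much larger lists and statistical models real engines use:

```python
# A naive sketch of "natural language" query handling: strip common
# filler words so the substantive keywords get searched first.
# The stopword list here is a tiny, invented example.
STOPWORDS = {"what", "is", "the", "a", "an", "of", "how", "do", "i", "find"}

def extract_keywords(question):
    """Lowercase the query, drop punctuation, and remove stopwords."""
    words = [w.strip("?.,!").lower() for w in question.split()]
    return [w for w in words if w not in STOPWORDS]

print(extract_keywords("How do I find a picture of a lamb?"))
# ['picture', 'lamb']
```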

But this is not as accurate a method as it could be. For instance, if someone wanted to find all of the things that George W. Bush has written, he might do a search for author AND George W. Bush. But his results would consist of articles that contained those words, regardless of the context. Therefore, it would retrieve any combination of pages about things he had authored, authors he likes, authors who influenced his thinking about foreign policy, or authors who wrote about him (and many others, as well). 

One advantage the library databases have over internet search engines is that databases include metadata, which, as the name implies, is simply data about data. For example, if George W. Bush had written an editorial titled “War Planning for Fun and Profit,” and it was published in The Wall Street Journal, then the database would have (at least!) the following information:

  1. The title of the article, “War Planning for Fun and Profit.”
  2. The author’s name, George W. Bush.
  3. The name of the source, The Wall Street Journal.
  4. The text of the article, itself.

The first three pieces of information are considered metadata, which is information ABOUT the fourth piece of information, which is the article, itself. The metadata in databases allows researchers to be more accurate. For example, they can search for articles authored by George W. Bush and not have to sift through a bunch of articles that happen to contain the words George W. Bush and author. 

Different kinds of searches can be combined to create refined searches:

Author: George W. Bush


Subject:  Diplomacy or Foreign Policy


Keyword:  Iraq or Middle East
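The difference between fielded (metadata) searching and bag-of-words keyword matching can be sketched as follows. The records and the helper function are hypothetical, meant only to show how metadata lets a database match the author field exactly instead of scanning for the words anywhere on the page:

```python
# A sketch of fielded (metadata) searching, as a library database does it.
# The records below are invented for illustration.
records = [
    {"author": "George W. Bush", "subject": ["Foreign Policy"],
     "title": "War Planning for Fun and Profit",
     "text": "An editorial on diplomacy in Iraq."},
    {"author": "Jane Doe", "subject": ["Literature"],
     "title": "Authors Who Influenced George W. Bush",
     "text": "A survey of the former president's reading habits."},
]

def fielded_search(records, author=None, subjects=(), keywords=()):
    """Match the author field exactly; match any listed subject or keyword."""
    hits = []
    for r in records:
        if author and r["author"] != author:
            continue
        if subjects and not any(s in r["subject"] for s in subjects):
            continue
        if keywords and not any(k.lower() in r["text"].lower() for k in keywords):
            continue
        hits.append(r["title"])
    return hits

# Unlike a keyword-only search, this returns only what Bush actually wrote:
print(fielded_search(records, author="George W. Bush",
                     subjects=("Diplomacy", "Foreign Policy"),
                     keywords=("Iraq", "Middle East")))
```

The second record contains the words “George W. Bush” and “Authors,” so a plain keyword search would retrieve it; the fielded search skips it because its author field doesn't match.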

Limitations of Content

Searching the internet has become both more efficient and more user friendly over the last 30 years. However, even if it has become easier to find pages, there is still no guarantee that the information therein is relevant, timely, or even true. It is important to remember that the internet is a megaphone for information, and that’s not the same thing as being a megaphone for truth. For instance, peer-reviewed, rigorous studies show the safety and efficacy of vaccines, as well as the harmful societal effects of not vaccinating children against various diseases.

The internet does not care.

Page rankings and results lists are not formed in the manner of best-information-first. They are formed, differently in different search engines, by paid placement, by placement of keywords, by the number of other pages that link to a page, and/or by some combination of those factors, all of which can be faked or tweaked by savvy web programmers hip to the knowledge of Search Engine Optimization, or SEO as it is known in the industry. A search for websites about vaccinations is just as likely to turn up a board-certified specialist as it is to turn up Jenny McCarthy, who has no medical training. It is then up to the user to make a sound judgment about which to read. The fact that students will one day be confronted with a list of results that contains choices such as these is why information literacy stresses authority and credibility.

Even bearing those criteria in mind, the choice between sources is often confusing, even purposefully so. Google Scholar was at one time seen as a scholarly refuge from the internet scourge. Here, it was thought, one could at last find free scholarly and academic journal articles. However, according to Gina Kolata, author of “Scientific Articles Accepted (Personal Checks, Too),” appearing in the New York Times (2013), there is an entire industry built around the appearance of academic qualification, “a parallel world of pseudo-academia, complete with prestigiously-titled conferences and journals that sponsor them” (Par. 4).

Many of the journals and meetings have names that are nearly identical to recognized authorities, deceiving both professors and students.

Unwitting scholars are solicited to publish in the journals and attend the conferences only to find after the fact that there are substantial fees for doing so (Par. 4). Of course, while those individuals deserve sympathy, the greater loss is the net impact on academe as a whole.  For as the article points out, “[S]ome researchers are now raising the alarm about what they see as the proliferation of online journals that will print seemingly anything for a fee” (Par. 8).

The problem seems to lie in the abuse of the “open access journal” concept. Peter Suber, author of Open Access (MIT Press 2012), provides a useful overview of open access journals and defines them as scholarly writing that is “digital, online, free of charge, and free of most copyright and licensing restrictions.” While open access journals and their attendant online scholarly communities are a boon to the free flow of information and the collaborative sharing of knowledge, things most people see the benefit in, the open nature of the internet allows for abuse. One kind of abuse stems from charlatans running vanity presses who seek to separate academics from their paychecks. A much more insidious kind of abuse is the creation of academic-seeming fora in which those who actively seek to obfuscate knowledge have a new megaphone to disseminate misinformation. There is, for instance, no shortage of evidence of right-wing think tanks and pro-corporate forces trying to muddy the waters of the science behind anthropogenic climate change. If think tanks and influence brokers, as seen in Module 4, astroturf for economic benefit and political ideology, then similar assaults on academic discourse are not a far stretch.

Jeffrey Beall, a research librarian at the University of Colorado in Denver, developed and maintained his own blacklist of what he deemed “predatory open-access journals” between 2010 and 2016. There were 20 publishers on his list in 2010, and more than 300 when the list was closed and the content removed from the web. Beall has offered no concrete explanation for the site’s closure, but Inside Higher Ed quoted him vaguely as stating it was because of “threats and politics.” To be sure, not all of the sources on his list are necessarily “bad” or unreliable sources, but they did meet a list of criteria he had developed for identifying unsavory journals. The problem is that it is very difficult to tell one from another without a high degree of subject-specific knowledge and a savvy understanding of information literacy. As Kolata points out, “[Researchers] warn that non-experts doing online research will have trouble distinguishing credible research from junk….They will not know from a journal’s title if it is for real or not” (“Scientific Articles Accepted” Par. 8).

The internet archive known as the Wayback Machine has preserved the list for now.

Much to the chagrin of librarians everywhere, teachers often tell their students to use Google Scholar, and though Google Scholar is in no way connected to shady open access publishers, a cursory search on Google Scholar turns up a number of the journals listed on Beall’s blacklist. That is not to say that all the information found in those journals is wrong; however, telling truth from fiction and wheat from chaff in the world of open access is a task best left to experts who already know the ins and outs of the field. It is worthwhile to remember that extraordinary claims require extraordinary evidence. Sometimes the lone voice that bucks the conclusions formed by the majority of a field’s scholars is a groundbreaking genius, but most of the time, he is simply incorrect. Information literacy asks students to compare and contrast sources within a field for that very reason.

Limitations of Design

As discussed in Module 2, people have a tendency to seek out sources that agree with them in order to avoid the uncomfortable feeling of cognitive dissonance. This predilection of human psychology appears to be built into the design of some of the most prominent sites. For instance, one person’s Google search results do not necessarily look like another person’s. People who have Google accounts have distinct profiles based on their previous search habits, location, and approximate demographics, as well as time of use and web histories. People who are not Google members have similar profiles built around their IP addresses. This is done so that Google can sell ads to media companies that will target individual users as they move about the web. Readers may have noticed that some ads and products recur no matter what page they visit. The advantage of this system to users is that it builds a profile of the kinds of sites they like to visit. Thus, search results are streamlined to reflect user interests. That’s convenient, but anathema to information literacy.

Eli Pariser points out a very important problem in The Filter Bubble: the bubble doesn’t just reflect a user’s identity; it shapes his options as a consumer and participant in media. “Students who go to an Ivy League college will see targeted ads for jobs that graduates from state schools are never even aware of” (Chapter 4). One person will see links that reflect his circumstances and web browsing choices, and another person will see links that reflect her own, which might explain why, as a collective humanity, people can’t seem to agree on some of the most basic facts about the world. When people get two different sets of facts from two different sources, it makes it difficult to work things out.

Played out, this means that if someone searches for vaccines and autism, finds a page stating that there is a causative relationship, and clicks on it, Google remembers that preference and tailors future search results to include more like it. One bad source links to another bad source, which links to another, all in perfect harmony of wrongness, and soon the unsuspecting user finds his perception being skewed and reinforced, skewed and reinforced. It doesn’t take long until he has a whole library of bad sources, all telling him the same thing, which makes them look credible, authoritative, and consistent with others in the field. It also hooks him into a peer group of other would-be researchers who have been subjected to the same processes and can further reinforce him in his errors. Once misinformation is ingrained to that extent, debiasing it becomes a daunting task.
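The reinforcement loop described above can be modeled with a toy personalization function. Everything here (the source names, the relevance values, the boost weight) is invented; the point is only that a modest per-click boost is enough to flip a ranking:

```python
# A toy model of personalized ranking: each click on a viewpoint boosts
# sources sharing that viewpoint in future rankings. Purely illustrative.
def rank(sources, clicks):
    # Score = base relevance + a boost for viewpoints the user clicked before.
    def score(s):
        return s["relevance"] + 2.0 * clicks.get(s["viewpoint"], 0)
    return sorted(sources, key=score, reverse=True)

sources = [
    {"name": "consensus-science.org", "viewpoint": "consensus", "relevance": 1.0},
    {"name": "vaccine-skeptic.net", "viewpoint": "skeptic", "relevance": 0.8},
]

clicks = {}
first = rank(sources, clicks)[0]   # the higher-relevance source ranks first
clicks["skeptic"] = 3              # ...but after three clicks on skeptic pages
after = rank(sources, clicks)[0]   # the skeptic source jumps to the top
print(first["name"], "->", after["name"])
```

Each further click widens the gap, which is the "skewed and reinforced" cycle in miniature: the user's own behavior, not the quality of the information, comes to dominate the ordering.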

Facebook and other social media work in a slightly different way. Google takes note of what people click on and what they have clicked on in the past; Facebook builds user profiles by noting what images and news stories users have shared and what pictures they have liked. Users are shown media and friend updates based on those criteria, and this is why people may notice that some friends pop up all the time and others seem to disappear. That is the algorithm at work, deciding which people users want to hear from. This also creates a reinforcing peer group.

Given that what people see on the internet may not be entirely organic or serendipitous, it is reasonable to fear that they may be entering a vast echo chamber that reflects only the information they want to see.

Keeping Track of it All

Confirming these fears, an article from the Chronicle of Higher Education highlights a relatively new phenomenon: too much low-quality research is now being written and published. In “We Must Stop the Avalanche of Low-Quality Research,” authors Bauerlein, Gad-el-Hak, Grody, McKelvey, and Trimble (2010) express their concern that greater access to publishing obscures truly brilliant work. Because so much work, much of it “redundant, dim, or otherwise inconsequential,” is published now, it is easy to miss groundbreaking information. The authors state that the amount of research makes it impossible to ensure that it is all accurate:

The surest guarantee of integrity, peer review, falls under a debilitating crush of findings, for peer review can handle only so much material without breaking down. More isn’t better. At some point, quality gives way to quantity.

Indeed, their fears seem to have been confirmed when Science magazine wrote a completely bogus article about lichen extracts that combat cancer, came up with some fake scientist names, gave them some made-up credentials, and then submitted the lot of it to 300 open access journals. Though many of the journals boasted peer review boards with prestigious-sounding names and titles, the article was accepted by over half of them. Some of the journals did ask for edits, but the authors claim the paper was so bad that mere editing would not have fixed the fatal flaws of the research, methods, data, and conclusions.

It gets worse. According to the authors:

The paper was accepted by journals hosted by industry titans Sage and
Elsevier. The paper was accepted by journals published by prestigious
academic institutions such as Kobe University in Japan. It was accepted
by scholarly society journals. It was even accepted by journals for which
the paper’s topic was utterly inappropriate, such as the Journal of
Experimental and Clinical Assisted Reproduction.

In other words, people have to be suspicious not only of shady-looking web sources; they also have to investigate the credibility of open access journals with academic- and professional-sounding names, and beyond that, the authors here have found that researchers even have to be somewhat suspicious of open access journals associated with actual, real-life prestigious publishers.

Policing Open Access

The Directory of Open Access Journals is a generally respectable source for open access journals. In light of the scandal, it has recently taken steps to ensure greater quality, such as making its application form more stringent.

According to the Scholarly Kitchen blog, which covers such things:

In order to establish each journal’s OA bona fides and to keep the scam artists out, the DOAJ’s new application form asks over 50 questions, including the following:

  • In what country is the publishing company legally registered?
  • How many research and review articles did the journal publish in the last calendar year?
  • What is the average number of weeks between submission and publication?
  • Which article identifiers does the journal use (DOI, Handles, ARK, etc.)?
  • Does the journal impose article processing and/or article submission charges?
  • Does the journal have a deposit policy registered with Sherpa/Romeo, OAKlist, Dulcinea, or other similar registry?

Another open internet source for quality information is The Digital Public Library of America (DPLA). The portal delivers millions of materials found in American archives, libraries, museums, and cultural heritage institutions to students, teachers, scholars, and the public. Far more than a search engine, the portal provides innovative ways to search and scan through its united collection of distributed resources. Special features include a dynamic map, a timeline that allows users to visually browse by year or decade, and an app library that provides access to applications and tools created by external developers using DPLA’s open data.

So while the internet and the explosion of information have been good for freedom, they might serve as a detriment to scientific inquiry. People are, as some would say, drowning in information. Unfortunately, not all information has value. Considering the limitations of search engines, internet content, and the human predilection toward reinforcing biases, the open internet becomes a chancy proposition when doing academic research.

Search Engine Tricks

The internet suffers no lack of websites sharing tips on Google searches. One popular list offers 20 search tips, ranging from the obvious (use only important words), to the well-known (use Google as a calculator), to the sometimes useful (search for a range of numbers by putting two dots between them, e.g., “1983..1989”). Another offers 35 search tips, such as finding the time of sunrise or sunset for a particular location (Example: sunrise Chicago), determining the origins of a word (Example: etymology hot cakes), converting currencies or different units of measurement (Example: 26.2 miles to kilometers), and searching for websites about a particular topic on a particular domain (Example: “global warming” site:gov).

Students are encouraged to learn as much as they can about how to search Google efficiently. However, online privacy advocates are concerned about the fact that Google stores users’ search histories and usage details. Some people recommend that internet users avoid Google in favor of other search engines that have many of the same capabilities but do not track users.

A Curated List of Information Sources


Business and Economics

globalEDGE International economic data on countries, U.S. states, trade blocs and industries.

GuideStar Information on non-profit companies.

S.E.C. EDGAR Database of foreign and domestic company registration statements and financial reports.

U.S. Bureau of Labor Statistics Federal database of labor market activity, working conditions, and changes in economic indicators.

U.S. Economic Census Federal statistics on U.S. businesses, industries, and local and national economies.

Government

The official search engine for United States government websites.

CIA World Factbook Information on 267 countries.

Searchable database of U.S. federal legislation.

CyberCemetery The CyberCemetery is an archive of government web sites that have ceased operation (usually web sites of defunct government agencies and commissions that have issued a final report). This collection features a variety of topics indicative of the broad nature of government information.

FBI: Uniform Crime Reports Compilations of national crime statistics.

FedStats Access the entire body of statistics produced and collected by all agencies of the U.S. government.

National Archives Search the NARA archival databases for government archival information including military records, government spending, and international relations.

U.S. Census Bureau All of the U.S. Census data and analytical reports on population trends.


History

Avalon Project Law and diplomacy documents from 4000 BC to the current day.

Chronicling America Collection Digital newspaper archives from each state spanning 1836-1922.

David Rumsey Map Collection Over 30,000 searchable historical maps.

Digital Public Library of America Over 11 million items from U.S. libraries, archives and museums.

Europeana Digital collection of books, art, video and audio from libraries and museums across Europe.


Arts and Humanities

The Academy of American Poets Collection of poetry from classic and contemporary poets, interviews and biographies.

Digital Commons Network – Arts and Humanities Collection Free, full-text peer-reviewed articles from universities and colleges worldwide.

Encyclopedia of Science Fiction A comprehensive, scholarly, and critical guide to science fiction in all its forms.

Encyclopedia of Fantasy Sister volume to the Encyclopedia of Science Fiction providing scholarly and critical articles on fantasy in all its forms.

Project Gutenberg Over 50,000 free eBooks.


Science and Health

Centers for Disease Control Data on diseases and conditions around the globe.

Medline Medicine and health data as well as articles on human health.

NASA Current and historical data on U.S. and international space exploration.

National Agricultural Library Provides technical information on agricultural research and related subjects.

National Weather Service Weather forecasts, weather information and historical data.

Science.gov Federal directory of science data from various U.S. government entities.

USDA Plant Database Standardized information about plants in the United States.

Multipurpose Collections

Digital Commons Network Free, full-text peer-reviewed articles from universities and colleges worldwide.

Google Books Searches books and periodicals. Some resources are available in full, others only show excerpts.

Google Cultural Institute View high definition artwork and artifacts from international galleries and museums.

Google Scholar Searches the free Internet and subscription databases for scholarly/peer reviewed articles.

The Internet Archive Non-profit archive of books, video, software, music, interviews and other digitized media covering a vast array of historical and current information. Includes such collections as the Console Living Room, LibriVox, OpenLibrary, the TV News Archive, and USGS maps from all 50 states.

Library of Congress Digital Collections Archived books, manuscripts, audio, video, maps, photos and more. Resources for checking pretty much any fact you can think of.

WorldCat Simultaneously search over 10,000 libraries around the world.

Free Books

Baen Free Library Mostly sci-fi and fantasy titles from the Baen catalog.

Books Should Be Free Public domain audiobooks and ebooks.

Bookyards External web links, news and blog links, videos, and access to hundreds of online libraries.

CHM E-books Computer and programming books.

Free Science A collection of science e-books.

Great Books and Classics A collection of classic literature, much of it written in Latin.

International Children’s Digital Library Full e-books for children up to 13 years old.

LibriVox About 17,000 public domain audiobooks.

Metropolitan Museum Publications Nearly 400 books about fine art.

MobiPocket Children’s books and English, American, and Germanic literature (associated with Amazon).

Next Reads Bookseer Get book suggestions based on what you’ve already read.

Penn Digital Library The University of Pennsylvania’s collection of digitized books and texts.

Perseus Digital Library Huge collection of classical (Greek, Roman, etc.) texts.

Poetry Foundation Vast collection of poetry (not all public domain!) on the web.

Project Gutenberg Vast collection of ebooks, operated on voluntary donations.

Rare Book Room Photographs and scans of great and rare books from libraries around the world.

Sacred Texts Religion and spirituality.

Tech Books Computer and technology books.

With special thanks to librarian Drew Collier for assistance in compiling this list.

Questions for Critical Thinking

What is metadata searching and how does it make it easier to find what you are looking for?

How does natural language searching differ from searching in a database?

Define open source journals. What are their advantages and disadvantages?

Is everything on the internet?  Why or why not?

What is the digital divide?  Are some people more empowered by the internet than others?  Why?

What is SEO?

Define the Filter Bubble and explain how it could affect your ability to find useful information and debate issues.

Research Skills

A quote is when you take someone else’s exact words and put them in your paper.

A paraphrase is when you take someone else’s ideas, findings, or observations and put them in your paper in your own words.

A summary is when you briefly restate the main points or main ideas of another source.

When to Quote, Paraphrase, or Summarize

Summaries and paraphrases should be used when you want to touch on a source’s main points.

As a general rule, exact quotes should be reserved for very precise information or for striking turns of phrase.

How to Use Each in a Paper

Whether you are paraphrasing, summarizing, or quoting an article, you need to lead into cited material, use parenthetical notation in the text, explain the material’s relation to your thesis, and include an entry in your works cited page.

If you are quoting an article, you will need to do all of those things as well as enclose the quoted words in quotation marks.

To lead into cited material is to prepare your reader for the shift from your ideas or words to someone else’s. A typical lead may be as simple as saying:

According to climate scientist Michael E. Mann, director of Penn State’s Earth System Science Center, the Exxon-Mobil papers prove that “the villainy that we long suspected was taking place within ExxonMobil really was. It wasn’t just a conspiracy theory. It was a legitimate conspiracy” (Song 2015).

By naming the source (Michael E. Mann) and establishing why he is an authoritative source (he is the director of Penn State’s Earth System Science Center), you not only alert your reader that what comes next is someone else’s words, you also establish why those words should be heeded.

Leading into a paraphrase or summary:

Michael E. Mann, the director of Penn State’s Earth System Science Center, asserts that the Exxon-Mobil papers confirmed long-held suspicions: the company had been aware of the oil industry’s impact on global warming but had refused to acknowledge it (Song 2015).

(You can read about paraphrasing and summarizing in any MLA Handbook or on sites such as Purdue University’s Online Writing Lab.)

To summarize an article, (1) take note of the main ideas, (2) consider the purpose of your summary, and (3) combine the main ideas in a way that is easy to read.

As an example, please read the article “Scientists set to prepare strongest warning that warming man-made” and note how the three steps above are applied.

1) Take note of the main ideas contained in the original article.

to prepare the strongest warning yet that climate change is man-made and will cause more heatwaves, droughts and floods this century unless governments take action.

Officials from up to 195 governments and scientists will meet in Stockholm from September 23-26 to edit a 31-page draft that also tries to explain why the pace of warming has slowed this century despite rising human emissions of greenhouse gases.

a main guide for governments, which have agreed to work out a United Nations deal by the end of 2015 to avert the worst impacts.

at least a 95 percent probability – to be the main cause of warming since the 1950s.

“There is high confidence that this has warmed the ocean, melted snow and ice, raised global mean sea level, and changed some climate extremes,” the draft says of man-made warming.

Most impacts are projected to get worse unless governments sharply cut greenhouse gas emissions

In itself, a shift from 90 to 95 percent “would not be a huge shot of adrenalin” for spurring government and public awareness… extreme weather events, such as a 2010 drought in Russia that pushed up world grain prices, or last year’s Superstorm Sandy in the United States, meant that “there is more of a visceral feel for climate change among the public.”

Trying to boost weak global economic growth, governments have focused relatively little on climate change since failing to agree a U.N. deal at a summit in Copenhagen in 2009.

the IPCC will face extra scrutiny after the 2007 report exaggerated the rate of melt of the Himalayan glaciers. A review of the IPCC said that the main conclusions were unaffected by the error.

A combination of natural variations and other factors such as sun-dimming volcanic eruptions have caused the hiatus, it says, predicting a resumption of warming in coming years. The report also finds that the atmosphere may be slightly less sensitive to a build-up of carbon dioxide than expected.

2) Consider the purpose of your summary to decide which aspects of the article are most important. This does not mean ignoring the parts that disagree with you. It simply means that you ask questions such as, “Is it more important to point out that they are preparing for a meeting, or is it more important that they are preparing to warn us about global warming, OR are the things they are going to warn us about the actual important things?” Depending on the aim of your paper, you might want to highlight different aspects. If we focus on the things they are set to warn us about, then we are left with the following ideas:

The fifth report of the IPCC will offer the strongest warning yet that climate change is man-made and will cause more heatwaves, droughts and floods this century unless governments take action.

at least a 95 percent probability – to be the main cause of warming since the 1950s.

“There is high confidence that this has warmed the ocean, melted snow and ice, raised global mean sea level, and changed some climate extremes,”

Most impacts are projected to get worse unless governments sharply cut greenhouse gas emissions

extreme weather events, such as a 2010 drought in Russia that pushed up world grain prices, or last year’s Superstorm Sandy in the United States, meant that “there is more of a visceral feel for climate change among the public.”

Trying to boost weak global economic growth, governments have focused relatively little on climate change since failing to agree a U.N. deal at a summit in Copenhagen in 2009.

the IPCC will face extra scrutiny after the 2007 report exaggerated the rate of melt of the Himalayan glaciers. A review of the IPCC said that the main conclusions were unaffected by the error.

A combination of natural variations and other factors such as sun-dimming volcanic eruptions have caused the hiatus, it says, predicting a resumption of warming in coming years. The report also finds that the atmosphere may be slightly less sensitive to a build-up of carbon dioxide than expected.

3) Find a way to combine these ideas in a way that is easy to read (i.e., not just a collection of randomly presented factoids) and remains true to the ideas presented in the article.

The fifth report of the IPCC will assert at least a 95% probability that human activities are the main cause of global warming since the 1950s and that, left unchecked, the warming will cause more extreme weather events over this century. The report will caution that the impacts of climate change will worsen if governments do not act to drastically curtail greenhouse gas emissions. The fourth report of the IPCC asserted a 90% probability that human activities were responsible for driving climate change, and though scientists do not expect the increase in probability to spur greater public awareness, they suggest that recent extreme weather events have made climate change more visible to the public. They likewise suggest that global economic considerations have shifted governments’ focus away from global warming since the failed UN summit in Copenhagen in 2009. The IPCC will explain the 15-year hiatus in global warming as resulting from “a combination of natural variations” and predicts a resumption of warming in coming years.

Typically, academic work requires either MLA or APA citation style. Generally, MLA is expected in humanities and English, while APA is expected in social sciences work. Your instructor should clearly inform the class which citation style she prefers. If there is any doubt, ask. Many websites already cover citations, so those efforts need not be repeated here. Below is a list of links pertaining to citations.

MLA The Purdue OWL site offers the MLA style guide online for free.

APA The Purdue OWL site offers the APA style guide online for free.

Trinity College’s Cite Source offers additional information on citing Tweets, blog posts, lectures, and more.

CitationMachine is a great time saver, but I have sometimes found mistakes in its automatically generated citations. Always verify the results with the style guide!