Since the beginning of our existence, humans have been working tirelessly to make our lives easier. What a beautiful irony. And we have come a long way as a species. From the discovery of fire, the development of primitive weapons, and the invention of the wheel, to carrying the power of fire in our pockets, wielding weapons that could wipe out the entire planet, and riding in automobiles that can drive themselves, technological progress has been a constant in the world of man. Since the introduction of mass communication into our culture, however, the evolution of technology has accelerated exponentially. With each new installment in the saga of technological advances, there has been an undeniable shift in human cognition and epistemology, as we push our evolutionary limits to keep up with our own creations. By examining some of the most profound cognitive shifts in human history, which occurred between the ages of the printed word, the television, and the Internet, we are given a glimpse into what the future holds for our species, and how the role of technology may be transitioning from societal aid to Frankenstein’s monster.
According to scientists and anthropologists, humans first appeared on Earth about 200,000 years ago. Documented history (i.e., the written word) began around 5,000 years ago. Therefore, for almost 98% of our time on this planet, we lived without the art of writing. In fact, according to a 2010 NPR article titled “From Grunting to Gabbing: Why Humans Can Talk,” the earliest estimates place the emergence of spoken language between 50,000 and 100,000 years ago. So it took perhaps 150,000 years of evolution to develop the physical and cognitive capacity to speak to each other, and almost another 50,000 years to put language into writing. As Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, noted, “for most of history, the normal path of human thought was anything but linear” (64). Carr made an effort to point out that “our fast-paced, reflexive shifts in attention,” the same ones that we use while surfing the Internet today, “were once crucial to our survival” (64). The original state of the human brain was prone to distraction, governed by a primitive bottom-up mechanism that operated on raw sensory input, involuntarily shifting attention to any visual or auditory signals of potential significance in the environment.
With the creation of writing and, in effect, reading, came one of the most extreme mental shifts in history. “To read a book was to practice an unnatural process of thought, one that demanded sustained, unbroken attention to a single, static object” (Carr, 64). Humans had to condition their brains to block out all the external stimuli and sensory cues that had once dictated their cognition. According to Neil Postman, author of Amusing Ourselves to Death, “it is clear that phonetic writing created a new conception of knowledge, as well as a new sense of intelligence, of audience and of posterity” (12). Postman went on to discuss the ancient Greek philosopher Plato and his theories on this “perceptual revolution: a shift from the ear to the eye as the organ of language” (12). Many others during Plato’s time saw writing as the end of philosophy, which had always been an entirely oral practice. Plato, however, according to Postman, recognized that “philosophy cannot exist without criticism, and writing makes it possible and convenient to subject thought to a continuous and concentrated scrutiny” (12). In other words, the linear thought process became much more significant, not just to philosophers, but to all humans who participated in the new art of writing. Postman asserted, “Writing…gives birth to the grammarian, the logician, the rhetorician, the historian, the scientist—all those who must hold language before them so that they can see what it means, where it errs, and where it is leading” (12).
It is difficult to discern exactly how the brains of humans adjusted from an oral to a written culture because, as Carr explained, “We can see the products of thought—works of art, scientific discoveries, symbols preserved on documents—but not the thought itself. There are plenty of fossilized bodies, but there are no fossilized minds” (48). However, modern experiments have helped unravel some of the mystery behind the shift in epistemology brought on by the creation of text; “the brains of the literate differ from the brains of the illiterate in many ways—not only in how they understand language but in how they process visual signals, how they reason, and how they form memories” (Carr, 51). To expand further, through the newly developed process of reading, “people made their own associations, drew their own inferences and analogies, fostered their own ideas. They thought deeply as they read deeply” (65). This was when our mode of thinking shifted from a sensory-activated bottom-up processing method to a top-down method, which involves perceiving information through context, expectations, and previous knowledge and biases. The literate brain continued to adapt and evolve over several millennia, and arguably reached its peak in the 18th and 19th centuries. During this ‘Age of Reason,’ “print put forward a definition of intelligence that gave priority to the objective, rational use of the mind and at the same time encouraged forms of public discourse with serious, logically ordered content” (Postman, 51). Postman discussed some of the great authors and orators of the time, such as Abraham Lincoln and Stephen Douglas, who held public debates that lasted for nearly seven hours. And these debates occurred before they were presidential (or even congressional) candidates (44). These brilliant men did not participate in such lengthy and profound debates for fame or glory; they simply enjoyed the art of writing and speaking, and engaging in intellectual discussion with other great minds of the time. For example, after receiving a long round of applause at a debate in Ottawa, Stephen Douglas addressed the crowd by saying, “My friends, silence will be more acceptable to me in the discussion of these questions than applause. I desire to address myself to your judgment, your understanding, and your consciences, and not to your passions or your enthusiasms” (Postman, 45). It was a time of eloquence, expansive vocabularies, and a high level of comprehension among audiences. Postman included an excerpt from one of Douglas’ debates with Lincoln, and afterward commented, “The language is pure print. That the occasion required it to be spoken aloud cannot obscure that fact. And that the audience was able to process it through the ear is remarkable only to people whose culture no longer resonates powerfully with the printed word” (49). Literacy and linear thinking were not reserved only for authors and the political elite; the average citizen at the time was fully capable of reading and reflecting on intricate text (text that may seem unfathomable by today’s standards for literature), because it was their only way of keeping up with the happenings of society. As Postman explained, “the printed word had a monopoly on both attention and intellect, there being no other means, besides the oral tradition, to have access to public knowledge” (60).
Before the invention and explosive popularity of the photograph and television in the 20th century, public figures were known primarily for their written works. Postman went so far as to assert that “most of the first fifteen presidents of the United States would not have been recognized had they passed the average citizen on the street” (60). The citizens almost certainly would have been familiar with the names, writings, policies, and even ideologies of their presidents, but without a medium to convey the visuals of physical appearance, they may not have recognized them in person. It is hard to imagine this in today’s world, where Barack Obama may be the most recognized figure on the planet, known for his appearance rather than his literary prowess. And unlike the people of the 18th and 19th centuries, very few citizens in the United States today could name any of Obama’s published writings, let alone discuss the contents and ideas of those writings. Postman commented on this phenomenon using examples from his own era: “Think of Richard Nixon or Jimmy Carter or Billy Graham, or even Albert Einstein, and what will come to your mind is an image, a picture of a face…Of words, almost nothing comes to mind. This is the difference between thinking in a word-centered culture and thinking in an image-based culture” (61). The print society was marked by the characteristics of Postman’s ‘Typographic Man,’ who was “detached, analytical, devoted to logic, abhorring contradiction” (57). However, this was also the time when “the idea of continuous progress [took] hold” (52). With advancements in communication and other areas of technology, the image-based culture began to prosper around the turn of the 20th century, marking yet another monumental shift in human cognition.
The transition into the 20th century, between what Postman called the former “Age of Exposition” and the forthcoming “Age of Show Business,” brought a new idea to the American people: “that transportation and communication could be disengaged from each other, that space was not an inevitable constraint on the movement of information” (64). The invention of the telegraph in the mid-19th century gave that idea life. Suddenly a nationwide conversation became conceivable, no longer hindered by the limitations of the traditional horse-delivered mail service. As with some of Plato’s contemporaries during the revolution of the written word, there were skeptics of this new conversion to instantaneous communication. One such skeptic was Henry David Thoreau, who wrote in his book, Walden, “We are in great haste to construct a magnetic telegraph from Maine to Texas; but Maine and Texas, it may be, have nothing important to communicate.” Thoreau expanded on that point, noting that the United States was eager to build a communications bridge across the Atlantic, but that in doing so it put the country’s definition of discourse at risk. Postman included Thoreau’s powerful example, one that still holds true to this day: “perchance the first news that will leak through into the broad flapping American ear will be that Princess Adelaide has the whooping cough” (65). This was the beginning of the conversion of information into a commodity, and as a result, the trivialization of discourse in the United States.
The popularity of the telegraph, paired with the appearance of the penny press in the 1830s, gave rise to the “news of the day.” While this seemed like a wonderful way for individuals to keep tabs on the people, places, and events from all over the world, in reality it created an illusion of knowledge among readers. Quantity superseded quality in the realm of news. According to Postman, “the abundant flow of information had very little or nothing to do with those to whom it was addressed; that is, with any social or intellectual context in which their lives were embedded” (67). The result was (and is) a superficial and fragmented sense of being informed. The new forms of media gave expression to this tidal wave of irrelevance and triviality. Postman went on to clarify, “I do not mean that things like fires, wars, murders and love affairs did not, ever and always, happen in places all over the world. I mean that lacking a technology to advertise them, people could not attend to them, could not include them in their daily business” (7). It was in this era that people began to know of many things, but not necessarily about many things. Even in the context of important global affairs, this form of inert information gathering gave people something to talk about, but did not lead to any meaningful action. That is the illusion of knowledge that continues to blanket our culture.
The invention of the photograph inspired yet another perceptual revolution in the minds of humans. By isolating images from context and syntax, “photography is preeminently a world of fact, not of dispute about facts or of conclusions to be drawn from them” (Postman, 73). Society’s shift in focus away from the written word and toward the image “undermined traditional definitions of information, of news, and, to a large extent, of reality itself… For countless Americans, seeing, not reading became the basis for believing” (74). Deep reading and contemplation became a specialized custom rather than a common practice. Furthermore, the photograph, and eventually the television and Internet, opened the doors for a culture driven solely by entertainment. News of the day became news of the hour, and eventually news of the minute. The addictive properties of television’s imagery drew in our society and have kept us enthralled ever since. It is hard to argue against Postman’s assertion that “American television is a beautiful spectacle, a visual delight, pouring forth thousands of images on any given day… [Television] offers viewers a variety of subject matter, requires minimal skills to comprehend it, and is largely aimed at emotional gratification” (86). Because contemplation and reflection are slow and deliberate internal processes, they had no place in the flashy, fast-paced arena of television.
Americans (and the rest of the world) spent more and more hours watching television as the years went on, and fewer hours on deep reading and linear thinking. Reading and writing became more archaic practices, reserved for the literary elite. News became increasingly fragmented, especially when the concepts of news and entertainment became ambiguous and interchangeable, a conversion that Postman described in his book: “Whereas television taught the magazines that news is nothing but entertainment, the magazines have taught television that nothing but entertainment is news” (112). What does our culture gain today from a news story about what Angelina Jolie wore to the Oscars? Why do ordinary citizens feel the need to know who’s dating whom in Hollywood? Nevertheless, these frivolous types of stories have dominated our public discourse for nearly 50 years, and continue to gain popularity. Perhaps these stories appeal to some deep-rooted human desire to be ‘in the loop.’ These entertainers drive our culture, so every aspect of their lives, no matter how mundane it may actually be, fascinates us. Carr believed that we’ve been turned into “lab rats” by these trivial pieces of information since they’ve been adopted and multiplied by the Internet; we are “constantly pressing levers to get tiny pellets of social and intellectual nourishment” (117).
While the human brain was undoubtedly altered through the use of the written word, the television, and other forms of mass communication, none of those media had an impact as quick and powerful as the Internet. With the written word, humans evolved alongside the medium for several thousand years. With the telegraph and photograph, the cognitive evolution occurred at a much faster rate, and the television pushed that mental evolution along even more quickly. The Internet, however, in the roughly two decades since its creation, has completely consumed our culture, and it seems to be evolving at a precipitous rate that even the impressive human brain cannot match. In a 2008 essay titled “Your Brain Is Evolving Right Now,” the authors noted, “this evolutionary brain process has rapidly emerged over a single generation and may represent one of the most unexpected yet pivotal advances in human history” (Small & Vorgan, 77). By focusing all our cognition on adjusting to the rapid-paced lifestyle that the Internet has ushered in, we have allowed our methods of deep reading and thinking, as well as our social norms, to begin to deteriorate.
Aldous Huxley, a 20th-century English writer, was wary of our culture’s future at the hands of technology. According to Postman, “Orwell feared that what we hate will ruin us, Huxley feared that what we love will ruin us” (xx). As it turns out, our society is playing right into Huxley’s fears. The digital age has allowed us to package our entire lives (and minds) into the palm of our hand. The power to talk on the phone, text or email each other, surf the Internet, play games, keep a schedule, navigate using GPS, and more (simultaneously!) is all crammed into one device that fits neatly in our pocket. Updated versions of smartphones, each with “new and improved” must-have features, are released every couple of months, and are received with overwhelming enthusiasm. But is this perpetual flood of information at our fingertips really benefitting us as a species? Some would argue that the Internet has turned us into impatient, shallow, scatterbrained creatures. We are constantly exposed to such an overwhelming amount of information that even our natural human process of forming memories has begun to diminish. Nicholas Carr pointed out, “The Net quickly came to be seen as a replacement for, rather than a supplement to, personal memory…by offloading data onto silicon, we free our own gray matter for more germanely ‘human’ tasks like brainstorming and daydreaming” (180). It seems like a step in the wrong direction when deliberation and reflection take a backseat to “daydreaming.” In an interesting metaphor, Carr described how our memories become overloaded while we surf the web: “Imagine filling a bathtub with a thimble; that’s the challenge involved in transferring information from working memory to long term memory…With the Net, we face many information faucets, all going full blast. Our little thimble overflows as we rush from one faucet to the next…what we do transfer is a jumble of drops from different faucets, not a continuous, coherent stream from one source” (124). Deep reading has essentially become a thing of the past; recent studies on reading patterns while online (which, for most people nowadays, is all the time) have shown “signs that new forms of ‘reading’ are emerging as users ‘power browse’ horizontally through titles, contents pages and abstracts going for quick wins” (Carr, 137). Jakob Nielsen, a notable web usability consultant, found in a 2006 eye-tracking study of Web users that his participants weren’t even reading in a methodical manner: “The vast majority skimmed the text quickly, their eyes skipping down the page in a pattern that resembled, roughly, the letter F” (Carr, 134). Nielsen went on to assert that “for every hundred additional words, the average viewer will spend just 4.4 more seconds perusing the page…even the most accomplished reader can read only about eighteen words in 4.4 seconds” (135). While our species is no longer living in the wild, hunting and gathering to survive, we have reverted to a similar mind state of alert distractedness, or, as Carr puts it, “a reversal of the early trajectory of civilization: we are evolving from being cultivators of personal knowledge to being hunters and gatherers in the electronic data forest” (138).
We’ve also begun to regress to a community-based mode of thinking because of social media. We are so connected with everyone around us, and always so concerned about what’s happening with others, that we’ve become disconnected from our own sense of knowledge, intuition, and self-identity. Internet use has become an everyday, unconscious routine for most people. Carr compared this to the Taylorist method of the Industrial Revolution era, in which Frederick Taylor wrote mechanical scripts to increase efficiency in factories, but at the cost of personal initiative, creativity, and whim. According to Carr, we follow similar ‘scripts’ when we go online. “These scripts can be ingenious and extraordinarily useful…but they also mechanize the messy processes of intellectual exploration and even social attachment” (218).
Furthermore, “as the brain evolves and shifts its focus toward new technological skills, it drifts away from fundamental social skills, such as reading facial expressions during conversation or grasping the emotional context of a subtle gesture” (Small & Vorgan, 78). Those fundamental social skills have been part of our evolutionary development for much longer than the printed word, and to see them deteriorating at such a fast rate because of our obsession with technology is cause for concern. “With the weakening of the brain’s neural circuitry controlling human contact, our social interactions may become awkward, and we tend to misinterpret, and even miss, subtle, nonverbal messages” (78). These awkward encounters have become very prevalent in our culture; many famous comedians and actors today use “awkward humor” to draw laughs. But the laughter may subside if that social awkwardness were to “affect an international summit meeting ten years from now, when a misread facial cue or a misunderstood gesture could make the difference between escalating military conflict or peace” (78). In a real-life example, just a few months ago, Seth Rogen and James Franco released a trailer for an upcoming movie, titled The Interview, about assassinating North Korean dictator Kim Jong Un. The North Korean government saw the trailer and was (understandably) quite offended by it. It declared the release of the movie an “act of war” and even went so far as to threaten the moviemakers and the United States with a “strong and merciless countermeasure” (Child). Despite recent rumors of North Korea wielding powerful nuclear weapons, Seth Rogen responded to the threats in a joking manner via Twitter. While it is unlikely that any military action will actually arise from this incident, it still leads to a sobering question: has our love of entertainment superseded our concern for national security? Have our social norms degraded so completely that we would counter a threat of war from a known hostile nation with arrogant jokes on social media?
In addition to the changes in our social norms and modes of thinking, our way of learning new information has also transformed over the last few decades. The epistemological shift in our culture brought on by the television and Internet has been remarkably profound. The new style of learning has brought entertainment to the forefront of education, which has led to the coining of the term “edutainment.” Postman argued that “Television’s principal contribution to educational philosophy is the idea that teaching and entertainment are inseparable” (146). Modern classrooms are now considered “boring” if they aren’t filled with computer screens, SMART boards, and silly motivational posters. Teachers are considered “old-fashioned” if they lecture without the assistance of videos, PowerPoint presentations, or audio recordings. It’s not uncommon for students to drop out of classes solely because they aren’t “fun” enough to keep them engaged. Some students complain when their professor is too opinionated, as if having an opinion on the material they teach (rather than standing idly by while presenting slides) were a terrible quality in a professor.
Even the methods of studying have fallen into the trap of television-style learning. Students will wait until the night before an exam, study for a few hours, cover the material just enough to regurgitate it once on the exam, and then forget the information forever. This leads back to the false perception of knowledge that’s affecting our society. The idea directly parallels the results of a study from the 1980s, which concluded that “21 percent of television viewers could not recall any news items within one hour of broadcast” (Postman, 152). That statement would probably remain fairly accurate if “television viewers” were replaced with “students” and “broadcast” with “the exam.” Most of the classroom material isn’t actually retained through this method of studying, which makes the entire educational process much less rewarding to both students and teachers. Such is the state of our educational system.
As we spend more and more time in front of screens—we have them in our pockets, our kitchens, our cars, our classrooms, our offices—we need to remind ourselves how therapeutic the serenity of nature can be for our minds, and attempt to ‘disconnect’ and cool down in the calmness of a natural environment. In The Shallows, Carr talked about the relaxing effect that nature has on the brain, its ability to silence that perpetual ‘buzzing’ that seems to have taken hold of our society. “A series of psychological studies over the past twenty years has revealed that after spending time in a quiet rural setting, close to nature, people exhibit greater attentiveness, stronger memory, and generally improved cognition. Their brains become both calmer and sharper” (Carr, 219). In Carr’s words, “when people aren’t being bombarded by external stimuli, their brains can, in effect, relax. They no longer have to tax their working memories by processing a stream of bottom-up distractions. The resulting state of contemplativeness strengthens their ability to control their mind” (219). The idea that we allow our Internet use to become so compulsive that it gains control over our psyche is alarming. In agreement, Carr wrote, “One of the greatest dangers we face as we automate the work of our minds, as we cede control over the flow of our thoughts and memories to a powerful electronic system…is a slow erosion of our humanness and our humanity” (220). The risk of this automation of our perception comes from a desire to “outsource our capacity for sense-making to the computer” (Jackson, 287).
To expand on this idea, consider one of the opening quotes from an insightful video from 2014, titled Humans Need Not Apply: “Some people have specialized to be programmers and engineers whose job is to build mechanical minds. Just as mechanical ‘muscles’ made human labor less in demand, so are mechanical minds making human brain labor in less demand” (CGP Grey). This idea ties in with the late philosopher Marshall McLuhan’s theory that humans are serving as “the sex organs of the machine world…our essential role is to produce ever more sophisticated tools…until technology has developed the capacity to reproduce itself on its own. At that point, we become dispensable” (Carr, 46).
So what does the future hold for our culture of technology enthusiasts? We are already beginning to feel the effects of this ‘rise of the machine’ in the job market, as thousands of jobs are being outsourced to robots in the name of economics and efficiency. Where dozens of people once served as cashiers in grocery stores, there is now one person who oversees a handful of self-checkout machines. Other simple jobs—but jobs nonetheless—have begun to be replaced by robots as well; automatic baristas can be found at coffee shops around the country, and self-operating floor-cleaning vacuums (the first step toward robotic maids) are now in many homes and offices. “General purpose” robots are currently in the early stages of development. These machines are capable of learning through observation, and have the potential to perform a myriad of tasks at a fraction of the time and cost of a human worker (CGP Grey). The creation of self-driving cars, which have already begun to appear on the roads, could potentially deal one of the biggest blows to the job market. According to CGP Grey, the transportation industry in 2014 is responsible for about 3 million jobs in the United States, and roughly 70 million jobs worldwide. But what would happen to all those jobs if the cars, trucks, buses, planes, and boats could drive themselves, without the fatigue and distractedness that naturally accompany human drivers and pilots? What’s more, we are creating machines that will be able to replace jobs in all areas of the job market, including white-collar work (like filing paperwork, research, and medicine) and even creative artistry (such as music making and graphic design). Throughout history, especially in the realm of capitalism, businesses have always chosen the most economically efficient options, even if it meant taking away jobs from skilled workers. Why pay for the risk of human error when the job can be done quicker, and at a fraction of the cost, by a machine? As the video explains, “technology gets better, cheaper and faster at a rate that biology can’t match.” Our role as the users of our technological tools has begun to invert, and we’re beginning to realize that our humanistic traits, the only things separating us from our beloved machines, have become a liability in the world of business and communications.
Back in 1985, Postman observed that “we believe nothing if not that history is moving us toward some preordained paradise and that technology is the force behind the movement” (158). But is it really a paradise? Our chronic use of the television and Internet has had a profound effect on the way we think and learn, how we interact with each other, and even the physical structure of our brains, in only about fifty years. We spend our days jumping from distraction to distraction, and these “frequent interruptions scatter our thoughts, weaken our memory, and make us tense and anxious” (Carr, 132). Regardless, we are dedicated to staying connected, to skimming from one breaking news story to the next, and to developing our technology to help us do it all more efficiently. Unfortunately, much like Frankenstein’s monster, our creations have started to slip from our control, and their evolution is passing us by. We can only watch, or perhaps power-browse through stories on social media, while machines continue to evolve at an unimaginable rate, leaving our biologically restricted species behind. Take a moment to reflect on the chilling conclusion to the previously mentioned video, Humans Need Not Apply: “Even in our modern technological wonderland, new kinds of work aren’t a significant portion of the economy. This is a big problem…Automation is inevitable. It’s a tool to produce abundance for little effort. We need to start thinking now about what to do when large sections of the population are unemployable, through no fault of their own; what to do in a future where, for most jobs, humans need not apply” (CGP Grey).
We spent 200,000 years advancing cognitively, from prehistoric hunters and gatherers to a world of well-read, contemplative intellectuals, only to regress to a more primitive mode of thinking as we offload our progress onto the mechanical minds of computers. As Carr explained, “the computer screen bulldozes our doubts with its bounties and conveniences. It is so much our servant that it would seem churlish to notice that it is also our master” (4). It seems that a cyborgian, post-human society is right around the corner, and many who have not thought deeply about the cognitive costs of living in such a world are welcoming it with open arms and a buzzing smartphone in hand.