Does the Internet Make You Smarter? Dumber?
Amid the silly videos and spam are the roots of a new reading and writing culture, says Clay Shirky.

By CLAY SHIRKY

Digital media have made creating and disseminating text, sound, and images cheap, easy and global. The bulk of publicly available media is now created by people who understand little of the professional standards and practices for media.

Instead, these amateurs produce endless streams of mediocrity, eroding cultural norms about quality and acceptability, and leading to increasingly alarmed predictions of incipient chaos and intellectual collapse.

1.8 billion
Estimated number of Internet users world-wide

But of course, that's what always happens. Every increase in freedom to create or consume media, from paperback books to YouTube, alarms people accustomed to the restrictions of the old system, convincing them that the new media will make young people stupid. This fear dates back to at least the invention of movable type.

As Gutenberg's press spread through Europe, the Bible was translated into local languages, enabling direct encounters with the text; this was accompanied by a flood of contemporary literature, most of it mediocre. Vulgar versions of the Bible and distracting secular writings fueled religious unrest and civic confusion, leading to claims that the printing press, if not controlled, would lead to chaos and the dismemberment of European intellectual life.

These claims were, of course, correct. Print fueled the Protestant Reformation, which did indeed destroy the Church's pan-European hold on intellectual life. What the 16th-century foes of print didn't imagine—couldn't imagine—was what followed: We built new norms around newly abundant and contemporary literature. Novels, newspapers, scientific journals, the separation of fiction and non-fiction, all of these innovations were created during the collapse of the scribal system, and all had the effect of increasing, rather than decreasing, the intellectual range and output of society.

To take a famous example, the essential insight of the scientific revolution was peer review, the idea that science was a collaborative effort that included the feedback and participation of others. Peer review was a cultural institution that took the printing press for granted as a means of distributing research quickly and widely, but added the kind of cultural constraints that made it valuable.

We are living through a similar explosion of publishing capability today, where digital media link over a billion people into the same network. This linking together in turn lets us tap our cognitive surplus, the trillion hours a year of free time the educated population of the planet has to spend doing things they care about. In the 20th century, the bulk of that time was spent watching television, but our cognitive surplus is so enormous that diverting even a tiny fraction of time from consumption to participation can create enormous positive effects.

Wikipedia took the idea of peer review and applied it to volunteers on a global scale, becoming the most important English reference work in less than 10 years. Yet the cumulative time devoted to creating Wikipedia, something like 100 million hours of human thought, is expended by Americans every weekend, just watching ads. It only takes a fractional shift in the direction of participation to create remarkable new educational resources.

Similarly, open source software, created without managerial control of the workers or ownership of the product, has been critical to the spread of the Web. Searches for everything from supernovae to prime numbers now happen as giant, distributed efforts. Ushahidi, the Kenyan crisis mapping tool invented in 2008, now aggregates citizen reports about crises the world over. PatientsLikeMe, a website designed to accelerate medical research by getting patients to publicly share their health information, has assembled a larger group of sufferers of Lou Gehrig's disease than any pharmaceutical agency in history, by appealing to the shared sense of seeking medical progress.

Of course, not everything people care about is a high-minded project. Whenever media become more abundant, average quality falls quickly, while new institutional models for quality arise slowly. Today we have The World's Funniest Home Videos running 24/7 on YouTube, while the potentially world-changing uses of cognitive surplus are still early and special cases.

That always happens too. In the history of print, we got erotic novels 100 years before we got scientific journals, and complaints about distraction have been rampant; no less a beneficiary of the printing press than Martin Luther complained, "The multitude of books is a great evil. There is no measure of limit to this fever for writing." Edgar Allan Poe, writing during another surge in publishing, concluded, "The enormous multiplication of books in every branch of knowledge is one of the greatest evils of this age; since it presents one of the most serious obstacles to the acquisition of correct information."

The response to distraction, then as now, was social structure. Reading is an unnatural act; we are no more evolved to read books than we are to use computers. Literate societies become literate by investing extraordinary resources, every year, training children to read. Now it's our turn to figure out what response we need to shape our use of digital tools.

The case for digitally-driven stupidity assumes we'll fail to integrate digital freedoms into society as well as we integrated literacy. This assumption in turn rests on three beliefs: that the recent past was a glorious and irreplaceable high-water mark of intellectual attainment; that the present is only characterized by the silly stuff and not by the noble experiments; and that this generation of young people will fail to invent cultural norms that do for the Internet's abundance what the intellectuals of the 17th century did for print culture. There are likewise three reasons to think that the Internet will fuel the intellectual achievements of 21st-century society.

First, the rosy past of the pessimists was not, on closer examination, so rosy. The decade the pessimists want to return us to is the 1980s, the last period before society had any significant digital freedoms. Despite frequent genuflection to European novels, we actually spent a lot more time watching "Diff'rent Strokes" than reading Proust, prior to the Internet's spread. The Net, in fact, restores reading and writing as central activities in our culture.

The present is, as noted, characterized by lots of throwaway cultural artifacts, but the nice thing about throwaway material is that it gets thrown away. The issue isn't whether there's lots of dumb stuff online—there is, just as there is lots of dumb stuff in bookstores. The issue is whether there are any ideas so good today that they will survive into the future. Several early uses of our cognitive surplus, like open source software, look like they will pass that test.

The past was not as golden, nor is the present as tawdry, as the pessimists suggest, but the only thing really worth arguing about is the future. It is our misfortune, as a historical generation, to live through the largest expansion in expressive capability in human history, a misfortune because abundance breaks more things than scarcity. We are now witnessing the rapid stress of older institutions accompanied by the slow and fitful development of cultural alternatives. Just as required education was a response to print, using the Internet well will require new cultural institutions as well, not just new technologies.

It is tempting to want PatientsLikeMe without the dumb videos, just as we might want scientific journals without the erotic novels, but that's not how media works. Increased freedom to create means increased freedom to create throwaway material, as well as freedom to indulge in the experimentation that eventually makes the good new stuff possible. There is no easy way to get through a media revolution of this magnitude; the task before us now is to experiment with new ways of using a medium that is social, ubiquitous and cheap, a medium that changes the landscape by distributing freedom of the press and freedom of assembly as widely as freedom of speech.
—Clay Shirky's latest book is "Cognitive Surplus: Creativity and Generosity in a Connected Age."

Does the Internet Make You Dumber?
The cognitive effects are measurable: We're turning into shallow thinkers, says Nicholas Carr.

The Roman philosopher Seneca may have put it best 2,000 years ago: "To be everywhere is to be nowhere." Today, the Internet grants us easy access to unprecedented amounts of information. But a growing body of scientific evidence suggests that the Net, with its constant distractions and interruptions, is also turning us into scattered and superficial thinkers.

The picture emerging from the research is deeply troubling, at least to anyone who values the depth, rather than just the velocity, of human thought. People who read text studded with links, the studies show, comprehend less than those who read traditional linear text. People who watch busy multimedia presentations remember less than those who take in information in a more sedate and focused manner. People who are continually distracted by emails, alerts and other messages understand less than those who are able to concentrate. And people who juggle many tasks are less creative and less productive than those who do one thing at a time.

The common thread in these disabilities is the division of attention. The richness of our thoughts, our memories and even our personalities hinges on our ability to focus the mind and sustain concentration. Only when we pay deep attention to a new piece of information are we able to associate it "meaningfully and systematically with knowledge already well established in memory," writes the Nobel Prize-winning neuroscientist Eric Kandel. Such associations are essential to mastering complex concepts.

When we're constantly distracted and interrupted, as we tend to be online, our brains are unable to forge the strong and expansive neural connections that give depth and distinctiveness to our thinking. We become mere signal-processing units, quickly shepherding disjointed bits of information into and then out of short-term memory.

In an article published in Science last year, Patricia Greenfield, a leading developmental psychologist, reviewed dozens of studies on how different media technologies influence our cognitive abilities. Some of the studies indicated that certain computer tasks, like playing video games, can enhance "visual literacy skills," increasing the speed at which people can shift their focus among icons and other images on screens. Other studies, however, found that such rapid shifts in focus, even if performed adeptly, result in less rigorous and "more automatic" thinking.

56 Seconds
Average time an American spends looking at a Web page.
Source: Nielsen

In one experiment conducted at Cornell University, for example, half a class of students was allowed to use Internet-connected laptops during a lecture, while the other half had to keep their computers shut. Those who browsed the Web performed much worse on a subsequent test of how well they retained the lecture's content. While it's hardly surprising that Web surfing would distract students, the results should serve as a note of caution to schools that are wiring their classrooms in hopes of improving learning.

Ms. Greenfield concluded that "every medium develops some cognitive skills at the expense of others." Our growing use of screen-based media, she said, has strengthened visual-spatial intelligence, which can improve the ability to do jobs that involve keeping track of lots of simultaneous signals, like air traffic control. But that has been accompanied by "new weaknesses in higher-order cognitive processes," including "abstract vocabulary, mindfulness, reflection, inductive problem solving, critical thinking, and imagination." We're becoming, in a word, shallower.

In another experiment, recently conducted at Stanford University's Communication Between Humans and Interactive Media Lab, a team of researchers gave various cognitive tests to 49 people who do a lot of media multitasking and 52 people who multitask much less frequently. The heavy multitaskers performed poorly on all the tests. They were more easily distracted, had less control over their attention, and were much less able to distinguish important information from trivia.

The researchers were surprised by the results. They had expected that the intensive multitaskers would have gained some unique mental advantages from all their on-screen juggling. But that wasn't the case. In fact, the heavy multitaskers weren't even good at multitasking. They were considerably less adept at switching between tasks than the more infrequent multitaskers. "Everything distracts them," observed Clifford Nass, the professor who heads the Stanford lab.
Source: The Wall Street Journal