From savannas to blue-phase LCD screens: Prospects and perils for child development in the Post-Modern Digital Information Age
David E. Meyer
Cognition and Cognitive Neuroscience Program, Department of Psychology, University of Michigan, Ann Arbor, MI 48109

The Modern Digital Information Age arguably dawned with the construction of moveable-type printing presses by Johannes Gutenberg and others in Western Europe around 1440 CE.* As a result, there was a rapid replacement of hand-written script books by an exponentially increasing number of widely available, relatively inexpensive, mechanically produced volumes. Since then, this progression has been punctuated periodically with further noteworthy harbingers and technical advances—some momentous in their own breadth and depth—that have brought us to our present time. For example, included prominently among them are: (i) the 1832 unveiling of Charles Babbage’s so-called “Difference Engine,” which anticipated his subsequent “Analytical Engine,” a mechanical computing machine with several of the same basic features as modern electronic digital computers; (ii) the 1838 public demonstration of a nascent long-distance electrical telegraph designed by Samuel Morse, Leonard Gale, and Alfred Vail; (iii) Morse and Vail’s creation of American Morse Code, whereby the portentous message “What hath God wrought” was first transmitted as a sequence of acoustic dots and dashes in May 1844; (iv) incorporation of the Western Union Company in 1851; (v) David Hilbert’s challenge for participants at the 1900 International Congress of Mathematicians to formulate a finite, complete, consistent set of axioms for arithmetic of the natural numbers; (vi) Kurt Gödel’s proof, published in 1931, that there can be no such set of axioms; (vii) publication of Alan Turing’s visionary 1936 article on computable numbers, announcing his ideas for the “Universal Turing Machine,” which was named subsequently in his honor; (viii) invention of the electronic point-contact transistor at the AT&T Bell Telephone Laboratories in 1947; (ix) publication of Claude Shannon’s 1948 journal article on mathematical information theory; and (x) introduction during 1954 of the IBM 704 mainframe, the first mass-produced computer with floating-point arithmetic hardware, which by 1964 came to pervade scientific computing and essentially realized Turing’s vision.
Indeed, by the early 1960s, the Modern Digital Information Age had reached seemingly full fruition. The advances enabling it were so awesome that, in toto, they led Marshall McLuhan—the iconic 20th century mass-media guru—to proclaim: “We are today as far into the electric age as the Elizabethans had advanced into the typographical and mechanical age. And we are experiencing the same confusions and indecisions which they had felt living simultaneously in two contrasted forms of society and experience” (1).†
Yet little did McLuhan suspect what next lay in store for his Information Age society. Soon after his proclamation was issued, equally—and in some cases even more—mind-boggling innovations emerged: monolithic integrated circuitry, the Intel Corporation, the Advanced Research Projects Agency Network (ARPANET), the Apple I personal computer, Microsoft, cyberspace, email, the World Wide Web (WWW), web sites, web browsers, search engines, Google, Wikipedia, blogs, smart phones, smart tablets, social media, Facebook, YouTube, Twitter, Instagram, and the list goes on... Thus, by now, some astute observers of these ever-advancing innovations have declared that, in essence, a new Post-Modern Digital Information Age has ensued. It is as if a critical mass of successive innovative developments has coalesced over the past half-century to trigger an immense new “thermonuclear” explosion of information and social interaction. As the popular science writer James Gleick has opined, “Cyberspace, of course, changes everything… The Internet represents not just a new fight over names but a [grand] leap in scale causing a phase transition… this time it is different. We are a half century further along now and can begin to see how vast the scale and how strong the effects of connectedness are” (1).
From the perspective of the present author—a mathematical psychologist, cognitive scientist, and psychological neuroscientist—it similarly seems as if “this time is different.”‡ The workplace and personal environments 50 y ago, when I first joined the technical staff of the Human Information-Processing Research Department at the AT&T Bell Telephone Laboratories, were much simpler than they are today, even though “the Labs” was about the most technologically sophisticated place on the planet back then (3).§ Our word processors were called “secretaries,” our personal computers were called “Monroe calculators” or, at best, “Hewlett Packard 35s.” Slide rules still lay on our desks. If we needed access to some technical literature for writing an article, we went to the Labs’ library, or called a librarian to photocopy and mail some required documents that arrived a day or two later in our offices. We never googled Wikipedia or browsed on Safari to find what we needed; the verb “to google” did not exist. Moreover, composition of this Introduction to the PNAS Special Feature on Digital Media and Developing Minds would have been far more difficult, even nigh impossible.
In particular, an absolutely new informational milieu has emerged over the past 50 y. Human beings who currently interact with each other in the WWW are like vast numbers of individual neurons in a gigantic “world brain.”¶ Unlike in a biological brain, however, their interactions typically occur through small mobile electronic digital devices (e.g., Apple Watches, smart phones, smart tablets, and so forth) located in the palms of their hands, pockets, purses, offices, and beds. These devices are, essentially, quasi-universal Turing machines equipped with multimodal sensory-motor interfaces. They enable each “neural cell” in the Web to have Turing computational power, virtually unlimited information storage, and almost instantaneous access to the full expanse of accumulated human knowledge stored in diverse physical formats. Many users of these devices have considerable skill at exploiting their potentially unlimited information technology resources.
Furthermore, each of the Web’s individual human users is, at most, about six degrees of personal separation from any other user (5). In many cases, the separation between users is much less, even if they belong to quite different professional, social, and cultural groups. Thus, in essence, the WWW is by far the most powerful information-processing and social network on Earth, combining the creative and computational powers of literally billions of human brains and quasi-universal Turing machines. By contrast, no individual neuron of any actual biological brain has Turing computational power, nor does any such neuron have direct access to all knowledge stored in the rest of a whole biological brain. The situation with the WWW and its constituent users is much different from, and potentially richer than, this.
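To make “degrees of separation” concrete, here is a minimal illustrative sketch (not part of the colloquium materials, and using a purely hypothetical toy network): breadth-first search over an undirected friendship graph returns the smallest number of acquaintance links between two users, which is the quantity that the small-world result cited in ref. 5 concerns.

```python
from collections import deque

def degrees_of_separation(graph, start, target):
    """Smallest number of acquaintance links between two users (breadth-first
    search), or None if no connecting chain exists."""
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        for friend in graph.get(person, ()):
            if friend == target:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None

# Hypothetical five-person network; real social graphs have billions of nodes.
toy_graph = {
    "Ana": ["Ben", "Caro"],
    "Ben": ["Ana", "Dev"],
    "Caro": ["Ana", "Dev"],
    "Dev": ["Ben", "Caro", "Eli"],
    "Eli": ["Dev"],
}

print(degrees_of_separation(toy_graph, "Ana", "Eli"))  # prints 3
```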
At the same time, despite having vast computational power and considerable internet-surfing skills at their fingertips, present-day users of the WWW face enormous difficulties on multiple fronts. Human beings (perhaps) evolved in compact social groups on savannas and in other natural habitats. They (maybe) were not adapted originally to sit alone in dimly lit rooms while staring at glowing LCD computer screens connected to the Web, communicating with diverse remote natural and artificial agents through touchpads, keyboards, and electro-mechanical mice.#
These days, though, many of us often do so for hours on end, frequently before dawn and after dusk. And doing so inherently confronts us with numerous problematic cognitive, emotional, motivational, social, and cultural challenges. Like all profoundly efficacious tools, the Web is fundamentally a double-edged sword; it can cut toward both benefit and harm. On the one hand, there are enormous prospects for exploiting its vast informational and social resources. On the other hand, there are treacherous pitfalls hidden in the Web’s vast complexity and chaos: fake news, twittering trolls, disorienting distraction, and overwhelming information overload.
In fact, the challenges posed by entering Cyberspace and participating actively in the WWW span a gamut from the mundane physiological to the profound epistemological. Many present-day digital-media devices are not 100% physically user-friendly. For example, their display screens have so-called “blue-phase mode” LCD hardware. They emit a light spectrum with a band of blue wavelengths that can inhibit the pineal gland’s release of melatonin in the brain, disrupt circadian rhythms, and interfere with maintenance of proper sleep cycles, especially when people are chronically exposed to bright computer screens during the 2 h immediately before their nightly bedtimes (8).
The cognitive challenges of present-day digital-media devices are at least as extreme as the physiological ones. Much of their operating-system software is designed to signal users explicitly whenever new input messages arrive through various channels of communication and call for interaction with diverse utility programs (e.g., Gmail, Facebook, Instagram, Twitter, and so forth, not to mention phone calls and skin tickling by Apple Watches). Such signaling can be highly distracting and disrupt ongoing task performance (9). Human attention is nowhere near perfect at ignoring irrelevant sources of external physical stimulation and selectively focusing on just the currently most relevant input (10). Attempts at so-called “media multitasking” are typically far less than entirely successful, leading to dramatically increased task-completion times, elevated response error rates, reduced learning, and poorer memory function (11, 12).
A need for new modes of attention has further deepened these deficiencies. In the “good old days,” cognitive psychologists recognized that human attention can be oriented toward either of two alternative spatial directions: allocentric (i.e., outward to the external—for example, visual and acoustic—environment) and egocentric (i.e., inward to the internal—for example, somatosensory and cogitative—environment). Now, in addition, there is a third mode that plays a key role: “cyber-centric” attention, oriented “elseward” to the realms of remote natural and artificial agents in Cyberspace. Dealing with the Web is not just about attending to screens; it is about total immersion in a reality where cell-phoning or texting while walking, bike riding, or driving leads inevitably to distraction, potential destruction, and sometimes even death (13).‖
The emotional toll from using digital devices to interact with other individuals through social media in Cyberspace can be considerable too. Malevolent “trolls” lurk everywhere, waiting to leave vitriolic comments about seemingly innocuous, but politically or culturally tinged, posts on Facebook and tweets on Twitter, provoking anger, frustration, anxiety, depression, and even suicide among prior posters and tweeters who are their targets. Mass spam attacks by opponents can also happen through Twitter. In addition, there are “sexting” (a term so new that my word-processor does not recognize it yet), cyber-bullying, and violent video gaming to which young users may be especially vulnerable.**
Nevertheless, even though interacting in the WWW sometimes has high emotional costs, it may also be addictive (15–18). Occasional moments of elation, triggered by cognitive, social, and economic rewards from digital-media stimulation, are prone to activate the dopamine-reward system of the human brain, sensitizing it to future stimulus cues reminiscent of ones that have provided prior pleasures (19). In turn, such associations foster systematic operant conditioning whereby digital-media users tend to respond—like pigeons pressing bars in Skinner boxes—more and more incessantly over time to a sporadic “bip, bip, bip” of signals that might portend the next “information fix.”
Most insidious of all, however, is the epistemological challenge posed by the Web. The information available there does not equate to knowledge, let alone true wisdom (1). On the contrary, a substantial portion of the Web’s content constitutes nothing more than alternative “facts,” misinformation, and nonsense; it is essentially “one great blooming, buzzing confusion,” a phrase offered by William James for describing how the surrounding environment may seem to a newborn infant (20). To exploit the Web assiduously, its digital-media users must exercise subtle discernment, combining intelligent search, selection, exclusion, and cataloguing of information to ascertain “the truth.” Achieving this ideal does not come easily, if ever (especially when prominent public figures like Rudy Giuliani, one of US President Donald Trump’s lawyers, frequently make widely publicized statements, such as “Truth isn’t truth”). The phrase “Post-Modern Digital Information Age” thus embodies an ironic double entendre: it might refer simply to the Era of Cyberspace that has emerged since Marshall McLuhan’s mid-1960s; or, it might refer to the deep concern of postmodern philosophers that most, if not all, claims to “truth” are relative, inherently subjective, and open to alternative interpretations (21). Either way, users of the Web face fundamental problems in judging whom to trust, what to believe, and which, if any, facts encountered there might enable true knowledge. In essence, their digital-media devices and the “googleplex” of stuff available on the Internet may leave them feeling virtually I-DEAD: that is, immersed, distracted, elated, addicted, and disinformed.††
So why should we care about these matters? For now, the reason is simple. Concerned parents of infants, toddlers, young and mid-age children, “tweeners,” and adolescents have grown increasingly worried about their offspring’s extensive exposure to and interaction with various electronic devices of the Post-Modern Digital Information Age. Their worries have been propagated during recent years in an echo chamber of television and radio segments, as well as newspaper and magazine articles (23–26), confronting issues such as: At what age is it acceptable for young children to start using digital media? How much screen time per day is the “right” amount? Can there be too much? When should digital media be turned off before bedtime? Is cognitive “brain training” with video games beneficial? Do violent video games significantly increase harmful aggressive social behavior? Has sexting through social media become too prevalent among teenagers? Are video games, the Internet, and the Web addictive? Where will the Information Age go from here? Who will provide the answers for us?
It is not yet known whether, on balance, intense long-term engagement with digital media will have net positive or negative, major or minor, effects on the minds, brains, and bodies of the world’s youth. Nor is it clear by exactly what psychological and physical mechanisms such effects would occur. Prior revolutionary advances in communications technology and cultural evolution do not appear to have caused the demise of humanity. Nonetheless, some worried pundits have begun prophesying that a kind of technological doomsday may be at hand (27–29). At present, however, all that can be said definitively is that we are now talking about much more than just telephones, movies, radio, television, and Rock ’n’ Roll; perhaps this time really is different!
Consequently, over the past 10 y, a variety of organizations have arisen to help clarify the aforementioned issues and to pursue them through systematic research and practical policies (30–33). Among these initiatives, most noteworthy in the current context are the Institute of Digital Media and Child Development and the Sackler Colloquium “Digital Media and Developing Minds,” which took place at the Arnold and Mabel Beckman Center on the campus of the University of California, Irvine, in October 2015.
The Sackler Colloquium “Digital Media and Developing Minds” was an interdisciplinary collaborative endeavor to promote joint interests of the National Academy of Sciences, the Arthur M. Sackler Foundation, and the Institute of Digital Media and Child Development.‡‡ At the colloquium, a select group of media-savvy experts in diverse disciplines assembled to pursue several interrelated goals: (i) reporting results from state-of-the-art scientific research; (ii) establishing a dialogue among medical researchers, social scientists, communications specialists, policy officials, and other interested parties who study media effects; and (iii) setting a future research agenda to maximize the benefits, curtail the costs, and minimize the risks for children and teens in the Post-Modern Digital Information Age.
Topics covered during the colloquium extended across several intersecting dimensions of interest. The stages of youth development discussed there spanned infancy, toddlerhood, early and middle childhood, the ‘tween years, adolescence, and young adulthood.§§ For each stage, various levels of analysis were considered, ranging from the cellular to the sociological, as well as across diverse psychological domains, such as cognition, emotion, and motivation. Numerous professional approaches were represented in the mix: for example, communications science, computer science, neurobiology, pediatrics, developmental psychology, education, public health, and business. Perspectives of both basic research and practical applications were represented too. Overall, the colloquium’s organization embodied a multidimensional matrix, many of whose cells were filled with specific keynote lectures, panel discussions, and informal commentaries.
This special section of PNAS is intended to convey an apt sense of some prototypical issues addressed at the Sackler Colloquium “Digital Media and Developing Minds.” It is also intended to foster appreciation for the crucial challenges that confront this nascent interdisciplinary research field, and for the importance of surmounting them as well as possible through future collaborative scientific investigation. Toward these ends, the present papers span an illustrative cross-section of human developmental stages and diverse relevant topic domains, such as: infant and early childhood cognition; influences of exposure to digital and other electronic visual media on brain development, language acquisition, and selective attention; media-multitasking by adolescents and young adults; enhancement of executive cognitive functions and general “fluid” intelligence through practice with video game training tasks; relationships between digital-media use and attention deficit/hyperactivity disorder (ADHD) in children and adolescents; and relationships between extensive play with violent video games and harmful aggressive physical behavior. Both potential benefits and possible costs from exposure to and use of currently available electronic digital media are considered. To be specific:
Christakis et al. (36) report on “How early media exposure may affect cognitive function: A review of results from observations in humans and experiments in mice,” reviewing relevant results from empirical studies of humans and animal models that concern how intense environmental stimulation influences neural brain development and behavior.
Lytle et al. (37) report on “Two are better than one: Infant language learning from video improves in the presence of peers,” showing that social copresence with other same-aged peers facilitates 9-mo-old infants’ learning of spoken phonemes through interactions with visual touch screens.
Kirkorian and Anderson (38) report on “Effect of sequential video shot comprehensibility on attentional synchrony: A comparison of children and adults,” using temporally extended eye-movement records to investigate how “top-down” cognitive comprehension processes for interpreting video narratives develop over an age range from early childhood (4 y of age) to adulthood.
Beyens et al. (39) report on “Screen media use and ADHD-related behaviors: Four decades of research,” systematically surveying representative scientific literature that suggests a modest positive correlation—moderated by variables such as gender and chronic aggressive tendencies—between media use and ADHD-related behaviors, thereby helping pave the way toward future detailed theoretical models of these phenomena.
Prescott et al. (40) report on “Metaanalysis of the relationship between violent video game play and physical aggression over time,” applying sophisticated statistical techniques to assess data from a large cross-cultural sample of studies (n = 24; aggregated participant sample size > 17,000) about associations between video game violence and prospective future physical aggression, which has yielded evidence of small but reliable direct relationships that are largest among Whites, intermediate among Asians, and smallest (unreliable) among Hispanics.
Uncapher and Wagner (41) report on “Minds and brains of media multitaskers: Current findings and future directions,” evaluating whether intensive media multitasking (i.e., engaging simultaneously with multiple media streams; for example, texting friends on smart phones while answering email messages on laptop computers and playing video games on other electronic devices) leads to relatively poor performance on various cognitive tests under single-tasking conditions, which might happen because chronic media multitasking diminishes individuals’ powers of sustained goal-directed attention.
Finally, Katz et al. (42) report on “How to play 20 questions with nature and lose: Reflections on 100 years of brain-training research,” analyzing how and why past research based on various laboratory and real-world approaches to training basic mental processes (e.g., selective attention, working memory, and cognitive control)—including contemporary video game playing (also known as “brain training”)—has yet to yield consistently positive, practically significant outcomes, such as durable long-term enhancements of general fluid intelligence.
When appraising the progress manifested by these papers (36–42), and by the Sackler Colloquium “Digital Media and Developing Minds” more generally, it will be important to remain cognizant of several crucial caveats. Such endeavors typify an extremely important but still nascent field of scientific investigation. Thirty years ago, the Internet, WWW, general-purpose public web sites (e.g., Google, Wikipedia, and YouTube), social-media platforms (e.g., Facebook, Twitter, and Instagram), and sophisticated portable electronic multimedia digital devices that have spawned the Post-Modern Digital Information Age did not exist. The rigorous study of digital media and child development has had relatively little time to progress. Much of the new methodology needed for advancing this field has yet to be fully conceived and implemented. Hence, extant empirical findings, conceptual hypotheses, and theoretical formalisms in this domain are relatively rudimentary to date.
Consequently, some of them remain open to considerable controversy at present. For example, how much screen time per day may be too much is debatable (43, 44). No consensus currently exists among bona fide experts about the extent to which intensive play with violent video games causes subsequent undesirable, physically aggressive behavior.¶¶ It is also not entirely clear yet whether a distinct real phenomenon of “internet addiction” should be acknowledged (46, 47). Moreover, we do not know for sure whether video game training on particular, relatively circumscribed, types of cognitive tasks increases general fluid intelligence in diverse populations of individuals (48–50).
That these uncertainties now prevail is unsurprising. After all, more than 30 y transpired from the inception of quantum mechanics’ most basic ideas until they reached a semblance of maturity in the early 1930s (51). This was so even though the scientific understanding of this fundamental empirical and theoretical realm already rested on the more than 300-y-old foundations of classical Newtonian and Maxwellian mathematical physics (52). So, given that the phenomena under investigation regarding digital media and child development are much more complex, one should expect, and make allowances for the likelihood, that research in this new interdisciplinary field will take considerably more time to mature fully.
Furthermore, we may expect that making progress in this new field will not come easily. The way forward is challenging because of several factors. First, the future environments and populations of people under investigation will be constantly changing, given that information technologies (both hardware and software) are likely to continue evolving at an essentially exponentially increasing rate, as exemplified by Moore’s Law (53).## Second, controlled experiments and randomized controlled trials, necessary for reaching sound inferences about cause–effect relationships, will be hard or impossible to implement on a broad basis; informative interventions may be unethical, or lack adequate numbers of appropriate cooperative participants. Third, political, cultural, and economic conflicts will surely complicate matters too. Persistent and newly arising confounding variables may therefore make reaching trustworthy scientific conclusions a difficult business.
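To make the phrase “essentially exponentially increasing rate” concrete, a minimal worked sketch follows; the 2-y doubling period assumed here is adopted only for the arithmetic and is not a figure taken from ref. 53.

```latex
% Doubling-law sketch of Moore's Law (assumed doubling period T = 2 y)
N(t) \approx N_{0}\, 2^{\,t/T}, \qquad T \approx 2\ \text{y}
% Example: over t = 50 y, the growth factor is 2^{50/2} = 2^{25} \approx 3.4 \times 10^{7}
```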
Nevertheless, there are also some reasons for being optimistic about the prospects of these future endeavors. Multiple relatively solid, stable foundations already exist in basic bioscience, neuroscience, physiology, psychological science (e.g., developmental, cognitive, and social psychology), sociology, clinical medicine, public health, educational practice, and communications studies on which next steps in digital-media and child-development research can build. We already know a considerable amount about the neural reward systems, mental cognitive-control functions, and linguistic and socio-emotional processes—as well as intragroup and intergroup cultural networking phenomena—that underlie local and global interactions in the WWW and realms of social media. What we need for going forward, in addition, will be systematically organized endeavors by a collaborative scientific community to merge these extant foundational components into a new, grander conceptual synthesis and unified theoretical framework, embodied, for example, in terms of agent-based computational modeling (54).
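As a purely illustrative sketch of what an agent-based computational model of media behavior might look like (all names, rules, and parameters here are hypothetical and are not drawn from ref. 54): each simulated agent carries a daily screen-time value and nudges it toward the average value of its social contacts at every step.

```python
import random

def simulate_screen_time(contacts, initial_hours, influence=0.2, steps=50, seed=1):
    """Toy agent-based model: each agent's daily screen time drifts toward the
    mean screen time of its social contacts, plus small random variation."""
    random.seed(seed)
    hours = dict(initial_hours)
    for _ in range(steps):
        updated = {}
        for agent, peers in contacts.items():
            peer_mean = sum(hours[p] for p in peers) / len(peers)
            noise = random.gauss(0, 0.1)
            updated[agent] = max(0.0, hours[agent] + influence * (peer_mean - hours[agent]) + noise)
        hours = updated
    return hours

# Hypothetical three-agent network and starting values (hours per day).
contacts = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
initial = {"A": 1.0, "B": 6.0, "C": 2.0}
print(simulate_screen_time(contacts, initial))
```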
Meanwhile, as David Deutsch, the eminent quantum-computer scientist, philosopher, and prescient futurist, has duly explicated, we now stand at “the beginning of Infinity,” with manifold, as yet untold, greater prospects potentially ahead of us (7). Where humanity and the Post-Modern Digital Information Age go from here will depend on our moving forward by wisely using the resources currently at our disposal, while staying mindful of pitfalls that also lie ahead, and meeting problematic new challenges—which will surely arise—with confidence that they too can be surmounted.
Acknowledgments
I thank Dr. Pamela Hurst-Della Pietra and Prof. Jane Brown, who played central roles in organizing the Sackler Colloquium “Digital Media and Developing Minds,” from which the present special issue in PNAS has stemmed. I especially thank Jill Sackler, who has given generous funding over the years to support a superb series of colloquia in honor of her husband, Arthur M. Sackler; and the staff of the Arthur M. Sackler Foundation and PNAS, including particularly Susan Marty for supervising the logistics of this colloquium, and David Stopak for supervising the editorial processing of the present papers. Additional generous support for this colloquium was also provided by the Institute of Digital Media and Child Development. Preparation of this paper was enabled in part by Tim Berners-Lee, Apple Safari, Google.com, Wikipedia.com, my MacBook Air computer, and iPhone 5S, only one of which existed more than 25 years before this Introduction.*** Additional recordings are available at the following website: www.childrenandscreens.com/conference.
Footnotes
Email: demeyer@umich.edu.
Author Contributions: D.E.M. wrote the paper.
The author declares no conflict of interest.
*For a broad-ranging, highly engaging, and provocative overview of the multiple revolutions in information technology that predated Gutenberg’s efforts and that have stemmed subsequently from them, see ref. 1. A considerable amount of background information for this Introduction comes from that wonderful book.
†A cogent discussion of this and other remarks by McLuhan appears in ref. 1 (p. 413).
‡The prognostication that “this time is different” has been offered in other contexts as well. For example, one prominent case involves the title of an influential book about the 2008 global financial crisis (2).
#The qualifiers “perhaps” and “maybe” appear here to acknowledge that we do not yet have entirely well-supported empirical and theoretical accounts for where and why humans evolved to be the way they now are. The so-called “Savanna Hypothesis” about the location of human origins remains controversial; it has engendered skepticism from advocates of alternative hypotheses (6). Furthermore, humans may have undergone a “meta-adaptation” that has given them a capacity to flourish in many different environments other than the ones where they evolved originally. If so, then contrary to some of their purported limitations, perhaps people are actually well adapted for sitting alone in dimly lit rooms while staring at glowing LCD screens connected to the World Wide Web (7).
§Given the compelling narrative of Gertner (3), it is arguably an unfortunate state of affairs that, during 1984, the US Federal Government required AT&T to disband the Bell System, which had sustained much of the nation’s long-term innovative scientific capacities over much of the 20th century. Since then, such capacities have yet to be sufficiently reestablished. Technology companies like Apple, Microsoft, Google, and Facebook do not—at least not thus far—produce Nobel Prize winners as Bell Labs did.
¶Indeed, an emergence akin to the present one was anticipated 80 y ago by the legendary science-fiction author and prescient futurist, H. G. Wells (4).
‖For example, an iconic illustration of total immersion may be observed currently every day on college campuses: a majority of students wandering hither and yon individually between classes with their eyes and noses engrossed in hand-held smart phones as they converse or text at length with various remote intelligent agents. Indeed, such behavior has become so pervasive and risky that, in some venues, it is now officially prohibited; in other venues, additional attempts at diminishing the risk have been made by covering street-light poles with heavy padding on city sidewalks (14).
††By definition, “googolplex” refers to the number 10^googol, where 1 googol = 10^100 (22). It is an impressive testament to the ambitions of Larry Page and Sergey Brin, founders of Google LLC, that their company was named as a spin-off (pun intended) from this utterly huge quantity.
**Further information about topics such as internet addiction, violent video gaming, cyber-bullying, and sexting is available on the website of the Institute of Digital Media and Child Development: www.childrenandscreens.com.
‡‡The Institute of Digital Media and Child Development is a 501(c)(3) nonprofit organization founded by Pamela Hurst-Della Pietra in 2013. Its objectives are to foster interdisciplinary intellectual dialogue, disseminate trustworthy information for parents in relevant public forums, and support scientific research that bridges the medical, neuroscience, social science, education, and policy communities.
§§By current convention, the term ’tweens (also known as “preteens”) refers to youth in late elementary and early middle school who are nearing puberty and experiencing a time of major life transition: that is, ranging approximately between 10 and 13 y of age (34). Furthermore, for now, the term “young adult” refers to individuals between 18 and 21 y of age who may have finished high school, gone to trade schools, enrolled in college, or entered the work force on a full-time basis. In some contexts, these individuals are treated as full-fledged mature adults; for example, they may vote, enlist in military service and, in at least some states, consume alcohol legally. At the same time, though, parts of their brains typically have not reached full growth. Specifically, their prefrontal cortices are still developing, and may continue doing so until around the age of 25 y; in these respects, they are thus not yet full-fledged mature adults, and their brains may still be highly susceptible to the effects of intensive electronic digital-media exposure (35).
##For a brief cogent discussion of Moore’s Law and its pervasive consequences, see ref. 1.
¶¶In 2015, the American Psychological Association (APA) issued a policy statement concluding that a link exists between violent video game playing and increased aggression (45). By contrast, in 2013 an independent group of approximately 230 media scholars, psychologists, and criminologists issued a statement to the APA Task Force on Violent Media, opposing its position. This opposition has since continued (available at https://www.scribd.com/doc/223284732/Scholar-s-Open-Letter-to-the-APA-Task-Force-On-Violent-Media-Opposing-APA-Policy-Statements-on-Violent-Media).
***Tim Berners-Lee invented the World Wide Web and the hypertext transfer protocol (HTTP) on which it has been based since its inception in 1989 (55).
This paper results from the Arthur M. Sackler Colloquium of the National Academy of Sciences, “Digital Media and Developing Minds,” held October 14–16, 2015, at the Arnold and Mabel Beckman Center of the National Academies of Sciences and Engineering in Irvine, CA. The complete program and video recordings of most presentations are available on the NAS website at www.nasonline.org/Digital_Media_and_Developing_Minds.
Published under the PNAS license.
References
1. Gleick J
2. Reinhart CM, Rogoff K
3. Gertner J
4. Wells HG
5. Watts DJ
6.
7. Deutsch D
8. Green A, Cohen-Zion M, Haim A, Dagan Y
9. Gopher D, Koriat A, Meyer DE, Kieras DE
10. Yantis SG, Abrams RA
11. Foerde K, Knowlton BJ, Poldrack RA
12.
13. Bergen B, Medeiros-Ward N, Wheeler K, Drews F, Strayer D
14. Jones T
15. Rosen LD, et al.
16.
17. Moreno MA, Jelenchick LA, Christakis DA
18. Gentile DA, et al.
19.
20. James W
21. Anderson WT
22. Sagan C
23. Kamenetz A
24. Homayoun A
25. Salam M, Stack L
26. Sifferlin A
27. Brody J
28. Rogers P
29. Twenge JM
30. Kaiser Family Foundation
31. Lenhart A, Madden M, Cortesi S, Gassner U, Smith A
32. Council on Communications and Media, American Academy of Pediatrics
33. Keeley B, et al.
34. O’Donnell J
35.
36. Christakis DA, Ramirez JSB, Ferguson SM, Ravinder S, Ramirez J-M
37. Lytle SR, Garcia-Sierra A, Kuhl PK
38. Kirkorian HL, Anderson DR
39. Beyens I, Valkenburg PM, Piotrowski JT
40. Prescott AT, Sargent JD, Hull JG
41. Uncapher MR, Wagner AD
42. Katz B, Shah P, Meyer DE
43. Mims C
44. Przybylski AK, Weinstein N
45. American Psychological Association
46. McClurg L
47. Przybylski AK, Weinstein N, Murayama K
48. Jaeggi SM, Buschkuehl M, Jonides J, Perrig WJ
49.
50.
51. Gamow G
52. Feynman RP, Leighton RB, Sands M
53. Moore GE
54. Railsback SF, Grimm V
55. Berners-Lee T, Fischetti M