Interesting Excerpts
The following excerpts are from articles or books that I have recently read. They caught my interest, and I hope that you will find them worth reading. If one prompts you to learn more, or if you choose to cite it, I urge you to read the original article or source so that you better understand the perspective of the author(s).
Mythical Androids and Ancient Automatons

[These excerpts are from a book review by Sarah Olson in the 5 October 2018 issue of Science.]

      Long before the advent of modern robots and artificial intelligence (AI), automated technology existed in the storytelling and imaginations of ancient societies. Hephaestus, blacksmith of the mythical Greek gods, fashioned mechanical serving maids from gold and endowed them with learning, reason, and skill in Homer’s Iliad. Designed to anticipate their master’s requests and act on them without instruction, the Golden Maidens share similarities with modern machine learning, which allows AI to learn from experience without being explicitly programmed.

      The possibility of AI servants continues to tantalize the modern imagination, resulting in household automatons such as the Roomba robot vacuum and Amazon’s virtual assistant, Alexa. One could imagine a future iteration of Alexa as a fully automatic robot, perhaps washing our dishes or reminding us to pick up the kids after school. In her new book, Gods and Robots, Adrienne Mayor draws comparisons between mythical androids and ancient robots and the AI of today….

      Through detailed storytelling and careful analysis of popular myths, Mayor urges readers to consider lessons learned from these stories as we set about creating a new world with AI. Like Pandora, an automaton created by Hephaestus who opened a box and released evils into the world, AI could bring about unforeseen problems. Here, Mayor cites Microsoft’s 2016 experiment with the Twitter chatbot “Tay,” a neural network programmed to converse with Twitter users without requiring supervision. Only hours after going live, Tay succumbed to a group of followers who conspired to turn the chatbot into an internet troll. Microsoft’s iteration the following year suffered a similar fate.

      Despite her extensive knowledge of ancient mythology, Mayor does little to demonstrate an understanding of modern AI, neural networks, and machine learning; the chatbots are among only a handful of examples of modern technology she explores.

      Instead, Mayor focuses on her own area of expertise: analyzing classical mythology. She recounts, for example, the story of the bronze robot Talos, an animated statue tasked with defending the Greek island of Crete from pirates….

      When Pandora opens her box of evils, she releases hope by accident—inadvertently helping humanity learn to live in a world now corrupted by evil. Like this story, Gods and Robots is cautionary but optimistic. While warning of the risks that accompany the irresponsible deployment of technology, Mayor reassures readers that AI could indeed bring about many of the wonderful things that our ancestors imagined.

Junk Food, Junk Science?

[This excerpt is from a book review by Cyan James in the 5 October 2018 issue of Science.]

      …In her latest book, Unsavory Truth, [Marion] Nestle levels a withering fusillade of criticism against food and beverage companies that use questionable science and marketing to push their own agendas about what should end up on our dinner tables.

      In her comprehensive review of companies’ nutrition research practices, Nestle (whose name is pronounced “Nes-sul” and who is not affiliated with the Swiss food company) reveals a passion for all things food, combined with pronounced disdain for the systematic way food companies influence the scientists and research behind the contents of our pantries. Her book is an autopsy of modern food science and advertising, pulling the sheet back on scores of suspect practices, as well as a chronicle of Nestle’s own brushes with food company courtship.

      Is it shocking that many food companies do whatever they can in the name of fatter profits? Maybe, but it's old hat for Nestle, who has spent five decades honing her expertise and is a leading scholar in the field of nutrition science. In this book, she details nearly every questionable food company tactic in the playbook, from companies that fund their own food science research centers and funnel media attention to nondietary explanations for obesity, to those that cherry-pick data or fund professional conferences as a plea for tacit approval.

      Even companies that hawk “benign” foods such as blueberries, pomegranate juice, and nuts come under the author’s strict scrutiny because, as she reminds readers, “Foods are not drugs. To ask whether one single food has special health benefits defies common sense.”

      Instead, Nestle urges eaters to look behind the claims to discover who funds food-science studies, influences governmental regulations, advises policy-makers, and potentially compromises researchers. Food fads, we learn, can spring from a few findings lifted out of context or interpreted with willful optimism. Companies, after all, can hold different standards for conducting and interpreting research than those of independent academic institutions or scientific organizations and can be driven by much different motives.

      Nestle wields a particularly sharp pen against junk-food purveyors, soda companies, and others who work hard to portray excess sugar as innocuous or who downplay the adverse effects their products can have. These companies adopt tactics such as founding research centers favorable to their bottom line, diverting consumer attention away from diet to focus instead on the role of exercise in health, or trying to win over individual researchers to favorably represent their products. Nestle calls foul on these attempts to portray junk foods as relatively harmless, even knocking the “health halo” shine from dark chocolates and other candies masquerading as responsible health foods.

      Not content to point to the many thorny problems lurking behind food company labels and glitzy sponsored meetings, however, Nestle offers a constructive set of suggestions for how nutrition scientists can navigate potential conflicts of interest.

      Consumers, too, bear a responsibility for promoting eating habits that steer clear of dubious advertising. Nestle advocates adhering to a few simple guidelines: “[E]at your veggies, choose relatively unprocessed foods, keep junk foods to a minimum, and watch out for excessive calories!” We have the power of our votes and our forks, she reminds us, and can use both to insist that food companies help us eat well. Nestle’s determination to go to bat for public health shines through, illuminating even her occasional sections of workaday prose or dense details.

      Nestle marshals a convincing number of observations on modern food research practices while energetically delineating how food companies’ clout can threaten the integrity of the research performed on their products. There is indeed something rotten in the state of dietary science, but books like this show us that we consumers also hold a great deal of power.

Renewable Energy for Puerto Rico

[These excerpts are from an editorial by Arturo Massol-Deya, Jennie C. Stephens and Jorge L. Colon in the 5 October 2018 issue of Science.]

      Puerto Rico is not prepared for another hurricane. A year ago, Hurricane Maria obliterated the island’s electric grid, leading to the longest power outage in U.S. history. This disrupted medical care for thousands and contributed to an estimated 2975 deaths. The hurricane caused over $90 billion in damage for an island already in economic crisis. Although authorities claim that power was restored completely, some residents still lack electricity. Despite recovery efforts, the continued vulnerability of the energy infrastructure threatens Puerto Rico’s future. But disruptions create possibilities for change. Hurricane Maria brought an opportunity to move away from a fossil fuel-dominant system and establish instead a decentralized system that generates energy with clean and renewable sources. This is the path that will bring resilience to Puerto Rico.

      Puerto Rico is representative of the Caribbean islands that rely heavily on fossil fuels for electric power; 98% of its electricity comes from imported fossil fuels (oil, natural gas, and coal), whereas only 2% comes from renewable sources (solar, wind, or hydroelectric)….This makes the island's centralized electrical grid vulnerable to hurricanes that are predicted to increase in severity because of climate change.

      In Puerto Rico and the rest of the Caribbean, where sun, wind, water, and biomass are abundant sources of renewable energy, there is no need to rely on fossil fuel technology. Unfortunately, the government of Puerto Rico and the U.S. Federal Emergency Management Agency have been making decisions about the local power authority that are restoring the energy system to what it was before Hurricane Maria hit, perpetuating fossil fuel reliance….

      At this juncture, when the opportunity to build a sustainable and resilient electrical system presents itself, moving away from dependency on imported fossil fuels should be the guiding vision. Puerto Rico must embrace the renewable endogenous sources that abound on the island and build robust microgrids powered by solar and wind, install hybrid systems (such as biomass biodigesters), and create intelligent networks that can increase the resilience of the island. The Puerto Rican government and U.S. Congress should use Hurricane Maria as a turning point for pushing Puerto Rico toward using 100% renewable energy rather than a platform to plant generators across the island….

Boys Will Be Superintendents: School Leadership as a Gendered Profession

[These excerpts are from an article by Robert Maranto, Kristen Carroll, Albert Cheng and Manuel P. Teodoro in the October 2018 issue of Phi Delta Kappan.]

      In 19th-century America, both teaching and school leadership were mainly feminine pursuits, in part because most people considered women better at nurturing children, but also because local school boards could get away with paying women less. Eventually, however, progressives sought to professionalize educational leadership with larger, bureaucratic schools led by credentialed principals and superintendents….At a time when professional meant male, this bureaucratization and professionalization of schools meant replacing female principals and superintendents with men.

      As Kate Rousmaniere…details in her social history of the principalship, graduate programs providing educational leadership credentials served as an increasingly common career pathway for men, particularly veterans who returned from World War II and attended graduate school on the GI Bill. In addition, the emerging field of athletic coaching not only attracted many men to jobs in education but also gave them clear routes to principal and superintendent posts….

      Thus, by the 1970s, researchers found that nearly 80 percent of school superintendents had coached athletic teams earlier in their careers….

      As a result, explains Rousmaniere, the numbers of women in school and district leadership declined through most of the 20th century. For example, the percentage of elementary school principal posts held by women fell from 55% in 1928 to 20% in 1973. Since high school enrollments were rare in the early part of the century, and secondary leadership positions were already seen as relatively prestigious, the number of women in high school leadership was always low, even in the 1920s, but by mid-century it fell to just 1%. By the 1960s, Rousmaniere notes, “It seemed to be the natural order of things that women taught and men managed….”

      Today, the outlook for women in leadership is significantly better, but there's still a serious imbalance. For example, using data on more than 7,500 public school principals from the U.S. Department of Education 2011-12 Schools and Staffing Survey and additional published sources, we recently found that while 90% of elementary teachers are women, only 66% of elementary principals are women. In secondary schools, meanwhile, women make up 63% of teachers but just 48% of principals.

      Before becoming principals, men and women are about equally likely to have served as department heads, vice principals, and club advisers. In two key respects, however, their paths to the principalship differ. Women are twice as likely as men to have prior service as curricular specialists (31.3% of women and 16.0% of men), and men are three times as likely (52.8% of men and 16.5% of women) to have had prior experience as athletic coaches….Overall, men are proportionately more likely to gain promotion to principal, and to do so more quickly than women. In traditional public schools, male principals have taught a mean of only 10.7 years before becoming principal compared to 13.2 years for women.

      The imbalance continues beyond the principalship, too. Nationwide, for example, just 24.1% of superintendents are women….

True Story

[These excerpts are from an article by Steve Mirsky in the October 2018 issue of Scientific American.]

      …The Washington Post tallied 4,229 “false or misleading claims” by Trump in his first 558 days in office…

      Here’s an example of my conundrum. Early this year, Trump disputed the idea of climate change: “The ice caps were going to melt, they were going to be gone by now but now they’re setting records, so okay, they’re at a record level.” But a researcher at the National Snow and Ice Data Center said that polar ice was at “a record low in the Arctic (around the North Pole) right now and near record low in the Antarctic (around the South Pole).” The Trump claim and the response were both published by the Pulitzer Prize-winning organization PolitiFact….

      I was reading a book. The book is called The Death of Truth. The writer’s name is Michiko Kakutani. She wrote that the Trump administration ordered the Centers for Disease Control and Prevention to avoid using the terms “science-based” and “evidence-based.” She says that in another book called 1984 there’s a society that does not even have the word “science” because, as she quoted from that other book, “‘the empirical method of thought, on which all the scientific achievements of the past were founded,’ represents an objective reality that threatens the power of Big Brother to determine what truth is.”…

      Maria Konnikova is a science journalist. She also has a doctorate in psychology….She wrote an article for a place called Politico entitled “Trump’s Lies vs. Your Brain.” She wrote, “If he has a particular untruth he wants to propagate ... he simply states it, over and over. As it turns out, sheer repetition of the same lie can eventually mark it as true in our heads.” She also wrote that because of how our brains work, “Repetition of any kind—even to refute the statement in question—only serves to solidify it.”

Clicks, Lies and Videotapes

[These excerpts are from an article by Brooke Borel in the October 2018 issue of Scientific American.]

      The consequences for public knowledge and discourse could be profound. Imagine, for instance, the impact on the upcoming midterm elections if a fake video smeared a politician during a tight race. Or attacked a CEO the night before a public offering. A group could stage a terrorist attack and fool news outlets into covering it, sparking knee-jerk retribution. Even if a viral video is later proved to be fake, will the public still believe it was true anyway? And perhaps most troubling: What if the very idea of pervasive fakes makes us stop believing much of what we see and hear—including the stuff that is real?

      …The path to fake video traces back to the 1960s, when computer-generated imagery was first conceived. In the 1980s these special effects went mainstream, and ever since, movie lovers have watched the technology evolve from science-fiction flicks to Forrest Gump shaking hands with John F. Kennedy in 1994 to the revival of Peter Cushing and Carrie Fisher in Rogue One….

      Experts have long worried that computer-enabled editing would ruin reality. Back in 2000, an article in MIT Technology Review about products such as Video Rewrite warned that “seeing is no longer believing” and that an image “on the evening news could well be a fake—a fabrication of fast new video-manipulation technology.” Eighteen years later fake videos don't seem to be flooding news shows. For one thing, it is still hard to produce a really good one….

      The way we consume information, however, has changed. Today only about half of American adults watch the news on television, whereas two thirds get at least some news via social media, according to the Pew Research Center. The Internet has allowed for a proliferation of media outlets that cater to niche audiences—including hyperpartisan Web sites that intentionally stoke anger, unimpeded by traditional journalistic standards. The Internet rewards viral content that we are able to share faster than ever before….And the glitches in fake video are less discernible on a tiny mobile screen than on a living-room TV.

      The question now is what will happen if a deepfake with significant social or political implications goes viral. On such a new, barely studied frontier, the short answer is that we do not know….

      The science on written fake news is limited. But some research suggests that seeing false information just once is sufficient to make it seem plausible later on….

End of the Megafauna

[These excerpts are from a brief article in the Fall 2018 issue of Rotunda, put out by the American Museum of Natural History.]

      The American Museum of Natural History is world famous for its vertebrate paleontology halls, where the story of vertebrate life is traced from its beginnings to the near present, as told by the most direct form of evidence we have: the fossils themselves….

      At one end of the wing are a Columbian mammoth and an American mastodon. Both are very definitely proboscidean, or elephantlike, in body form, although their last common ancestor lived about 25 million years ago. On the North American mainland, populations of mammoths and mastodons were still living as recently as 12,000 years ago; all were gone 1,000 or so years later. A couple of island-bound groups of woolly mammoths struggled on, but these too had disappeared by 4,200 years ago. Asian and African elephants persisted. These magnificent beasts didn’t. Why?

      Elsewhere in the hall are members of Xenarthra, today an almost exclusively South American group that includes living armadillos, tree sloths, and anteaters. The largest of the living xenarthrans is the giant anteater (Myrmecophaga tridactyla), but as late as 12,000 to 13,000 years ago there were several much larger xenarthran species in both North and South America that may have weighed as much as 2,000-4,000 kg. Among these was the gigantic Lestodon, whose closest living relatives, the two- and three-toed tree sloths Choloepus and Bradypus, weigh no more than 5 kg. They made it; Lestodon didn’t. Why?

      Many other Quaternary species prospered in their native environments for hundreds of thousands of years or more without suffering any imperiling losses. But beginning about 50,000 years ago, something started happening to large animals. Species sometimes disappeared singly, at other times in droves. Size must have mattered, because their smaller close relatives mostly weathered the extinction storm and are still with us.

      So why did these megafaunal extinctions occur?

      A short but honest reply would be that there is no satisfactory answer—not yet. The debate continues as fresh leads are traced and dead ends abandoned or refashioned in order to accommodate new evidence. It’s a great time to be a Quaternary paleontologist!

The Marvelous Mola

[These excerpts are from a brief article in the Fall 2018 issue of Rotunda, put out by the American Museum of Natural History.]

      Mola may not be a household name, but the various species of this genus of sunfish are found in temperate and tropical seas all over the world and can reach nearly 11 feet (3.5 meters) in length and weigh up to 5,070 lbs (2,300 kg).

      To live large, these marine giants grow fast: in captivity, young sunfish can pack on more than 800 pounds in just over a year….

      Size gives the Mola mola several advantages. Females produce enormous numbers of eggs: one 4-foot (1.2-meter-long) female was estimated to be carrying 300 million eggs. The Mola mola also has a broad thermal range, from 36°F to 86°F, allowing it to dive more than 3,000 feet (914 meters) deep. There is slim preliminary evidence that the ocean sunfish may be able to tolerate low oxygen levels, which would be a boon, as such conditions are among a growing list of concerns in today’s oceans, along with human pollution and global sea temperature rise.

      As adults, ocean sunfish have an appetite for jellyfish that helps combat another modern marine threat. “As we overfish the ocean, in some regions, jellies can move in to fill those open niches,” says marine biologist Tierney Thys, who has tracked ocean sunfishes all over the globe. “As these local jelly populations increase, we need to keep our populations of jelly eaters—like the Mola mola—intact.”

Abortion Facts

[These excerpts are from an article by Michael Shermer in the September 2018 issue of Scientific American.]

      In May of this year the pro-life/pro-choice controversy leapt back into headlines when Ireland overwhelmingly approved a referendum to end its constitutional ban on abortion. Around the same time, the Trump administration proposed that Title X federal funding be withheld from abortion clinics as a tactic to reduce the practice, a strategy similar to that of Texas and other states to shut down clinics by burying them in an avalanche of regulations, which the U.S. Supreme Court struck down in 2016 as an undue burden on women for a constitutionally guaranteed right. If the goal is to attenuate abortions, a better strategy is to reduce unwanted pregnancies. Two methods have been proposed: abstinence and birth control.

      Abstinence would obviate abortions just as starvation would forestall obesity. There is a reason no one has proposed chastity as a solution to overpopulation. Sexual asceticism doesn’t work, because physical desire is nearly as fundamental as food to our survival and flourishing. A 2008 study…found that among American adolescents ages 15 to 19, “abstinence-only education did not reduce the likelihood of engaging in vaginal intercourse” and that “adolescents who received comprehensive sex education had a lower risk of pregnancy than adolescents who received abstinence-only or no sex education.”

      …When women are educated and have access to birth-control technologies, pregnancies and, eventually, abortions decrease. A 2003 study…concluded that abortion rates declined as contraceptive use increased in seven countries (Kazakhstan, Kyrgyzstan, Uzbekistan, Bulgaria, Turkey, Tunisia and Switzerland). In six other nations (Cuba, Denmark, the Netherlands, Singapore, South Korea and the U.S.), contraceptive use and abortion rates rose simultaneously, but overall levels of fertility were falling during the period studied. After fertility levels stabilized, contraceptive use continued to increase, and abortion rates fell.

      Something similar happened in Turkey between 1988 and 1998, when abortion rates declined by almost half as unreliable forms of birth control (for one, the rhythm method) were replaced by more modern technologies (for example, condoms)….

      To be fair, the multivariable mesh of correlations in all these studies makes direct causal links difficult for social scientists to untangle. But as I read the research, when women have limited sex education and no access to contraception, they are more likely to get pregnant, which leads to higher abortion rates. When women are educated about and have access to effective contraception, as well as legal and medically safe abortions, they initially use both strategies to control family size, after which contraception alone is often all that is needed and abortion rates decline. Admittedly, deeply divisive moral issues are involved. Abortion does end a human life, so it should not be done without grave consideration for what is at stake, as we do with capital punishment and war. Likewise, the recognition of equal rights, especially reproductive rights, should be acknowledged by all liberty-loving people. But perhaps progress for all human life could be more readily realized if we were to treat abortion as a problem to be solved rather than a moral issue over which to condemn others. As gratifying as the emotion of moral outrage is, it does little to bend the moral arc toward justice.

Alone in the Milky Way

[This excerpt is from an article by John Gribbin in the September 2018 issue of Scientific American.]

      DNA evidence pinpoints two evolutionary bottlenecks in particular. A little more than 150,000 years ago the human population was reduced to no more than a few thousand—perhaps only a few hundred—breeding pairs. And about 70,000 years ago the entire human population fell to about 1,000. Although this interpretation of the evidence has been questioned by some researchers, if it is correct, all the billions of people now on Earth are descended from this group, which was so small that a species diminished to such numbers today would likely be regarded as endangered.

      That our species survived—and even flourished, eventually growing to number more than seven billion and advancing into a technological society—is amazing. This outcome seems far from assured.

      As we put everything together, what can we say? Is life likely to exist elsewhere in the galaxy? Almost certainly yes, given the speed with which it appeared on Earth. Is another technological civilization likely to exist today? Almost certainly no, given the chain of circumstances that led to our existence. These considerations suggest we are unique not just on our planet but in the whole Milky Way. And if our planet is so special, it becomes all the more important to preserve this unique world for ourselves, our descendants and the many creatures that call Earth home.

Why We Fight

[These excerpts are from an article by R. Brian Ferguson in the September 2018 issue of Scientific American.]

      Do people, or perhaps just males, have an evolved predisposition to kill members of other groups? Not just a capacity to kill but an innate propensity to take up arms, tilting us toward collective violence? The word “collective” is key. People fight and kill for personal reasons, but homicide is not war. War is social, with groups organized to kill people from other groups. Today controversy over the historical roots of warfare revolves around two polar positions. In one, war is an evolved propensity to eliminate any potential competitors. In this scenario, humans all the way back to our common ancestors with chimpanzees have always made war. The other position holds that armed conflict has only emerged over recent millennia, as changing social conditions provided the motivation and organization to collectively kill. The two sides separate into what the late anthropologist Keith Otterbein called hawks and doves….

      If war expresses an inborn tendency, then we should expect to find evidence of war in small-scale societies throughout the prehistoric record. The hawks claim that we have indeed found such evidence….

      …If wars are natural eruptions of instinctive hate, why look for other answers? If human nature leans toward collective killing of outsiders, how long can we avoid it?

      The anthropologists and archaeologists in the dove camp challenge this view. Humans, they argue, have an obvious capacity to engage in warfare, but their brains are not hardwired to identify and kill outsiders involved in collective conflicts. Lethal group attacks, according to these arguments, emerged only when hunter-gatherer societies grew in size and complexity and later with the birth of agriculture. Archaeology, supplemented by observations of contemporary hunter-gatherer cultures, allows us to identify the times and, to some degree, the social circumstances that led to the origins and intensification of warfare.

      In the search for the origins of war, archaeologists look for four kinds of evidence. The artwork on cave walls is exhibit one. Paleolithic cave paintings from Grottes de Cougnac, Pech Merle and Cosquer in France dating back approximately 25,000 years show what some scholars perceive to be spears penetrating people, suggesting that people were waging war as early as the late Paleolithic period. But this interpretation is contested. Other scientists point out that some of the incomplete figures in those cave paintings have tails, and they argue that the bent or wavy lines that intersect with them more likely represent forces of shamanic power, not spears. (In contrast, wall paintings on the eastern Iberian Peninsula, probably made by settled agriculturalists thousands of years later, clearly show battles and executions.)

      Weapons are also evidence of war, but these artifacts may not be what they seem…

      Beyond art and weapons, archaeologists look to settlement remains for clues. People who fear attack usually take precautions….

      Skeletal remains would seem ideal for determining when war began, but even these require careful assessment. Only one of three or four projectile wounds leaves a mark on bone. Shaped points made of stone or bone buried with a corpse are sometimes ceremonial, sometimes the cause of death. Unhealed wounds to a single buried corpse could be the result of an accident, an execution or a homicide. Indeed, homicide may have been fairly common in the prehistoric world—but homicide is not war. And not all fights were lethal….

      The global archaeological evidence, then, is often ambiguous and difficult to interpret. Often different clues must be pieced together to produce a suspicion or probability of war….

      The preconditions that make war more likely include a shift to a more sedentary existence, a growing regional population, a concentration of valuable resources such as livestock, increasing social complexity and hierarchy, trade in high-value goods, and the establishment of group boundaries and collective identities. These conditions are sometimes combined with severe environmental changes….

      The preconditions for war are only part of the story, however, and by themselves, they may not suffice to predict outbreaks of collective conflicts. In the Southern Levant, for instance, those preconditions existed for thousands of years without evidence of war….

      People are people. They fight and sometimes kill. Humans have always had a capacity to make war, if conditions and culture so dictate. But those conditions and the warlike cultures they generate became common only over the past 10,000 years—and, in most places, much more recently than that. The high level of killing often reported in history, ethnography or later archaeology is contradicted in the earliest archaeological findings around the globe. The most ancient bones and artifacts are consistent with the title of Margaret Mead’s 1940 article: “Warfare Is Only an Invention—Not a Biological Necessity.”

Last Hominin Standing

[This excerpt is from an article by Kate Wong in the September 2018 issue of Scientific American.]

      …It now looks as though H. sapiens originated far earlier than previously thought, possibly in locations across Africa instead of a single region, and that some of its distinguishing traits—including aspects of the brain—evolved piecemeal. Moreover, it has become abundantly clear that H. sapiens actually did mingle with the other human species it encountered and that interbreeding with them may have been a crucial factor in our success. Together these findings paint a far more complex picture of our origins than many researchers had envisioned—one that privileges the role of dumb luck over destiny in the success of our kind.

      Debate about the origin of our species has traditionally focused on two competing models. On one side was the Recent African Origin hypothesis…which argues that H. sapiens arose in either eastern or southern Africa within the past 200,000 years and, because of its inherent superiority, subsequently replaced archaic hominin species around the globe without interbreeding with them to any significant degree. On the other was the Multiregional Evolution model…which holds that modern H. sapiens evolved from Neandertals and other archaic human populations throughout the Old World, which were connected through migration and mating. In this view, H. sapiens has far deeper roots, reaching back nearly two million years.

      By the early 2000s the Recent African Origin model had a wealth of evidence in its favor. Analyses of the DNA of living people indicated that our species originated no more than 200,000 years ago. The earliest known fossils attributed to our species came from two sites in Ethiopia, Omo and Herto, dated to around 195,000 and 160,000 years ago, respectively. And sequences of mitochondrial DNA (the tiny loop of genetic material found in the cell’s power plants, which is different from the DNA contained in the cell’s nucleus) recovered from Neandertal fossils were distinct from the mitochondrial DNA of people today—exactly as one would expect if H. sapiens replaced archaic human species without mating with them.

      Not all of the evidence fit with this tidy story, however. Many archaeologists think that the start of a cultural phase known as the Middle Stone Age (MSA) heralded the emergence of people who were beginning to think like us. Prior to this technological shift, archaic human species throughout the Old World made pretty much the same kinds of stone tools fashioned in the so-called Acheulean style. Acheulean technology centered on the production of hefty hand axes that were made by taking a chunk of stone and chipping away at it until it had the desired shape. With the onset of the MSA, our ancestors adopted a new approach to toolmaking, inverting the knapping process to focus on the small, sharp flakes they detached from the core—a more efficient use of raw material that required sophisticated planning. And they began attaching these sharp flakes to handles to create spears and other projectile weapons. Moreover, some people who made MSA tools also made items associated with symbolic behavior, including shell beads for jewelry and pigment for painting. A reliance on symbolic behavior, including language, is thought to be one of the hallmarks of the modern mind.

      The problem was that the earliest dates for the MSA were more than 250,000 years ago—far older than those for the earliest H. sapiens fossils at less than 200,000 years ago. Did another human species invent the MSA, or did H. sapiens actually evolve far earlier than the fossils seemed to indicate? In 2010 another wrinkle emerged. Geneticists announced that they had recovered nuclear DNA from Neandertal fossils and sequenced it. Nuclear DNA makes up the bulk of our genetic material. Comparison of the Neandertal nuclear DNA with that of living people revealed that non-African people today carry DNA from Neandertals, showing that H. sapiens and Neandertals did interbreed after all, at least on occasion.

      Subsequent ancient genome studies confirmed that Neandertals contributed to the modern human gene pool, as did other archaic humans. Further, contrary to the notion that H. sapiens originated within the past 200,000 years, the ancient DNA suggested that Neandertals and H. sapiens diverged from their common ancestor considerably earlier than that, perhaps upward of half a million years ago. If so, H. sapiens might have originated more than twice as long ago as the fossil record indicated.

Talking through Time

[This excerpt is from an article by Christine Kenneally in the September 2018 issue of Scientific American.]

      That language is uniquely human has been assumed for a long time. But trying to work out exactly how and why that is the case has been weirdly taboo. In the 1860s the Societe de Linguistique de Paris banned discussion about the evolution of language, and the Philological Society of London banned it in the 1870s. They may have wanted to clamp down on unscientific speculation, or perhaps it was a political move—either way, more than a century’s worth of nervousness about the subject followed. Noam Chomsky, the extraordinarily influential linguist at the Massachusetts Institute of Technology, was, for decades, rather famously uninterested in language evolution, and his attitude had a chilling effect on the field. Attending an undergraduate linguistics class in Melbourne, Australia, in the early 1990s, I asked my lecturer how language evolved. I was told that linguists did not ask the question, because it was not really possible to answer it.

      Luckily, just a few years later, scholars from different disciplines began to grapple with the question in earnest. The early days of serious research in language evolution unearthed a perplexing paradox: Language is plainly, obviously, uniquely human. It consists of wildly complicated interconnecting sets of rules for combining sounds and words and sentences to create meaning. If other animals had a system that was the same, we would likely recognize it. The problem is that after looking for a considerable amount of time and with a wide range of methodological approaches, we cannot seem to find anything unique in ourselves—either in the human genome or in the human brain—that explains language.

      To be sure, we have found biological features that are both unique to humans and important for language. For example, humans are the only primates to have voluntary control of their larynx: it puts us at risk of choking, but it allows us to articulate speech. But the equipment that seems to be designed for language never fully explains its enormous complexity and utility.

Space for Nature

[These excerpts are from an editorial by Jonathan Baillie and Ya-Ping Zhang in the September 14, 2018, issue of Science.]

      How much of the planet should we leave for other forms of life? This is a question humanity must now grapple with. The global human population is 7.6 billion and anticipated to increase to around 10 billion by the middle of the century. Consumption is also projected to increase, with demands for food and water more than doubling by 2050. Simply put, there is finite space and energy on the planet, and we must decide how much of it we’re willing to share. This question requires deep consideration as it will determine the fate of millions of species and the health and well-being of future generations.

      About 20% of the world’s vertebrates and plants are threatened with extinction, mostly because humans have degraded or converted more than half of the terrestrial natural habitat. Moreover, we are harnessing biomass from other forms of life and converting it into crops and animals that are more useful to us. Livestock now constitute 60% of the mammalian biomass and humans another 36%. Only 4% remains for the more than 5000 species of wild mammals. This ratio is not surprising: Wild vertebrate populations have declined by more than 50% since 1970. Both from an ethical and a utilitarian viewpoint, this depletion of natural ecosystems is extremely troubling.

      Most scientific estimates of the amount of space needed to safeguard biodiversity and preserve ecosystem benefits suggest that 25 to 75% of regions or major ecosystems must be protected. But estimating how much space is required to protect current levels of biodiversity and secure existing ecosystem benefits is challenging because of limited knowledge of the number of species on this planet, poor understanding of how ecosystems function or the benefits they provide, and growing threats such as climate change. Thus, spatial targets will be associated with great uncertainty. However, targets set too low could have major negative implications for future generations and all life. Any estimate must therefore err on the side of caution.

      Current levels of protection do not even come close to the required levels. Just less than half of Earth's surface remains relatively intact, but this land tends to be much less productive. Only 3.6% of the oceans and 14.7% of land are formally protected. Many of these protected areas are “paper parks,” meaning they are not effectively managed, and one-third of the terrestrial protected lands are under intense human pressure….

      If we truly want to protect biodiversity and secure critical ecosystem benefits, the world’s governments must set a much more ambitious protected area agenda and ensure it is resourced….This will be extremely challenging, but it is possible, and anything less will likely result in a major extinction crisis and jeopardize the health and well-being of future generations.

Decoding the Puzzle of Human Consciousness

[These excerpts are from an article by Susan Blackmore in the September 2018 issue of Scientific American.]

      Might we humans be the only species on this planet to be truly conscious? Might lobsters and lions, beetles and bats be unconscious automata, responding to their worlds with no hint of conscious experience? Aristotle thought so, claiming that humans have rational souls but that other animals have only the instincts needed to survive. In medieval Christianity the “great chain of being” placed humans on a level above soulless animals and below only God and the angels. And in the 17th century French philosopher Rene Descartes argued that other animals have only reflex behaviors. Yet the more biology we learn, the more obvious it is that we share not only anatomy, physiology and genetics with other animals but also systems of vision, hearing, memory and emotional expression. Could it really be that we alone have an extra special something—this marvelous inner world of subjective experience?

      The question is hard because although your own consciousness may seem the most obvious thing in the world, it is perhaps the hardest to study. We do not even have a clear definition beyond appealing to a famous question asked by philosopher Thomas Nagel back in 1974: What is it like to be a bat? Nagel chose bats because they live such very different lives from our own. We may try to imagine what it is like to sleep upside down or to navigate the world using sonar, but does it feel like anything at all? The crux here is this: If there is nothing it is like to be a bat, we can say it is not conscious. If there is something (anything) it is like for the bat, it is conscious. So is there?

      We share a lot with bats: we, too, have ears and can imagine our arms as wings. But try to imagine being an octopus. You have eight curly, grippy, sensitive arms for getting around and catching prey but no skeleton, and so you can squeeze yourself through tiny spaces. Only a third of your neurons are in a central brain; the rest are in nerve cords, one running down each of your eight arms. Consider: Is it like something to be a whole octopus, to be its central brain or to be a single octopus arm? The science of consciousness provides no easy way of finding out.

      Even worse is the “hard problem” of consciousness: How does subjective experience arise from objective brain activity? How can physical neurons, with all their chemical and electrical communications, create the feeling of pain, the glorious red of the sunset or the taste of fine claret? This is a problem of dualism: How can mind arise from matter? Indeed, does it?...

      When lobsters or crabs are injured, are taken out of water or have a claw twisted off, they release stress hormones similar to cortisol and corticosterone. This response provides a physiological reason to believe they suffer. An even more telling demonstration is that when injured prawns limp and rub their wounds, this behavior can be reduced by giving them the same painkillers as would reduce our own pain.

      The same is true of fish. When experimenters injected the lips of rainbow trout with acetic acid, the fish rocked from side to side and rubbed their lips on the sides of their tank and on gravel, but giving them morphine reduced these reactions. When zebra fish were given a choice between a tank with gravel and plants and a barren one, they chose the interesting tank. But if they were injected with acid and the barren tank contained a painkiller, they swam to the barren tank instead. Fish pain may be simpler or in other ways different from ours, but these experiments suggest they do feel pain.

      Some people remain unconvinced. Australian biologist Brian Key argues that fish may respond as though they are in pain, but this observation does not prove they are consciously feeling anything….

The Silent, Reasonable Majority Must Be Heard

[These excerpts are from an article by Joshua P. Starr in the September 2018 issue of Phi Delta Kappan.]

      …on days when Democrats and Republicans hold their primary elections, only the most partisan of voters tend to show up at their polling places. Most people stay home, allowing their least reasonable neighbors to cast the majority of the ballots.

      That’s often how things work in school systems, too. When they have to make important decisions, most superintendents — at least, the ones I know — try to bracket off their personal beliefs and preferences in order to weigh the issues fairly and make the best choices they can. But they have to do so in the face of intense pressure from their most aggressive and opinionated constituents. Let's just say that it’s rarely the more thoughtful and fair-minded parents and community members who launch Twitter storms, light up the phones, and march into the district office to make demands on the superintendent….

      Whatever the demographics and political makeup of the loudest people in the room, the question is, How do we ensure that the quieter voices get heard, too? Most parents are reasonable — silent, but reasonable. They want a quality education for their sons and daughters. They have hopes for and concerns about their schools. They want problems to be resolved in expeditious and practical ways. But they rarely testify at board meetings, email the superintendent, or buttonhole their elected officials at the supermarket….

      System and school leaders can’t stop partisans from organizing and advocating, nor can they stop rabid bloggers or shut down toxic email chains. But they can certainly do more to reach out to a wider and more diverse set of parents and community members, rather than sitting back and waiting to see who shows up at board meetings and gets in line for the microphone…

      To be sure, school system leaders need to listen to those people who choose to speak up. But those can’t be the only voices that matter. The majority of parents may be silent much of the time, but they have just as much of a right to be heard, and their kids are just as deserving of an excellent education. We must find ways to hear what they have to say, rather than listening only to those who push and shove their way into our offices and board meetings.

It’s Time to Redefine the Federal Role in K-12 Education

[These excerpts are from an article by Jack Jennings in the September 2018 issue of Phi Delta Kappan.]

      What matters most in education? At its core, education comes down to a student, a teacher, and something to be taught and learned. Everything else (e.g., testing, accountability systems, and teacher evaluations) is secondary, having an indirect influence, at best, on what happens in the classroom. A person with the desire and readiness to learn, another person with the knowledge and skills to foster that learning, and the material to be learned. These are the fundamental elements of education (along with that additional element, money, without which public schooling cannot function), and they should be the starting points for any new federal policy agenda:

      No state has yet come close to ensuring that all young children enter school with the early math, literacy, and other skills that will allow them to succeed. A wealth of research shows that high-quality preschool programs tend to be extraordinarily effective in helping kids become ready for kindergarten, but access to preschool is woefully inadequate in most of the country, especially for children from families below the middle class. Further, the quality of existing programs is wildly uneven, and many programs lack essential components that might enable them to improve, such as well-educated teachers, adequate salaries, careful teacher supervision, and assessment tools….

      An equally pressing problem, which states have shown little ability to solve on their own, has to do with raising the quality of the teaching force, which will require efforts to improve teacher recruitment, preparation, and retention. In each of these areas, we have failed to keep pace with other developed nations….Similarly, if we were serious about teacher recruitment, quality, and retention, would we pay our teachers such meager wages? On average, teacher compensation is equivalent to about 60% of what comparably educated college graduates earn in other fields, whereas in most other developed countries, teacher pay is more or less comparable to that of college graduates….

      ...the federal government can and should (as it has done many times before) support curricular improvements in literacy, math, science, civics, language learning, and other subject areas.

      Finally, the funding of public education needs to be overhauled, but few states have shown the will or capacity to make meaningful changes, particularly when it comes to the distribution of resources among school districts….the American approach to school funding now stands out as one of the most dysfunctional systems in the world….

      If we want to make serious improvements in the areas of preschool education, teacher quality, and curriculum, then federal policy should address these things directly. And, in fact, while federal policy makers are often reluctant to fund programs that focus on teaching and curriculum — fearing that this would intrude on local control — there are a number of precedents for doing so….

      At the very least, policy advocates on all sides must recognize two basic truths about American education today: First, to ensure the future prosperity and cohesion of our nation, we must help our students achieve at higher levels than in the past; second, our schools do not currently provide all students with equal opportunities to become well educated. Given the urgency of the challenges posed, our politicians, educators, parents, business leaders, and other citizens must seek common ground on plausible solutions. We must get going, and fast.

Fake America Great Again

[These excerpts are from an article by Will Knight in the September/October 2018 issue of Technology Review.]

      These advances threaten to further blur the line between truth and fiction in politics. Already the internet accelerates and reinforces the dissemination of disinformation through fake social-media accounts. “Alternative facts” and conspiracy theories are common and widely believed. Fake news stories, aside from their possible influence on the last US presidential election, have sparked ethnic violence in Myanmar and Sri Lanka over the past year. Now imagine throwing new kinds of real-looking fake videos into the mix: politicians mouthing nonsense or ethnic insults, or getting caught behaving inappropriately on video—except it never really happened.…

      Are we about to enter an era when we can’t trust anything, even authentic-looking videos that seem to capture real “news”? How do we decide what is credible? Whom do we trust?

      Several technologies have converged to make fakery easier, and they’re readily accessible: smartphones let anyone capture video footage, and powerful computer graphics tools have become much cheaper. Add artificial-intelligence software, which allows things to be distorted, remixed, and synthesized in mind-bending new ways. AI isn’t just a better version of Photoshop or iMovie. It lets a computer learn how the world looks and sounds so it can conjure up convincing simulacra….

      There are well-established methods for identifying doctored images and video. One option is to search the web for images that might have been mashed together. A more technical solution is to look for telltale changes to a digital file, or to the pixels in an image or a video frame. An expert can search for visual inconsistencies—a shadow that shouldn’t be there, or an object that's the wrong size….

      In April, a supposed BBC news report announced the opening salvos of a nuclear conflict between Russia and NATO. The clip, which began circulating on the messaging platform WhatsApp, showed footage of missiles blasting off as a newscaster told viewers that the German city of Mainz had been destroyed along with parts of Frankfurt.

      It was, of course, entirely fake, and the BBC rushed to denounce it. The video wasn’t generated using AI, but it showed the power of fake video, and how it can spread rumors at warp speed. The proliferation of AI programs will make such videos far easier to make, and even more convincing.

      Even if we aren't fooled by fake news, it might have dire consequences for political debate. Just as we are now accustomed to questioning whether a photograph might have been Photoshopped, AI-generated fakes could make us more suspicious about events we see shared online. And this could contribute to the further erosion of rational political debate.

      In The Death of Truth, published this year, the literary critic Michiko Kakutani argues that alternative facts, fake news, and the general craziness of modern politics represent the culmination of cultural currents that stretch back decades. Kakutani sees hyperreal AI fakes as just the latest heavy blow to the concept of objective reality….

      Perhaps the greatest risk with this new technology, then, is not that it will be misused by state hackers, political saboteurs, or Anonymous, but that it will further undermine truth and objectivity itself. If you can’t tell a fake from reality, then it becomes easy to question the authenticity of anything. This already serves as a way for politicians to evade accountability.

      President Trump has turned the idea of fake news upside down by using the term to attack any media reports that criticize his administration. He has also suggested that an incriminating clip of him denigrating women, released during the 2016 campaign, might have been digitally forged. This April, the Russian government accused Britain of faking video evidence of a chemical attack in Syria to justify proposed military action. Neither accusation was true, but the possibility of sophisticated fakery is increasingly diminishing the credibility of real information. In Myanmar and Russia new legislation seeks to prohibit fake news, but in both cases the laws may simply serve as a way to crack down on criticism of the government….

      The truth will still be out there. But will you know it when you see it?

No, Big Tech Didn’t Make Us Polarized (But It Sure Helps)

[These excerpts are from an article by Adam Piore in the September/October 2018 issue of Technology Review.]

      …the most concerning problem highlighted by the 2016 election isn’t that the Russians used Twitter and Facebook to spread propaganda, or that the political consulting firm Cambridge Analytica illicitly gained access to the private information of more than 50 million Facebook users. It's that we have all, quite voluntarily, retreated into hyperpartisan virtual corners, owing in no small part to social media and internet companies that determine what we see by monitoring what we have clicked on in the past and giving us more of the same. In the process, opposing perspectives are sifted out, and we’re left with content that reinforces what we already believe.

      This is the famous “filter bubble,” a concept popularized in the 2011 book of the same name by Eli Pariser, an internet activist and founder of the viral video site Upworthy. “Ultimately, democracy works only if we citizens are capable of thinking beyond our narrow self-interest,” wrote Pariser. “But to do so, we need a shared view of the world we coinhabit. The filter bubble pushes us in the opposite direction—it creates the impression that our narrow self-interest is all that exists.”

      Or does it? The research suggests that things are not quite that simple.

      The legal scholar Cass Sunstein warned way back in 2007 that the internet was giving rise to an “era of enclaves and niches.” He cited a 2005 experiment in Colorado in which 60 Americans from conservative Colorado Springs and liberal Boulder, two cities about 100 miles apart, were assembled into small groups and asked to deliberate on three controversial issues (affirmative action, gay marriage, and an international treaty on global warming). In almost every case, people held more extreme positions after they spoke with like-minded others….

      Data from the polling firm Pew backs up the idea that polarization doesn’t come just from the internet. After the 2016 election, Pew found that 62 percent of Americans got news from social-media sites, but—in a parenthetical ignored in most articles about the study—only 18 percent said they did so “often.” A more recent Pew study found that only about 5 percent said they had “a lot” of trust in the information.

      “The internet is absolutely not the causal factor here,” says Ethan Zuckerman, who directs MIT’s Center for Civic Media….

      Lousy results such as this have led Zuckerman toward a more radical idea for countering filter bubbles: the creation of a taxpayer-funded social-media platform with a civic mission to provide a “diverse and global view of the world.”

      The early United States, he noted in an essay for the Atlantic, featured a highly partisan press tailored to very specific audiences. But publishers and editors for the most part abided by a strong cultural norm, republishing a wide range of stories from different parts of the nation and reflecting different political leanings. Public broadcasters in many democracies have also focused on providing a wide range of perspectives. It’s not realistic, Zuckerman argues, to expect the same from outlets like Facebook: their business model drives them to pander to our natural human desire to congregate with others like ourselves. A public social-media platform with a civic mission, says Zuckerman, could push unfamiliar perspectives into our feeds and push us out of our comfort zones. Scholars could review algorithms to make sure we’re seeing an unbiased representation of views. And yes, he admits, people would complain about publicly funding such a platform and question its even-handedness. But given the lack of other viable solutions, he says, it’s worth a shot.

      …if a Republican politician tells people that immigrants are moving in and changing the culture or taking locals’ jobs, or if a Democrat tells female students that Christian activists want to ban women’s rights, their words have power. Bavel’s research suggests that if you want to overcome partisan divisions, avoid the intellect and focus on the emotions….

      Maybe in the end it’s up to us to decide to expose ourselves to content from a wider array of sources, and then to engage with it. Sound unappealing? Well, consider the alternative: your latest outraged political post didn’t accomplish much, because the research shows that anyone who read it almost certainly agreed with you already.

The Road from Tahrir to Trump

[These excerpts are from an article by Zeynep Tufekci in the September/October 2018 issue of Technology Review.]

      Digital platforms allowed communities to gather and form in new ways, but they also dispersed existing communities, those that had watched the same TV news and read the same newspapers. Even living on the same street meant less when information was disseminated through algorithms designed to maximize revenue by keeping people glued to screens. It was a shift from a public, collective politics to a more private, scattered one, with political actors collecting more and more personal data to figure out how to push just the right buttons, person by person and out of sight….

      The US National Security Agency had an arsenal of hacking tools based on vulnerabilities in digital technologies—bugs, secret backdoors, exploits, shortcuts in the (very advanced) math, and massive computing power. These tools were dubbed “nobody but us” (or NOBUS, in the acronym-loving intelligence community), meaning no one else could exploit them, so there was no need to patch the vulnerabilities or make computer security stronger in general. The NSA seemed to believe that weak security online hurt its adversaries a lot more than it hurt the NSA.

      That confidence didn't seem unjustified to many. After all, the internet is mostly an American creation; its biggest companies were founded in the United States. Computer scientists from around the world still flock to the country, hoping to work for Silicon Valley. And the NSA has a giant budget and, reportedly, thousands of the world's best hackers and mathematicians….

      There doesn’t seem to have been a major realization within the US's institutions—its intelligence agencies, its bureaucracy, its electoral machinery—that true digital security required both better technical infrastructure and better public awareness about the risks of hacking, meddling, misinformation, and more. The US's corporate dominance and its technical wizardry in some areas seemed to have blinded the country to the brewing weaknesses in other, more consequential ones….

      Though smaller than Facebook and Google, Twitter played an outsize role thanks to its popularity among journalists and politically engaged people. Its open philosophy and easygoing approach to pseudonyms suit rebels around the world, but they also appeal to anonymous trolls who hurl abuse at women, dissidents, and minorities. Only earlier this year did it crack down on the use of bot accounts that trolls used to automate and amplify abusive tweeting.

      Twitter’s pithy, rapid-fire format also suits anyone with a professional or instinctual understanding of attention, the crucial resource of the digital economy.

      Say, someone like a reality TV star. Someone with an uncanny ability to come up with belittling, viral nicknames for his opponents, and to make boastful promises that resonated with a realignment in American politics—a realignment mostly missed by both Republican and Democratic power brokers.

      Donald Trump, as is widely acknowledged, excels at using Twitter to capture attention. But his campaign also excelled at using Facebook as it was designed to be used by advertisers, testing messages on hundreds of thousands of people and microtargeting them with the ones that worked best. Facebook had embedded its own employees within the Trump campaign to help it use the platform effectively (and thus spend a lot of money on it), but they were also impressed by how well Trump himself performed. In later internal memos, reportedly, Facebook would dub the Trump campaign an “innovator” that it might learn from. Facebook also offered its services to Hillary Clinton’s campaign, but her campaign chose to use them much less than Trump’s did.

      …they started posting materials aimed at fomenting polarization. The Russian trolls posed as American Muslims with terrorist sympathies and as white supremacists who opposed immigration. They posed as Black Lives Matter activists exposing police brutality and as people who wanted to acquire guns to shoot police officers. In so doing, they not only fanned the flames of division but provided those in each group with evidence that their imagined opponents were indeed as horrible as they suspected. These trolls also incessantly harassed journalists and Clinton supporters online, resulting in a flurry of news stories about the topic and fueling a (self-fulfilling) narrative of polarization among the Democrats….

      Second, the new, algorithmic gatekeepers aren’t merely (as they like to believe) neutral conduits for both truth and falsehood. They make their money by keeping people on their sites and apps; that aligns their incentives closely with those who stoke outrage, spread misinformation, and appeal to people’s existing biases and preferences….

      Third, the loss of gatekeepers has been especially severe in local journalism. While some big US media outlets have managed (so far) to survive the upheaval wrought by the internet, this upending has almost completely broken local newspapers, and it has hurt the industry in many other countries. That has opened fertile ground for misinformation. It has also meant less investigation of and accountability for those who exercise power, especially at the local level. The Russian operatives who created fake local media brands across the US either understood the hunger for local news or just lucked into this strategy. Without local checks and balances, local corruption grows and trickles up to feed a global corruption wave playing a major part in many of the current political crises….

      This is also how Russian operatives fueled polarization in the United States, posing simultaneously as immigrants and white supremacists, angry Trump supporters and “Bernie bros.” The content of the argument didn't matter; they were looking to paralyze and polarize rather than convince. Without old-style gatekeepers in the way, their messages could reach anyone, and with digital analytics at their fingertips, they could hone those messages just like any advertiser or political campaign….

      Ubiquitous digital surveillance should simply end in its current form. There is no justifiable reason to allow so many companies to accumulate so much data on so many people. Inviting users to “click here to agree” to vague, hard-to-pin-down terms of use doesn’t produce “informed consent.” If, two or three decades ago, before we sleepwalked into this world, a corporation had suggested so much reckless data collection as a business model, we would have been horrified.

      There are many ways to operate digital services without siphoning up so much personal data. Advertisers have lived without it before, they can do so again, and it's probably better if politicians can’t do it so easily. Ads can be attached to content, rather than directed to people….

      But we didn’t get where we are simply because of digital technologies. The Russian government may have used online platforms to remotely meddle in US elections, but Russia did not create the conditions of social distrust, weak institutions, and detached elites that made the US vulnerable to that kind of meddling.

      Russia did not make the US (and its allies) initiate and then terribly mishandle a major war in the Middle East, the after-effects of which—among them the current refugee crisis—are still wreaking havoc, and for which practically nobody has been held responsible. Russia did not create the 2008 financial collapse: that happened through corrupt practices that greatly enriched financial institutions, after which all the culpable parties walked away unscathed, often even richer, while millions of Americans lost their jobs and were unable to replace them with equally good ones.

      Russia did not instigate the moves that have reduced Americans’ trust in health authorities, environmental agencies, and other regulators. Russia did not create the revolving door between Congress and the lobbying firms that employ ex-politicians at handsome salaries. Russia did not defund higher education in the United States. Russia did not create the global network of tax havens in which big corporations and the rich can pile up enormous wealth while basic government services get cut.

      These are the fault lines along which a few memes can play an outsize role. And not just Russian memes: whatever Russia may have done, domestic actors in the United States and Western Europe have been eager, and much bigger, participants in using digital platforms to spread viral misinformation.

      Even the free-for-all environment in which these digital platforms have operated for so long can be seen as a symptom of the broader problem, a world in which the powerful have few restraints on their actions while everyone else gets squeezed. Real wages in the US and Europe are stuck and have been for decades while corporate profits have stayed high and taxes on the rich have fallen. Young people juggle multiple, often mediocre jobs, yet find it increasingly hard to take the traditional wealth-building step of buying their own home—unless they already come from privilege and inherit large sums.

      If digital connectivity provided the spark, it ignited because the kindling was already everywhere. The way forward is not to cultivate nostalgia for the old-world information gatekeepers or for the idealism of the Arab Spring. It’s to figure out how our institutions, our checks and balances, and our societal safeguards should function in the 21st century—not just for digital technologies but for politics and the economy in general. This responsibility isn’t on Russia, or solely on Facebook or Google or Twitter. It’s on us.

Controlling Cholera with Microbes

[These excerpts are from an article by Anne Trafton in the September/October 2018 issue of MIT News.]

      MIT engineers have developed a mix of natural and engineered bacteria designed to diagnose and treat cholera, an intestinal infection that causes severe dehydration.

      Cholera outbreaks are usually caused by contaminated drinking water, and infections can be fatal if not treated. The most common treatment is rehydration, which must be done intravenously if the patient is extremely dehydrated. However, intravenous treatment is not always available, and the disease kills an estimated 95,000 people per year.

      The MIT team’s new probiotic mix could be consumed regularly as a preventive measure in regions where cholera is common, but it could also be used to treat people soon after infection occurs….

      The researchers chose Lactococcus lactis, a strain of bacteria used to make cheese, and engineered into it a genetic circuit that detects a molecule produced by Vibrio cholerae, the microbe that causes cholera. When engineered L. lactis encounters this molecule, it turns on an enzyme that produces a red color detectable in stool samples.

      Serendipitously, while working on a way to further engineer L. lactis so that it could treat infections, the researchers discovered that the unmodified bacterium can kill cholera microbes by producing lactic acid. This natural metabolic by-product makes the gastrointestinal environment more acidic, inhibiting the growth of V. cholerae.

      In tests in mice, the researchers found that a mixture of engineered and unmodified L. lactis could successfully prevent cholera from developing and could also treat existing infections. The MIT team is now exploring the possibility of using this approach to combat other microbes.

Smog Patrol

[These excerpts are from an article by Peter Dizikes in the September/October 2018 issue of MIT News.]

      The Chinese government has implemented measures to clean up the air pollution that has smothered its cities for decades. Are they effective? A study coauthored by an MIT scholar shows that one key antipollution law is indeed working—but unevenly.

      The study examines a Chinese law, in effect since 2014, requiring coal-fired power plants to significantly reduce emissions of sulfur dioxide, a pollutant associated with respiratory illnesses. Overall, the concentration of these emissions at coal power plants fell by 13.9 percent.

      However, the law also called for greater emissions reductions in more heavily polluted and more populous regions, and those “key” regions are precisely where plants may have been least compliant. Only 50 percent reported meeting the new standard, and remote data—as opposed to readings supplied by the plants themselves—suggests that the results were even worse….

      Data from the two monitoring systems corresponded closely in “non-key” regions, where the maximum allowable concentration of sulfur dioxide was lowered from 400 to 200 milligrams per cubic meter. But in the key regions, where the limit was placed at 50 milligrams per cubic meter, the research found no evidence of correspondence.

      That tougher standard may have been harder for power plants to meet….
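
      The kind of cross-check described above can be pictured with a short script (the numbers below are made up for illustration, not the study’s data): pair each plant’s self-reported sulfur dioxide concentration with an independent remote reading, then compare the two series within each region type.

# Hypothetical sketch of the monitoring cross-check: compare plant-reported
# SO2 concentrations (mg per cubic meter) with independent remote readings,
# grouped by region type. All values are invented for illustration.
from statistics import correlation  # available in Python 3.10+

readings = {
    # region type -> list of (plant-reported, remote) pairs
    "non-key": [(180, 175), (150, 160), (210, 205), (190, 188)],
    "key":     [(45, 120), (48, 60), (49, 150), (50, 90)],
}

for region, pairs in readings.items():
    reported = [r for r, _ in pairs]
    remote = [m for _, m in pairs]
    print(region, round(correlation(reported, remote), 2))

# A high correlation in "non-key" regions alongside little or none in "key"
# regions would match the pattern the study reports.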

Can a Transgenic Chestnut Restore a Forest Icon?

[These excerpts are from an article by Gabriel Popkin in the August 31, 2018, issue of Science.]

      Two deer-fenced plots here contain some of the world's most highly regulated trees. Each summer researchers double-bag every flower the trees produce. One bag, made of breathable plastic, keeps them from spreading pollen. The second, an aluminum mesh screen added a few weeks later, prevents squirrels from stealing the spiky green fruits that emerge from pollinated flowers. The researchers report their every move to regulators with the U.S. Department of Agriculture….

      These American chestnut trees (Castanea dentata) are under such tight security because they are genetically modified (GM) organisms, engineered to resist a deadly blight that has all but erased the once widespread species from North American forests….

      If the regulators approve the request, it would be “precedent setting”—the first use of a GM tree to try to restore a native species in North America…

      American chestnuts, towering 30 meters or more, once dominated forests throughout the Appalachian Mountains. But in the early 1900s, a fungal infection appeared on trees at the Bronx Zoo in New York City, and then spread rapidly. The so-called chestnut blight—an accidental import from Asia—releases a toxin that girdles trees and kills everything above the infection site, though still-living roots sometimes send up new shoots. By midcentury, large American chestnuts had all but disappeared.

Revolutionary Technologies

[This excerpt is from an editorial by Jeremy Berg in the August 31, 2018, issue of Science.]

      …New technology is one of the most powerful drivers of scientific progress. For example, the earliest microscopes magnified images only 50-fold at most. When the Dutch fabric merchant and amateur scientist Antonie van Leeuwenhoek developed microscopes with more than 200-fold magnifications (likely to examine cloth), he used them to study many items, including pond water and plaque from teeth. His observations of “animalcules” led to fundamental discoveries in microbiology and cell biology, and spurred the elaboration of improved microscopes. Today, various light microscopes remain prime tools in modern biology. This example embodies two characteristics of a revolutionary technology: a capability for addressing questions better than extant technologies, and the possibility of being utilized and adapted by many other investigators.

      The discovery of x-rays in 1895 ushered in a multifaceted revolution in imaging. As scientists sought to understand the nature of these electromagnetic waves, they realized that they were diffracted by crystals, establishing that the wavelengths of x-rays were comparable to the separation between atoms in crystals. In 1913, William Henry Bragg and his son William Lawrence Bragg found that diffraction patterns could be interpreted to reveal the arrangement of atoms in a crystal. The Braggs determined the structures of many simple substances, including table salt and diamond. Others began using similar techniques to reveal more complex structures of inorganic and organic compounds. In the late 1950s, these methods were extended to determine the structure of proteins, and eventually to larger proteins and protein complexes. Thousands of structures are now reported each year and are foundational to our understanding of biochemistry and cell biology. Technical innovations, improved commercial and shared-facility instrumentation, and powerful software continue to drive the x-ray crystallography revolution.
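
      The physical relationship underlying the Braggs’ interpretation (a standard result, stated here for reference; it is not spelled out in the editorial) is Bragg’s law: x-rays reflected from successive planes of atoms interfere constructively when the extra path length is a whole number of wavelengths,

n\lambda = 2d\sin\theta, \qquad n = 1, 2, 3, \ldots

where \lambda is the x-ray wavelength, d is the spacing between atomic planes, and \theta is the angle between the incident beam and the planes. Measuring the angles at which diffraction peaks appear thus reveals the spacings d, and from many such spacings the arrangement of atoms can be reconstructed.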

Voting for the Earth

[These excerpts are from an article by Erich Pica in the Summer 2018 issue of the Friends of the Earth Newsmagazine.]

      In poll after poll, Americans from across the political spectrum stress the importance of clean air, pristine water, protected wildlife, healthy food, preserved forests and prairies, and a stable climate.

      In one recent Gallup poll, 62 percent of Americans said they believe the government needs to do more for the environment and 76 percent would like to see a greater investment in alternative energy sources like solar and wind power.

      You’d never know the U.S. had such a consensus on the environment. Mainstream media coverage of environmental issues is spotty at best. National politicians on both sides of the aisle do little more than pay lip service to environmental issues…

      When environmentalists vote, it forces politicians to pay attention to environmental issues and act on them...or face losing their jobs.

      Unfortunately, too many people who care about the environment have been staying home on election day, giving politicians a free pass on issues affecting our planet.

      We simply can’t afford to let climate change and the environment fall to the bottom of the list any longer.

      We’re at a make-or-break point for our planet. We’ve endured several of history's warmest years in the last decade. The Arctic just experienced its warmest winter on record, with temperatures up to 36 degrees Fahrenheit higher than average. Wildfire and hurricane seasons have been more severe.

      Scientists estimate that we need to cut greenhouse gas emissions dramatically by 2050—to 80 percent below levels we saw in 1990— if we want to save ourselves from catastrophe.

      In short, we need to vote like our lives and our planet depend on it...because they do.

Putting Sleep Myths to Bed

[These excerpts are from an article by Adrian Woolfson in the August 24, 2018, issue of Science.]

      In humans, the master timekeeper underwriting the architecture of sleep is a small region buried deep within the brain, known as the suprachiasmatic nucleus. The pacemaker activity of this biological timepiece is controlled by several genes, which have names like Period, Clock, and Timeless. Mutations in these can transform us from night owls into morning larks or something in between. Fortunately, errant wanderings are typically adjusted by “zeitgebers”—literally, “time-givers”—principally in the form of blue light.

      In Nodding Off, [Alice] Gregory makes the point that although sleep studies typically focus on individuals, sleep is not always a solitary pastime and may be adversely affected by our choice of bedmate. Indeed, sleep researchers are now attempting to address the reality that adults often do not sleep in isolation.

      The structure of sleep also changes across an individual’s lifetime. The sleep patterns of young children, for example, are profoundly influenced by their belief systems. Gregory cautions parents against sending misbehaving children to bed early because this association between sleep and punishment may inadvertently condemn them to years of insomnia.

      Similarly, she explains why the Sisyphean struggle to force teenagers to wake up early is invariably destined to fail. Their pattern of melatonin release differs from that of adults and children, and the recapitulation of this phenomenon in other mammals suggests that it has been hard-wired by evolution.

      While extolling the virtues of sleep and its fundamental importance to our health, Gregory reveals some interesting tidbits, including the fact that dolphins sleep with just half of their brain at a time and that male armadillos have erections during non-REM sleep, unlike their human counterparts, who experience this phenomenon only during REM (rapid eye movement) sleep. The heterogeneity of sleep across different species indicates not only its essential function but also how it may be modified to perform different functions.

      Although the precise function of sleep remains enigmatic, poor-quality sleep and sleep deprivation may have a profound impact on our health….

      It is noteworthy, and apparently contrary to the central thesis of these two tomes, that genius has sometimes emerged within the context of abnormal sleep patterns. The serial micronapper Leonardo da Vinci’s remarkable canon of work, for example, was achieved on a sleeping pattern comprising naps of just 15 minutes taken every 4 hours….

Back to the Blackboard

[These excerpts are from an article by Jill Lepore in the September 10, 2018, issue of The New Yorker.]

      Is education a fundamental right? The Constitution, drafted in the summer of 1787, does not mention a right to education, but the Northwest Ordinance, passed by Congress that same summer, held that “religion, morality, and knowledge, being necessary to good government and the happiness of mankind, schools and the means of education shall forever be encouraged.” By 1868 the constitutions of twenty-eight of the thirty-two states in the Union had provided for free public education, open to all. Texas, in its 1869 constitution, provided for free public schooling for “all the inhabitants of this State,” a provision that was revised to exclude undocumented immigrants only in 1975.

      [Judge] Justice skirted the questions of whether education is a fundamental right and whether undocumented immigrants are a suspect class. Instead of applying the standard of “strict scrutiny” to the Texas law, he applied the lowest level of scrutiny to the law, which is known as the “rational basis test.” He decided that the Texas law failed this test. The State of Texas had argued that the law was rational because undocumented children are expensive to educate—they often require bilingual education, free meals, and even free clothing. But, Justice noted, so are other children, including native-born children, and children who have immigrated legally, and their families are not asked to bear the cost of their special education. As to why Texas had even passed such a law, he had two explanations, both cynical: “Children of illegal aliens had never been explicitly afforded any judicial protection, and little political uproar was likely to be raised in their behalf.”

      In September, 1978, Justice ruled in favor of the children. Not long afterward, a small bouquet arrived at his house, sent by three Mexican workers. Then came the hate mail. A man from Lubbock wrote, on the back of a postcard, “Why in the hell don’t you illegally move to mexico?”

      …Later, [Supreme Court Justice] Marshall came back at him, asking, “Could Texas pass a law denying admission to the schools of children of convicts?” Hardy said that they could, but that it wouldn’t be constitutional. Marshall's reply: “We are dealing with children. I mean, here is a child that is the son of a murderer, but he can go to school, but the child that is the son of an unfortunate alien cannot?”

Ancient DNA Reveals Tryst between Extinct Human Species

[These excerpts are from an article by Gretchen Vogel in the August 24, 2018, issue of Science.]

      The woman may have been just a teenager when she died more than 50,000 years ago, too young to have left much of a mark on her world. But a piece of one of her bones, unearthed in a cave in Russia's Denisova valley in 2012, may make her famous. Enough ancient DNA lingered within the 2-centimeter fragment to reveal her startling ancestry: She was the direct offspring of two different species of ancient humans—neither of them ours….her mother was Neanderthal and her father was Denisovan, the mysterious group of ancient humans discovered in the same Siberian cave in 2011. It is the most direct evidence yet that various ancient humans mated with each other and had offspring.

      Based on other ancient genomes, researchers already had concluded that Denisovans, Neanderthals, and modern humans interbred in ice age Europe and Asia. The genes of both archaic human species are present in many people today. Other fossils found in the Siberian cave have shown that all three species lived there at different times….

      …the proportion of genes in which her chromosome pairs harbored different variants—so-called heterozygous alleles—was close to 50% across all chromosomes, suggesting the maternal and paternal chromosomes came directly from different groups. And her mitochondrial DNA, which is inherited maternally, was uniformly Neanderthal, so the researchers concluded she was a first-generation hybrid of a Denisovan man and Neanderthal woman….
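
      The logic of that inference can be sketched with a few lines of code (the genotype records below are invented, not the study’s data): for each chromosome, tally the fraction of informative sites where the two alleles trace to different archaic groups.

# Sketch of the heterozygosity tally described above; "N" marks a
# Neanderthal-like allele and "D" a Denisovan-like one. Records are
# hypothetical, for illustration only.
from collections import defaultdict

# (chromosome, allele 1 origin, allele 2 origin)
sites = [
    ("chr1", "N", "D"), ("chr1", "N", "N"), ("chr1", "D", "N"),
    ("chr2", "N", "D"), ("chr2", "D", "D"), ("chr2", "N", "D"),
]

counts = defaultdict(lambda: [0, 0])  # chromosome -> [heterozygous, total]
for chrom, a1, a2 in sites:
    counts[chrom][1] += 1
    if a1 != a2:
        counts[chrom][0] += 1

for chrom, (het, total) in sorted(counts.items()):
    print(chrom, round(het / total, 2))

# A fraction near 0.5 on every chromosome, together with uniformly
# Neanderthal mitochondrial DNA, is what the researchers read as evidence
# of a first-generation hybrid.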

Retreat on Economics at the EPA

[These excerpts are from an editorial by Kevin Boyle and Matthew Kotchen in the August 24, 2018, issue of Science.]

      Rigorous economic analysis has long been recognized as essential for sound, defensible decision-making by government agencies whose regulations affect human health and the environment. The acting administrator (since July 2018) of the U.S. Environmental Protection Agency (EPA) has emphasized the importance of transparency and public trust. These laudable goals are enhanced by external scientific review of the EPA’s analytical procedures. Yet, in June 2018, the EPA’s Science Advisory Board (SAB) eliminated its Environmental Economics Advisory Committee (EEAC). The agency should be calling for more—not less—external advice on economics, given the Trump administration’s promotion of economic analyses that push the boundaries of well-established best practices. The pattern is clear: When environmental regulations are expected to provide substantial public benefits, assumptions are made to substantially diminish their valuations.

      …today, many economic analyses that support the Trump administration’s regulatory rollbacks conflict with the EPA's previous findings. The 2017 analysis for eliminating the Waters of the United States rule turned favorable only after excluding all benefits of protecting wetlands. Eliminating the Clean Power Plan is supported in another 2017 analysis only after changing assumptions about the scope of climate damages, the measurement of health effects, and the impact on future generations. Differing assumptions also underlie the economic justification of the administration’s 2018 proposal to roll back automotive fuel economy standards.

Inside Our Heads

[These excerpts are from an article by Thomas Suddendorf in the September 2018 issue of Scientific American.]

      …It also turns out that animal and human cognition, though similar in many respects, differ in two profound dimensions. One is the ability to form nested scenarios, an inner theater of the mind that allows us to envision and mentally manipulate many possible situations and anticipate different outcomes. The second is our drive to exchange our thoughts with others. Taken together, the emergence of these two characteristics transformed the human mind and set us on a world-changing path.

      … animal studies have not met similarly stringent criteria for establishing foresight, nor have they demonstrated deliberate practice. Does this mean we should conclude that animals do not have the relevant capacities at all? That would be premature. Absence of evidence is not evidence of absence, as the saying goes. Establishing competence in animals is difficult; establishing the absence of competence is even harder.

      Consider the following study, in which my colleague Jon Redshaw of the University of Queensland in Australia and I tried to assess one of the most fundamental aspects of thinking about the future: the recognition that it is largely uncertain. When one realizes that events may unfold in more than one way, it makes sense to prepare for various possibilities and to make contingency plans. Human hunters demonstrate this when they lay a trap in front of all their prey’s potential escape routes rather than just in front of one. Our simple test of this capacity was to show a group of chimpanzees and orangutans a vertical tube and drop a reward at the top so they could catch it at the bottom. We compared the apes’ performance with that of a group of human children aged two to four doing the same thing. Both groups readily anticipated that the reward would reappear at the bottom of the tube: they held their hand under the exit to prepare for the catch.

      Next, however, we made events a little harder to predict. The straight tube was replaced by an upside-down Y-shaped tube that had two exits. In preparation for the drop, the apes and the two-year-old children alike tended to cover only one of the potential exits and thus ended up catching the reward in only half of the trials. But four-year-olds immediately and consistently covered both exits with their hands, thus demonstrating the capacity to prepare for at least two mutually exclusive versions of an imminent future event. Between ages two and four, we could see this contingency planning increase in frequency. We saw no such ability among the apes.

      This experiment does not prove, however, that apes and two-year-old humans have no understanding that the future can unfold in distinct ways. As I mentioned, there is a fundamental problem when it comes to showing the absence of a capacity. Perhaps the animals were not motivated, did not understand the basic task or could not coordinate two hands. Or maybe we simply tested the wrong individuals, and more competent animals might be able to pass.

      To truly prove this ability is absent, a scientist would have to test all animals, at all times, on some fool-proof task. Clearly, that is not practical. All we can do is give individuals the chance to demonstrate competence. If they consistently fail, we can become more confident that they really do not have the capacity in question, but even then, future work may prove that wrong….

      The earliest evidence of deliberate practice is more than a million years old. The Acheulean stone tools of Homo erectus some 1.8 million years ago already suggest considerable foresight, as they appeared to have been carried from one place to another for repeated use. Crafting these tools requires considerable knowledge about rocks and how to work them. At some sites, such as Olorgesailie in Kenya, the ground is still littered with shaped stones, raising the question of why our ancestors kept making more tools when there were plenty lying around. The answer is that they were probably practicing how to manufacture those tools. Once they were proficient, they could wander the plains knowing they could make a new tool if the old one broke. These ancestors were armed and ready to reload….

      None of this is an excuse for arrogance. It is, in fact, a call for care. We are the only creatures on this planet with these abilities. As Spider-Man’s Uncle Ben declared, communicating complex ideas in an urge to connect with his superhero nephew, “With great power comes great responsibility.”

An Evolved Uniqueness

[These excerpts are from an article by Kevin Laland in the September 2018 issue of Scientific American.]

      Most people on this planet blithely assume, largely without any valid scientific rationale, that humans are special creatures, distinct from other animals. Curiously, the scientists best qualified to evaluate this claim have often appeared reticent to acknowledge the uniqueness of Homo sapiens, perhaps for fear of reinforcing the idea of human exceptionalism put forward in religious doctrines. Yet hard scientific data have been amassed across fields ranging from ecology to cognitive psychology affirming that humans truly are a remarkable species.

      The density of human populations far exceeds what would be typical for an animal of our size. We live across an extraordinary geographical range and control unprecedented flows of energy and matter: our global impact is beyond question. When one also considers our intelligence, powers of communication, capacity for knowledge acquisition and sharing—along with the magnificent works of art, architecture and music we create—humans genuinely do stand out as a very different kind of animal. Our culture seems to separate us from the rest of nature, and yet that culture, too, must be a product of evolution.

      The challenge of providing a satisfactory scientific explanation for the evolution of our species’ cognitive abilities and their expression in our culture is what I call “Darwin’s Unfinished Symphony.” That is because Charles Darwin began the investigation of these topics some 150 years ago, but as he himself confessed, his understanding of how we evolved these attributes was, in his own words, “imperfect” and “fragmentary.” Fortunately, other scientists have taken up the baton, and there is an increasing feeling among those of us who conduct research in this field that we are closing in on an answer.

      The emerging consensus is that humanity’s accomplishments derive from an ability to acquire knowledge and skills from other people. Individuals then build iteratively on that reservoir of pooled knowledge over long periods. This communal store of experience enables creation of ever more efficient and diverse solutions to life’s challenges. It was not our large brains, intelligence or language that gave us culture but rather our culture that gave us large brains, intelligence and language. For our species and perhaps a small number of other species, too, culture transformed the evolutionary process.

      The term “culture” may call to mind fashion or haute cuisine, but boiled down to its scientific essence, culture comprises behavior patterns shared by members of a community that rely on socially transmitted information. Whether we consider automobile designs, popular music styles, scientific theories or the foraging of small-scale societies, all evolve through endless rounds of innovations that add incremental refinements to an initial baseline of knowledge. Perpetual, relentless copying and innovation—that is the secret of our species’ success….

      Animals also “innovate.” When prompted to name an innovation, we might think of the invention of penicillin by Alexander Fleming or the construction of the World Wide Web by Tim Berners-Lee. The animal equivalents are no less fascinating. My favorite concerns a young chimpanzee called Mike, whom primatologist Jane Goodall observed devising a noisy dominance display that involved banging two empty kerosene cans together. This exhibition thoroughly intimidated Mike’s rivals and led to him shooting up the social rankings to become alpha male in record time. Then there is the invention by Japanese carrion crows of using cars to crack open nuts. Walnut shells are too tough for crows to crack in their beaks, but they nonetheless feed on these nuts by placing them in the road for cars to run over, returning to retrieve their treats when the lights turn red. And a group of starlings—birds famously fond of shiny objects used as nest decorations—started raiding a coin machine at a car wash in Fredericksburg, Va., and made off with, quite literally, hundreds of dollars in quarters….

      Such stories are more than just enchanting snippets of natural history. Comparative analyses reveal intriguing patterns in the social learning and innovation exhibited by animals. The most significant of these discoveries finds that innovative species, as well as animals most reliant on copying, possess unusually large brains (both in absolute terms and relative to body size). The correlation between rates of innovation and brain size was initially observed in birds, but researchers at the University of St. Andrews in Scotland have since replicated this finding in primates….

      …The results suggested that natural selection does not favor more and more social learning but rather a tendency toward better and better social learning. Animals do not need a big brain to copy, but they do need a big brain to copy well.

      …Those primates that excel at social learning and innovation are the same species that have the most diverse diets, use tools and extractive foraging, and exhibit the most complex social behavior. In fact, statistical analyses suggest that these abilities vary in lockstep so tightly that one can align primates along a single dimension of general cognitive performance, which we call primate intelligence (loosely analogous to IQ in humans).

      Chimpanzees and orangutans excel in all these performance measures and have high primate intelligence, whereas some nocturnal prosimians are poor at most of them and have a lower metric. The strong correlations between primate intelligence and both brain size measures and performance in laboratory tests of learning and cognition validate the use of the metric as a measure of intelligence….

      Cultural drive is not the only cause of primate brain evolution: diet and sociality are also important because fruit-eating primates and those living in large, complex groups possess large brains. It is difficult, however, to escape the conclusion that high intelligence and longer lives co-evolved in some primates because their cultural capabilities allowed them to exploit high-quality but difficult-to-access food resources, with the nutrients gleaned “paying” for brain growth. Brains are energetically costly organs, and social learning is paramount to animals gathering the resources necessary to grow and maintain a large brain efficiently.

      …Why haven’t chimpanzees sequenced genomes or built space rockets? Mathematical theory has provided some answers. The secret comes down to the fidelity of information transmission from one member of a species to another, the accuracy with which learned information passes between transmitter and receiver. The size of a species’ cultural repertoire and how long cultural traits persist in a population both increase exponentially with transmission fidelity. Above a certain threshold, culture begins to ratchet up in complexity and diversity. Without accurate transmission, cumulative culture is impossible. But once a given threshold is surpassed, even modest amounts of novel invention and refinement lead rapidly to massive cultural change. Humans are the only living species to have passed this threshold.
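
      A toy calculation (a deliberate simplification, not the mathematical theory the article cites) shows why fidelity matters so much. Suppose each trait survives a generation of copying with probability f and u new traits are invented per generation; the repertoire then settles where losses balance inventions, at roughly u / (1 - f), which explodes as f approaches 1.

# Toy model of cumulative culture: each trait survives one generation of
# copying with probability f, and u new traits arise per generation. At
# steady state N = f * N + u, so N = u / (1 - f). An illustrative
# simplification, not the cited theory.
def steady_state_repertoire(u: float, f: float) -> float:
    return u / (1.0 - f)

for f in (0.5, 0.9, 0.99, 0.999):
    print(f, steady_state_repertoire(1.0, f))

# 0.5 -> 2 traits, 0.9 -> 10, 0.99 -> 100, 0.999 -> 1000: modest gains in
# transmission fidelity yield enormous gains in repertoire, the threshold
# effect described above.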

      Our ancestors achieved high-fidelity transmission through teaching—behavior that functions to facilitate a pupil's learning. Whereas copying is widespread in nature, teaching is rare, and yet teaching is universal in human societies once the many subtle forms this practice takes are recognized….

      …Agriculture freed societies from the constraints that the peripatetic lives of hunter-gatherers imposed on population size and any inclinations to create new technologies. In the absence of this constraint, agricultural societies flourished, both because they outgrew hunter-gatherer communities by increasing the carrying capacity of a given area for food production and because agriculture triggered a raft of associated innovations that dramatically changed human society. In the larger societies supported by increasing farming yields, beneficial innovations were more likely to spread and be retained. Agriculture precipitated a revolution not only by triggering the invention of related technologies—ploughs or irrigation technology, among others—but also by spawning entirely unanticipated initiatives, such as the wheel, city-states and religions.

      The emerging picture of human cognitive evolution suggests that we are largely creatures of our own making. The distinctive features of humanity—our intelligence, creativity, language, as well as our ecological and demographic success—are either evolutionary adaptations to our ancestors' own cultural activities or direct consequences of those adaptations. For our species' evolution, cultural inheritance appears every bit as important as genetic inheritance.

      We tend to think of evolution through natural selection as a process in which changes in the external environment, such as predators, climate or disease, trigger evolutionary refinements in an organism’s traits. Yet the human mind did not evolve in this straightforward way. Rather our mental abilities arose through a convoluted, reciprocal process in which our ancestors constantly constructed niches (aspects of their physical and social environments) that fed back to impose selection on their bodies and minds, in endless cycles….But our ability to think, learn, communicate and control our environment makes humanity genuinely different from all other animals.

The So-Called Right to Try

[These excerpts are from an article by Claudia Wallis in the September 2018 issue of Scientific American.]

      There’s no question about it: the new law sounds just great. President Donald Trump, who knows a thing or two about marketing, gushed about its name when he signed the “Right to Try” bill into law on May 30. He was surrounded by patients with incurable diseases, including a second grader with Duchenne muscular dystrophy, who got up from his small wheelchair to hug the president. The law aims to give such patients easier access to experimental drugs by bypassing the Food and Drug Administration.

      The crowd-pleasing name and concept are why 40 states had already passed similar laws, although they were largely symbolic until the federal government got onboard. The laws vary but generally say that dying patients may seek from drugmakers any medicine that has passed a phase I trial—a minimal test of safety. “We’re going to be saving tremendous numbers of lives,” Trump said. “The current FDA approval process can take many, many years. For countless patients, time is not what they have.”

      But the new law won’t do what the president claims. Instead it gives false hope to the most vulnerable patients….

      In fact, for decades pharmaceutical companies have made unapproved drugs available through programs overseen by the FDA. This “expanded access” is aimed at extremely ill patients who, for one reason or another, do not qualify for formal drug studies. A 2016 report shows that the FDA receives more than 1,000 annual requests on behalf of such patients and approves 99.7 percent of them. It acts immediately in emergency cases or else within days, according to FDA commissioner Scott Gottlieb.

      Of course, there are barriers to getting medicines that may not be effective or safe. Some patients cannot find a doctor to administer them or an institution that will let them be used on-site. And many of these drugs are simply not made available. Drugmakers cannot be compelled to do so: a 2007 federal court decision found “there is no fundamental right... of access to experimental drugs for the terminally ill.” The new law changes none of this….

      Unethical companies, however, may find fresh opportunities to prey on desperate patients under the new law. It releases doctors, hospitals and drugmakers from liability. And although it stipulates that manufacturers can charge patients only what it costs to provide the drug, there is no required preapproval of these charges by the FDA, as there is with expanded access. Such issues led dozens of major patient-advocacy groups to oppose the legislation, which was originally drafted and promoted by the Goldwater Institute, a libertarian think tank.

Oil in Your Wine

[These excerpts are from an article by Lucas Laursen in the September 2018 issue of Scientific American.]

      Every great bottle of wine begins with a humble fungal infection. Historically, wine-makers relied on naturally occurring yeasts to convert grape sugars into alcohol; modern vintners typically buy one of just a few laboratory-grown strains. Now, to set their products apart, some of the best winemakers are revisiting nature’s lesser-used microbial engineers. Not all these strains can withstand industrial production processes and retain their efficacy—but a natural additive offers a possible solution, new research suggests.

      Industrial growers produce yeast in the presence of oxygen, which can damage cell walls and other important proteins during a process called oxidation. This can make it harder for yeasts—which are dehydrated for shipping—to perform when winemakers revive them. Biochemist Emilia Matallana of the University of Valencia in Spain and her colleagues have been exploring practical ways to fend off such oxidation for years. After showing that pure antioxidants worked, they began searching for a more affordable natural source. They found it in argan, an olivelike fruit used for food and cosmetics. The trees it grows on are famously frequented by domesticated goats.

      Matallana and her team treated three varieties of wine yeast (Saccharomyces cerevisiae) with argan oil, dehydrated them and later rehydrated them. The oil protected important proteins in the yeasts from oxidation and boosted wine fermentation….

      Microbiologists are now interested in studying how and why each yeast strain responded to the argan oil as it did….The oil may one day enable vintners to use a wider range of specialized yeasts, putting more varied wines on the menu. As for how the oil affected the wine’s taste, Matallana says it was “nothing weird.”

Why Sex Matters in Alzheimer’s

[This excerpt is from an article by Rebecca Nebel in the September 2018 issue of Scientific American.]

      Growing older may be inevitable, but getting Alzheimer’s disease is not. Although we can’t stop the aging process, which is the biggest risk factor for Alzheimer’s, there are many other factors that can be modified to lower the risk of dementia.

      Yet our ability to reduce Alzheimer’s risk and devise new strategies for prevention and treatment is impeded by a lack of knowledge about how and why the disease differs between women and men. There are tantalizing hints in the literature about factors that act differently between the sexes, including hormones and specific genes, and these differences could be important avenues of research. Unfortunately, in my experience, most studies of Alzheimer’s risk combine data for women and men….

      Risk factors are just one of the areas in which we need more research into the differences between the sexes in Alzheimer’s. Scientists have often overlooked sex differences in diagnosis, clinical trial design, treatment outcomes and caregiving. This bias has impeded progress in detection and care.

      Approaches that incorporate sex differences into research have advanced innovation in respect to many diseases. We need to do the same in Alzheimer’s. Looking at these differences will greatly enhance our understanding of this thief of minds and improve health outlooks for all.

Creative Science

[These excerpts are from an editorial by Steve Metz in the September 2018 issue of The Science Teacher.]

      In his famous 1959 Rede lecture, the chemist and novelist C. P. Snow lamented the modern creation of “Two Cultures,” one scientific and technical, the other humanistic and artistic. But it seems clear that science and the arts both spring from the same deep well of human creativity and imagination, as Richard Holmes explains in his wonderful book, The Age of Wonder. Holmes describes the deep connection Enlightenment scientists like astronomers Caroline and William Herschel, physicist Michael Faraday, and chemists Antoine Lavoisier and Sir Humphry Davy (who was also a published poet) had with Romantic artists, including the poets Samuel Coleridge and John Keats, the novelist Mary Shelley, and others.

      Job security increasingly requires imagination and creativity. As routine tasks become digitized and automated, successful workers will be those who imagine and create. Along with critical thinking, communication, and collaboration, creativity is one of the “Four Cs”—the 21st century learning and innovation skills that prepare students for increasingly complex life and work environments….It’s also at the apex of Bloom's taxonomy of learning objectives….

      Students often associate creativity with painting, music, and writing, but not with science. They think that scientists and engineers follow routine procedures and that science itself is a set of facts and vocabulary to memorize. And who can blame them? Too often their science classes involve passive note-taking, rote memorization, and step-by-step laboratory activities designed to produce a single right answer.

Our Looming Lead Problem

[These excerpts are from an article by Frederick Rowe Davis in the August 17, 2018, issue of Science.]

      On 25 April 2014, Flint Mayor Dayne Walling ceremoniously shut off a valve to the Detroit water supply and opened the flow of the Flint River into local homes and businesses. He marked the occasion by drinking from a glass filled with water from the new source. Eighteen months later, the water supply was switched back to Detroit. What occurred in between—the city’s failure to control infrastructure corrosion, the deterioration of fresh water entering Flint residences, citizen complaints, government denials, elevated lead levels in children, and public outcry—would become the basis for a crisis that rose to national attention in 2015….

      But the story of Flint is also one of resilience. Residents rose up and challenged the indifference and false palliatives offered by the authorities, who for many months proclaimed that the water supply in Flint was safe for drinking….

      The water crisis in Flint is profoundly worrisome: Numerous children suffered lead poisoning as a direct result of a bureaucratic focus on the fiscal rather than the social. With the huge amount of lead incorporated into the nation's infrastructure, many other communities are just a few poor decisions from a similar fate.

Global Warming Policy: Is Population Left Out in the Cold?

[These excerpts are from an article by John Bongaarts and Brian C. O’Neill in the August 17, 2018, issue of Science.]

      Would slowing human population growth lessen future impacts of anthropogenic climate change? With an additional 4 billion people expected on the planet by 2100, the answer seems an obvious “yes.” Indeed, substantial scientific literature backs up this intuition. Many nongovernmental organizations undertake climate- and population-related activities, and national adaptation plans for most of the least-developed countries recognize population growth as an important component of vulnerability to climate impacts. But despite this evidence, much of the climate community, notably the Intergovernmental Panel on Climate Change (IPCC), the primary source of scientific information for the international climate change policy process, is largely silent about the potential for population policy to reduce risks from global warming. Though the latest IPCC report includes an assessment of technical aspects of ways in which population and climate change influence each other, the assessment does not extend to population policy as part of a wide range of potential adaptation and mitigation responses. We suggest that four misperceptions by many in the climate change community play a substantial role in neglect of this topic, and propose remedies for the IPCC as it prepares for the sixth cycle of its multiyear assessment process.

      Population-related policies—such as offering voluntary family planning services as well as improved education for women and girls—can have many of the desirable characteristics of climate response options: benefits to both mitigation and adaptation, co-benefits with human well-being and other environmental issues, synergies with Sustainable Development Goals (SDGs), and cost effectiveness. These policies can also enable women to achieve their desired family size, and lead to lower fertility and slower population growth. The resulting demographic changes can not only lessen the emissions that drive climate change but also improve the ability of populations to adapt to its consequences.

      • MISPERCEPTION 1: POPULATION GROWTH IS NO LONGER A PROBLEM

      • MISPERCEPTION 2: POPULATION POLICIES ARE NOT EFFECTIVE

      • MISPERCEPTION 3: POPULATION DOES NOT MATTER MUCH FOR CLIMATE

      • MISPERCEPTION 4: POPULATION POLICY IS TOO CONTROVERSIAL TO SUCCEED

Administrators: Be Intentional ‘For All’

[These excerpts are from an editorial by Sharon Delesbore in the September 2018 issue of NSTA Reports.]

      School district leaders and campus administrators must take the helm and realize that science instruction must be a priority for a sustainable society. Because science understanding is not assessed as frequently as math and reading—and often left out of funding calculations—its importance has been woefully neglected, and our workforce is suffering from a lack of qualified science-literate candidates. Even more dismal is the rarity of science-literate candidates from underrepresented populations in the global schema. This is not just about ethnicity or low socioeconomic status, but also about access, now more than ever.

      …What I do not see is an influx of campus administrators seeking opportunities to develop their capacity in science education to support their teachers.

      As educators and humans in general, we tend to focus on and assist in areas in which we are strong, confident, and successful. When math or science is discussed, the common comments are “I was not good at that,” or “Those subjects scare me.” Many adults believe science and math are difficult subjects and transfer those beliefs to their children at an early age, inadvertently laying the foundation for barriers. Combined with the negative reinforcement of few or poor experiences with science engagement, this creates a formula for STEM evasion.

Clinical Trials Need More Diversity

[This excerpt is from an editorial by the editors in the September 2018 issue of Scientific American.]

      Nearly 40 percent of Americans belong to a racial or ethnic minority, but the patients who participate in clinical trials for new drugs skew heavily white—in some cases, 80 to 90 percent. Yet nonwhite patients will ultimately take the drugs that come out of clinical studies, and that leads to a real problem. The symptoms of conditions such as heart disease, cancer and diabetes, as well as the contributing factors, vary across lines of ethnicity, as they do between the sexes. If diverse groups aren’t part of these studies, we can't be sure whether the treatment will work in all populations or what side effects might emerge in one group or another.

      This isn’t a new concern. In 1993 Congress passed the National Institutes of Health Revitalization Act, which required the agency to include more women and people of color in its research studies. It was a step in the right direction, and to be sure, the percentage of women in clinical trials has grown significantly since then.

      But participation by minorities has not increased much at all: a 2014 study found that fewer than 2 percent of more than 10,000 cancer clinical trials funded by the National Cancer Institute focused on a racial or ethnic minority. And even if the other trials fulfilled those goals, the 1993 law regulates only studies funded by the NIH, which represent a mere 6 percent of all clinical trials.

      The shortfall is especially troubling when it comes to trials for diseases that particularly affect marginalized racial and ethnic groups. For example, Americans of African descent are more likely to suffer from respiratory ailments than white Americans are; however, as of 2015, only 1.9 percent of all studies of respiratory disease included minority subjects, and fewer than 5 percent of NIH-funded respiratory research included racial minorities.

      The problem is not necessarily that researchers are unwilling to diversify their studies. Members of minority groups are often reluctant to participate. Fear of discrimination by medical professionals is one reason. Another is that many ethnic and racial minorities do not have access to the specialty care centers that recruit subjects for trials. Some may also fear possible exploitation, thanks to a history of unethical medical testing in the U.S. (the infamous Tuskegee experiments, in which black men were deliberately left untreated for syphilis, are perhaps the best-known example). And some minorities simply lack the time or financial resources to participate.

      The problem is not confined to the U.S., either. A recent study of trials involving some 150,000 patients in 29 countries at five different time points over the past 21 years showed that the ethnic makeup of the trials was about 86 percent white….

Remember the Alamo! Remember Goliad!

[These excerpts are from pages 154-157 of The Eagle and the Raven by James A. Michener.]

      For thirteen relentless days the Mexican troops besieged the defenders of the Alamo. Santa Anna conducted the battle under a blood-red flag to the marching tune Deguello, each symbol traditionally meaning: ‘Surrender now or you will be executed when we win.’ In the final charge through the walls of the Alamo there would be no quarter; the men inside knew that this time the often windy cry ‘Victory or Death’ meant what it said.

      Early in the morning of the thirteenth day Santa Anna’s foot soldiers, lashed forward by officers who did not care how many of their men were lost in the attack, stormed the walls, overcame the defenders, and slaughtered every man inside, American or Mexican. Jim Bowie was slain in his sick bed. Captain Travis was shot on the walls. What exactly happened to Davy Crockett would never be known—dead inside the walls or murdered later outside as a prisoner of war. But dead.

      The Texicans lost 186, the Mexicans about 600. One Texican, a grizzled French veteran of the Napoleonic wars, had fled the fort before the last fight began. Santa Anna had won a crushing victory and had been remorseless in exterminating Texicans.

      Any evaluation of Mexico’s stunning victory at the Alamo must consider two hideous acts of Santa Anna, acts so contemptuous of the customary decencies which always existed between honorable adversaries that they enraged the Americans, kindling fires of revenge that would be extinguished only in the equally horrible acts that followed the great Tejas victory at San Jacinto. When a group of prisoners was brought to Santa Anna he said scornfully: ‘I do not want to see those men living’ and despite the rules which ensured safety to soldiers who surrendered honorably, he growled: ‘Shoot them,’ and they were executed.

      Enraging the Texicans even more was his treatment of the bodies of the men who had died bravely trying to defend the Alamo. Instead of providing the bodies with a decent burial, or turning them over to their friends for proper burial, he had the corpses thrown into a great pile, as if they were useless timbers, then cremated in slow fires for two and a half days. When the heat subsided, scavenging citizens probed the ashes for metal items of value, after which the bones were left to dogs and vultures. When news of the desecration circulated, a fearful oath was sworn by Texicans: ‘Remember the Alamo!’

      Flushed by his triumph, Santa Anna behaved as if he were all-powerful. When a detachment of his army won a second victory at Goliad, some miles southeast of San Antonio, taking more than four hundred prisoners during several skirmishes, he ordered the general in charge to execute every man….

      On a bright spring morning, the Mexicans marched nearly four hundred unarmed prisoners in three different groups along country roads, suddenly turning on them and murdering three hundred and forty-two. Those who escaped by fleeing across fields, ducking into woods, and throwing themselves into streams in which they swam to safety, spread news of the massacre. Across Texas men whispered with a terrifying lust for vengeance: ‘Remember the Alamo! Remember Goliad!’

How Islands Shrink People

[These excerpts are from an article by Ann Gibbons in the August 3, 2018, issue of Science.]

      Living on an island can have strange effects. On Cyprus, hippos dwindled to the size of sea lions. On Flores in Indonesia, extinct elephants weighed no more than a large hog, but rats grew as big as cats. All are examples of the so-called island effect, which holds that when food and predators are scarce, big animals shrink and little ones grow. But no one was sure whether the same rule explains the most famous example of dwarfing on Flores, the odd extinct hominin called the hobbit, which lived 60,000 to 100,000 years ago and stood about a meter tall.

      Now, genetic evidence from modern pygmies on Flores—who are unrelated to the hobbit—confirms that humans, too, are subject to so-called island dwarfing….Flores pygmies differ from their closest relatives on New Guinea and in East Asia in carrying more gene variants that promote short stature. The genetic differences testify to recent evolution—the island rule at work. And they imply that the same force gave the hobbit its short stature….

      The team found no trace of archaic DNA that could be from the hobbit. Instead, the pygmies were most closely related to other East Asians. The DNA suggested that their ancestors came to Flores in several waves: in the past 50,000 years or so, when modern humans first reached Melanesia; and in the past 5000 years, when settlers came from both East Asia and New Guinea.

      The pygmies’ genomes also reflect an environmental shift. They carry an ancient version of a gene that encodes enzymes to break down fatty acids in meat and seafood. It suggests their ancestors underwent a “big shift in diet” after reaching Flores, perhaps eating pygmy elephants or marine foods…

      The discovery fits with a recent study suggesting evolution also favored short stature in people on the Andaman Islands….Such selection on islands boosts the theory that the hobbit, too, was once a taller species, who dwindled in height over millennia on Flores.

Did Kindness Prime Our Species for Language?

[These excerpts are from an article by Michael Erard and Catherine Matacic in the August 3, 2018, issue of Science.]

      The idea is rooted in a much older one: that humans tamed themselves. This self-domestication hypothesis, which got its start with Charles Darwin, says that when early humans started to prefer cooperative friends and mates to aggressive ones, they essentially domesticated themselves….Along with tameness came evolutionary changes seen in other domesticated mammals—smoother brows, shorter faces, and more feminized features—thanks in part to lower levels of circulating androgens (such as testosterone) that tend to promote aggression.

      Higher levels of neurohormones such as serotonin were also part of the domestication package. Such pro-social hormones help us infer others' mental states, learn through joint attention, and even link objects and labels—all prerequisites for language….

      If early humans somehow developed their own lower-stress “domesticated” environment—perhaps as a result of easier access to food—it could have fostered more cooperation and reduced aggression….

      …In a famous experiment, Russian geneticist Dmitry Belyaev and colleagues selected for tameness among captured Siberian silver foxes starting in the 1950s. If a wild fox did not attack a human hand placed into its cage, it was bred. Over 50 generations, the foxes came to look like other domesticated species, with shorter faces, curly tails, and lighter coloring—traits that have since been linked to shifts in prenatal hormones.

      Unlike their wild counterparts, tame foxes came to understand the importance of human pointing and gazing….That ability to “mind read” is key to language. Thus, even though the foxes don’t vocalize in complex ways, they show that selection only for tameness can carry communication skills in its wake.

Nudge, not Sludge

[These excerpts are from an editorial by Richard H. Thaler in the August 3, 2018, issue of Science.]

      Helpful nudges abound—good signage, text reminders of appointments, and thoughtfully chosen default options are all nudges. For example, automatically enrolling people into retirement savings plans from which they can easily opt out means that people who always meant to join a plan but never got around to it will have more comfortable retirements.

      Yet, the same techniques for nudging can be used for less benevolent purposes. Take the enterprise of marketing goods and services. Firms may encourage buyers in order to maximize profits rather than to improve the buyers’ welfare (think of financier Bernie Madoff who defrauded thousands of investors). A common example is when firms offer a rebate to customers who buy a product, but then require them to mail in a form, a copy of the receipt, the SKU bar code on the packaging, and so forth. These companies are only offering the illusion of a rebate to the many people like me who never get around to claiming it. Because of such thick sludge, redemption rates for rebates tend to be low, yet the lure of the rebate still can stimulate sales—call it “buy bait.”

      Public sector sludge also comes in many forms. For example, in the United States, there is a program called the earned income tax credit that is intended to encourage work and transfer income to the working poor. The Internal Revenue Service has all the information necessary to make adjustments for credit claims by any eligible taxpayer who files a tax return. But instead, the rules require people to fill out a form that many eligible taxpayers fail to complete, thus depriving themselves of the subsidy that Congress intended they receive.

      Similarly, one of the most important rights of citizens is the ability to vote. Increased voter participation can be nudged by automatically registering anyone who applies for a driver’s license. But voter participation can also be decreased through sludge, as the state of Ohio has recently done, by purging from its list of eligible voters those who have not voted recently and who have not responded to a postcard prompt. Defenders of such sludge claim that it serves as a protection against voter fraud, despite the fact that people who intentionally vote illegally are rare.

      So, sludge can take two forms. It can discourage behavior that is in a person's best interest such as claiming a rebate or tax credit, and it can encourage self-defeating behavior such as investing in a deal that is too good to be true.

      Let’s continue to encourage everyone to nudge for good, but let's also urge those in both the public and private sectors to engage in sludge cleanup campaigns. Less sludge will make the world a better place.

Calculus of Probabilities

[This excerpt is from pages 216-217 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      Chevalier de Mere was fond of a dice game which was played in the following way. A die was thrown four times in succession and one of the players bet that a six would appear at least once in four throws while the other bet against. Mere found that there was a greater chance in favour of the first player, that is, of getting a six at least once in four throws. Tired of it, he introduced a variation. The game was now played with two dice instead of one and the betting was on the appearance or non-appearance of at least one double-six in twenty-four throws. Mere found that this time the player who bet against the appearance of a double-six won more frequently. This seemed strange, as at first sight the chance of getting at least one six in four throws should be the same as that of at least one double-six in twenty-four. Mere asked the contemporary mathematician, Fermat, to explain this paradox.

      Fermat showed that while the odds in favour of a single six in four throws were a little more than even (actually about 51:49), those in favour of a double-six in twenty-four throws were a little less than even, being 49:51. In solving this paradox, Fermat virtually created a new science, the Calculus of Probabilities. It was soon discovered that the new calculus could not only handle problems posed by gamblers like Mere, but it could also aid financial speculators engaged in marine insurance.
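
      Fermat’s reasoning is easy to check: the chance of at least one success in n independent trials is 1 - (1 - p)^n, which gives 1 - (5/6)^4 for a six in four throws and 1 - (35/36)^24 for a double-six in twenty-four. The short Python sketch below is an illustrative gloss on the excerpt, not part of Singh’s text; it simply reproduces the two probabilities.

          # Chevalier de Mere's two bets, settled the way Fermat settled them:
          # P(at least one success in n trials) = 1 - (1 - p)^n

          def at_least_once(p, n):
              """Probability of at least one success in n independent trials."""
              return 1 - (1 - p) ** n

          p_six = at_least_once(1 / 6, 4)        # one six in four throws
          p_double = at_least_once(1 / 36, 24)   # one double-six in twenty-four throws

          print(f"one six in 4 throws:         {p_six:.4f}")     # ~0.5177, a little more than even
          print(f"one double-six in 24 throws: {p_double:.4f}")  # ~0.4914, a little less than even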

Western Machine Civilization

[These excerpts are from pages 50-52 and 64-65 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      Lewis Mumford has divided the history of the Western machine civilisation during the past millennium into three successive but over-lapping and interpenetrating phases. During the first phase—the eotechnic phase—trade, which at the beginning was no more than an irregular trickle, grew to such an extent that it transformed the whole life of Western Europe. It is true that the development of trade led to a steady growth of manufacture as well, but throughout this period (which lasted till about the middle of the eighteenth century) trade on the whole dominated manufacture. Thus it was that the minds of men were occupied more with problems connected with trade, such as the evolution of safe and reliable methods of navigation, than with those of manufacture. Consequently, while the two ancient sources of power, wind and water, were developed at a steadily accelerating pace to increase manufacture, the attention of most leading scientists, particularly during the last three centuries of this phase, was directed towards the solution of navigational problems. The chief and most difficult of these was that of finding the longitude of a ship at sea. It was imperative that a solution should be found as the inability to determine longitudes led to very heavy shipping losses. Newton had tackled it, although without providing a satisfactorily practical answer. In fact, as Hessen has shown, Newton’s masterpiece, the Principia, was in part an endeavour to deal with the problems of gravity, planetary motions and the shape and size of the earth, in order to meet the demands for better navigation. It was shown that the most promising method of determining longitude from observation of heavenly bodies was provided by the moon. The theory of lunar motion, therefore, began to absorb the attention of an increasing number of distinguished mathematicians of England, France, Germany and America.

      Although more arithmetic and algebra were devoted to Lunar Theory than to any other question of astronomy or mathematical physics, a solution was not found till the middle of the eighteenth century, when successful chronometers, that could keep time on a ship in spite of pitching and rolling in rough weather, were constructed. Once the problem of longitude was solved it led to a further growth of trade, which in turn induced a corresponding increase in manufacture. A stage was now reached when the old sources of power, namely wind and water, proved too ‘weak, fickle, and irregular’ to meet the needs of a trade that had burst all previous bounds. Men began to look for new sources of power rather than new trade routes.

      This change marks the beginning of Mumford's second phase, the palaeotechnic phase, which ushered in the era of the ‘dark Satanic mills’. As manufacture began to dominate trade, the problem of discovering new prime movers became the dominant social problem of the time. It was eventually solved by the invention of the steam engine. The discovery of the power of steam—the chief palaeotechnic source of power—was not the work of ‘pure’ scientists; it was made possible by the combined efforts of a long succession of technicians, craftsmen and engineers from Porta, Rivault, Caus, Branca, Savery and Newcomen to Watt and Boulton.

      Although the power of steam to do useful work had been known since the time of Hero of Alexandria (A.D. 50), the social impetus to make it the chief prime mover was lacking before the eighteenth century. Further, a successful steam engine could not have been invented even then had it not been for the introduction by craftsmen of more precise methods of measurement in engineering design. Thus, the success of the first two engines that Watt erected at Bloomfield colliery in Staffordshire, and at John Wilkinson’s new foundry at Broseley, depended in a great measure on the accurate cylinders made by Wilkinson’s new machine tool with a limit of error not exceeding ‘the thickness of a thin six pence’ in a diameter of seventy-two inches. The importance of the introduction of new precision tools, producing parts with increasingly narrower ‘tolerances’, in revolutionising production has never been fully recognised. The transformation of the steam engine from the wasteful burner of fuel that it was at the time of Newcomen into the economical source of power that it became in the hands of Watt and his successors seventy years later, was achieved as much by the introduction of precision methods in technology as by Black’s discovery of the latent heat of steam….

      For about 150 years after Newton, the study of the motion of material bodies, whether cannon balls and bullets, to meet the needs of war, or the moon and planets to meet those of navigation, closely followed the Newtonian tradition. Then, just as it was beginning to end up in high-brow pedantry, it was rescued by the emergence of a new science—electricity—in much the same way as cybernetics was to rescue mathematical logic a century later.

      Though known from earliest times, electricity became a quantitative science in 1819, when Oersted accidentally observed that the flow of an electric current in a wire deflected a compass needle in its neighbourhood. This was the first explicit revelation of the profound connection between electricity and magnetism, already suspected on account of a number of analogies between the two. A little later Faraday showed that this connection was no mere one-sided affair. If electricity in motion produced magnetism, then equally magnets in motion produced electricity. In other words, electricity in motion produced the same effects as stationary magnets and magnets in motion the same effects as electricity.
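
      In modern notation, this reciprocity is exactly what the two curl equations of Maxwell’s theory express (a present-day gloss in SI units, not Singh’s notation): a changing magnetic field B produces an electric field E, and a current density J or a changing electric field produces a magnetic field:

          \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},
          \qquad
          \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}

      Here \mu_0 and \varepsilon_0 are the magnetic and electric constants of free space.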

      This reciprocal relation between electricity and magnetism led straightway to a whole host of new inventions from the electric telegraph and telephone to the electric motor and dynamo. In fact, it is the seed from which has sprouted the whole of heavy electrical industry, which was destined to transform the palaeotechnic phase of Western machine civilisation, with its ugly, dark and satanic mills, into the neotechnic phase based on electric power. But before this industry could arise, the results of two generations of experiments and prevailing ideas in different fields of physics—electricity, magnetism and light—had to be rationalised and synthesised in a mathematically coherent theory capable of experimental verification. Now the results of the mathematical theory depended for their verification on the establishment of accurate units for electricity—a task necessary before it could be commercialised for household use. The theory, thus verified, in turn formed the basis of electrical engineering, itself the result of a complete interpenetration of deductive reasoning and experimental practice. It reached the apogee of its success when Hertz experimentally demonstrated the existence of electromagnetic waves, which Maxwell had postulated on purely theoretical grounds, and from which wireless telegraphy and all that it implies was to arise later.

      Maxwell’s theory was actually a mathematisation of the earlier physical intuitions of Faraday. In this he used all the mathematical apparatus of mechanics and calculus of the Newtonian period. But in some important and puzzling respects the new laws of electromagnetism differed from those of Newton. In the first place, all the forces between bodies that Newton considered, such as the force of the earth’s gravity on falling bodies, acted along the line joining their centres. But a magnetic pole was urged to move at right angles to the line joining it to the current-carrying wire. Secondly, electromagnetic theory was differentiated from the earlier gravitation theory of Newton in its insistence that electric and magnetic energy actually resided in the surrounding empty space. According to this view the forces acting on electrified and magnetised bodies did not form the whole system of forces in action but only served to reveal the presence of a vastly more intricate system of forces acting everywhere in free space….

Cinderella and Equations

[This excerpt is from page 43 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      It is needless to add that it is easier to write equations, whether differential, integral or integro-differential, than to solve them. Only a small class of such equations has been solved explicitly. In some cases, when, due to its importance in physics or elsewhere, we cannot do without an equation of the insoluble variety, we use the equation itself to define the function, just as Prince Charming used the glass slipper to define Cinderella as the girl who could wear it. Very often the artifice works; it suffices to isolate the function from other undesirables in much the same way as the slipper sufficed to distinguish Cinderella from her ugly sisters.

An Earlier Perception of Computers

[This excerpt is from pages 24-26 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      Now, as Norbert Wiener has remarked, the human and animal nervous systems, which are also capable of the work of a computation system, contain elements—the nerve cells or neurons—which are ideally suited to act as relays:

      ‘While they show rather complicated properties under the influence of electrical currents, in their ordinary physiological action they conform very nearly to the "all-or-none" principle; that is, they are either at rest, or when they "fire" they go through a series of changes almost independent of the nature and intensity of the stimulus.’ This fact provides the link between the art of calculation and the new science of Cybernetics, recently created by Norbert Wiener and his collaborators.

      This science (cybernetics) is the study of the 'mechanism of control and communication in the animal and the machine', and bids fair to inaugurate a new social revolution likely to be quite as profound as the earlier Industrial Revolution inaugurated by the invention of the steam engine. While the steam engine devalued brawn, cybernetics may well devalue brain—at least, brain of a certain sort. For the new science is already creating machines that imitate certain processes of thought and do some kinds of mental work with a speed, skill and accuracy far beyond the capacity of any living human being.

      The mechanism of control and communication between the brain and various parts of an animal is not yet clearly understood. We still do not know very much about the physical process of thinking in the animal brain, but we do know that the passage of some kind of physico-chemical impulse through the nerve-fibres between the nuclei of the nerve cells accompanies all thinking, feeling, seeing, etc. Can we reproduce these processes by artificial means? Not exactly, but it has been found possible to imitate them in a rudimentary manner by substituting wire for nerve-fibre, hardware for flesh, and electro-magnetic waves for the unknown impulse in the living nerve-fibre. For example, the process whereby flatworms exhibit negative phototropism—that is, a tendency to avoid light—has been imitated by means of a combination of photocells, a Wheatstone bridge and certain devices to give an adequate phototropic control for a little boat. No doubt it is impossible to build this apparatus on the scale of the flatworm, but this is only a particular case of the general rule that the artificial imitations of living mechanisms tend to be much more lavish in the use of space than their prototypes. But they more than make up for this extravagance by being enormously faster. For this reason, rudimentary as these artificial reproductions of cerebral processes still are, the thinking machines already produced achieve the purposes for which they are designed incomparably better than any human brain.
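
      The boat’s control loop is simple enough to sketch. In the toy Python version below, two light readings stand in for the photocells and their difference stands in for the out-of-balance signal of the Wheatstone bridge; the sensor names, the gain parameter, and the sign convention are all hypothetical illustrations, not details from Singh’s account.

          # Toy negative-phototropism controller: steer away from the brighter side,
          # as the photocell-and-bridge boat did.

          def rudder_angle(left_lux: float, right_lux: float, gain: float = 0.1) -> float:
              """Rudder angle in degrees; positive turns the boat to the left,
              away from a brighter right-hand side."""
              imbalance = right_lux - left_lux  # the bridge's out-of-balance signal
              return gain * imbalance

          # Brighter on the right, so the boat is steered to the left.
          print(rudder_angle(left_lux=120.0, right_lux=300.0))  # 18.0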

      As the study of cybernetics advances—and it must be remembered that this science is just an infant barely ten years old—there is hardly any limit to what these thinking-machines may not do for man. Already the technical means exist for producing automatic typists, stenographers, multilingual interpreters, librarians, psychopaths, traffic regulators, factory-planners, logical truth calculators, etc. For instance, if you had to plan a production schedule for your factory, you would need only to put into a machine a description of the orders to be executed, and it would do the rest. It would know how much raw material is necessary and what equipment and labour are required to produce it. It would then turn out the best possible production schedule showing who should do what and when.

      Or again, if you were a logician concerned with evaluating the logical truth of certain propositions deducible from a set of given premises, a thinking machine like the Kalin-Burkhart Logical Truth Calculator could work it out for you very much faster and with much less risk of error than any human being. Before long we may have mechanical devices capable of doing almost anything from solving equations to factory planning. Nevertheless, no machine can create more thought than is put into it in the form of the initial instructions. In this respect it is very definitely limited by a sort of conservation law, the law of conservation of thought or instruction. For none of these machines is capable of thinking anything new.

      A 'thinking machine' merely works out what has already been thought of beforehand by the designer and supplied to it in the form of instructions. In fact, it obeys these instructions as literally as the unfortunate Casabianca boy, who remained on the burning deck because his father had told him to do so. For instance, if in the course of a computation the machine requires the quotient of two numbers of which the divisor happens to be zero, it will go on, Sisyphus-wise, trying to divide by zero for ever unless expressly forbidden by prior instruction. A human computer would certainly not go on dividing by zero, whatever else he might do. The limitation imposed by the aforementioned conservation law has made it necessary to bear in mind what Hartree has called the 'machine-eye view' in designing such machines. In other words, it is necessary to think out in advance every possible contingency that might arise in the course of the work and give the machine appropriate instructions for each case, because the machine will not deviate one whit from what the ‘Moving Finger’ of prior instructions has already decreed. Although the limitation imposed by this conservation law on the power of machines to produce original thinking is probably destined to remain forever, writers have never ceased to speculate on the danger to man from robot machines of his own creation. This, for example, is the moral of stories as old as those of Faustus and Frankenstein, and as recent as those of Karel Capek’s play R.U.R. and Olaf Stapledon’s Last and First Men.
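
      Hartree’s ‘machine-eye view’ amounts to writing the contingency into the instructions beforehand. A minimal modern illustration in Python (my example, not Singh’s; the function name is invented): the guard against a zero divisor is itself a prior instruction, without which the machine would grind on forever.

          # The zero-divisor contingency is decided in advance, as the
          # "machine-eye view" demands.

          def guarded_quotient(dividend: float, divisor: float):
              """Divide, with the zero-divisor contingency decided in advance."""
              if divisor == 0:
                  return None  # the prior instruction: do not attempt the division
              return dividend / divisor

          print(guarded_quotient(10, 2))  # 5.0
          print(guarded_quotient(10, 0))  # None, rather than dividing for ever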

      It is true that as yet there is no possibility whatsoever of constructing Frankenstein monsters, Rossum robots or Great Brains—that is, artificial beings possessed of a 'free will' of their own. This, however, does not mean that the new developments in this field are without danger to mankind. The danger from the robot machines is not technical but social. It is not that they will disobey man but that if introduced on a large enough scale, they are liable to lead to widespread unemployment.

Irrational Numbers

[This excerpt is from pages 15-16 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      The discovery of magnitudes which, like the diagonal of a unit square, cannot be measured by any whole number or rational fraction, that is, by means of integers, singly or in couples, was first made by Pythagoras some 2500 years ago. This discovery was a great shock to him. For he was a number mystic who looked upon integers…as the essence and principle of all things in the universe. When, therefore, he found that the integers did not suffice to measure even the length of the diagonal of a unit square, he must have felt like a Titan cheated by the gods. He swore his followers to a secret vow never to divulge the awful discovery to the world at large and turned the Greek mind once and for all away from the idea of applying numbers to measure geometrical lengths. He thus created an impassable chasm between algebra and geometry that was not bridged till the time of Descartes nearly 2000 years later.
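
      The discovery itself can be restated in a few lines of modern algebra (the standard parity proof, added here as a gloss on Singh’s text). Suppose the diagonal of a unit square were a ratio of integers p/q in lowest terms:

          \sqrt{2} = \frac{p}{q} \;\Rightarrow\; p^2 = 2q^2 \;\Rightarrow\; p \text{ is even, say } p = 2k
          \;\Rightarrow\; 4k^2 = 2q^2 \;\Rightarrow\; q^2 = 2k^2 \;\Rightarrow\; q \text{ is even,}

      contradicting the assumption that p and q share no common factor; hence no pair of integers, singly or in couple, can measure the diagonal.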

The Value of Mathematics

[This excerpt is from page 3 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      Mathematics is too intimately associated with science to be explained away as a mere game. Science is serious work serving social ends. To isolate mathematics from the social conditions which bring mathematics…into existence is to do violence to history.

A Quantum Future Awaits

[This excerpt is from an article by Jacob M. Taylor in the July 27, 2018, issue of Science.]

      A century ago, the quantum revolution quietly began to change our lives. A deeper understanding of the behavior of matter and light at atomic and subatomic scales sparked a new field of science that would vastly change the world's technology landscape. Today, we rely upon the science of quantum mechanics for applications ranging from the Global Positioning System to magnetic resonance imaging to the transistor. The advent of quantum computers presages yet another new chapter in this story that will enable us not only to predict and improve chemical reactions and new materials and their properties, for example, but also to gain insights into the emergence of spacetime and our universe. Remarkably, these advances may begin to be realized in a few years.

      From initial steps in the 1980s to today, science and defense agencies around the world have supported the basic research in quantum information science that enables advanced sensing, communication, and computational systems. Recent improvements in device performance and quantum bit (“qubit”) approaches show the possibility of moderate-scale quantum computers in the near future. This progress has focused the scientific community on, and engendered substantial new industrial investment for, developing machines that produce answers we cannot simulate even with the world’s fastest supercomputer (currently the Summit supercomputer at the U.S. Department of Energy’s Oak Ridge National Laboratory in Tennessee).

      Achieving such quantum computational supremacy is a natural first goal. It turns out, however, that devising a classical computer to approximate quantum systems is sometimes good enough for the purposes of solving certain problems. Furthermore, most quantum devices have errors and produce correct results with a decreasing probability as problems become more complicated. Only with substantial math from quantum complexity theory can we actually separate “stupendously hard” problems to solve from just “really hard” ones. This separation of classical and quantum computation is typically described as approaching quantum supremacy. A device that demonstrates a separation may rightly deserve to be called the world’s first quantum computer and will represent a leap forward for theoretical computer science and even for our understanding of the universe.

Astro Worms

[This excerpt is from an article by Katherine Kornei in the August 2018 issue of Scientific American.]

      Caenorhabditis elegans would make an ace fighter pilot. That's because the roughly one-millimeter-long roundworm, a type of nematode that is widely used in biological studies, is remarkably adept at tolerating acceleration. Human pilots lose consciousness when they pull only 4 or 5 g’s (1 g is the force of gravity at Earth’s surface), but C. elegans emerges unscathed from 400,000 g’s, new research shows.

      This is an important benchmark; rocks have been theorized to experience similar forces when blasted off planet surfaces and into space by volcanic eruptions or asteroid impacts. Any hitchhiking creatures that survive could theoretically seed another planet with life, an idea known as ballistic panspermia.

Capture that Carbon

[These excerpts are from an article by Madison Freeman and David Yellen in the August 2018 issue of Scientific American.]

      The conclusion of the Paris Agreement in 2015, in which almost every nation committed to reducing its carbon emissions, was supposed to be a turning point in the fight against climate change. But many countries have already fallen behind their goals, and the U.S. has now announced it will withdraw from the agreement. Meanwhile, emissions worldwide continue to rise.

      The only way to make up ground is to aggressively pursue an approach that takes advantage of every possible strategy to reduce emissions. The usual suspects, such as wind and solar energy and hydropower, are part of this effort, but it must also include investing heavily in carbon capture, utilization and storage (CCUS), a cohort of technologies that pull carbon dioxide from smokestacks, or even from the air, and convert it into useful materials or store it underground….

      Without CCUS, the level of cuts needed to keep global warming to two degrees Celsius (3.6 degrees Fahrenheit)—the upper limit allowed in the Paris Agreement—probably cannot be achieved, according to the International Energy Agency. By 2050 carbon capture and storage must provide at least 13 percent of the reductions needed to keep warming in check, the agency calculates….

      CCUS technologies can also help decarbonize emissions in heavy industry—including production of cement, refined metals and chemicals—which accounts for almost a quarter of U.S. emissions. In addition, direct carbon-removal technology—which captures and converts carbon dioxide from the air rather than from a smokestack—can offset emissions from industries that cannot readily implement other clean technology, such as agriculture.

      The basic idea of carbon capture has faced a lot of opposition. Skepticism has come from climate deniers, who see it as a waste of money, and from passionate supporters of climate action, who fear that it would be used to justify continued reliance on fossil fuels. Both groups are ignoring the recent advances and the opportunity they present. By limiting investment in decarbonization, the world will miss a major avenue for reducing emissions both in the electricity sector and in a variety of industries. CCUS can also create jobs and profits from what was previously only a waste material by building a larger economy around carbon.

      For CCUS to succeed, the federal government must kick in funding for basic research and development and offer incentives such as tax breaks for carbon polluters who adopt the technology. The Trump administration has repeatedly tried to slash energy technology R&D, with the Department of Energy’s CCUS R&D cut by as much as 76 percent in proposed budgets. But this funding must be protected….

      The transition to clean energy has become inevitable. But that transition’s ability to achieve deep decarbonization will falter without this wide range of solutions, which must include CCUS.

Physics Makes Rules, Evolution Rolls the Dice

[These excerpts are from a book review by Chico Camargo in the July 20, 2018, issue of Science.]

      Picture a ladybug in motion. The image that came into your head is probably one of a small, round red-and-black insect crawling up a leaf. After reading Charles Cockell’s The Equations of Life, however, you may be more likely to think of this innocuous organism as a complex biomechanical engine, every detail honed and operating near thermodynamic perfection.

      In a fascinating journey across physics and biology, Cockell builds a compelling argument for how physical principles constrain the course of evolution. Chapter by chapter, he aims his lens at all levels of biological organization, from the molecular machinery of electron transport to the social organisms formed by ant colonies. In each instance, Cockell shows that although these structures might be endless in their detail, they are bounded in their form. If organisms were pawns in a game of chess, physics would be the board and its rules, limiting how the game unfolds.

      Much of the beauty of this book is in the diversity of principles it presents. In the chapter dedicated to the physics of the ladybug, for example, Cockell first describes an unassuming assignment in which students are asked to study the properties of the insect. Physical principles emerge naturally: from the surface tension and viscous forces between the ladybug’s feet and vertical surfaces, to the diffusion-driven pattern formation on its back, to the thermodynamics of surviving as a small insect at water-freezing temperatures. These discussions are accompanied by a series of equations that one would probably not expect to see in a single textbook, as various branches of physics—from physical chemistry to optics—are discussed side by side.

      Physics itself is different at different scales. A drop of water, for example, is inconsequential to a human being. If you are a ladybug, however, water surface tension is a potential problem: Having a drop of water on your back might become as burdensome as a heavy backpack that can’t be discarded. For a tiny ant, a large enough droplet can turn into a watery prison because the molecular forces in play are too strong for the insect to escape….
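
      The point about scale can be made quantitative with a standard estimate (added here as a gloss; the symbols are the usual ones, not the reviewer’s). A surface-tension force on a body grows roughly with its length L, while its weight grows with L^3, so their ratio scales as

          \frac{F_{\text{surface tension}}}{F_{\text{weight}}} \sim \frac{\gamma L}{\rho g L^3} = \frac{\gamma}{\rho g L^2},

      where \gamma is the surface tension of water, \rho the body’s density, and g the gravitational acceleration. Halving L quadruples the relative pull of a water drop, which is why the same droplet is a nuisance to a ladybug and a prison to an ant.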

      At the end of every chapter, the reader is reminded of how the laws of physics nudge, narrow, mold, shape, and restrict the “endless forms most beautiful” that Charles Darwin once described. Cockell’s persistence pays off as he gears up for his main argument: If life exists on other planets, it has to abide by the same laws as on Earth.

      Because the atoms in the Milky Way behave the same as in any other galaxy, Cockell argues that water in other galaxies will still be an abundant solvent, carbon should still be the preferred choice for self-replicating complex molecules, and the thermodynamics of life should still be the same. Sure, a cow on a hypothetical planet 10 times the diameter of Earth would need wider, stronger legs, but there is no reason to believe that replaying evolution on another planet would lead to unimaginable life forms. Rather, one should expect to see variations on the same theme.

      Cockell ends the book by celebrating the elegant equations that represent the relations between form and function. Rather than being a lifeless form of reductionism, equations, he argues, are our window into what physics renders possible (or impossible) for life to achieve. In equations, we express how our biosphere is full of symmetry, pattern, and law. Within them, we express the boldest claim of them all: that these limitations should be no less than universal.

Space, Still the Final Frontier

[These excerpts are from an editorial by Daniel N. Baker and Amal Chandran in the July 20, 2018, issue of Science.]

      …At the height of the Cold War in the 1960s and 1970s, space science and human space exploration offered a channel for citizens from the East and West to communicate and share ideas. Space has continued to be a domain of collaboration and cooperation among nations. The International Space Station has been a symbol of this notion for the past 20 years, and it is expected to be used by many nations until 2028. By contrast, there have been recent trends toward increased militarization of space with more—not less—fractionalization among nations. As well, the commercial sector is becoming a key player in exploring resource mining, tourism, colonization, and national security operations in space. Thus, space is becoming an arena for technological shows of economic and military force. However, nations are realizing that the Outer Space Treaty of 1967 needs to be reexamined in light of today’s new space race—a race that now includes many more nations. No one nation or group of nations has ever claimed sovereignty over the “high frontier” of space, and, simply put, this should never be allowed to happen….

      As was true during the Cold War, there are still political differences on Earth, but in space we should together seek to push forward the frontiers of knowledge with a common sense of purpose and most certainly in a spirit of peaceful cooperation.

A Path to Clean Water

[These excerpts are from an article by Klaus Kummerer, Dionysios D. Dionysiou, Oliver Olsson and Despo Fatta-Kassinos in the July 20, 2018, issue of Science.]

      Chemicals, including pharmaceuticals, are necessary for health, agriculture and food production, industrial production, economic welfare, and many other aspects of modern life. However, their widespread use has led to the presence of many different chemicals in the water cycle, from which they may enter the food chain. The use of chemicals will further increase with growth, health, age, and living standard of the human population. At the same time, the need for clean water will also increase, including treated wastewater for food production and high-purity water for manufacturing electronics and pharmaceuticals. Climate change is projected to further reduce water availability in sufficient quantity and quality. Considering the limits of effluent treatment, there is an urgent need for input prevention at the source and for the development of chemicals that degrade rapidly and completely in the environment.

      Conventional wastewater treatment has contributed substantially to the progress in health and environmental protection. However, as the diversity and volume of chemicals used have risen, water pollution levels have increased, and conventional treatment of wastewater and potable water has become less efficient. Even advanced wastewater and potable water treatments, such as extended filtration and activated carbon or advanced oxidation processes, have limitations, including increased demand for energy and additional chemicals; incomplete or, for some pollutants, no removal from the wastewater; and generation of unwanted products from parent compounds, which may be more toxic than their parent compounds. Microplastics are also not fully removed, and advanced treatment such as ozonation can lead to the increased transfer of antibiotic resistance genes, preferential enhancement of opportunistic bacteria, and strong bacterial population shifts.

      Furthermore, water treatment is far from universal. Sewer pipes can leak, causing wastewater and its constituents to infiltrate groundwater. During and after heavy rain events, wastewater and urban stormwater runoff is redirected to protect sewage treatment plants; this share of wastewater is not treated. Such events, as well as urban flooding, are likely to increase in the future because of climate change. Globally, 80% or more of wastewater is not treated.

      … given the ever-increasing list of chemicals that are introduced into the aquatic environment, attempts to assess harm and introduce thresholds will tend to lag new introductions. A preventive approach is therefore also needed. For example, giving companies relief from effluent charges if they use compounds from a list proven to be of low toxicity and readily mineralized, such as the abovementioned cellulose microbeads, could provide strong incentives for creating more sustainable products.

Prepare for Water Day Zero

[These excerpts are from an editorial by the editors in the August 2018 issue of Scientific American.]

      Earlier this year ominous headlines blared that Cape Town, South Africa, was headed for Day Zero—the date when the city’s taps would go dry because its reservoirs would become dangerously low on water. That day—originally expected in mid-April—has been postponed until at least 2019 as of this writing, thanks to water rationing and a welcome rainy season. But the conditions that led to this desperate situation will inevitably occur again, hitting cities all over the planet.

      As the climate warms, extreme droughts and vanishing water supplies will likely become more common. But even without the added impact of climate change, normal rainfall variation plays an enormous role in year-to-year water availability. These ordinary patterns now have extraordinary effects because urban populations have had a tremendous growth spurt: by 2050 the United Nations projects that two thirds of the world’s people will live in cities. Urban planners and engineers need to learn from past rainfall variability to improve their predictions and take future demand into account to build more resilient infrastructure.

      …since 2015 the region has been suffering from the worst drought in a century, and the water in those reservoirs dwindled perilously. Compounding the problem, Cape Town's population has grown substantially, increasing demand. The city actually did a pretty good job of keeping demand low by reducing leaks in the system, a major cause of water waste….But the government of South Africa was slow to declare a national disaster in the areas hit hardest by the drought, paving the way for the recent crisis.

      Cape Town is not alone. Since 2014 southeastern Brazil has been suffering its worst water shortage in 80 years, resulting from decreased rainfall, climate change, poor water management, deforestation and other factors. And many cities in India do not have access to municipal water for more than a few hours a day, if at all….

      In the U.S., the situation is somewhat better, but many urban centers still face water problems. California’s recent multiyear drought led to some of the state's driest years on record. Fortunately, about half of the state's urban water usage is for landscaping, so it was able to cut back on that fairly easily. But cities that use most of their water for more essential uses, such as drinking water, may not be so adaptable. In addition to the problems that drought, climate change and population growth bring, some cities face threats of contamination; crises such as the one in Flint, Mich., arose because the city changed the source of its water, causing lead to leach into it from pipes. If other cities are forced to change their water suppliers, they could face similar woes.

      Fortunately, steps can be taken to avoid urban water crises. In general, a “portfolio approach” that relies on multiple water sources is probably most effective. Cape Town has already begun implementing a number of water-augmentation projects, including tapping groundwater and building water-recycling plants. Many other cities will need to repair existing water infrastructure to cut down on leakage….

      The global community has an opportunity right now to take action to prevent a series of Day Zero crises. If we don’t act, many cities may soon face a time when there isn’t a drop to drink.

It’s Critical

[These excerpts are from an editorial by Steve Metz in the August 2018 issue of The Science Teacher.]

      One of the most important things students can learn in their science classes is the ability to think critically. Science content is important, of course. Our future scientists and engineers need deep understanding of the big ideas of science, as do all citizens. But students must also develop the life-long habit of critical, analytical thinking and evidence-based reasoning. Scientific facts and ideas are not enough. The ability to think critically gives these ideas meaning and is required for assessment of truth and falsehood.

      On the internet and social media the importance of critical thinking—and the notable lack thereof—are on full display. Clickbait promotes outrageous headlines untethered to reality. Statements are made with no concern for supporting evidence. We often accept a claim or counterclaim based on personal belief, reliance on authority, or downright prejudice rather than on evidence-based critical thinking.

      A recent study by researchers at Stanford found that students from middle school to college could not distinguish between a real and false news source, concluding that “Overall, young people’s ability to reason about the information on the Internet can be summed up in one word: bleak”….This is particularly troubling in a world where people—especially young people—increasingly depend on social media as their primary source of news and information.

      The unfortunate fact is that if left to itself, human thinking can be biased, distorted, uninformed, and incomplete. We often believe what we want to believe. Confirmation bias is real, especially in a world where media allow us to choose to view only what uncritically supports our own beliefs. The results include acceptance of fantastic conspiracy theories, pseudoscience, angry rhetoric, and untested assumptions—often leading to poor decision-making, both at the personal and societal level.

      Critical thinking is a difficult, higher-order skill that, like all such skills, requires intensive, deliberate practice. At its core is a healthy skepticism that questions everything, treats all conclusions as tentative, and sets aside interpretations that are not supported by multiple lines of reliable evidence….

Bringing Darwin Back

[These excerpts are from an article by Adam Piore in the August 2018 issue of Scientific American.]

      Straight talk about evolution in classrooms is less common than one might think. According to the most comprehensive study of public school biology teachers, just 28 percent implement the major recommendations and conclusions of the National Research Council, which call for them to “unabashedly introduce evidence that evolution has occurred and craft lesson plans so that evolution is a theme that unifies disparate topics in biology,” according to a 2011 Science article by Pennsylvania State University political scientists Michael Berkman and Eric Plutzer.

      Conversely, 13 percent of teachers (found in virtually every state in the Union and the District of Columbia) reported explicitly advocating creationism or intelligent design by spending at least an hour of class time presenting it in a positive light. Another 5 percent said they endorsed creationism in passing while answering student questions.

      The majority—60 percent of teachers—either attempted to avoid the topic of evolution altogether, quickly blew past it, allowed students to debate evolution, or “watered down” their lessons, Plutzer says. Many said they feared the reaction of students, parents and religious members of their community. And although only 2 percent of teachers reported avoiding the topic entirely, 17 percent, or roughly one in six teachers, avoided discussing human evolution. Many others simply raced through it.

      To confront these challenges, several organizations have launched new kinds of training sessions that are aimed at better preparing teachers for what they will face in the classroom. Moreover, a growing number of researchers have begun to examine the causes of these teaching failures and new ways to overcome them.

      Among many educators, a new idea has begun to take root: perhaps it is time to rethink the way evolution teachers grapple with religion (or choose not to grapple with it) in the classroom….

      For decades the most high-stakes, high-profile battles over evolution education were fought in the courts and state legislatures. The debate centered on, among other things, whether the subject itself could be banned or whether lawmakers could require that equal time be given to the biblical account of creation or the idea of “intelligent design.” Now, with those questions largely resolved—courts have overwhelmingly sided with those pushing to keep evolution in the classroom and creationism out of it—the battle lines have moved into the schools themselves….

      Today, many are now realizing, the far larger obstacle for the vast majority of ordinary science teachers is the legacy of acrimony left over from the decades of legal battles. In many communities, evolution education remains so charged with controversy that teachers either water down their lesson plans, devote as little time as possible to the subject or attempt to avoid it altogether.

      Meanwhile teachers in deeply religious communities such as Baconton face an additional challenge. Often they lack tools and methods that allow them to teach evolution in a way that does not force those students to take sides—a choice that usually does not go well for the scientists perceived to be at war with their community.

      Without such tools, even those teachers who do feel confident with the material often have trouble convincing students to listen to their lesson plans with an open mind—or to listen to them at all….

STEMM Education Should Get “HACD”

[These excerpts are from an article by Robert Root-Bernstein in the July 6, 2018, issue of Science.]

      If you’ve ever had a medical procedure, chances are you benefited from the arts. The stethoscope was invented by a French flautist/physician named René Laennec who recorded his first observations of heart sounds in musical notation. The suturing techniques used for organ transplants were adapted from lace-making by another Frenchman, Nobel laureate Alexis Carrel. The methods (and some of the tools) required to perform the first open-heart surgeries were invented by an African-American innovator named Vivien Thomas, whose formal training was as a master carpenter.

      But perhaps you’re more of a technology lover. The idea of instantaneous electronic communication was the invention of one of America's most famous artists, Samuel Morse, who built his first telegraph on a canvas stretcher. Actress Hedy Lamarr collaborated with the avant-garde composer George Antheil to invent modern encryption of electronic messages. Even the electronic chips that run our phones and computers are fabricated using artistic inventions: etching, silk-screen printing, and photolithography.

      On 7 May 2018, the Board on Higher Education and Workforce of the U.S. National Academies of Sciences, Engineering, and Medicine (NASEM) released a report recommending that humanities, arts, crafts, and design (HACD) practices be integrated with science, technology, engineering, mathematics, and medicine (STEMM) in college and post-graduate curricula. The motivation for the study is the growing divide in American educational systems between traditional liberal arts curricula and job-related specialization….

      Because the ecology of education is so complex, the report concludes that there is no one, or best, way to integrate arts and humanities with STEMM learning, nor any single type of pedagogical experiment or set of data that proves incontrovertibly that integration is the definitive answer to improved job preparedness. Nonetheless, a preponderance of evidence converges on the conclusion that incorporating HACD into STEMM pedagogies can improve STEMM performance.

      Large-scale statistical studies have demonstrated significant correlations between the persistent practice of HACD and various measures of STEMM achievement….STEMM professionals with avocations such as wood- and metalworking, printmaking, painting, and music composition are more likely to file and license patents and to found companies than those who lack such experience. Likewise, authors who publish high-impact papers are more likely to paint, sculpt, act, engage in wood- or metalworking, or pursue creative writing….

      Every scientist knows that correlation is not causation, but many STEMM professionals report that they actively integrate their HACD and STEMM practices….

The Power of Many

[These excerpts are from an article by Elizabeth Pennisi in the June 29, 2018, issue of Science.]

      Billions of years ago, life crossed a threshold. Single cells started to band together, and a world of formless, unicellular life was on course to evolve into the riot of shapes and functions of multicellular life today, from ants to pear trees to people. It’s a transition as momentous as any in the history of life, and until recently we had no idea how it happened.

      The gulf between unicellular and multicellular life seems almost unbridgeable. A single cell's existence is simple and limited. Like hermits, microbes need only be concerned with feeding themselves; neither coordination nor cooperation with others is necessary, though some microbes occasionally join forces. In contrast, cells in a multicellular organism, from the four cells in some algae to the 37 trillion in a human, give up their independence to stick together tenaciously; they take on specialized functions, and they curtail their own reproduction for the greater good, growing only as much as they need to fulfill their functions. When they rebel, cancer can break out….

      Multicellularity brings new capabilities. Animals, for example, gain mobility for seeking better habitat, eluding predators, and chasing down prey. Plants can probe deep into the soil for water and nutrients; they can also grow toward sunny spots to maximize photosynthesis. Fungi build massive reproductive structures to spread their spores….

      …The evolutionary histories of some groups of organisms record repeated transitions from single-celled to multicellular forms, suggesting the hurdles could not have been so high. Genetic comparisons between simple multicellular organisms and their single-celled relatives have revealed that much of the molecular equipment needed for cells to band together and coordinate their activities may have been in place well before multicellularity evolved. And clever experiments have shown that in the test tube, single-celled life can evolve the beginnings of multicellularity in just a few hundred generations—an evolutionary instant.

      Evolutionary biologists still debate what drove simple aggregates of cells to become more and more complex, leading to the wondrous diversity of life today. But embarking on that road no longer seems so daunting….

      …Some have argued that 2-billion-year-old, coil-shaped fossils of what may be blue-green or green algae—found in the United States and Asia and dubbed Grypania spiralis—or 2.5-billion-year-old microscopic filaments recorded in South Africa represent the first true evidence of multicellular life. Other kinds of complex organisms don’t show up until much later in the fossil record. Sponges, considered by many to be the most primitive living animal, may date back to 750 million years ago, but many researchers consider a group of frondlike creatures called the Ediacarans, common about 570 million years ago, to be the first definitive animal fossils. Likewise, fossil spores suggest multicellular plants evolved from algae at least 470 million years ago.

      Plants and animals each made the leap to multicellularity just once. But in other groups, the transition took place again and again. Fungi likely evolved complex multicellularity in the form of fruiting bodies—think mushrooms—on about a dozen separate occasions….The same goes for algae: Red, brown, and green algae all evolved their own multicellular forms over the past billion years or so….

See-through Solar Cells Could Power Offices

[These excerpts are from an article by Robert F. Service in the June 29, 2018, issue of Science.]

      Lance Wheeler looks at glassy skyscrapers and sees untapped potential. Houses and office buildings, he says, account for 75% of electricity use in the United States, and 40% of its energy use overall. Windows, because they leak energy, are a big part of the problem….

      A series of recent results points to a solution, he says: Turn the windows into solar panels. In the past, materials scientists have embedded light-absorbing films in window glass. But such solar windows tend to have a reddish or brown tint that architects find unappealing. The new solar window technologies, however, absorb almost exclusively invisible ultraviolet (UV) or infrared light. That leaves the glass clear while blocking the UV and infrared radiation that normally leak through it, sometimes delivering unwanted heat. By cutting heat gain while generating power, the windows “have huge prospects,” Wheeler says, including the possibility that a large office building could power itself.

      Most solar cells, like the standard crystalline silicon cells that dominate the industry, sacrifice transparency to maximize their efficiency, the percentage of the energy in sunlight converted to electricity. The best silicon cells have an efficiency of 25%. Meanwhile, a new class of opaque solar cell materials, called perovskites, is closing in on silicon with top efficiencies of 22%. Not only are the perovskites cheaper than silicon, but they can also be tuned to absorb specific frequencies of light by tweaking their chemical recipe.

Tomorrow’s Earth

[These excerpts are from an editorial by Jeremy Berg in the June 29, 2018, issue of Science.]

      Our planet is in a perilous state. The combined effects of climate change, pollution, and loss of biodiversity are putting our health and well-being at risk. Given that human actions are largely responsible for these global problems, humanity must now nudge Earth onto a trajectory toward a more stable, harmonious state. Many of the challenges are daunting, but solutions can be found….

      Many of today’s challenges can be traced back to the “Tragedy of the Commons” identified by Garrett Hardin in his landmark essay, published in Science 50 years ago. Hardin warned of a coming population-resource collision based on individual self-interested actions adversely affecting the common good. In 1968, the global population was about 3.5 billion; since then, the human population has more than doubled, a rise that has been accompanied by large-scale changes in land use, resource consumption, waste generation, and societal structures….

      Through collective action, we can indeed achieve planetary-scale mitigation of harm. A case in point is the Montreal Protocol on Substances that Deplete the Ozone Layer, the first treaty to achieve universal ratification by all countries in the world. In the 1970s, scientists had shown that chemicals used as refrigerants and propellants for aerosol cans could catalyze the destruction of ozone. Less than a decade later, these concerns were exacerbated by the discovery of seasonal ozone depletion over Antarctica. International discussions on controlling the use of these chemicals culminated in the Montreal Protocol in 1987. Three decades later, research has shown that ozone depletion appears to be decreasing in response to industrial and domestic reforms that the regulations facilitated.

      More recent efforts include the Paris Agreement of 2015, which aims to keep a global temperature rise this century well below 2°C and to strengthen the ability of countries to deal with the impacts of climate change, and the United Nations Sustainable Development Goals. As these examples show, there is widespread recognition that we must reverse damaging planetary change for the sake of the next generation. However, technology alone will not rescue us. For changes to be willingly adopted by a majority of people, technology and engineering will have to be integrated with social sciences and psychology….Although human population growth is escalating, we have never been so affluent. Along with affluence comes increasing use of energy and materials, which puts more pressure on the environment. How can humanity maintain high living standards without jeopardizing the basis of our survival?

      As our “Tomorrow’s Earth” series…will highlight, rapid research and technology developments across the sciences can help to facilitate the implementation of potentially corrective options. There will always be varying expert opinions on what to do and how to do it. But as long as there are options, we can hope to find the right paths forward.

Greenhouse Gases

[This excerpt is from chapter nine of Caesar’s Last Breath by Sam Kean.]

      Greenhouse gases got their name because they trap incoming sunlight, albeit not directly. Most incoming sunlight strikes the ground first and warms it. The ground then releases some of that heat back toward space as infrared light. (Infrared light has a longer wavelength than visible light; for our purposes, it's basically the same as heat.) Now, if the atmosphere consisted of nothing but nitrogen and oxygen, this infrared heat would indeed escape into space, since diatomic molecules like N2 and O2 cannot absorb infrared light. Gases like carbon dioxide and methane, on the other hand, which have more than two atoms, can and do absorb infrared heat. And the more of these many-atomed molecules there are, the more heat they absorb. That’s why scientists single them out as greenhouse gases: they’re the only fraction of the air that can trap heat this way.

      Scientists define the greenhouse effect as the difference between a planet’s actual temperature and the temperature it would be without these gases. On Mars, the sparse CO2 coverage raises its temp by less than 10°F. On Venus, greenhouse gases add a whopping 900°F. Earth sits between these extremes. Without greenhouse gases, our average global temperature would be a chilly 0°F, below the freezing point of water. With greenhouse gases, the average temp remains a balmy 60°F. Astronomers often talk about how Earth orbits at a perfect distance from the sun—a “Goldilocks distance” where water neither freezes nor boils. Contra that cliché, it’s actually the combination of distance and greenhouse gases that gives us liquid H2O. Based on orbiting distance alone, we’d be Hoth.
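
      Kean’s “chilly 0°F” figure can be double-checked with the standard radiative-balance formula: a planet with no greenhouse effect settles at the temperature where absorbed sunlight equals emitted infrared. The short Python sketch below is my own back-of-the-envelope check, not part of the excerpt; the solar constant, albedo, and Stefan-Boltzmann constant are assumed textbook values.

# Equilibrium temperature with no greenhouse effect:
# absorbed sunlight = emitted infrared, i.e., S*(1-a)/4 = sigma*T^4.
S = 1361.0       # solar constant at Earth's orbit, W/m^2 (assumed value)
a = 0.3          # Earth's average albedo (assumed value)
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

T = (S * (1 - a) / (4 * sigma)) ** 0.25  # kelvin
print(T, (T - 273.15) * 9 / 5 + 32)      # ~255 K, about -1 F: Kean's "chilly 0 F"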

      By far the most important greenhouse gas on Earth, believe it or not, is water vapor, which raises Earth’s temperature 40 degrees all by itself. Carbon dioxide and other trace gases contribute the remaining 20. So if water actually does more, why has CO2 become such a bogeyman? Mostly because carbon dioxide levels are rising so quickly. Scientists can look back at the air in previous centuries by digging up small bubbles trapped beneath sheets of ice in the Arctic. From this work, they know that for most of human history the air contained 280 molecules of carbon dioxide for every million particles overall. Then the Industrial Revolution began, and we started burning ungodly amounts of hydrocarbons, which release CO2 as a by-product. To give you a sense of the scale here, in an essay he wrote for his grandchildren in 1882, steel magnate Henry Bessemer boasted that Great Britain alone burned fifty-five Giza pyramids’ worth of coal each year. Put another way, he said, this coal could “build a wall round London of 200 miles in length, 100 feet high, and 41 feet 11 inches in thickness—a mass not only equal to the whole cubic contents of the Great Wall of China, but sufficient to add another 346 miles to its length.” And remember, this was decades before automobiles and modern shipping and the petroleum industry. Carbon dioxide levels reached 312 parts per million in 1950 and have since zoomed past 400.

      People who pooh-pooh climate change often point out, correctly, that CO2 concentrations have been fluctuating for millions of years, long before humans existed, sometimes peaking at levels a dozen times higher than those of today. It’s also true that Earth has natural mechanisms for removing excess carbon dioxide—a nifty negative feedback loop whereby ocean water absorbs excess CO2, converts it to minerals, and stores it underground. But when seen from a broader perspective, these truths deteriorate into half-truths. Concentrations of CO2 varied in the past, yes — but they’ve never spiked as quickly as in the past two centuries. And while geological processes can sequester CO2 underground, that work takes millions of years. Meanwhile, human beings have dumped roughly 2,500 trillion pounds of extra CO2 into the air in the past fifty years alone. (That's over 1.6 million pounds per second. Think about how little gases weigh, and you can appreciate how staggeringly large these figures are.) Open seas and forests will gobble up roughly half that CO2, but nature simply can’t bail fast enough to keep up.
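
      The parenthetical rate is easy to verify from the numbers Kean gives. A quick check in Python (the only inputs are the excerpt’s 2,500 trillion pounds and fifty years; the rest is unit conversion):

pounds = 2500e12                   # 2,500 trillion pounds of extra CO2
seconds = 50 * 365.25 * 24 * 3600  # seconds in fifty years
print(pounds / seconds)            # ~1.6e6, i.e., roughly 1.6 million pounds per second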

      Things look even grimmer when you factor in other greenhouse gases. Molecule for molecule, methane absorbs twenty-five times more heat than carbon dioxide. One of the main sources of methane on Earth today is domesticated cattle: each cow burps up an average of 570 liters of methane per day and farts 30 liters more; worldwide, that adds up to 175 billion pounds of CH4 annually — some of which degrades due to natural processes, but much of which doesn’t. Other gases do even more damage. Nitrous oxide (laughing gas) sponges up heat three hundred times more effectively than carbon dioxide. Worse still are CFCs, which not only kill ozone but trap heat several thousand times better than carbon dioxide. Collectively CFCs account for one-quarter of human-induced global warming, despite having a concentration of just a few parts per billion in the air.
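
      The cattle figures are also roughly self-consistent. The sketch below converts the excerpt’s 600 liters per cow per day into pounds per year, assuming a methane density of about 0.66 grams per liter at ordinary conditions (my assumption, not Kean’s), and then asks how many cattle the worldwide total implies:

liters_per_day = 570 + 30      # burps plus farts, per cow (from the excerpt)
density = 0.66                 # grams of CH4 per liter, near room temperature (assumed)
pounds_per_year = liters_per_day * 365 * density / 453.6
print(pounds_per_year)         # ~320 pounds of methane per cow per year
print(175e9 / pounds_per_year) # ~5.5e8 cattle, the right order of magnitude for the world herd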

      And CFCs aren’t even the worst problem. The worst problem is a positive feedback loop involving water. Positive feedback—like the screech you hear when two microphones get acquainted—involves a self-perpetuating cycle that spirals out of control. In this case, excess heat from greenhouse gases causes ocean water to evaporate at a faster rate than normal. Water, remember, is one of the best (i.e., worst) greenhouse gases around, so this increased water vapor traps more heat. This causes temperatures to inch up a bit more, which causes more evaporation. This traps still more heat, which leads to more evaporation, and so on. Pretty soon it's Venus outside. The prospect of a runaway feedback loop shows why we should care about things like a small increase in CFC concentrations. A few parts per billion might seem too small to make any difference, but if chaos theory teaches us anything, it’s that tiny changes can lead to huge consequences.

Chaos

[This excerpt is from chapter eight of Caesar’s Last Breath by Sam Kean.]

      …But Lorenz made us confront the fact that we might never be able to lift our veil of ignorance—that no matter how hard we stare into the eye of a hurricane, we might never understand its soul. Over the long run, that may be even harder to accept than our inability to bust up storms. Three centuries ago we christened ourselves Homo sapiens, the wise ape. We exult in our ability to think, to know, and the weather seems well within our grasp—it’s just pockets of hot and cold gases, after all. But we’d do well to remember our etymology: gas springs from chaos, and in ancient mythology chaos was something that not even the immortals could tame.

Fallout

[This excerpt is from chapter seven of Caesar’s Last Breath by Sam Kean.]

      The world had never known a threat quite like fallout. Fritz Haber during World War I had also weaponized the air, but after a good stiff breeze, Haber's gases generally couldn't harm you. Fallout could—it lingered for days, months, years. One writer at the time commented about the anguish of staring at every passing cloud and wondering what dangers it might hold. “No weather report since the one given to Noah,” he said, “has carried such foreboding for the human race.”

      More than any other danger, fallout shook people out of their complacency about nuclear weapons. By the early 1960s, radioactive atoms (from both Soviet and American tests) had seeded every last square inch on Earth; even penguins in Antarctica had been exposed. People were especially horrified to learn that fallout hit growing children hardest. One fission product, strontium-90, tended to settle onto breadbasket states in the Midwest, where plants sucked it up into their roots. It then began traveling up the food chain when cows ate contaminated grass. Because strontium sits below calcium on the periodic table, it behaves similarly in chemical reactions. Strontium-90 therefore ended up concentrated in calcium-rich milk—which then got concentrated further in children’s bones and teeth when they drank it. One nuclear scientist who had worked at Oak Ridge and then moved to Utah, downwind of Nevada, lamented that his two children had absorbed more radioactivity from a few years out West than he had in eighteen years of fission research.

      Even ardent patriots, even hawks who considered the Soviet Union the biggest threat to freedom and apple pie the world had ever seen, weren’t exactly pro-putting-radioactivity-into-children's-teeth. Sheer inertia allowed nuclear tests to continue for a spell, but by the late 1950s American citizens began protesting en masse. The activist group SANE ran ads that read “No contamination without representation,” and within a year of its founding in 1957, SANE had twenty-five thousand members. Detailed studies of weather patterns soon bolstered their case, since scientists now realized just how quickly pollutants could spread throughout the atmosphere. Pop culture weighed in as well, with Spider-Man and Hulk and Godzilla—each the victim of a nuclear accident—debuting during this era. The various protests culminated in the United States, the Soviet Union, and Great Britain signing a treaty to stop all atmospheric nuclear testing in 1963. (China continued until 1974, France until 1980.) And while this might seem like ancient history—JFK signed the test-ban treaty, after all—we're still dealing with the fallout of that fallout today, in several ways.

Nuclear Bombs

[This excerpt is from chapter seven of Caesar’s Last Breath by Sam Kean.]

      The Manhattan Project wasn’t a scientific breakthrough as much as an engineering triumph. All the essential physics had been worked out before the war even started, and the truly heroic efforts involved not blackboards and eurekas but elbow grease and backbreaking labor. Consider the refinement of uranium. Among other steps, workers had to convert over 20,000 pounds of raw uranium ore into a gas (uranium hexafluoride) and then whittle it down, almost atom by atom, to 112 pounds of fissionable uranium-235. This required building a $500 million plant ($6.6 billion today) in Oak Ridge, Tennessee, that sprawled across forty-four acres and used three times as much electricity as all of Detroit. All that fancy theorizing about bombs would have gone for naught if not for this unprecedented investment.

      Plutonium was no picnic, either. Making plutonium (it doesn’t exist in nature) proved every bit as challenging and costly as refining uranium. Detonating the stuff was an even bigger hassle. Although plutonium is quite radioactive—inhaling a tenth of a gram of it will kill most adults—the small amount of plutonium that scientists at Los Alamos were working with wouldn’t undergo a chain reaction and explode unless they increased its density dramatically. Plutonium metal is already pretty dense, though, so the only plausible way to do this was by crunching it together with a ring of explosives. Unfortunately, while it's easy to blow something apart with explosives, it's well-nigh impossible to collapse it into a smaller shape in a coherent way. Los Alamos scientists spent many hours screaming at one another over the details.

      By spring 1945, they’d finally sketched out a plausible setup for the explosives. But the idea needed confirming, so they scheduled the famous Trinity test for July 16, 1945. Responsibility for arming the device—nicknamed the Gadget—fell to Louis Slotin, a young Canadian physicist who had a reputation for being foolhardy (perfect for bomb work). After he'd climbed the hundred-foot Trinity tower and assembled the bomb, Slotin and his bosses accepted a $2 billion receipt for it and drove off to watch from the base camp ten miles distant.

      At 5:30 a.m. the ring of explosives went off and crushed the Gadget’s grapefruit-sized plutonium core into a ball the size of a peach pit. A tiny dollop in the middle—beryllium mixed with polonium—then kicked out a few subatomic particles called neutrons, which really got things hopping. These neutrons stuck to nearby plutonium atoms, rendering them unstable and causing them to fission, or split. This splitting released loads of energy: the kick from a single plutonium atom can make a grain of sand jump visibly, even though a plutonium atom is a hundred thousand million billion times smaller. Crucially, each split also released more neutrons. These neutrons then glommed onto other plutonium atoms, rendered them unstable, and caused more fissioning.

      Within a few millionths of a second, eighty generations of plutonium atoms had fissioned, releasing an amount of energy equal to fifty million pounds of TNT. What happened next gets complicated, but all that energy vaporized everything within a chip shot of the bomb — the metal tower, the sand below, every lizard and scorpion. More than vaporized, actually. The temperature near the core spiked so high, to tens of millions of degrees, that electrons within the vapor were torn loose from their atoms and began to roam around on their own, like fireflies. This produced a new state of matter called a plasma, a sort of ubergas most commonly found inside the nuclear furnace of stars.
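
      Kean’s numbers hang together at the order-of-magnitude level. The rough check below assumes a doubling chain (each fission triggers about two more) and the standard figure of roughly 200 MeV released per plutonium fission; neither constant appears in the excerpt, so treat this as an illustration, not his calculation:

generations = 80
fissions = 2 ** generations          # ~1.2e24 atoms split in a doubling chain (assumed)
joules = fissions * 200 * 1.602e-13  # ~200 MeV per fission; 1 MeV = 1.602e-13 J
print(joules / 4.184e9)              # ~9,000 tons of TNT, the same order as the
                                     # 25,000 tons (fifty million pounds) quoted above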

      Given the incredible energies here, even sober scientists like Robert Oppenheimer (director of the Manhattan Project) had seriously considered the possibility that Trinity would ignite the atmosphere and fry everything on Earth's surface. That didn’t happen, obviously, but each of the several hundred men who watched that morning—some of whom slathered their faces in sunscreen and shaded their eyes behind sunglasses—knew they’d unleashed a new type of hell on the world. After Trinity quieted down Oppenheimer famously recalled a line from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” Less famously, Oppenheimer also recalled something that Alfred Nobel once said, about how dynamite would render war so terrible that humankind would surely give it up. How quaint that wish seemed now, in the shadow of a mushroom cloud.

      After the attacks on Hiroshima and Nagasaki in early August, most Manhattan Project scientists felt a sense of triumph. Over the next few months, however, the stories that emerged from Japan left them with a growing sense of revulsion. They'd known their marvelously engineered bombs would kill tens of thousands of people, obviously. But the military had already killed comparable numbers of civilians during the firebombings of Dresden and Tokyo. (Some historians estimate that more human beings died during the six hours of the Tokyo firebombing—at least 100,000—than in any attack in history, then or since.)

      What appalled most scientists about Hiroshima and Nagasaki, then, wasn’t the immediate body count but the lingering radioactivity. Most physicists before this had a rather cavalier attitude about radioactivity; stories abound about their macho disdain for the dangers involved. Japan changed that. Fallout from the bomb continued to poison people for months afterward—killing their cells, ulcerating their skin, turning even the salt in their blood and the fillings in their teeth into tiny radioactive bombs.

Night Light

[This excerpt is from the interlude between the sixth and seventh chapter of Caesar’s Last Breath by Sam Kean.]

      For some context here, recall that Joseph Priestley’s Lunar Society met on the Monday nearest the full moon because its members needed moonlight to find their way home. But Priestley’s generation was among the last to have to worry about such problems. Several of the gases that scientists discovered in the late 1700s burned with uncanny brightness, and within a half century of Priestley’s death in 1804, gas lighting had become standard throughout Europe. Edison’s lightbulb gets all the historical headlines, but it was coal gas that first eradicated darkness in the modern world.

      Human beings had artificial lighting before 1800, of course—wood fires, candles, oil lamps. But however romantic bonfires and candlelit dinners seem nowadays, those are actually terrible sources of light. Candles especially throw off a sickly, feeble glow that, as one historian joked, did little but “make darkness visible.” (A French saying from the time captured the sentiment in a different way: “By candlelight, a goat is ladylike.”) Not everyone could afford candles on a daily basis anyway—imagine if all your lightbulbs needed replacing every few nights. Larger households and businesses might go through 2,500 candles per year. To top it off, candles released noxious smoke indoors, and it was all too easy to knock one over and set your house or factory ablaze.

      In retrospect coal gas seems the obvious solution to these problems. Coal gas is a heterogeneous mix of methane, hydrogen, and other gases that emerge when coal is slowly heated. Both methane and hydrogen burn brilliantly alone, and when burned together, they produce light dozens of times bolder and brighter than candlelight. But as with laughing gas, people considered coal gas little more than a novelty at first. Hucksters would pack crowds into dark rooms for a halfpenny each and dazzle them with gas pyrotechnics. And it wasn’t just the brilliance that impressed. Because they didn’t depend on wicks, coal-gas flames could defy gravity and leap out sideways or upside down. Some showmen even combined different flames to make flowers and animal shapes, somewhat like balloon animals.

      Gradually, people realized that coal gas would make fine interior lighting. Gas jets burned steadily and cleanly, without a candle’s flickering and smoking, and you could secure gas fixtures to the wall, decreasing the odds of things catching fire. An eccentric engineer named William Murdoch—the same man who invented a steam locomotive in James Watt's factory, before Watt told him to knock it off—installed the world’s first gas-lighting system in his home in Birmingham in 1792. Several local businessmen were impressed enough to install gas lighting in their factories shortly thereafter.

      After these early adopters, city governments began using coal gas to light their streets and bridges. Cities usually stored the gas inside giant tanks (called gasometers) and piped it through underground mains, much like water today. London alone had forty thousand gas street lamps by 1823, and other cities in Europe followed suit. (Paris didn't want bloody London usurping its reputation as the city of light, after all.) For the first time in history, human settlements would have been visible from space by night.

      Public buildings came online next, including railway stations, churches, and especially theaters, which benefitted more than probably any other institution. With more light available, theater directors could position actors farther back onstage, allowing for more depth of movement. A related technology called limelight—which involved streaming oxygen and hydrogen over burning quicklime—provided even brighter light and led to the first spotlights. Because the audience could see them clearly now, actors could also get by with less makeup and could gesture in more realistic, less histrionic ways.

      Even villages in rural England had rudimentary gas mains by the mid-1800s, and the spread of cheap, consistent lighting changed society in several ways. Crime dropped, since thugs and lowlifes could no longer hide under the cloak of darkness. Nightlife exploded as taverns and restaurants began staying open later. Factories instituted regular working hours since they no longer had to shut down after sunset in the winter, and some manufacturers operated all night to churn out goods.

Science in 1900

[This excerpt is from chapter six of Caesar’s Last Breath by Sam Kean.]

      Chemists in the 1780s fulfilled maybe the oldest dream of humankind, to snap the tethers of gravity and take flight. A century later a physicist solved one of humankind’s most enduring mysteries, why the sky is blue. So you can forgive scientists for having a pretty lofty view of themselves circa 1900, and for assuming that they had a full reckoning of how air worked. Thanks to chemists from Priestley to Ramsay, they now knew all its major components. Thanks to the ideal gas law, they now knew how air responded to almost any change in temperature and pressure you could throw at it. Thanks to Charles and Gay-Lussac and other balloonists, they now knew what the air was like even miles above our heads. There were a few loose ends, sure, in fields like atomic physics and meteorology. But all scientists needed to do was extrapolate from known gas laws to cover those cases. They must have felt achingly close to figuring out their world.

      Guess what. Scientists not only ran into difficulties tying those loose ends together, they eventually had to construct whole new laws of nature, out of sheer desperation, to make sense of what was going on. Atomic physics of course led to the absurdities of quantum mechanics and the horrors of nuclear warfare. And tough as it is to believe, meteorology, one of the sleepiest branches of science around, stirred to life chaos theory, one of the most profound and troubling currents in twentieth-century thought.

Steel

[This excerpt is from the interlude between the fifth and sixth chapter of Caesar’s Last Breath by Sam Kean.]

      The story of Bessemer’s discoveries in this field is long and convoluted, and there’s no room to get into it all here. (It also involved several other chemists and engineers, most of whom he stingily refused to credit in later years.) Suffice it to say that through a series of happy accidents and shrewd deductions, Bessemer figured out two shortcuts to making steel.

      He’d start by melting down cast iron, same as most smelters. He then added oxygen to the mix, to strip out the carbon. But instead of using iron ore to supply the oxygen atoms, like everyone else, Bessemer used blasts of air —a cheaper, faster substitute. The next shortcut was even more important. Rather than mix in lots and lots of oxygen gas and strip all the carbon out of his molten cast iron, Bessemer decided to stop the air flow partway through. As a result, instead of carbon-free wrought iron, he was left with somewhat-carbon-infused steel. In other words, Bessemer could make steel directly, without all the extra steps and expensive material.

      He’d first investigated this process by bubbling air into molten cast iron with a long blowpipe. When this worked, he arranged for a larger test at a local foundry: seven hundred pounds of molten iron in a three-foot-wide cauldron. Rather than rely on his own lungs, this time he had several steam engines blast compressed air through the mixture. The workers at the foundry gave Bessemer pitying looks when he explained that he wanted to make steel with puffs of air. And indeed, nothing happened for ten long minutes that afternoon. All of a sudden, he later recalled, “a succession of mild explosions” rocked the room. White flames erupted from the cauldron and molten iron whooshed out in “a veritable volcano,” threatening to set the ceiling on fire.

      After waiting out the pyrotechnics, Bessemer peered into the cauldron. Because of the sparks, he hadn't been able to shut down the blasts of air in time, and the batch was pure wrought iron. He grinned anyway: here was proof his process worked. All he had to do now was figure out exactly when to cut the airflow off, and he’d have steel.

      At this point things moved quickly for Bessemer. He went on a patent binge over the next few years, and the foundry he set up managed to screw down the cost of steel production from around £40 per ton to £7. Even better, he could make steel in under an hour, rather than weeks. These improvements finally made steel available for large-scale engineering projects—a development that, some historians claim, ended the three-thousand-year-old Iron Age in one stroke, and pushed humankind into the Age of Steel.

      Of course, that’s a retrospective judgment. At the time, things weren’t so rosy, and Bessemer actually had a lot of trouble persuading people to trust his steel. The problem was, each batch of steel varied significantly in quality, since it proved quite tricky to judge when to stop the flow of air. Worse, the excess phosphorus in English iron ore left most batches brittle and prone to fracturing at cold temperatures. (Bessemer, the lucky devil, had run his initial tests on phosphorus-free ore from Wales; otherwise they too would have failed.) Other impurities introduced other structural problems, and each snafu sapped the public's confidence in Bessemer steel a little more. Like Thomas Beddoes with gaseous medicine, colleagues and competitors accused Bessemer of overselling steel, even of perpetrating a fraud.

      Over the next decade Bessemer and others labored with all the fervor of James Watt to eliminate these problems, and by the 1870s steel was, objectively, a superior metal compared to cast iron—stronger, lighter, more reliable. But you can’t really blame engineers for remaining wary. Steel seemed too good to be true—it seemed impossible that puffs of air could really toughen up a metal so much—and years of problems with steel had corroded their faith anyway….

The Final Mysterians

[These excerpts are from an article by Michael Shermer in the July 2018 issue of Scientific American.]

      …For millennia, the greatest minds of our species have grappled to gain purchase on the vertiginous ontological cliffs of three great mysteries—consciousness, free will and God—without ascending anywhere near the thin air of their peaks. Unlike other inscrutable problems, such as the structure of the atom, the molecular basis of replication and the causes of human violence, which have witnessed stunning advancements of enlightenment, these three seem to recede ever further away from understanding, even as we race ever faster to catch them in our scientific nets.

      …I contend that not only consciousness but also free will and God are mysterian problems—not because we are not yet smart enough to solve them but because they can never be solved, not even in principle, because of how the concepts are conceived in language. Call those of us in this camp the “final mysterians.”

      …It is not possible to know what it is like to be a bat (in philosopher Thomas Nagel's famous thought experiment), because if you altered your brain and body from humanoid to batoid, you would just be a bat, not a human knowing what it feels like to be a bat….

      …We are not inert blobs of matter bandied about the pinball machine of life by the paddles of nature's laws; we are active agents within the causal net of the universe, both determined by it and helping to determine it through our choices….

      If the creator of the universe is supernatural—outside of space and time and nature's laws—then by definition, no natural science can discover God through any measurements made by natural instruments. By definition, this God is an unsolvable mystery. If God is part of the natural world or somehow reaches into our universe from outside of it to stir the particles (to, say, perform miracles like healing the sick), we should be able to quantify such providential acts. This God is scientifically soluble, but so far all claims of such measurements have yet to exceed statistical chance. In any case, God as a natural being who is just a whole lot smarter and more powerful than us is not what most people conceive of as deific.

      Although these final mysteries may not be solvable by science, they are compelling concepts nonetheless, well deserving of our scrutiny if for no other reason than that it may lead to a deeper understanding of our nature as sentient, volitional, spiritual beings.

The Science of Anti-Science Thinking

[These excerpts are from an article by Douglas T. Kenrick, Adam B. Cohen, Steven L. Neuberg and Robert B. Cialdini in the July 2018 issue of Scientific American.]

      On a regular basis, government decision makers enact policies that fail to heed decades of evidence on climate change. In public opinion surveys, a majority of Americans choose not to accept more than a century of evidence on evolution by natural selection. Academic intellectuals put the word “science” in quotes, and members of the lay public reject vaccinations for their children.

      Scientific findings have long met with ambivalent responses: A welcome mat rolls out instantly for horseless buggies or the latest smartphones. But hostility arises just as quickly when scientists’ findings challenge the political or religious status quo. Some of the British clergy strongly resisted Charles Darwin’s theory of evolution by natural selection. Samuel Wilberforce, bishop of Oxford, asked natural selection proponent Thomas Huxley, known as “Darwin’s bulldog,” on which side of his family Huxley claimed descent from an ape.

      In Galileo’s time, officials of the Roman Catholic Church, well-educated and progressive intellectuals in most respects, expressed outrage when the Renaissance scientist reported celestial observations that questioned the prevailing belief that Earth was the center of the universe. Galileo was placed under house arrest and forced to recant his views as heresy.

      In principle, scientific thinking should lead to decisions based on consideration of all available information on a given question. When scientists encounter arguments not firmly grounded in logic and empirical evidence, they often presume that purveyors of those alternative views either are ignorant of the facts or are attempting to discourage their distribution for self-serving reasons—tobacco company executives suppressing findings linking tobacco use to lung cancer, for instance. Faced with irrational or tendentious opponents, scientists often grow increasingly strident. They respond by stating the facts more loudly and clearly in the hope that their interlocutors will make more educated decisions.

      Several lines of research, however, reveal that simply presenting a litany of facts does not always lead to more objective decision making. Indeed, in some cases, this approach might actually backfire. Human beings are intelligent creatures, capable of masterful intellectual accomplishments. Unfortunately, we are not completely rational decision makers….

      Although natural selection stands out as one of the most solidly supported scientific theories ever advanced, the average citizen has not waded through textbooks full of evidence on the topic. In fact, many of those who have earned doctorates in scientific fields, even for medical research, have never taken a formal course in evolutionary biology. In the face of these challenges, most people rely on mental shortcuts or the pronouncements of experts, both strategies that can lead them astray. They may also rely—at their own peril—on intuition and gut instinct….

      Fear increases the tendency toward conformity. If you wish to persuade others to reduce carbon emissions, take care whom you scare: a message that arouses fear of a dystopian future might work well for an audience that accepts the reality of climate change but is likely to backfire for a skeptical audience….

Fish Bombs

[These excerpts are from an article by Katherine Kornei in the July 2018 issue of Scientific American.]

      Rogue fishers around the world toss explosives into the sea and scoop up bucketloads of stunned or dead fish, a practice that is illegal in many nations and can destroy coral reefs and wreak havoc on marine biodiversity. Catching perpetrators amid the vastness of the ocean has long proved almost impossible, but researchers working in Malaysia have now adapted acoustic sensors—originally used to locate urban gunfire—to pinpoint these marine blasts within tens of meters.
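
      The article does not spell out the method, but gunshot-location systems typically work by multilateration: several synchronized sensors time-stamp the blast, and the differences in arrival times pin down the source. The toy Python sketch below illustrates the idea with a grid search; the hydrophone layout, sound speed, and all names are my own illustrative assumptions, not details from the research.

import itertools

SPEED = 1500.0  # approximate speed of sound in seawater, m/s (assumed)

def locate(sensors, arrivals, step=10):
    # Find the grid point whose predicted arrival-time differences
    # best match the measured ones (absolute emission time is unknown).
    best, best_err = None, float('inf')
    for x in range(0, 1001, step):
        for y in range(0, 1001, step):
            t = [((x - sx) ** 2 + (y - sy) ** 2) ** 0.5 / SPEED for sx, sy in sensors]
            err = sum(((t[i] - t[j]) - (arrivals[i] - arrivals[j])) ** 2
                      for i, j in itertools.combinations(range(len(sensors)), 2))
            if err < best_err:
                best, best_err = (x, y), err
    return best

sensors = [(0, 0), (1000, 0), (0, 1000)]  # hypothetical hydrophone positions, in meters
blast = (400, 250)                        # a test source
arrivals = [((blast[0] - sx) ** 2 + (blast[1] - sy) ** 2) ** 0.5 / SPEED
            for sx, sy in sensors]
print(locate(sensors, arrivals))          # recovers (400, 250)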

      Growing human populations and international demand for seafood are pushing fishers to increase their catches….Shock waves from the explosions rupture the fishes’ swim bladders, immobilizing the fish and causing some to float to the surface. And the bombs themselves are easy to make: ammonium nitrate (a common fertilizer) and diesel fuel are mixed in an empty bottle and topped with a detonator and waterproof fuse….

      Malaysian officials are proposing an initiative to promote fish farming….

Auto Mileage Rollback Is a Sick Idea

[These excerpts are from an article by Rob Jackson in the July 2018 issue of Scientific American.]

      Seven years ago representatives from General Motors, Ford, Chrysler and other car manufacturers joined President Barack Obama to announce historic new vehicle mileage standards. The industry-supported targets would have doubled the fuel efficiency of cars and light trucks in the U.S. to 54.5 miles per gallon by 2025.

      But in April the Environmental Protection Agency announced plans to roll back part or all of the new standards, saying they were “wrong” and based on “politically charged expediency.” Let me explain why this terrible idea should unify Republicans and Democrats in opposition. The rollback is going to harm us economically and hurt us physically.

      The Obama-era standards made sense for many reasons, starting with our wallets. It is true that each vehicle would initially cost $1,000 to $2,000 more as manufacturers researched lighter materials and built stronger vehicles. In return, though, we would save about $3,000 to $5,000 in gas over the life of each vehicle, according to a 2016 report by Consumers Union. (Because gas prices were higher in 2011 and 2012, when the standards were proposed, estimated savings back then were significantly higher—about $8,000 per car. Prices have risen somewhat since 2016.) This research will also help auto companies compete internationally.

      National security and trade deficits are also reasons to keep the existing standards. Despite a growing domestic oil industry, the U.S. imported more than 10 million barrels of oil daily last year, about a third of it coming from OPEC nations. Imports added almost $100 billion to our trade deficit, sending hard-earned dollars to Canada, Saudi Arabia, Venezuela, Iraq and Colombia. Better gas mileage could eliminate half of our OPEC imports. It would also make our country safer and more energy-independent.

      The biggest reason to support the fuel-efficiency standards, however, is the link between vehicle exhaust and human health. More than four in 10 Americans—some 134 million of us—live in regions with unhealthy particulate pollution and ozone in the air. That dirty air makes people sick and can even kill them. A 2013 study by the Massachusetts Institute of Technology estimated that about 200,000 Americans now die every year from air pollution. The number-one cause of those deaths—more than 50,000 of them—is air pollution from road traffic….

      Here is what a rollback in mileage standards would mean: Thousands of Americans would die unnecessarily from cardiovascular and other diseases every year. Our elderly would face more bronchitis and emphysema. More children would develop asthma—a condition that, according to an estimate by the Centers for Disease Control and Prevention, affects more than one in 12. Millions of your sons and daughters have it. My son does, too.

      Rarely in my career have I seen a proposal more shortsighted and counterproductive than this one. Please say there is still time to change our minds.

How Did Homo sapiens Evolve?

[This excerpt is from an editorial by Julia Galway-Witham and Chris Stringer in the June 22, 2018, issue of Science.]

      Over the past 30 years, understanding of Homo sapiens evolution has advanced greatly. Most research has supported the theory that modern humans had originated in Africa by about 200,000 years ago, but the latest findings reveal more complexity than anticipated. They confirm interbreeding between H. sapiens and other hominin species, provide evidence for H. sapiens in Morocco as early as 300,000 years ago, and reveal a seemingly incremental evolution of H. sapiens cranial shape. Although the cumulative evidence still suggests that all modern humans are descended from African H. sapiens populations that replaced local populations of archaic humans, models of modern human origins must now include substantial interactions with those populations before they went extinct. These recent findings illustrate why researchers must remain open to challenging the prevailing theories of modern human origins.

      Although living humans vary in traits such as body size, shape, and skin color, they clearly belong to a single species, H. sapiens, characterized by shared features such as a narrow pelvis, a large brain housed in a globular braincase, and reduced size of the teeth and surrounding skeletal architecture. These traits distinguish modern humans from other now-extinct humans (members of the genus Homo), such as the Neandertals in western Eurasia (often classified as H. neanderthalensis) and, by inference, from the Denisovans in eastern Eurasia (a genetic sister group of Neandertals). How did H. sapiens relate to these other humans in evolutionary and taxonomic terms, and how do those relationships affect evolving theories of modern human origins?

      By the 1980s, the human fossil record had grown considerably, but it was still insufficient to demonstrate whether H. sapiens had evolved from local ancestors across much of the Old World (multiregional evolution) or had originated in a single region and then dispersed from there (single origin). In 1987, a study using mitochondrial DNA from living humans indicated a recent and exclusively African origin for modern humans. In the following year one of us coauthored a review of the fossil and genetic data, expanding on that discovery and supporting a recent African origin (RAO) for our species.

      The RAO theory posits that by 60,000 years ago, the shared features of modern humans had evolved in Africa and, via population dispersals, began to spread from there across the world. Some paleoanthropologists have resisted this single-origin view and the narrow definition of H. sapiens to exclude fossil humans such as the Neandertals. In subsequent decades, genetic and fossil evidence supporting the RAO theory continued to accumulate, such as in studies of the genetic diversity of African and non-African modern humans and the geographic distribution of early H. sapiens fossils, and this model has since become dominant within mainstream paleoanthropology. In recent years, however, new fossil discoveries, the growth of ancient DNA research, and improved dating techniques have raised questions about whether the RAO theory of H. sapiens evolution needs to be revised or even abandoned.

      Different views on the amount of genetic and skeletal shape variation that is reasonably subsumed within a species definition directly affect developing models of human origins. For many researchers, the anatomical distinctiveness of modern humans and Neandertals has been sufficient to place them in separate species; for example, variation in traits such as cranial shape and the anatomy of the middle and inner ears is greater between Neandertals and H. sapiens than between well-recognized species of apes. Yet, Neandertal genome sequences and the discovery of past interbreeding between Neandertals and H. sapiens provide support for their belonging to the same species under the biological species concept, and this finding has revived multiregionalism. The recent recognition of Neandertal art further narrows—or for some researchers removes—the perceived behavioral gap between the two supposed species.

      These challenges to the uniqueness of H. sapiens were a surprise to many and question assignments of hominin species in the fossil record. However, the limitations of the biological species concept have long been recognized. If it were to be implemented rigorously, many taxa within mammals—such as those in Equus, a genus that includes horses, donkeys, and zebras—would have to be merged into a single species. Nevertheless, in our view, species concepts need to have a basis in biology. Hence, the sophisticated abilities of Neandertals, however interesting, are not indicative of their belonging to H. sapiens. The recently recognized interbreeding between the late Pleistocene lineages of H. sapiens, Neandertals, and Denisovans is nonetheless important, and the discovery of even more compelling evidence to support Neandertals and modern humans belonging to the same species would have a profound effect on models of the evolution of H. sapiens.

Students Report Less Sex, Drugs

[This brief article by Jeffrey Brainard is in the June 22, 2018, issue of Science.]

      Fewer U.S. high school students report having sex and taking illicit drugs, but other risky activity remains alarmingly high, according to a biennial report released last week by the U.S. Centers for Disease Control and Prevention. Even as sexual activity declined, fewer reported using condoms during their most recent intercourse, increasing their risks of HIV and other sexually transmitted diseases. And nearly one in seven students reported misusing prescription opioids, for example by taking them without a prescription—a behavior that can lead to future injection drug use and risks of overdosing and contracting HIV. Gay, lesbian, and bisexual students reported experiencing significantly higher levels of violence in school, including bullying and sexual violence, and higher risks for suicide, depression, substance use, and poor academic performance than other students did. Nearly 15,000 students took the survey.

Emerging Stem Cell Ethics

[These excerpts are from an editorial by Douglas Sipp, Megan Munsie and Jeremy Sugarman in the June 22, 2018, issue of Science.]

      It has been 20 years since the first derivation of human embryonic stem cells. That milestone marked the start of a scientific and public fascination with stem cells, not just for their biological properties but also for their potentially transformative medical uses. The next two decades of stem cell research animated an array of bioethical debates, from the destruction of embryos to derive stem cells to the creation of human-animal hybrids. Ethical tensions related to stem cell clinical translation and regulatory policy are now center stage….Care must be taken to ensure that entry of stem cell-based products into the medical marketplace does not come at too high a human or monetary price.

      Despite great strides in understanding stem cell biology, very few stem cell-based therapeutics are as yet used in standard clinical practice. Some countries have responded to patient demand and the imperatives of economic competition by promulgating policies to hasten market entry of stem cell-based treatments. Japan, for example, created a conditional approvals scheme for regenerative medicine products and has already put one stem cell treatment on the market based on preliminary evidence of efficacy. Italy provisionally approved a stem cell product under an existing European Union early access program. And last year, the United States introduced an expedited review program to smooth the path for investigational stem cell-based applications, at least 16 of which have been granted already. However, early and perhaps premature access to experimental interventions has uncertain consequences for patients and health systems.

      A staggering amount of public money has been spent on stem cell research globally. Those seeking to develop stem cell products may now not only leverage that valuable body of resulting scientific knowledge but also find that their costs for clinical testing are markedly reduced by deregulation. How should this influence affordability and access? The state and the taxpaying public's interests should arguably be reflected in the pricing of stem cell products that were developed through publicly funded research and the regulatory subsidies. Detailed programs for recouping taxpayers' investments in stem cell research and development must be established.

      Rushing new commercial stem cell products into the market also entails considerations inherent to the ethics of using pharmaceuticals and medical devices. For example, once a product is approved for a given indication, it becomes possible for physicians to prescribe it for “off-label use.” We have already witnessed the untoward effects of the elevated expectations that stem cells can serve as a kind of cellular panacea, a misconception that underlies the direct-to-consumer marketing of unproven uses of stem cells. Once off-label use of approved products becomes an option, there may be a new flood of untested therapeutic claims with which to contend….

      The new frontiers of stem cell-based medicine also raise questions about the use of fast-tracked products. In countries where healthcare is not considered a public good, who should pay for post-market efficacy testing? Patients already bear a substantial burden of risk when they volunteer for experimental interventions. Frameworks that ask them to pay to participate in medical research warrant much closer scrutiny than has been seen thus far.

      …For stem cell treatments, attaining this balance will require frank and open discussion between all stakeholders, including the patients it seeks to benefit and the taxpayers who make it possible.

A Good Day’s Work

[This excerpt is from an editorial by Steve Metz in the June 2018 issue of The Science Teacher.]

      …It is easy to drift through science and math classes wondering, “Why do I need to learn this?” Many do not see a college science major or science career in their future, making the need to learn science less than obvious. For underrepresented students of color and young women—who often lack exposure to STEM role models, especially in engineering and the physical sciences—a vision of a STEM career can seem even more remote.

      Of course, the best reason for learning science is that knowing science is important in and of itself, as part of humankind’s search for understanding. Scientific knowledge makes everything—a walk in the woods, reading a newspaper, a family visit to a science museum or beach—simply more interesting. The skepticism and critical thinking that are part of the scientific world view are essential for informed civic participation and evidence-based social discourse.

      But the next-best answer to the question—why do I need to learn this?—may strike students as more practical and persuasive: STEM careers can provide excellent employment opportunities with good salaries and better-than-average job security. The Bureau of Labor Statistics reports that among occupations surveyed, all 20 of the fastest growing and 18 of the 20 highest paying are STEM fields.

      As science teachers, we need to get the word out: Learning science, engineering, and mathematics can lead to a life's work that is interesting, rewarding, and meaningful. Our classes must encourage students to pursue these fields and provide them with the skills necessary for success.

Copper Hemispheres

[This excerpt is from the second chapter of Caesar’s Last Breath by Sam Kean.]

      In general, a summons to stand in judgment before the Holy Roman Emperor was not an occasion for celebration. But Otto Gericke, the mayor of Magdeburg, Germany, felt his confidence soar as his cart rattled south. After all, he was about to put on perhaps the greatest science experiment in history.

      Gericke, a classic gentleman-scientist, was obsessed with the idea of vacuums, enclosed spaces with nothing inside. All most people knew about vacuums then was Aristotle’s dictum that nature abhors and will not tolerate them. But Gericke suspected nature was more open-minded than that, and he set about trying to create a vacuum in the early 1650s. His first attempt involved evacuating water from a barrel, using the local fire brigade’s water pump. The barrel was initially full of water and perfectly sealed, so that no air could get in. Pumping out the water should therefore have left only empty space behind. Alas, after a few minutes of pumping, the barrel staves leaked and air rushed in anyway. He next tried evacuating a hollow copper sphere using a similar setup. It held up longer, but halfway through the process the sphere imploded, collapsing with a bang that left his ears ringing.

      The violence of the implosion startled Gericke—and set his mind racing. Somehow, mere air pressure—or, more precisely, the difference in air pressure between the inside and outside of the sphere—had crumpled it. Was a gas really strong enough to crunch metal? It didn't seem likely. Gases are so soft, after all, so pillowy. But Gericke could see no other answer, and when his mind made that leap, it proved to be a turning point in the human relationship with gases. For perhaps the first time in history, someone realized just how strong gases are, how muscular, how brawny. Conceptually, it was but a short step from there to steam power and the Industrial Revolution.

      Before the revolution could begin, however, Gericke had to convince his contemporaries how powerful gases were, and luckily he had the scientific skills to pull this off. In fact, a demo he put together over the next decade began stirring up such wild rumors in central Europe that Emperor Ferdinand III eventually summoned Gericke to court to see for himself.

      On the 220-mile trip south, Gericke carried two copper hemispheres; they fit together to form a twenty-two-inch spherical shell. The walls of the hemispheres were thick enough to withstand crumpling this time, and each half had rings welded to it where he could affix a rope. Most important, Gericke had bored a hole into one hemisphere and fitted it with an ingenious air valve, which allowed air to flow through in one direction only.

      Gericke arrived at court to find thirty horses and a sizable crowd awaiting him. These were the days of drawing and quartering convicts, and Gericke announced to the crowd that he had a similar ordeal planned for his copper sphere: he believed that Ferdinand’s best horses couldn’t tear the two halves apart. You could forgive the assembled for laughing: without someone holding them together, the hemispheres fell apart from their own weight. Gericke ignored the naysayers and reached into his cart for the key piece of equipment, a sort of cylinder on a tripod. It had a tube coming off it, which he attached to the one-way valve on the copper sphere. He then had several local blacksmiths—the burliest men he could find— start cranking the machine’s levers and pistons. Every few seconds it wheezed. Gericke called the contraption an “air pump.”

      It worked like this. Inside the air pump’s cylinder was a special airtight chamber fitted with a piston that moved up and down. At the start of the process the piston was depressed, meaning the chamber had no air inside. Step one involved a blacksmith hoisting the piston up. Because the chamber and copper sphere were connected via the tube, air inside the sphere could now flow into the chamber. Just for argument's sake, let's say there were 800 molecules of air inside the sphere to start. (Which is waaaaay too low, but it's a nice round number.) After the piston got hoisted up, maybe half that air would flow out. This left 400 molecules in the sphere and 400 in the chamber.

      Now came the key step. Gericke closed the one-way valve on the sphere, trapping 400 molecules in each half. He then opened another, separate valve on the chamber, and had the smithy stamp the piston down. This recollapsed the chamber and expelled all 400 molecules. The net result was that Gericke had now pumped out half the original air in the sphere and expelled it to the outside world.

      Equally important, Gericke was now back at the starting point, with the piston depressed. As a result he could reopen the valve on the sphere and repeat the process. This time 200 molecules (half the remaining 400) would flow into the chamber, with the other 200 remaining in the sphere. By closing the one-way valve a second time he could once again trap those 200 molecules in the chamber and expel them. Next round, he’d expel an additional 100 molecules, then 50, and so on. It got harder each round to hoist the piston up—hence the stout blacksmiths —but each cycle of the pump removed half the air from the sphere.
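
      The pump is doing nothing fancier than repeated halving. To make the arithmetic concrete, here is a minimal Python sketch of the cycle (my own illustration, not from Kean's book), starting from his deliberately unrealistic 800 molecules:

# Sketch of Gericke's pump: each stroke draws half of the sphere's
# remaining air into the chamber, then expels it through the second valve.
molecules = 800  # Kean's illustrative starting count (far too low on purpose)

for stroke in range(1, 6):
    expelled = molecules // 2   # half the remaining air flows into the chamber
    molecules -= expelled       # and gets stamped out to the outside world
    print(f"stroke {stroke}: expelled {expelled}, {molecules} left in sphere")

# Prints 400, 200, 100, 50, 25 expelled, matching the progression in the
# text; the sphere never quite empties, each stroke just halves what is left.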

      As more and more air disappeared from inside, the copper sphere started to feel a serious squeeze from the outside. That’s because gazillions of air molecules were pinging its surface every second. Each molecule was minuscule, of course, but collectively they added up to thousands of pounds of force (really). Normally air from inside the sphere would balance this pressure by pushing outward. But as the blacksmiths evacuated the inside, a pressure imbalance arose, and the air on the outside began squeezing the hemispheres together tighter and tighter: given the sphere’s size, there would have been 5,600 pounds of net force at perfect vacuum. Gericke couldn't have known all these details, and it's not clear how close he got to a perfect vacuum. But after watching that first copper shell crumple, he knew that air was pretty brawny. Even brawnier, he was gambling, than thirty horses.
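
      Kean's figure is easy to verify: at a perfect vacuum, the net force squeezing the hemispheres together is just ambient pressure times the sphere's cross-sectional area. A quick check in Python (my arithmetic, not the book's):

import math

pressure = 14.7              # ambient air pressure, pounds per square inch
radius = 22 / 2              # radius of the 22-inch sphere, in inches
area = math.pi * radius**2   # cross-sectional area the horses pull against

print(f"{pressure * area:,.0f} lb")  # ~5,588 lb, Kean's "5,600 pounds"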

      After the blacksmiths had exhausted the air (and themselves), Gericke detached the copper sphere from the pump, wound a rope through the rings on each side, and secured it to a team of horses. The crowd hushed. Perhaps some maiden raised a silk handkerchief and dropped it. When the tug-of-war began, the ropes snapped taut and the sphere shuddered. The horses snorted and dug in their hooves; veins bulged on their necks. But the sphere held—the horses could not tear it apart. Afterward, Gericke picked up the sphere and flicked a secondary valve open with his finger. Hissss. Air rushed in, and a second later the hemispheres fell apart in his hands; like the sword in the stone, only the chosen one could perform the feat. The stunt so impressed the emperor that he soon elevated plain Otto Gericke to Otto von Guericke, official German royalty.

      In later years von Guericke and his acolytes came up with several other dramatic experiments involving vacuums and air pressure. They showed that bells in evacuated glass jars made no noise when rung, proving that you need air to transmit sound. Similarly, they found that butter exposed to red-hot irons inside a vacuum would not melt, proving that vacuums cannot transmit heat convectively. They also repeated the hemisphere stunt at other sites, spreading far and wide von Guericke’s discovery about the strength of air. And it’s this last discovery that would have the most profound impact on the world at large. Our planet's normal, ambient air pressure of 14.7 pounds per square inch might not sound impressive, but that works out to one ton of force per square foot. It's not just copper hemispheres that feel this, either. For an average adult, twenty tons of force are pressing inward on your body at all times. The reason you don't notice this crushing burden is that there's another twenty tons of pressure pushing back from inside you. But even when you know the forces balance here, it all still seems precarious. I mean, in theory a piece of aluminum foil, if perfectly balanced between the blasts from two fire hoses, would survive intact. But who would risk it? Our skin and organs face the same predicament vis-à-vis air, suspended inside and out between two torrential forces.
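
      Those conversions check out too. Assuming an average adult carries roughly 1.8 square meters of skin (a standard physiology figure; the excerpt does not give one), a few more lines of Python confirm both numbers:

pressure = 14.7                     # pounds per square inch
per_sqft = pressure * 144           # 144 sq. inches per sq. foot = 2,116.8 lb
print(per_sqft / 2000)              # ~1.06 short tons per square foot

skin_sqft = 1.8 * 10.764            # assumed 1.8 m^2 of skin, ~19.4 sq. feet
print(per_sqft * skin_sqft / 2000)  # ~20.5 tons pressing on the whole body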

      Luckily, our scientific forefathers didn’t tremble in fear of such might. They absorbed the lesson of von Guericke— that gases are shockingly strong—and raced ahead with new ideas. Some of the projects they took on were practical, like steam engines. Some shaded frivolous, like hot-air balloons. Some, like explosives, chastised us with their deadly force. But all relied on the raw physical power of gases.

Ammonia

[These excerpts are from the second chapter of Caesar’s Last Breath by Sam Kean.]

      The alchemy of air started with an insult. Fritz Haber was born into a middle-class German Jewish family in 1868, and despite an obvious talent for science, he ended up drifting between several different industries as a young man—dye manufacturing, alcohol production, cellulose harvesting, molasses production—without distinguishing himself in any of them. Finally, in 1905 an Austrian company asked Haber—by then a balding fellow with a mustache and pince-nez glasses—to investigate a new way to manufacture ammonia gas (NH3).

      The idea seemed straightforward. There’s plenty of nitrogen gas in the air (N2), and you can get hydrogen gas (H2) by splitting water molecules with electricity. To make ammonia, then, simply mix and heat the gases: N2 + 3H2 → 2NH3. Voila. Except Haber ran into a heckuva catch-22. It took enormous heat to crack the nitrogen molecules in half so they could react; yet that same heat tended to destroy the product of the reaction, the fragile ammonia molecules. Haber spent months going in circles before finally issuing a report that the process was futile.

      The report would have languished in obscurity—negative results win no prizes—if not for the vanity of a plump chemist named Walther Nernst. Nernst had everything Haber coveted. He worked in Berlin, the hub of German life, and he'd made a fortune by inventing a new type of electric lightbulb. Most important, Nernst had earned scientific prestige by discovering a new law of nature, the Third Law of Thermodynamics. Nernst’s work in thermodynamics also allowed chemists to do something unprecedented: examine any reaction—like the conversion of nitrogen into ammonia—and estimate the yield at different temperatures and pressures. This was a huge shortcut. Rather than grope blindly, chemists could finally predict the optimum conditions for reactions.
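
      Nernst's shortcut is now textbook thermodynamics, and it shows exactly why Haber kept going in circles. Using standard modern values for the ammonia reaction (roughly ΔH° = −92.4 kJ/mol and ΔS° = −198 J/mol·K; neither appears in the excerpt) and assuming they change little with temperature, a short Python sketch shows the equilibrium constant collapsing as the gas is heated:

import math

R = 8.314      # gas constant, J/(mol*K)
dH = -92.4e3   # enthalpy change of N2 + 3H2 -> 2NH3, J/mol (textbook value)
dS = -198.0    # entropy change, J/(mol*K) (textbook value)

for T in (298, 500, 773):        # from room temperature up to about 930°F
    dG = dH - T * dS             # Gibbs free energy change at temperature T
    K = math.exp(-dG / (R * T))  # equilibrium constant via dG = -RT ln K
    print(f"T = {T} K: K = {K:.1e}")

# K falls from ~7e5 at room temperature to ~8e-5 at the heat needed to
# crack N2: higher temperature speeds the reaction but wrecks the yield,
# which is why Haber turned to high pressures and catalysts.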

      Still, chemists had to confirm those predictions in the lab, and here's where the conflict arose. Because when Nernst examined the data in Haber's report, he declared that the yields for ammonia were impossible— 50 percent too high, according to his predictions.

      Haber swooned upon hearing this. He was already a high-strung sort—he had a weak heart and tended to suffer nervous breakdowns. Now Nernst was threatening to destroy the one thing he had going for himself, his reputation as a solid experimentalist. Haber carefully redid his experiments and published new data more in line with Nernst’s predictions. But the numbers remained stubbornly higher, and when Nernst ran into Haber at a conference in May 1907, he dressed down his younger colleague in front of everyone.

      Honestly, this was a stupid dispute. Both men agreed that the industrial production of ammonia via nitrogen gas was impossible; they just disagreed over the exact degree of impossibility. But Nernst was a petty man, and Haber— who had a chivalrous streak—could not let this insult to his honor stand. Contradicting everything he’d said before, Haber now decided to prove that you could make ammonia from nitrogen gas after all. Not only could he rub Nernst’s fat nose in it if he succeeded, he could perhaps patent the process and grow rich. Best of all, unlocking nitrogen would make Haber a hero throughout Germany, because doing so would provide Germany with the one thing it lacked to become a world power— a steady supply of fertilizer….

      Beyond fiddling with temperatures and pressures, Haber focused on a third factor, a catalyst. Catalysts speed up reactions without getting consumed themselves; the platinum in your car’s muffler that breaks down pollutants is an example. Haber knew of two metals, manganese and nickel, that boosted the nitrogen-hydrogen reaction, but they worked only above 1300°F, which fried the ammonia. So he scoured around for substitute catalysts, streaming these gases over dozens of different metals to see what happened. He finally hit upon osmium, element 76, a brittle metal once used to make lightbulbs. It lowered the necessary temperature to “only” 1100°F, which gave ammonia a fighting chance.

      Using his nemesis Nernst’s equations, Haber calculated that osmium, if used in combination with the high-pressure jackets, might boost the yield of ammonia to 8 percent, an acceptable result at last. But before he could lord his triumph over Nernst, he had to confirm that figure in the lab. So in July 1909— after several years of stomach pains, insomnia, and humiliation— Haber daisy-chained several quartz canisters together on a tabletop. He then flipped open a few high-pressure valves to let the N2 and H2 mix, and stared anxiously at the nozzle at the far end.

      It took a while: even with osmium's encouragement, nitrogen breaks its bonds only reluctantly. But eventually a few milky drops of ammonia began to trickle out of the nozzle. The sight sent Haber racing through the halls of his department, shouting for everyone to “Look! Come look!” By the end of the run, they had a whole quarter of a teaspoon.

      They eventually cranked that up into a real gusher— a cup of ammonia every two hours. But even that modest output persuaded BASF to purchase the technology and fast-track it. As he often did to celebrate a triumph, Haber threw his team an epic party. “When it was over,” one assistant recalled, “we could only walk home in a straight line by following the streetcar tracks.”

      Haber’s discovery proved to be an inflection point in history—right up there with the first time a human being diverted water into an irrigation canal or smelted iron ore into tools. As people said back then, Haber had transformed the very air into bread.

      Still, Haber’s advance was as much theoretical as anything: he proved you could make ammonia (and therefore fertilizer) from nitrogen gas, but the output from his apparatus barely could have nourished your tomatoes, much less fed a nation like Germany. Scaling Haber's process up to make tons of ammonia at once would require a different genus of genius—the ability to turn promising ideas into real, working things. This was not a genius that most BASF executives possessed. They saw ammonia as just another chemical to add to their portfolio, a way to pad their profits a little. But the thirty-five-year-old engineer they put in charge of their new ammonia division, Carl Bosch, had a grander vision. He saw ammonia as potentially the most important— and lucrative—chemical of the new century, capable of transforming food production worldwide. As with most visions worth having, it was inspiring and dicey all at once.

      Bosch decided to tackle each of the many subproblems with ammonia production independently. One issue was getting pure enough nitrogen, since regular air contains oxygen and other “impurities.” For help here, Bosch turned to an unlikely source, the Guinness Brewing company. Fifteen years earlier Guinness had developed the most powerful refrigeration devices on the planet, so powerful they could liquefy air. (As with any substance, if you chill the gases in the air enough, they'll condense into puddles of liquid.) Bosch was more interested in the reverse process —taking cold liquid air and boiling it. Curiously, although liquid air contains many different substances mixed together, each substance within it boils off at a separate temperature when heated. Liquid nitrogen happens to boil at –320°F. So all Bosch had to do was liquefy some air with the Guinness refrigerators, warm the resulting pool of liquid to -319°F, and collect the nitrogen fumes. Every time you see a sack of fertilizer today, you can thank Guinness stout.
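
      The reason this works: each component of liquid air boils at its own temperature, so holding the liquid just above nitrogen's boiling point bleeds off nearly pure nitrogen while everything else stays behind. A toy sketch with standard boiling points (the excerpt gives only nitrogen's):

# Approximate boiling points at atmospheric pressure; nitrogen's comes
# from the excerpt, the others are standard reference values.
boiling_F = {"nitrogen": -320, "argon": -303, "oxygen": -297}

hold_at = -319  # warm the liquid air to just above nitrogen's boiling point
for gas, bp in sorted(boiling_F.items(), key=lambda kv: kv[1]):
    state = "boils off" if hold_at > bp else "stays liquid"
    print(f"{gas} (bp {bp}°F): {state}")

# nitrogen boils off; argon and oxygen remain liquid, so the fumes
# collected off the top are almost pure nitrogen.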

      The second issue was the catalyst. Although effective at kick-starting the reaction, osmium would never work in industry: as an ore it makes gold look cheap and plentiful, and buying enough osmium to produce ammonia at the scales Bosch envisioned would have bankrupted the firm. Bosch needed a cheap substitute, and he brought the entire periodic table to bear on the problem, testing metal after metal after metal. In all, his team ran twenty thousand experiments before finally settling on aluminum oxide and calcium mixed with iron. Haber the scientist had sought perfection—the best catalyst. Bosch the engineer settled for a mongrel.

      Pristine nitrogen and cut-rate catalysts meant nothing, however, if Bosch couldn't overcome the biggest obstacle, the enormous pressures involved. A professor in college once told me that the ideal piece of equipment for an experiment falls apart the moment you take the last data point: that means you wasted the least possible amount of time maintaining it. (Typical scientist.) Bosch’s equipment had to run for months without fail, at temperatures hot enough to make iron glow and at pressures twenty times higher than in locomotive steam engines. When BASF executives first heard those figures, they gagged: one protested that an oven in his department running at a mere seven times atmospheric pressure—one-thirtieth of what was proposed—had exploded the day before. How could Bosch ever build a reaction vessel strong enough?

      Bosch replied that he had no intention of building the vessel himself. Instead he turned to the Krupp armament company, makers of legendarily large cannons and field artillery. Intrigued by the challenge, Krupp engineers soon built him the chemistry equivalent of the Big Bertha: a series of eight-foot-tall, one-inch-thick steel vats. Bosch then jacketed the vessels in concrete to further protect against explosions. Good thing, because the first one burst after just three days of testing. But as one historian commented, “The work could not be allowed to stop because of a little shrapnel.” Bosch’s team rebuilt the vessels, lining them with a chemical coating to prevent the hot gases from corroding the insides, then invented tough new valves, pumps, and seals to withstand the high-pressure beatings.

      Beyond introducing these new technologies, Bosch also helped introduce a new approach to doing science. Traditional science had always relied on individuals or small groups, with each person providing input into the entire process. Bosch took an assembly-line approach, running dozens of small projects in parallel, much like the Manhattan Project three decades later. Also like the Manhattan Project, he got results amazingly quickly—and on a scale most scientists had never considered possible. Within a few years of Haber's first drips, the BASF ammonia division had erected one of the largest factories in the world, near the city of Oppau. The plant contained several linear miles of pipes and wiring, and used gas liquefiers the size of bungalows. It had its own railroad hub to ship raw materials in, and a second hub for transporting its ten thousand workers. But perhaps the most amazing thing about Oppau was this: it worked, and it made ammonia every bit as quickly as Bosch had promised. Within a few years, ammonia production doubled, then doubled again. Profits grew even faster.

      Despite this success, by the mid-1910s Bosch decided that even he had been thinking too small, and he pushed BASF to open a larger and more extravagant plant near the city of Leuna. More steel vats, more workers, more miles of pipes and wiring, more profit. By 1920 the completed Leuna plant stretched two miles wide and one mile across—“a machine as big as a town,” one historian marveled.

      Oppau and Leuna launched the modern fertilizer industry, and it has basically never slowed down since. Even today, a century later, the Haber-Bosch process still consumes a full 1 percent of the world’s energy supply. Human beings crank out 175 million tons of ammonia fertilizer each year, and that fertilizer grows half the world’s food. Half. In other words, one of every two people alive today, 3.6 billion of us, would disappear if not for Haber-Bosch. Put another way, half your body would disappear if you looked in a mirror: one of every two nitrogen atoms in your DNA and proteins would still be flitting around uselessly in the air if not for Haber’s spiteful genius and Bosch’s greedy vision.

Report Details Persistent Hostility to Women in Science

[These excerpts are from an article by Meredith Wadman in the June 15, 2018, issue of Science.]

      Ask someone for an example of sexual harassment and they might cite a professor’s insistent requests to a grad student for sex. But such lurid incidents account for only a small portion of a serious and widespread harassment problem in science, according to a report released this week by the National Academies of Sciences, Engineering, and Medicine. Two years in the making, the report describes pervasive and damaging “gender harassment”—behaviors that belittle women and imply that they don’t belong, including sexist comments and demeaning jokes. Between 17% and 50% of female science and medical students reported this kind of harassment in large surveys conducted by two major university systems across 36 campuses….

      Decades of failure to curb sexual harassment, despite civil rights laws that make it illegal, underscore the need for a change in culture, the report says….The authors suggest universities take measures to publicly report the number of harassment complaints they receive and investigations they conduct, use committee-based advising to prevent students from being in the power of a single harasser, and institute alternative, less formal ways for targets to report complaints if they don’t wish to start an official investigation….

      The report says women in science, engineering, or medicine who are harassed may abandon leadership opportunities to dodge perpetrators, leave their institutions, or leave science altogether. It also highlights the ineffectiveness of ubiquitous, online sexual harassment training and notes what is likely massive underreporting of sexual harassment by women who justifiably fear retaliation. To retain the talents of women in science, the authors write, will require true cultural change rather than “symbolic compliance” with civil rights laws.

Seaweed Masses Assault Caribbean Islands

[These excerpts are from an article by Katie Langin in the June 15, 2018, issue of Science.]

      In retrospect, 2011 was just the first wave. That year, massive rafts of Sargassum—a brown seaweed that lives in the open ocean—washed up on beaches across the Caribbean, trapping sea turtles and filling the air with the stench of rotting eggs….Before then, beachgoers had sometimes noticed “little drifty bits on the tideline,” but the 2011 deluge of seaweed was unprecedented…piling up meters thick in places….

      Locals hoped the episode, a blow to tourism and fisheries, was a one-off….Now, the Caribbean is bracing for what could be the mother of all seaweed invasions, with satellite observations warning of record-setting Sargassum blooms and seaweed already swamping beaches. Last week, the Barbados government declared a national emergency….

      Before 2011, open-ocean Sargassum was mostly found in the Sargasso Sea, a patch of the North Atlantic Ocean enclosed by ocean currents that serves as a spawning ground for eels. So when the first masses hit the Caribbean, scientists assumed they had drifted south from the Sargasso Sea. But satellite imagery and data on ocean currents told a different story….

      Since 2011, tropical Sargassum blooms have recurred nearly every year, satellite imagery showed….

      Yet in satellite data prior to 2011, the region is largely free of seaweed….That sharpens the mystery of the sudden proliferation….Nutrient inputs from the Amazon River, which discharges into the ocean around where blooms were first spotted, may have stimulated Sargassum growth. But other factors, including changes in ocean currents and increased ocean fertilization from iron in airborne dust, are equally plausible….

      In the meantime, the Caribbean is struggling to cope as yearly bouts of Sargassum become “the new normal”….the blooms visible in satellite imagery dwarf those of previous years….

HIV—No Time for Complacency

[These excerpts are from an article by Quarraisha Abdool Karim and Salim S. Abdool Karim in the June 15, 2018, issue of Science.]

      Today, the global HIV epidemic is widely viewed as triumph over tragedy. This stands in stark contrast to the first two decades of the epidemic, when AIDS was synonymous with suffering and death….

      The AIDS response has now become a victim of these successes: As it eases the pain and suffering from AIDS, it creates the impression that the epidemic is no longer important or urgent. Commitment to HIV is slowly dissipating as the world’s attention shifts elsewhere. Complacency is setting in.

      However, nearly 5000 new cases of HIV infection occur each day, defying any claim of a conquered epidemic. The estimated 36.7 million people living with HIV, 1 million AIDS-related deaths, and 1.8 million new infections in 2016 remind us that HIV remains a serious global health challenge. Millions need support for life-long treatment, and millions more still need to start antiretroviral treatment, many of whom do not even know their HIV status. People living with HIV have more than a virus to contend with; they must cope with the stigma and discrimination that adversely affect their quality of life and undermine their human rights.

      A further crucial challenge looms large: how to slow the spread of HIV. The steady decline in the number of new infections each year since the mid-1990s has almost stalled, with little change in the past 5 years. HIV continues to spread at unacceptable levels in several countries, especially in marginalized groups, such as men who have sex with men, sex workers, people who inject drugs, and transgender individuals. Of particular concern is the state of the HIV epidemic in sub-Saharan Africa, where young women aged 15 to 24 years have the highest rates of new infections globally. Their sociobehavioral and biological risks—including age-disparate sexual coupling patterns between teenage girls and men in their 30s, limited ability to negotiate safer sex, genital inflammation, and vaginal dysbiosis—are proving difficult to mitigate. Current HIV prevention technologies, such as condoms and pre-exposure prophylaxis, have had limited impact in young women in Africa, mainly due to their limited access, low uptake, and poor adherence.

      There is no room for complacency when so much more remains to be done for HIV prevention and treatment. The task of breaking down barriers and building bridges needs greater commitment and impetus. Now is not the time to take the foot off the pedal….

Does Tailoring Instruction to “Learning Styles” Help Students Learn?

[These excerpts are from an article by Daniel T. Willingham in the Summer 2018 issue of American Educator.]

      Research has confirmed the basic summary I offered in 2005; using learning-styles theories in the classroom does not bring an advantage to students. But there is one new twist. Researchers have long known that people claim to have learning preferences—they’ll say, “I’m a visual learner” or “I like to think in words.” There's increasing evidence that people act on those beliefs; if given the chance, the visualizer will think in pictures rather than words. But doing so confers no cognitive advantage. People believe they have learning styles, and they try to think in their preferred style, but doing so doesn't help them think….

      It’s fairly obvious that some children learn more slowly or put less effort into schoolwork, and researchers have amply confirmed this intuition. Strategies to differentiate instruction to account for these disparities are equally obvious: teach at the learner’s pace and take greater care to motivate the unmotivated student. But do psychologists know of any nonobvious student characteristics that teachers could use to differentiate instruction?

      Learning-styles theorists think they’ve got one: they believe students vary in the mode of study or instruction from which they benefit most. For example, one theory has it that some students tend to analyze ideas into parts, whereas other students tend to think more holistically. Another theory posits that some students are biased to think verbally, whereas others think visually.

      When we define learning styles, it’s important to be clear that style is not synonymous with ability. Ability refers to how well you can do something. Style is the way you do it. I find an analogy to sports useful: two basketball players might be equally good at the game but have different styles of play; one takes a lot of risks, whereas the other is much more conservative in the shots she takes. To put it another way, you’d always be pleased to have more ability, but one style is not supposed to be valued over another; it’s just the way you happen to do cognitive work. But just as a conservative basketball player wouldn’t play as well if you forced her to take a lot of chancy shots, learning-styles theories hold that thinking will not be as effective outside of your preferred style.

      In other words, when we say someone is a visual learner, we don’t mean they have a great ability to remember visual detail (although that might be true). Some people are good at remembering visual detail, and some people are good at remembering sound, and some people are gifted in moving their bodies. That’s kind of obvious because pretty much every human ability varies across individuals, so some people will have a lot of any given ability and some will have less. There’s not much point in calling variation in visual memory a “style” when we already use the word “ability” to refer to the same thing.

      The critical difference between styles and abilities lies in the idea of style as a venue for processing, a way of thinking that an individual favors. Theories that address abilities hold that abilities are not interchangeable; I can’t use a mental strength (e.g., my excellent visual memory) to make up for a mental weakness (e.g., my poor verbal memory). The independence of abilities shows us why psychologist Howard Gardner's theory of multiple intelligences is not a theory of learning styles. Far from suggesting that abilities are exchangeable, Gardner explicitly posits that different abilities use different “codes” in the brain and therefore are incompatible. You can’t use the musical code to solve math problems, for example….

      In short, recent experiments do not change the conclusion that previous reviewers of this literature have drawn: there is not convincing evidence to support the idea that tailoring instruction according to a learning-styles theory improves student outcomes….

      Research from the last 10 years confirms that matching instruction to learning style brings no benefit. But other research points to a new conclusion: people do have biases about preferred modes of thinking, even though these biases don’t help them think better.

      …In sum, people do appear to have biases to process information one way or another (at least for the verbalizer/visualizer and the intuitive/reflective styles), but these biases do not confer any advantage. Nevertheless, working in your preferred style may make it feel as though you’re learning more.

      But if people are biased to think in certain ways, maybe catering to that bias would confer an advantage to motivation, even if it doesn't help thinking? Maybe honoring learning styles would make students more likely to engage in class activities? I don’t believe either has been tested, but there are a few reasons I doubt we'd see these hypothetical benefits. First, these biases are not that strong, and they are easily overwhelmed by task features; for example, you may be biased to reflect rather than to intuit, but if you feel hurried, you’ll abandon reflection because it’s time-consuming. Second, and more important, there are the task effects. Even if you're a verbalizer, if you're trying to remember sentences, it doesn’t make sense for me to tell you to verbalize (for example, by repeating the sentences to yourself) because visualizing (for example, by creating a visual mental image) will make the task much easier. Making the task more difficult is not a good strategy for motivation….

      One educational implication of this research is obvious: educators need not worry about their students’ learning styles. There's no evidence that adapting instruction to learning styles provides any benefit. Nor does it seem worthwhile to identify students’ learning styles for the purpose of warning them that they may have a pointless bias to process information one way or another. The bias is only one factor among many that determine the strategy an individual will select—the phrasing of the question, the task instructions, and the time allotted all can impact thinking strategies.

      A second implication is that students should be taught fruitful thinking strategies for specific types of problems. Although there’s scant evidence that matching the manner of processing to a student's preferred style brings any benefit, there’s ample evidence that matching the manner of processing to the task helps a lot. Students can be taught useful strategies for committing things to memory, reading with comprehension, overcoming math anxiety, or avoiding distraction, for example. Learning styles do not influence the effectiveness of these strategies.

Bilingual Boost

[These excerpts are from an article by Jane C. Hu in the June 2018 issue of Scientific American.]

      Children growing up in low-income homes score lower than their wealthier peers on cognitive tests and other measures of scholastic success, study after study has found. Now mounting evidence suggests a way to mitigate this disadvantage: learning another language.

      …researchers probed demographic data and intellectual assessments from a subset of more than 18,000 kindergartners and first graders in the U.S. As expected, they found children from families with low socioeconomic status (based on factors such as household income and parents’ occupation and education level) scored lower on cognitive tests. But within this group, kids whose families spoke a second language at home scored better than monolinguals.

      Evidence for a “bilingual advantage”—the idea that speaking more than one language improves mental skills such as attention control or ability to switch between tasks—has been mixed. Most studies have had only a few dozen participants from mid- to high-socioeconomic-status backgrounds perform laboratory-based tasks.

      …sought out a data set of thousands of children who were demographically representative of the U.S. population. It is the largest study to date on the bilingual advantage and captures more socioeconomic diversity than most others….The analysis also includes a real-world measure of children's cognitive skills: teacher evaluations.

      The use of such a sizable data set “constitutes a landmark approach” for language studies….the data did not contain details such as when bilingual subjects learned each language or how often they spoke it. Without this information…it is difficult to draw conclusions about how being bilingual could confer cognitive advantages….

Suffocated Seas

[These excerpts are from an article by Lucas Joel in the June 2018 issue of Scientific American.]

      Earth’s largest mass extinction to date is sometimes called the Great Dying—and for good reason: it wiped out about 70 percent of life on land and 95 percent in the oceans. Researchers have long cited intense volcanism in modern-day Siberia as the main culprit behind the cataclysm, also known as the Permian-Triassic mass extinction, 252 million years ago. A recent study pins down crucial details of the killing mechanism, at least for marine life: oceans worldwide became oxygen-starved, suffocating entire ecosystems.

      Scientists had previously suspected that anoxia, or a lack of oxygen, was responsible for destroying aquatic life. Supporting data came from marine rocks that formed in the ancient Tethys Ocean—but that body of water comprised only about 15 percent of Earth’s seas. That is hardly enough to say anything definitive about the entire marine realm….

      …This approach enabled the researchers to spot clues in rocks from Japan that formed around the time of the extinction in the middle of the Panthalassic Ocean, which then spanned most of the planet and held the majority of its seawater….

      The findings may have special relevance in modern times because the trigger for this ancient anoxia was most likely climate change caused by Siberian volcanoes pumping carbon dioxide into the atmosphere. And today, as human activity warms the planet, the oceans hold less oxygen than they did many decades ago. Brennecka cautions against speculating about the future but adds: “I think it’s pretty clear that when large-scale changes happen in the oceans, things die.”

The Origin of the Earth

[This excerpt is the start of an article by Harold C. Urey in the October 1952 issue of Scientific American.]

      It is probable that as soon as man acquired a large brain and the mind that goes with it he began to speculate on how far the earth extended, on what held it up, on the nature of the sun and moon and stars, and on the origin of all these things. He embodied his speculations in religious writings, of which the first chapter of Genesis is a poetic and beautiful example. For centuries these writings have been part of our culture, so that many of us do not realize that some of the ancient peoples had very definite ideas about the earth and the solar system which are quite acceptable today.

      Aristarchus of the Aegean island of Samos first suggested that the earth and the other planets moved about the sun—an idea that was rejected by astronomers until Copernicus proposed it again 2,000 years later. The Greeks knew the shape and the approximate size of the earth, and the cause of eclipses of the sun. After Copernicus the Danish astronomer Tycho Brahe watched the motions of the planet Mars from his observatory on the Baltic island of Hveen; as a result Johannes Kepler was able to show that Mars and the earth and the other planets move in ellipses about the sun. Then the great Isaac Newton proposed his universal law of gravitation and laws of motion, and from these it was possible to derive an exact description of the entire solar system. This occupied the minds of some of the greatest scientists and mathematicians in the centuries that followed.

      Unfortunately it is a far more difficult problem to describe the origin of the solar system than the motion of its parts. The materials that we find in the earth and the sun must originally have been in a rather different condition. An understanding of the process by which these materials were assembled requires the knowledge of many new concepts of science such as the molecular theory of gases, thermodynamics, radioactivity and quantum theory. It is not surprising that little progress was made along these lines until the 20th century.

Beavers, Rebooted

[These excerpts are from an editorial by Ben Goldfarb in the June 8, 2018, issue of Science.]

      In 1836, an explorer named Stephen Meek wandered down the piney slopes of Northern California’s Klamath Mountains and ended up here, in the finest fur trapping ground he’d ever encountered. This swampy basin would ultimately become known as the Scott Valley, but Meek’s men named it Beaver Valley after its most salient resource: the rodents whose dams shaped its ponds, marshes, and meadows. Meek’s crew caught 1,800 beavers here in 1850 alone, shipping their pelts to Europe to be felted into waterproof hats. More trappers followed, and in 1929 one killed and skinned the valley’s last known beaver.

      The massacre spelled disaster not only for the beavers, but also for the Scott River’s salmon, which once sheltered in beaver-built ponds and channels. As old beaver dams collapsed and washed away, wetlands dried up and streams carved into their beds. Gold mining destroyed more habitat. Today, the Scott resembles a postindustrial sacrifice zone, its once lush floodplain buried under heaps of mine tailings…

      All is not lost, however. Beyond one slag heap, a tributary called Sugar Creek has been transformed into a shimmering pond, broad as several tennis courts and fringed with willow and alder. Gilmore tugged up her shorts and waded into the basin, sandals sinking deep into chocolatey mud. Schools of salmon fry flowed like mercury around her ankles. It was as if she had stepped into a time machine and been transported back to the Scott's fecund past. This oasis, Gilmore explained, is the fruit of a seemingly quixotic effort to re-beaver Beaver Valley. At the downstream end of the pond stood the structure that made the resurrection possible: a rodent-human collaboration known as a beaver dam analog (BDA). Human hands felled and peeled Douglas fir logs, pounded them upright into the stream bed, and wove a lattice of willow sticks through the posts. A few beavers that had recently returned to the valley promptly took over, gnawing down nearby trees and reinforcing the dam with branches and mud….

Dig Seeks Site of First English Settlement in the New World

[These excerpts are from an editorial by Andrew Lawler in the June 8, 2018, issue of Science.]

      In 1587, more than 100 men, women, and children settled on Roanoke Island in what is now North Carolina. War with Spain prevented speedy resupply of the colony—the first English settlement in the New World, backed by Elizabethan courtier Sir Walter Raleigh. When a rescue mission arrived 3 years later, the town was abandoned and the colonists had vanished.

      What is commonly called the Lost Colony has captured the imagination of generations of professional and amateur sleuths, but the colonists’ fate is not the only mystery. Despite more than a century of digging, no trace has been found of the colonists’ town—only the remains of a small workshop and an earthen fort that may have been built later, according to a study to be published this year. Now, after a long hiatus, archaeologists plan to resume digging this fall….

      The first colonists arrived in 1585, when a voyage from England landed more than 100 men here, among them a science team including Joachim Gans, a metallurgist from Prague and the first known practicing Jew in the Americas. According to eyewitness accounts, the colonists built a substantial town on the island’s north end. Gans built a small lab where he worked with scientist Thomas Harriot. After the English assassinated a local Native American leader, however, they faced hostility. After less than a year, they abandoned Roanoke and returned to England.

      A second wave of colonists, including women and children, arrived in 1587 and rebuilt the decaying settlement. Their governor, artist John White, returned to England for supplies and more settlers, but war with Spain delayed him in England for 3 years. When he returned here in 1590, he found the town deserted.

      By the time President James Monroe paid a visit in 1819, all that remained was the outline of an earthen fort, presumed to have been built by the 1585 all-male colony. Digs near the earthwork in the 1890s and 1940s yielded little. The U.S. National Park Service (NPS) subsequently reconstructed the earthen mound, forming the centerpiece of today’s Fort Raleigh National Historic Site.

      Then in the 1990s, archaeologists led by Ivor Noel Hume of The Colonial Williamsburg Foundation in Virginia uncovered remains of what archaeologists agree was the workshop where Gans tested rocks for precious metals and Harriot studied plants with medicinal properties, such as tobacco. Crucibles and pharmaceutical jars littered the floor, along with bits of brick from a special furnace. The layout closely resembled those in 16th century woodcuts of German alchemical workshops.

      In later digs Noel Hume determined that the ditch alongside the earthwork cuts across the workshop—suggesting the fort was built after the lab and possibly wasn’t even Elizabethan. NPS refused to publish these controversial results, and Noel Hume died in 2017. But the foundation intends to publish his paper in coming months….

Knowledge Can Be Power

[These excerpts are from an article by Peter Salovey in the June 2018 issue of Scientific American.]

      If knowledge is power, scientists should easily be able to influence the behavior of others and world events. Researchers spend their entire careers discovering new knowledge—from a single cell to the whole human, from an atom to the universe.

      Issues such as climate change illustrate that scientists, even if armed with overwhelming evidence, are at times powerless to change minds or motivate action….

      For many, knowledge about the natural world is superseded by personal beliefs. Wisdom across disciplinary and political divides is needed to help bridge this gap. This is where institutions of higher education can provide vital support. Educating global citizens is one of the most important charges to universities, and the best way we can transcend ideology is to teach our students, regardless of their majors, to think like scientists. From American history to urban studies, we have an obligation to challenge them to be inquisitive about the world, to weigh the quality and objectivity of data presented to them, and to change their minds when confronted with contrary evidence.

      Likewise, STEM majors' college experience must be integrated into a broader model of liberal education to prepare them to think critically and imaginatively about the world and to understand different viewpoints. It is imperative for the next generation of leaders in science to be aware of the psychological, social and cultural factors that affect how people understand and use information.

      Through higher education, students can gain the ability to recognize and remove themselves from echo chambers of ideologically driven narratives and help others do the same. Students at Yale, the California Institute of Technology and the University of Waterloo, for instance, developed an Internet browser plug-in that helps users distinguish bias in their news feeds. Such innovative projects exemplify the power of universities in teaching students to use knowledge to fight disinformation.

      For a scientific finding to find traction in society, multiple factors must be considered. Psychologists, for example, have found that people are sensitive to how information is framed. My research group discovered that messages focused on positive outcomes have more success in encouraging people to adopt illness-prevention measures, such as applying sunscreen to lower their risk for skin cancer, than loss-framed messages, which emphasize the downside of not engaging in such behaviors. Loss-framed messages are better at motivating early-detection behaviors such as mammography screening.

      Scientists cannot work in silos and expect to improve the world, particularly when false narratives have become entrenched in communities…

      Universities are conveners of experts and leaders across disciplinary and political boundaries. Knowledge is power but only if individuals are able to analyze and compare information against their personal beliefs, are willing to champion data-driven decision making over ideology, and have access to a wealth of research findings to inform policy discussions and decisions.

Constitutional Right to Contraception

[This excerpted editorial from the March 8, 2018, issue of The New York Times appeared in the June 2018 issue of Population Connection.]

      Landmark Supreme Court decisions in 1965 and 1972 recognizing a constitutional right to contraception made it more likely that women went to college, entered the work force, and found economic stability. That's all because they were better able to choose when, or whether, to have children.

      A 2012 study from the University of Michigan found that by the 1990s, women who had early access to the birth control pill had wage gains of up to 30 percent, compared with older women.

      It’s mind-boggling that anyone would want to thwart that progress, especially since women still have so far to go in attaining full equality in the United States. But the Trump administration has signaled it may do just that, in a recent announcement about funding for a major family planning program, Title X.

      Since 1970, the federal government has awarded Title X grants to providers of family planning services — including contraception, cervical cancer screenings, and treatment for sexually transmitted infections — to help low-income women afford them. It’s a crucial program.

      Yet the Trump administration appeared to accept the conservatives’ retrograde thinking with a recent announcement from the Department of Health and Human Services’ Office of Population Affairs outlining its priorities for awarding Title X grants. Alarmingly, unlike previous funding announcements, the document makes zero reference to contraception. In setting its standards for grants, it disposes of nationally recognized clinical standards, developed with the federal Centers for Disease Control and Prevention, that have long been guideposts for family planning. Instead, the government says it wants to fund “innovative” services and emphasizes “fertility awareness” approaches, which include the so-called rhythm method. These have long been preferred by the religious right, but are notoriously unreliable.

Trump on Family Planning

[This excerpted editorial from the March 11, 2018, issue of the St. Louis Post-Dispatch appeared in the June 2018 issue of Population Connection.]

      The Trump administration's answer to questions surrounding family planning and safe sex is to give preference for $260 million in grants to groups stressing abstinence and “fertility awareness.” Instead of urging at-risk members of the public to use condoms and other forms of protection, the administration favors far less safe and effective measures such as the rhythm method.

      Effective and accessible contraception has helped lower rates of unplanned pregnancies in the U.S., thereby reducing the number of abortions. The federal Centers for Disease Control and Prevention reported last year that there were fewer abortions in 2014 than at any time since abortion was legalized in 1973. Adolescent pregnancies decreased 55 percent between 1990 and 2011. Birth rates for women between 15 and 19 declined an additional 35 percent between 2011 and 2016, according to the data.

      Much of the goal of family planning and contraception is to reduce the abortion rate by limiting unintended pregnancies and to decrease the number of sexually transmitted infections. There are enormous health, social, and economic benefits for women who control their own reproductive health.

      The administration’s emphasis on abstinence and natural family planning — including the so-called rhythm method — is part of a familiar pattern of shifting away from scientific, evidence-based policies toward non-scientific ideologies. With the current shift, Trump is undermining nearly fifty years of successful family planning efforts.

      Abstinence is 100 percent effective if practiced consistently. That’s a big if. Fertility awareness is effective if practitioners have a nearly medical understanding of hormonal cycles and adhere to them unfailingly.

When Facts Are Not Enough

[These excerpts are from an article by Katherine Hayhoe in the June 1, 2018, issue of Science.]

      …Scientists furthermore assume that disagreements can be resolved by more facts. So when people object to the reality of climate change with science-y sounding arguments—“the data is wrong,” or “it’s just a natural cycle,” or even, “we need to study it longer”—the natural response of scientists is simple and direct: People need more data. But this approach often doesn't work and can even backfire. Why? Because when it comes to climate change, science-y sounding objections are a mere smokescreen to hide the real reasons, which have much more to do with identity and ideology than data and facts.

      For years, climate change has been one of the most politically polarized issues in the United States. Today, the best predictor of whether the public agrees with the reality of anthropogenic climate change is not how much scientific information there is. It’s where each person falls on the political spectrum. That’s why the approach of bombarding the unconvinced with more data doesn’t work—people see it as an attack on their identity and an attempt to change their way of life.

      …As uncomfortable as this is for a scientist in today’s world, the most effective thing I’ve done is to let people know that I am a Christian. Why? Because it’s essential to connect the impacts of a changing climate directly to what's already meaningful in one's life, and for many people, faith is central to who they are. Scientists can be effective communicators by bonding over a value that they genuinely share with the people with whom they’re speaking. It doesn't have to be a shared faith. It could be that both are parents, or live in the same place, or are concerned about water resources or national security, or enjoy the same outdoor activities. Instead of beginning with what most divides scientists from others, start the conversation from a place of agreement and mutual respect. Then, scientists can connect the dots: share from their head and heart why they care.

      Talking about impacts isn't enough, though. Sadly, the most dangerous myth that many people have bought into is, “it doesn’t matter to me,” and the second most dangerous myth is, “there’s nothing I can do about it.” If scientists describe the daunting challenge of climate change but can’t offer an engaging solution, then people's natural defense mechanism is to disassociate from the reality of the problem. That's why changing minds also requires providing practical, viable, and attractive solutions that someone can get excited about. Concerned homeowner? Mention the amazing benefits of energy conservation. Worried parent? Bring up the practical steps to take to make outdoor play spaces safer for kids, even in the hot summer. Business executive? Talk about the economic benefits of renewables.

      We all live on the same planet, and we all want the same things. By connecting our heads to our hearts, we all can talk about—and tackle—the problem of climate change together.

What I Learned from Teaching

[These excerpts are from an article by Moamen Elmassry in the May 25, 2018, issue of Science.]

      …I tried my best to help my students learn, but my inexperience was apparent.

      I could have carried on as a mediocre teacher. But I recalled how some of my own teachers had inspired me over the years. I felt I owed my students the same—which, I realized, would require time and training. It was my responsibility to make that happen, even if it meant taking a little more time and focus away from my research.

      …I introduced my students to epidemiology by asking them to write short stories about an epidemic spreading on campus, hoping to incorporate more creativity into their learning. This unconventional assignment surprised the students at first. But some of them got so into it that they wrote much more than the half page I had assigned. I loved seeing my students so engaged with an activity I had designed. In my end-of-semester evaluations, some students said that I was their favorite TA, and others asked me to write recommendation letters for them, which was both humbling and rewarding.

      …teaching has provided me with some unexpected benefits. Knowing that I have teaching commitments pushes me to conduct efficient, well-designed experiments. Answering undergraduate students’ fundamental “why” questions helps keep me intellectually stimulated and forces me to think about science in new ways….

The Unlikely Triumph of Dinosaurs

[These excerpts are from an article by Stephen Brusatte in the May 2018 issue of Scientific American.]

      …Like many successful organisms, dinosaurs were born of catastrophe. Around 252 million years ago, at the tail end of the Permian Period, a pool of magma began to rumble underneath Siberia. The animals living at the surface—an exotic menagerie of large amphibians, knobby-skinned reptiles and flesh-eating forerunners of mammals—had no inkling of the carnage to come. Streams of liquid rock snaked through the mantle and then the crust, before flooding out through mile-wide cracks in the earth’s surface. For hundreds of thousands, maybe millions, of years the eruptions continued, spewing heat, dust, noxious gases and enough lava to drown several million square miles of the Asian landscape. Temperatures spiked, oceans acidified, ecosystems collapsed and up to 95 percent of the Permian species went extinct. It was the worst mass extinction in our planet's history. But a handful of survivors staggered into the next period of geologic time, the Triassic. As the volcanoes quieted and ecosystems stabilized, these plucky creatures now found themselves in a largely empty world. Among them were various small amphibians and reptiles, which diversified as the earth healed and which later diverged into today’s frogs, salamanders, turtles, lizards and mammals…

      The Prorotodactylus tracks date to about 250 million years ago, just one or two million years after the volcanic eruptions that brought the Permian to a close. Early on it was clear from the narrow distance between the left and right tracks that they belonged to a specialized group of reptiles called archosaurs that emerged after the Permian extinction with a newly evolved upright posture that helped them run faster, cover longer distances and track down prey with greater ease. The fact that the tracks came from an early archosaur meant that they could potentially bear on questions about the origins of dinosaurs. Almost as soon as the archosaurs originated, they branched into two major lineages, which would grapple with each other in an evolutionary arms race over the remainder of the Triassic: the pseudosuchians, which led to today's crocodiles, and the avemetatarsalians, which developed into dinosaurs. Which branch did Prorotodactylus belong to?

      …Prorotodactylus is therefore a dinosauromorph: not a dinosaur per se but a primitive member of the avemetatarsalian subgroup that includes dinosaurs and their very closest cousins. Members of this group had long tails, big leg muscles, and hips with extra bones connecting the legs to the trunk, which allowed them to move even faster and more efficiently than other archosaurs.

      These earliest dinosauromorphs were hardly fearsome, however. Fossils indicate that they were only about the size of a house cat, with long, skinny legs….

      Over the next 10 million to 15 million years the dinosauromorphs continued to diversify. The fossil record from this time period shows an increasing number of track types in Poland and then around the world. The tracks get larger and develop a greater variety of shapes. Some trackways stop showing impressions of the hand, a sign the makers were walking only on their hind legs. Skeletons start to turn up as well. Then, at some point between 240 million and 230 million years ago, one of these primitive dinosauromorph lineages evolved into true dinosaurs. It was a radical change in name only—the transition involved just a few subtle anatomical innovations: a long scar on the upper arm that anchored bigger muscles, some tablike flanges on the neck vertebrae that supported stronger ligaments, and an open, windowlike joint where the thighbone meets the pelvis that stabilized upright posture. Still, modest though these changes were, they marked the start of something big….

      But then, just when it seemed that dinosaurs would never escape their rut, they received two lucky breaks. First, in the humid zone, the dominant large herbivores of the time—reptiles called rhynchosaurs and mammal cousins called dicynodonts—went into decline, disappearing entirely in some areas for reasons still unknown. Their fall from grace between 225 million and 215 million years ago gave primitive plant-eating sauropodomorphs such as Saturnalia, a dog-size species with a slightly elongated neck, the opportunity to claim an important niche. Before long these sauropod precursors were the main herbivores in the humid parts of the Northern and Southern Hemispheres. Second, around 215 million years ago dinosaurs finally broke into the deserts of the Northern Hemisphere, probably because shifts in the monsoons and the amount of carbon dioxide in the atmosphere made differences between the humid and arid regions less severe, allowing dinosaurs to migrate between them more easily….

      No matter which interval you look at in the Triassic, from the time the first dinosaurs appeared around 230 million years ago until the period ended 201 million years ago, the story is the same. Only some dinosaurs were able to live in some parts of the world, and wherever they lived—humid forests or parched deserts—they were surrounded by all kinds of bigger, more common, more diverse animals….

      More than anything, however, Triassic dinosaurs were being outgunned by their close cousins the so-called pseudosuchians, on the crocodile side of the archosaur family….

      Our statistical analysis led us to an iconoclastic conclusion: the first dinosaurs were not particularly special, at least compared with the variety of other animals they were evolving alongside during the Triassic. If you were around back then to survey the Triassic scene, you probably would have considered the dinosaurs a fairly marginal group. And if you were of a gambling persuasion, you would probably have bet on some of the other animals, most likely those hyperdiverse pseudosuchians, to eventually become dominant, grow to massive sizes and conquer the world. But of course, we know that it was the dinosaurs that became ascendant and even persist today as more than 10,000 species of birds. In contrast, only two dozen or so species of modern crocodilians have survived to the present day.

      How did dinosaurs eventually wrestle the crown from their crocodile-line cousins? The biggest factor appears to have been another stroke of good fortune outside the dinosaurs’ control. Toward the end of the Triassic, great geologic forces pulled on Pangea from both the east and west, causing the supercontinent to fracture. Today the Atlantic Ocean fills that gap, but back then it was a conduit for magma. For more than half a million years tsunamis of lava flooded across much of central Pangea, eerily similar to the enormous volcanic eruptions that closed out the Permian 50 million years prior. Like those earlier eruptions, the end-Triassic ones also triggered a mass extinction. The crocodile-line archosaurs were decimated, with only a few species—the ancestors of today’s crocodiles and alligators—able to endure.

      Dinosaurs, on the other hand, seemed to have barely noticed this fire and brimstone. All the major subgroups—the theropods, sauropodomorphs and ornithischians—sailed into the next interval of geologic time, the Jurassic Period. As the world was going to hell, dinosaurs were thriving, somehow taking advantage of the chaos around them. I wish I had a good answer for why—was there something special about dinosaurs that gave them an edge over the pseudosuchians, or did they simply walk away from the plane crash unscathed, saved by sheer luck when so many others perished? This is a riddle for the next generation of paleontologists to solve.

      Whatever the reason dinosaurs survived that disaster, there is no mistaking the consequences. Once on the other side, freed from the yoke of their pseudosuchian rivals, these dinosaurs had the opportunity to prosper in the Jurassic. They became more diverse, more abundant and bigger than ever before. Completely new dinosaur species evolved and migrated widely, taking pride of place in terrestrial ecosystems the world over. Among these newcomers were the first dinosaurs with plates on their backs and armor covering their bodies; the first truly colossal sauropods that shook the earth as they walked; carnivorous ancestors of T. rex that began to get much bigger; and an assortment of other theropods that started to get smaller, lengthen their arms and cover themselves in feathers—predecessors of birds. Dinosaurs were now dominant. It took more than 30 million years, but they had, at long last, arrived.

The Battle of the Belt

[These excerpts are from an article by Claudia Wallis in the May 2018 issue of Scientific American.]

      Among the indignities of aging is a creeping tendency to put on weight, as our resting metabolism slows down—by roughly 1 to 2 percent every decade. But what's worse, at least for women, is a shift, around menopause, in where this excess flab accumulates. Instead of thickening the hips and thighs, it starts to add rolls around the belly—a pattern more typical of men—which notoriously reshapes older women from pears into apples.

      The change is not just cosmetic. A high waist-to-hip ratio portends a greater risk of heart disease, stroke, diabetes, metabolic syndrome and even certain cancers—for both men and women. The shift helps to explain why, after menopause, women begin to catch up to men in their rates of cardiovascular disease. And those potbellies are costly. A 2008 Danish study found that for every inch added to a healthy waistline, annual health care costs rose by about 3 percent for women and 5 percent for men.

      Researchers have been investigating “middle-aged spread” for decades, but there is still debate about why it happens, whether it is a cause or merely an indicator of health risks, and what can be done to avoid it. As we grow older, we deposit relatively more excess fat around our abdominal organs as opposed to under the skin—where most of our body fat sits. There are some ethnic and racial differences, however….For a given waist circumference, African-Americans tend to have less of this “visceral fat,” and Asians tend to have more. Visceral fat differs from subcutaneous fat in that it releases fatty acids and inflammatory substances directly into the liver rather than into the general circulation. Some experts believe this may play a direct role in causing the diseases linked to abdominal obesity.

      But not everyone agrees….

      Another area of uncertainty is why we pack on visceral fat with aging. Clearly, sex hormones are involved, given that the change occurs in women around menopause. But it is more complicated than just a drop in estrogen. Consider, for instance, that young women with polycystic ovary syndrome tend to have the apple shape and insulin resistance, although their bodies produce plenty of estrogen. Such women do, however, have high levels of androgens. Or consider that when transgender males—who are biologically female—take androgens to masculinize their body, they, too, develop more visceral fat and glucose intolerance. Both examples suggest that “a relative imbalance” of male and female hormones may be at work…. The same might also be true of healthy women at menopause.

      But this isn’t settled science. A newer theory made a splash last year after researchers reported in Nature that they could radically reduce body fat—including visceral fat—and raise metabolic rates in mice by blocking the action of follicle-stimulating hormone (FSH), a substance better known for its role in reproduction. Could FSH be the key to the midlife weight puzzle? The researchers had previously shown that blocking FSH could halt bone loss, raising the intriguing prospect of a medical twofer: one drug to combat obesity and osteoporosis. “The next step is to take this to humans,” says senior author Mone Zaidi of the Icahn School of Medicine at Mount Sinai.

      Of course, many a thrilling discovery in mice has fizzled in humans, and combating the evolutionary programming for storing fat is particularly difficult….

      As far as we know, there’s only one way to fight nature's plan for a thickening middle and its attendant risks—and you know where this is going. Eat less or exercise more as you age, or do both….

Results Roll in from the Dinosaur Renaissance

[These excerpts are from a book review by Victoria Arbour in the May 11, 2018, issue of Science.]

      …Steve Brusatte’s The Rise and Fall of the Dinosaurs takes readers on a tour of the new fossils and discoveries that are shedding light on the dinosaurs’ evolutionary story.

      The dawn of the dinosaurs, the Triassic period, is still one of the most poorly understood periods in dinosaur history, but it’s also where some of the information gaps are being filled most rapidly and most surprisingly. Whereas the end of the age of dinosaurs was abruptly cut short by a meteor, their ascent was complex and drawn-out. New finds from Poland, New Mexico, and Argentina show that dinosaurs were uncommon and relatively unspecialized for the first 30 million years of their existence, and they lived alongside relatives of today’s crocodiles that looked much like dinosaurs themselves.

      Elsewhere in the Mesozoic era, we meet a variety of newly discovered dinosaurs alongside old favorites. As the giant supercontinent Pangaea split apart during the Jurassic and Cretaceous periods, dinosaurs on newly drifting continents were isolated from each other and began to evolve their own characteristic features. South America was home to the snub-snouted, tiny-armed abelisaurs, Africa to the shark-toothed carcharodontosaurs, and in Transylvania, a bizarre set of dinosaurs includes relatives of Velociraptor with not one but two sets of killer sickle claws on their feet.

      …The veritable flood of fluffy and feathery fossils from China has revealed an amazing diversity of winged dinosaurs. These specimens indicate that feathers evolved long before flight but also suggest that powered, flapping flight may have evolved multiple times in dinosaurs. (We need look no further than the totally weird bat-winged Yi qi to see that dinosaurs experimented with many ways to get airborne.)

      …Tyrannosaurs weren’t all giant bone-crunchers with tiny arms: the earliest members of the group started out as small, lightly built, long-armed predators with fancy crests on their heads. Many were feathered, as evidenced by those found in China, where the right kind of conditions preserved soft tissues such as skin.

      …Recent advances in understanding dinosaur growth, biogeography, extinction dynamics, and fine-scale evolutionary changes through time, for example, have only been possible because of the comparatively abundant fossil record of duck-billed hadrosaurs and horned ceratopsians….

Finding the First Horse Tamers

[These excerpts are from an article by Michael Price in the May 11, 2018, issue of Science.]

      Taming horses opened a new world, allowing prehistoric people to travel farther and faster than ever before, and revolutionizing military strategy. But who first domesticated horses—and the genetic and cultural impact of the early riders—has long been a puzzle.

      The “steppe hypothesis” suggested that Bronze Age pastoralists known as the Yamnaya, or their close relatives, first domesticated the horse. Aided by its fleet transport, they migrated out from the Eurasian steppe and spread their genes, as well as precursors of today’s Indo-European languages, across much of Eurasia. But a new study of ancient genomes…suggests that the Yamnaya’s effect on Asia was limited, and that another culture domesticated the horse first….

      The first signs of horse domestication—pottery containing traces of mares’ milk and horse teeth with telltale wear from a riding bit—come from Botai hunter-gatherers, who lived in modern Kazakhstan from about 3700 B.C.E. to 3100 B.C.E. Yet some researchers thought the Botai were unlikely to have invented horse husbandry because they lingered as hunter-gatherers long after their neighbors had adopted farming and herding. These researchers assumed the Botai learned to handle horses from nearby cultures on the steppe, perhaps even the Yamnaya, who were already herding sheep and goats.

      Genetic data suggest the Yamnaya migrated both east and west during the Bronze Age, and mixed with locals. Some researchers hypothesize that they also spread early branches of a Proto-Indo-European (PIE) language, which later diversified into today’s many Indo-European languages, including English, Italian, Hindi, Russian, and Persian.

      …sequenced the whole genomes of 74 ancient Eurasians, most of whom lived between 3500 B.C.E. and 1500 B.C.E. The researchers devised a rough family tree and timeline for these samples and those from later civilizations and modern people.

      The team found no Yamnaya DNA in the three Botai individuals, suggesting the two groups hadn’t mixed. That implies the Botai domesticated horses on their own….

      The new work fits with the archaeological evidence and a recent study of DNA from ancient horses themselves….That work showed that Botai horses were not related to modern horses, hinting at separate domestications by the Botai and other steppe dwellers….

NASA Cancels Carbon Monitoring Research Program

[These excerpts are from an article by Paul Voosen in the May 11, 2018, issue of Science.]

      You can’t manage what you don’t measure. The adage is especially relevant for climate-warming greenhouse gases, which are crucial to manage—and challenging to measure. In recent years, though, satellite and aircraft instruments have begun monitoring carbon dioxide and methane remotely, and NASA’s Carbon Monitoring System (CMS), a $10-million-a-year research line, has helped stitch together observations of sources and sinks into high-resolution models of the planet’s flows of carbon. Now, President Donald Trump’s administration has quietly killed the CMS, Science has learned.

      The move jeopardizes plans to verify the national emission cuts agreed to in the Paris climate accords….

      The White House has mounted a broad attack on climate science, repeatedly proposing cuts to NASA's earth science budget, including the CMS, and cancellations of climate missions such as the Orbiting Carbon Observatory 3 (OCO-3). Although Congress fended off the budget and mission cuts, a spending deal signed in March made no mention of the CMS. That allowed the administration’s move to take effect….

      The agency declined to provide a reason for the cancellation beyond “budget constraints and higher priorities within the science budget.” But the CMS is an obvious target for the Trump administration because of its association with climate treaties and its work to help foreign nations understand their emissions….

      Many of the 65 projects supported by the CMS since 2010 focused on understanding the carbon locked up in forests. For example, the U.S. Forest Service has long operated the premier land-based global assessment of forest carbon, but the labor-intensive inventories of soil and timber did not extend to the remote interior of Alaska. With CMS financing, NASA scientists worked with the Forest Service to develop an aircraft-based laser imager to tally up forest carbon stocks….

      The program has also supported research to improve tropical forest carbon inventories. Many developing nations have been paid to prevent deforestation through mechanisms like the United Nations’s REDD+ program, which is focused on reducing emissions from deforestation and forest degradation. But the limited data and tools for monitoring tropical forest change often meant that claimed reductions were difficult to trust….The end of the CMS is disappointing and “means we're going to be less capable of tracking changes in carbon….”

      The CMS improved other carbon monitoring as well. It supported efforts by the city of Providence to combine multiple data sources into a picture of its greenhouse gas emissions, and identify ways to reduce them. It has tracked the dissolved carbon in the Mississippi River as it flows out into the ocean….

Fossils Reveal How Ancient Birds Got Their Beaks

[These excerpts are from an article by Gretchen Vogel in the May 4, 2018, issue of Science.]

      As every schoolchild now knows, birds are dinosaurs, linked to their extinct relatives by feathers and anatomy. But birds’ beaks—splendidly versatile adaptations that allow their owners to grasp, pry, preen, and tear—are nothing like stiff dinosaurian snouts, and how they evolved has been a mystery. Now, 3D scans of new fossils of an iconic ancient bird capture the beak just as it took form.

      …By bringing details from multiple specimens together, the new scans offer an early glimpse of key features of bird skulls, including a big brain and the movable upper jaw that helps make beaks so nimble.

      Ichthyornis, an ancient seabird from about 90 million years ago, has long been famous for having a body like a modern bird, with a snout lined with teeth like a dinosaur. Paleontologists studying the first Ichthyornis fossil, discovered in the 1870s in Kansas, initially thought the body came from a small bird and the jaw from a marine reptile. Further excavation convinced them that the pieces belonged to the same animal. In 1880, Charles Darwin wrote that Ichthyornis was among “the best support for the theory of evolution” since On the Origin of Species was published 2 decades earlier.

      But in the original Ichthyornis fossil, the upper jaw is missing, and the toothed lower jaw resembles that of other dinosaurs. So paleontologists assumed that early birds made do with a fixed upper jaw, like most other vertebrates.

      In 2014, paleontologists in Kansas found a new specimen of Ichthyornis….

      Instead of extracting the fossil from the limestone in which it is embedded, the researchers used computerized tomography to scan the entire block of rock. Then they scanned three previously unrecognized specimens that they found in museum collections, and combined all the scans into a complete model of Ichthyornis’s skull. They also re-examined the original fossil from the 1870s, housed at Yale's Peabody Museum of Natural History. Among unidentified pieces stored with the fossil, they found a small fragment that, when scanned, turned out to contain two key bones from the upper snout—bones that were missing in the new specimens.

      The resulting 3D model captures Ichthyornis’s transitional position between modern birds and other dinosaurs….Despite its dinosaurlike teeth, Ichthyornis had a hooked beak, likely covered by a hard layer of keratin, on the tip of its snout. It also could move both top and bottom jaws independently like modern birds.

      That means beaks appeared earlier than thought, perhaps around the same time as wings….

Critics See Hidden Goal in EPA Data Access Rule

[These excerpts are from an article by Warren Cornwall in the May 4, 2018, issue of Science.]

      When Scott Pruitt, administrator of the U.S. Environmental Protection Agency (EPA) in Washington, D.C., announced last week that the agency plans to bar regulators from considering studies that have not made their underlying data public, he said it was to ensure the quality of the research used to shape new rules. “The era of secret science at EPA is coming to an end,” Pruitt said at a 24 April event (which was closed to the press) unveiling the proposed “transparency” rule.

      But longtime observers of EPA, including former senior agency officials, see a more troubling and targeted goal: undermining key studies that have helped justify stricter limits on air pollution. In particular, they say, the new policy is aimed at blocking EPA consideration of large epidemiological studies that have highlighted the health dangers of tiny particles of soot and other chemicals less than 2.5 microns in diameter. Those studies, which rest in part on confidential health information that is difficult to make public, have been under attack for decades from some industry groups and Republican lawmakers in Congress, who argue that the confidentiality masks flaws in the studies. The same interests lobbied heavily for the new EPA rule, and critics of the policy say it is just new clothing for an old—and largely discredited—argument….

      At the heart of the fight is a type of pollution scientists believe is particularly lethal, but relatively costly to control: tiny particles of soot and other chemicals produced by burning oil, coal, gasoline, wood, and other fuels, which can lodge deep in the lungs. In the mid-1990s, two major epidemiological studies—known as the Harvard Six Cities and American Cancer Society (ACS) studies—tracked the medical histories of thousands of people exposed to different levels of air pollution. The studies found that exposure to even relatively low particulate levels increased premature deaths. Further studies have linked the pollution to other problems including asthma, heart disease, and heart attacks.

      In response, EPA began tightening clean air regulations—and affected industries began to attack the findings. Industry representatives also urged Congress to pass legislation that would bar EPA from using nonpublic data in crafting regulations. In recent years that legislation, championed by Representative Lamar Smith (R-TX), head of the House of Representatives’s science committee, failed to gain approval. But after the election of President Donald Trump, Smith and his allies found a receptive audience in Pruitt, who agreed to implement similar policies as an EPA rule.

      In the meantime, an array of studies, including a government-sponsored reanalysis of the original particulate data, has generally validated the findings….

      …Lowering the standard to 11 micrograms would increase pollution-control costs by as much as $1.35 billion in 2020, analysts estimated, but the health gains and lives saved would be worth as much as $20 billion a year.

      …The timing of the rule—which observers expect EPA to adopt once a public comment period closes—is no coincidence….The agency is about to embark on a periodic review of key air pollution limits, including those governing particulates. Even seemingly modest changes in how the agency evaluates the science could lead to lower estimates of the health benefits of tighter standards….

Orangutan Medicine

[These excerpts are from an article by Doug Main in the May 2018 issue of Scientific American.]

      Medicine is not exclusively a human invention. Many other animals, from insects to birds to nonhuman primates, have been known to self-medicate with plants and minerals for infections and other conditions. Behavioral ecologist Helen Morrogh-Bernard of the Borneo Nature Foundation has spent decades studying the island’s orangutans and says she has now found evidence that they use plants in a previously unseen medicinal way.

      …watched 10 orangutans occasionally chew a particular plant (which is not part of their diet) into a foamy lather and then rub it into their fur. The apes spent up to 45 minutes at a time massaging the concoction onto their upper arms or legs. The researchers believe this behavior is the first known example of a non-human animal using a topical analgesic.

      Local people use the same plant—Dracaena cantleyi, an unremarkable-looking shrub with stalked leaves—to treat aches and pains….

      …That behavior may then have been passed on to other orangutans. Because this type of self-medication is seen only in south-central Borneo, Morrogh-Bernard says, it was probably learned locally.

Watchful Plants

[These excerpts are from an article by Erica Tennenhouse in the May 2018 issue of Scientific American.]

      Plants cannot run or hide, so they need other strategies to avoid being eaten. Some curl up their leaves; others churn out chemicals to make themselves taste bad if they sense animals drooling on them, chewing them up or laying eggs on them—all surefire signals of an attack. New research now shows some flora can detect an herbivorous animal well before it launches an assault, letting a plant mount a preemptive defense that even works against other pest species.

      When ecologist John Orrock of the University of Wisconsin-Madison squirted snail slime—a lubricating mucus the animals ooze as they slide along—into soil, nearby tomato plants appeared to notice. They increased their levels of an enzyme called lipoxygenase, which is known to deter herbivores….

      Initially Orrock found this defense worked against snails; in the latest study, his team measured the slimy warning’s impact on another potential threat. The investigators found that hungry caterpillars, which usually gorge on tomato leaves, had no appetite for them after the plants were exposed to snail slime and activated their chemical resistance. This nonspecific defense may be a strategy that gets the plants more bang for their buck by further improving their overall odds of survival, says Orrock….

Batty Schedules

[These excerpts are from an article by Inga Vesper in the May 2018 issue of Scientific American.]

      Every year migratory bats travel from Mexico to Bracken Cave near San Antonio, Tex., where they spend the summer consuming insects that would otherwise devour common food crops. But the bats have been showing up far earlier than they did two decades ago, possibly because of a warming climate, new research suggests.

      This trend creates a risky situation in which bats may not find enough food for themselves and their young, as the insects they prey on may not yet have arrived or hatched. If bat colonies shrink as a result of this schedule snafu, their pest control effect could fall out of sync with crop-growing seasons—potentially causing hefty losses, scientists say….

      Mexican (also called Brazilian) free-tailed bats, the migratory species that inhabits Bracken Cave, feast on 20 different moth species and more than 40 other agricultural pests. One favorite is the corn earworm moth, which eats plants such as corn, soybean, potato and pumpkin—costing U.S. farmers millions of dollars a year in ruined crops. A 2011 study estimated that bats indirectly contribute around $23 billion to the U.S. economy by keeping plant-eating insects in check and by hunting bugs that prey on pollinator insects….

      Changing bat migration times can also clash with rainfall patterns. Many insects that bats eat breed in seasonal lakes and puddles. If the bats arrive too early to benefit from summer rainfall and the resulting abundance of bugs, they may struggle to feed their pups or skip reproduction altogether, O'Keefe says. She fears this shift could cause Midwestern bats to dwindle toward extinction, which would be bad news for humans. “Declines in bat populations could have severe implications for crop success,” she says, adding that bats also “control significant disease vectors, such as mosquitoes.”

Incentivize Responsible Antibiotic Use

[These excerpts are from a book review by Ramana Laxminarayan in the April 27, 2018, issue of Science.]

      Ever since the advent of antibiotics, scientists and clinicians have warned of the potential for widespread antibiotic resistance. Indeed, the first in vitro study of resistance to penicillin was published in 1940, 2 years before the first patient was even treated with the drug. In the ensuing decades, experts and the media continued to warn about an impending crisis of resistance but were largely ignored by the public and policy-makers.

      Public opinion changed when resistance became clinically relevant….

      The U.S. Centers for Disease Control and Prevention warned of “nightmare bacteria,” and global leaders started talking about the problem. Yet the problem of resistance lacked an effective global spokesperson….

      Antibiotic resistance has much in common with climate change, in that actions in any single country have the potential to affect the rest of the world. No matter where the next strain of multidrug-resistant S. aureus arises, it will become a problem in rich and poor countries alike. And, as with climate change, a key problem is the lack of incentives—for individuals, organizations, and countries—to preserve a global common resource.

      The metaphor of an “arms race” against bacteria is outdated. One could argue that the idea that bacteria are our enemy is what got us to the problem of antibiotic overuse in the first place….

Plastics Recycling with a Difference

[These excerpts are from an article by Haritz Sardon and Andrew P. Dove in the April 27, 2018, issue of Science.]

      Since the synthesis of the first synthetic polymer in 1907, the low cost, durability, safety, and processability of polymers have led to ever-expanding uses throughout the global economy. Polymers, commonly called plastics, have become so widely used that global production is expected to exceed 500 million metric tons by 2050. This rising production, combined with rapid disposal and poor mechanisms for recycling, has led to the prediction that, by 2050, there will be more plastic in the sea than fish….

      The production of synthetic plastics is far from being sustainable. Most plastics are produced for single-use applications, and their intended use life is typically less than 1 year. Yet the materials commonly persist in the environment for centuries. More than 40 years after the launch of the recycling symbol, only 5% of plastics that are manufactured are recycled, mainly mechanically into lower-value secondary products that are not recycled again and that ultimately find their way to landfills or pollute the environment. With these materials being lost from the system, there is a constant need for the generation of new plastics, mostly from petrochemical sources, thus further depleting natural resources. Although there has been a substantial effort to develop biodegradable plastics, with polylactide arguably the most successful example, the mechanical and thermal properties of these materials still need to be improved if they are to substitute for a wider range of existing materials.

      In the past decade, an alternative sustainable strategy has been proposed in which the plastic never becomes waste. Instead, once used, it is collected and chemically recycled into raw materials for the production of new virgin plastics with the same properties as the original but without the need for further new monomer feedstocks. This strategy not only helps to address the environmental issues related to the continual growth of disposed plastics over the world but may also reduce the demand for finite raw materials by providing a circular materials economy….

      Plastics will continue to be critical for addressing the continuing demands of our society. New polymeric materials will, for example, be needed for energy generation and storage, to address healthcare needs, for food conservation, and for providing clean water….

      Studies…in which disposed plastics can be infinitely recycled without deleterious effects on their properties, can lead to a world in which plastics at the end of their life are not considered as waste but as raw materials to generate high-value products and virgin plastics. This will both incentivize recycling and encourage sustainability by reducing the requirement for new monomer feedstocks. Current chemical recycling processes are expensive and energetically unfavorable, and further advances in monomer and polymer development and catalyst design are required to facilitate the implementation of economically viable sustainable polymers.

Searching for a Stone Age Odysseus

[These excerpts are from an article by Andrew Lawler in the April 27, 2018, issue of Science.]

      Odysseus, who voyaged across the wine-dark seas of the Mediterranean in Homer's epic, may have had some astonishingly ancient forerunners. A decade ago, when excavators claimed to have found stone tools on the Greek island of Crete dating back at least 130,000 years, other archaeologists were stunned—and skeptical. But since then, at that site and others, researchers have quietly built up a convincing case for Stone Age seafarers—and for the even more remarkable possibility that they were Neandertals, the extinct cousins of modern humans.

      The finds strongly suggest that the urge to go to sea, and the cognitive and technological means to do so, predates modern humans….

      Scholars long thought that the capability to construct and victual a watercraft and then navigate it to a distant coast arrived only with the advent of agriculture and animal domestication. The earliest known boat, found in the Netherlands, dates back only 10,000 years or so, and convincing evidence of sails only shows up in Egypt’s Old Kingdom around 2500 B.C.E. Not until 2000 B.C.E. is there physical evidence that sailors crossed the open ocean, from India to Arabia.

      But a growing inventory of stone tools and the occasional bone scattered across Eurasia tells a radically different story. (Wooden boats and paddles don't typically survive the ages.) Early members of the human family such as Homo erectus are now known to have crossed several kilometers of deep water more than a million years ago in Indonesia, to islands such as Flores and Sulawesi. Modern humans braved treacherous waters to reach Australia by 65,000 years ago. But in both cases, some archaeologists say early seafarers might have embarked by accident, perhaps swept out to sea by tsunamis.

      In contrast, the recent evidence from the Mediterranean suggests purposeful navigation. Archaeologists had long noted ancient-looking stone tools on several Mediterranean islands including Crete, which has been an island for more than 5 million years, but they were dismissed as oddities.

      …The picks, cleavers, scrapers, and bifaces were so plentiful that a one-off accidental stranding seems unlikely, Strasser says. The tools also offered a clue to the identity of the early seafarers: The artifacts resemble Acheulean tools developed more than a million years ago by H. erectus and used until about 130,000 years ago by Neandertals as well.

      …the tools may represent a sea-borne migration of Neandertals from the Near East to Europe….

In Blockchain We Trust

[These excerpts are from an article by Michael J. Casey and Paul Vigna in the May/June 2018 issue of Technology Review.]

      …we need to go back to the 14th century.

      That was when Italian merchants and bankers began using the double-entry bookkeeping method. This method, made possible by the adoption of Arabic numerals, gave merchants a more reliable record-keeping tool, and it let bankers assume a powerful new role as middlemen in the international payments system. Yet it wasn’t just the tool itself that made way for modern finance. It was how it was inserted into the culture of the day.

      In 1494 Luca Pacioli, a Franciscan friar and mathematician, codified their practices by publishing a manual on math and accounting that presented double-entry bookkeeping not only as a way to track accounts but as a moral obligation. The way Pacioli described it, for everything of value that merchants or bankers took in, they had to give something back. Hence the use of offsetting entries to record separate, balancing values — a debit matched with a credit, an asset with a liability.
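
      A minimal sketch of that rule, in modern Python rather than anything Pacioli would recognize (the account names here are hypothetical), shows the invariant he codified: a transaction is accepted only if its debits and credits offset, so total debits always equal total credits.

class Ledger:
    def __init__(self):
        self.entries = []  # (account, debit, credit) rows

    def record(self, debits, credits):
        # Pacioli's rule: the offsetting entries must balance exactly.
        if sum(a for _, a in debits) != sum(a for _, a in credits):
            raise ValueError("unbalanced transaction")
        for account, amount in debits:
            self.entries.append((account, amount, 0))
        for account, amount in credits:
            self.entries.append((account, 0, amount))

    def trial_balance(self):
        # In honest books these two totals are always equal.
        return (sum(d for _, d, _ in self.entries),
                sum(c for _, _, c in self.entries))

ledger = Ledger()
# A merchant borrows 100 florins: an asset (cash) rises, matched by
# a liability (the loan) -- a debit paired with a credit.
ledger.record(debits=[("cash", 100)], credits=[("loan_payable", 100)])
print(ledger.trial_balance())  # (100, 100)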

      Pacioli’s morally upright accounting bestowed a form of religious benediction on these previously disparaged professions. Over the next several centuries, clean books came to be regarded as a sign of honesty and piety, clearing bankers to become payment intermediaries and speeding up the circulation of money. That funded the Renaissance and paved the way for the capitalist explosion that would change the world.

      Yet the system was not impervious to fraud. Bankers and other financial actors often breached their moral duty to keep honest books, and they still do—just ask Bernie Madoff’s clients or Enron’s shareholders. Moreover, even when they are honest, their honesty comes at a price. We've allowed centralized trust managers such as banks, stock exchanges, and other financial middlemen to become indispensable, and this has turned them from intermediaries into gatekeepers. They charge fees and restrict access, creating friction, curtailing innovation, and strengthening their market dominance.

      The real promise of blockchain technology, then, is not that it could make you a billionaire overnight or give you a way to shield your financial activities from nosy governments. It's that it could drastically reduce the cost of trust by means of a radical, decentralized approach to accounting—and, by extension, create a new way to structure economic organizations.

      A new form of bookkeeping might seem like a dull accomplishment. Yet for thousands of years, going back to Hammurabi’s Babylon, ledgers have been the bedrock of civilization. That's because the exchanges of value on which society is founded require us to trust each other’s claims about what we own, what we’re owed, and what we owe. To achieve that trust, we need a common system for keeping track of our transactions, a system that gives definition and order to society itself. How else would we know that Jeff Bezos is the world's richest human being, that the GDP of Argentina is $620 billion, that 71 percent of the world's population lives on less than $10 a day, or that Apple's shares are trading at a particular multiple of the company’s earnings per share?

      A blockchain (though the term is bandied about loosely, and often misapplied to things that are not really blockchains) is an electronic ledger—a list of transactions. Those transactions can in principle represent almost anything. They could be actual exchanges of money, as they are on the blockchains that underlie cryptocurrencies like Bitcoin. They could mark exchanges of other assets, such as digital stock certificates. They could represent instructions, such as orders to buy or sell a stock. They could include so-called smart contracts, which are computerized instructions to do something (e.g., buy a stock) if something else is true (the price of the stock has dropped below $10).

      What makes a blockchain a special kind of ledger is that instead of being managed by a single centralized institution, such as a bank or government agency, it is stored in multiple copies on multiple independent computers within a decentralized network. No single entity controls the ledger. Any of the computers on the network can make a change to the ledger, but only by following rules dictated by a “consensus protocol,” a mathematical algorithm that requires a majority of the other computers on the network to agree with the change.

      Once a consensus generated by that algorithm has been achieved, all the computers on the network update their copies of the ledger simultaneously. If any of them tries to add an entry to the ledger without this consensus, or to change an entry retroactively, the rest of the network automatically rejects the entry as invalid.

      Typically, transactions are bundled together into blocks of a certain size that are chained together (hence “blockchain”) by cryptographic locks, themselves a product of the consensus algorithm. This produces an immutable, shared record of the “truth,” one that—if things have been set up right—cannot be tampered with….
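
      A toy version of that chaining, assuming SHA-256 as the cryptographic lock (this is an illustrative sketch, not the block format of any real system), shows why retroactive edits are detectable: each block’s lock covers the previous block’s lock, so altering an old entry invalidates every lock that follows.

import hashlib, json

def lock(body):
    # The "cryptographic lock": a hash over the block's contents,
    # including the previous block's lock.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = chain[-1]["lock"] if chain else "0" * 64
    body = {"prev": prev, "transactions": transactions}
    chain.append({**body, "lock": lock(body)})

def verify(chain):
    prev = "0" * 64
    for block in chain:
        body = {"prev": block["prev"], "transactions": block["transactions"]}
        if block["prev"] != prev or block["lock"] != lock(body):
            return False  # the chain of locks is broken
        prev = block["lock"]
    return True

chain = []
append_block(chain, ["Alice pays Bob 5"])
append_block(chain, ["Bob pays Carol 2"])
print(verify(chain))   # True
chain[0]["transactions"] = ["Alice pays Bob 500"]  # tamper with history
print(verify(chain))   # False -- every later lock no longer matches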

      The benefits of this decentralized model emerge when weighed against the current economic system's cost of trust. Consider this: In 2007, Lehman Brothers reported record profits and revenue, all endorsed by its auditor, Ernst & Young. Nine months later, a nosedive in those same assets rendered the 158-year-old business bankrupt, triggering the biggest financial crisis in 80 years. Clearly, the valuations cited in the preceding years’ books were way off. And we later learned that Lehman’s ledger wasn’t the only one with dubious data. Banks in the US and Europe paid out hundreds of billions of dollars in fines and settlements to cover losses caused by inflated balance sheets. It was a powerful reminder of the high price we often pay for trusting centralized entities’ internally devised numbers.

      The crisis was an extreme example of the cost of trust. But we also find that cost ingrained in most other areas of the economy. Think of all the accountants whose cubicles fill the skyscrapers of the world. Their jobs, reconciling their company’s ledgers with those of its business counterparts, exist because neither party trusts the other’s record. It is a time-consuming, expensive, yet necessary process.

      …Might this blind spot explain why some prominent economists are quick to dismiss blockchain technology? Many say they can’t see the justification for its costs. Yet their analyses typically don't weigh those costs against the far-reaching societal cost of trust that the new models seek to overcome….

      Although there are still major obstacles to overcome before blockchains can fulfill the promise of a more robust system for recording and storing objective truth, these concepts are already being tested in the field. Companies such as IBM and Foxconn are exploiting the idea of immutability in projects that seek to unlock trade finance and make supply chains more transparent….

      What makes these programmable money contracts “smart” is not that they’re automated; we already have that when our bank follows our programmed instructions to autopay our credit card bill every month. It’s that the computers executing the contract are monitored by a decentralized blockchain network. That assures all signatories to a smart contract that it will be carried out fairly.

      With this technology, the computers of a shipper and an exporter, for example, could automate a transfer of ownership of goods once the decentralized software they both use sends a signal that a digital-currency payment—or a cryptographically unbreakable commitment to pay—has been made. Neither party necessarily trusts the other, but they can nonetheless carry out that automatic transfer without relying on a third party. In this way, smart contracts take automation to a new level—enabling a much more open, global set of relationships.
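
      A sketch of that shipper-and-exporter logic, written in plain Python rather than a real contract language (all names here are hypothetical), shows the conditional clause at its core: ownership flips automatically once confirmed payments reach the agreed price, with no third party deciding.

class EscrowContract:
    # Mimics a smart contract's conditional clause; on a real
    # blockchain this logic would run on the decentralized network.
    def __init__(self, goods, seller, buyer, price):
        self.goods, self.price = goods, price
        self.seller, self.buyer = seller, buyer
        self.owner = seller
        self.paid = 0

    def on_payment_confirmed(self, amount):
        # Called when the network signals a confirmed payment.
        self.paid += amount
        if self.paid >= self.price and self.owner == self.seller:
            self.owner = self.buyer  # automatic transfer of ownership

contract = EscrowContract("container_417", "exporter", "shipper", 1000)
contract.on_payment_confirmed(1000)
print(contract.owner)  # 'shipper' -- no bank or broker in the loop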

      Programmable money and smart contracts constitute a powerful way for communities to govern themselves in pursuit of common objectives. They even offer a potential breakthrough in the “Tragedy of the Commons,” the long-held notion that people can't simultaneously serve their self-interest and the common good….

Evidence for Opportunity

[These excerpts are from an article by Michael J. Feuer in the April 27, 2018, issue of Science.]

      “Our nation is moving toward two societies, one black, one white—separate and unequal.” So concluded a 1968 report by the Kerner Commission, established by U.S. President Lyndon Johnson to investigate the race riots of 1967. Not only did the report shine a spotlight on America’s unfulfilled promises, it spurred action by politicians and policy-makers. Fifty years later, it is fair—and necessary—to ask if anything has changed. Healing Our Divided Society, the 2018 sequel to the Kerner report, argues sadly that gains of the 1970s and early 1980s are evaporating or reversing. But, noting the role of empirical evidence in bolstering past reforms, the new report suggests hopefully that “the quantity and sophistication of scientific information available today far exceeds what was available [in 1968].”

      To be sure, there was important progress after the 1968 report: Education achievement gaps narrowed (mostly in the early grades), college participation and degree attainment rose for all groups, and average family wealth for black and Hispanic Americans increased. But the current picture is alarming: Income inequality has exploded; child poverty is unacceptably high, especially in racially concentrated neighborhoods; and black children face considerably lower chances of upward mobility than their white peers.

      …The U.S. now has more and better policy-relevant research and evidence that can help move the needle….investments in programs such as Head Start, coupled with sustained K-12 funding, can break the cycle of poverty.

      A key takeaway from such examples is that political will is necessary but insufficient without empirical evidence. The question is whether we can be confident in the supply of good research, in renewed political commitment, and in a revived appetite for evidence-informed policy at all levels of government.

      Let’s hope so. To be prepared, we must address worrisome trends. After decades of federal funding, the U.S. has a robust supply of doctoral-level scientists in education and related fields, but federal resources for their continuing work are meager and politically vulnerable….Private foundations mostly advance the public good, but few support general education research and some put advocacy ahead of evidence…

      …Congress should increase funding for behavioral and social sciences, and governments at all levels should consider new approaches to accessing evidence….

      If there is hope for restoring economic and educational opportunity, research is essential. The proven tradition of relying on science to make the world better cannot end on our watch.

False News Flies Faster

[This article by Peter Dizikes is in the May/June 2018 issue of Technology Review.]

      “We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude,” says Sinan Aral, a professor at the MIT Sloan School of Management and coauthor of a paper detailing the results in Science.

      “These findings shed new light on fundamental aspects of our online communication ecosystem,” says study coauthor Deb Roy, an associate professor of media arts and sciences at the MIT Media Lab and director of the Media Lab’s Laboratory for Social Machines (LSM). Roy, who served as Twitter’s chief media scientist from 2013 to 2017, adds that the researchers were “somewhere between surprised and stunned” at the different trajectories of true and false news on Twitter.

      To conduct the study, the researchers tracked roughly 126,000 “cascades,” or unbroken retweet chains, of news stories cumulatively tweeted over 4.5 million times by about three million people from 2006 to 2017. To determine whether stories were true or false, they used the assessments of six fact-checking organizations.

      The researchers found that false news stories are 70 percent more likely to be retweeted than true stories are. It also takes true stories about six times as long to reach 1,500 people as it does for false stories to reach the same number of people. And falsehoods reach a “cascade depth” of 10 about 20 times faster than real facts do.
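
      For readers unfamiliar with the metric, “cascade depth” counts retweet hops below the original tweet. A small sketch (the tiny cascade here is made up for illustration; the study measured real Twitter data) shows the idea:

def cascade_depth(retweet_of, root):
    # retweet_of maps each retweet to the tweet it rebroadcast;
    # depth is the longest chain of hops below the original tweet.
    children = {}
    for node, parent in retweet_of.items():
        children.setdefault(parent, []).append(node)
    def height(node):
        return 1 + max((height(c) for c in children.get(node, [])), default=0)
    return height(root) - 1

# "t0" is the original tweet; a, b, c, d are retweets.
retweets = {"a": "t0", "b": "a", "c": "b", "d": "t0"}
print(cascade_depth(retweets, "t0"))  # 3: t0 -> a -> b -> c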

      Moreover, the scholars found, bots are not the principal reason inaccurate stories get around so much faster and farther than real news. Instead, inaccurate news items spread faster around Twitter because people are retweeting them.

      “When we removed all of the bots in our data set, [the] differences between the spread of false and true news stood,” says LSM postdoc and paper coauthor Soroush Vosoughi, whose PhD research with Roy on the spread of rumors led to the current study.

      So why do falsehoods spread more quickly than the truth on Twitter? The scholars suggest the answer may reside in human psychology: we like new things, and false news is often accompanied by reactions of surprise.

      “False news is more novel, and people are more likely to share novel information,” Aral says.

Biases in Forensic Experts

[These excerpts are from an article by Itiel E. Dror in the April 20, 2018, issue of Science.]

      Forensic evidence plays a critical role in court proceedings and the administration of justice. It is a powerful tool that can help convict the guilty and avoid wrongful conviction of the innocent. Unfortunately, flaws in forensic evidence are increasingly becoming apparent. Assessments of forensic science have too often focused only on the data and the underlying science, as if they exist in isolation, without sufficiently addressing the process by which forensic experts evaluate and interpret the evidence. After all, it is the forensic expert who observes the data and makes interpretations, and therefore forensic evidence is mediated by human and cognitive factors. A U.S. National Research Council examination of forensic science in 2009, followed by a 2016 evaluation by a presidential panel, along with a U.K. inquiry into fingerprinting in 2011 and a 2015 guidance by the U.K. Forensic Science Regulator, have all expressed concerns about biases in forensic expert decision-making. Where does forensic bias come from, and how can we minimize it?

      Forensic experts are too often exposed to irrelevant contextual information, largely because they work with the police and prosecution. Extraneous information—from a suspect's ethnicity or criminal record to eyewitness identifications, confessions, and other lines of evidence—can potentially cause bias. This can give rise to conclusions that are incorrect or overstated, rather than what forensic decisions should be: impartial decisions, appropriately circumscribed by what the evidence actually supports. A consequence of cognitive biases is that science is misused, and sometimes even abused, in court. Not only can irrelevant information bias a particular aspect of an investigation, it often causes “bias cascade” from one component of an investigation to another and “bias snowball,” whereby the bias increases in strength and momentum as different components of an investigation influence one another. Bias also arises when forensic experts work backward: Rather than having the evidence drive the forensic decision-making process, experts work from the target suspect to the evidence.

      …many forensic experts have a “bias blind spot” to these implicit biases and therefore tend to deny their existence. Forensic experts frequently present their decisions to the court with great confidence and then incorrectly take the court's acceptance of their findings as confirmation that they have not been biased or made a mistake. Acknowledging that bias can influence forensic science experts would be a substantial step toward implementing countermeasures that could greatly improve forensic evidence and the fair administration of justice.

      If we want science to serve society, then it must be properly used in the halls of justice.

Plant Responses to CO2 Are a Question of Time

[These excerpts are from an article by Mark Hovenden and Paul Newton in the April 20, 2018, issue of Science.]

      Rising carbon dioxide (CO2) concentrations in the atmosphere as a result of fossil fuel burning are expected to fertilize plants, resulting in faster growth. However, this change is not expected to be the same for all plants. Rather, scientists believe that differences in photosynthetic mechanism favor one plant group—the C3 plants—over the other, the C4 plants….

      In 1966, Hatch and Slack found that some plants have a distinct mechanism for assimilating CO2 from the atmosphere. This has profound ecological consequences. The ancestral method of photosynthesis combines CO2 with a five-carbon molecule to produce two identical three-carbon molecules. However, the enzyme that catalyzes this reaction also combines the same five-carbon compound with dioxygen, thus reducing the carbon assimilation rate and, hence, growth. Hatch and Slack discovered that some plants can avoid this by first combining CO2 from the atmosphere with a three-carbon molecule, producing a four-carbon molecule as the first stable product in photosynthesis. Plants with this pathway are known as C4 plants, distinguishing them from those with the ancestral pathway, termed C3 plants.
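      [For reference, the two initial carbon-fixation reactions being contrasted can be written out explicitly; this is standard textbook biochemistry, added for clarity rather than quoted from the excerpt. In vivo, the C4 enzyme actually accepts CO2 in the form of bicarbonate.]

      $\text{C}_3:\quad \text{RuBP (5C)} + \text{CO}_2 \xrightarrow{\text{RuBisCO}} 2 \times \text{3-PGA (3C)}$

      $\text{C}_4:\quad \text{PEP (3C)} + \text{CO}_2 \xrightarrow{\text{PEP carboxylase}} \text{oxaloacetate (4C)}$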

      Although only about 3% of the global plant species are C4 plants, they play a crucial role in many ecosystems, particularly savannas and grasslands. C4 species contribute 25% of land biomass globally, provide forage for animals in both natural (for example, Serengeti) and managed (for example, Great Plains) grasslands, and contribute 14 of the world’s 18 worst weeds. It is clearly important, therefore, to predict the future distribution and abundance of C4 plants.

      …The C4 method of photosynthesis appears to have evolved during a period of declining atmospheric CO2 concentration and allows the plants to use CO2 more efficiently than C3 species. Today, the atmospheric CO2 concentration is higher than at any other time in the past 500,000 years and continues to rise. Most scientists expect C3 plants to benefit from this additional CO2 and outcompete C4 species, because C3 photosynthesis increases in efficiency with increasing CO2 concentration to a far greater extent than does C4 photosynthesis….

The Suns in Our Daughters

[These excerpts are from an article by Lisa Einstein in the May 2018 issue of Scientific American.]

      …Aissatou is a designer: she builds, plays and imagines. I observe her ingenuity with awe.

      I see Aissatou the way my parents saw me: filled with unlimited potential. My parents called their four kids “their greatest collaboration” and helped us grow into our fullest selves. Knowing the challenges facing young women in physics, Dad went out of his way to fuel my passion. Once he drove me six hours to a lecture by a female physicist. His encouragement emboldened me to dive into a challenging field dominated by men.

      Aissatou, on the other hand, has been taught that she should be dominated by men. When male visitors arrive at her house, the jubilant builder I know transforms into a meek and submissive servant, bowing as she acquiesces to their every request.

      The difference? I won the lottery at birth: time, place and parents who gave me the chance to develop my passions. I am on a mission to give Aissatou and Binta the chance to do the same.

      I think about the untapped potential of millions of girls like Aissatou and Binta, who lack opportunities because of custom, poverty, laws or terrorist threats. The gifted young women I’ve taught as a Peace Corps volunteer implementing the Let Girls Learn program have strengthened my conviction that it is possible for them to fulfill their promise through education. And educating girls is not only morally right but also provides a cornerstone of achieving a peaceful and prosperous future….

      Do you want to know something exciting I learned? Mass-energy equivalence means that the solar energy striking the earth each second equals only four pounds of mass. That means a small girl of 40 pounds could unleash the energy of 10 suns shining on the earth in a second. Take the 132 million girls who are not in school, and we have 1.32 billion suns in our daughters.
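      [The arithmetic behind this claim checks out; the solar constant and Earth’s radius used below are standard reference values, not figures from the article.]

      $E = S \,\pi R_\oplus^2 \cdot (1\,\text{s}) \approx (1361\,\text{W/m}^2)(1.27 \times 10^{14}\,\text{m}^2)(1\,\text{s}) \approx 1.7 \times 10^{17}\,\text{J}$

      $m = E/c^2 \approx (1.7 \times 10^{17}\,\text{J})/(9.0 \times 10^{16}\,\text{m}^2/\text{s}^2) \approx 1.9\,\text{kg} \approx 4.2\,\text{lb}$

      A 40-pound girl’s rest-mass energy is thus $40/4.2 \approx 9.5$ times that, roughly the energy of ten seconds of sunlight striking the whole planet, the article’s “10 suns.”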

      How will we help them rise?

End the War on Weed

[These excerpts are from an editorial in the May 2018 issue of Scientific American.]

      Like the failed Nixon-era War on Drugs, this resurgent war on marijuana is ill informed and misguided. Evidence suggests that cannabis—though not without its risks—is less harmful than legal substances such as alcohol and nicotine. And despite similar marijuana use among blacks and whites, a disproportionate number of blacks are arrested for it. By allowing states to regulate marijuana without federal interference, we can ensure better safety and control while allowing for greater research into its possible harms and benefits….

      That does not mean that marijuana is entirely benign. Studies suggest it can impair driving, and a subset of users develops a form of dependence called marijuana use disorder. Other research indicates that teenage marijuana use may adversely impact the developing brain: it has been linked to changes in neural structure and function, including lower IQ, as well as an increased risk of psychosis in vulnerable individuals. But some of these findings have been challenged. A pair of longitudinal twin studies, for example, found no significant link between marijuana use and IQ. Moreover, people with these brain characteristics may simply be more likely to use marijuana in the first place.

      We are not advocating for unfettered access to marijuana, especially by adolescents. More large-scale, randomized controlled studies are needed to tease out the risks and benefits. But to do these kinds of studies, scientists must have access to the drug, and until very recently, the federal government has had a monopoly on growing cannabis for research purposes. We also need more research on the various, often more potent, marijuana strains grown for recreational use. As long as the federal government continues to crack down on state-level legal marijuana, it will be difficult to carry out such studies….

      It is time to stop treating marijuana like a deadly drug, when science and public opinion agree that it is relatively safe for adult recreational use. The last thing we need is another expensive and ineffective war on a substance like cannabis—especially when there are far more serious drug problems to tackle.

Adapting to Life in the Big City

[This excerpt is from a book review by Arne Mooers in the April 13, 2018, issue of Science.]

      Metal-excreting pigeons, pigeon-eating catfish, cigarette-wielding sparrows, soprano-voiced great tits: The modern city is a fantastical menagerie of the odd and unexpected. Through a series of 20 short but connected chapters that mix natural history vignettes, interviews with visionary scientists, and visits to childhood haunts, science journalist and biology professor Menno Schilthuizen introduces readers to the striking facts of ongoing urban evolution in Darwin Comes to Town. But while the prose may be playful (“Cut to the Hollywood bobcats”), the underlying message may cause discomfort.

      Two cross-cutting ideas permeate the book. The first is the notion of rampaging sameness. Because we are incessant, but messy, busybodies, Schilthuizen argues, we scatter species across countries and continents. And we move most among cities.

      The author does a fine job of conveying this urban sameness when describing a scene along an estuary in Singapore: the house crows and the mynas feeding in the cow grass, the apple snails laying eggs among the mimosa, the red-eared slider turtles dipping into the water, and the peacock bass breaking the surface for a gulp of air. Every one of the species he describes is a non-native, every one is found in countless other cities the world over, and every one is at home in its new habitat. Schilthuizen has even borrowed a name from parasitology for them: anthropophiles.

      And the reason for this biological sameness is urban sameness. Cities around the world produce the same sorts of garbage and the same sorts of noise, house the same sorts of skyscrapers, and produce the same fragmented landscapes. They can even generate the same sort of weather via particulate pollution and the heat-island effect.

      The book’s second major theme is that rapid change is an enduring part of the urban environment. Urban plants and animals evolve and adapt to their novel surroundings at remarkable speed. The city pigeons’ darker, more melanic feathers, for example, sequester poisonous metals; the great tit’s new soprano notes are better heard above the city din; and city moths in Europe have become less attracted to deadly artificial lights.

      Indeed, the realization that adaptive evolutionary change occurring on human time scales in multicellular species is common, rather than rare, is both fairly new and fairly profound. The ubiquity of the phenomenon has even given rise to a new field known as eco-evolutionary dynamics.

      It is now clear that adaptation can be so fast as to affect the very environment that sets the stage for those adaptations, leading to possible merry-go-rounds of organism-environment-organism changes through time. The implications of this are still not fully known, but it’s safe to assume that this is not what Darwin envisioned from his seat in the Kent countryside. (Perhaps he should have come up to the city more often.)….

How Cleaner Air Changes the Climate

[These excerpts are from an article by Bjørn H. Samset in the April 13, 2018, issue of Science.]

      Human influence on the climate is a tug-of-war, with greenhouse gas-induced warming being held partly in check by cooling from aerosol emissions. In a Faustian bargain, humans have effectively dampened global climate change through air pollution. Increased greenhouse gas concentrations from fossil fuel use are heating the planet by trapping heat radiation. At the same time, emissions of aerosols—particles that make up a substantial fraction of air pollution—have an overall cooling effect by reflecting incoming sunlight. The net effect of greenhouse gases and aerosols is the ~1°C of global warming observed since 1880 CE. The individual contributions of greenhouse gases and aerosols are, however, much more uncertain. Recent climate model simulations indicate that without anthropogenic aerosols, global mean surface warming would be at least 0.5°C higher, and precipitation change would also be much greater….

      Since 1990, there has been little change in the global volume of anthropogenic aerosol emissions. Regionally, however, there are large differences, with reductions in Europe and the United States balanced by increases in Africa and Asia….

Human Mutation Rate a Legacy from Our Past

[These excerpts are from an article by Elizabeth Pennisi in the April 13, 2018, issue of Science.]

      Kelley Harris wishes humans were more like paramecia. Every newborn’s DNA carries more than 60 new mutations, some of which lead to birth defects and disease, including cancers. “If we evolved paramecium-like replication and DNA repair processes, that would never happen,” says Harris…. Researchers have learned that these single-cell protists go thousands of generations without a single DNA error—and they are figuring out why human genomes seem so broken in comparison.

      The answer, researchers reported at the Evolution of Mutation Rate workshop here late last month, is a legacy of our origins. Despite the billions on Earth today, humans numbered just thousands in the early years of our species. In large populations, natural selection efficiently weeds out deleterious genes, but in smaller groups like those early humans, harmful genes that arise—including those that foster mutations—can survive.

      Support comes from data on a range of organisms, which show an inverse relationship between mutation rate and ancient population size. This understanding offers insights into how cancers develop and also has implications for efforts to use DNA to date branches on the tree of life.

      Mutations occur, for example, when cells copy their DNA incorrectly or fail to repair damage from chemicals or radiation. Some mistakes are good, providing variation that enables organisms to adapt. But some of these genetic mistakes cause the mutation rate to rise, thus fostering more mutations.

      For a long time, biologists assumed mutation rates were identical among all species, and so predictable that they could be used as “molecular clocks.” By counting differences between the genomes of two species or populations, evolutionary geneticists could date when they diverged. But now that geneticists can compare whole genomes of parents and their offspring, they can count the actual number of new mutations per generation.
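      [The molecular-clock logic can be captured in one standard formula; this is textbook population genetics, added for clarity rather than quoted from the article.]

      $t \approx \dfrac{d}{2\mu}$

      Here $d$ is the fraction of sites at which the two genomes differ, $\mu$ is the per-site, per-generation mutation rate, $t$ is the divergence time in generations, and the factor of 2 counts mutations accumulating along both lineages. The clock fails exactly where the workshop’s findings bite: when $\mu$ differs between species or changes over time.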

      That has enabled researchers to measure mutation rates in about 40 species, including newly reported numbers for orangutans, gorillas, and African green monkeys. The primates have mutation rates similar to humans….But…bacteria, paramecia, yeasts, and nematodes—all of which have much larger populations than humans—have mutation rates orders of magnitude lower.

      The variation suggests that in some species, genes that cause high mutation rates—for instance, by interfering with DNA repair—go unchecked. In 2016, Lynch detailed a possible reason, which he calls the drift barrier hypothesis.… Genetic drift plays a bigger role in smaller populations. In large populations, harmful mutations are often counteracted by later beneficial mutations. But in a smaller population with fewer individuals reproducing, the original mutation can be preserved and continue to do damage.
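      [The drift-barrier idea is easy to demonstrate with a toy simulation. Everything below, including the parameter values, is an illustrative sketch of a standard Wright-Fisher model, not Lynch’s actual analysis: a mildly deleterious “mutator” allele is reliably purged in a large population but can drift to fixation in a small one.]

import numpy as np

rng = np.random.default_rng(0)

def fixation_fraction(pop_size, s=0.001, start_freq=0.05, trials=200):
    """Fraction of trials in which the deleterious allele takes over."""
    fixed = 0
    for _ in range(trials):
        freq = start_freq
        while 0.0 < freq < 1.0:
            # Selection: the mutator's fitness is (1 - s) relative to wild type.
            p = freq * (1 - s) / (freq * (1 - s) + (1 - freq))
            # Drift: binomial sampling of the next generation of pop_size individuals.
            freq = rng.binomial(pop_size, p) / pop_size
        fixed += (freq == 1.0)
    return fixed / trials

for n in (100, 10_000):
    print(n, fixation_fraction(n))
# Expect a few percent of runs to fix the harmful allele at N = 100,
# versus essentially none at N = 10,000, where selection "sees" the small cost.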

      Today, 7.6 billion people inhabit Earth, but population geneticists focus on the effective population size, which is the number of people it took to produce the genetic variation seen today. In humans, that's about 10,000—not so different from that of other primates. Humans tend to form even smaller groups and mate within them. In such small groups, Harris says, “we can’t optimize our biology because natural selection is imperfect”….

      …Among Europeans, the excess cytosine-to-thymine mutations existed in early farmers but not in hunter-gatherers, she reported. She speculates that these farmers’ wheat diet may have led to nutrient deficiencies that predisposed them to a mutation in a gene that in turn favored the cytosine-to-thymine changes, suggesting environment can lead to changes in mutation rate. Drift likely played a role in helping the mutation-promoting gene stick around.

What We All Need to Know about Vaping

[These excerpts are from an article by Susan Leonard in the April 2018 issue of Phi Delta Kappan.]

      …Among these experts’ many concerns was the fact that most delivery devices contain large concentrations of propylene glycol, which is a known irritant when inhaled. Little is known about the effect of long-term inhalation of this chemical. Additionally, because these devices are unregulated, users have no idea what other chemicals they may be inhaling, nor what the short- or long-term effects of that exposure are.

      …Nicotine causes the release of adrenaline, which elevates the heart rate, increases blood pressure, and constricts blood vessels, potentially leading to long-term heart problems. The effect of vaping is almost immediate, but it also wears off quickly and encourages the user to want to vape again and again, needing more and more to feel the same effects.

      Candidly, it is hard for me to believe that this industry is truly trying to do good by helping smokers wean themselves off harmful tobacco cigarettes when they market the devices with fruity flavors and pack their juices with more nicotine than cigarettes are allowed to contain under Food and Drug Administration regulations. Nicotine is highly and quickly addicting, and addiction creates a lucrative product, especially if those users are young.

      During adolescence, the brain is sensitive to novel experiences. Unfortunately, just as young people want most to experiment with new and risky behaviors, their immature brains have very different sensitivities to drugs and alcohol. Exposure to nicotine during this stage can lead to long-term changes in neurology and behavior. Chronic nicotine exposure during adolescence also alters the subsequent response of the serotonin system. Such alterations are often permanent….

      We should also not ignore the gateway effect nicotine has on its users. In 2012, nearly 90% of U.S. adults 18-34 years of age who had used cocaine had smoked cigarettes (i.e., used nicotine) first. Behavioral experiments have proven that nicotine “primes” the brain to enhance the effects of cocaine. So, it's not just that one risky behavior leads to another — nicotine actually affects how the brain works, making other drugs (most commonly marijuana and cocaine) more pleasurable and desirable. And the younger a person is when beginning to use drugs and alcohol, the more likely it is that person will move on to other drugs and/or develop a serious addiction ….

      …Vaping is not just smokeless smoking – it’s a real and present danger to our students.

A Wider Vision of Learning

[These excerpts are from an article by Elliot Washor in the April 2018 issue of Phi Delta Kappan.]

      Today, school leaders tend to be fixated on using big data to crunch a narrow set of numbers, rather than actually thinking big — and deep and broad —about learning. And the more sophisticated the technology they apply (or misapply) to the same handful of indicators, the less clearly they see their students. They use test results to assign learners to groups so that schools can provide “appropriate” interventions, but they don’t actually know very much about the varying talents and interests of the individuals they put into those groups, nor do they know much about the personal struggles they may be facing that can profoundly affect their performance.

      Further, they may not be able to imagine other ways of assessing students, or — consistent with Abraham Maslow’s observation that a fear of knowing is a fear of doing — they may lack the courage to look more deeply at the talent that sits before them. After all, if they did so, then they might feel obliged to act on what they learn, and this could interfere with their ability to complete the tasks they’ve been assigned. Worse yet, if educators allowed themselves to know more about their students, then they might be forced to acknowledge that the whole system needs to be redesigned.

      But why, given its rapidly expanding ability to collect and analyze seemingly endless streams of data, does the educational system remain so narrowly fixated on the same few indicators? Why can’t we use the power of big data to collect different and better measures that look more broadly and deeply at the things students can and want to do, not just in the classroom but also outside of school?

      …Traditional indicators and measures are at best incomplete; often, they are perniciously inaccurate, even as they delude us into believing we actually know the individual learner. A student might ace an interim standardized test, indicating they are on track to succeed, while also facing significant issues at home that could increase their likelihood of dropping out of school before graduation. Or a student might be engaged in pursuing a passion that presents valuable learning opportunities but does not necessarily improve their grades and test scores. We can’t understand students unless we expand our vision.

Do Students’ High Scores on International Assessments Translate to Low Levels of Creativity?

[This excerpt is from an article by Stefan Johansson in the April 2018 issue of Phi Delta Kappan.]

      Although it’s true that test scores have been misused, Zhao’s critique of PISA — arguing that high PISA scores in East Asian countries are related to low levels of creativity — is difficult, if not impossible, to support. Certainly, nobody other than Zhao has been able to establish any causal relationships between increasing scores on international large-scale assessments and decreasing levels of innovation across countries.

      To back up his thesis, Zhao compares the mathematics scores from the 2009 PISA with results of the Global Entrepreneurship Monitor (GEM) study, which is a survey of, among other things, perceived levels of entrepreneurial capability (i.e., an individual’s confidence in his or her ability to succeed in entrepreneurship) in a wide range of countries. At first glance, the comparison is striking — countries with high scores on PISA (such as Japan, Korea, and Singapore) all had low scores on their perceived entrepreneurial capacity. At the same time, countries that performed in the middle of the pack on PISA (such as Sweden and the U.S.) ranked fairly high on entrepreneurial capacity, and one of PISA’s lowest performers (the United Arab Emirates) reported the highest entrepreneurial capacity of all.

      Zhao finds this pattern to be evidence of a statistically significant relationship, showing that countries with high PISA scores tend to be less innovative. Moreover, he asserts, many Chinese and Singaporeans themselves “blame their education for their shortage of creative and entrepreneurial talents,” and states that if they’re correct, then the relationship “could be causal.” That is, Zhao appears to be arguing that the way these countries teach math causes their students not only to do well on tests but also to become less creative. Thus, he concludes, it’s a mistake for U.S. policy makers to pursue reforms that make their schools more like those in East Asia. If they continue to push for more emphasis on standardization, test taking, and highly rigorous academic work, then students’ creativity will be seriously harmed, and the American workforce will become less innovative.

      But on closer inspection, this argument turns out to be pretty weak. It may be true that Singapore and China haven’t produced many world-famous musicians, but other than that, there isn’t much evidence that East Asians suffer from a lack of creativity or, if they do, that it has anything to do with their PISA scores.

      Zhao’s reliance on self-reported levels of entrepreneurial capacity is especially problematic. For one thing, the construct is not well defined, making it difficult to interpret people's self-assessments. For another, it is unclear why entrepreneurial abilities should be equated with creativity, or why creativity should be distinguished from mathematical proficiency. Actually, mathematical reasoning and problem solving are often described as deeply creative activities. For example, Haylock argued that there are at least two major ways in which the term creativity is used in the context of mathematics: 1) thinking that is divergent and overcomes fixation and 2) the thinking behind a product that is perceived as outstanding by a large group of people. Further, creativity is associated with long periods of work and reflection rather than rapid and unique insights.

      Perhaps more important, the context is so different from one country to another that it may not be possible to compare those self-assessments at all, or to know what to make of results that seem to show that people in East Asia are less innovative than their counterparts in the West. For example, it can be very difficult to start a business or pursue other forms of entrepreneurship in a country that is tightly governed by the state, while it is relatively easy to do so in the U.S. or Sweden — one would expect such differences to affect the GEM results.

Avoiding Difficult History

[This excerpt is from a ‘worthy item’ in the April 2018 issue of Phi Delta Kappan.]

      The Southern Poverty Law Center (SPLC) has found that U.S. students are receiving an inadequate education about the role of slavery in American history. Surveys of high school seniors and social studies teachers, analysis of state content standards, and a review of history textbooks revealed what the SPLC considers seven key problems with current practice:

      1. We teach about slavery without context, preferring feel-good stories of heroes like Harriet Tubman over the more difficult story of the role of slave labor in building the nation.

      2. We subscribe to a view of history that acknowledges flaws only to the extent that they have been solved and avoids exploration of the continuing legacy of those flaws.

      3. We teach about slavery as an exclusively southern institution even though it existed in all states when the Declaration of Independence was signed.

      4. We rarely connect slavery to the White supremacist ideology that grew up to protect it.

      5. We rely on pedagogy, such as simulations, that is poorly suited to the subject and potentially traumatizing.

      6. We rarely connect slavery to the present, or even historical events such as the Great Migration and the Harlem Renaissance.

      7. We tend to foreground the White experience by focusing on the political and economic impacts of slavery.

      The report identifies 10 key concepts that should be incorporated into instruction on slavery in the U.S. These include the facts that the slave trade was central to the growth of the U.S. economy and was the chief cause of the Civil War….

Edge of Extinction

[These excerpts are from an article by Sanjay Kumar in the April 6, 2018, issue of Science.]

      The first dinosaur fossils found in Asia, belonging to a kind of sauropod, were unearthed in 1828 in Jabalpur, in central India’s Narmada Valley. Ever since, the subcontinent has yielded a stream of important finds, from some of the earliest plant remains through the reign of dinosaurs to a skull of the human ancestor Homo erectus….

      Much of that fossil richness reflects India’s long, solitary march after it broke loose from the supercontinent Gondwanaland, starting some 150 million years ago. During 100 million years of drifting, the land mass acquired a set of plant and animal species, including many dinosaurs, that mix distinctive features with ones seen elsewhere. Then, 50 million to 60 million years ago, India began colliding with Asia, and along the swampy edges of the vanishing ocean between the land masses, new mammals emerged, including ancestral horses, primates, and whales.

      Now, that rich legacy is colliding with the realities of present-day India. Take a site in Himachal Pradesh state where, in the late 1960s, an expedition by Panjab University and Yale University excavated a trove of hominoid fossils, including the most complete jaw ever found of a colossal extinct ape, Gigantopithecus bilaspurensis. The discovery helped flesh out a species known previously only through teeth and fragmentary jaws. Today’s paleontologists would love to excavate further at the site, Sahni says. But it “has been completely flattened”—turned into farm fields, with many of its fossils lost or sold. To India’s paleontologists, that is a familiar story.

      In the early 1980s, for example, blasting at a cement factory in Balasinor in Gujarat revealed what the workers believed were ancient cannon balls. A team led by Dhananjay Mohabey, a paleontologist then at the Geological Survey of India in Kolkata, realized they were dinosaur eggs. Mohabey and his colleagues soon uncovered thousands more in hundreds of nests, as well as many other fossils. Examining one Cretaceous period clutch in 2010, Jeffrey Wilson of the University of Michigan in Ann Arbor discerned what appeared to be snake bones. He and Mohabey recovered more fossil fragments and confirmed that a rare snake (Sanajeh indicus) had perished while coiled around a dinosaur egg. It was the first evidence, Mohabey says, of snakes preying on dinosaur hatchlings.

      Mohabey and others have since documented seven dinosaur species that nested in the area. (In a separate find in Balasinor, other researchers unearthed the skeleton of a horned carnivore called Rajasaurus narmadensis — the royal Narmada dinosaur.) But locals and visitors soon began pillaging the sites. In the 1980s, dinosaur eggs were sold on the street for pennies.

      In 1997, local authorities designated 29 hectares encompassing the nesting sites as the Balasinor Dinosaur Fossil Park in Raiyoli. But poaching continued largely unabated in the park and outside its boundaries, Mohabey says. Even now, the park is not fully fenced and the museum building, ready since 2011, is still not open….

United States to Ease Car Emission Rules

[This news brief by Jeffrey Brainard is in the April 6, 2018, issue of Science.]

      U.S. President Donald Trump’s administration last week announced it intends to roll back tough auto mileage standards championed by former President Barack Obama to combat climate change. The standards, released in 2012, called for doubling the average fuel economy of new cars and light trucks, to 23.2 kilometers per liter by 2025. The Environmental Protection Agency (EPA) estimated the rules would prevent about 6 billion tons of carbon emissions by 2025. But on 2 April, EPA Administrator Scott Pruitt said the agency would rewrite the standards, arguing that Obama’s EPA “made assumptions ... that didn’t comport with reality, and set the standards too high.” In a formal finding, Pruitt argued the standards downplay costs and are too optimistic about the deployment of new technologies and consumer demand for electric vehicles. Clean car advocates disputed many of Pruitt’s claims, noting that auto sales have been strong despite stiffer tailpipe rules. “Backing off now is irresponsible and unwarranted,” said Luke Tonachel of the Natural Resources Defense Council in Washington, D.C. Pruitt’s move could also set up a legal clash with California state regulators, who have embraced the Obama-era standards and say they want to keep them.
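      [For readers who think in miles per gallon: 23.2 kilometers per liter is the widely reported 54.5-mpg target, since 23.2 km/L × 3.785 L/gal ÷ 1.609 km/mi ≈ 54.6 mpg.]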

Mist Hardships

[This excerpt is from the first chapter of Caesar’s Last Breath, by Sam Kean.]

      The most deadly gas outburst in history took place in Iceland in 1783, when a volcanic fissure spewed poisonous gas for eight months, ultimately releasing 7 million tons of hydrochloric acid, 15 million tons of hydrofluoric acid, and 122 million tons of sulfur dioxide. Locals called the event the Móðuharðindin, or the “mist hardships,” after the strange, noxious fumes that emerged—“air bitter as seaweed and reeking of rot,” one witness remembered. The mists killed 80 percent of the sheep in Iceland, plus half the cattle and horses. Ten thousand people there also died—one-fifth of the population—mostly of starvation. When the mists wafted over to England, they mixed with water vapor to form sulfuric acid, killing twenty thousand more people. The mists also killed crops across huge swaths of Europe, inducing long-term food shortages that helped spark the French Revolution six years later.

Tiny Dancers

[These excerpts are from a brief article in the Spring 2018 issue of the American Museum of Natural History Rotunda.]

      Relying on sight, scent, and touch, honey bees navigate the world—and even dance to it.

      Their antennae are highly sensitive to vibrations. They also have numerous receptors that respond to odors and other stimuli. Together, these keen senses may explain how worker bees are able to pick up and interpret the so-called “waggle dance,” which they use to share the location of food with fellow worker bees of the colony.

      While the process is not completely understood, it goes a little something like this: a successful forager uses an elaborate dance pattern to indicate both the direction of food in relation to the Sun and its distance from the hive. The dancer adjusts the direction over time to account for the movement of the Sun, as do the foragers in the field.
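      [The encoding lends itself to a toy decoder. The sketch below rests on two common simplifications from the bee literature, not on anything stated in this article: the waggle run’s angle from vertical maps to the food’s bearing relative to the Sun’s azimuth, and each second of waggling signals very roughly a kilometer of distance, a calibration that varies from colony to colony.]

def decode_waggle(angle_from_vertical_deg, waggle_seconds, sun_azimuth_deg):
    """Turn a dance reading into a compass bearing and a rough distance."""
    bearing_deg = (sun_azimuth_deg + angle_from_vertical_deg) % 360
    distance_km = waggle_seconds * 1.0  # crude ~1 s per km calibration
    return bearing_deg, distance_km

# A run 40 degrees clockwise of vertical, 2 s long, with the Sun due south:
print(decode_waggle(40, 2.0, 180))  # (220, 2.0): fly at bearing 220, about 2 km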

      The worker bees don’t actually see the waggle dance within the pitch-black hive, perhaps giving new meaning to the phrase “dancing in the dark.” Instead, the bees sense air vibrations through their antennae, which are held close to the dancing, waggling bee….

      The dance is accompanied by an olfactory message, too: pollen brought back by the returning dancing bee or regurgitated nectar conveys the scent of the food at the forage site. Finally, the richness of the nectar source is indicated by the duration of the dance. The bees don’t exactly measure the length of the dance, but the longer the bee dances, the more foragers are recruited—essentially matching the workforce to the harvest at hand.

Volcanic Eruptions

[This excerpt is from the first chapter of Caesar’s Last Breath, by Sam Kean.]

      It took Mount Saint Helens two thousand years to build up its beautiful cone and about two seconds to squander it. It quickly shrank from 9,700 feet to 8,400 feet, shedding 400 million tons of weight in the process. Its plume of black smoke snaked sixteen miles high and created its own lightning as it rose. And the dust it spewed swept across the entire United States and Atlantic Ocean, eventually circling the world and washing over the mountain again from the west seventeen days later. Overall the eruption released an amount of energy equivalent to 27,000 Hiroshima bombs, roughly one per second over its nine-hour eruption.
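      [The “one per second” figure is a quick division: nine hours is 32,400 seconds, and 27,000 ÷ 32,400 ≈ 0.83 bomb-equivalents per second, i.e., roughly one per second.]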

      With all that in mind, it’s worth noting that Mount Saint Helens was actually small beer as far as eruptions go. Although it vaporized a full cubic mile of rock, that’s only 8 percent of what Krakatoa ejected in 1883 and 3 percent of what Tambora did in 1815. Tambora also decreased sunlight worldwide by 15 percent, disrupted the mighty Asian monsoons, and caused the infamous Year Without a Summer in 1816, when temperatures dropped so much that snow fell in New England in summertime. And Tambora itself would tremble before the truly epic outbursts in history, like the Yellowstone eruption 2.1 million years ago that launched 585 cubic miles of Wyoming into the stratosphere. (This megavolcano will likely have an encore someday and will bury much of the continental United States in ash.)
