Interesting Excerpts
The following excerpts are from articles or books that I have recently read. They caught my interest and I hope that you will find them worth reading. If one does spark an action on your part and you want to learn more or you choose to cite it, I urge you to actually read the article or source so that you better understand the perspective of the author(s).
Climate Change and Marine Mass Extinction

[These excerpts are from an article by Lee Kump in the 7 December 2018 issue of Science.]

      Voluminous emissions of carbon dioxide to the atmosphere, rapid global warming, and a decline in biodiversity—the storyline is modern, but the setting is ancient: The end of the Permian Period, some 252 million years ago. For the end-Permian, the result was catastrophic: the greatest loss of plant and animal life in Earth history. Understanding the details of how this mass extinction played out is thus crucial to its use as an analog for our future….

      A number of kill mechanisms for end-Permian extinction have been proposed, most triggered by the tremendous volcanic activity associated with the emplacement of the vast lava flows of the Siberian Traps, the eruption of which was coincident with the mass extinction. The Siberian Traps are estimated to have released tens of thousands of petagrams of carbon as carbon dioxide and methane, explaining the 10° to 15°C tropical warming revealed by oxygen isotope compositions of marine fossils. On land, unbearably hot temperatures and hypoxia likely were the main cause of mass extinction of plants and animals, although ultraviolet radiation exposure from a collapsed ozone shield contributed as well. Rapid warming also likely led to the loss of oxygen from the ocean’s interior, extending up onto the continental shelves—a conclusion supported both by the widespread distribution of indicators for marine anoxia in sedimentary rocks and by numerical modeling of the Permian ocean-atmosphere system.

      Once considered nonselective, mass extinctions are increasingly revealing patterns of differential impact across species, lifestyles, and geographic locations through their fossil records. A geographic pattern to Permian extinction, however, has remained elusive….

Taking Aim

[These excerpts are from an article by Meredith Wadman in the 7 December 2018 issue of Science.]

      …Guns are the second-leading cause of death of children and teens in the United States, after motor vehicle crashes….In 2016, the most recent year for which data are available, they killed nearly 3150 people aged 1 to 19, according to data from the Centers for Disease Control and Prevention (CDC) in Atlanta. Cancer killed about 1850. But this year, the National Institutes of Health (NIH) in Bethesda, Maryland, spent $486 million researching pediatric cancer and $4.4 million studying children and guns….

      That's because gun violence research has been operating under a chill for more than 2 decades. In 1996, Congress crafted an amendment, named for its author, then-Arkansas Representative Jay Dickey (R), preventing CDC—the government’s lead injury prevention agency—from spending money “to advocate or promote gun control.”

      That law was widely interpreted as banning any CDC studies that probe firearm violence or how to prevent it. The agency’s gun injury research funding was quickly zeroed out, and other health agencies grew wary. The few dozen firearm researchers who persisted were forced to rely on modest amounts from other agencies or private funders…to tackle a massive problem.

      Now, there may be early signs of a thaw. In March, in the wake of the mass shooting at a Parkland, Florida, high school, Congress wrote that CDC is free to probe the causes of gun violence, despite the Dickey amendment. (The agency has not done so, citing a lack of money.) And annual firearm-related funding from NIH…roughly tripled after a 2013 presidential directive that was issued in the wake of the mass shooting at Sandy Hook Elementary School in Newtown, Connecticut. Just as importantly, the agency began to flag firearm violence in some of its calls for research.

      …And last month, the National Rifle Association (NRA) in Fairfax, Virginia, provoked a firestorm when it tweeted that “self-important anti-gun doctors” should “stay in their lane.” Hundreds of emergency department doctors tweeted back, many including photographs of their scrubs, hands, and shoes bloodied from treating gunshot victims. More than 40,000 health care professionals…signed an open letter to NRA complaining that the group has hobbled gun violence research, declaring, “This is our lane!”

      All the same, there’s still little public money for gun research….

Chess, a Drosophila of Reasoning

[These excerpts are from an editorial by Garry Kasparov in the 7 December 2018 issue of Science.]

      Much as the Drosophila melanogaster fruit fly became a model organism for geneticists, chess became a Drosophila of reasoning. In the late 19th century Alfred Binet hoped that understanding why certain people excelled at chess would unlock secrets of human thought. Sixty years later, Alan Turing wondered if a chess-playing machine might illuminate, in the words of Norbert Wiener, “whether this sort of ability represents an essential difference between the potentialities of the machine and the mind.”

      Much as airplanes don’t flap their wings like birds, machines don’t generate chess moves like humans do. Early programs that attempted it were weak. Success came with the “minimax” algorithm and Moore’s law, not with the ineffable human combination of pattern recognition and visualization. This prosaic formula dismayed the artificial intelligence (AI) crowd, who realized that profound computational insights were not required to produce a machine capable of defeating the world champion.
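
[To make the minimax idea named above concrete, here is a minimal sketch of my own; it is not from Kasparov’s editorial. The tiny game tree and its scores are hypothetical, and real engines add heuristic evaluation and vastly faster search.]

```python
# A minimal illustration of minimax: score a position by assuming each
# side picks the move that is best for itself. Classical chess engines
# pair this search with fast hardware and heuristic evaluation of
# millions of positions per second.

def minimax(node, maximizing):
    """Return the best achievable score from `node`, alternating turns."""
    if isinstance(node, int):          # leaf: a static evaluation score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Hypothetical three-ply tree: lists are decision points, ints are
# evaluations of final positions (positive favors the maximizer).
tree = [[3, 5], [2, [9, -1]], [4, 7]]
print(minimax(tree, maximizing=True))  # prints 4: the best worst-case line
```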

      But now the chess fruit fly is back under the microscope….AlphaZero starts out knowing only the rules of chess, with no embedded human strategies. In just a few hours, it plays more games against itself than have been recorded in human chess history. It teaches itself the best way to play, reevaluating such fundamental concepts as the relative values of the pieces. It quickly becomes strong enough to defeat the best chess-playing entities in the world, winning 28, drawing 72, and losing none in a victory over Stockfish.

      …Programs usually reflect priorities and prejudices of programmers, but because AlphaZero programs itself, I would say that its style reflects the truth. This superior understanding allowed it to outclass the world's top traditional program despite calculating far fewer positions per second. It’s the embodiment of the cliché, “work smarter, not harder.”

      AlphaZero shows us that machines can be the experts, not merely expert tools….

      Machine learning systems aren't perfect, even in a closed system like chess. There will be cases where an AI will fail to detect exceptions to its rules. Therefore, we must work together to combine our strengths. I know better than most people what it’s like to compete against a machine. Instead of raging against them, it’s better if we're all on the same side.

Leon Max Lederman (1922-2018)

[These excerpts are from an article by Rocky Kolb in the 30 November 2018 issue of Science.]

      …Leon’s group used the Nevis accelerator to discover nonconservation of parity in muon decay. Remarkably, they conceived of the idea during the half-hour drive from the main campus to Nevis one Friday, constructed the experiment that evening, and collected the data by the end of the weekend, demonstrating clear evidence for the fundamental result.

      In 1962, Leon and his colleagues led a team that established the existence of a second-generation neutrino, the muon neutrino. He would share the Nobel Prize in recognition of this work. In addition to the importance of the two-neutrino result in the development of the standard model of particle physics, the experiment pioneered the use of neutrino beams to study weak interactions.

      In the 1970s, Leon’s interests turned to particles produced at high transverse momentum in high-energy proton-proton collisions. A sequence of ever-higher-energy experiments at Brookhaven, the Intersecting Storage Ring at CERN, and finally at Fermilab culminated in the 1977 discovery of the bottom quark, one of six known quarks. Shortly afterward, Leon became the second director of Fermilab. During his three decades as an experimental particle physicist, he supervised 50 Columbia University Ph.D. candidates and was a mentor to many other young students and postdocs….

      …In addition to scientific leadership, with characteristic charm and humor, Leon transformed Fermilab from a frontier outpost in the cornfields of Illinois into a cosmopolitan center for science and science education.

      As a professor at Columbia University, Leon reached many generations of undergraduates in his “physics for poets” class. He transposed his love of teaching to Fermilab, where he instigated Saturday Morning Physics, a weekend class for area high-school students featuring lectures by Leon and other Fermilab scientists. He also started a public education and outreach effort at Fermilab directed toward precollege students. Beyond Fermilab, Leon was a founder of a residential state-sponsored high school for science and mathematics and a Chicago-based teacher academy for math and science. His leadership in education and outreach inspired physicists to communicate with the public. The author of several books for the general public, including The God Particle, Leon wrote with the same humor, wit, and charm that drew people to his public lectures. He had no equal in communicating the joys of physics.

Define the Human Right to Science

[These excerpts are from an article by Jessica M. Wyndham and Margaret Weigers Vitullo in the 30 November 2018 issue of Science.]

      The adoption of the Universal Declaration of Human Rights (UDHR) by the United Nations (UN) General Assembly will mark its 70th anniversary on 10 December. One right enshrined in the UDHR is the right of everyone to “share in scientific advancement and its benefits.” In 1966, this right was incorporated into the International Covenant on Economic, Social and Cultural Rights, a treaty to which 169 countries have voluntarily agreed to be bound. Unlike most other human rights, however, the right to science has never been legally defined and is often ignored in practice by the governments bound to implement it. An essential first step toward giving life to the right to science is for the UN to legally define it….

      The scientific community has contributed three key insights to the ongoing UN process. One is that the right to science is not only a right to benefit from material products of science and technology. It is also a right to benefit from the scientific method and scientific knowledge, whether to empower personal decision-making or to inform evidence-based policy. In addition, access to science needs to be understood as nuanced and multifaceted. People must be able to access scientific information, translated and actionable by a nonspecialist audience. Scientists must have access to the materials necessary to conduct their research, and access to the global scientific community. Essential tools for ensuring access include science education for all, adequate funding, and an information technology infrastructure that serves as a tool of science and a conduit for the diffusion of scientific knowledge. Also, scientific freedom is not absolute but is linked to and must be exercised in a manner consistent with scientific responsibility.

      …Three of the most important questions were: What should be the relationship between the right to benefit from science and intellectual property rights? How should government obligations under the right differ based on the available national resources? What is scientific knowledge and how should it be differentiated, if at all, from traditional knowledge?

      …The effort to define the right must not become mired in demands to resolve questions a priori that can only be answered over time. Insights from the scientific and engineering communities provide responses to many of the questions. Civil society must continue to illustrate how the right to science complements existing human rights protections. The scientific community, particularly in the 169 countries bound to implement the right, must demonstrate how the right can be instantiated within their own national contexts….

      The power and potential of the right to science for empowering individuals, strengthening communities, and improving the quality of life can hardly be overstated. It is time for the UN process to reach a responsible and productive end and for the right to science to be put into practice as was intended when it was first recognized by the United Nations in 1948.

Climate Impacts Worsen, Agencies Say

[This brief article by Jeffrey Brainard is in the 30 November 2018 issue of Science.]

      In a stark display of the U.S. political tensions surrounding global warming, President Donald Trump’s administration is dismissing a major report, written by the government’s own experts, which warns that climate change poses a serious and growing threat to the nation’s economic and environmental health. More than 300 specialists contributed to the 1600-page report, formally known as Volume II of the Fourth National Climate Assessment, released 23 November by federal agencies. It warns that worsening heat waves, coastal flooding, wildfires, and other climate-related impacts are already afflicting the United States and could reduce its economic output by 10% in coming decades. But White House officials downplayed such findings, claiming they are based on outdated climate models and “the most extreme” warming scenario. Despite the report, the administration says it has no plans to alter its efforts to weaken policies aimed at curbing climate change. Meanwhile, the World Meteorological Organization reported on 20 November that atmospheric concentrations of carbon dioxide, a primary warming gas, reached a global average of 405.5 parts per million (ppm) in 2017, up from 403.3 ppm in 2016. Many scientists believe concentrations will need to remain below 450 ppm to avoid catastrophic warming.

A Rigged Economy

[These excerpts are from an article by Joseph E. Stiglitz in the November 2018 issue of Scientific American.]

      The notion of the American Dream—that, unlike old Europe, we are a land of opportunity—is part of our essence. Yet the numbers say otherwise. The life prospects of a young American depend more on the income and education of his or her parents than in almost any other advanced country. When poor-boy-makes-good anecdotes get passed around in the media, that is precisely because such stories are so rare.

      Things appear to be getting worse, partly as a result of forces, such as technology and globalization, that seem beyond our control, but most disturbingly because of those within our command. It is not the laws of nature that have led to this dire situation: it is the laws of humankind. Markets do not exist in a vacuum: they are shaped by rules and regulations, which can be designed to favor one group over another. President Donald Trump was right in saying that the system is rigged—by those in the inherited plutocracy of which he himself is a member. And he is making it much, much worse.

      America has long outdone others in its level of inequality, but in the past 40 years it has reached new heights. Whereas the income share of the top 0.1 percent has more than quadrupled and that of the top 1 percent has almost doubled, that of the bottom 90 percent has declined. Wages at the bottom, adjusted for inflation, are about the same as they were some 60 years ago! In fact, for those with a high school education or less, incomes have fallen over recent decades. Males have been particularly hard hit, as the U.S. has moved away from manufacturing industries into an economy based on services.

      Wealth is even less equally distributed, with just three Americans having as much as the bottom 50 percent—testimony to how much money there is at the top and how little there is at the bottom. Families in the bottom 50 percent hardly have the cash reserves to meet an emergency. Newspapers are replete with stories of those for whom the breakdown of a car or an illness starts a downward spiral from which they never recover….

      Defenders of America’s inequality have a pat explanation. They refer to the workings of a competitive market, where the laws of supply and demand determine wages, prices and even interest rates—a mechanical system, much like that describing the physical universe. Those with scarce assets or skills are amply rewarded, they argue, because of the larger contributions they make to the economy. What they get merely represents what they have contributed. Often they take out less than they contributed, so what is left over for the rest is that much more.

      This fictional narrative may at one time have assuaged the guilt of those at the top and persuaded everyone else to accept this sorry state of affairs. Perhaps the defining moment exposing the lie was the 2008 financial crisis, when the bankers who brought the global economy to the brink of ruin with predatory lending, market manipulation and various other antisocial practices walked away with millions of dollars in bonuses just as millions of Americans lost their jobs and homes and tens of millions more worldwide suffered on their account. Virtually none of these bankers were ever held to account for their misdeeds.

      ….At the time of the Civil War, the market value of the slaves in the South was approximately half of the region’s total wealth, including the value of the land and the physical capital—the factories and equipment. The wealth of at least this part of this nation was not based on industry, innovation and commerce but rather on exploitation. Today we have replaced this open exploitation with more insidious forms, which have intensified since the Reagan-Thatcher revolution of the 1980s. This exploitation, I will argue, is largely to blame for the escalating inequality in the U.S.

      After the New Deal of the 1930s, American inequality went into decline. By the 1950s inequality had receded to such an extent that another Nobel laureate in economics, Simon Kuznets, formulated what came to be called Kuznets's law. In the early stages of development, as some parts of a country seize new opportunities, inequalities grow, he postulated; in the later stages, they shrink. The theory long fit the data—but then, around the early 1980s, the trend abruptly reversed.

      …Overall, wages are likely to be far more widely dispersed in a service economy than in one based on manufacturing, so the transition contributes to greater inequality. This fact does not explain, however, why the average wage has not improved for decades. Moreover, the shift to the service sector is happening in most other advanced countries: Why are matters so much worse in the U.S.?

      Again, because services are often provided locally, firms have more market power: the ability to raise prices above what would prevail in a competitive market. A small town in rural America may have only one authorized Toyota repair shop, which virtually every Toyota owner is forced to patronize. The providers of these local services can raise prices over costs, increasing their profits and the share of income going to owners and managers. This, too, increases inequality. But again, why is U.S. inequality practically unique?

      …In the U.S., the market power of large corporations, which was greater than in most other advanced countries to begin with, has increased even more than elsewhere. On the other hand, the market power of workers, which started out less than in most other advanced countries, has fallen further than elsewhere. This is not only because of the shift to a service-sector economy—it is because of the rigged rules of the game, rules set in a political system that is itself rigged through gerrymandering, voter suppression and the influence of money. A vicious spiral has formed: economic inequality translates into political inequality, which leads to rules that favor the wealthy, which in turn reinforces economic inequality.

      …We are already paying a high price for inequality, but it is just a down payment on what we will have to pay if we do not do something—and quickly. It is not just our economy that is at stake; we are risking our democracy.

      As more of our citizens come to understand why the fruits of economic progress have been so unequally shared, there is a real danger that they will become open to a demagogue blaming the country’s problems on others and making false promises of rectifying “a rigged system.” We are already experiencing a foretaste of what might happen. It could get much worse.

Dereliction of Duty

[This editorial is in the November 2018 issue of Scientific American.]

      There are several hundred people in Washington, D.C., paid with taxpayer dollars, who are not doing their jobs. This November we have the chance to do something about that because these people are members of the U.S. Congress, and in upcoming elections, they can be replaced with representatives who will live up to their responsibilities.

      Those responsibilities, set out by the Constitution, include oversight of the executive branch, in this case the Trump administration. That administration's agencies are supposed to craft policies based, in part, on good evidence and good science. For the past 21 months, many of them have not. Yet Congress has refused to hold them accountable.

      Exhibit A is the Environmental Protection Agency. Its mission, the agency says, is “to protect human health and the environment ... based on the best available scientific information.” Instead the EPA has ignored scientific evidence to justify lowering power plant emissions and greenhouse gas targets; made it more difficult for people to learn about potentially dangerous chemicals in their communities; replaced independent scientists on advisory boards with people connected to businesses the agency is supposed to regulate; and tried to make it harder to use science as a basis for regulations to protect human health.

      During all of this, Congress has done next to nothing.

      Consider what happened this past spring, when EPA director Scott Pruitt, who has since resigned amid a dozen ethics investigations, proposed that no research could be used to form environmental policy unless all data connected to it were publicly available. He said this proposed rule would ensure transparency. It was really a transparent effort to ignore science.

      Specifically, it would ignore research that links industrial pollution to human health. These studies include confidential patient data, such as names, addresses, birthdays and health problems—data that were only provided by patients under a guarantee of privacy. The Six Cities study, begun in the 1970s, was the first research to show that particulate matter in the air hurts and kills people. It has been replicated several times. But because its publications do not include all private patient data, the study would be ignored by the EPA when it considers permissible pollution levels. The World Health Organization estimates that this kind of pollution, largely from minute particulates, kills three million people worldwide every year. For these reasons, the rule has been condemned by every major health and science group.

      There were two congressional hearings involving the EPA after this rule was proposed. The House Committee on Energy and Commerce’s environmental subcommittee interviewed Pruitt, starting off with the chair, Republican Representative John Shimkus of Illinois, stating he was “generally pleased” with what the agency was doing. The senior minority member, Democratic Representative Paul Tonko of New York, did voice concerns about science, but the focus of the hearing remained elsewhere. In the Senate, an appropriations subcommittee gave Pruitt a much tougher time on his personal ethics but also spent almost no effort on science.

      Pruitt has departed, but there is no reason to think that his antiscience approach has gone with him. The health studies rule is still under active consideration. Further, the EPA announced looser power plant standards this August despite admitting, in its own document, that the extra pollution would lead to 1,400 additional deaths in the U.S. each year.

      Similar evidence-free approaches have taken hold at the Department of the Interior, which is scuttling a wildfire-fighting science program whose discoveries help firefighters save lives by forecasting the direction of infernos. The Department of Energy has stopped a set of new efficiency standards for gas furnaces and other appliances. Congress has been quiet.

      Congressional committees work by majority rule, so if the Republicans in the current majority do not want to hold hearings or use their control over agency budgets to compel changes, there are none. But the American people can make a change. The entire House of Representatives and one third of the Senate are up for reelection right now (except for those who are retiring). We can, with our votes, make them do their jobs.

Artificial Wood

[These excerpts are from an article by Sid Perkins in the November 2018 issue of Scientific American.]

      A new lightweight substance is as strong as wood yet lacks its standard vulnerabilities to fire and water.

      To create the synthetic wood, scientists took a solution of polymer resin and added a pinch of chitosan, a sugar polymer derived from the shells of shrimp and crabs. They freeze-dried the solution, yielding a structure filled with tiny pores and channels supported by the chitosan. Then they heated the resin to temperatures as high as 200 degrees Celsius to cure it, forging strong chemical bonds.

      The resulting material…is as crush-resistant as wood….

      Unlike natural wood, the new material does not require years to grow. Moreover, it readily repels water—samples soaked in water and in a strong acid bath for 30 days scarcely weakened, whereas samples of balsa wood tested under similar conditions lost two thirds of their strength and 40 percent of their crush resistance. The new material was also difficult to ignite and stopped burning when it was removed from the flame.

      …Its porosity lends an air-trapping capacity that could make it suitable as an insulation for buildings….

Kids These Days

[These excerpts are from an article by Michael Shermer in the December 2018 issue of Scientific American.]

      …suicide rates increased 46 percent between 2007 and 2015 among 15- to 19-year-olds. Why are iGeners [members of the Internet Generation] different from Millennials, Gen Xers and Baby Boomers?

      Twenge attributes the malaise primarily to the widespread use of social media and electronic devices, noting a positive correlation between the use of digital media and mental health problems. Revealingly, she also reports a negative correlation between rates of depression and time spent on sports and exercise, in-person social interactions, doing homework, attending religious services, and consuming print media, such as books and magazines. Two hours a day on electronic devices seems to be the cutoff, after which mental health declines, particularly for girls who spend more time on social media, where FOMO (“fear of missing out”) and FOBLO (“fear of being left out”) take their toll….This, after noting that the percentage of girls who reported feeling left out increased from 27 to 40 between 2010 and 2015, compared with an increase from 21 to 27 for boys.

      …iGeners have been influenced by their overprotective “helicoptering” parents and by a broader culture that prioritizes emotional safety above all else. The authors identify three “great untruths”:

      1. The Untruth of Fragility: “What doesn’t kill you makes you weaker.”

      2. The Untruth of Emotional Reasoning: “Always trust your feelings.”

      3. The Untruth of Us versus Them: “Life is a battle between good people and evil people.”

      Believing that conflicts will make you weaker, that emotions are a reliable guide for responding to environmental stressors instead of reason and that when things go wrong, it is the fault of evil people, not you, iGeners are now taking those insalubrious attitudes into the workplace and political sphere….

      Solutions? “Prepare the child for the road, not the road for the child” is the first folk aphorism Lukianoff and Haidt recommend parents and educators adopt. “Your worst enemy cannot harm you as much as your own thoughts, unguarded” is a second because, as Buddha counseled, “once mastered, no one can help you as much.” Finally, echoing Aleksandr Solzhenitsyn, “the line dividing good and evil cuts through the heart of every human being,” so be charitable to others.

      Such prescriptions may sound simplistic, but their effects are measurable in everything from personal well-being to societal harmony. If this and future generations adopt these virtues, the kids are going to be alright.

Income Inequality and Homicide

[These excerpts are from an article by Maia Szalavitz in the November 2018 issue of Scientific American.]

      Income inequality can cause all kinds of problems across the economic spectrum—but perhaps the most frightening is homicide. Inequality—the gap between a society’s richest and poorest—predicts murder rates better than any other variable....It is more tightly tied to murder than straightforward poverty, for example, or drug abuse. And research conducted for the World Bank finds that both between and within countries, about half the variance in murder rates can be accounted for by looking at the most common measure of inequality….
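
[The excerpt leaves “the most common measure of inequality” unnamed. The Gini coefficient is the usual such measure, so the following sketch of mine assumes that is what is meant; the incomes are illustrative only.]

```python
# Gini coefficient: the mean absolute difference between all pairs of
# incomes, divided by twice the mean income. 0 means perfect equality;
# values near 1 mean one person holds nearly everything.

def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2.0 * n * n * mean)

# Illustrative incomes, not real data.
print(gini([1, 1, 1, 1]))    # 0.0: perfectly equal
print(gini([0, 0, 0, 100]))  # 0.75: highly unequal
```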

      Another possible explanation is that as richer people retreat into ever more exclusive communities, their virtual disappearance masks rises in local inequality that are felt by former neighbors. A society in which millions struggle to pay their student loans and make a decent living while watching U.S. secretary of education Betsy DeVos—a woman of enormous wealth—cut the education budget and protect predatory for-profit schools is unlikely to be a safe and stable one. When men have little hope of a better future for either themselves or their kids, fights over what little status they have left take on outsize power. To break the cycle, everyone must recognize that it is in no one’s interest to escalate such pain.

The Decline of Africa’s Largest Mammals

[These excerpts are from an article by Rene Bobe and Susana Carvalho in the 23 November 2018 issue of Science.]

      The human species is causing profound climatic, environmental, and biotic disruptions on a global scale. In the present time (now called the Anthropocene), most species of large terrestrial herbivores are threatened with extinction as their populations decline and their geographic ranges collapse under the pressure of human hunting, poaching, and encroachment. Although the scale of ongoing anthropogenic ecological disruptions is unprecedented, human-driven extinctions are not new: There is strong evidence that humans played a major role in the wave of megafaunal losses at the end of the Pleistocene, between about 10,000 and 50,000 years ago. But when did humans, or our ancestors, begin to have such a profound effect on large herbivores to the point of causing extinctions?...

      Hominins—species on our side of the evolutionary divergence that separated us from the chimpanzees—first appeared in Africa in the late Miocene, about 7 million years ago. The late Miocene was a time of global climatic and environmental change, with an expansion of grasslands…in tropical latitudes and an increasing frequency of fires. At that time, large mammals were abundant in eastern Africa. At Lothagam, for example, in the Turkana Basin of Kenya, there were 10 species of megaherbivores, including the earliest records of the elephant family, Elephantidae. Near Lothagam is the early Pliocene site of Kanapoi, with a rich fossil record dated to about 4.2 million years ago. Kanapoi has the earliest record of the hominin genus Australopithecus, which coexisted with at least 11 species of megaherbivores: five proboscideans, two rhinocerotids, two giraffids, and two hippopotamids, along with a diverse fauna of large carnivores that included giant otters, hyenas, two species of saber-tooth felids, and three species of crocodiles. Kanapoi, at about 4.2 million years ago, with an area of 32 km², had twice the number of megaherbivore species as the entire continent of Africa has today. Among the five species of proboscideans, there were the ancestors of the modern African and Asian elephants, Loxodonta and Elephas, respectively. From their first appearance in eastern Africa, both elephants fed predominantly on increasingly abundant grasses. It is probable that these elephants and other megaherbivores played a beneficial role for early hominins by opening up wooded environments, thereby resulting in the mix of woodlands and grasslands where hominins seemed to thrive….

Cracking the Cambrian

[These excerpts are from an article by Joshua Sokol in the 23 November 2018 issue of Science.]

      …During the Cambrian, which began about 540 million years ago, nearly all modern animal groups—as diverse as mollusks and chordates—leapt into the fossil record. Those early marine animals exhibited a dazzling array of body plans, as though evolution needed to indulge a creative streak before buckling down. For more than a century, scientists have struggled to make heads or tails—sometimes literally—of those specimens, figure out how they relate to life today, and understand what fueled the evolutionary explosion….

      Other sites around the world are also opening new vistas of the Cambrian. Scientists can now explore the animal explosion with a highlight reel of specimens, along with results from new imaging technologies and genetic and developmental studies of living organisms….Researchers may be closer than ever to fitting these strange creatures into their proper places in the tree of life—and understanding the “explosion” that birthed them.

      Each new find brings the simple joy of unearthing and imagining a seemingly alien creature….

      How Cambrian species are related to today’s animals has been debated since the fossils first came to light. Walcott classified his oddities within known groups, noting that some Burgess Shale fossils, such as the brachiopods, persisted after the Cambrian or even into the present. So, for example, he concluded almost all the creatures resembling today’s arthropods were crustaceans.

      But later paleontologists had other ideas. Harvard University's Stephen Jay Gould perhaps best captured the charisma of Cambrian life in his 1989 book Wonderful Life: The Burgess Shale and the Nature of History, in which he lavished attention on the “weird wonders” excavated from Walcott’s city block-size quarry. Gould argued that oddballs such as the aptly named Hallucigenia, a worm with legs and hard spines, seem unrelated to later animals. He slotted the unusual forms into their own phyla and argued that they were evolution's forgotten experiments, later cast aside by contingencies of fate.

      Contemporary paleontologists have settled on yet another way to understand them. Consider the arthropods, arguably Earth's most successful animals. In a family tree, the spray of recent branches that end in living arthropods—spiders, insects, crustaceans—constitutes a “crown” group. But some animals in the Burgess Shale probably come from earlier “stems” that branched off before the crown arthropods. These branches of the tree don’t have surviving descendants, like a childless great-uncle grinning out from a family photo. In that view, many of Gould's weird wonders are stem group organisms, related to the ancestors of current creatures although not ancestors themselves. Newer fossils from the Canadian Rockies help support that view. Caron argued in 2015, for example, that his specimens of Hallucigenia have features suggesting the animal belongs on a stem group of the velvet worms, creatures that still crawl around in tropical forests spitting slime….

      Bold claims that use anatomy to revise family trees engender similar controversy throughout the field. One argument that Hallucigenia fits with the velvet worms, for example, depends on the exact shape of its claws. But other teams counter that the claws aren’t diagnostic of ancestry.

      The uncertainties leave paleontologists ever hungry for newer, better specimens….

      Although show-stopping animals keep falling out of the strata, the full significance of the Cambrian explosion remains a mystery. Arthropods, the most diverse and common creatures known from the time, littered Cambrian ecosystems….the Cambrian witnessed both the birth and step-by-step diversification of many modern groups. Another approach yields a different answer, however. Geneticists use a tool called molecular clocks to trace back down the tree of life. By starting with genetic differences between living animals, which have accrued as a result of random mutations over the eons, molecular clocks can rewind time to the point where branches diverged.

      According to recent studies using that method, modern animals began to march off into their separate phyla some 100 million years before the Cambrian. The finding implies that those groups then hung out, inconspicuous or unnoticed in the fossil record, before suddenly stepping on stage.
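
[A minimal sketch of my own, not from the article, of the molecular-clock arithmetic described above; the genetic distance and substitution rate are illustrative placeholders, not measured values.]

```python
# Molecular clock: genetic differences accumulate roughly steadily, so
# the time since two lineages split is the observed genetic distance
# divided by twice the substitution rate (both lineages accumulate
# changes independently after the split).

def divergence_time(genetic_distance, rate_per_site_per_myr):
    """Estimate millions of years since two lineages diverged."""
    return genetic_distance / (2.0 * rate_per_site_per_myr)

# Hypothetical example: 1.2% sequence difference at a clock-like locus
# evolving at 0.001 substitutions per site per million years per lineage.
print(divergence_time(0.012, 0.001))  # ~6.0 million years
```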

      Paleontologists have a cryptic set of clues about life before the explosion. Long before the odd beasts of the Cambrian evolved, an even more alien set of ocean organisms left impressions on sedimentary rocks now seen in Namibia and Australia. The Ediacarans, as those fossils are called, taunt paleontologists with the same kind of interpretive challenge as the Cambrian’s weird wonders. But they’re even weirder. Their imprints suggest some grew in fractal patterns; others had three-part symmetry. Unhelpfully, they don’t have obvious mouths, guts, or appendages.

      …Caron and others keep hunting for fossil features that could reveal the relationships among Ediacaran, Cambrian, and present-day groups. Other researchers struggle to explain what caused the explosion of animal forms. Atmospheric oxygen may have spiked, enabling animals to grow bigger, stronger, and more active. Or erosion could have dumped toxic calcium into the oceans, prompting organisms to shunt it into building hard skeletons.

      Or biology itself could have led the way. Inventions such as predation, free swimming, and burrowing into the sea floor—all first seen in or shortly before the Cambrian—could have transformed a placid global ecology into a high-stakes contest, spurring waves of call-and-response innovation between groups. The explosion might also mark the moment when, after millions of years of quiet progress, animals had finally accrued the developmental recipes to build body parts and improvise on basic themes….Or, of course, multiple causes could have piled up together.

Fire and Rain

[These excerpts are from an editorial by Steve Metz in the November/December 2018 issue of The Science Teacher.]

      In the summer of 2018 a prolonged heat wave turned northern Europe brown. Sweden, Greece, and dozens of other European countries were ablaze with uncharacteristic wildfires, while states in the western U.S. experienced some of the largest fires in their history. Torrential rains flooded areas from California to Connecticut. Hurricane Florence dropped a meter of rain on the Carolinas, causing the Cape Fear River to crest at almost 19 meters (62 feet)….

      …Levels of atmospheric carbon dioxide reached 407 parts per million in August this year, up over 20% in the past 40 years, and now higher than any time in at least the past 800,000 years. The last time atmospheric CO2 concentrations were this high was more than three million years ago, when the temperature was 2°-3°C higher than during the pre-industrial era, and sea level was 15-25 meters higher than today….

      The effect of global warming on individual weather events is difficult to determine. Still, an emerging area of science—sometimes called “event attribution”—is rapidly advancing our understanding of the links between specific extreme events and human-caused climate change. For example, researchers estimated that “human interference in the climate system” increased Hurricane Florence’s rainfall forecast by over 50% and the storm’s projected diameter by about 80 kilometers….

      The events of summer 2018 warn us that climate change is not a far-off event. Global warming is real, it is caused mainly by human activity, and it is here now. Teachers must stand up for evidence-based climate science. We need to provide students with the accurate knowledge that can prepare and inspire them to take action on a personal, community, and global level.

A Physicist’s Final Reflections

[This excerpt is from an article by Andrew Robinson in the 16 November 2018 issue of Science.]

      …begins with Einstein. “Where did his ingenious ideas come from?” asks Hawking. He answers, “A blend of qualities, perhaps: intuition, originality, brilliance. Einstein had the ability to look beyond the surface to reveal the underlying structure. He was undaunted by common sense, the idea that things must be the way they seemed. He had the courage to pursue ideas that seemed absurd to others. And this set him free to be ingenious, a genius of his time and every other.”

      Was Hawking a genius, too? He never won a Nobel Prize, and the book gives no indication that Hawking regarded himself as a genius. On the other hand, he was one of the very few scientists since Einstein to become a household name….

Whose Science? A New Era in Regulatory “Science Wars”

[This excerpt is from an article by Wendy Wagner, Elizabeth Fisher and Pasky Pascual in the 9 November 2018 issue of Science.]

      …the reforms generally take the form of legislation or regulation, they do not simply suggest best practices for conducting scientific analyses but establish legal lines that cannot be crossed. Moreover, even though they create legal ground rules for scientific deliberations, the reforms have not been developed by the scientific community, but by members of Congress and political officials. In providing a bird’s-eye view of the legal developments in regulatory science over the past 50 years, we identify just how idiosyncratic these current reforms are and why the scientific community needs to be aware of their implications.

      Although the agency’s underlying scientific analysis is often subject to scrutiny by stakeholders and political officials and review by the courts, these new proposals cut deeper and dictate in part how the formative scientific assessments themselves must be done. For example, these proposals require the exclusion of potentially relevant research during agencies’ initial review of the literature, dictate the types of computational models that must be considered in analyzing that information, and exclude respected scientists from peer reviewing the analysis. If the agency does not respect these legal lines, the agency’s review of the scientific literature is legally invalid and technically illegal. This contrasts with present practice where norms governing scientific analyses are rebuttable and subject to modification in light of specific contexts and scientific progress. The proposals thus reach down to control and limit the scientific record.

      The scientific community has been vocal in pointing out how the rules diverge from normal scientific practices, even while the legal requirements—some of which are still proposed and others which are final—purport to advance common goals, like data transparency and reproducibility….

Early Mongolians Ate Dairy but Lacked the Gene to Digest It

[These excerpts are from an article by Andrew Curry in the 9 November 2018 issue of Science.]

      More than 3000 years ago, herds of horses, sheep, and cows or yaks dotted the steppes of Mongolia. Their human caretakers ate the livestock and honored them by burying the animal bones with their own. Now, analysis of deposits on ancient teeth shows that early Mongolians milked their animals as well. That may not seem surprising. But the DNA of the same ancient individuals shows that as adults they lacked the ability to digest lactose, a key sugar in milk.

      The findings deepen an emerging puzzle, challenging an oft-told tale of how people evolve lactase persistence, the ability to produce a milk-digesting enzyme as adults….

      Most people in the world lose the ability to digest lactose after childhood. But in pastoralist populations, the story went, culture and DNA changed hand in hand. Mutations that allowed people to digest milk as adults would have given their carriers an advantage, enabling them to access a rich, year-round source of fat and protein. Dairying spread along with the adaptation, explaining why it is common in herding populations in Europe, Africa, and the Middle East.

      But a closer look at cultural practices around the world has challenged that picture. In modern Mongolia, for example, traditional herders get more than a third of their calories from dairy products. They milk seven kinds of mammals, yielding diverse cheeses, yogurts, and other fermented milk products, including alcohol made from mare’s milk….

      Geneticists are going back to the drawing board to understand why lactase persistence is common—and apparently selected for—in some dairying populations but absent in others….

      How dairying reached Mongolia is also a puzzle. The Yamnaya’s widespread genetic signature shows they replaced many European and Asian Bronze Age populations. But they seem to have stopped at the Altai Mountains west of Mongolia….

Facing Hatred

[These excerpts are from an editorial by Jose-Alain Sahel in the 9 November 2018 issue of Science.]

      …Two weeks ago, the mass killing at a Pittsburgh synagogue proved what I knew but wanted to forget—that no place on Earth is “safe” from hatred. But regardless of where any one of us lives and works, we are faced with the same, immense challenge: The quest for facts, enlightenment, and care versus ignorance and hatred matters more than ever.

      In the early 1930s, Albert Einstein asked Sigmund Freud to contribute to an initiative launched by the League of Nations that sought prominent individuals to promote peace and its values. Their exchange, written under the title Why War?, fell short of finding ways to counteract violence. It was published after Hitler had already been appointed chancellor. The rest is history. Should we simply believe that today there is a new cycle of history taking place? Moreover, in contrast to the League of Nations, nobody is asking the scientific community for help today. Science, with its fundamental quest for truth, or at least facts, and its open discourse, seems to have lost its iconic status, despite its contributions to societal well-being. The scientific community should not passively watch the disastrous rise of hatred worldwide.

      We are tasked with building a society of knowledge and care, where truth, integrity, and respect for all prevail. The heartening responses of health care providers to incidents in the United States and France epitomize this ideal. In 2015, after the mass killing at the Bataclan theater in Paris, nurses and physicians spontaneously converged on hospitals to help the victims, limiting the massive toll of the attacks. Likewise, in Pittsburgh, nurses and physicians treated the wounded, including the presumed killer, with efficiency and humanity. Helplines offering information and support were set up for a whole city in mourning. Religious and political leaders of all faiths and backgrounds, and community members from across the cultural spectrum, were united against hatred. In both cities, the responses were deeply rooted in society’s best, most inspiring traits….

      Caring means that each life matters, and that we all can and should be supported to grow and give back to society. The French Jewish philosopher Emmanuel Levinas asserted that looking into the face of one's fellow man invokes the imperative: “Thou shalt not kill.” This sounds naive and far too simplistic in the face of guns and strongly held prejudices. Yet, is there anything else in the world more meaningful than looking into human faces and listening?

      If we are truly an enlightened and caring society, then our response to violence must be to reject resignation and to include actions by those who seek truth and fact. This is now, as ever, our inheritance.

Money, Power, and Choices

[These excerpts are from an article by Maria Ferguson in the November 2018 issue of Phi Delta Kappan.]

      Most years, K-12 education ranks far below topics such as the economy, health care, and immigration among the issues of greatest concern to American voters. In the run-up to the 2018 midterm elections, though, education has taken on a sizable, if not quite leading, role. Across the country, in races big and small, pollsters report high levels of interest in where the candidates stand on teacher pay, student safety, and other school-related topics.

      On the surface, that might suggest a broad resurgence in support for public schooling. But the truth is that public education means very different things to different people. Some voters think of free and open education as a great equalizer, an essential democratic institution that undergirds our very sense of ourselves as a nation. Others think of it solely as a means of promoting their own children’s interests — and they vote accordingly. Not everyone buys into the notion that public schools are meant to be yours, mine, and ours together.

      Indeed, while public school systems, with their tax-based funding and local governing boards, may have been designed to secure the common good, they have changed dramatically over the last half century, becoming less and less equitable and more and more vulnerable to a host of political pressures and moneyed interests. Further, some of the most consequential debates and decisions about public schooling have occurred behind the curtain, outside the view of ordinary Americans. When they vote in local, state, and federal elections, most people — even voters who try to stay well-informed — know little about how campaign donations have influenced the positions their candidates take.

      …wealthy outsiders collaborate to “hijack the democratic process” and unduly influence voting in states and communities where they have no roots, no children, and no ownership. How, they ask, does that kind of outsider influence square with a system that is supposed to be locally controlled?

      It is a fair question. Even if many of these wealthy outsiders have the best of intentions, can we really say that a public system is locally controlled if one point of view is supported by so much external firepower? Unfortunately, the Supreme Court’s 2010 decision in the Citizens United case made this fair question a moot point. Money from all kinds of sources can and does influence elections….

Changing the Game on Climate

[These excerpts are from an interview with Gary Yohe in the 2018 Year in Review issue of The Nature Conservancy.]

      …Climate change poses many risks to nature and human health, and the cost of reducing those risks by reducing carbon emissions will only increase the longer we wait. We’ll get farther in both if, instead of relying on benefit-cost analysis, we accept the recent wisdom of the IPCC that climate change is a risk-management problem. There are real consequences—human lives, extreme weather, ecosystem collapse—that clarify the tradeoffs without undue reliance on monetary analyses.

      …Technology has been changing in favor of renewable energy and will continue to move in that direction. For individuals, speaking to ways that their communities and states can drive clean energy policy and implementation is key. Happily, it’s becoming clear to many corporations that curbing emissions is good for their bottom lines. Many major companies are committing themselves, but everyone needs to follow their lead.

      …Adaptation is essential. We’re not going to “fix” climate change. We’re already seeing its risks, damages and health effects, so supporting adaptation doesn’t mean giving up on mitigation. Both are an essential part of the most efficient portfolio of responses. To help, get educated and figure out what you and your communities can do.

Composites from Renewable and Sustainable Resources: Challenges and Innovations

[This excerpt is from an article by Amar K. Mohanty, Singaravelu Vivekanandhan, Jean-Mathieu Pin and Manjusri Misra in the 2 November 2018 issue of Science.]

      The era of natural fiber composites currently known as biocomposites dates to 1908 with the introduction of cellulose fiber-reinforced phenolic composites. This innovation was followed by synthetic glass fiber-reinforced polyester composites, which obtained commodity status in the 1940s. The use of biobased green polymers to manufacture auto parts began in 1941, when Henry Ford made fenders and deck lids from soy protein-based bioplastic. The use of composite materials, made with renewable and sustainable resources, has become one of the vital components of the next generation of industrial practice. Their expanding use is driven by a variety of factors, including the need for sustainable growth, energy security, lower carbon footprint, and effective resource management, while functional properties of the materials are simultaneously being improved. Innovative sustainable resources such as biosourced materials, as well as wastes, coproducts, and recycled materials, can be used as both the matrix and reinforcement in composites to minimize the use of nonrenewable resources and to make better use of waste streams.

      Composite materials find a wide range of potential applications in construction and auto-parts structures, electronic components, civil structures, and biomedical implants. Traditionally, industrial sectors that require materials with superior mechanical properties use composites made from glass, aramid, and carbon fibers to reinforce thermoplastics such as polyamide (PA), polypropylene (PP), and poly(vinyl chloride) (PVC), as well as thermoset resins such as unsaturated polyester (UPE) and epoxy resin. In addition to fiber, mineral fillers such as talc, clay, and calcium carbonate are being used in composite manufacturing. Such hybrids of fiber and mineral fillers play a major role in industrial automotive, housing, and even packaging applications. Carbon black plays a vital role as a reinforcement, especially in rubber-based composites. The key environmental concern with regard to composite materials is the difficulty of removing individual components from their structures to enable recycling at the end of a material’s service life. At this point, most composite materials are either sent to a landfill or incinerated. Wood and other natural fibers (e.g., flax, jute, sisal, and cotton), collectively called “biofibers,” can be used to reinforce fossil fuel-based plastic, thus resulting in biocomposite materials. Synthetic glass fiber-reinforced biobased plastics such as polylactides (PLAs) are a type of biocomposite. Biofiber-PP and biofiber-UPE composites have reached commodity status in many auto parts, as well as decking, furniture, and housing applications. Hybrid biocomposites of natural and synthetic fibers as well as mixed matrix systems also represent a key strategy in engineering new classes of biobased composites. As part of feedstock selection, a wide range of renewable products that includes agricultural and forestry residues, wheat straw, rice straw, and waste wood, as well as undervalued industrial coproducts including biofuel coproducts such as lignin, bagasse, and clean municipal solid wastes, is currently being explored to derive chemicals and materials. Recent advancements in biorefinery concepts create new opportunities with side-stream product feedstock that can be valorized in the fabrication of a diverse array of biocomposites.

      Materials scientists can help in advancing sustainable alternatives by quantifying the environmental burden of a material through its product life-cycle analysis. The exponential growth of population and the modernization of our society will lead to a threefold increase in the demand for global resources if the current resource-intensive path is continued. According to the United Nations, a truckload of plastic waste is poured into the sea every minute. By 2050, at current rates, the plastic in the ocean will outweigh the fish. An estimated $80 billion to $120 billion worth of plastic packaging material is currently lost to the economy; diverting it would recapture that value. If diverted for composite use, the recycled and waste plastic currently destined for landfills and incineration would be used for sustainable development, thereby reducing dependence on nonrenewable resources such as petroleum. Postindustrial food processing wastes are being explored as biofillers in biodegradable plastics for the development of compostable biocomposites. Low-value biomass and waste resources can be pyrolyzed to provide biocarbon (biochar) as a sustainable filler for biocomposite uses. Increasing the sustainability of composite industries requires basic and transformative research toward the design of entirely green composites. Renewable resource-based sustainable polymers and bioplastics, as well as advanced green fibers such as lignin-based carbon fiber and nanocellulose, have great potential for sustainable composites. Biobased nonbiodegradable composites show promising applications in auto parts and other manufacturing applications that require durability. Biodegradable composites also show promise in sustainable packaging. This comprehensive Review on composites from sustainable and renewable resources aims to summarize their current status, constraints on wider adoption, and future opportunities. In keeping with the broad focus of this article, we analyze the current development of such composites and discuss various fibers and fillers for reinforcements, current trends in polymer matrix systems, and the integration of recycled and waste coproducts into composite systems to outline future research trends.

Shifting Summer Rains

[These excerpts are from an article by David McGee in the 2 November 2018 issue of Science.]

      Most of China's water supply depends on rainfall from the East Asian summer monsoon (EASM), a seasonal progression of rains that begins along the southern coast in spring, then sweeps north, reaching northeastern China in midsummer….Projections of the EASM’s response to future climate change are complicated by its complex interaction with the mid-latitude jet stream, which appears to govern the monsoon’s northward march each spring and summer. To investigate the monsoon's sensitivity and dynamics, many scientists have turned to examining its past changes recorded in natural archives. Although past climates are not a direct analog of the 21st-century climate, they offer vital tests of the ability to describe monsoon behavior through theories and numerical models….Through examination of trace elements in Chinese stalagmites (a proxy for local precipitation amount) and climate modeling experiments, they show that cooling episodes in the North Atlantic shifted the summer jet stream south, delaying the onset of monsoon rains in northeastern China and increasing rainfall in central China. The finding demonstrates that local rainfall in the EASM regions can vary in opposition to monsoon strength, and it highlights the importance of future high-latitude warming in determining precipitation patterns in China.

      Central to investigations of the EASM’s history are records of the oxygen isotope composition of rainfall recorded in stalagmites from Chinese caves…

      …The study presents measurements of trace impurities in the calcium carbonate lattice—substitutions of magnesium and strontium for calcium—in two stalagmites from central China that span the transition from the peak of the last ice age to the beginning of the current interglacial period, between 21,000 and 10,000 years ago. These trace-element variations are interpreted to reflect changes in the rate at which infiltrating waters passed through the rock above the cave, which should track local precipitation. Oxygen isotopes in these samples record the same pronounced variations as other Chinese stalagmites during the last deglaciation….

Restoring Lost Grazers Could Help Blunt Climate Change

[These excerpts are from an article by Elizabeth Pennisi in the 26 October 2018 issue of Science.]

      Restoring reindeer, rhinoceroses, and other large mammals could help protect grasslands, forests, and tundra from catastrophic wildfires and other threats associated with global warming, new studies suggest. The findings give advocates of so-called trophic rewilding—reintroducing lost species to reestablish healthy food webs—a new rationale for bringing back the big grazers….

      Rewilding is often associated with an ambitious proposal to restore large mammals, including even ice age mammoths, to a huge park in Russia….But mammoth resurrection is still just a dream, and most rewilders are focused on restoring animals such as giant tortoises, dam-building beavers, and herds of grazers.

      Now, it seems rewilding could offer a climate bonus. As the planet has warmed, fire seasons have become 25% longer than they were 30 years ago, and more areas are experiencing severe blazes….

      …In the case of white rhinos, fires averaged just 10 hectares when the animals were present—because they kept plants closely cropped, and their paths created fire breaks—but increased to an average of 500 hectares after the rhinos vanished….

      Others are skeptical. As with any ecosystem re-engineering effort, the long-term effects of rewilding are hard to anticipate….Some modeling, for example, suggests increased Arctic grazing will lead to greater carbon release, not less. And creating Arctic herds big enough to make a difference could be difficult….

Like this World of Ours

[These excerpts are from an article by Marcia Bartusiak in the Fall 2018 issue of MIT Spectrum.]

      …But speculation that planetary systems circle other stars started long, long ago—in ancient times. In the fourth century BCE, the Greek philosopher Epicurus, in a letter to his student Herodotus, surmised that there are “infinite worlds both like and unlike this world of ours.” As he believed in an infinite number of atoms careening through the cosmos, it only seemed logical that they’d ultimately construct limitless other worlds.

      The noted 18th-century astronomer William Herschel, too, conjectured that every star might be accompanied by its own band of planets but figured they could “never be perceived by us on account of the faintness of light.” He knew that a planet, visible only by reflected light, would be lost in the glare of its sun when viewed from afar.

      But astronomers eventually realized that a planet might be detected by its gravitational pull on a star, causing the star to wobble systematically, like an unbalanced tire, as it moves through the galaxy. Starting in 1938, Peter van de Kamp at Swarthmore College spent decades regularly photographing Barnard’s star, a faint red dwarf located six light-years away that shifts its position in the sky by the width of the Moon every 180 years, faster than any other star. In the 1960s, van de Kamp got worldwide attention when he announced that he had detected a wobble, which seemed to indicate that at least one planet was tagging along in the star’s journey. But by 1973, once Allegheny Observatory astronomer George Gatewood and Heinrich Eichhorn of the University of Florida failed to confirm the Barnard’s star finding with their own, more sensitive photographic survey, van de Kamp’s celebrated claim of detecting the first extrasolar planet disappeared from the history books.

      The wobble technique lived on, however, in another fashion. Astronomers began focusing on how a stellar wobble would affect the star’s light. When a star is tugged radially toward the Earth by a planetary companion, the stellar light waves get compressed—that is, made shorter—and thus shifted toward the blue end of the electromagnetic spectrum. When the star is pulled away by a gravitational tug, the waves are stretched and shifted the other way, toward the red end of the spectrum. Over time, these periodic changes in the star’s light can become discernible, revealing how fast the star is moving back and forth due to planetary tugs.
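
      [A back-of-the-envelope illustration of my own, not from the article: for speeds far below that of light, the fractional shift in wavelength equals the star’s radial velocity divided by the speed of light. At the dozen-meters-per-second precision mentioned below, the shift is only a few parts in a hundred million, which is why planet hunting by this method demanded such exquisite instruments.]

$$\frac{\Delta\lambda}{\lambda} = \frac{v_r}{c} \approx \frac{12\ \text{m/s}}{3\times 10^{8}\ \text{m/s}} \approx 4\times 10^{-8}$$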

      In 1979, University of British Columbia astronomers Bruce Campbell and Gordon Walker pioneered a way to detect velocity changes as small as a dozen meters a second, sensitive enough for extrasolar planet hunting to begin in earnest. Constantly improving their equipment, planet hunters were even more encouraged in 1983 and 1984 by two momentous events: the Infrared Astronomical Satellite (IRAS) began seeing circumstellar material surrounding several stars in our galaxy; and optical astronomers, taking a special image of the dwarf star Beta Pictoris, revealed an edge-on disk that extends from the star for some 37 billion miles (60 billion kilometers). It was the first striking evidence of planetary systems in the making, suggesting that such systems might be common after all.

      The first indication of an actual planet orbiting another star arrived unexpectedly and within an unusual environment. In 1991, radio astronomers Alex Wolszczan and Dale Frail, while searching for millisecond pulsars at the Arecibo Observatory in Puerto Rico, saw systematic variations in the beeping of pulsar B1257+12, which suggested that three bodies were orbiting the neutron star. Rotating extremely fast, millisecond pulsars are spun up by accreting matter from a stellar companion. So, this system, reported Wolszczan and Frail, “probably consists of 'second generation' planets created at or after the end of the pulsar’s binary history.”

      The principal goal for extrasolar planet hunters, though, was finding evidence for “first generation” planets around stars like our Sun—planets that formed from the stellar nebula itself as a newborn star is created. That long-anticipated event at last occurred in 1995, when Geneva Observatory astronomers Michel Mayor and Didier Queloz, working from the Haute-Provence Observatory in southern France, discerned the presence of an object similar to Jupiter orbiting 51 Pegasi, a sunlike star some 50 light-years distant in the constellation Pegasus….

Youth Climate Trial Showcases Science

[These excerpts are from an article by Julia Rosen in the 26 October 2018 issue of Science.]

      Next week, barring a last-minute intervention by the Supreme Court, climate change will go to trial for just the second time in U.S. history. In a federal courtroom in Eugene, Oregon, 21 young people are scheduled to face off against the U.S. government, which they accuse of endangering their future by promoting policies that have increased emissions of carbon dioxide (CO2) and other planet-warming gases. The plaintiffs aren’t asking for monetary damages. Instead, they want District Judge Ann Aiken to take the unprecedented step of ordering federal agencies to dramatically reduce the amount of CO2 in the atmosphere.

      Government attorneys are not expected to challenge the scientific consensus that human activities, including the burning of fossil fuels, cause global warming. But the outcome could hinge, in part, on how Aiken weighs other technical issues….

      The civil trial will be a milestone in a hard-fought legal battle that began in 2015, when environmental groups joined with youth activists and retired NASA scientist James Hansen to push for climate action. The lawsuit rests on the novel argument that the government has knowingly violated the plaintiffs’ rights to a “safe” climate by taking actions—such as subsidizing fossil fuels—that cause warming….

      If the trial proceeds, the youths’ lawyers will have to persuade the judge that the government's actions have helped cause climate change; that the warming exacerbated storms, droughts, and wildfires; and that individual plaintiffs have suffered injuries as a result….Hotter, drier weather increases the risk of fires….And warmer air can hold more moisture, boosting rainfall by up to 20%—a factor he says worsened floods that affected plaintiffs living in Louisiana, Florida, and Colorado….

      The two sides disagree over whether the United States can reduce emissions as fast as the plaintiffs would like. But the government may also argue that any cuts would have a limited impact, because of the global nature of climate change. Other countries now produce roughly 88% of the world’s greenhouse gas emissions….Therefore, the solution requires international cooperation….

      Courts have rejected that argument as a rationale for inaction in previous cases….In the only other climate lawsuit to get to the trial stage—a landmark 2007 case that challenged EPA’s refusal to regulate CO2 from vehicles—the agency claimed doing so wouldn’t matter because U.S. cars contributed just 6% of global emissions. But the Supreme Court disagreed….

Imagine a World without Facts

[These excerpts are from an editorial by Jeremy Berg in the 26 October 2018 issue of Science.]

      …We are now living in a world where the reality of facts and the importance of scientific inquiry and responsible journalism are questioned with distressing frequency. This trend needs to be called out and arrested; the consequences of allowing it to continue are potentially quite damaging.

      Facts are statements that have a very high probability of being verified whenever appropriate additional observations are made. Thus, facts can be reliably used as key components in interpreting other observations, in making predictions, and in building more complicated arguments….

      Consider the present “post-fact” world in this context. The lack of acceptance and cynical or ignorant questioning of well-documented evidence erode the perception that many propositions are well-supported facts, weakening the foundation on which many discussions and policies rest. Under these circumstances, numerous alternatives appear to be equally plausible because the evidence supporting some of these alternatives has been discounted. This creates a world of ignorance where many possibilities seem equally likely, causing subsequent discussions to proceed without much foundation and with outcomes determined by considerations other than facts.

      Overly discounting information from appropriately trained researchers based on well-conducted studies, or from well-qualified journalists who pursue information with good practices that include interactions with multiple independent sources, can falsely restrict the available evidence. If evidence is judged without regard for its methodological basis, as well as a thorough assessment of sources of bias, unreliable conclusions may be drawn. If shoddy evidence is accepted, false interpretations may appear to be plausible even though they lack substantial evidentiary support. If robust evidence is undervalued or ignored, excess uncertainty will remain even when some propositions should be considered well established.

      To avoid sliding further into a world without facts, we must articulate and defend the processes of evidence generation, evaluation, and integration. This includes not only clear statements of conclusions, but also clear understanding of the underlying evidence with recognition that some propositions have been well established, whereas others are associated with substantial remaining uncertainty. We should acknowledge and accept responsibility for, but not exaggerate, challenges within the scientific enterprise. At the same time, we should continue to call out statements put forward that are factually incorrect with reference to the most pertinent evidence. Without taking these steps forcefully, we risk living in a world where many things do not work as well as we need them to.

Your Genome, on Demand

[These excerpts are from an article by Ali Torkamani and Eric Topol in the November-December 2018 issue of MIT Technology Review.]

      In early 2018, it was estimated that over 12 million people had had their DNA analyzed by a direct-to-consumer genetic test. A few months later, that number had grown to 17 million. Meanwhile, geneticists and data scientists have been improving our ability to convert genetic data into useful insights—forecasting which people are at triple the average risk for heart attack, or identifying women who are at high risk for breast cancer even if they don't have a family history or a BRCA gene mutation. Parallel advances have dramatically changed the way we search for and make sense of volumes of data, while smartphones continue their unrelenting march toward becoming the de facto portal through which we access data and make informed decisions.

      Taken together, these things will transform the way we acquire and use personal genetic information. Instead of getting tests reactively, on a doctor's orders, people will use the data proactively to help them make decisions about their own health.

      With a few exceptions, the genetic tests used today detect only uncommon forms of disease. These tests identify rare variants in a single gene that cause the disease.

      But most diseases aren’t caused by variants in a single gene. Often a hundred or more changes in genetic letters collectively indicate the risk of common diseases like heart attack, diabetes, or prostate cancer. Tests for these types of changes have recently become possible, and they produce what is known as your “polygenic” risk score. Polygenic risk scores are derived from the combination of these variants, inherited from your mother and father, and can point to a risk not manifest in either parent’s family history. We’ve learned from studies of many polygenic risk scores for different diseases that they provide insights we can’t get from traditional, known risk factors such as smoking or high cholesterol (in the case of heart attack). Your polygenic score doesn’t represent an unavoidable fate—many people who live into their 80s and 90s may harbor the risk for a disease without ever actually getting it. Still, these scores could change how we view certain diseases and help us understand our risk of contracting them.

      Genetic tests for rare forms of disease caused by a single gene typically give a simple yes or no result. Polygenic risk scores, in contrast, are on a spectrum of probability from very low risk to very high risk. Since they’re derived from combinations of genome letter changes that are common in the general population, they’re relevant to everybody. The question is whether we'll find a way to make proper use of the information we get from them….
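
      [To make the idea concrete, here is a minimal sketch of my own, not from the article; the SNP labels, weights, and genotype are hypothetical. A polygenic score is typically computed as a weighted sum of risk-allele counts across many variants, then ranked against a reference population.]

```python
# Minimal polygenic risk score sketch (illustrative; all numbers hypothetical).
# Genotype at each SNP = count of risk alleles inherited (0, 1, or 2);
# each SNP carries a small effect-size weight estimated from association studies.

snp_weights = {  # hypothetical per-allele effect sizes
    "rs0001": 0.12,
    "rs0002": -0.05,
    "rs0003": 0.08,
    "rs0004": 0.02,
}

genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0, "rs0004": 1}  # one person

def polygenic_score(weights, counts):
    """Weighted sum of risk-allele counts across all scored SNPs."""
    return sum(w * counts.get(snp, 0) for snp, w in weights.items())

print(f"raw polygenic score: {polygenic_score(snp_weights, genotype):.3f}")
# In practice the raw score is compared with a reference population to place
# a person on the low-to-high risk spectrum the article describes.
```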

      Statin drugs are a good case study for this. They’re widely used, even though 95% of the people taking them who haven't had heart disease or stroke get no benefit aside from a nice cholesterol lab test. We can use a polygenic risk score to reduce unnecessary statin use, which not only is expensive but also carries health risks such as diabetes. We know that if you are in the top 20% of polygenic risk for heart attack, you're more than twice as likely to benefit from statins as people in the bottom 20%; these people can also benefit greatly from improving their lifestyle (stopping smoking, exercising more, eating more vegetables). So knowing your polygenic risk might cause you to take statins but also to make some lifestyle changes….

      And it’s not just about heart disease. A polygenic risk score might tell you that you’re at high risk for breast cancer and spur you to get more intensive screening and avoid certain lifestyle risks. It might tell you that you’re at high risk for colon cancer, and therefore you should avoid eating red meat. It might tell you that you're at high risk for type 2 diabetes, and therefore you should watch your weight….

      Another challenge will be to convince people to forgo or delay medical interventions if they have a low risk of a certain condition. This will require them to agree that they're better off accepting a very low risk of a catastrophic outcome rather than needlessly exposing themselves to a medical treatment that has its own risks….

      You can't change your genetic risk. But you can use lifestyle and medical interventions to offset that risk….

Opening a Door to Eugenics

[These excerpts are from an article by Nathaniel Comfort in the November-December 2018 issue of MIT Technology Review.]

      If this is “the science,” the science is weird. We’re used to thinking of science as incrementally seeking causal explanations for natural phenomena by testing a series of hypotheses. Just as important, good science tries as hard as it can to disprove its own working hypotheses.

      Sociogenomics has no experiments, no null hypotheses to accept or reject, no deductions from the data to general principles. Nor is it a historical science, like geology or evolutionary biology, that draws on a long-running record for evidence.

      Sociogenomics is inductive rather than deductive. Data is collected first, without a prior hypothesis, from longitudinal studies like the Framingham Heart Study, twin studies, and other sources of information—such as direct-to-consumer DNA companies like 23andMe that collect biographical and biometric, as well as genetic, data on all their clients.

      Algorithms then chew up the data and spit out correlations between the trait of interest and tiny variations in the DNA, called SNPs (for single-nucleotide polymorphisms). Finally, sociogenomicists do the thing most scientists do at the outset: they draw inferences and make predictions, primarily about an individual’s future behavior.

      Sociogenomics is not concerned with causation in the sense that most of us think of it, but with correlation. The DNA data often comes in the form of genome-wide association studies (GWASs), a means of comparing genomes and linking SNP variation to traits. Sociogenomics algorithms ask: are there patterns of SNPs that correlate with a trait, be it high intelligence or homosexuality or a love of gambling?

      Yes—almost always. The number of possible combinations of SNPs is so large that finding associations with any given trait is practically inevitable.
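
      [A toy simulation of my own, not the author’s, makes the point quantitative: test enough pure-noise “SNPs” against any trait and a predictable fraction will clear a naive significance threshold by chance alone.]

```python
# Why scanning huge numbers of SNPs "almost always" finds correlations:
# even random genotypes correlate "significantly" with a random trait about
# 5% of the time at a naive p < 0.05 cutoff. (Toy illustration, not a GWAS.)
import random
import statistics

random.seed(0)
n_people, n_snps = 500, 2000
trait = [random.gauss(0, 1) for _ in range(n_people)]

def corr(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

threshold = 2 / n_people ** 0.5  # roughly two standard errors of r under the null
hits = sum(
    abs(corr([random.choice([0, 1, 2]) for _ in range(n_people)], trait)) > threshold
    for _ in range(n_snps)
)
print(f"{hits} of {n_snps} pure-noise SNPs look 'significant'")  # expect ~5%
# Real genome-wide studies therefore demand far stricter thresholds (~5e-8)
# and independent replication before claiming an association.
```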

      …statistical significance does not equal biological significance. The number of people buying ice cream at the beach is correlated with the number of people who drown or get eaten by sharks at the beach. Sales figures from beachside ice cream stands could indeed be highly predictive of shark attacks. But only a fool would bat that waffle cone from your hand and claim that he had saved you from a Great White….

      Sociogenomics is the latest chapter in a tradition of hereditarian social science dating back more than 150 years. Each iteration has used new advances in science and unique cultural moments to press for a specific social agenda. It has rarely gone well.

      The originator of the statistical approach that sociogenomicists use was Francis Galton, a cousin of Charles Darwin. Galton developed the concept and method of linear regression—fitting the best line through a scatter of data points—in a study of human height. Like all the traits he studied, height varies continuously, following a bell-curve distribution. Galton soon turned his attention to personality traits, such as “genius,” “talent,” and “character.” As he did so, he became increasingly hereditarian. It was Galton who gave us the idea of nature versus nurture. In his mind, despite the “sterling value of nurture,” nature was “by far the more important.”
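
      [For reference, a gloss of mine rather than the article’s: the “best line” in Galton’s sense is the least-squares line, whose slope and intercept follow directly from the data’s means, variances, and covariance. In Galton’s height data the slope came out less than 1, so exceptionally tall parents tended to have children closer to the average, the “regression to the mean” that gave the method its name.]

$$\hat{\beta} = \frac{\operatorname{cov}(x,y)}{\operatorname{var}(x)}, \qquad \hat{\alpha} = \bar{y} - \hat{\beta}\,\bar{x}$$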

      Galton and his acolytes went on to invent modern biostatistics—all with human improvement in mind….

      …After prominent eugenicists canvassed, lobbied, and testified on their behalf, laws were passed in dozens of states banning “miscegenation” or other “dysgenic” marriage, calling for sexual sterilization of the unfit, and throttling the stream of immigrants from what certain politicians today might refer to as “shithole countries.”

      …Genetics has an abysmal record for solving social problems. In 1905, the French psychologist Alfred Binet invented a quantitative measure of intelligence—the IQ test—to identify children who needed extra help in certain areas. Within 20 years, Binet was horrified to discover that people were being sterilized for scoring too low, out of a misguided fear that people of subnormal intelligence were sowing feeblemindedness genes like so much seed corn.

The Cell’s Power Plant

[These excerpts are from an article by Jonathan Shaw in the November-December 2018 issue of Harvard Magazine.]

      Mitochondria produce metabolic energy by oxidizing carbohydrates, protein, and fatty acids. In a five-part respiratory chain, the organelle captures oxygen and combines it with glucose and fatty acids to create the complex organic chemical ATP (adenosine triphosphate), the fuel on which life runs. Cells can also produce a quick and easy form of sugar-based energy without the help of mitochondria, through an anaerobic process called glycolysis, but a mitochondrion oxidizing the same sugar yields 15 times as much energy for the cell to use. This energy advantage is generally accepted as the reason that, between one billion and one and a half billion years ago, a single free-living bacterium and a single-celled organism with a nucleus entered into a mutually beneficial relationship in which the bacterium took up residence inside the cell. No longer free-living, that bacterium evolved to become what is now the mitochondrion, an intracellular organelle.
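
      [The 15-fold figure squares with standard textbook yields; the arithmetic here is mine, not the article’s: glycolysis nets about 2 ATP per glucose, while complete mitochondrial oxidation of the same glucose yields roughly 30.]

$$\frac{\sim 30\ \text{ATP (oxidative phosphorylation)}}{2\ \text{ATP (glycolysis)}} \approx 15$$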

      Recent discoveries in the Mootha lab have suggested an alternative explanation for this unusual partnership that focuses on the organelle’s ability to detoxify oxygen by consuming it. But whatever the underlying reason, this ancient, extraordinary connection formed just once, Mootha says—and the evolutionary success it conferred was so great that the single cell multiplied and became the ancestor of all plants, animals, and fungi.

      Nobody knows what that first cell looked like, but the bacterium that hitched a ride inside it was probably a relative of the “bugs” that cause Lyme disease, typhus, and chlamydia. In fact, mitochondria are similar enough to these bacteria that when physicians target such intracellular infections with specialized antibiotics, mitochondria are impaired, too….

      The answer lies in evolutionary history. After mitochondria took up residence in cells more than a billion years ago—in what was probably a long, drawn-out process—many genes were transferred from the mitochondria into the genome of the host—in other words, into the nucleus of the cell, where most DNA resides. The result of this gene transfer is that the mitochondrial genome, with just 16,000 base pairs (the building blocks of DNA), has been stripped down to bare essentials. Compared to its ancestral form and also to living relatives, such as the Rickettsia bacterium that causes typhus, which has more than a million base pairs, its genome is now tiny.

      …An outpouring of research began to hint at the versatile, indispensable role that mitochondria play in the regulation of cell death, the immune system, and cell signaling. The traditional focus on energy production, in other words, may have misled researchers—all the way back to their interpretation of what happened more than a billion years ago, when that single cell and a lone bacterium entered into a long-term relationship.

      Oxygen levels on early Earth were low at that time, but rising, Mootha says. “We think of oxygen as a life-giving molecule, and it is, but it can also be very corrosive”—think of how it rusts a car. In biology, oxygen and its byproducts are known to cause cellular damage, and are implicated in aging.

      Mitochondria, on the other hand, are consumers of oxygen. Maybe, according to a hypothesis favored by the Mootha lab, the selective advantage that accrued to the first cell to host a mitochondrion was not only more energy, but better control of the toxic effects of oxygen. Normal gene expression supports this idea….

An Alternative Urban Green Carpet

[These excerpts are from an article by Maria Ignatieva and Marcus Hedblom in the 12 October 2018 issue of Science.]

      Lawns are a global phenomenon. They green the urban environment and provide amenable public and private open spaces. In Sweden, 52% of the urban green areas are lawns. In the United States, lawns cover 1.9% of the country’s terrestrial area, and lawn grass is the largest irrigated nonfood crop. Assuming lawns cover 23% of cities globally [on the basis of data from the United States and Sweden], they would occupy 0.15 million to 0.80 million km2 (depending on urban definitions)—that is, an area bigger than England and Spain combined, or about 1.4% of the global grassland area. Yet lawns exact environmental and economic costs, and given the impacts of climate change, it is time to consider new “lawnscapes” in urban planning as beneficial and sustainable alternatives.
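
      [Working the article’s numbers backward, my arithmetic rather than the authors’: the quoted range implies a total global urban area of roughly 0.65 million to 3.5 million km², which is why the estimate depends so strongly on how “urban” is defined.]

$$\frac{0.15\ \text{million km}^2}{0.23} \approx 0.65\ \text{million km}^2, \qquad \frac{0.80\ \text{million km}^2}{0.23} \approx 3.5\ \text{million km}^2$$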

      Although lawns are widespread, their properties have received less attention from the scientific community than urban trees or other types of green areas. Designers, urban planners, and politicians tend to highlight the positive ecosystem services provided by lawns. For example, lawns produce oxygen, sequester carbon, remove air pollution (although this has not been supported by good quantitative studies), reduce water runoff, increase water infiltration, mitigate soil erosion, and increase groundwater recharging. But perhaps the most important positive ecosystem service is the aesthetic and recreational benefits they provide. Aesthetics are a primary factor in modern urban planning and landscaping practice. For example, in developing countries located in arid zones, designers argue that lawns and irrigated turfs considerably enhance the quality of urban life.

      Recent heat waves and an increasing prevalence of droughts have raised economic and environmental concerns about the effects of urban lawns on climate change. These circumstances have encouraged researchers and the public to reconsider the green-carpet concept and assess the controversial aspects of lawns. It has been argued that lawns moderate urban temperatures, but only when compared with the absence of any vegetation. In arid regions of the United States, lawn irrigation accounts for 75% of the total annual household water consumption. In Perth, Australia, the annual volume of groundwater used for irrigating public greenspaces is 73 gigaliters (GL), with an additional 72 GL drawn from unlicensed backyard bores for private lawn irrigation. Another concern is the contamination of groundwater or runoff water due to overuse of fertilizers, herbicides, and pesticides. In 2012, the U.S. home and garden sector used 27 million kg of pesticides. The positive effect of soil carbon sequestration on the climate footprint of intensively managed lawns was found to be negated by greenhouse gas emissions from management operations such as mowing, irrigation, and fertilization. Gasoline-powered lawn mowers emit high amounts of carcinogenic exhaust pollutants. Moreover, replacing degraded open green areas with plastic lawns eliminates real nature from cities and arguably reduces overall sustainability, given that plastic lawns reduce habitats, decrease soil organisms, pollute runoff water, and may well have yet unknown negative consequences for human health through plastic particles.

      However, the most noticeable constraint to new alternative lawn thinking may be the contribution of lawns to urban aesthetic uniformity and urban ecological homogenization, where lawn plant communities become similar across different biophysical settings. From the vast variety of the grass genera, only a limited number of species are selected for lawns….

      The reason behind lawn uniformity may lie in its origin. Grass plots in ornamental gardens most likely appeared in medieval times and were probably obtained from the closest pastures and meadows. They were small and quite biodiverse (containing a large number of meadow herbaceous plants). In the 17th century, the role of lawns increased in the decorative grounds of geometrical gardens. For example, at the iconic French Versailles, short-cut green grass was perfectly blended with the ideology of the power of man over nature. The English landscape, or “natural,” style of the 18th century introduced an idealized version of urban nature: grazed grasslands with sparsely planted shade trees and the “pleasure ground” next to the mansion with a short, smoothly cut lawn. With the introduction of mowers and lawn-seed nurseries, the English pastoral vision flourished further in the public parks of the second half of the 19th century. In the 20th century, the modernistic prefabricated landscape was based on the same English picturesque model as the “natural” landscape, which was often mistaken for ecological quality. Consequently, monocultural and intensively managed lawnscapes dislodged the majority of native zonal plant communities in urban environments. At the beginning of the 21st century, perfect green lawns became part of the uptake by non-Western countries of the ideal Western lifestyle and culture….

      What is the state of alternative lawnscapes today? The most dramatic implementation of new lawn thinking is in Berlin, Germany, where spontaneous vegetation is accepted as a fundamental landscape design tool. In Gleisdreieck Park and Südgelände Nature Park (both established on abandoned railways), some areas were left to “go wild” and thus have been colonized by spontaneous vegetation. This has successfully challenged societal norms of conventional green lawns and has been accepted by local people. Recent research in the United Kingdom and Sweden showed that people desire to change monotonous lawns into more diverse environments. The grass-free lawn is one of the latest movements in both nations and is based on the use of low-growing native (in Sweden) or native and exotic (in the United Kingdom) herbaceous plants without the use of grasses. The goal is to create a dense, biodiverse, and low-maintenance mat, which can be used for recreation.

      What about future research on urban lawns? Of course, new plant species that can survive heavy tramping or long drought, for example, are desirable. Beyond plant research, studies on planting design must go beyond the theoretical. Specific geographical, cultural, and social conditions of each country must be factored into such alternative lawn planning research. In Australia, South Africa, and Arizona, alternative lawns might be based on xeriscape local plants. In Chinese cities, historically proved groundcover species could provide sustainability and a sense of place. Lawns could become a universal model for experimental sustainable design and urban environment monitoring.

      New alternative lawns represent a new human-made “wild” nature, which is opposed to the “obedient” nature of conventional lawns. Creating a new, ecological norm requires demonstration displays, public education, and the introduction of “compromised” design solutions. For example, a strip of conventional lawn can frame prairie or meadow and other “wild” vegetation and show a presence of culture in nature. The use of colors such as gray, silver, yellow, and even brown in lawnlike covers can add a feeling of real nature. Thus, one crucial challenge is how to accelerate people’s understanding of sustainable alternatives and acceptance of a new vegetation aesthetic in urban planning and design.

The Persistence of Polio

[These excerpts are from a book review by Pamela J. Hines in the 12 October 2018 issue of Science.]

      In the mid-20th century, several successful vaccines against polio were developed, eventually leading to an international initiative in 1988 to eradicate the poliovirus from the planet. After the success of smallpox eradication in 1980, the optimism of the initiative’s advocates and supporters seemed well placed. But some viruses are more unruly than others.

      Is poliovirus eradication an achievable and worthwhile goal or a misapplication of public health efforts…?

      Discovery of polio’s key vaccines was first stalled by scientific missteps and later driven by scientists in competition. Many people dedicated their lives to the vaccination programs in hope of eradicating this disease, and others lost theirs to violent resistance against those same campaigns. Evolution and natural selection drove resurgence of new poliovirus strains in regions thought to be cleared. Poverty and remote geography enabled pockets of viral persistence. And the mismatch between where the motivation came from and where the action needed to happen weakened the end game….

      The poliovirus spreads in environments where poor sanitation contaminates water supplies. Children who encounter the virus early in life tend to be less affected, whereas children who are protected from early virus exposure by good water and sanitation systems may be left susceptible to more severe disease when they encounter the virus later on. Thus, better water and sanitation systems may have paradoxically increased the risk of severe disease, offering a potential explanation for why the U.S. polio epidemic seemed to hit children in well-off suburban neighborhoods particularly hard. But regardless of what drove the disease’s spread, the result was a hue and cry to stop the polio outbreaks from countries that had the scientific expertise and money to go after the problem.

      Because of the efforts that ensued, large portions of the world are now free from the constant threat of polio. Not since 1979 has a case of polio originated in the United States, which is stunning progress given that some 58,000 cases of polio originated in the United States in 1952.

      The 1988 initiative to completely eradicate polio worldwide, however, has failed to replicate this success….

      With polio cleared from wealthier nations, most of the action now occurs in developing areas, where public health systems must deal with a variety of pressing challenges, of which polio is only one. The virus persists in impoverished regions, where inadequate water and sanitation systems facilitate its spread. Weak health care systems struggle to sustain even minimal programs against scourges of childhood, ranging from measles to malnutrition.

      The vaccines do work. But controlling the poliovirus has proven to be more difficult than expected.

A New Leaf

[These excerpts are from an article by Erik Stokstad in the 12 October 2018 issue of Science.]

      Mikael Fremont was up to his shoulders in rapeseed…to point out something nearly hidden: a mat of tattered, dead leaves covering the soil.

      Months earlier, Fremont had planted this vetch and clover along with the rapeseed. The two legumes had grown rapidly, preventing weeds from crowding out the emerging rapeseed and guarding it from hungry beetles and weevils. As a result, Fremont had cut by half the herbicide and insecticide he sprayed. The technique of mixing plant species in a single field had worked “perfectly,” he said.

      This innovative approach is just one of many practices, now spreading across France, that could help farmers achieve an elusive national goal. In 2008, the French government announced a dramatic shift in agricultural policy, calling for pesticide use to be slashed in half. And it wanted to hit that target in just a decade. No other country with as large and diverse an agricultural system had tried anything so ambitious….

      Since then, the French government has spent nearly half a billion euros on implementing the plan, called Ecophyto. It created a network of thousands of farms that test methods of reducing chemical use, improved national surveillance of pests and plant diseases, and funded research on technologies and techniques that reduce pesticide use. It has imposed taxes on farm chemicals in a bid to decrease sales, and even banned numerous pesticides, infuriating many farmers.

      The effort has helped quench demand on some farms. Overall, however, Ecophyto has failed miserably. Instead of declining, national pesticide use has increased by 12%, largely mirroring a rise in farm production….

      There is also optimism. Despite Ecophyto’s failure, it showed farmers have powerful options, such as mixing crops, planting new varieties, and tapping data analysis systems that help identify the best times to spray. With the right incentives and support, those tools might make a bigger difference this time around. And the fact that France isn't backing away from its ambitious goal inspires many observers….

      It’s not only farmers who will have to adjust if France is to meet its ambitious goals. Reducing the cost of food production to the environment and public health will likely increase the cost to consumers and taxpayers….

Teaching Tolerance

[These excerpts are from an article by Josh Grossberg in the Autumn 2018 issue of USC Trojan Family.]

      The 2018 Winter Olympics were in full swing and the students in Ivy Schamis’ high school classroom were upbeat. The teacher was guiding the students in her Holocaust history class through a discussion about the 1936 Summer Olympics. She described how a German man had given famed black track athlete Jesse Owens a pair of specially designed running shoes despite the risk of repercussions from the Nazi government. “Does anybody know who that man was?” Schamis asked. A young man raised his hand. For the first time Schamis could remember, a student knew the answer. The shoemaker was Adi Dassler, the founder of Adidas, 17-year-old Nicholas Dworet said proudly.

      That’s when the shooting started.

      Gunfire rang out, one bullet after another. Someone was firing into the classroom. “We just flew out of our seats and tried to take cover,” Schamis says, “but there was nowhere to hide.”

      Two minutes later, Dworet and another student, Helena Ramsay, also 17, were killed by gunfire. They were among the 17 people slain on Feb. 14, 2018, in the tragic shooting at Marjory Stoneman Douglas High School in Parkland, Florida….

      It may seem ironic that a classroom dedicated to embracing cross-cultural understanding would be visited by such senseless violence. But it was that kind of violence that prompted Schamis to delve into the topic with her students in the first place.

      “The lessons of the Holocaust came right into Room 1214 that day,” Schamis says….

      Can education truly instill decency and respect, and do the actions of individuals matter in fighting hate? Vigorous evaluations and studies conducted by the institute provide reason for hope. USC Shoah Foundation research has shown that 78 percent of students recognized that one person can make a difference if they see an example of stereotyping against a group of people. Another 78 percent believe it’s important to speak up against stereotyping when they see it around them.

      “If we have racist and anti-Semitic ideologies coming at us, then we can develop in students the skills, knowledge, abilities and capacities to counter those ideologies, make better decisions and be more active in opposing those ideologies as citizens,” Wiedeman says.

      Wiedeman and USC Shoah Foundation aren’t alone in believing that empathy, openness and tolerance can be taught….

      Florida high school teacher Ivy Schamis says that despite the shooting in her classroom, she remains undaunted in her quest to spread a belief in shared human values. She’s more eager than ever to teach students the lessons of the Holocaust.

      “Hate is never OK. Tell everybody,” she says. “We have to be there for each other.”

The Good Fight

[These excerpts are from a letter to the editor by Michael P. Jansen in the October 2018 issue of Chem 13 News.]

      …when people discover I’m a teacher (a chemistry teacher! AHHH!!!), it’s pretty much game over….

      I cannot take people’s illusion that chemistry is memorization. Or that school is about getting good marks — so a student can get accepted to some university…in order to — you guessed it — get more good marks — so he can be a doctor (like his aunt in England).

      I hate that parents have their kids take a summer course “to get it out of the way”. I hate that students and parents and (I am sorry to say) many teachers and some “experts” believe that computers are the Holy Grail in this business.

      Let’s get real. School is — and always was and always will be — about learning. I don’t understand why this straightforward message is so difficult. We need to promote learning, while de-emphasizing marks. This is a war. We need to fight — student by student, class by class, year over year, parent interview by parent interview. It won’t be easy; there will be casualties, like my sanity — or your sanity.

      Let’s stand together and fight the good fight. For chemistry education. For chemistry learning.

Japan Needs Gender Equality

[These excerpts are from an editorial by Yumiko Murakami and Francesca Borgonovi in the 12 October 2018 issue of Science.]

      Last month, Tokyo Medical University (TMU) announced Yukiko Hayashi as its first female president. This comes on the heels of the discovery that the institution had manipulated entrance exam scores for many years to curb female enrollment. Hayashi’s appointment may be an attempt by TMU to restore its reputation, but the scandal should be a wake-up call for Japanese society to ensure that men and women have equal opportunities to succeed….

      Yet, Japan boasts one of the most sophisticated educational systems in the world. Japanese 15-year-old students are among the highest performers in mathematics and science according to the OECD’s Programme for International Student Assessment (PISA). However, a gender gap is noticeable, especially among top-tier students. The latest PISA assessment in 2015 indicates that Japanese boys outperform girls in mathematics by ~15% of a standard deviation on the achievement scale. Among the top 10% of students in Japan, there is an even larger gender gap in both mathematics and science. But by international standards, Japanese girls perform at very high levels. For example, the highest-achieving girls in Japan performed significantly above the highest-achieving boys from most of the other 70 education systems assessed by PISA in both mathematics and science….

      Fostering confidence in female students, trainees, and professionals would be a highly effective way to close the gender gap in Japan. Gender mainstreaming in various segments of Japanese society is crucial to address unconscious biases. A better gender balance in professional occupations, especially in STEM fields, would boost self-efficacy in female students, and TMU’s decision to appoint a female president is a major step toward promoting more female role models in medicine. Medical schools should work with hospitals to improve working conditions so that female physicians are not hampered in their careers by life events such as pregnancies and child-rearing.

      Halving Japan’s gender gap in the labor force by 2025 could add almost 4 percentage points to projected growth in gross domestic product over the period from 2013 to 2025. Japan clearly needs to embrace women to bolster its economy. And having more highly educated women in medical professions is particularly beneficial for a country with a rapidly aging population. Gender equality will be the key for better lives for all Japanese.

Father of ‘the God Particle’ Dies

[This brief news article is in the 12 October 2018 issue of Science.]

      Leon Lederman, a Nobel Prize-winning physicist and passionate advocate for science education, died last week at age 96. He and two colleagues shared the physics Nobel in 1988 for their discovery 26 years earlier that elusive particles called neutrinos come in more than one type. Lederman became identified with the neologism in the title of his 1993 book, The God Particle: If the Universe Is the Answer, What Is the Question? He was referring to the Higgs boson, the last missing piece of physicists’ standard model of fundamental particles and forces, which was finally discovered in 2012. Some physicists scorned the title. Lederman puckishly claimed his publisher balked at Goddamn Particle, which would have conveyed how hard physicists were struggling to detect the Higgs.

Mythical Androids and Ancient Automatons

[These excerpts are from a book review by Sarah Olson in the 5 October 2018 issue of Science.]

      Long before the advent of modern robots and artificial intelligence (AI), automated technology existed in the storytelling and imaginations of ancient societies. Hephaestus, blacksmith of the mythical Greek gods, fashioned mechanical serving maids from gold and endowed them with learning, reason, and skill in Homer’s Iliad. Designed to anticipate their master’s requests and act on them without instruction, the Golden Maidens share similarities with modern machine learning, which allows AI to learn from experience without being explicitly programmed.

      The possibility of AI servants continues to tantalize the modern imagination, resulting in household automatons such as the Roomba robot vacuum and Amazon’s virtual assistant, Alexa. One could imagine a future iteration of Alexa as a fully automatic robot, perhaps washing our dishes or reminding us to pick up the kids after school. In her new book, Gods and Robots, Adrienne Mayor draws comparisons between mythical androids and ancient robots and the AI of today….

      Through detailed storytelling and careful analysis of popular myths, Mayor urges readers to consider lessons learned from these stories as we set about creating a new world with AI. Like Pandora, an automaton created by Hephaestus who opened a box and released evils into the world, AI could bring about unforeseen problems. Here, Mayor cites Microsoft’s 2016 experiment with the Twitter chatbot “Tay,” a neural network programmed to converse with Twitter users without requiring supervision. Only hours after going live, Tay succumbed to a group of followers who conspired to turn the chatbot into an internet troll. Microsoft’s iteration the following year suffered a similar fate.

      Despite her extensive knowledge of ancient mythology, Mayor does little to demonstrate an understanding of modern AI, neural networks, and machine learning; the chatbots are among only a handful of examples of modern technology she explores.

      Instead, Mayor focuses on her own area of expertise: analyzing classical mythology. She recounts, for example, the story of the bronze robot Talos, an animated statue tasked with defending the Greek island of Crete from pirates….

      When Pandora opens her box of evils, she releases hope by accident—inadvertently helping humanity learn to live in a world now corrupted by evil. Like this story, Gods and Robots is cautionary but optimistic. While warning of the risks that accompany the irresponsible deployment of technology, Mayor reassures readers that AI could indeed bring about many of the wonderful things that our ancestors imagined.

Junk Food, Junk Science?

[These excerpts are from a book review by Cyan James in the 5 October 2018 issue of Science.]

      …In her latest book, Unsavory Truth, [Marion] Nestle levels a withering fusillade of criticism against food and beverage companies that use questionable science and marketing to push their own agendas about what should end up on our dinner tables.

      In her comprehensive review of companies’ nutrition research practices, Nestle (whose name is pronounced “Nes-sul” and who is not affiliated with the Swiss food company) reveals a passion for all things food, combined with pronounced disdain for the systematic way food companies influence the scientists and research behind the contents of our pantries. Her book is an autopsy of modern food science and advertising, pulling the sheet back on scores of suspect practices, as well as a chronicle of Nestle’s own brushes with food company courtship.

      Is it shocking that many food companies do whatever they can in the name of fatter profits? Maybe, but it's old hat for Nestle, who has spent five decades honing her expertise and is a leading scholar in the field of nutrition science. In this book, she details nearly every questionable food company tactic in the playbook, from companies that fund their own food science research centers and funnel media attention to nondietary explanations for obesity, to those that cherry-pick data or fund professional conferences as a plea for tacit approval.

      Even companies that hawk “benign” foods such as blueberries, pomegranate juice, and nuts come under the author’s strict scrutiny because, as she reminds readers, “Foods are not drugs. To ask whether one single food has special health benefits defies common sense.”

      Instead, Nestle urges eaters to look behind the claims to discover who funds food-science studies, influences governmental regulations, advises policy-makers, and potentially compromises researchers. Food fads, we learn, can spring from a few findings lifted out of context or interpreted with willful optimism. Companies, after all, can hold different standards for conducting and interpreting research than those of independent academic institutions or scientific organizations and can be driven by much different motives.

      Nestle wields a particularly sharp pen against junk-food purveyors, soda companies, and others who work hard to portray excess sugar as innocuous or who downplay the adverse effects their products can have. These companies adopt tactics such as founding research centers favorable to their bottom line, diverting consumer attention away from diet to focus instead on the role of exercise in health, or trying to win over individual researchers to favorably represent their products. Nestle calls foul on these attempts to portray junk foods as relatively harmless, even knocking the “health halo” shine from dark chocolates and other candies masquerading as responsible health foods.

      Not content to point to the many thorny problems lurking behind food company labels and glitzy sponsored meetings, however, Nestle offers a constructive set of suggestions for how nutrition scientists can navigate potential conflicts of interest.

      Consumers, too, bear a responsibility for promoting eating habits that steer clear of dubious advertising. Nestle advocates adhering to a few simple guidelines: “[E]at your veggies, choose relatively unprocessed foods, keep junk foods to a minimum, and watch out for excessive calories!” We have the power of our votes and our forks, she reminds us, and can use both to insist that food compa-nies help us eat well. Nestle’s determination to go to bat for public health shines through, illuminating even her occasional sections of workaday prose or dense details.

      Nestle marshals a convincing number of observations on modern food research practices while energetically delineating how food companies’ clout can threaten the integrity of the research performed on their products. There is indeed something rotten in the state of dietary science, but books like this show us that we consumers also hold a great deal of power.

Renewable Energy for Puerto Rico

[These excerpts are from an editorial by Arturo Massol-Deyá, Jennie C. Stephens and Jorge L. Colón in the 5 October 2018 issue of Science.]

      Puerto Rico is not prepared for another hurricane. A year ago, Hurricane Maria obliterated the island’s electric grid, leading to the longest power outage in U.S. history. This disrupted medical care for thousands and contributed to an estimated 2975 deaths. The hurricane caused over $90 billion in damage for an island already in economic crisis. Although authorities claim that power was restored completely, some residents still lack electricity. Despite recovery efforts, the continued vulnerability of the energy infrastructure threatens Puerto Rico’s future. But disruptions create possibilities for change. Hurricane Maria brought an opportunity to move away from a fossil fuel-dominant system and establish instead a decentralized system that generates energy with clean and renewable sources. This is the path that will bring resilience to Puerto Rico.

      Puerto Rico is representative of the Caribbean islands that rely heavily on fossil fuels for electric power; 98% of its electricity comes from imported fossil fuels (oil, natural gas, and coal), whereas only 2% comes from renewable sources (solar, wind, or hydroelectric)….This makes the island's centralized electrical grid vulnerable to hurricanes that are predicted to increase in severity because of climate change.

      In Puerto Rico and the rest of the Caribbean, where sun, wind, water, and biomass are abundant sources of renewable energy, there is no need to rely on fossil fuel technology. Unfortunately, the government of Puerto Rico and the U.S. Federal Emergency Management Agency have been making decisions about the local power authority that are restoring the energy system to what it was before Hurricane Maria hit, perpetuating fossil fuel reliance….

      At this juncture, when the opportunity to build a sustainable and resilient electrical system presents itself, moving away from dependency on imported fossil fuels should be the guiding vision. Puerto Rico must embrace the renewable endogenous sources that abound on the island and build robust microgrids powered by solar and wind, install hybrid systems (such as biomass biodigesters), and create intelligent networks that can increase the resilience of the island. The Puerto Rican government and U.S. Congress should use Hurricane Maria as a turning point for pushing Puerto Rico toward using 100% renewable energy rather than a platform to plant generators across the island….

Boys Will Be Superintendents: School Leadership as a Gendered Profession

[These excerpts are from an article by Robert Maranto, Kristen Carroll, Albert Cheng and Manuel P. Teodoro in the October 2018 issue of Phi Delta Kappan.]

      In 19th-century America, both teaching and school leadership were mainly feminine pursuits, in part because most people considered women better at nurturing children, but also because local school boards could get away with paying women less. Eventually, however, progressives sought to professionalize educational leadership with larger, bureaucratic schools led by credentialed principals and superintendents….At a time when professional meant male, this bureaucratization and professionalization of schools meant replacing female principals and superintendents with men.

      As Kate Rousmaniere…details in her social history of the principalship, graduate programs providing educational leadership credentials served as an increasingly common career pathway for men, particularly veterans who returned from World War II and attended graduate school on the GI Bill. In addition, the emerging field of athletic coaching not only attracted many men to jobs in education but also gave them clear routes to principal and superintendent posts….

      Thus, by the 1970s, researchers found that nearly 80 percent of school superintendents had coached athletic teams earlier in their careers….

      As a result, explains Rousmaniere, the numbers of women in school and district leadership declined through most of the 20th century. For example, the percentage of elementary school principal posts held by women fell from 55% in 1928 to 20% in 1973. Since high school enrollments were rare in the early part of the century, and secondary leadership positions were already seen as relatively prestigious, the number of women in high school leadership was always low, even in the 1920s, but by mid-century it fell to just 1%. By the 1960s, Rousmaniere notes, “It seemed to be the natural order of things that women taught and men managed….”

      Today, the outlook for women in leadership is significantly better, but there's still a serious imbalance. For example, using data on more than 7,500 public school principals from the U.S. Department of Education 2011-12 Schools and Staffing Survey and additional published sources, we recently found that while 90% of elementary teachers are women, only 66% of elementary principals are women. In secondary schools, meanwhile, women make up 63% of teachers but just 48% of principals.

      Before becoming principals, men and women are about equally likely to have served as department heads, vice principals, and club advisers. In two key respects, however, their paths to the principalship differ. Women are twice as likely as men to have prior service as curricular specialists (31.3% of women and 16% of men), and men are three times as likely (52.8% of men and 16.5% of women) to have had prior experience as athletic coaches….Overall, men are proportionately more likely to gain promotion to principal, and to do so more quickly than women. In traditional public schools, male principals have taught a mean of only 10.7 years before becoming principal compared to 13.2 years for women.

      The imbalance continues beyond the principalship, too. Nationwide, for example, just 24.1% of superintendents are women….

True Story

[These excerpts are from an article by Steve Mirsky in the October 2018 issue of Scientific American.]

      …The Washington Post tallied 4,229 “false or misleading claims” by Trump in his first 558 days in office…

      Here’s an example of my conundrum. Early this year, Trump rejected the idea of climate change: “The ice caps were going to melt, they were going to be gone by now but now they’re setting records, so okay, they’re at a record level.” But a researcher at the National Snow and Ice Data Center said that polar ice was at “a record low in the Arctic (around the North Pole) right now and near record low in the Antarctic (around the South Pole).” The Trump claim and the response were both published by the Pulitzer Prize-winning organization PolitiFact….

      I was reading a book. The book is called The Death of Truth. The writer’s name is Michiko Kakutani. She wrote that the Trump administration ordered the Centers for Disease Control and Prevention to avoid using the terms “science-based” and “evidence-based.” She says that in another book called 1984 there’s a society that does not even have the word “science” because, as she quoted from that other book, “‘the empirical method of thought, on which all the scientific achievements of the past were founded,’ represents an objective reality that threatens the power of Big Brother to determine what truth is.”…

      Maria Konnikova is a science journalist. She also has a doctorate in psychology….She wrote an article for a place called Politico entitled “Trump’s Lies vs. Your Brain.” She wrote, “If he has a particular untruth he wants to propagate ... he simply states it, over and over. As it turns out, sheer repetition of the same lie can eventually mark it as true in our heads.” She also wrote that because of how our brains work, “Repetition of any kind—even to refute the statement in question—only serves to solidify it.”

Clicks, Lies and Videotapes

[These excerpts are from an article by Brooke Borel in the October 2018 issue of Scientific American.]

      The consequences for public knowledge and discourse could be profound. Imagine, for instance, the impact on the upcoming midterm elections if a fake video smeared a politician during a tight race. Or attacked a CEO the night before a public offering. A group could stage a terrorist attack and fool news outlets into covering it, sparking knee-jerk retribution. Even if a viral video is later proved to be fake, will the public still believe it was true anyway? And perhaps most troubling: What if the very idea of pervasive fakes makes us stop believing much of what we see and hear—including the stuff that is real?

      …The path to fake video traces back to the 1960s, when computer-generated imagery was first conceived. In the 1980s these special effects went mainstream, and ever since, movie lovers have watched the technology evolve from science-fiction flicks to Forrest Gump shaking hands with John F. Kennedy in 1994 to the revival of Peter Cushing and Carrie Fisher in Rogue One….

      Experts have long worried that computer-enabled editing would ruin reality. Back in 2000, an article in MIT Technology Review about products such as Video Rewrite warned that “seeing is no longer believing” and that an image “on the evening news could well be a fake—a fabrication of fast new video-manipulation technology.” Eighteen years later fake videos don't seem to be flooding news shows. For one thing, it is still hard to produce a really good one….

      The way we consume information, however, has changed. Today only about half of American adults watch the news on television, whereas two thirds get at least some news via social media, according to the Pew Research Center. The Internet has allowed for a proliferation of media outlets that cater to niche audiences—including hyperpartisan Web sites that intentionally stoke anger, unimpeded by traditional journalistic standards. The Internet rewards viral content that we are able to share faster than ever before….And the glitches in fake video are less discernible on a tiny mobile screen than on a living-room TV.

      The question now is what will happen if a deepfake with significant social or political implications goes viral. With such a new, barely studied frontier, the short answer is that we do not know….

      The science on written fake news is limited. But some research suggests that seeing false information just once is sufficient to make it seem plausible later on….

End of the Megafauna

[These excerpts are from a brief article in the Fall 2018 issue of Rotunda, put out by the American Museum of Natural History.]

      The American Museum of Natural History is world famous for its vertebrate paleontology halls, where the story of vertebrate life is traced from its beginnings to the near present, as told by the most direct form of evidence we have: the fossils themselves….

      At one end of the wing are a Columbian mammoth and an American mastodon. Both are very definitely proboscidean, or elephantlike, in body form, although their last common ancestor lived about 25 million years ago. On the North American mainland, populations of mammoths and mastodons were still living as recently as 12,000 years ago; all were gone 1,000 or so years later. A couple of island-bound groups of woolly mammoths struggled on, but these too had disappeared by 4,200 years ago. Asian and African elephants persisted. These magnificent beasts didn’t. Why?

      Elsewhere in the hall are members of Xenarthra, today an almost exclusively South American group that includes living armadillos, tree sloths, and anteaters. The largest of the living xenarthrans is the giant anteater (Myrmecophaga tridactyla), but as late as 12,000 to 13,000 years ago there were several much larger xenarthran species in both North and South America that may have weighed as much as 2,000-4,000 kg. Among these was the gigantic Lestodon, whose closest living relatives, the two- and three-toed tree sloths Choloepus and Bradypus, weigh no more than 5 kg. They made it, Lestodon didn’t. Why?

      Many other Quaternary species prospered in their native environments for hundreds of thousands of years or more without suffering any imperiling losses. But beginning about 50,000 years ago, something started happening to large animals. Species sometimes disappeared singly, at other times in droves. Size must have mattered, because their smaller close relatives mostly weathered the extinction storm and are still with us.

      So why did these megafaunal extinctions occur?

      A short but honest reply would be that there is no satisfactory answer, not yet. The debate continues as fresh leads are traced and dead ends abandoned or refashioned in order to accommodate new evidence. It’s a great time to be a Quaternary paleontologist!

The Marvelous Mola

[These excerpts are from a brief article in the Fall 2018 issue of Rotunda, put out by the American Museum of Natural History.]

      Mola may not be a household name, but the various species of this genus of sunfish are found in temperate and tropical seas all over the world and can reach nearly 11 feet (3.5 meters) in length and weigh up to 5,070 lbs (2,300 kg).

      To live large, these marine giants grow fast: in captivity, young sunfish can pack on more than 800 pounds in just over a year….

      Size gives the Mola mola several advantages. Females produce enormous numbers of eggs: one 4-foot-long (1.2-meter) female was estimated to be carrying 300 million eggs. The Mola mola also has a broad thermal range, from 36°F to 86°F, allowing it to dive more than 3,000 feet deep (914 meters). There is slim preliminary evidence that the ocean sunfish may be able to tolerate low oxygen levels, which would be a boon, as such conditions are among a growing list of concerns in today’s oceans, along with human pollution and global sea temperature rise.

      As adults, their appetite for jellyfish means ocean sunfish are helping combat another modern marine threat. “As we overfish the ocean, in some regions, jellies can move in to fill those open niches,” says marine biologist Tierney Thys, who has tracked ocean sunfishes all over the globe. “As these local jelly populations increase, we need to keep our populations of jelly eaters–like the Mola mola–intact.”

Abortion Facts

[These excerpts are from an article by Michael Shermer in the September 2018 issue of Scientific American.]

      In May of this year the pro-life/pro-choice controversy leapt back into headlines when Ireland overwhelmingly approved a referendum to end its constitutional ban on abortion. Around the same time, the Trump administration proposed that Title X federal funding be withheld from abortion clinics as a tactic to reduce the practice, a strategy similar to that of Texas and other states to shut down clinics by burying them in an avalanche of regulations, which the U.S. Supreme Court struck down in 2016 as an undue burden on women for a constitutionally guaranteed right. If the goal is to attenuate abortions, a better strategy is to reduce unwanted pregnancies. Two methods have been proposed: abstinence and birth control.

      Abstinence would obviate abortions just as starvation would forestall obesity. There is a reason no one has proposed chastity as a solution to overpopulation. Sexual asceticism doesn’t work, because physical desire is nearly as fundamental as food to our survival and flourishing. A 2008 study…found that among American adolescents ages 15 to 19, “abstinence-only education did not reduce the likelihood of engaging in vaginal intercourse” and that “adolescents who received comprehensive sex education had a lower risk of pregnancy than adolescents who received abstinence-only or no sex education.”

      …When women are educated and have access to birth-control technologies, pregnancies and, eventually, abortions decrease. A 2003 study…concluded that abortion rates declined as contraceptive use increased in seven countries (Kazakhstan, Kyrgyzstan, Uzbekistan, Bulgaria, Turkey, Tunisia and Switzerland). In six other nations (Cuba, Denmark, the Netherlands, Singapore, South Korea and the U.S.), contraceptive use and abortion rates rose simultaneously, but overall levels of fertility were falling during the period studied. After fertility levels stabilized, contraceptive use continued to increase, and abortion rates fell.

      Something similar happened in Turkey between 1988 and 1998, when abortion rates declined by almost half as unreliable forms of birth control (for one, the rhythm method) were replaced by more modern technologies (for example, condoms)….

      To be fair, the multivariable mesh of correlations in all these studies makes direct causal links difficult for social scientists to untangle. But as I read the research, when women have limited sex education and no access to contraception, they are more likely to get pregnant, which leads to higher abortion rates. When women are educated about and have access to effective contraception, as well as legal and medically safe abortions, they initially use both strategies to control family size, after which contraception alone is often all that is needed and abortion rates decline. Admittedly, deeply divisive moral issues are involved. Abortion does end a human life, so it should not be done without grave consideration for what is at stake, as we do with capital punishment and war. Likewise, the recognition of equal rights, especially reproductive rights, should be acknowledged by all liberty-loving people. But perhaps progress for all human life could be more readily realized if we were to treat abortion as a problem to be solved rather than a moral issue over which to condemn others. As gratifying as the emotion of moral outrage is, it does little to bend the moral arc toward justice.

Alone in the Milky Way

[This excerpt is from an article by John Gribbin in the September 2018 issue of Scientific American.]

      DNA evidence pinpoints two evolutionary bottlenecks in particular. A little more than 150,000 years ago the human population was reduced to no more than a few thousand—perhaps only a few hundred—breeding pairs. And about 70,000 years ago the entire human population fell to about 1,000. Although this interpretation of the evidence has been questioned by some researchers, if it is correct, all the billions of people now on Earth are descended from this group, which was so small that a species diminished to such numbers today would likely be regarded as endangered.

      That our species survived—and even flourished, eventually growing to number more than seven billion and advancing into a technological society—is amazing. This outcome seems far from assured.

      As we put everything together, what can we say? Is life likely to exist elsewhere in the galaxy? Almost certainly yes, given the speed with which it appeared on Earth. Is another technological civilization likely to exist today? Almost certainly no, given the chain of circumstances that led to our existence. These considerations suggest we are unique not just on our planet but in the whole Milky Way. And if our planet is so special, it becomes all the more important to preserve this unique world for ourselves, our descendants and the many creatures that call Earth home.

Why We Fight

[These excerpts are from an article by R. Brian Ferguson in the September 2018 issue of Scientific American.]

      Do people, or perhaps just males, have an evolved predisposition to kill members of other groups? Not just a capacity to kill but an innate propensity to take up arms, tilting us toward collective violence? The word “collective” is key. People fight and kill for personal reasons, but homicide is not war. War is social, with groups organized to kill people from other groups. Today controversy over the historical roots of warfare revolves around two polar positions. In one, war is an evolved propensity to eliminate any potential competitors. In this scenario, humans all the way back to our common ancestors with chimpanzees have always made war. The other position holds that armed conflict has only emerged over recent millennia, as changing social conditions provided the motivation and organization to collectively kill. The two sides separate into what the late anthropologist Keith Otterbein called hawks and doves….

      If war expresses an inborn tendency, then we should expect to find evidence of war in small-scale societies throughout the prehistoric record. The hawks claim that we have indeed found such evidence….

      …If wars are natural eruptions of instinctive hate, why look for other answers? If human nature leans toward collective killing of outsiders, how long can we avoid it?

      The anthropologists and archaeologists in the dove camp challenge this view. Humans, they argue, have an obvious capacity to engage in warfare, but their brains are not hardwired to identify and kill outsiders involved in collective conflicts. Lethal group attacks, according to these arguments, emerged only when hunter-gatherer societies grew in size and complexity and later with the birth of agriculture. Archaeology, supplemented by observations of contemporary hunter-gatherer cultures, allows us to identify the times and, to some degree, the social circumstances that led to the origins and intensification of warfare.

      In the search for the origins of war, archaeologists look for four kinds of evidence. The artwork on cave walls is exhibit one. Paleolithic cave paintings from Grottes de Cougnac, Pech Merle and Cosquer in France dating back approximately 25,000 years show what some scholars perceive to be spears penetrating people, suggesting that people were waging war as early as the late Paleolithic period. But this interpretation is contested. Other scientists point out that some of the incomplete figures in those cave paintings have tails, and they argue that the bent or wavy lines that intersect with them more likely represent forces of shamanic power, not spears. (In contrast, wall paintings on the eastern Iberian Peninsula, probably made by settled agriculturalists thousands of years later, clearly show battles and executions.)

      Weapons are also evidence of war, but these artifacts may not be what they seem…

      Beyond art and weapons, archaeologists look to settlement remains for clues. People who fear attack usually take precautions….

      Skeletal remains would seem ideal for determining when war began, but even these require careful assessment. Only one of three or four projectile wounds leaves a mark on bone. Shaped points made of stone or bone buried with a corpse are sometimes ceremonial, sometimes the cause of death. Unhealed wounds to a single buried corpse could be the result of an accident, an execution or a homicide. Indeed, homicide may have been fairly common in the prehistoric world—but homicide is not war. And not all fights were lethal….

      The global archaeological evidence, then, is often ambiguous and difficult to interpret. Often different clues must be pieced together to produce a suspicion or probability of war….

      The preconditions that make war more likely include a shift to a more sedentary existence, a growing regional population, a concentration of valuable resources such as livestock, increasing social complexity and hierarchy, trade in high-value goods, and the establishment of group boundaries and collective identities. These conditions are sometimes combined with severe environmental changes….

      The preconditions for war are only part of the story, however, and by themselves, they may not suffice to predict outbreaks of collective conflicts. In the Southern Levant, for instance, those preconditions existed for thousands of years without evidence of war….

      People are people. They fight and sometimes kill. Humans have always had a capacity to make war, if conditions and culture so dictate. But those conditions and the warlike cultures they generate became common only over the past 10,000 years—and, in most places, much more recently than that. The high level of killing often reported in history, ethnography or later archaeology is contradicted in the earliest archaeological findings around the globe. The most ancient bones and artifacts are consistent with the title of Margaret Mead’s 1940 article: “Warfare Is Only an Invention—Not a Biological Necessity.”

Last Hominin Standing

[This excerpt is from an article by Kate Wong in the September 2018 issue of Scientific American.]

      …It now looks as though H. sapiens originated far earlier than previously thought, possibly in locations across Africa instead of a single region, and that some of its distinguishing traits—including aspects of the brain—evolved piecemeal. Moreover, it has become abundantly clear that H. sapiens actually did mingle with the other human species it encountered and that interbreeding with them may have been a crucial factor in our success. Together these findings paint a far more complex picture of our origins than many researchers had envisioned—one that privileges the role of dumb luck over destiny in the success of our kind.

      Debate about the origin of our species has traditionally focused on two competing models. On one side was the Recent African Origin hypothesis…which argues that H. sapiens arose in either eastern or southern Africa within the past 200,000 years and, because of its inherent superiority, subsequently replaced archaic hominin species around the globe without interbreeding with them to any significant degree. On the other was the Multiregional Evolution model…which holds that modern H. sapiens evolved from Neandertals and other archaic human populations throughout the Old World, which were connected through migration and mating. In this view, H. sapiens has far deeper roots, reaching back nearly two million years.

      By the early 2000s the Recent African Origin model had a wealth of evidence in its favor. Analyses of the DNA of living people indicated that our species originated no more than 200,000 years ago. The earliest known fossils attributed to our species came from two sites in Ethiopia, Omo and Herto, dated to around 195,000 and 160,000 years ago, respectively. And sequences of mitochondrial DNA (the tiny loop of genetic material found in the cell’s power plants, which is different from the DNA contained in the cell’s nucleus) recovered from Neandertal fossils were distinct from the mitochondrial DNA of people today—exactly as one would expect if H. sapiens replaced archaic human species without mating with them.

      Not all of the evidence fit with this tidy story, however. Many archaeologists think that the start of a cultural phase known as the Middle Stone Age (MSA) heralded the emergence of people who were beginning to think like us. Prior to this technological shift, archaic human species throughout the Old World made pretty much the same kinds of stone tools fashioned in the so-called Acheulean style. Acheulean technology centered on the production of hefty hand axes that were made by taking a chunk of stone and chipping away at it until it had the desired shape. With the onset of the MSA, our ancestors adopted a new approach to toolmaking, inverting the knapping process to focus on the small, sharp flakes they detached from the core—a more efficient use of raw material that required sophisticated planning. And they began attaching these sharp flakes to handles to create spears and other projectile weapons. Moreover, some people who made MSA tools also made items associated with symbolic behavior, including shell beads for jewelry and pigment for painting. A reliance on symbolic behavior, including language, is thought to be one of the hallmarks of the modern mind.

      The problem was that the earliest dates for the MSA were more than 250,000 years ago—far older than those for the earliest H. sapiens fossils at less than 200,000 years ago. Did another human species invent the MSA, or did H. sapiens actually evolve far earlier than the fossils seemed to indicate? In 2010 another wrinkle emerged. Geneticists announced that they had recovered nuclear DNA from Neandertal fossils and sequenced it. Nuclear DNA makes up the bulk of our genetic material. Comparison of the Neandertal nuclear DNA with that of living people revealed that non-African people today carry DNA from Neandertals, showing that H. sapiens and Neandertals did interbreed after all, at least on occasion.

      Subsequent ancient genome studies confirmed that Neandertals contributed to the modern human gene pool, as did other archaic humans. Further, contrary to the notion that H. sapiens originated within the past 200,000 years, the ancient DNA suggested that Neandertals and H. sapiens diverged from their common ancestor considerably earlier than that, perhaps upward of half a million years ago. If so, H. sapiens might have originated more than twice as long ago as the fossil record indicated.

Talking through Time

[This excerpt is from an article by Christine Kenneally in the September 2018 issue of Scientific American.]

      That language is uniquely human has been assumed for a long time. But trying to work out exactly how and why that is the case has been weirdly taboo. In the 1860s the Societe de Linguistique de Paris banned discussion about the evolution of language, and the Philological Society of London banned it in the 1870s. They may have wanted to clamp down on unscientific speculation, or perhaps it was a political move—either way, more than a century’s worth of nervousness about the subject followed. Noam Chomsky, the extraordinarily influential linguist at the Massachusetts Institute of Technology, was, for decades, rather famously disinterested in language evolution, and his attitude had a chilling effect on the field. Attending an undergraduate linguistics class in Melbourne, Australia, in the early 1990s, I asked my lecturer how language evolved. I was told that linguists did not ask the question, because it was not really possible to answer it.

      Luckily, just a few years later, scholars from different disciplines began to grapple with the question in earnest. The early days of serious research in language evolution unearthed a perplexing paradox: Language is plainly, obviously, uniquely human. It consists of wildly complicated interconnecting sets of rules for combining sounds and words and sentences to create meaning. If other animals had a system that was the same, we would likely recognize it. The problem is that after looking for a considerable amount of time and with a wide range of methodological approaches, we cannot seem to find anything unique in ourselves—either in the human genome or in the human brain—that explains language.

      To be sure, we have found biological features that are both unique to humans and important for language. For example, humans are the only primates to have voluntary control of their larynx: it puts us at risk of choking, but it allows us to articulate speech. But the equipment that seems to be designed for language never fully explains its enormous complexity and utility.

Space for Nature

[These excerpts are from an editorial by Jonathan Baillie and Ya-Ping Zhang in the September 14, 2018, issue of Science.]

      How much of the planet should we leave for other forms of life? This is a question humanity must now grapple with. The global human population is 7.6 billion and anticipated to increase to around 10 billion by the middle of the century. Consumption is also projected to increase, with demands for food and water more than doubling by 2050. Simply put, there is finite space and energy on the planet, and we must decide how much of it we’re willing to share. This question requires deep consideration as it will determine the fate of millions of species and the health and well-being of future generations.

      About 20% of the world’s vertebrates and plants are threatened with extinction, mostly because humans have degraded or converted more than half of the terrestrial natural habitat. Moreover, we are harnessing biomass from other forms of life and converting it into crops and animals that are more useful to us. Livestock now constitute 60% of the mammalian biomass and humans another 36%. Only 4% remains for the more than 5000 species of wild mammals. This ratio is not surprising: Wild vertebrate populations have declined by more than 50% since 1970. Both from an ethical and a utilitarian viewpoint, this depletion of natural ecosystems is extremely troubling.

      Most scientific estimates of the amount of space needed to safeguard biodiversity and preserve ecosystem benefits suggest that 25 to 75% of regions or major ecosystems must be protected. But estimating how much space is required to protect current levels of biodiversity and secure existing ecosystem benefits is challenging because of limited knowledge of the number of species on this planet, poor understanding of how ecosystems function or the benefits they provide, and growing threats such as climate change. Thus, spatial targets will be associated with great uncertainty. However, targets set too low could have major negative implications for future generations and all life. Any estimate must therefore err on the side of caution.

      Current levels of protection do not even come close to the required levels. Just less than half of Earth's surface remains relatively intact, but this land tends to be much less productive. Only 3.6% of the oceans and 14.7% of land are formally protected. Many of these protected areas are “paper parks,” meaning they are not effectively managed, and one-third of the terrestrial protected lands are under intense human pressure….

      If we truly want to protect biodiversity and secure critical ecosystem benefits, the world’s governments must set a much more ambitious protected area agenda and ensure it is resourced….This will be extremely challenging, but it is possible, and anything less will likely result in a major extinction crisis and jeopardize the health and well-being of future generations.

Decoding the Puzzle of Human Consciousness

[These excerpts are from an article by Susan Blackmore in the September 2018 issue of Scientific American.]

      Might we humans be the only species on this planet to be truly conscious? Might lobsters and lions, beetles and bats be unconscious automata, responding to their worlds with no hint of conscious experience? Aristotle thought so, claiming that humans have rational souls but that other animals have only the instincts needed to survive. In medieval Christianity the “great chain of being” placed humans on a level above soulless animals and below only God and the angels. And in the 17th century French philosopher Rene Descartes argued that other animals have only reflex behaviors. Yet the more biology we learn, the more obvious it is that we share not only anatomy, physiology and genetics with other animals but also systems of vision, hearing, memory and emotional expression. Could it really be that we alone have an extra special something—this marvelous inner world of subjective experience?

      The question is hard because although your own consciousness may seem the most obvious thing in the world, it is perhaps the hardest to study. We do not even have a clear definition beyond appealing to a famous question asked by philosopher Thomas Nagel back in 1974: What is it like to be a bat? Nagel chose bats because they live such very different lives from our own. We may try to imagine what it is like to sleep upside down or to navigate the world using sonar, but does it feel like anything at all? The crux here is this: If there is nothing it is like to be a bat, we can say it is not conscious. If there is something (anything) it is like for the bat, it is conscious. So is there?

      We share a lot with bats: we, too, have ears and can imagine our arms as wings. But try to imagine being an octopus. You have eight curly, grippy, sensitive arms for getting around and catching prey but no skeleton, and so you can squeeze yourself through tiny spaces. Only a third of your neurons are in a central brain; the rest are in nerve cords, one running down each of your eight arms. Consider: Is it like something to be a whole octopus, to be its central brain or to be a single octopus arm? The science of consciousness provides no easy way of finding out.

      Even worse is the "hard problem" of consciousness: How does subjective experience arise from objective brain activity? How can physical neurons, with all their chemical and electrical communications, create the feeling of pain, the glorious red of the sunset or the taste of fine claret? This is a problem of dualism: How can mind arise from matter? Indeed, does it?...

      When lobsters or crabs are injured, are taken out of water or have a claw twisted off, they release stress hormones similar to cortisol and corticosterone. This response provides a physiological reason to believe they suffer. An even more telling demonstration is that when injured prawns limp and rub their wounds, this behavior can be reduced by giving them the same painkillers as would reduce our own pain.

      The same is true of fish. When experimenters injected the lips of rainbow trout with acetic acid, the fish rocked from side to side and rubbed their lips on the sides of their tank and on gravel, but giving them morphine reduced these reactions. When zebra fish were given a choice between a tank with gravel and plants and a barren one, they chose the interesting tank. But if they were injected with acid and the barren tank contained a painkiller, they swam to the barren tank instead. Fish pain may be simpler or in other ways different from ours, but these experiments suggest they do feel pain.

      Some people remain unconvinced. Australian biologist Brian Key argues that fish may respond as though they are in pain, but this observation does not prove they are consciously feeling anything….

The Silent, Reasonable Majority Must Be Heard

[These excerpts are from an article by Joshua P. Starr in the September 2018 issue of Phi Delta Kappan.]

      …on days when Democrats and Republicans hold their primary elections, only the most partisan of voters tend to show up at their polling places. Most people stay home, allowing their least reasonable neighbors to cast the majority of the ballots.

      That’s often how things work in school systems, too. When they have to make important decisions, most superintendents — at least, the ones I know — try to bracket off their personal beliefs and preferences in order to weigh the issues fairly and make the best choices they can. But they have to do so in the face of intense pressure from their most aggressive and opinionated constituents. Let's just say that it’s rarely the more thoughtful and fair-minded parents and community members who launch Twitter storms, light up the phones, and march into the district office to make demands on the superintendent….

      Whatever the demographics and political makeup of the loudest people in the room, the question is, How do we ensure that the quieter voices get heard, too? Most parents are reasonable — silent, but reasonable. They want a quality education for their sons and daughters. They have hopes for and concerns about their schools. They want problems to be resolved in expeditious and practical ways. But they rarely testify at board meetings, email the superintendent, or buttonhole their elected officials at the supermarket….

      System and school leaders can’t stop partisans from organizing and advocating, nor can they stop rabid bloggers or shut down toxic email chains. But they can certainly do more to reach out to a wider and more diverse set of parents and community members, rather than sitting back and waiting to see who shows up at board meetings and gets in line for the microphone…

      To be sure, school system leaders need to listen to those people who choose to speak up. But those can’t be the only voices that matter. The majority of parents may be silent much of the time, but they have just as much of a right to be heard, and their kids are just as deserving of an excellent education. We must find ways to hear what they have to say, rather than listening only to those who push and shove their way into our offices and board meetings.

It’s Time to Redefine the Federal Role in K-12 Education

[These excerpts are from an article by Jack Jennings in the September 2018 issue of Phi Delta Kappan.]

      What matters most in education? At its core, education comes down to a student, a teacher, and something to be taught and learned. Everything else (e.g., testing, accountability systems, and teacher evaluations) is secondary, having an indirect influence, at best, on what happens in the classroom. A person with the desire and readiness to learn, another person with the knowledge and skills to foster that learning, and the material to be learned. These are the fundamental elements of education (along with that additional element, money, without which public schooling cannot function), and they should be the starting points for any new federal policy agenda:

      No state has yet come close to ensuring that all young children enter school with the early math, literacy, and other skills that will allow them to succeed. A wealth of research shows that high-quality preschool programs tend to be extraordinarily effective in helping kids become ready for kindergarten, but access to preschool is woefully inadequate in most of the country, especially for children from families below the middle class. Further, the quality of existing programs is wildly uneven, and many programs lack essential components that might enable them to improve, such as well-educated teachers, adequate salaries, careful teacher supervision, and assessment tools….

      An equally pressing problem, which states have shown little ability to solve on their own, has to do with raising the quality of the teaching force, which will require efforts to improve teacher recruitment, preparation, and retention. In each of these areas, we have failed to keep pace with other developed nations….Similarly, if we were serious about teacher recruitment, quality, and retention, would we pay our teachers such meager wages? On average, teacher compensation is equivalent to about 60% of what comparably educated college graduates earn in other fields, whereas in most other developed countries, teacher pay is more or less comparable to that of college graduates….

      ...the federal government can and should (as it has done many times before) support curricular improvements in literacy, math, science, civics, language learning, and other subject areas.

      Finally, the funding of public education needs to be overhauled, but few states have shown the will or capacity to make meaningful changes, particularly when it comes to the distribution of resources among school districts….the American approach to school funding now stands out as one of the most dysfunctional systems in the world:

      If we want to make serious improvements in the areas of preschool education, teacher quality, and curriculum, then federal policy should address these things directly. And, in fact, while federal policy makers are often reluctant to fund programs that focus on teaching and curriculum — fearing that this would intrude on local control — there are a number of precedents for doing so….

      At the very least, policy advocates on all sides must recognize two basic truths about American education today: First, to ensure the future prosperity and cohesion of our nation, we must help our students achieve at higher levels than in the past; second, our schools do not currently provide all students with equal opportunities to become well educated. Given the urgency of the challenges posed, our politicians, educators, parents, business leaders, and other citizens must seek common ground on plausible solutions. We must get going, and fast.

Fake America Great Again

[These excerpts are from an article by Will Knight in the September/October 2018 issue of Technology Review.]

      These advances threaten to further blur the line between truth and fiction in politics. Already the internet accelerates and reinforces the dissemination of disinformation through fake social-media accounts. “Alternative facts” and conspiracy theories are common and widely believed. Fake news stories, aside from their possible influence on the last US presidential election, have sparked ethnic violence in Myanmar and Sri Lanka over the past year. Now imagine throwing new kinds of real-looking fake videos into the mix: politicians mouthing nonsense or ethnic insults, or getting caught behaving inappropriately on video—except it never really happened.…

      Are we about to enter an era when we can’t trust anything, even authentic-looking videos that seem to capture real “news”? How do we decide what is credible? Whom do we trust?

      Several technologies have converged to make fakery easier, and they’re readily accessible: smartphones let anyone capture video footage, and powerful computer graphics tools have become much cheaper. Add artificial-intelligence software, which allows things to be distorted, remixed, and synthesized in mind-bending new ways. AI isn’t just a better version of Photoshop or iMovie. It lets a computer learn how the world looks and sounds so it can conjure up convincing simulacra….

      There are well-established methods for identifying doctored images and video. One option is to search the web for images that might have been mashed together. A more technical solution is to look for telltale changes to a digital file, or to the pixels in an image or a video frame. An expert can search for visual inconsistencies—a shadow that shouldn’t be there, or an object that's the wrong size….
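
      One of those pixel-level checks can be sketched in a few lines of code. So-called error level analysis rests on the fact that a JPEG recompresses unevenly: regions pasted in or retouched after the original save often leave a different recompression-error signature than the rest of the picture. What follows is only a minimal sketch of that idea in Python (it assumes the Pillow imaging library, and the function and file names are merely illustrative), not the professional forensic toolchain the experts quoted here would use.

# Minimal error-level-analysis sketch (assumes Pillow: pip install Pillow).
# The function and file names are hypothetical, for illustration only.
from PIL import Image, ImageChops

def error_level(path, quality=90, out_path="ela.png"):
    original = Image.open(path).convert("RGB")
    # Re-save the image at a fixed JPEG quality, then reload it.
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    # Regions edited after the original save tend to recompress with a
    # different error level, so they stand out in the difference image.
    diff = ImageChops.difference(original, resaved)
    # Stretch the (usually faint) differences to the full 0-255 range.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    diff = diff.point(lambda v: min(255, int(v * 255.0 / max_diff)))
    diff.save(out_path)  # bright patches merit a closer look
    return diff

# Example: error_level("suspect_frame.jpg"), then inspect ela.png.

      Bright, blocky regions in the output are only hints, of course; as the article goes on to note, a human expert still has to weigh shadows, scale, and other visual context.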

      In April, a supposed BBC news report announced the opening salvos of a nuclear conflict between Russia and NATO. The clip, which began circulating on the messaging platform WhatsApp, showed footage of missiles blasting off as a newscaster told viewers that the German city of Mainz had been destroyed along with parts of Frankfurt.

      It was, of course, entirely fake, and the BBC rushed to denounce it. The video wasn’t generated using AI, but it showed the power of fake video, and how it can spread rumors at warp speed. The proliferation of AI programs will make such videos far easier to make, and even more convincing.

      Even if we aren't fooled by fake news, it might have dire consequences for political debate. Just as we are now accustomed to questioning whether a photograph might have been Photoshopped, AI-generated fakes could make us more suspicious about events we see shared online. And this could contribute to the further erosion of rational political debate.

      In The Death of Truth, published this year, the literary critic Michiko Kakutani argues that alternative facts, fake news, and the general craziness of modern politics represent the culmination of cultural currents that stretch back decades. Kakutani sees hyperreal AI fakes as just the latest heavy blow to the concept of objective reality….

      Perhaps the greatest risk with this new technology, then, is not that it will be misused by state hackers, political saboteurs, or Anonymous, but that it will further undermine truth and objectivity itself. If you can’t tell a fake from reality, then it becomes easy to question the authenticity of anything. This already serves as a way for politicians to evade accountability.

      President Trump has turned the idea of fake news upside down by using the term to attack any media reports that criticize his administration. He has also suggested that an incriminating clip of him denigrating women, released during the 2016 campaign, might have been digitally forged. This April, the Russian government accused Britain of faking video evidence of a chemical attack in Syria to justify proposed military action. Neither accusation was true, but the possibility of sophisticated fakery is increasingly diminishing the credibility of real information. In Myanmar and Russia new legislation seeks to prohibit fake news, but in both cases the laws may simply serve as a way to crack down on criticism of the government….

      The truth will still be out there. But will you know it when you see it?

No, Big Tech Didn’t Make Us Polarized (But It Sure Helps)

[These excerpts are from an article by Adam Piore in the September/October 2018 issue of Technology Review.]

      …the most concerning problem highlighted by the 2016 election isn’t that the Russians used Twitter and Facebook to spread propaganda, or that the political consulting firm Cambridge Analytica illicitly gained access to the private information of more than 50 million Facebook users. It's that we have all, quite voluntarily, retreated into hyperpartisan virtual corners, owing in no small part to social media and internet companies that determine what we see by monitoring what we have clicked on in the past and giving us more of the same. In the process, opposing perspectives are sifted out, and we’re left with content that reinforces what we already believe.

      This is the famous “filter bubble,” a concept popularized in the 2011 book of the same name by Eli Pariser, an internet activist and founder of the viral video site Upworthy. “Ultimately, democracy works only if we citizens are capable of thinking beyond our narrow self-interest,” wrote Pariser. “But to do so, we need a shared view of the world we coinhabit. The filter bubble pushes us in the opposite direction—it creates the impression that our narrow self-interest is all that exists.”

      Or does it? The research suggests that things are not quite that simple.

      The legal scholar Cass Sunstein warned way back in 2007 that the internet was giving rise to an “era of enclaves and niches.” He cited a 2005 experiment in Colorado in which 60 Americans from conservative Colorado Springs and liberal Boulder, two cities about 100 miles apart, were assembled into small groups and asked to deliberate on three controversial issues (affirmative action, gay marriage, and an international treaty on global warming). In almost every case, people held more extreme positions after they spoke with like-minded others….

      Data from the polling firm Pew backs up the idea that polarization doesn’t come just from the internet. After the 2016 election, Pew found that 62 percent of Americans got news from social-media sites, but—in a parenthetical ignored in most articles about the study—only 18 percent said they did so “often.” A more recent Pew study found that only about 5 percent said they had “a lot” of trust in the information.

      “The internet is absolutely not the causal factor here,” says Ethan Zuckerman, who directs MIT’s Center for Civic Media….

      Lousy results such as this have led Zuckerman toward a more radical idea for countering filter bubbles: the creation of a taxpayer-funded social-media platform with a civic mission to provide a “diverse and global view of the world.”

      The early United States, he noted in an essay for the Atlantic, featured a highly partisan press tailored to very specific audiences. But publishers and editors for the most part abided by a strong cultural norm, republishing a wide range of stories from different parts of the nation and reflecting different political leanings. Public broadcasters in many democracies have also focused on providing a wide range of perspectives. It’s not realistic, Zuckerman argues, to expect the same from outlets like Facebook: their business model drives them to pander to our natural human desire to congregate with others like ourselves. A public social-media platform with a civic mission, says Zuckerman, could push unfamiliar perspectives into our feeds and push us out of our comfort zones. Scholars could review algorithms to make sure we’re seeing an unbiased representation of views. And yes, he admits, people would complain about publicly funding such a platform and question its even-handedness. But given the lack of other viable solutions, he says, it’s worth a shot.

      …if a Republican politician tells people that immigrants are moving in and changing the culture or taking locals’ jobs, or if a Democrat tells female students that Christian activists want to ban women’s rights, their words have power. Van Bavel’s research suggests that if you want to overcome partisan divisions, avoid the intellect and focus on the emotions….

      Maybe in the end it’s up to us to decide to expose ourselves to content from a wider array of sources, and then to engage with it. Sound unappealing? Well, consider the alternative: your latest outraged political post didn’t accomplish much, because the research shows that anyone who read it almost certainly agreed with you already.

The Road from Tahrir to Trump

[These excerpts are from an article by Zeynep Tufekci in the September/October 2018 issue of Technology Review.]

      Digital platforms allowed communities to gather and form in new ways, but they also dispersed existing communities, those that had watched the same TV news and read the same newspapers. Even living on the same street meant less when information was disseminated through algorithms designed to maximize revenue by keeping people glued to screens. It was a shift from a public, collective politics to a more private, scattered one, with political actors collecting more and more personal data to figure out how to push just the right buttons, person by person and out of sight….

      The US National Security Agency had an arsenal of hacking tools based on vulnerabilities in digital technologies—bugs, secret backdoors, exploits, shortcuts in the (very advanced) math, and massive computing power. These tools were dubbed “nobody but us” (or NOBUS, in the acronym-loving intelligence community), meaning no one else could exploit them, so there was no need to patch the vulnerabilities or make computer security stronger in general. The NSA seemed to believe that weak security online hurt its adversaries a lot more than it hurt the NSA.

      That confidence didn't seem unjustified to many. After all, the internet is mostly an American creation; its biggest companies were founded in the United States. Computer scientists from around the world still flock to the country, hoping to work for Silicon Valley. And the NSA has a giant budget and, reportedly, thousands of the world's best hackers and mathematicians….

      There doesn’t seem to have been a major realization within the US's institutions—its intelligence agencies, its bureaucracy, its electoral machinery—that true digital security required both better technical infrastructure and better public awareness about the risks of hacking, meddling, misinformation, and more. The US's corporate dominance and its technical wizardry in some areas seemed to have blinded the country to the brewing weaknesses in other, more consequential ones….

      Though smaller than Facebook and Google, Twitter played an outsize role thanks to its popularity among journalists and politically engaged people. Its open philosophy and easygoing approach to pseudonyms suits rebels around the world, but it also appeals to anonymous trolls who hurl abuse at women, dissidents, and minorities. Only earlier this year did it crack down on the use of bot accounts that trolls used to automate and amplify abusive tweeting.

      Twitter’s pithy, rapid-fire format also suits anyone with a professional or instinctual understanding of attention, the crucial resource of the digital economy.

      Say, someone like a reality TV star. Someone with an uncanny ability to come up with belittling, viral nicknames for his opponents, and to make boastful promises that resonated with a realignment in American politics—a realignment mostly missed by both Republican and Democratic power brokers.

      Donald Trump, as is widely acknowledged, excels at using Twitter to capture attention. But his campaign also excelled at using Facebook as it was designed to be used by advertisers, testing messages on hundreds of thousands of people and microtargeting them with the ones that worked best. Facebook had embedded its own employees within the Trump campaign to help it use the platform effectively (and thus spend a lot of money on it), but they were also impressed by how well Trump himself performed. In later internal memos, reportedly, Facebook would dub the Trump campaign an “innovator” that it might learn from. Facebook also offered its services to Hillary Clinton’s campaign, but her campaign chose to use them much less than Trump’s did.

      …they started posting materials aimed at fomenting polarization. The Russian trolls posed as American Muslims with terrorist sympathies and as white supremacists who opposed immigration. They posed as Black Lives Matter activists exposing police brutality and as people who wanted to acquire guns to shoot police officers. In so doing, they not only fanned the flames of division but provided those in each group with evidence that their imagined opponents were indeed as horrible as they suspected. These trolls also incessantly harassed journalists and Clinton supporters online, resulting in a flurry of news stories about the topic and fueling a (self-fulfilling) narrative of polarization among the Democrats….

      Second, the new, algorithmic gatekeepers aren’t merely (as they like to believe) neutral conduits for both truth and falsehood. They make their money by keeping people on their sites and apps; that aligns their incentives closely with those who stoke outrage, spread misinformation, and appeal to people’s existing biases and preferences….

      Third, the loss of gatekeepers has been especially severe in local journalism. While some big US media outlets have managed (so far) to survive the upheaval wrought by the internet, this upending has almost completely broken local newspapers, and it has hurt the industry in many other countries. That has opened fertile ground for misinformation. It has also meant less investigation of and accountability for those who exercise power, especially at the local level. The Russian operatives who created fake local media brands across the US either understood the hunger for local news or just lucked into this strategy. Without local checks and balances, local corruption grows and trickles up to feed a global corruption wave playing a major part in many of the current political crises….

      This is also how Russian operatives fueled polarization in the United States, posing simultaneously as immigrants and white supremacists, angry Trump supporters and “Bernie bros.” The content of the argument didn't matter; they were looking to paralyze and polarize rather than convince. Without old-style gatekeepers in the way, their messages could reach anyone, and with digital analytics at their fingertips, they could hone those messages just like any advertiser or political campaign….

      Ubiquitous digital surveillance should simply end in its current form. There is no justifiable reason to allow so many companies to accumulate so much data on so many people. Inviting users to “click here to agree” to vague, hard-to-pin-down terms of use doesn’t produce “informed consent.” If, two or three decades ago, before we sleepwalked into this world, a corporation had suggested so much reckless data collection as a business model, we would have been horrified.

      There are many ways to operate digital services without siphoning up so much personal data. Advertisers have lived without it before; they can do so again, and it's probably better if politicians can’t do it so easily. Ads can be attached to content, rather than directed to people….

      But we didn’t get where we are simply because of digital technologies. The Russian government may have used online platforms to remotely meddle in US elections, but Russia did not create the conditions of social distrust, weak institutions, and detached elites that made the US vulnerable to that kind of meddling.

      Russia did not make the US (and its allies) initiate and then terribly mishandle a major war in the Middle East, the after-effects of which—among them the current refugee crisis—are still wreaking havoc, and for which practically nobody has been held responsible. Russia did not create the 2008 financial collapse: that happened through corrupt practices that greatly enriched financial institutions, after which all the culpable parties walked away unscathed, often even richer, while millions of Americans lost their jobs and were unable to replace them with equally good ones.

      Russia did not instigate the moves that have reduced Americans’ trust in health authorities, environmental agencies, and other regulators. Russia did not create the revolving door between Congress and the lobbying firms that employ ex-politicians at handsome salaries. Russia did not defund higher education in the United States. Russia did not create the global network of tax havens in which big corporations and the rich can pile up enormous wealth while basic government services get cut.

      These are the fault lines along which a few memes can play an outsize role. And not just Russian memes: whatever Russia may have done, domestic actors in the United States and Western Europe have been eager, and much bigger, participants in using digital platforms to spread viral misinformation.

      Even the free-for-all environment in which these digital platforms have operated for so long can be seen as a symptom of the broader problem, a world in which the powerful have few restraints on their actions while everyone else gets squeezed. Real wages in the US and Europe are stuck and have been for decades while corporate profits have stayed high and taxes on the rich have fallen. Young people juggle multiple, often mediocre jobs, yet find it increasingly hard to take the traditional wealth-building step of buying their own home—unless they already come from privilege and inherit large sums.

      If digital connectivity provided the spark, it ignited because the kindling was already everywhere. The way forward is not to cultivate nostalgia for the old-world information gatekeepers or for the idealism of the Arab Spring. It’s to figure out how our institutions, our checks and balances, and our societal safeguards should function in the 21st century—not just for digital technologies but for politics and the economy in general. This responsibility isn’t on Russia, or solely on Facebook or Google or Twitter. It’s on us.

Controlling Cholera with Microbes

[These excerpts are from an article by Anne Trafton in the September/October 2018 issue of MIT News.]

      MIT engineers have developed a mix of natural and engineered bacteria designed to diagnose and treat cholera, an intestinal infection that causes severe dehydration.

      Cholera outbreaks are usually caused by contaminated drinking water, and infections can be fatal if not treated. The most common treatment is rehydration, which must be done intravenously if the patient is extremely dehydrated. However, intravenous treatment is not always available, and the disease kills an estimated 95,000 people per year.

      The MIT team’s new probiotic mix could be consumed regularly as a preventive measure in regions where cholera is common, but it could also be used to treat people soon after infection occurs….

      The researchers chose Lactococcus lactis, a strain of bacteria used to make cheese, and engineered into it a genetic circuit that detects a molecule produced by Vibrio cholerae, the microbe that causes cholera. When engineered L. lactis encounters this molecule, it turns on an enzyme that produces a red color detectable in stool samples.

      Serendipitously, while working on a way to further engineer L. lactis so that it could treat infections, the researchers discovered that the unmodified bacterium can kill cholera microbes by producing lactic acid. This natural metabolic by-product makes the gastrointestinal environment more acidic, inhibiting the growth of V. cholerae.

      In tests in mice, the researchers found that a mixture of engineered and unmodified L. lactis could successfully prevent cholera from developing and could also treat existing infections. The MIT team is now exploring the possibility of using this approach to combat other microbes.

Smog Patrol

[These excerpts are from an article by Peter Dizikes in the September/October 2018 issue of MIT News.]

      The Chinese government has implemented measures to clean up the air pollution that has smothered its cities for decades. Are they effective? A study coauthored by an MIT scholar shows that one key antipollution law is indeed working—but unevenly.

      The study examines a Chinese law, in effect since 2014, requiring coal-fired power plants to significantly reduce emissions of sulfur dioxide, a pollutant associated with respiratory illnesses. Overall, the concentration of these emissions at coal power plants fell by 13.9 percent.

      However, the law also called for greater emissions reductions in more heavily polluted and more populous regions, and those “key” regions are precisely where plants may have been least compliant. Only 50 percent reported meeting the new standard, and remote data—as opposed to readings supplied by the plants themselves—suggests that the results were even worse….

      Data from the two monitoring systems corresponded closely in “non-key” regions, where the maximum allowable concentration of sulfur dioxide was lowered from 400 to 200 milligrams per cubic meter. But in the key regions, where the limit was placed at 50 milligrams per cubic meter, the research found no evidence of correspondence.
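
      The correspondence check described above can be pictured as a simple correlation between the two data streams. The sketch below is my own illustration with invented readings, not the study's data: where plants report honestly, self-reported and remote values track each other and the correlation sits near 1; systematic misreporting drives it down.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Invented example readings in mg per cubic meter; not the study's data.
reported_by_plants = [180, 195, 160, 210, 170, 150]
remote_estimates = [185, 200, 158, 220, 175, 155]

# A Pearson correlation near 1.0 means the two monitoring systems agree,
# as the study found in the non-key regions.
r = statistics.correlation(reported_by_plants, remote_estimates)
print(f"Correlation between reported and remote readings: {r:.2f}")
```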

      That tougher standard may have been harder for power plants to meet….

Can a Transgenic Chestnut Restore a Forest Icon?

[These excerpts are from an article by Gabriel Popkin in the August 31, 2018, issue of Science.]

      Two deer-fenced plots here contain some of the world's most highly regulated trees. Each summer researchers double-bag every flower the trees produce. One bag, made of breathable plastic, keeps them from spreading pollen. The second, an aluminum mesh screen added a few weeks later, prevents squirrels from stealing the spiky green fruits that emerge from pollinated flowers. The researchers report their every move to regulators with the U.S. Department of Agriculture….

      These American chestnut trees (Castanea dentata) are under such tight security because they are genetically modified (GM) organisms, engineered to resist a deadly blight that has all but erased the once widespread species from North American forests….

      If the regulators approve the request, it would be “precedent setting”—the first use of a GM tree to try to restore a native species in North America…

      American chestnuts, towering 30 meters or more, once dominated forests throughout the Appalachian Mountains. But in the early 1900s, a fungal infection appeared on trees at the Bronx Zoo in New York City, and then spread rapidly. The so-called chestnut blight—an accidental import from Asia—releases a toxin that girdles trees and kills everything above the infection site, though still-living roots sometimes send up new shoots. By midcentury, large American chestnuts had all but disappeared.

Revolutionary Technologies

[This excerpt is from an editorial by Jeremy Berg in the August 31, 2018, issue of Science.]

      …New technology is one of the most powerful drivers of scientific progress. For example, the earliest microscopes magnified images only 50-fold at most. When the Dutch fabric merchant and amateur scientist Antonie van Leeuwenhoek developed microscopes with more than 200-fold magnifications (likely to examine cloth), he used them to study many items, including pond water and plaque from teeth. His observations of “animalcules” led to fundamental discoveries in microbiology and cell biology, and spurred the elaboration of improved microscopes. Today, various light microscopes remain prime tools in modern biology. This example embodies two characteristics of a revolutionary technology: a capability for addressing questions better than extant technologies, and the possibility of being utilized and adapted by many other investigators.

      The discovery of x-rays in 1895 ushered in a multifaceted revolution in imaging. As scientists sought to understand the nature of these electromagnetic waves, they realized that they were diffracted by crystals, establishing that the wavelengths of x-rays were comparable to the separation between atoms in crystals. In 1913, William Henry Bragg and his son William Lawrence Bragg found that diffraction patterns could be interpreted to reveal the arrangement of atoms in a crystal. The Braggs determined the structures of many simple substances, including table salt and diamond. Others began using similar techniques to reveal more complex structures of inorganic and organic compounds. In the late 1950s, these methods were extended to determine the structure of proteins, and eventually to larger proteins and protein complexes. Thousands of structures are now reported each year and are foundational to our understanding of biochemistry and cell biology. Technical innovations, improved commercial and shared-facility instrumentation, and powerful software continue to drive the x-ray crystallography revolution.
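
      The physical relationship behind that diffraction work is Bragg's law, n·λ = 2d·sin θ. As a quick plausibility check, using textbook values rather than anything from the editorial, the spacing between atomic planes in table salt and a common laboratory x-ray wavelength give a first-order diffraction angle of roughly 16 degrees:

```python
import math

# Bragg's law: n * wavelength = 2 * d * sin(theta).
# Textbook approximations, not values from the editorial.
wavelength_nm = 0.154  # copper K-alpha x-rays, ~0.154 nm
d_spacing_nm = 0.282   # spacing of Na-Cl planes in table salt, ~0.282 nm

theta = math.asin(wavelength_nm / (2 * d_spacing_nm))  # first order, n = 1
print(f"First-order diffraction angle: {math.degrees(theta):.1f} degrees")
```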

Voting for the Earth

[These excerpts are from an article by Erich Pica in the Summer 2018 issue of the Friends of the Earth Newsmagazine.]

      In poll after poll, Americans from across the political spectrum stress the importance of clean air, pristine water, protected wildlife, healthy food, preserved forests and prairies, and a stable climate.

      In one recent Gallup poll, 62 percent of Americans said they believe the government needs to do more for the environment and 76 percent would like to see a greater investment in alternative energy sources like solar and wind power.

      You’d never know the U.S. had such a consensus on the environment. Mainstream media coverage of environmental issues is spotty at best. National politicians on both sides of the aisle do little more than pay lip service to environmental issues…

      When environmentalists vote, it forces politicians to pay attention to environmental issues and act on them...or face losing their jobs.

      Unfortunately, too many people who care about the environment have been staying home on election day, giving politicians a free pass on issues affecting our planet.

      We simply can’t afford to let climate change and the environment fall to the bottom of the list any longer.

      We’re at a make-or-break point for our planet. We’ve endured several of history's warmest years in the last decade. The Arctic just experienced its warmest winter on record, with temperatures up to 36 degrees Fahrenheit higher than average. Wildfire and hurricane seasons have been more severe.

      Scientists estimate that we need to cut greenhouse gas emissions dramatically by 2050—to 80 percent below levels we saw in 1990—if we want to save ourselves from catastrophe.

      In short, we need to vote like our lives and our planet depend on it...because they do.

Putting Sleep Myths to Bed

[These excerpts are from an article by Adrian Woolfson in the August 24, 2018, issue of Science.]

      In humans, the master timekeeper underwriting the architecture of sleep is a small region buried deep within the brain, known as the suprachiasmatic nucleus. The pacemaker activity of this biological timepiece is controlled by several genes, which have names like Period, Clock, and Timeless. Mutations in these can transform us from night owls into morning larks or something in between. Fortunately, errant wanderings are typically adjusted by “zeitgebers”—literally, “time-givers”—principally in the form of blue light.

      In Nodding Off, [Alice] Gregory makes the point that although sleep studies typically focus on individuals, sleep is not always a solitary pastime and may be adversely affected by our choice of bedmate. Indeed, sleep researchers are now attempting to address the reality that adults often do not sleep in isolation.

      The structure of sleep also changes across an individual’s lifetime. The sleep patterns of young children, for example, are profoundly influenced by their belief systems. Gregory cautions parents against sending misbehaving children to bed early because this association between sleep and punishment may inadvertently condemn them to years of insomnia.

      Similarly, she explains why the Sisyphean struggle to force teenagers to wake up early is invariably destined to fail. Their pattern of melatonin release differs from that of adults and children, and the recapitulation of this phenomenon in other mammals suggests that it has been hard-wired by evolution.

      While extolling the virtues of sleep and its fundamental importance to our health, Gregory reveals some interesting tidbits, including the fact that dolphins sleep with just half of their brain at a time and that male armadillos have erections during non-REM (rapid eye movement) sleep, unlike their human counterparts, who experience this phenomenon only during REM sleep. The heterogeneity of sleep across different species indicates its essential function but also how it may be modified to perform different functions.

      Although the precise function of sleep remains enigmatic, poor-quality sleep and sleep deprivation may have a profound impact on our health….

      It is noteworthy, and apparently contrary to the central thesis of these two tomes, that genius has sometimes emerged within the context of abnormal sleep patterns. The serial micronapper Leonardo da Vinci’s remarkable canon of work, for example, was achieved on a sleeping pattern comprising naps of just 15 minutes taken every 4 hours….

Back to the Blackboard

[These excerpts are from an article by Jill Lepore in the September 10, 2018, issue of The New Yorker.]

      Is education a fundamental right? The Constitution, drafted in the summer of 1787, does not mention a right to education, but the Northwest Ordinance, passed by Congress that same summer, held that “religion, morality, and knowledge, being necessary to good government and the happiness of mankind, schools and the means of education shall forever be encouraged.” By 1868 the constitutions of twenty-eight of the thirty-two states in the Union had provided for free public education, open to all. Texas, in its 1869 constitution, provided for free public schooling for “all the inhabitants of this State,” a provision that was revised to exclude undocumented immigrants only in 1975.

      [Judge] Justice skirted the questions of whether education is a fundamental right and whether undocumented immigrants are a suspect class. Instead of applying the standard of “strict scrutiny” to the Texas law, he applied the lowest level of scrutiny to the law, which is known as the “rational basis test.” He decided that the Texas law failed this test. The State of Texas had argued that the law was rational because undocumented children are expensive to educate—they often require bilingual education, free meals, and even free clothing. But, Justice noted, so are other children, including native-born children, and children who have immigrated legally, and their families are not asked to bear the cost of their special education. As to why Texas had even passed such a law, he had two explanations, both cynical: “Children of illegal aliens had never been explicitly afforded any judicial protection, and little political uproar was likely to be raised in their behalf.”

      In September, 1978, Justice ruled in favor of the children. Not long afterward, a small bouquet arrived at his house, sent by three Mexican workers. Then came the hate mail. A man from Lubbock wrote, on the back of a postcard, “Why in the hell don’t you illegally move to mexico?”

      …Later, [Supreme Court Justice] Marshall came back at him, asking, “Could Texas pass a law denying admission to the schools of children of convicts?” Hardy said that they could, but that it wouldn’t be constitutional. Marshall's reply: “We are dealing with children. I mean, here is a child that is the son of a murderer, but he can go to school, but the child that is the son of an unfortunate alien cannot?”

Ancient DNA Reveals Tryst between Extinct Human Species

[These excerpts are from an article by Gretchen Vogel in the August 24, 2018, issue of Science.]

      The woman may have been just a teenager when she died more than 50,000 years ago, too young to have left much of a mark on her world. But a piece of one of her bones, unearthed in a cave in Russia's Denisova valley in 2012, may make her famous. Enough ancient DNA lingered within the 2-centimeter fragment to reveal her startling ancestry: She was the direct offspring of two different species of ancient humans—neither of them ours….her mother was Neanderthal and her father was Denisovan, the mysterious group of ancient humans discovered in the same Siberian cave in 2011. It is the most direct evidence yet that various ancient humans mated with each other and had offspring.

      Based on other ancient genomes, researchers already had concluded that Denisovans, Neanderthals, and modern humans interbred in ice age Europe and Asia. The genes of both archaic human species are present in many people today. Other fossils found in the Siberian cave have shown that all three species lived there at different times….

      …the proportion of genes in which her chromosome pairs harbored different variants—so-called heterozygous alleles—was close to 50% across all chromosomes, suggesting the maternal and paternal chromosomes came directly from different groups. And her mitochondrial DNA, which is inherited maternally, was uniformly Neanderthal, so the researchers concluded she was a first-generation hybrid of a Denisovan man and Neanderthal woman….
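
      The heterozygosity argument can be made concrete with a toy calculation: given the two copies of a chromosome, the fraction of aligned sites at which they differ is the heterozygosity, and that fraction is elevated across the whole genome when the two copies come directly from long-separated groups. The sequences below are invented for illustration, not data from the paper.

```python
# Two invented haplotypes standing in for the maternal (Neanderthal) and
# paternal (Denisovan) copies of one chromosome; not the paper's data.
maternal = "AACGTTAGGCTA"
paternal = "AGCGATCGGCTT"

# Heterozygosity: the fraction of aligned sites where the two copies differ.
het = sum(m != p for m, p in zip(maternal, paternal)) / len(maternal)
print(f"Fraction of heterozygous sites: {het:.2f}")
```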

Retreat on Economics at the EPA

[These excerpts are from an editorial by Kevin Boyle and Matthew Kotchen in the August 24, 2018, issue of Science.]

      Rigorous economic analysis has long been recognized as essential for sound, defensible decision-making by government agencies whose regulations affect human health and the environment. The acting administrator (since July 2018) of the U.S. Environmental Protection Agency (EPA) has emphasized the importance of transparency and public trust. These laudable goals are enhanced by external scientific review of the EPA’s analytical procedures. Yet, in June 2018, the EPA’s Science Advisory Board (SAB) eliminated its Environmental Economics Advisory Committee (EEAC). The agency should be calling for more—not less—external advice on economics, given the Trump administration’s promotion of economic analyses that push the boundaries of well-established best practices. The pattern is clear: When environmental regulations are expected to provide substantial public benefits, assumptions are made to substantially diminish their valuations.

      …today, many economic analyses that support the Trump administration’s regulatory rollbacks conflict with the EPA's previous findings. The 2017 analysis for eliminating the Waters of the United States rule turned favorable only after excluding all benefits of protecting wetlands. Eliminating the Clean Power Plan is supported in another 2017 analysis only after changing assumptions about the scope of climate damages, the measurement of health effects, and the impact on future generations. Differing assumptions also underlie the economic justification of the administration’s 2018 proposal to roll back automotive fuel economy standards.

Inside Our Heads

[These excerpts are from an article by Thomas Suddendorf in the September 2018 issue of Scientific American.]

      …It also turns out that animal and human cognition, though similar in many respects, differ in two profound dimensions. One is the ability to form nested scenarios, an inner theater of the mind that allows us to envision and mentally manipulate many possible situations and anticipate different outcomes. The second is our drive to exchange our thoughts with others. Taken together, the emergence of these two characteristics transformed the human mind and set us on a world-changing path.

      … animal studies have not met similarly stringent criteria for establishing foresight, nor have they demonstrated deliberate practice. Does this mean we should conclude that animals do not have the relevant capacities at all? That would be premature. Absence of evidence is not evidence of absence, as the saying goes. Establishing competence in animals is difficult; establishing the absence of competence is even harder.

      Consider the following study, in which my colleague Jon Redshaw of the University of Queensland in Australia and I tried to assess one of the most fundamental aspects of thinking about the future: the recognition that it is largely uncertain. When one realizes that events may unfold in more than one way, it makes sense to prepare for various possibilities and to make contingency plans. Human hunters demonstrate this when they lay a trap in front of all their prey’s potential escape routes rather than just in front of one. Our simple test of this capacity was to show a group of chimpanzees and orangutans a vertical tube and drop a reward at the top so they could catch it at the bottom. We compared the apes’ performance with that of a group of human children aged two to four doing the same thing. Both groups readily anticipated that the reward would reappear at the bottom of the tube: they held their hand under the exit to prepare for the catch.

      Next, however, we made events a little harder to predict. The straight tube was replaced by an upside-down Y-shaped tube that had two exits. In preparation for the drop, the apes and the two-year-old children alike tended to cover only one of the potential exits and thus ended up catching the reward in only half of the trials. But four-year-olds immediately and consistently covered both exits with their hands, thus demonstrating the capacity to prepare for at least two mutually exclusive versions of an imminent future event. Between ages two and four, we could see this contingency planning increase in frequency. We saw no such ability among the apes.

      This experiment does not prove, however, that apes and two-year-old humans have no understanding that the future can unfold in distinct ways. As I mentioned, there is a fundamental problem when it comes to showing the absence of a capacity. Perhaps the animals were not motivated, did not understand the basic task or could not coordinate two hands. Or maybe we simply tested the wrong individuals, and more competent animals might be able to pass.

      To truly prove this ability is absent, a scientist would have to test all animals, at all times, on some fool-proof task. Clearly, that is not practical. All we can do is give individuals the chance to demonstrate competence. If they consistently fail, we can become more confident that they really do not have the capacity in question, but even then, future work may prove that wrong….

      The earliest evidence of deliberate practice is more than a million years old. The Acheulean stone tools of Homo erectus some 1.8 million years ago already suggest considerable foresight, as they appeared to have been carried from one place to another for repeated use. Crafting these tools requires considerable knowledge about rocks and how to work them. At some sites, such as Olorgesailie in Kenya, the ground is still littered with shaped stones, raising the question of why our ancestors kept making more tools when there were plenty lying around. The answer is that they were probably practicing how to manufacture those tools. Once they were proficient, they could wander the plains knowing they could make a new tool if the old one broke. These ancestors were armed and ready to reload….

      None of this is an excuse for arrogance. It is, in fact, a call for care. We are the only creatures on this planet with these abilities. As Spider-Man’s Uncle Ben declared, communicating complex ideas in an urge to connect with his superhero nephew, “With great power comes great responsibility.”

An Evolved Uniqueness

[These excerpts are from an article by Kevin Laland in the September 2018 issue of Scientific American.]

      Most people on this planet blithely assume, largely without any valid scientific rationale, that humans are special creatures, distinct from other animals. Curiously, the scientists best qualified to evaluate this claim have often appeared reticent to acknowledge the uniqueness of Homo sapiens, perhaps for fear of reinforcing the idea of human exceptionalism put forward in religious doctrines. Yet hard scientific data have been amassed across fields ranging from ecology to cognitive psychology affirming that humans truly are a remarkable species.

      The density of human populations far exceeds what would be typical for an animal of our size. We live across an extraordinary geographical range and control unprecedented flows of energy and matter: our global impact is beyond question. When one also considers our intelligence, powers of communication, capacity for knowledge acquisition and sharing—along with magnificent works of art, architecture and music we create—humans genuinely do stand out as a very different kind of animal. Our culture seems to separate us from the rest of nature, and yet that culture, too, must be a product of evolution.

      The challenge of providing a satisfactory scientific explanation for the evolution of our species’ cognitive abilities and their expression in our culture is what I call “Darwin’s Unfinished Symphony.” That is because Charles Darwin began the investigation of these topics some 150 years ago, but as he himself confessed, his understanding of how we evolved these attributes was in his own words “imperfect” and “fragmentary.” Fortunately, other scientists have taken up the baton, and there is an increasing feeling among those of us who conduct research in this field that we are closing in on an answer.

      The emerging consensus is that humanity’s accomplishments derive from an ability to acquire knowledge and skills from other people. Individuals then build iteratively on that reservoir of pooled knowledge over long periods. This communal store of experience enables creation of ever more efficient and diverse solutions to life’s challenges. It was not our large brains, intelligence or language that gave us culture but rather our culture that gave us large brains, intelligence and language. For our species and perhaps a small number of other species, too, culture transformed the evolutionary process.

      The term “culture” implies fashion or haute cuisine, but boiled down to its scientific essence, culture comprises behavior patterns shared by members of a community that rely on socially transmitted information. Whether we consider automobile designs, popular music styles, scientific theories or the foraging of small-scale societies, all evolve through endless rounds of innovations that add incremental refinements to an initial baseline of knowledge. Perpetual, relentless copying and innovation—that is the secret of our species’ success….

      Animals also “innovate.” When prompted to name an innovation, we might think of the invention of penicillin by Alexander Fleming or the construction of the World Wide Web by Tim Berners-Lee. The animal equivalents are no less fascinating. My favorite concerns a young chimpanzee called Mike, whom primatologist Jane Goodall observed devising a noisy dominance display that involved banging two empty kerosene cans together. This exhibition thoroughly intimidated Mike’s rivals and led to him shooting up the social rankings to become alpha male in record time. Then there is the invention by Japanese carrion crows of using cars to crack open nuts. Walnut shells are too tough for crows to crack in their beaks, but they nonetheless feed on these nuts by placing them in the road for cars to run over, returning to retrieve their treats when the lights turn red. And a group of starlings—birds famously fond of shiny objects used as nest decorations—started raiding a coin machine at a car wash in Fredericksburg, Va., and made off with, quite literally, hundreds of dollars in quarters….

      Such stories are more than just enchanting snippets of natural history. Comparative analyses reveal intriguing patterns in the social learning and innovation exhibited by animals. The most significant of these discoveries finds that innovative species, as well as animals most reliant on copying, possess unusually large brains (both in absolute terms and relative to body size). The correlation between rates of innovation and brain size was initially observed in birds at St. Andrews in Scotland, but this research has since been replicated in primates….

      …The results suggested that natural selection does not favor more and more social learning but rather a tendency toward better and better social learning. Animals do not need a big brain to copy, but they do need a big brain to copy well.

      …Those primates that excel at social learning and innovation are the same species that have the most diverse diets, use tools and extractive foraging, and exhibit the most complex social behavior. In fact, statistical analyses suggest that these abilities vary in lockstep so tightly that one can align primates along a single dimension of general cognitive performance, which we call primate intelligence (loosely analogous to IQ in humans).

      Chimpanzees and orangutans excel in all these performance measures and have high primate intelligence, whereas some nocturnal prosimians are poor at most of them and have a lower metric. The strong correlations between primate intelligence and both brain size measures and performance in laboratory tests of learning and cognition validate the use of the metric as a measure of intelligence….

      Cultural drive is not the only cause of primate brain evolution: diet and sociality are also important because fruit-eating primates and those living in large, complex groups possess large brains. It is difficult, however, to escape the conclusion that high intelligence and longer lives co-evolved in some primates because their cultural capabilities allowed them to exploit high-quality but difficult-to-access food resources, with the nutrients gleaned “paying” for brain growth. Brains are energetically costly organs, and social learning is paramount to animals gathering the resources necessary to grow and maintain a large brain efficiently.

      …Why haven’t chimpanzees sequenced genomes or built space rockets? Mathematical theory has provided some answers. The secret comes down to the fidelity of information transmission from one member of a species to another, the accuracy with which learned information passes between transmitter and receiver. The size of a species’ cultural repertoire and how long cultural traits persist in a population both increase exponentially with transmission fidelity. Above a certain threshold, culture begins to ratchet up in complexity and diversity. Without accurate transmission, cumulative culture is impossible. But once a given threshold is surpassed, even modest amounts of novel invention and refinement lead rapidly to massive cultural change. Humans are the only living species to have passed this threshold.
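
      A back-of-the-envelope model shows why fidelity matters so much. Suppose each generation every cultural trait survives transmission with probability f, and a fixed number of new traits are invented; the repertoire then settles at inventions / (1 − f), which explodes as f approaches 1. This is my own toy sketch, not the mathematical theory the article refers to.

```python
# Toy model of cumulative culture (my assumption, not the cited theory):
# each generation a trait survives transmission with probability f, and
# `inventions` new traits appear. Losses balance gains at
# N = inventions / (1 - f).
def equilibrium_repertoire(fidelity, inventions=1.0):
    return inventions / (1.0 - fidelity)

for f in (0.50, 0.90, 0.99, 0.999):
    print(f"fidelity {f}: about {equilibrium_repertoire(f):,.0f} traits sustained")
```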

      Our ancestors achieved high-fidelity transmission through teaching—behavior that functions to facilitate a pupil's learning. Whereas copying is widespread in nature, teaching is rare, and yet teaching is universal in human societies once the many subtle forms this practice takes are recognized….

      …Agriculture freed societies from the constraints that the peripatetic lives of hunter-gatherers imposed on population size and any inclinations to create new technologies. In the absence of this constraint, agricultural societies flourished, both because they outgrew hunter-gatherer communities by increasing the carrying capacity of a given area for food production and because agriculture triggered a raft of associated innovations that dramatically changed human society. In the larger societies supported by increasing farming yields, beneficial innovations were more likely to spread and be retained. Agriculture precipitated a revolution not only by triggering the invention of related technologies—ploughs or irrigation technology, among others—but also by spawning entirely unanticipated initiatives, such as the wheel, city-states and religions.

      The emerging picture of human cognitive evolution suggests that we are largely creatures of our own making. The distinctive features of humanity—our intelligence, creativity, language, as well as our ecological and demographic success—are either evolutionary adaptations to our ancestors' own cultural activities or direct consequences of those adaptations. For our species' evolution, cultural inheritance appears every bit as important as genetic inheritance.

      We tend to think of evolution through natural selection as a process in which changes in the external environment, such as predators, climate or disease, trigger evolutionary refinements in an organism’s traits. Yet the human mind did not evolve in this straightforward way. Rather our mental abilities arose through a convoluted, reciprocal process in which our ancestors constantly constructed niches (aspects of their physical and social environments) that fed back to impose selection on their bodies and minds, in endless cycles….But our ability to think, learn, communicate and control our environment makes humanity genuinely different from all other animals.

The So-Called Right to Try

[These excerpts are from an article by Claudia Wallis in the September 2018 issue of Scientific American.]

      There’s no question about it: the new law sounds just great. President Donald Trump, who knows a thing or two about marketing, gushed about its name when he signed the “Right to Try” bill into law on May 30. He was surrounded by patients with incurable diseases, including a second grader with Duchenne muscular dystrophy, who got up from his small wheelchair to hug the president. The law aims to give such patients easier access to experimental drugs by bypassing the Food and Drug Administration.

      The crowd-pleasing name and concept are why 40 states had already passed similar laws, although they were largely symbolic until the federal government got onboard. The laws vary but generally say that dying patients may seek from drugmakers any medicine that has passed a phase I trial—a minimal test of safety. “We’re going to be saving tremendous numbers of lives,” Trump said. “The current FDA approval process can take many, many years. For countless patients, time is not what they have.”

      But the new law won’t do what the president claims. Instead it gives false hope to the most vulnerable patients….

      In fact, for decades pharmaceutical companies have made unapproved drugs available through programs overseen by the FDA. This “expanded access” is aimed at extremely ill patients who, for one reason or another, do not qualify for formal drug studies. A 2016 report shows that the FDA receives more than 1,000 annual requests on behalf of such patients and approves 99.7 percent of them. It acts immediately in emergency cases or else within days, according to FDA commissioner Scott Gottlieb.

      Of course, there are barriers to getting medicines that may not be effective or safe. Some patients cannot find a doctor to administer them or an institution that will let them be used on-site. And many of these drugs are simply not made available. Drugmakers cannot be compelled to do so: a 2007 federal court decision found “there is no fundamental right... of access to experimental drugs for the terminally ill.” The new law changes none of this….

      Unethical companies, however, may find fresh opportunities to prey on desperate patients under the new law. It releases doctors, hospitals and drugmakers from liability. And although it stipulates that manufacturers can charge patients only what it costs to provide the drug, there is no required preapproval of these charges by the FDA, as there is with expanded access. Such issues led dozens of major patient-advocacy groups to oppose the legislation, which was originally drafted and promoted by the Goldwater Institute, a libertarian think tank.

Oil in Your Wine

[These excerpts are from an article by Lucas Laursen in the September 2018 issue of Scientific American.]

      Every great bottle of wine begins with a humble fungal infection. Historically, wine-makers relied on naturally occurring yeasts to convert grape sugars into alcohol; modern vintners typically buy one of just a few laboratory-grown strains. Now, to set their products apart, some of the best winemakers are revisiting nature’s lesser-used microbial engineers. Not all these strains can withstand industrial production processes and retain their efficacy—but a natural additive offers a possible solution, new research suggests.

      Industrial growers produce yeast in the presence of oxygen, which can damage cell walls and other important proteins during a process called oxidation. This can make it harder for yeasts—which are dehydrated for shipping—to perform when winemakers revive them. Biochemist Emilia Matallana of the University of Valencia in Spain and her colleagues have been exploring practical ways to fend off such oxidation for years. After showing that pure antioxidants worked, they began searching for a more affordable natural source. They found it in argan, an olivelike fruit used for food and cosmetics. The trees it grows on are famously frequented by domesticated goats.

      Matallana and her team treated three varieties of wine yeast (Saccharomyces cerevisiae) with argan oil, dehydrated them and later rehydrated them. The oil protected important proteins in the yeasts from oxidation and boosted wine fermentation….

      Microbiologists are now interested in studying how and why each yeast strain responded to the argan oil as it did….The oil may one day enable vintners to use a wider range of specialized yeasts, putting more varied wines on the menu. As for how the oil affected the wine's taste, Matallana says it was “nothing weird.”

Why Sex Matters in Alzheimer’s

[This excerpt is from an article by Rebecca Nebel in the September 2018 issue of Scientific American.]

      Growing older may be inevitable, but getting Alzheimer’s disease is not. Although we can’t stop the aging process, which is the biggest risk factor for Alzheimer’s, there are many other factors that can be modified to lower the risk of dementia.

      Yet our ability to reduce Alzheimer’s risk and devise new strategies for prevention and treatment is impeded by a lack of knowledge about how and why the disease differs between women and men. There are tantalizing hints in the literature about factors that act differently between the sexes, including hormones and specific genes, and these differences could be important avenues of research. Unfortunately, in my experience, most studies of Alzheimer’s risk combine data for women and men….

      Risk factors are just one of the areas in which we need more research into the differences between the sexes in Alzheimer’s. Scientists have often overlooked sex differences in diagnosis, clinical trial design, treatment outcomes and caregiving. This bias has impeded progress in detection and care.

      Approaches that incorporate sex differences into research have advanced innovation in respect to many diseases. We need to do the same in Alzheimer’s. Looking at these differences will greatly enhance our understanding of this thief of minds and improve health outlooks for all.

Creative Science

[These excerpts are from an editorial by Steve Metz in the September 2018 issue of The Science Teacher.]

      In his famous 1959 Rede lecture, the chemist and novelist C. P. Snow lamented the modern creation of “Two Cultures,” one scientific and technical, the other humanistic and artistic. But it seems clear that science and the arts both spring from the same deep well of human creativity and imagination, as Richard Holmes explains in his wonderful book, The Age of Wonder. Holmes describes the deep connection Enlightenment scientists like astronomers Caroline and William Herschel, physicist Michael Faraday, and chemists Antoine Lavoisier and Sir Humphry Davy (who was also a published poet) had with Romantic artists, including the poets Samuel Coleridge and John Keats, the novelist Mary Shelley, and others.

      Job security increasingly requires imagination and creativity. As routine tasks become digitized and automated, successful workers will be those who imagine and create. Along with critical thinking, communication, and collaboration, creativity is one of the “Four Cs”—the 21st century learning and innovation skills that prepare students for increasingly complex life and work environments….It’s also at the apex of Bloom's taxonomy of learning objectives….

      Students often associate creativity with painting, music, and writing, but not with science. They think that scientists and engineers follow routine procedures and that science itself is a set of facts and vocabulary to memorize. And who can blame them? Too often their science classes involve passive note-taking, rote memorization, and step-by-step laboratory activities designed to produce a single right answer.

Our Looming Lead Problem

[These excerpts are from an article by Frederick Rowe Davis in the August 17, 2018, issue of Science.]

      On 25 April 2014, Flint Mayor Dayne Walling ceremoniously shut off a valve to the Detroit water supply and opened the flow of the Flint River into local homes and businesses. He marked the occasion by drinking from a glass filled with water from the new source. Eighteen months later, the water supply was switched back to Detroit. What occurred in between—the city’s failure to control infrastructure corrosion, the deterioration of fresh water entering Flint residences, citizen complaints, government denials, elevated lead levels in children, and public outcry—would become the basis for a crisis that rose to national attention in 2015….

      But the story of Flint is also one of resilience. Residents rose up and challenged the indifference and false palliatives offered by the authorities, who for many months proclaimed that the water supply in Flint was safe for drinking….

      The water crisis in Flint is profoundly worrisome: Numerous children suffered lead poisoning as a direct result of a bureaucratic focus on the fiscal rather than the social. With the huge amount of lead incorporated into the nation's infrastructure, many other communities are just a few poor decisions from a similar fate.

Global Warming Policy: Is Population Left Out in the Cold?

[These excerpts are from an article by John Bongaarts and Brian C. O’Neill in the August 17, 2018, issue of Science.]

      Would slowing human population growth lessen future impacts of anthropogenic climate change? With an additional 4 billion people expected on the planet by 2100, the answer seems an obvious “yes.” Indeed, substantial scientific literature backs up this intuition. Many nongovernmental organizations undertake climate- and population-related activities, and national adaptation plans for most of the least-developed countries recognize population growth as an important component of vulnerability to climate impacts. But despite this evidence, much of the climate community, notably the Intergovernmental Panel on Climate Change (IPCC), the primary source of scientific information for the international climate change policy process, is largely silent about the potential for population policy to reduce risks from global warming. Though the latest IPCC report includes an assessment of technical aspects of ways in which population and climate change influence each other, the assessment does not extend to population policy as part of a wide range of potential adaptation and mitigation responses. We suggest that four misperceptions by many in the climate change community play a substantial role in neglect of this topic, and propose remedies for the IPCC as it prepares for the sixth cycle of its multiyear assessment process.

      Population-related policies—such as offering voluntary family planning services as well as improved education for women and girls—can have many of the desirable characteristics of climate response options: benefits to both mitigation and adaptation, co-benefits with human well-being and other environmental issues, synergies with Sustainable Development Goals (SDGs), and cost effectiveness. These policies can also enable women to achieve their desired family size, and lead to lower fertility and slower population growth. The resulting demographic changes can not only lessen the emissions that drive climate change but also improve the ability of populations to adapt to its consequences.

      • MISPERCEPTION 1: POPULATION GROWTH IS NO LONGER A PROBLEM

      • MISPERCEPTION 2: POPULATION POLICIES ARE NOT EFFECTIVE

      • MISPERCEPTION 3: POPULATION DOES NOT MATTER MUCH FOR CLIMATE

      • MISPERCEPTION 4: POPULATION POLICY IS TOO CONTROVERSIAL TO SUCCEED

Administrators: Be Intentional ‘For All’

[These excerpts are from an editorial by Sharon Delesbore in the September 2018 issue of NSTA Reports.]

      School district leaders and campus administrators must take the helm and realize that science instruction must be a priority for a sustainable society. Because science understanding is not assessed as frequently as math and reading—and often left out of funding calculations—its importance has been woefully negated, and our workforce is suffering from a lack of qualified science-literate candidates. Even more dismal is the rarity of science-literate candidates from underrepresented populations in the global schema. This is not just about ethnicity or low socioeconomic status, but also about access, now more than ever.

      …What I do not see is an influx of campus administrators seeking opportunities to develop their capacity in science education to support their teachers.

      As educators and humans in general, we tend to focus on and assist in areas in which we are strong, confident, and successful. When math or science is discussed, the common comments are “I was not good at that,” or “Those subjects scare me.” Many adults believe science and math are difficult subjects and transfer those beliefs to their children at an early age, inadvertently laying the foundation for barriers for their children. Combined with the negative reinforcement of few or poor experiences with science engagement, they are creating a formula for STEM evasion.

Clinical Trials Need More Diversity

[This excerpt is from an editorial by the editors in the September 2018 issue of Scientific American.]

      Nearly 40 percent of Americans belong to a racial or ethnic minority, but the patients who participate in clinical trials for new drugs skew heavily white—in some cases, 80 to 90 percent. Yet nonwhite patients will ultimately take the drugs that come out of clinical studies, and that leads to a real problem. The symptoms of conditions such as heart disease, cancer and diabetes, as well as the contributing factors, vary across lines of ethnicity, as they do between the sexes. If diverse groups aren’t part of these studies, we can't be sure whether the treatment will work in all populations or what side effects might emerge in one group or another.

      This isn’t a new concern. In 1993 Congress passed the National Institutes of Health Revitalization Act, which required the agency to include more women and people of color in their research studies. It was a step in the right direction, and to be sure, the percentage of women in clinical trials has grown significantly since then.

      But participation by minorities has not increased much at all: a 2014 study found that fewer than 2 percent of more than 10,000 cancer clinical trials funded by the National Cancer Institute focused on a racial or ethnic minority. And even if the other trials fulfilled those goals, the 1993 law regulates only studies funded by the NIH, which represent a mere 6 percent of all clinical trials.

      The shortfall is especially troubling when it comes to trials for diseases that particularly affect marginalized racial and ethnic groups. For example, Americans of African descent are more likely to suffer from respiratory ailments than white Americans are; however, as of 2015, only 1.9 percent of all studies of respiratory disease included minority subjects, and fewer than 5 percent of NIH-funded respiratory research included racial minorities.

      The problem is not necessarily that researchers are unwilling to diversify their studies. Members of minority groups are often reluctant to participate. Fear of discrimination by medical professionals is one reason. Another is that many ethnic and racial minorities do not have access to the specialty care centers that recruit subjects for trials. Some may also fear possible exploitation, thanks to a history of unethical medical testing in the U.S. (the infamous Tuskegee experiments, in which black men were deliberately left untreated for syphilis, are perhaps the best-known example). And some minorities simply lack the time or financial resources to participate.

      The problem is not confined to the U.S., either. A recent study of trials involving some 150,000 patients in 29 countries at five different time points over the past 21 years showed that the ethnic makeup of the trials was about 86 percent white….

Remember the Alamo! Remember Goliad!

[These excerpts are from pages 154-157 of The Eagle and the Raven by James A. Michener.]

      For thirteen relentless days the Mexican troops besieged the defenders of the Alamo. Santa Anna conducted the battle under a blood-red flag to the marching tune Degüello, each symbol traditionally meaning: ‘Surrender now or you will be executed when we win.’ In the final charge through the walls of the Alamo there would be no quarter; the men inside knew that this time the often windy cry ‘Victory or Death’ meant what it said.

      Early in the morning of the thirteenth day Santa Anna’s foot soldiers, lashed forward by officers who did not care how many of their men were lost in the attack, stormed the walls, overcame the defenders, and slaughtered every man inside, American or Mexican. Jim Bowie was slain in his sick bed. Captain Travis was shot on the walls. What exactly happened to Davy Crockett would never be known—dead inside the walls or murdered later outside as a prisoner of war. But dead.

      The Texicans lost 186, the Mexicans about 600. One Texican, a grizzled French veteran of the Napoleonic wars, had fled the fort before the last fight began. Santa Anna had won a crushing victory and had been remorseless in exterminating Texicans.

      Any evaluation of Mexico’s stunning victory at the Alamo must consider two hideous acts of Santa Anna, acts so contemptuous of the customary decencies which always existed between honorable adversaries that they enraged the Americans, kindling fires of revenge that would be extinguished only in the equally horrible acts that followed the great Tejas victory at San Jacinto. When a group of prisoners was brought to Santa Anna he said scornfully: ‘I do not want to see those men living’ and despite the rules which ensured safety to soldiers who surrendered honorably, he growled: ‘Shoot them,’ and they were executed.

      Enraging the Texicans even more was his treatment of the bodies of the men who had died bravely trying to defend the Alamo. Instead of providing the bodies with a decent burial, or turning them over to their friends for proper burial, he had the corpses thrown into a great pile, as if they were useless timbers, then cremated in slow fires for two and a half days. When the heat subsided, scavenging citizens probed the ashes for metal items of value, after which the bones were left to dogs and vultures. When news of the desecration circulated, a fearful oath was sworn by Texicans: ‘Remember the Alamo!’

      Flushed by his triumph, Santa Anna behaved as if he were all-powerful. When a detachment of his army won a second victory at Goliad, some miles southeast of San Antonio, taking more than four hundred prisoners during several skirmishes, he ordered the general in charge to execute every man….

      On a bright spring morning, the Mexicans marched nearly four hundred unarmed prisoners in three different groups along country roads, suddenly turning on them and murdering three hundred and forty-two. Those who escaped by fleeing across fields, ducking into woods, and throwing themselves into streams in which they swam to safety, spread news of the massacre. Across Texas men whispered with a terrifying lust for vengeance: ‘Remember the Alamo! Remember Goliad!’

How Islands Shrink People

[These excerpts are from an article by Ann Gibbons in the August 3, 2018, issue of Science.]

      Living on an island can have strange effects. On Cyprus, hippos dwindled to the size of sea lions. On Flores in Indonesia, extinct elephants weighed no more than a large hog, but rats grew as big as cats. All are examples of the so-called island effect, which holds that when food and predators are scarce, big animals shrink and little ones grow. But no one was sure whether the same rule explains the most famous example of dwarfing on Flores, the odd extinct hominin called the hobbit, which lived 60,000 to 100,000 years ago and stood about a meter tall.

      Now, genetic evidence from modern pygmies on Flores—who are unrelated to the hobbit—confirms that humans, too, are subject to so-called island dwarfing….Flores pygmies differ from their closest relatives on New Guinea and in East Asia in carrying more gene variants that promote short stature. The genetic differences testify to recent evolution—the island rule at work. And they imply that the same force gave the hobbit its short stature….

      The team found no trace of archaic DNA that could be from the hobbit. Instead, the pygmies were most closely related to other East Asians. The DNA suggested that their ancestors came to Flores in several waves: in the past 50,000 years or so, when modern humans first reached Melanesia; and in the past 5000 years, when settlers came from both East Asia and New Guinea.

      The pygmies’ genomes also reflect an environmental shift. They carry an ancient version of a gene that encodes enzymes to break down fatty acids in meat and seafood. It suggests their ancestors underwent a “big shift in diet” after reaching Flores, perhaps eating pygmy elephants or marine foods…

      The discovery fits with a recent study suggesting evolution also favored short stature in people on the Andaman Islands….Such selection on islands boosts the theory that the hobbit, too, was once a taller species that dwindled in height over millennia on Flores.

Did Kindness Prime Our Species for Language?

[These excerpts are from an article by Michael Erard and Catherine Matacic in the August 3, 2018, issue of Science.]

      The idea is rooted in a much older one: that humans tamed themselves. This self-domestication hypothesis, which got its start with Charles Darwin, says that when early humans started to prefer cooperative friends and mates to aggressive ones, they essentially domesticated themselves….Along with tameness came evolutionary changes seen in other domesticated mammals—smoother brows, shorter faces, and more feminized features—thanks in part to lower levels of circulating androgens (such as testosterone) that tend to promote aggression.

      Higher levels of neurohormones such as serotonin were also part of the domestication package. Such pro-social hormones help us infer others' mental states, learn through joint attention, and even link objects and labels—all prerequisites for language….

      If early humans somehow developed their own lower-stress “domesticated” environment—perhaps as a result of easier access to food—it could have fostered more cooperation and reduced aggression….

      …In a famous experiment, Russian geneticist Dmitry Belyaev and colleagues selected for tameness among captured Siberian silver foxes starting in the 1950s. If a wild fox did not attack a human hand placed into its cage, it was bred. Over 50 generations, the foxes came to look like other domesticated species, with shorter faces, curly tails, and lighter coloring—traits that have since been linked to shifts in prenatal hormones.

      Unlike their wild counterparts, tame foxes came to understand the importance of human pointing and gazing….That ability to “mind read” is key to language. Thus, even though the foxes don’t vocalize in complex ways, they show that selection only for tameness can carry communication skills in its wake.

Nudge, not Sludge

[These excerpts are from an editorial by Richard H. Thaler in the August 3, 2018, issue of Science.]

      Helpful nudges abound—good signage, text reminders of appointments, and thoughtfully chosen default options are all nudges. For example, by automatically enrolling people into retirement savings plans from which they can easily opt out, people who always meant to join a plan but never got around to it will have more comfortable retirements.

      Yet, the same techniques for nudging can be used for less benevolent purposes. Take the enterprise of marketing goods and services. Firms may encourage buyers in order to maximize profits rather than to improve the buyers’ welfare (think of financier Bernie Madoff who defrauded thousands of investors). A common example is when firms offer a rebate to customers who buy a product, but then require them to mail in a form, a copy of the receipt, the SKU bar code on the packaging, and so forth. These companies are only offering the illusion of a rebate to the many people like me who never get around to claiming it. Because of such thick sludge, redemption rates for rebates tend to be low, yet the lure of the rebate still can stimulate sales—call it “buy bait.”

      Public sector sludge also comes in many forms. For example, in the United States, there is a program called the earned income tax credit that is intended to encourage work and transfer income to the working poor. The Internal Revenue Service has all the information necessary to make adjustments for credit claims by any eligible taxpayer who files a tax return. But instead, the rules require people to fill out a form that many eligible taxpayers fail to complete, thus depriving themselves of the subsidy that Congress intended they receive.

      Similarly, one of the most important rights of citizens is the ability to vote. Increased voter participation can be nudged by automatically registering anyone who applies for a driver’s license. But voter participation can also be decreased through sludge, as the state of Ohio has recently done, by purging from its list of eligible voters those who have not voted recently and who have not responded to a postcard prompt. Defenders of such sludge claim that it serves as a protection against voter fraud, despite the fact that people who intentionally vote illegally are rare.

      So, sludge can take two forms. It can discourage behavior that is in a person's best interest, such as claiming a rebate or tax credit, and it can encourage self-defeating behavior, such as investing in a deal that is too good to be true.

      Let’s continue to encourage everyone to nudge for good, but let's also urge those in both the public and private sectors to engage in sludge cleanup campaigns. Less sludge will make the world a better place.

Calculus of Probabilities

[This excerpt is from pages 216-217 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      Chevalier de Mere was fond of a dice game which was played in the following way. A die was thrown four times in succession and one of the players bet that a six would appear at least once in four throws while the other bet against. Mere found that there was greater chance in favour of the first player, that is, of getting a six at least once in four throws. Tired of it, he introduced a variation. The game was now played with two dice instead of one and the betting was on the appearance or non-appearance of at least one double-six in twenty-four throws. Mere found that this time the player who bet against the appearance of a double-six won more frequently. This seemed strange, as at first sight the chance of getting at least one six in four throws should be the same as that of at least one double-six in twenty-four. Mere asked the contemporary mathematician, Fermat, to explain this paradox.

      Fermat showed that while the odds in favour of a single six in four throws were a little more than even (actually about 51:49), those in favour of a double-six in twenty-four throws were a little less than even, being 49:51. In solving this paradox Fermat virtually created a new science, the Calculus of Probabilities. It was soon discovered that the new calculus could not only handle problems posed by gamblers like Mere, but it could also aid financial speculators engaged in marine insurance.
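
[Editor’s note, not part of Singh’s text: the two games look alike only because 4/6 equals 24/36, but probabilities over repeated independent throws do not scale that way. By the complement rule, P(at least one success) = 1 - P(no successes). A minimal check in Python, using only the standard library:]

      from fractions import Fraction

      # Chance of no six in one throw: 5/6; of no double-six with two dice: 35/36.
      p_six_in_4 = 1 - Fraction(5, 6) ** 4        # at least one six in 4 throws
      p_dsix_in_24 = 1 - Fraction(35, 36) ** 24   # at least one double-six in 24 throws

      print(float(p_six_in_4))    # ~0.5177, a little better than even (about 51:49)
      print(float(p_dsix_in_24))  # ~0.4914, a little worse than even (about 49:51)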

Western Machine Civilization

[These excerpts are from pages 50-52 and 64-65 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      Lewis Mumford has divided the history of the Western machine civilisation during the past millennium into three successive but over-lapping and interpenetrating phases. During the first phase—the eotechnic phase—trade, which at the beginning was no more than an irregular trickle, grew to such an extent that it transformed the whole life of Western Europe. It is true that the development of trade led to a steady growth of manufacture as well, but throughout this period (which lasted till about the middle of the eighteenth century) trade on the whole dominated manufacture. Thus it was that the minds of men were occupied more with problems connected with trade, such as the evolution of safe and reliable methods of navigation, than with those of manufacture. Consequently, while the two ancient sources of power, wind and water, were developed at a steadily accelerating pace to increase manufacture, the attention of most leading scientists, particularly during the last three centuries of this phase, was directed towards the solution of navigational problems. The chief and most difficult of these was that of finding the longitude of a ship at sea. It was imperative that a solution should be found as the inability to determine longitudes led to very heavy shipping losses. Newton had tackled it, although without providing a satisfactorily practical answer. In fact, as Hessen has shown, Newton’s masterpiece, the Principia, was in part an endeavour to deal with the problems of gravity, planetary motions and the shape and size of the earth, in order to meet the demands for better navigation. It was shown that the most promising method of determining longitude from observation of heavenly bodies was provided by the moon. The theory of lunar motion, therefore, began to absorb the attention of an increasing number of distinguished mathematicians of England, France, Germany and America.

      Although more arithmetic and algebra were devoted to Lunar Theory than to any other question of astronomy or mathematical physics, a solution was not found till the middle of the eighteenth century, when successful chronometers, that could keep time on a ship in spite of pitching and rolling in rough weather, were constructed. Once the problem of longitude was solved it led to a further growth of trade, which in turn induced a corresponding increase in manufacture. A stage was now reached when the old sources of power, namely wind and water, proved too ‘weak, fickle, and irregular’ to meet the needs of a trade that had burst all previous bounds. Men began to look for new sources of power rather than new trade routes.

      This change marks the beginning of Mumford's second phase, the palaeotechnic phase, which ushered in the era of the ‘dark Satanic mills’. As manufacture began to dominate trade, the problem of discovering new prime movers became the dominant social problem of the time. It was eventually solved by the invention of the steam engine. The discovery of the power of steam—the chief palaeotechnic source of power—was not the work of ‘pure’ scientists; it was made possible by the combined efforts of a long succession of technicians, craftsmen and engineers from Porta, Rivault, Caus, Branca, Savery and Newcomen to Watt and Boulton.

      Although the power of steam to do useful work had been known since the time of Hero of Alexandria (A.D. 50), the social impetus to make it the chief prime mover was lacking before the eighteenth century. Further, a successful steam engine could not have been invented even then had it not been for the introduction by craftsmen of more precise methods of measurement in engineering design. Thus, the success of the first two engines that Watt erected at Bloomfield colliery in Staffordshire, and at John Wilkinson’s new foundry at Broseley, depended in a great measure on the accurate cylinders made by Wilkinson’s new machine tool with a limit of error not exceeding ‘the thickness of a thin sixpence’ in a diameter of seventy-two inches. The importance of the introduction of new precision tools, producing parts with increasingly narrower ‘tolerances’, in revolutionising production has never been fully recognised. The transformation of the steam engine from the wasteful burner of fuel that it was at the time of Newcomen into the economical source of power that it became in the hands of Watt and his successors seventy years later, was achieved as much by the introduction of precision methods in technology as by Black’s discovery of the latent heat of steam….

      For about 150 years after Newton the study of the motion of material bodies, whether cannon balls and bullets, to meet the needs of war, or the moon and planets to meet those of navigation, closely followed the Newtonian tradition. Then, just as it was about to end up in high-brow pedantry, it was rescued by the emergence of a new science—electricity—in much the same way as cybernetics was to rescue mathematical logic a century later.

      Though known from earliest times, electricity became a quantitative science in 1819, when Oersted accidentally observed that the flow of an electric current in a wire deflected a compass needle in its neighbourhood. This was the first explicit revelation of the profound connection between electricity and magnetism, already suspected on account of a number of analogies between the two. A little later Faraday showed that this connection was no mere one-sided affair. If electricity in motion produced magnetism, then equally magnets in motion produced electricity. In other words, electricity in motion produced the same effects as stationary magnets and magnets in motion the same effects as electricity.

      This reciprocal relation between electricity and magnetism led straightway to a whole host of new inventions, from the electric telegraph and telephone to the electric motor and dynamo. In fact, it is the seed from which has sprouted the whole of the heavy electrical industry, which was destined to transform the palaeotechnic phase of Western machine civilisation, with its ugly, dark and satanic mills, into the neotechnic phase based on electric power. But before this industry could arise, the results of two generations of experiments and prevailing ideas in different fields of physics—electricity, magnetism and light—had to be rationalised and synthesised in a mathematically coherent theory capable of experimental verification. Now the results of the mathematical theory depended for their verification on the establishment of accurate units for electricity—a task necessary before it could be commercialised for household use. The theory, thus verified, in turn formed the basis of electrical engineering, itself the result of a complete interpenetration of deductive reasoning and experimental practice. It reached the apogee of its success when Hertz experimentally demonstrated the existence of electromagnetic waves, which Maxwell had postulated on purely theoretical grounds, and from which wireless telegraphy and all that it implies was to arise later.

      Maxwell’s theory was actually a mathematisation of the earlier physical intuitions of Faraday. In this he used all the mathematical apparatus of mechanics and calculus of the Newtonian period. But in some important and puzzling respects the new laws of electromagnetism differed from those of Newton. In the first place, all the forces between bodies that Newton considered, such as the force of earth’s gravity on falling bodies, acted along the line joining their centres. But a magnetic pole near a current-carrying wire is urged to move at right angles to the line joining it to the wire. Secondly, electromagnetic theory was differentiated from the earlier gravitation theory of Newton in its insistence that electric and magnetic energy actually resided in the surrounding empty space. According to this view the forces acting on electrified and magnetised bodies did not form the whole system of forces in action but only served to reveal the presence of a vastly more intricate system of forces acting everywhere in free space….

Cinderella and Equations

[This excerpt is from page 43 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      It is needless to add that it is easier to write equations, whether differential, integral or integro-differential, than to solve them. Only a small class of such equations has been solved explicitly. In some cases, when, due to its importance in physics or elsewhere, we cannot do without an equation of the insoluble variety, we use the equation itself to define the function, just as Prince Charming used the glass slipper to define Cinderella as the girl who could wear it. Very often the artifice works; it suffices to isolate the function from other undesirables in much the same way as the slipper sufficed to distinguish Cinderella from her ugly sisters.

An Earlier Perception of Computers

[This excerpt is from pages 24-26 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      Now, as Norbert Wiener has remarked, the human and animal nervous systems, which too are capable of the work of a computation system, contain elements—the nerve cells or neurons—which are ideally suited to act as relays:

      ‘While they show rather complicated properties under the influence of electrical currents, in their ordinary physiological action they conform very nearly to the “all-or-none” principle; that is, they are either at rest, or when they “fire” they go through a series of changes almost independent of the nature and intensity of the stimulus.’ This fact provides the link between the art of calculation and the new science of Cybernetics, recently created by Norbert Wiener and his collaborators.
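
[Editor’s note, not part of Singh’s text: the “all-or-none” behaviour Wiener describes is what makes a neuron usable as a relay, an idea later formalised in the McCulloch-Pitts threshold unit. A minimal sketch in Python; the gates and weights are illustrative, not from the book:]

      def fires(inputs, weights, threshold):
          # All-or-none: the unit either fires (1) or rests (0); its response
          # does not grow with a stronger stimulus once the threshold is passed.
          return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

      # Binary relays wired as logic gates, the raw material of computation:
      AND = lambda a, b: fires([a, b], [1, 1], threshold=2)
      OR = lambda a, b: fires([a, b], [1, 1], threshold=1)

      print(AND(1, 1), AND(1, 0))  # 1 0
      print(OR(1, 0), OR(0, 0))    # 1 0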

      This science (cybernetics) is the study of the 'mechanism of control and communication in the animal and the machine', and bids fair to inaugurate a new social revolution likely to be quite as profound as the earlier Industrial Revolution inaugurated by the invention of the steam engine. While the steam engine devalued brawn, cybernetics may well devalue brain—at least, brain of a certain sort. For the new science is already creating machines that imitate certain processes of thought and do some kinds of mental work with a speed, skill and accuracy far beyond the capacity of any living human being.

      The mechanism of control and communication between the brain and various parts of an animal is not yet clearly understood. We still do not know very much about the physical process of thinking in the animal brain, but we do know that the passage of some kind of physico-chemical impulse through the nerve-fibres between the nuclei of the nerve cells accompanies all thinking, feeling, seeing, etc. Can we reproduce these processes by artificial means? Not exactly, but it has been found possible to imitate them in a rudimentary manner by substituting wire for nerve-fibre, hardware for flesh, and electro-magnetic waves for the unknown impulse in the living nerve-fibre. For example, the process whereby flatworms exhibit negative phototropism—that is, a tendency to avoid light—has been imitated by means of a combination of photocells, a Wheatstone bridge and certain devices to give an adequate phototropic control for a little boat. No doubt it is impossible to build this apparatus on the scale of the flatworm, but this is only a particular case of the general rule that the artificial imitations of living mechanisms tend to be much more lavish in the use of space than their prototypes. But they more than make up for this extravagance by being enormously faster. For this reason, rudimentary as these artificial reproductions of cerebral processes still are, the thinking machines already produced achieve the respective purposes for which they are designed incomparably better than any human brain.

      As the study of cybernetics advances—and it must be remembered that this science is just an infant barely ten years old—there is hardly any limit to what these thinking-machines may not do for man. Already the technical means exist for producing automatic typists, stenographers, multilingual interpreters, librarians, psychopaths, traffic regulators, factory-planners, logical truth calculators, etc. For instance, if you had to plan a production schedule for your factory, you would need only to put into a machine a description of the orders to be executed, and it would do the rest. It would know how much raw material is necessary and what equipment and labour are required to produce it. It would then turn out the best possible production schedule showing who should do what and when.

      Or again, if you were a logician concerned with evaluating the logical truth of certain propositions deducible from a set of given premises, a thinking machine like the Kahn-Burkhart Logical Truth Calculator could work it out for you very much faster and with much less risk of error than any human being. Before long we may have mechanical devices capable of doing almost anything from solving equations to factory planning. Nevertheless, no machine can create more thought than is put into it in the form of the initial instructions. In this respect it is very definitely limited by a sort of conservation law, the law of conservation of thought or instruction. For none of these machines is capable of thinking anything new.

      A 'thinking machine' merely works out what has already been thought of beforehand by the designer and supplied to it in the form of instructions. In fact, it obeys these instructions as literally as the unfortunate Casabianca boy, who remained on the burning deck because his father had told him to do so. For instance, if in the course of a computation the machine requires the quotient of two numbers of which the divisor happens to be zero, it will go on, Sisyphus-wise, trying to divide by zero for ever unless expressly forbidden by prior instruction. A human computer would certainly not go on dividing by zero, whatever else he might do. The limitation imposed by the aforementioned conservation law has made it necessary to bear in mind what Hartree has called the 'machine-eye view' in designing such machines. In other words, it is necessary to think out in advance every possible contingency that might arise in the course of the work and give the machine appropriate instructions for each case, because the machine will not deviate one whit from what the ‘Moving Finger’ of prior instructions has already decreed. Although the limitation imposed by this conservation law on the power of machines to produce original thinking is probably destined to remain forever, writers have never ceased to speculate on the danger to man from robot machines of his own creation. This, for example, is the moral of stories as old as those of Faustus and Frankenstein, and as recent as those of Karel Capek's play R.U.R. and Olaf Stapledon’s Last and First Men.
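
[Editor’s note, not part of Singh’s text: Hartree’s ‘machine-eye view’ survives in everyday programming, where the designer must still anticipate the zero divisor in advance. A minimal sketch in Python; the function is illustrative:]

      def quotient(dividend, divisor):
          # The prior instruction Singh describes: tell the machine explicitly
          # what to do when the divisor is zero, instead of letting it grind on.
          if divisor == 0:
              return None  # the contingency thought out in advance
          return dividend / divisor

      print(quotient(10, 2))  # 5.0
      print(quotient(10, 0))  # None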

      It is true that as yet there is no possibility whatsoever of constructing Frankenstein monsters, Rossum robots or Great Brains—that is, artificial beings possessed of a 'free will' of their own. This, however, does not mean that the new developments in this field are without danger to mankind. The danger from the robot machines is not technical but social. It is not that they will disobey man but that if introduced on a large enough scale, they are liable to lead to widespread unemployment.

Irrational Numbers

[This excerpt is from pages 15-16 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      The discovery of magnitudes which, like the diagonal of a unit square, cannot be measured by any whole number or rational fraction, that is, by means of integers, singly or in couples, was first made by Pythagoras some 2500 years ago. This discovery was a great shock to him. For he was a number mystic who looked upon integers…as the essence and principle of all things in the universe. When, therefore, he found that the integers did not suffice to measure even the length of the diagonal of a unit square, he must have felt like a Titan cheated by the gods. He swore his followers to a secret vow never to divulge the awful discovery to the world at large and turned the Greek mind once for all away from the idea of applying numbers to measure geometrical lengths. He thus created an impassable chasm between algebra and geometry that was not bridged till the time of Descartes nearly 2000 years later.
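
[Editor’s note, not part of Singh’s text: the discovery can be restated in two lines. By the Pythagorean theorem the diagonal d of a unit square satisfies

      d^2 = 1^2 + 1^2 = 2,  so  d = sqrt(2).

If sqrt(2) were a ratio p/q of integers in lowest terms, then p^2 = 2q^2 would force p to be even, say p = 2r; substituting gives q^2 = 2r^2, forcing q to be even as well and contradicting lowest terms. Hence no whole number or rational fraction can measure the diagonal.]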

The Value of Mathematics

[This excerpt is from page 3 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      Mathematics is too intimately associated with science to be explained away as a mere game. Science is serious work serving social ends. To isolate mathematics from the social conditions which bring mathematics…into existence is to do violence to history.

A Quantum Future Awaits

[This excerpt is from an article by Jacob M. Taylor in the July 27, 2018, issue of Science.]

      A century ago, the quantum revolution quietly began to change our lives. A deeper understanding of the behavior of matter and light at atomic and subatomic scales sparked a new field of science that would vastly change the world's technology landscape. Today, we rely upon the science of quantum mechanics for applications ranging from the Global Positioning System to magnetic resonance imaging to the transistor. The advent of quantum computers presages yet another new chapter in this story that will enable us not only to predict and improve chemical reactions and new materials and their properties, for example, but also to provide insights into the emergence of spacetime and our universe. Remarkably, these advances may begin to be realized in a few years.

      From initial steps in the 1980s to today, science and defense agencies around the world have supported the basic research in quantum information science that enables advanced sensing, communication, and computational systems. Recent improvements in device performance and quantum bit (“qubit”) approaches show the possibility of moderate-scale quantum computers in the near future. This progress has focused the scientific community on, and engendered substantial new industrial investment for, developing machines that produce answers we cannot simulate even with the world’s fastest supercomputer (currently the Summit supercomputer at the U.S. Department of Energy’s Oak Ridge National Laboratory in Tennessee).

      Achieving such quantum computational supremacy is a natural first goal. It turns out, however, that devising a classical computer to approximate quantum systems is sometimes good enough for the purposes of solving certain problems. Furthermore, most quantum devices have errors and produce correct results with a decreasing probability as problems become more complicated. Only with substantial math from quantum complexity theory can we actually separate “stupendously hard” problems to solve from just “really hard” ones. This separation of classical and quantum computation is typically described as approaching quantum supremacy. A device that demonstrates a separation may rightly deserve to be called the world’s first quantum computer and will represent a leap forward for theoretical computer science and even for our understanding of the universe.

Astro Worms

[This excerpt is from an article by Katherine Kornei in the August 2018 issue of Scientific American.]

      Caenorhabditis elegans would make an ace fighter pilot. That's because the roughly one-millimeter-long roundworm, a type of nematode that is widely used in biological studies, is remarkably adept at tolerating acceleration. Human pilots lose consciousness when they pull only 4 or 5 g’s (1 g is the force of gravity at Earth’s surface), but C. elegans emerges unscathed from 400,000 g’s, new research shows.

      This is an important benchmark; rocks have been theorized to experience similar forces when blasted off planet surfaces and into space by volcanic eruptions or asteroid impacts. Any hitchhiking creatures that survive could theoretically seed another planet with life, an idea known as ballistic panspermia.

Capture that Carbon

[These excerpts are from an article by Madison Freeman and David Yellen in the August 2018 issue of Scientific American.]

      The conclusion of the Paris Agreement in 2015, in which almost every nation committed to reduce their carbon emissions, was supposed to be a turning point in the fight against climate change. But many countries have already fallen behind their goals, and the U.S. has now announced it will withdraw from the agreement. Meanwhile emissions worldwide continue to rise.

      The only way to make up ground is to aggressively pursue an approach that takes advantage of every possible strategy to reduce emissions. The usual suspects, such as wind and solar energy and hydropower, are part of this effort, but it must also include investing heavily in carbon capture, utilization and storage (CCUS)—a cohort of technologies that pull carbon dioxide from smokestacks, or even from the air, and convert it into useful materials or store it underground….

      Without CCUS, the level of cuts needed to keep global warming to two degrees Celsius (3.6 degrees Fahrenheit)—the upper limit allowed in the Paris Agreement—probably cannot be achieved, according to the International Energy Agency. By 2050 carbon capture and storage must provide at least 13 percent of the reductions needed to keep warming in check, the agency calculates….

      CCUS technologies can also help decarbonize emissions in heavy industry—including production of cement, refined metals and chemicals—which accounts for almost a quarter of U.S. emissions. In addition, direct carbon-removal technology—which captures and converts carbon dioxide from the air rather than from a smokestack—can offset emissions from industries that cannot readily implement other clean technology, such as agriculture.

      The basic idea of carbon capture has faced a lot of opposition. Skepticism has come from climate deniers, who see it as a waste of money, and from passionate supporters of climate action, who fear that it would be used to justify continued reliance on fossil fuels. Both groups are ignoring the recent advances and the opportunity they present. By limiting investment in decarbonization, the world will miss a major avenue for reducing emissions both in the electricity sector and in a variety of industries. CCUS can also create jobs and profits from what was previously only a waste material by creating a larger economy around carbon.

      For CCUS to succeed, the federal government must kick in funding for basic research and development and offer incentives such as tax breaks for carbon polluters who adopt the technology. The Trump administration has repeatedly tried to slash energy technology R&D, with the Department of Energy’s CCUS R&D cut by as much as 76 percent in proposed budgets. But this funding must be protected….

      The transition to clean energy has become inevitable. But that transition’s ability to achieve deep decarbonization will falter without this wide range of solutions, which must include CCUS.

Physics Makes Rules, Evolution Rolls the Dice

[These excerpts are from a book review by Chico Camargo in the July 20, 2018, issue of Science.]

      Picture a ladybug in motion. The image that came into your head is probably one of a small, round red-and-black insect crawling up a leaf. After reading Charles Cockell’s The Equations of Life, however, you may be more likely to think of this innocuous organism as a complex biomechanical engine, every detail honed and operating near thermodynamic perfection.

      In a fascinating journey across physics and biology Cockell builds a compelling argument for how physical principles constrain the course of evolution. Chapter by chapter, he aims his lens at all levels of biological organization, from the molecular machinery of electron transport to the social organisms formed by ant colonies. In each instance, Cockell shows that although these structures might be endless in their detail, they are bounded in their form. If organisms were pawns in a game of chess, physics would be the board and its rules, limiting how the game unfolds.

      Much of the beauty of this book is in the diversity of principles it presents. In the chapter dedicated to the physics of the ladybug, for example, Cockell first describes an unassuming assignment in which students are asked to study the properties of the insect. Physical principles emerge naturally: from the surface tension and viscous forces between the ladybug’s feet and vertical surfaces, to the diffusion-driven pattern formation on its back, to the thermodynamics of surviving as a small insect at water-freezing temperatures. These discussions are accompanied by a series of equations that one would probably not expect to see in a single textbook, as various branches of physics—from physical chemistry to optics—are discussed side by side.

      Physics itself is different at different scales. A drop of water, for example, is inconsequential to a human being. If you are a ladybug, however, water surface tension is a potential problem: Having a drop of water on your back might become as burdensome as a heavy backpack that can’t be discarded. For a tiny ant, a droplet large enough can turn into a watery prison because the molecular forces in play are too strong for the insect to escape….

      At the end of every chapter, the reader is reminded of how the laws of physics nudge, narrow, mold, shape, and restrict the “endless forms most beautiful” that Charles Darwin once described. Cockell’s persistence pays off as he gears up for his main argument: If life exists on other planets, it has to abide by the same laws as on Earth.

      Because the atoms in the Milky Way behave the same as in any other galaxy, Cockell argues that water in other galaxies will still be an abundant solvent, carbon should still be the preferred choice for self-replicating complex molecules, and the thermodynamics of life should still be the same. Sure, a cow on a hypothetical planet 10 times the diameter of Earth would need wider, stronger legs, but there is no reason to believe that replaying evolution on another planet would lead to unimaginable life forms. Rather, one should expect to see variations on the same theme.

      Cockell ends the book by celebrating the elegant equations that represent the relations between form and function. Rather than being a lifeless form of reductionism, equations, he argues, are our window into what physics renders possible (or impossible) for life to achieve. In equations, we express how our biosphere is full of symmetry, pattern, and law. Within them, we express the boldest claim of them all: that these limitations should be no less than universal.

Space, Still the Final Frontier

[These excerpts are from an editorial by Daniel N. Baker and Amal Chandran in the July 20, 2018, issue of Science.]

      …At the height of the Cold War in the 1960s and 1970s, space science and human space exploration offered a channel for citizens from the East and West to communicate and share ideas. Space has continued to be a domain of collaboration and cooperation among nations. The International Space Station has been a symbol of this notion for the past 20 years, and it is expected to be used by many nations until 2028. By contrast, there have been recent trends toward increased militarization of space with more—not less—fractionalization among nations. As well, the commercial sector is becoming a key player in exploring resource mining, tourism, colonization, and national security operations in space. Thus, space is becoming an arena for technological shows of economic and military force. However, nations are realizing that the Outer Space Treaty of 1967 needs to be reexamined in light of today’s new space race—a race that now includes many more nations. No one nation or group of nations has ever claimed sovereignty over the “high frontier” of space, and, simply put, this should never be allowed to happen….

      As was true during the Cold War, there are still political differences on Earth, but in space we should together seek to push forward the frontiers of knowledge with a common sense of purpose and most certainly in a spirit of peaceful cooperation.

A Path to Clean Water

[These excerpts are from an article by Klaus Kummerer, Dionysios D. Dionysiou, Oliver Olsson and Despo Fatta-Kassinos in the July 20, 2018, issue of Science.]

      Chemicals, including pharmaceuticals, are necessary for health, agriculture and food production, industrial production, economic welfare, and many other aspects of modern life. However, their widespread use has led to the presence of many different chemicals in the water cycle, from which they may enter the food chain. The use of chemicals will further increase with growth, health, age, and living standard of the human population. At the same time, the need for clean water will also increase, including treated wastewater for food production and high-purity water for manufacturing electronics and pharmaceuticals. Climate change is projected to further reduce water availability in sufficient quantity and quality. Considering the limits of effluent treatment, there is an urgent need for input prevention at the source and for the development of chemicals that degrade rapidly and completely in the environment.

      Conventional wastewater treatment has contributed substantially to the progress in health and environmental protection. However, as the diversity and volume of chemicals used have risen, water pollution levels have increased, and conventional treatment of wastewater and potable water has become less efficient. Even advanced wastewater and potable water treatments, such as extended filtration and activated carbon or advanced oxidation processes, have limitations, including increased demand for energy and additional chemicals; incomplete or, for some pollutants, no removal from the wastewater; and generation of unwanted products from parent compounds, which may be more toxic than their parent compounds. Microplastics are also not fully removed, and advanced treatment such as ozonation can lead to the increased transfer of antibiotic resistance genes, preferential enhancement of opportunistic bacteria, and strong bacterial population shifts.

      Furthermore, water treatment is far from universal. Sewer pipes can leak, causing wastewater and its constituents to infiltrate groundwater. During and after heavy rain events, wastewater and urban stormwater runoff are redirected to protect sewage treatment plants; this share of wastewater is not treated. Such events, as well as urban flooding, are likely to increase in the future because of climate change. Globally, 80% or more of wastewater is not treated.

      … given the ever-increasing list of chemicals that are introduced into the aquatic environment, attempts to assess harm and introduce thresholds will tend to lag new introductions. A preventive approach is therefore also needed. For example, giving companies relief from effluent charges if they use compounds from a list proven to be of low toxicity and readily mineralized—such as the abovementioned cellulose microbeads—could provide strong incentives for creating more sustainable products.

Prepare for Water Day Zero

[These excerpts are from an editorial by the editors in the August 2018 issue of Scientific American.]

      Earlier this year ominous headlines blared that Cape Town, South Africa, was headed for Day Zero—the date when the city’s taps would go dry because its reservoirs would become dangerously low on water. That day—originally expected in mid-April—has been postponed until at least 2019 as of this writing, thanks to water rationing and a welcome rainy season. But the conditions that led to this desperate situation will inevitably occur again, hitting cities all over the planet.

      As the climate warms, extreme droughts and vanishing water supplies will likely become more common. But even without the added impact of climate change, normal rainfall variation plays an enormous role in year-to-year water availability. These ordinary patterns now have extraordinary effects because urban populations have had a tremendous growth spurt: by 2050 the United Nations projects that two thirds of the world’s people will live in cities. Urban planners and engineers need to learn from past rainfall variability to improve their predictions and take future demand into account to build more resilient infrastructure.

      …since 2015 the region has been suffering from the worst drought in a century, and the water in those reservoirs dwindled perilously. Compounding the problem, Cape Town's population has grown substantially, increasing demand. The city actually did a pretty good job of keeping demand low by reducing leaks in the system, a major cause of water waste….But the government of South Africa was slow to declare a national disaster in the areas hit hardest by the drought, paving the way for the recent crisis.

      Cape Town is not alone. Since 2014 southeastern Brazil has been suffering its worst water shortage in 80 years, resulting from decreased rainfall, climate change, poor water management, deforestation and other factors. And many cities in India do not have access to municipal water for more than a few hours a day, if at all….

      In the U.S., the situation is somewhat better, but many urban centers still face water problems. California’s recent multiyear drought led to some of the state's driest years on record. Fortunately, about half of the state's urban water usage is for landscaping, so it was able to cut back on that fairly easily. But cities that use most of their water for more essential uses, such as drinking water, may not be so adaptable. In addition to the problems that drought, climate change and population growth bring, some cities face threats of contamination; crises such as the one in Flint, Mich., arose because the city changed the source of its water, causing lead to leach into it from pipes. If other cities are forced to change their water suppliers, they could face similar woes.

      Fortunately, steps can be taken to avoid urban water crises. In general, a “portfolio approach” that relies on multiple water sources is probably most effective. Cape Town has already begun implementing a number of water-augmentation projects, including tapping groundwater and building water-recycling plants. Many other cities will need to repair existing water infrastructure to cut down on leakage….

      The global community has an opportunity right now to take action to prevent a series of Day Zero crises. If we don’t act, many cities may soon face a time when there isn’t a drop to drink.

It’s Critical

[These excerpts are from an editorial by Steve Metz in the August 2018 issue of The Science Teacher.]

      One of the most important things students can learn in their science classes is the ability to think critically. Science content is important, of course. Our future scientists and engineers need deep understanding of the big ideas of science, as do all citizens. But students must also develop the life-long habit of critical, analytical thinking and evidence-based reasoning. Scientific facts and ideas are not enough. The ability to think critically gives these ideas meaning and is required for assessment of truth and falsehood.

      On the internet and social media the importance of critical thinking—and the notable lack thereof—are on full display. Clickbait promotes outrageous headlines untethered to reality. Statements are made with no concern for supporting evidence. We often accept a claim or counterclaim based on personal belief, reliance on authority, or downright prejudice rather than on evidence-based critical thinking.

      A recent study by researchers at Stanford found that students from middle school to college could not distinguish between a real and false news source, concluding that “Overall, young people’s ability to reason about the information on the Internet can be summed up in one word: bleak”….This is particularly troubling in a world where people—especially young people—increasingly depend on social media as their primary source of news and information.

      The unfortunate fact is that if left to itself, human thinking can be biased, distorted, uninformed, and incomplete. We often believe what we want to believe. Confirmation bias is real, especially in a world where media allow us to choose to view only what uncritically supports our own beliefs. The results include acceptance of fantastic conspiracy theories, pseudoscience, angry rhetoric, and untested assumptions—often leading to poor decision-making, both at the personal and societal level.

      Critical thinking is a difficult, higher-order skill that, like all such skills, requires intensive, deliberate practice. At its core is a healthy skepticism that questions everything, treats all conclusions as tentative, and sets aside interpretations that are not supported by multiple lines of reliable evidence….

Bringing Darwin Back

[These excerpts are from an article by Adam Piore in the August 2018 issue of Scientific American.]

      Straight talk about evolution in classrooms is less common than one might think. According to the most comprehensive study of public school biology teachers, just 28 percent implement the major recommendations and conclusions of the National Research Council, which call for them to “unabashedly introduce evidence that evolution has occurred and craft lesson plans so that evolution is a theme that unifies disparate topics in biology,” according to a 2011 Science article by Pennsylvania State University political scientists Michael Berkman and Eric Plutzer.

      Conversely, 13 percent of teachers (found in virtually every state in the Union and the District of Columbia) reported explicitly advocating creationism or intelligent design by spending at least an hour of class time presenting it in a positive light. Another 5 percent said they endorsed creationism in passing while answering student questions.

      The majority—60 percent of teachers—either attempted to avoid the topic of evolution altogether, quickly blew past it, allowed students to debate evolution, or “watered down” their lessons, Plutzer says. Many said they feared the reaction of students, parents and religious members of their community. And although only 2 percent of teachers reported avoiding the topic entirely, 17 percent, or roughly one in six teachers, avoided discussing human evolution. Many others simply raced through it.

      To confront these challenges, several organizations have launched new kinds of training sessions that are aimed at better preparing teachers for what they will face in the classroom. Moreover, a growing number of researchers have begun to examine the causes of these teaching failures and new ways to overcome them.

      Among many educators, a new idea has begun to take root: perhaps it is time to rethink the way evolution teachers grapple with religion (or choose not to grapple with it) in the classroom….

      For decades the most high-stakes, high-profile battles over evolution education were fought in the courts and state legislatures. The debate centered on, among other things, whether the subject itself could be banned or whether lawmakers could require that equal time be given to the biblical account of creation or the idea of “intelligent design.” Now, with those questions largely resolved—courts have overwhelmingly sided with those pushing to keep evolution in the classroom and creationism out of it—the battle lines have moved into the schools themselves….

      Today, many are now realizing, the far larger obstacle for the vast majority of ordinary science teachers is the legacy of acrimony left over from the decades of legal battles. In many communities, evolution education remains so charged with controversy that teachers either water down their lesson plans, devote as little time as possible to the subject or attempt to avoid it altogether.

      Meanwhile teachers in deeply religious communities such as Baconton face an additional challenge. Often they lack tools and methods that allow them to teach evolution in a way that does not force those students to take sides—a choice that usually does not go well for the scientists perceived to be at war with their community.

      Without such tools, even those teachers who do feel confident with the material often have trouble convincing students to listen to their lesson plans with an open mind—or to listen to them at all….

STEMM Education Should Get “HACD”

[These excerpts are from an article by Robert Root-Bernstein in the July 6, 2018, issue of Science.]

      If you’ve ever had a medical procedure, chances are you benefited from the arts. The stethoscope was invented by a French flautist/physician named Rene Laennec who recorded his first observations of heart sounds in musical notation. The suturing techniques used for organ transplants were adapted from lace-making by another Frenchman, Nobel laureate Alexis Carrel. The methods (and some of the tools) required to perform the first open-heart surgeries were invented by an African-American innovator named Vivien Thomas, whose formal training was as a master carpenter.

      But perhaps you’re more of a technology lover. The idea of instantaneous electronic communication was the invention of one of America's most famous artists, Samuel Morse, who built his first telegraph on a canvas stretcher. Actress Hedy Lamarr collaborated with the avant-garde composer George Antheil to invent modern encryption of electronic messages. Even the electronic chips that run our phones and computers are fabricated using artistic inventions: etching, silk-screen printing, and photolithography.

      On 7 May 2018, the Board on Higher Education and Workforce of the U.S. National Academies of Sciences, Engineering, and Medicine (NASEM) released a report recommending that humanities, arts, crafts, and design (HACD) practices be integrated with science, technology, engineering, mathematics, and medicine (STEMM) in college and post-graduate curricula. The motivation for the study is the growing divide in American educational systems between traditional liberal arts curricula and job-related specialization….

      Because the ecology of education is so complex, the report concludes that there is no one, or best, way to integrate arts and humanities with STEMM learning, nor any single type of pedagogical experiment or set of data that proves incontrovertibly that integration is the definitive answer to improved job preparedness. Nonetheless, a preponderance of evidence converges on the conclusion that incorporating HACD into STEMM pedagogies can improve STEMM performance.

      Large-scale statistical studies have demonstrated significant correlations between the persistent practice of HACD and various measures of STEMM achievement….STEMM professionals with avocations such as wood- and metalworking, printmaking, painting, and music composition are more likely to file and license patents and to found companies than those who lack such experience. Likewise, authors who publish high-impact papers are more likely to paint, sculpt, act, engage in wood- or metalworking, or pursue creative writing….

      Every scientist knows that correlation is not causation, but many STEMM professionals report that they actively integrate their HACD and STEMM practices….

The Power of Many

[These excerpts are from an article by Elizabeth Pennisi in the June 29, 2018, issue of Science.]

      Billions of years ago, life crossed a threshold. Single cells started to band together, and a world of formless, unicellular life was on course to evolve into the riot of shapes and functions of multicellular life today, from ants to pear trees to people. It’s a transition as momentous as any in the history of life, and until recently we had no idea how it happened.

      The gulf between unicellular and multicellular life seems almost unbridgeable. A single cell's existence is simple and limited. Like hermits, microbes need only be concerned with feeding themselves; neither coordination nor cooperation with others is necessary, though some microbes occasionally join forces. In contrast, cells in a multicellular organism, from the four cells in some algae to the 37 trillion in a human, give up their independence to stick together tenaciously; they take on specialized functions, and they curtail their own reproduction for the greater good, growing only as much as they need to fulfill their functions. When they rebel, cancer can break out….

      Multicellularity brings new capabilities. Animals, for example, gain mobility for seeking better habitat, eluding predators, and chasing down prey. Plants can probe deep into the soil for water and nutrients; they can also grow toward sunny spots to maximize photosynthesis. Fungi build massive reproductive structures to spread their spores….

      …The evolutionary histories of some groups of organisms record repeated transitions from single-celled to multicellular forms, suggesting the hurdles could not have been so high. Genetic comparisons between simple multicellular organisms and their single-celled relatives have revealed that much of the molecular equipment needed for cells to band together and coordinate their activities may have been in place well before multicellularity evolved. And clever experiments have shown that in the test tube, single-celled life can evolve the beginnings of multicellularity in just a few hundred generations—an evolutionary instant.

      Evolutionary biologists still debate what drove simple aggregates of cells to become more and more complex, leading to the wondrous diversity of life today. But embarking on that road no longer seems so daunting….

      …Some have argued that 2-billion-year-old, coil-shaped fossils of what may be blue-green or green algae—found in the United States and Asia and dubbed Grypania spiralis—or 2.5-billion-year-old microscopic filaments recorded in South Africa represent the first true evidence of multicellular life. Other kinds of complex organisms don’t show up until much later in the fossil record. Sponges, considered by many to be the most primitive living animal, may date back to 750 million years ago, but many researchers consider a group of frondlike creatures called the Ediacarans, common about 570 million years ago, to be the first definitive animal fossils. Likewise, fossil spores suggest multicellular plants evolved from algae at least 470 million years ago.

      Plants and animals each made the leap to multicellularity just once. But in other groups, the transition took place again and again. Fungi likely evolved complex multicellularity in the form of fruiting bodies—think mushrooms—on about a dozen separate occasions….The same goes for algae: Red, brown, and green algae all evolved their own multicellular forms over the past billion years or so….

See-through Solar Cells Could Power Offices

[These excerpts are from an article by Robert F. Service in the June 29, 2018, issue of Science.]

      Lance Wheeler looks at glassy skyscrapers and sees untapped potential. Houses and office buildings, he says, account for 75% of electricity use in the United States, and 40% of its energy use overall. Windows, because they leak energy, are a big part of the problem….

      A series of recent results points to a solution, he says: Turn the windows into solar panels. In the past, materials scientists have embedded light-absorbing films in window glass. But such solar windows tend to have a reddish or brown tint that architects find unappealing. The new solar window technologies, however, absorb almost exclusively invisible ultraviolet (UV) or infrared light. That leaves the glass clear while blocking the UV and infrared radiation that normally leak through it, sometimes delivering unwanted heat. By cutting heat gain while generating power, the windows “have huge prospects,” Wheeler says, including the possibility that a large office building could power itself.

      Most solar cells, like the standard crystalline silicon cells that dominate the industry, sacrifice transparency to maximize their efficiency, the percentage of the energy in sunlight converted to electricity. The best silicon cells have an efficiency of 25%. Meanwhile, a new class of opaque solar cell materials, called perovskites, is closing in on silicon with top efficiencies of 22%. Not only are the perovskites cheaper than silicon, they can also be tuned to absorb specific frequencies of light by tweaking their chemical recipe.

Tomorrow’s Earth

[These excerpts are from an editorial by Jeremy Berg in the June 29, 2018, issue of Science.]

      Our planet is in a perilous state. The combined effects of climate change, pollution, and loss of biodiversity are putting our health and well-being at risk. Given that human actions are largely responsible for these global problems, humanity must now nudge Earth onto a trajectory toward a more stable, harmonious state. Many of the challenges are daunting, but solutions can be found….

      Many of today’s challenges can be traced back to the “Tragedy of the Commons” identified by Garrett Hardin in his landmark essay, published in Science 50 years ago. Hardin warned of a coming population-resource collision based on individual self-interested actions adversely affecting the common good. In 1968, the global population was about 3.5 billion; since then, the human population has more than doubled, a rise that has been accompanied by large-scale changes in land use, resource consumption, waste generation, and societal structures….

      Through collective action, we can indeed achieve planetary-scale mitigation of harm. A case in point is the Montreal Protocol on Substances that Deplete the Ozone Layer, the first treaty to achieve universal ratification by all countries in the world. In the 1970s, scientists had shown that chemicals used as refrigerants and propellants for aerosol cans could catalyze the destruction of ozone. Less than a decade later, these concerns were exacerbated by the discovery of seasonal ozone depletion over Antarctica. International discussions on controlling the use of these chemicals culminated in the Montreal Protocol in 1987. Three decades later, research has shown that ozone depletion appears to be decreasing in response to industrial and domestic reforms that the regulations facilitated.

      More recent efforts include the Paris Agreement of 2015, which aims to keep a global temperature rise this century well below 2°C and to strengthen the ability of countries to deal with the impacts of climate change, and the United Nations Sustainable Development Goals. As these examples show, there is widespread recognition that we must reverse damaging planetary change for the sake of the next generation. However, technology alone will not rescue us. For changes to be willingly adopted by a majority of people, technology and engineering will have to be integrated with social sciences and psychology….Although human population growth is escalating, we have never been so affluent. Along with affluence comes increasing use of energy and materials, which puts more pressure on the environment. How can humanity maintain high living standards without jeopardizing the basis of our survival?

      As our “Tomorrow’s Earth” series…will highlight, rapid research and technology developments across the sciences can help to facilitate the implementation of potentially corrective options. There will always be varying expert opinions on what to do and how to do it. But as long as there are options, we can hope to find the right paths forward.

Greenhouse Gases

[This excerpt is from chapter nine of Caesar’s Last Breath by Sam Kean.]

      Greenhouse gases got their name because they trap incoming sunlight, albeit not directly. Most incoming sunlight strikes the ground first and warms it. The ground then releases some of that heat back toward space as infrared light. (Infrared light has a longer wavelength than visible light; for our purposes, it's basically the same as heat.) Now, if the atmosphere consisted of nothing but nitrogen and oxygen, this infrared heat would indeed escape into space, since diatomic molecules like N2 and O2 cannot absorb infrared light. Gases like carbon dioxide and methane, on the other hand, which have more than two atoms, can and do absorb infrared heat. And the more of these many-atomed molecules there are, the more heat they absorb. That’s why scientists single them out as greenhouse gases: they’re the only fraction of the air that can trap heat this way.

      Scientists define the greenhouse effect as the difference between a planet’s actual temperature and the temperature it would be without these gases. On Mars, the sparse CO2 coverage raises its temp by less than 10°F. On Venus, greenhouse gases add a whopping 900°F. Earth sits between these extremes. Without greenhouse gases, our average global temperature would be a chilly 0°F, below the freezing point of water. With greenhouse gases, the average temp remains a balmy 60°F. Astronomers often talk about how Earth orbits at a perfect distance from the sun—a “Goldilocks distance” where water neither freezes nor boils. Contra that cliché, it’s actually the combination of distance and greenhouse gases that gives us liquid H2O. Based on orbiting distance alone, we’d be Hoth.

      By far the most important greenhouse gas on Earth, believe it or not, is water vapor, which raises Earth’s temperature 40 degrees all by itself. Carbon dioxide and other trace gases contribute the remaining 20. So if water actually does more, why has CO2 become such a bogeyman? Mostly because carbon dioxide levels are rising so quickly. Scientists can look back at the air in previous centuries by digging up small bubbles trapped beneath sheets of ice in the Arctic. From this work, they know that for most of human history the air contained 280 molecules of carbon dioxide for every million particles overall. Then the Industrial Revolution began, and we started burning ungodly amounts of hydrocarbons, which release CO2 as a by-product. To give you a sense of the scale here, in an essay he wrote for his grandchildren in 1882, steel magnate Henry Bessemer boasted that Great Britain alone burned fifty-five Giza pyramids’ worth of coal each year. Put another way, he said, this coal could “build a wall round London of 200 miles in length, 100 feet high, and 41 feet 11 inches in thickness—a mass not only equal to the whole cubic contents of the Great Wall of China, but sufficient to add another 346 miles to its length.” And remember, this was decades before automobiles and modern shipping and the petroleum industry. Carbon dioxide levels reached 312 parts per million in 1950 and have since zoomed past 400.
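
[A rough check of how those concentrations translate into growth rates, sketched in Python. The 1750 start date and a reading of about 405 ppm for 2017, when the book appeared, are my assumptions, not figures from the excerpt:]

    # Average CO2 growth rates implied by the excerpt's concentrations.
    ppm_1750, ppm_1950, ppm_2017 = 280, 312, 405   # 1750 and 2017 values assumed
    rate_before = (ppm_1950 - ppm_1750) / (1950 - 1750)
    rate_after = (ppm_2017 - ppm_1950) / (2017 - 1950)
    print(f"before 1950: {rate_before:.2f} ppm per year")  # ≈ 0.16
    print(f"after 1950:  {rate_after:.2f} ppm per year")   # ≈ 1.39, roughly 9x faster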

      People who pooh-pooh climate change often point out, correctly, that CO2 concentrations have been fluctuating for millions of years, long before humans existed, sometimes peaking at levels a dozen times higher than those of today. It’s also true that Earth has natural mechanisms for removing excess carbon dioxide—a nifty negative feedback loop whereby ocean water absorbs excess CO2, converts it to minerals, and stores it underground. But when seen from a broader perspective, these truths deteriorate into half-truths. Concentrations of CO2 varied in the past, yes — but they’ve never spiked as quickly as in the past two centuries. And while geological processes can sequester CO2 underground, that work takes millions of years. Meanwhile, human beings have dumped roughly 2,500 trillion pounds of extra CO2 into the air in the past fifty years alone. (That's over 1.6 million pounds per second. Think about how little gases weigh, and you can appreciate how staggeringly large these figures are.) Open seas and forests will gobble up roughly half that CO2, but nature simply can’t bail fast enough to keep up.
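
[The parenthetical rate is easy to verify from the excerpt’s own figures; a minimal sketch in Python:]

    # 2,500 trillion pounds of CO2 over fifty years, expressed per second.
    total_lb = 2_500e12
    seconds = 50 * 365.25 * 24 * 3600
    print(f"{total_lb / seconds:,.0f} lb per second")  # ≈ 1,584,000, the "1.6 million" quoted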

      Things look even grimmer when you factor in other greenhouse gases. Molecule for molecule, methane absorbs twenty-five times more heat than carbon dioxide. One of the main sources of methane on Earth today is domesticated cattle: each cow burps up an average of 570 liters of methane per day and farts 30 liters more; worldwide, that adds up to 175 billion pounds of CH4 annually — some of which degrades due to natural processes, but much of which doesn’t. Other gases do even more damage. Nitrous oxide (laughing gas) sponges up heat three hundred times more effectively than carbon dioxide. Worse still are CFCs, which not only kill ozone but trap heat several thousand times better than carbon dioxide. Collectively CFCs account for one-quarter of human-induced global warming, despite having a concentration of just a few parts per billion in the air.
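
[The cattle figures hang together at the order-of-magnitude level. A sketch, assuming methane weighs about 0.7 grams per liter at ambient conditions; the density and the pound conversion are mine, not the excerpt’s:]

    # Per-cow methane converted to pounds per year, and the herd size
    # implied by the excerpt's 175-billion-pound worldwide total.
    liters_per_day = 570 + 30                              # burps plus farts, per cow
    lb_per_cow_year = liters_per_day * 0.7 * 365 / 453.6   # grams -> pounds
    implied_herd = 175e9 / lb_per_cow_year
    print(f"≈{lb_per_cow_year:.0f} lb of CH4 per cow per year")  # ≈ 338
    print(f"implied herd ≈ {implied_herd:.1e} cattle")  # ≈ 5e8, the right order for the world's cattle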

      And CFCs aren’t even the worst problem. The worst problem is a positive feedback loop involving water. Positive feedback—like the screech you hear when two microphones get acquainted—involves a self-perpetuating cycle that spirals out of control. In this case, excess heat from greenhouse gases causes ocean water to evaporate at a faster rate than normal. Water, remember, is one of the best (i.e., worst) greenhouse gases around, so this increased water vapor traps more heat. This causes temperatures to inch up a bit more, which causes more evaporation. This traps still more heat, which leads to more evaporation, and so on. Pretty soon it's Venus outside. The prospect of a runaway feedback loop shows why we should care about things like a small increase in CFC concentrations. A few parts per billion might seem too small to make any difference, but if chaos theory teaches us anything, it’s that tiny changes can lead to huge consequences.

Chaos

[This excerpt is from chapter eight of Caesar’s Last Breath by Sam Kean.]

      …But Lorenz made us confront the fact that we might never be able to lift our veil of ignorance—that no matter how hard we stare into the eye of a hurricane, we might never understand its soul. Over the long run, that may be even harder to accept than our inability to bust up storms. Three centuries ago we christened ourselves Homo sapiens, the wise ape. We exult in our ability to think, to know, and the weather seems well within our grasp—it’s just pockets of hot and cold gases, after all. But we’d do well to remember our etymology: gas springs from chaos, and in ancient mythology chaos was something that not even the immortals could tame.

Fallout

[This excerpt is from chapter seven of Caesar’s Last Breath by Sam Kean.]

      The world had never known a threat quite like fallout. Fritz Haber during World War I had also weaponized the air, but after a good stiff breeze, Haber's gases generally couldn't harm you. Fallout could— it lingered for days, months, years. One writer at the time commented about the anguish of staring at every passing cloud and wondering what dangers it might hold. “No weather report since the one given to Noah,” he said, “has carried such foreboding for the human race.”

      More than any other danger, fallout shook people out of their complacency about nuclear weapons. By the early 1960s, radioactive atoms (from both Soviet and American tests) had seeded every last square inch on Earth; even penguins in Antarctica had been exposed. People were especially horrified to learn that fallout hit growing children hardest. One fission product, strontium-90, tended to settle onto breadbasket states in the Midwest, where plants sucked it up into their roots. It then began traveling up the food chain when cows ate contaminated grass. Because strontium sits below calcium on the periodic table, it behaves similarly in chemical reactions. Strontium-90 therefore ended up concentrated in calcium-rich milk—which then got concentrated further in children’s bones and teeth when they drank it. One nuclear scientist who had worked at Oak Ridge and then moved to Utah, downwind of Nevada, lamented that his two children had absorbed more radioactivity from a few years out West than he had in eighteen years of fission research.

      Even ardent patriots, even hawks who considered the Soviet Union the biggest threat to freedom and apple pie the world had ever seen, weren’t exactly pro-putting-radioactivity-into-children's-teeth. Sheer inertia allowed nuclear tests to continue for a spell, but by the late 1950s American citizens began protesting en masse. The activist group SANE ran ads that read “No contamination without representation,” and within a year of its founding in 1957, SANE had twenty-five thousand members. Detailed studies of weather patterns soon bolstered their case, since scientists now realized just how quickly pollutants could spread throughout the atmosphere. Pop culture weighed in as well, with Spiderman and Hulk and Godzilla—each the victim of a nuclear accident—debuting during this era. The various protests culminated in the United States, the Soviet Union, and Great Britain signing a treaty to stop all atmospheric nuclear testing in 1963. (China continued until 1974, France until 1980.) And while this might seem like ancient history—JFK signed the test-ban treaty, after all—we're still dealing with the fallout of that fallout today, in several ways.

Nuclear Bombs

[This excerpt is from chapter seven of Caesar’s Last Breath by Sam Kean.]

      The Manhattan Project wasn’t a scientific breakthrough as much as an engineering triumph. All the essential physics had been worked out before the war even started, and the truly heroic efforts involved not blackboards and eurekas but elbow grease and backbreaking labor. Consider the refinement of uranium. Among other steps, workers had to convert over 20,000 pounds of raw uranium ore into a gas (uranium hexafluoride) and then whittle it down, almost atom by atom, to 112 pounds of fissionable uranium-235. This required building a $500 million plant ($6.6 billion today) in Oak Ridge, Tennessee, that sprawled across forty-four acres and used three times as much electricity as all of Detroit. All that fancy theorizing about bombs would have gone for naught if not for this unprecedented investment.

      Plutonium was no picnic, either. Making plutonium (it doesn’t exist in nature) proved every bit as challenging and costly as refining uranium. Detonating the stuff was an even bigger hassle. Although plutonium is quite radioactive—inhaling a tenth of a gram of it will kill most adults—the small amount of plutonium that scientists at Los Alamos were working with wouldn’t undergo a chain reaction and explode unless they increased its density dramatically. Plutonium metal is already pretty dense, though, so the only plausible way to do this was by crunching it together with a ring of explosives. Unfortunately, while it's easy to blow something apart with explosives, it's well-nigh impossible to collapse it into a smaller shape in a coherent way. Los Alamos scientists spent many hours screaming at one another over the details.

      By spring 1945, they’d finally sketched out a plausible setup for the explosives. But the idea needed confirming, so they scheduled the famous Trinity test for July 16, 1945. Responsibility for arming the device—nicknamed the Gadget—fell to Louis Slotin, a young Canadian physicist who had a reputation for being foolhardy (perfect for bomb work). After he'd climbed the hundred-foot Trinity tower and assembled the bomb, Slotin and his bosses accepted a $2 billion receipt for it and drove off to watch from the base camp ten miles distant.

      At 5:30 a.m. the ring of explosives went off and crushed the Gadget’s grapefruit-sized plutonium core into a ball the size of a peach pit. A tiny dollop in the middle—beryllium mixed with polonium—then kicked out a few subatomic particles called neutrons, which really got things hopping. These neutrons stuck to nearby plutonium atoms, rendering them unstable and causing them to fission, or split. This splitting released loads of energy: the kick from a single plutonium atom can make a grain of sand jump visibly, even though a plutonium atom is a hundred thousand million billion times smaller. Crucially, each split also released more neutrons. These neutrons then glommed onto other plutonium atoms, rendered them unstable, and caused more fissioning.

      Within a few millionths of a second, eighty generations of plutonium atoms had fissioned, releasing an amount of energy equal to fifty million pounds of TNT. What happened next gets complicated, but all that energy vaporized everything within a chip shot of the bomb—the metal tower, the sand below, every lizard and scorpion. More than vaporized, actually. The temperature near the core spiked so high, to tens of millions of degrees, that electrons within the vapor were torn loose from their atoms and began to roam around on their own, like fireflies. This produced a new state of matter called a plasma, a sort of ubergas most commonly found inside the nuclear furnace of stars.
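
[The “eighty generations” figure is consistent with an energy release of that size. A back-of-the-envelope sketch, assuming each fission triggers roughly two more and releases about 200 MeV; both are standard textbook figures rather than numbers from the excerpt:]

    # Total fissions after 80 doubling generations, converted to tons of TNT.
    MEV_TO_J = 1.602e-13
    fissions = sum(2**g for g in range(81))   # generations 0 through 80
    energy_j = fissions * 200 * MEV_TO_J
    tnt_tons = energy_j / 4.184e9             # 1 ton of TNT ≈ 4.184e9 joules
    print(f"≈{tnt_tons / 1000:.0f} kilotons")  # ≈ 19 kt; the excerpt's fifty
    # million pounds of TNT is 25 kt, the same order of magnitude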

      Given the incredible energies here, even sober scientists like Robert Oppenheimer (director of the Manhattan Project) had seriously considered the possibility that Trinity would ignite the atmosphere and fry everything on Earth's surface. That didn’t happen, obviously, but each of the several hundred men who watched that morning—some of whom slathered their faces in sunscreen and shaded their eyes behind sunglasses—knew they’d unleashed a new type of hell on the world. After Trinity quieted down, Oppenheimer famously recalled a line from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” Less famously, Oppenheimer also recalled something that Alfred Nobel once said, about how dynamite would render war so terrible that humankind would surely give it up. How quaint that wish seemed now, in the shadow of a mushroom cloud.

      After the attacks on Hiroshima and Nagasaki in early August, most Manhattan Project scientists felt a sense of triumph. Over the next few months, however, the stories that emerged from Japan left them with a growing sense of revulsion. They'd known their marvelously engineered bombs would kill tens of thousands of people, obviously. But the military had already killed comparable numbers of civilians during the firebombings of Dresden and Tokyo. (Some historians estimate that more human beings died during the six hours of the Tokyo firebombing—at least 100,000—than in any attack in history, then or since.)

      What appalled most scientists about Hiroshima and Nagasaki, then, wasn’t the immediate body count but the lingering radioactivity. Most physicists before this had a rather cavalier attitude about radioactivity; stories abound about their macho disdain for the dangers involved. Japan changed that. Fallout from the bomb continued to poison people for months afterward—killing their cells, ulcerating their skin, turning even the salt in their blood and the fillings in their teeth into tiny radioactive bombs.

Night Light

[This excerpt is from the interlude between the sixth and seventh chapter of Caesar’s Last Breath by Sam Kean.]

      For some context here, recall that Joseph Priestley’s Lunar Society met on the Monday nearest the full moon because its members needed moonlight to find their way home. But Priestley’s generation was among the last to have to worry about such problems. Several of the gases that scientists discovered in the late 1700s burned with uncanny brightness, and within a half century of Priestley’s death in 1804, gas lighting had become standard throughout Europe. Edison’s lightbulb gets all the historical headlines, but it was coal gas that first eradicated darkness in the modern world.

      Human beings had artificial lighting before 1800, of course—wood fires, candles, oil lamps. But however romantic bonfires and candlelit dinners seem nowadays, those are actually terrible sources of light. Candles especially throw off a sickly, feeble glow that, as one historian joked, did little but “make darkness visible.” (A French saying from the time captured the sentiment in a different way: “By candlelight, a goat is ladylike.”) Not everyone could afford candles on a daily basis anyway—imagine if all your lightbulbs needed replacing every few nights. Larger households and businesses might go through 2,500 candles per year. To top it off, candles released noxious smoke indoors, and it was all too easy to knock one over and set your house or factory ablaze.

      In retrospect, coal gas seems the obvious solution to these problems. Coal gas is a heterogeneous mix of methane, hydrogen, and other gases that emerge when coal is slowly heated. Both methane and hydrogen burn brilliantly alone, and when burned together, they produce light dozens of times bolder and brighter than candlelight. But as with laughing gas, people considered coal gas little more than a novelty at first. Hucksters would pack crowds into dark rooms for a halfpenny each and dazzle them with gas pyrotechnics. And it wasn’t just the brilliance that impressed. Because they didn’t depend on wicks, coal-gas flames could defy gravity and leap out sideways or upside down. Some showmen even combined different flames to make flowers and animal shapes, somewhat like balloon animals.

      Gradually, people realized that coal gas would make fine interior lighting. Gas jets burned steadily and cleanly, without a candle’s flickering and smoking, and you could secure gas fixtures to the wall, decreasing the odds of things catching fire. An eccentric engineer named William Murdoch—the same man who invented a steam locomotive in James Watt's factory, before Watt told him to knock it off—installed the world’s first gas-lighting system in his home in Birmingham in 1792. Several local businessmen were impressed enough to install gas lighting in their factories shortly thereafter.

      After these early adopters, city governments began using coal gas to light their streets and bridges. Cities usually stored the gas inside giant tanks (called gasometers) and piped it through underground mains, much like water today. London alone had forty thousand gas street lamps by 1823, and other cities in Europe followed suit. (Paris didn't want bloody London usurping its reputation as the city of light, after all.) For the first time in history, human settlements would have been visible from space by night.

      Public buildings came online next, including railway stations, churches, and especially theaters, which benefitted probably more than any other institution. With more light available, theater directors could position actors farther back onstage, allowing for more depth of movement. A related technology called limelight—which involved streaming oxygen and hydrogen over burning quicklime—provided even brighter light and led to the first spotlights. Because the audience could see them clearly now, actors could also get by with less makeup and could gesture in more realistic, less histrionic ways.

      Even villages in rural England had rudimentary gas mains by the mid-1800s, and the spread of cheap, consistent lighting changed society in several ways. Crime dropped, since thugs and lowlifes could no longer hide under the cloak of darkness. Nightlife exploded as taverns and restaurants began staying open later. Factories instituted regular working hours since they no longer had to shut down after sunset in the winter, and some manufacturers operated all night to churn out goods.

Science in 1900

[This excerpt is from chapter six of Caesar’s Last Breath by Sam Kean.]

      Chemists in the 1780s fulfilled maybe the oldest dream of humankind, to snap the tethers of gravity and take flight. A century later a physicist solved one of humankind’s most enduring mysteries, why the sky is blue. So you can forgive scientists for having a pretty lofty view of themselves circa 1900, and for assuming that they had a full reckoning of how air worked. Thanks to chemists from Priestley to Ramsay, they now knew all its major components. Thanks to the ideal gas law, they now knew how air responded to almost any change in temperature and pressure you could throw at it. Thanks to Charles and Gay-Lussac and other balloonists, they now knew what the air was like even miles above our heads. There were a few loose ends, sure, in fields like atomic physics and meteorology. But all scientists needed to do was extrapolate from known gas laws to cover those cases. They must have felt achingly close to figuring out their world.

      Guess what. Scientists not only ran into difficulties tying those loose ends together, they eventually had to construct whole new laws of nature, out of sheer desperation, to make sense of what was going on. Atomic physics of course led to the absurdities of quantum mechanics and the horrors of nuclear warfare. And tough as it is to believe, meteorology, one of the sleepiest branches of science around, stirred to life chaos theory, one of the most profound and troubling currents in twentieth-century thought.

Steel

[This excerpt is from the interlude between the fifth and sixth chapter of Caesar’s Last Breath by Sam Kean.]

      The story of Bessemer’s discoveries in this field is long and convoluted, and there’s no room to get into it all here. (It also involved several other chemists and engineers, most of whom he stingily refused to credit in later years.) Suffice it to say that through a series of happy accidents and shrewd deductions, Bessemer figured out two shortcuts to making steel.

      He’d start by melting down cast iron, same as most smelters. He then added oxygen to the mix, to strip out the carbon. But instead of using iron ore to supply the oxygen atoms, like everyone else, Bessemer used blasts of air —a cheaper, faster substitute. The next shortcut was even more important. Rather than mix in lots and lots of oxygen gas and strip all the carbon out of his molten cast iron, Bessemer decided to stop the air flow partway through. As a result, instead of carbon-free wrought iron, he was left with somewhat-carbon-infused steel. In other words, Bessemer could make steel directly, without all the extra steps and expensive material.

      He’d first investigated this process by bubbling air into molten cast iron with a long blowpipe. When this worked, he arranged for a larger test at a local foundry: seven hundred pounds of molten iron in a three-foot-wide cauldron. Rather than rely on his own lungs, this time he had several steam engines blast compressed air through the mixture. The workers at the foundry gave Bessemer pitying looks when he explained that he wanted to make steel with puffs of air. And indeed, nothing happened for ten long minutes that afternoon. All of a sudden, he later recalled, “a succession of mild explosions” rocked the room. White flames erupted from the cauldron and molten iron whooshed out in “a veritable volcano,” threatening to set the ceiling on fire.

      After waiting out the pyrotechnics, Bessemer peered into the cauldron. Because of the sparks, he hadn't been able to shut down the blasts of air in time, and the batch was pure wrought iron. He grinned anyway: here was proof his process worked. All he had to do now was figure out exactly when to cut the airflow off, and he’d have steel.

      At this point things moved quickly for Bessemer. He went on a patent binge over the next few years, and the foundry he set up managed to screw down the cost of steel production from around £40 per ton to £7. Even better, he could make steel in under an hour, rather than weeks. These improvements finally made steel available for large-scale engineering projects, a development that, some historians claim, ended the three-thousand-year-old Iron Age in one stroke, and pushed humankind into the Age of Steel.

      Of course, that’s a retrospective judgment. At the time, things weren’t so rosy, and Bessemer actually had a lot of trouble persuading people to trust his steel. The problem was, each batch of steel varied significantly in quality, since it proved quite tricky to judge when to stop the flow of air. Worse, the excess phosphorus in English iron ore left most batches brittle and prone to fracturing at cold temperatures. (Bessemer, the lucky devil, had run his initial tests on phosphorus-free ore from Wales; otherwise they too would have failed.) Other impurities introduced other structural problems, and each snafu sapped the public's confidence in Bessemer steel a little more. Like Thomas Beddoes with gaseous medicine, colleagues and competitors accused Bessemer of overselling steel, even of perpetrating a fraud.

      Over the next decade Bessemer and others labored with all the fervor of James Watt to eliminate these problems, and by the 1870s steel was, objectively, a superior metal compared to cast iron—stronger, lighter, more reliable. But you can’t really blame engineers for remaining wary. Steel seemed too good to be true—it seemed impossible that puffs of air could really toughen up a metal so much—and years of problems with steel had corroded their faith anyway….

The Final Mysterians

[These excerpts are from an article by Michael Shermer in the July 2018 issue of Scientific American.]

      …For millennia, the greatest minds of our species have grappled to gain purchase on the vertiginous ontological cliffs of three great mysteries—consciousness, free will and God—without ascending anywhere near the thin air of their peaks. Unlike other inscrutable problems, such as the structure of the atom, the molecular basis of replication and the causes of human violence, which have witnessed stunning advancements of enlightenment, these three seem to recede ever further away from understanding, even as we race ever faster to catch them in our scientific nets.

      …I contend that not only consciousness but also free will and God are mysterian problems—not because we are not yet smart enough to solve them but because they can never be solved, not even in principle, owing to how the concepts are conceived in language. Call those of us in this camp the “final mysterians.”

      …It is not possible to know what it is like to be a bat (in philosopher Thomas Nagel's famous thought experiment), because if you altered your brain and body from humanoid to batoid, you would just be a bat, not a human knowing what it feels like to be a bat….

      …We are not inert blobs of matter bandied about the pinball machine of life by the paddles of nature's laws; we are active agents within the causal net of the universe, both determined by it and helping to determine it through our choices….

      If the creator of the universe is supernatural—outside of space and time and nature's laws—then by definition, no natural science can discover God through any measurements made by natural instruments. By definition, this God is an unsolvable mystery. If God is part of the natural world or somehow reaches into our universe from outside of it to stir the particles (to, say, perform miracles like healing the sick), we should be able to quantify such providential acts. This God is scientifically soluble, but so far all claims of such measurements have yet to exceed statistical chance. In any case, God as a natural being who is just a whole lot smarter and more powerful than us is not what most people conceive of as deific.

      Although these final mysteries may not be solvable by science, they are compelling concepts nonetheless, well deserving of our scrutiny if for no other reason than it may lead to a deeper understanding of our nature as sentient, volitional, spiritual beings.

The Science of Anti-Science Thinking

[These excerpts are from an article by Douglas T. Kenrick, Adam B. Cohen, Steven L. Neuberg and Robert B. Cialdini in the July 2018 issue of Scientific American.]

      On a regular basis, government decision makers enact policies that fail to heed decades of evidence on climate change. In public opinion surveys, a majority of Americans choose not to accept more than a century of evidence on evolution by natural selection. Academic intellectuals put the word “science” in quotes, and members of the lay public reject vaccinations for their children.

      Scientific findings have long met with ambivalent responses: A welcome mat rolls out instantly for horseless buggies or the latest smartphones. But hostility arises just as quickly when scientists’ findings challenge the political or religious status quo. Some of the British clergy strongly resisted Charles Darwin’s theory of evolution by natural selection. Samuel Wilberforce, bishop of Oxford, asked natural selection proponent Thomas Huxley, known as “Darwin’s bulldog,” on which side of his family Huxley claimed descent from an ape.

      In Galileo’s time, officials of the Roman Catholic Church, well-educated and progressive intellectuals in most respects, expressed outrage when the Renaissance scientist reported celestial observations that questioned the prevailing belief that Earth was the center of the universe. Galileo was placed under house arrest and forced to recant his views as heresy.

      In principle, scientific thinking should lead to decisions based on consideration of all available information on a given question. When scientists encounter arguments not firmly grounded in logic and empirical evidence, they often presume that purveyors of those alternative views either are ignorant of the facts or are attempting to discourage their distribution for self-serving reasons—tobacco company executives suppressing findings linking tobacco use to lung cancer, for instance. Faced with irrational or tendentious opponents, scientists often grow increasingly strident. They respond by stating the facts more loudly and clearly in the hope that their interlocutors will make more educated decisions.

      Several lines of research, however, reveal that simply presenting a litany of facts does not always lead to more objective decision making. Indeed, in some cases, this approach might actually backfire. Human beings are intelligent creatures, capable of masterful intellectual accomplishments. Unfortunately, we are not completely rational decision makers….

      Although natural selection stands out as one of the most solidly supported scientific theories ever advanced, the average citizen has not waded through textbooks full of evidence on the topic. In fact, many of those who have earned doctorates in scientific fields, even for medical research, have never taken a formal course in evolutionary biology. In the face of these challenges, most people rely on mental shortcuts or the pronouncements of experts, both strategies that can lead them astray. They may also rely—at their own peril—on intuition and gut instinct….

      Fear increases the tendency toward conformity. If you wish to persuade others to reduce carbon emissions, take care whom you scare: a message that arouses fear of a dystopian future might work well for an audience that accepts the reality of climate change but is likely to backfire for a skeptical audience….

Fish Bombs

[These excerpts are from an article by Katherine Kornei in the July 2018 issue of Scientific American.]

      Rogue fishers around the world toss explosives into the sea and scoop up bucketloads of stunned or dead fish, a practice that is illegal in many nations and can destroy coral reefs and wreak havoc on marine biodiversity. Catching perpetrators amid the vastness of the ocean has long proved almost impossible, but researchers working in Malaysia have now adapted acoustic sensors—originally used to locate urban gunfire—to pinpoint these marine blasts within tens of meters.

      Growing human populations and international demand for seafood are pushing fishers to increase their catches….Shock waves from the explosions rupture the fishes’ swim bladders, immobilizing the fish and causing some to float to the surface. And the bombs themselves are easy to make: ammonium nitrate (a common fertilizer) and diesel fuel are mixed in an empty bottle and topped with a detonator and waterproof fuse….

      Malaysian officials are proposing an initiative to promote fish farming….

Auto Mileage Rollback Is a Sick Idea

[These excerpts are from an article by Rob Jackson in the July 2018 issue of Scientific American.]

      Seven years ago representatives from General Motors, Ford, Chrysler and other car manufacturers joined President Barack Obama to announce historic new vehicle mileage standards. The industry-supported targets would have doubled the fuel efficiency of cars and light trucks in the U.S. to 54.5 miles per gallon by 2025.

      But in April the Environmental Protection Agency announced plans to roll back part or all of the new standards, saying they were “wrong” and based on “politically charged expediency.” Let me explain why this terrible idea should unify Republicans and Democrats in opposition. The rollback is going to harm us economically and hurt us physically.

      The Obama-era standards made sense for many reasons, starting with our wallets. It is true that each vehicle would initially cost $1,000 to $2,000 more as manufacturers researched lighter materials and built stronger vehicles. In return, though, we would save about $3,000 to $5,000 in gas over the life of each vehicle, according to a 2016 report by Consumers Union. (Because gas prices were higher in 2011 and 2012, when the standards were proposed, estimated savings back then were significantly higher—about $8,000 per car. Prices have risen somewhat since 2016.) This research will also help auto companies compete internationally.

      National security and trade deficits are also reasons to keep the existing standards. Despite a growing domestic oil industry, the U.S. imported more than 10 million barrels of oil daily last year, about a third of it coming from OPEC nations. Imports added almost $100 billion to our trade deficit, sending hard-earned dollars to Canada, Saudi Arabia, Venezuela, Iraq and Colombia. Better gas mileage could eliminate half of our OPEC imports. It would also make our country safer and more energy-independent.

      The biggest reason to support the fuel-efficiency standards, however, is the link between vehicle exhaust and human health. More than four in 10 Americans—some 134 million of us—live in regions with unhealthy particulate pollution and ozone in the air. That dirty air makes people sick and can even kill them. A 2013 study by the Massachusetts Institute of Technology estimated that about 200,000 Americans now die every year from air pollution. The number-one cause of those deaths—more than 50,000 of them—is air pollution from road traffic….

      Here is what a rollback in mileage standards would mean: Thousands of Americans would die unnecessarily from cardiovascular and other diseases every year. Our elderly would face more bronchitis and emphysema. More children would develop asthma—a condition that, according to an estimate by the Centers for Disease Control and Prevention, affects more than one in 12. Millions of your sons and daughters have it. My son does, too.

      Rarely in my career have I seen a proposal more shortsighted and counterproductive than this one. Please say there is still time to change our minds.

How Did Homo sapiens Evolve?

[This excerpt is from an editorial by Julia Galway-Witham and Chris Stringer in the June 22, 2018, issue of Science.]

      Over the past 30 years, understanding of Homo sapiens evolution has advanced greatly. Most research has supported the theory that modern humans had originated in Africa by about 200,000 years ago, but the latest findings reveal more complexity than anticipated. They confirm interbreeding between H. sapiens and other hominin species, provide evidence for H. sapiens in Morocco as early as 300,000 years ago, and reveal a seemingly incremental evolution of H. sapiens cranial shape. Although the cumulative evidence still suggests that all modern humans are descended from African H. sapiens populations that replaced local populations of archaic humans, models of modern human origins must now include substantial interactions with those populations before they went extinct. These recent findings illustrate why researchers must remain open to challenging the prevailing theories of modern human origins.

      Although living humans vary in traits such as body size, shape, and skin color, they clearly belong to a single species, H. sapiens, characterized by shared features such as a narrow pelvis, a large brain housed in a globular braincase, and reduced size of the teeth and surrounding skeletal architecture. These traits distinguish modern humans from other now-extinct humans (members of the genus Homo), such as the Neandertals in western Eurasia (often classified as H. neanderthalensis) and, by inference, from the Denisovans in eastern Eurasia (a genetic sister group of Neandertals). How did H. sapiens relate to these other humans in evolutionary and taxonomic terms, and how do those relationships affect evolving theories of modern human origins?

      By the 1980s, the human fossil record had grown considerably, but it was still insufficient to demonstrate whether H. sapiens had evolved from local ancestors across much of the Old World (multiregional evolution) or had originated in a single region and then dispersed from there (single origin). In 1987, a study using mitochondrial DNA from living humans indicated a recent and exclusively African origin for modern humans. In the following year one of us coauthored a review of the fossil and genetic data, expanding on that discovery and supporting a recent African origin (RAO) for our species.

      The RAO theory posits that by 60,000 years ago, the shared features of modern humans had evolved in Africa and, via population dispersals, began to spread from there across the world. Some paleoanthropologists have resisted this single-origin view and the narrow definition of H. sapiens to exclude fossil humans such as the Neandertals. In subsequent decades, genetic and fossil evidence supporting the RAO theory continued to accumulate, such as in studies of the genetic diversity of African and non-African modern humans and the geographic distribution of early H. sapiens fossils, and this model has since become dominant within mainstream paleoanthropology. In recent years, however, new fossil discoveries, the growth of ancient DNA research, and improved dating techniques have raised questions about whether the RAO theory of H. sapiens evolution needs to be revised or even abandoned.

      Different views on the amount of genetic and skeletal shape variation that is reasonably subsumed within a species definition directly affect developing models of human origins. For many researchers, the anatomical distinctiveness of modern humans and Neandertals has been sufficient to place them in separate species; for example, variation in traits such as cranial shape and the anatomy of the middle and inner ears is greater between Neandertals and H. sapiens than between well-recognized species of apes. Yet, Neandertal genome sequences and the discovery of past interbreeding between Neandertals and H. sapiens provide support for their belonging to the same species under the biological species concept, and this finding has revived multiregionalism. The recent recognition of Neandertal art further narrows—or for some researchers removes—the perceived behavioral gap between the two supposed species.

      These challenges to the uniqueness of H. sapiens were a surprise to many and question assignments of hominin species in the fossil record. However, the limitations of the biological species concept have long been recognized. If it were to be implemented rigorously, many taxa within mammals—such as those in Equus, a genus that includes horses, donkeys, and zebras—would have to be merged into a single species. Nevertheless, in our view, species concepts need to have a basis in biology. Hence, the sophisticated abilities of Neandertals, however interesting, are not indicative of their belonging to H. sapiens. The recently recognized interbreeding between the late Pleistocene lineages of H. sapiens, Neandertals, and Denisovans is nonetheless important, and the discovery of even more compelling evidence to support Neandertals and modern humans belonging to the same species would have a profound effect on models of the evolution of H. sapiens.

Students Report Less Sex, Drugs

[This brief article by Jeffrey Brainard is in the June 22, 2018, issue of Science.]

      Fewer U.S. high school students report having sex and taking illicit drugs, but other risky activity remains alarmingly high, according to a biennial report released last week by the U.S. Centers for Disease Control and Prevention. Even as sexual activity declined, fewer reported using condoms during their most recent intercourse, increasing their risks of HIV and other sexually transmitted diseases. And nearly one in seven students reported misusing prescription opioids, for example by taking them without a prescription—a behavior that can lead to future injection drug use and risks of overdosing and contracting HIV. Gay, lesbian, and bisexual students reported experiencing significantly higher levels of violence in school, including bullying and sexual violence, and higher risks for suicide, depression, substance use, and poor academic performance than other students did. Nearly 15,000 students took the survey.

Emerging Stem Cell Ethics

[These excerpts are from an editorial by Douglas Sipp, Megan Munsie and Jeremy Sugarman in the June 22, 2018, issue of Science.]

      It has been 20 years since the first derivation of human embryonic stem cells. That milestone marked the start of a scientific and public fascination with stem cells, not just for their biological properties but also for their potentially transformative medical uses. The next two decades of stem cell research animated an array of bioethical debates, from the destruction of embryos to derive stem cells to the creation of human-animal hybrids. Ethical tensions related to stem cell clinical translation and regulatory policy are now center stage….Care must be taken to ensure that entry of stem cell-based products into the medical marketplace does not come at too high a human or monetary price.

      Despite great strides in understanding stem cell biology, very few stem cell-based therapeutics are as yet used in standard clinical practice. Some countries have responded to patient demand and the imperatives of economic competition by promulgating policies to hasten market entry of stem cell-based treatments. Japan, for example, created a conditional approvals scheme for regenerative medicine products and has already put one stem cell treatment on the market based on preliminary evidence of efficacy. Italy provisionally approved a stem cell product under an existing European Union early access program. And last year, the United States introduced an expedited review program to smooth the path for investigational stem cell-based applications, at least 16 of which have been granted already. However, early and perhaps premature access to experimental interventions has uncertain consequences for patients and health systems.

      A staggering amount of public money has been spent on stem cell research globally. Those seeking to develop stem cell products may now not only leverage that valuable body of resulting scientific knowledge but also find that their costs for clinical testing are markedly reduced by deregulation. How should this influence affordability and access? The state and the taxpaying public's interests should arguably be reflected in the pricing of stem cell products that were developed through publicly funded research and the regulatory subsidies. Detailed programs for recouping taxpayers' investments in stem cell research and development must be established.

      Rushing new commercial stem cell products into the market also entails considerations inherent to the ethics of using pharmaceuticals and medical devices. For example, once a product is approved for a given indication, it becomes possible for physicians to prescribe it for “off-label use.” We have already witnessed the untoward effects of the elevated expectations that stem cells can serve as a kind of cellular panacea, a misconception that underlies the direct-to-consumer marketing of unproven uses of stem cells. Once off-label use of approved products becomes an option, there may be a new flood of untested therapeutic claims with which to contend….

      The new frontiers of stem cell-based medicine also raise questions about the use of fast-tracked products. In countries where healthcare is not considered a public good, who should pay for post-market efficacy testing? Patients already bear a substantial burden of risk when they volunteer for experimental interventions. Frameworks that ask them to pay to participate in medical research warrant much closer scrutiny than has been seen thus far.

      …For stem cell treatments, attaining this balance will require frank and open discussion between all stakeholders, including the patients it seeks to benefit and the taxpayers who make it possible.

A Good Day’s Work

[This excerpt is from an editorial by Steve Metz in the June 2018 issue of The Science Teacher.]

      …It is easy to drift through science and math classes wondering, “Why do I need to learn this?” Many do not see a college science major or science career in their future, making the need to learn science less than obvious. For underrepresented students of color and young women—who often lack exposure to STEM role models, especially in engineering and the physical sciences—a vision of a STEM career can seem even more remote.

      Of course, the best reason for learning science is that knowing science is important in and of itself, as part of humankind’s search for understanding. Scientific knowledge makes everything—a walk in the woods, reading a newspaper, a family visit to a science museum or beach—simply more interesting. The skepticism and critical thinking that are part of the scientific world view are essential for informed civic participation and evidence-based social discourse.

      But the next-best answer to the question—why do I need to learn this?—may strike students as more practical and persuasive: STEM careers can provide excellent employment opportunities with good salaries and better-than-average job security. The Bureau of Labor Statistics reports that among occupations surveyed, all 20 of the fastest growing and 18 of the 20 highest paying are STEM fields.

      As science teachers, we need to get the word out: Learning science, engineering, and mathematics can lead to a life's work that is interesting, rewarding, and meaningful. Our classes must encourage students to pursue these fields and provide them with the skills necessary for success.

Copper Hemispheres

[This excerpt is from the second chapter of Caesar’s Last Breath by Sam Kean.]

      In general, a summons to stand in judgment before the Holy Roman Emperor was not an occasion for celebration. But Otto Gericke, the mayor of Magdeburg, Germany, felt his confidence soar as his cart rattled south. After all, he was about to put on perhaps the greatest science experiment in history.

      Gericke, a classic gentleman-scientist, was obsessed with the idea of vacuums, enclosed spaces with nothing inside. All most people knew about vacuums then was Aristotle’s dictum that nature abhors and will not tolerate them. But Gericke suspected nature was more open-minded than that, and he set about trying to create a vacuum in the early 1650s. His first attempt involved evacuating water from a barrel, using the local fire brigade’s water pump. The barrel was initially full of water and perfectly sealed, so that no air could get in. Pumping out the water should therefore have left only empty space behind. Alas, after a few minutes of pumping, the barrel staves leaked and air rushed in anyway. He next tried evacuating a hollow copper sphere using a similar setup. It held up longer, but halfway through the process the sphere imploded, collapsing with a bang that left his ears ringing.

      The violence of the implosion startled Gericke—and set his mind racing. Somehow, mere air pressure—or, more precisely, the difference in air pressure between the inside and outside of the sphere—had crumpled it. Was a gas really strong enough to crunch metal? It didn't seem likely. Gases are so soft, after all, so pillowy. But Gericke could see no other answer, and when his mind made that leap, it proved to be a turning point in the human relationship with gases. For perhaps the first time in history, someone realized just how strong gases are, how muscular, how brawny. Conceptually, it was but a short step from there to steam power and the Industrial Revolution.

      Before the revolution could begin, however, Gericke had to convince his contemporaries how powerful gases were, and luckily he had the scientific skills to pull this off. In fact, a demo he put together over the next decade began stirring up such wild rumors in central Europe that Emperor Ferdinand III eventually summoned Gericke to court to see for himself.

      On the 220-mile trip south, Gericke carried two copper hemispheres; they fit together to form a twenty-two-inch spherical shell. The walls of the hemispheres were thick enough to withstand crumpling this time, and each half had rings welded to it where he could affix a rope. Most important, Gericke had bored a hole into one hemisphere and fitted it with an ingenious air valve, which allowed air to flow through in one direction only.

      Gericke arrived at court to find thirty horses and a sizable crowd awaiting him. These were the days of drawing and quartering convicts, and Gericke announced to the crowd that he had a similar ordeal planned for his copper sphere: he believed that Ferdinand’s best horses couldn’t tear the two halves apart. You could forgive the assembled for laughing: without someone holding them together, the hemispheres fell apart from their own weight. Gericke ignored the naysayers and reached into his cart for the key piece of equipment, a sort of cylinder on a tripod. It had a tube coming off it, which he attached to the one-way valve on the copper sphere. He then had several local blacksmiths—the burliest men he could find—start cranking the machine’s levers and pistons. Every few seconds it wheezed. Gericke called the contraption an “air pump.”

      It worked like this. Inside the air pump’s cylinder was a special airtight chamber fitted with a piston that moved up and down. At the start of the process the piston was depressed, meaning the chamber had no air inside. Step one involved a blacksmith hoisting the piston up. Because the chamber and copper sphere were connected via the tube, air inside the sphere could now flow into the chamber. Just for argument's sake, let's say there were 800 molecules of air inside the sphere to start. (Which is waaaaay too low, but it's a nice round number.) After the piston got hoisted up, maybe half that air would flow out. This left 400 molecules in the sphere and 400 in the chamber.

      Now came the key step. Gericke closed the one-way valve on the sphere, trapping 400 molecules in each half. He then opened another, separate valve on the chamber, and had the smithy stamp the piston down. This recollapsed the chamber and expelled all 400 molecules. The net result was that Gericke had now pumped out half the original air in the sphere and expelled it to the outside world.

      Equally important, Gericke was now back at the starting point, with the piston depressed. As a result he could reopen the valve on the sphere and repeat the process. This time 200 molecules (half the remaining 400) would flow into the chamber, with the other 200 remaining in the sphere. By closing the one-way valve a second time he could once again trap those 200 molecules in the chamber and expel them. Next round, he’d expel an additional 100 molecules, then 50, and so on. It got harder each round to hoist the piston up—hence the stout blacksmiths—but each cycle of the pump removed half the air from the sphere.
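
[The pump’s arithmetic is a simple geometric decay; a minimal sketch in Python, using the excerpt’s toy figure of 800 molecules:]

    # Each full cycle traps and expels half of whatever air remains.
    molecules = 800
    for stroke in range(1, 6):
        molecules //= 2
        print(f"after stroke {stroke}: {molecules} molecules remain")
    # prints 400, 200, 100, 50, 25; after n strokes the fraction left is (1/2)**n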

      As more and more air disappeared from inside, the copper sphere started to feel a serious squeeze from the outside. That’s because gazillions of air molecules were pinging its surface every second. Each molecule was minuscule, of course, but collectively they added up to thousands of pounds of force (really). Normally air from inside the sphere would balance this pressure by pushing outward. But as the blacksmiths evacuated the inside, a pressure imbalance arose, and the air on the outside began squeezing the hemispheres together tighter and tighter: given the sphere’s size, there would have been 5,600 pounds of net force at perfect vacuum. Gericke couldn't have known all these details, and it's not clear how close he got to a perfect vacuum. But after watching that first copper shell crumple, he knew that air was pretty brawny. Even brawnier, he was gambling, than thirty horses.

      After the blacksmiths had exhausted the air (and themselves), Gericke detached the copper sphere from the pump, wound a rope through the rings on each side, and secured it to a team of horses. The crowd hushed. Perhaps some maiden raised a silk handkerchief and dropped it. When the tug-of-war began, the ropes snapped taut and the sphere shuddered. The horses snorted and dug in their hooves; veins bulged on their necks. But the sphere held—the horses could not tear it apart. Afterward, Gericke picked up the sphere and flicked a secondary valve open with his finger. Hissss. Air rushed in, and a second later the hemispheres fell apart in his hands; like the sword in the stone, only the chosen one could perform the feat. The stunt so impressed the emperor that he soon elevated plain Otto Gericke to Otto von Guericke, official German nobility.

      In later years von Guericke and his acolytes came up with several other dramatic experiments involving vacuums and air pressure. They showed that bells in evacuated glass jars made no noise when rung, proving that you need air to transmit sound. Similarly, they found that butter exposed to red-hot irons inside a vacuum would not melt, proving that vacuums cannot transmit heat convectively. They also repeated the hemisphere stunt at other sites, spreading far and wide von Guericke’s discovery about the strength of air. And it’s this last discovery that would have the most profound impact on the world at large. Our planet's normal, ambient air pressure of 14.7 pounds per square inch might not sound impressive, but that works out to one ton of force per square foot. It's not just copper hemispheres that feel this, either. For an average adult, twenty tons of force are pressing inward on your body at all times. The reason you don't notice this crushing burden is that there's another twenty tons of pressure pushing back from inside you. But even when you know the forces balance here, it all still seems precarious. I mean, in theory a piece of aluminum foil, if perfectly balanced between the blasts from two fire hoses, would survive intact. But who would risk it? Our skin and organs face the same predicament vis-à-vis air, suspended inside and out between two torrential forces.
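
      The excerpt's everyday figures follow from the same pressure-times-area arithmetic. A short check (the roughly 1.8 square meters of adult skin is a standard textbook estimate, not a number from the text):

pressure = 14.7                     # lbs per square inch
per_square_foot = pressure * 144    # ~2,117 lbs: about one ton per square foot
skin_area = 1.8 * 1550              # ~1.8 m^2 of adult skin, in square inches
body_load = pressure * skin_area / 2000   # total load in short tons
print(f"{per_square_foot:,.0f} lbs per sq ft; ~{body_load:.1f} tons on the body")
# prints ~20.5 tons, matching the excerpt's "twenty tons of force"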

      Luckily, our scientific forefathers didn’t tremble in fear of such might. They absorbed the lesson of von Guericke— that gases are shockingly strong—and raced ahead with new ideas. Some of the projects they took on were practical, like steam engines. Some shaded frivolous, like hot-air balloons. Some, like explosives, chastised us with their deadly force. But all relied on the raw physical power of gases.

Ammonia

[These excerpts are from the second chapter of Caesar’s Last Breath by Sam Kean.]

      The alchemy of air started with an insult. Fritz Haber was born into a middle-class German Jewish family in 1868, and despite an obvious talent for science, he ended up drifting between several different industries as a young man—dye manufacturing, alcohol production, cellulose harvesting, molasses production—without distinguishing himself in any of them. Finally, in 1905 an Austrian company asked Haber—by then a balding fellow with a mustache and pince-nez glasses—to investigate a new way to manufacture ammonia gas (NH3).

      The idea seemed straightforward. There’s plenty of nitrogen gas in the air (N2), and you can get hydrogen gas (H2) by splitting water molecules with electricity. To make ammonia, then, simply mix and heat the gases: N2 + 3H2 → 2NH3. Voilà. Except Haber ran into a heckuva catch-22. It took enormous heat to crack the nitrogen molecules in half so they could react; yet that same heat tended to destroy the product of the reaction, the fragile ammonia molecules. Haber spent months going in circles before finally issuing a report that the process was futile.
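
      The catch-22 has a clean thermodynamic reading: the reaction is exothermic, so the equilibrium yield collapses at exactly the temperatures needed to crack N2 apart. Here is a rough sketch using textbook standard-state values (ΔH° ≈ −92 kJ and ΔG° ≈ −33 kJ per two moles of NH3 at 298 K; these constants and the van 't Hoff extrapolation are outside the excerpt, and treating ΔH as temperature-independent is a crude assumption):

import math

R = 8.314         # gas constant, J/(mol*K)
dH = -92.2e3      # standard enthalpy of N2 + 3H2 -> 2NH3, J (textbook value)
dG_298 = -32.9e3  # standard Gibbs energy at 298 K, J (textbook value)
K_298 = math.exp(-dG_298 / (R * 298))   # equilibrium constant at 298 K, ~6e5

def K(T):
    # van 't Hoff: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)
    return K_298 * math.exp(-dH / R * (1 / T - 1 / 298))

for T in (298, 573, 873):   # 25 C, 300 C, 600 C
    print(f"T = {T} K: K ~ {K(T):.1e}")

      By 600°C the equilibrium constant has fallen some ten orders of magnitude; that is the quantitative face of Haber's complaint that the heat needed to split nitrogen destroys the ammonia. The eventual way out, as the rest of the chapter shows, was a catalyst to lower the temperature plus brute pressure, which favors ammonia because four gas molecules condense into two.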

      The report would have languished in obscurity—negative results win no prizes—if not for the vanity of a plump chemist named Walther Nernst. Nernst had everything Haber coveted. He worked in Berlin, the hub of German life, and he'd made a fortune by inventing a new type of electric lightbulb. Most important, Nernst had earned scientific prestige by discovering a new law of nature, the Third Law of Thermodynamics. Nernst’s work in thermodynamics also allowed chemists to do something unprecedented: examine any reaction—like the conversion of nitrogen into ammonia—and estimate the yield at different temperatures and pressures. This was a huge shortcut. Rather than grope blindly, chemists could finally predict the optimum conditions for reactions.

      Still, chemists had to confirm those predictions in the lab, and here's where the conflict arose. Because when Nernst examined the data in Haber's report, he declared that the yields for ammonia were impossible— 50 percent too high, according to his predictions.

      Haber swooned upon hearing this. He was already a high-strung sort—he had a weak heart and tended to suffer nervous breakdowns. Now Nernst was threatening to destroy the one thing he had going for himself, his reputation as a solid experimentalist. Haber carefully redid his experiments and published new data more in line with Nernst’s predictions. But the numbers remained stubbornly higher, and when Nernst ran into Haber at a conference in May 1907, he dressed down his younger colleague in front of everyone.

      Honestly, this was a stupid dispute. Both men agreed that the industrial production of ammonia via nitrogen gas was impossible; they just disagreed over the exact degree of impossibility. But Nernst was a petty man, and Haber— who had a chivalrous streak—could not let this insult to his honor stand. Contradicting everything he’d said before, Haber now decided to prove that you could make ammonia from nitrogen gas after all. Not only could he rub Nernst’s fat nose in it if he succeeded, he could perhaps patent the process and grow rich. Best of all, unlocking nitrogen would make Haber a hero throughout Germany, because doing so would provide Germany with the one thing it lacked to become a world power— a steady supply of fertilizer….

      Beyond fiddling with temperatures and pressures, Haber focused on a third factor, a catalyst. Catalysts speed up reactions without getting consumed themselves; the platinum in your car’s catalytic converter that breaks down pollutants is an example. Haber knew of two metals, manganese and nickel, that boosted the nitrogen-hydrogen reaction, but they worked only above 1300°F, which fried the ammonia. So he scoured around for substitute catalysts, streaming these gases over dozens of different metals to see what happened. He finally hit upon osmium, element 76, a brittle metal once used to make lightbulbs. It lowered the necessary temperature to “only” 1100°F, which gave ammonia a fighting chance.

      Using his nemesis Nernst’s equations, Haber calculated that osmium, if used in combination with the high-pressure jackets, might boost the yield of ammonia to 8 percent, an acceptable result at last. But before he could lord his triumph over Nernst, he had to confirm that figure in the lab. So in July 1909— after several years of stomach pains, insomnia, and humiliation— Haber daisy-chained several quartz canisters together on a tabletop. He then flipped open a few high-pressure valves to let the N2 and H2 mix, and stared anxiously at the nozzle at the far end.

      It took a while: even with osmium's encouragement, nitrogen breaks its bonds only reluctantly. But eventually a few milky drops of ammonia began to trickle out of the nozzle. The sight sent Haber racing through the halls of his department, shouting for everyone to “Look! Come look!” By the end of the run, they had a whole quarter of a teaspoon.

      They eventually cranked that up into a real gusher— a cup of ammonia every two hours. But even that modest output persuaded BASF to purchase the technology and fast-track it. As he often did to celebrate a triumph, Haber threw his team an epic party. “When it was over,” one assistant recalled, “we could only walk home in a straight line by following the streetcar tracks.”

      Haber’s discovery proved to be an inflection point in history—right up there with the first time a human being diverted water into an irrigation canal or smelted iron ore into tools. As people said back then, Haber had transformed the very air into bread.

      Still, Haber’s advance was as much theoretical as anything: he proved you could make ammonia (and therefore fertilizer) from nitrogen gas, but the output from his apparatus barely could have nourished your tomatoes, much less fed a nation like Germany. Scaling Haber's process up to make tons of ammonia at once would require a different genus of genius—the ability to turn promising ideas into real, working things. This was not a genius that most BASF executives possessed. They saw ammonia as just another chemical to add to their portfolio, a way to pad their profits a little. But the thirty-five-year-old engineer they put in charge of their new ammonia division, Carl Bosch, had a grander vision. He saw ammonia as potentially the most important— and lucrative—chemical of the new century, capable of transforming food production worldwide. As with most visions worth having, it was inspiring and dicey all at once.

      Bosch decided to tackle each of the many subproblems with ammonia production independently. One issue was getting pure enough nitrogen, since regular air contains oxygen and other “impurities.” For help here, Bosch turned to an unlikely source, the Guinness Brewing company. Fifteen years earlier Guinness had developed the most powerful refrigeration devices on the planet, so powerful they could liquefy air. (As with any substance, if you chill the gases in the air enough, they'll condense into puddles of liquid.) Bosch was more interested in the reverse process—taking cold liquid air and boiling it. Curiously, although liquid air contains many different substances mixed together, each substance within it boils off at a separate temperature when heated. Liquid nitrogen happens to boil at –320°F. So all Bosch had to do was liquefy some air with the Guinness refrigerators, warm the resulting pool of liquid to –319°F, and collect the nitrogen fumes. Every time you see a sack of fertilizer today, you can thank Guinness stout.
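
      The separation Bosch borrowed rests entirely on the components of air having different boiling points. A toy sketch of the idea (the boiling points are standard handbook values; the selection logic is an illustration of fractional distillation, not BASF's actual plant procedure):

boiling_points_f = {   # boiling points at atmospheric pressure, in Fahrenheit
    "nitrogen": -320,
    "argon":    -303,
    "oxygen":   -297,
}

def boils_off(temperature_f):
    """List the components that vaporize out of liquid air held at this temperature."""
    return [gas for gas, bp in boiling_points_f.items() if temperature_f > bp]

print(boils_off(-319))   # ['nitrogen']: warm liquid air one degree and only N2 escapes
print(boils_off(-295))   # all three gases have boiled away by this point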

      The second issue was the catalyst. Although effective at kick-starting the reaction, osmium would never work in industry: as an ore it makes gold look cheap and plentiful, and buying enough osmium to produce ammonia at the scales Bosch envisioned would have bankrupted the firm. Bosch needed a cheap substitute, and he brought the entire periodic table to bear on the problem, testing metal after metal after metal. In all, his team ran twenty thousand experiments before finally settling on aluminum oxide and calcium mixed with iron. Haber the scientist had sought perfection—the best catalyst. Bosch the engineer settled for a mongrel.

      Pristine nitrogen and cut-rate catalysts meant nothing, however, if Bosch couldn't overcome the biggest obstacle, the enormous pressures involved. A professor in college once told me that the ideal piece of equipment for an experiment falls apart the moment you take the last data point: that means you wasted the least possible amount of time maintaining it. (Typical scientist.) Bosch’s equipment had to run for months without fail, at temperatures hot enough to make iron glow and at pressures twenty times higher than in locomotive steam engines. When BASF executives first heard those figures, they gagged: one protested that an oven in his department running at a mere seven times atmospheric pressure—one-thirtieth of what was proposed—had exploded the day before. How could Bosch ever build a reaction vessel strong enough?

      Bosch replied that he had no intention of building the vessel himself. Instead he turned to the Krupp armament company, makers of legendarily large cannons and field artillery. Intrigued by the challenge, Krupp engineers soon built him the chemistry equivalent of the Big Bertha: a series of eight-foot-tall, one-inch-thick steel vats. Bosch then jacketed the vessels in concrete to further protect against explosions. Good thing, because the first one burst after just three days of testing. But as one historian commented, “The work could not be allowed to stop because of a little shrapnel.” Bosch’s team rebuilt the vessels, lining them with a chemical coating to prevent the hot gases from corroding the insides, then invented tough new valves, pumps, and seals to withstand the high-pressure beatings.

      Beyond introducing these new technologies, Bosch also helped introduce a new approach to doing science. Traditional science had always relied on individuals or small groups, with each person providing input into the entire process. Bosch took an assembly-line approach, running dozens of small projects in parallel, much like the Manhattan Project three decades later. Also like the Manhattan Project, he got results amazingly quickly—and on a scale most scientists had never considered possible. Within a few years of Haber's first drips, the BASF ammonia division had erected one of the largest factories in the world, near the city of Oppau. The plant contained several linear miles of pipes and wiring, and used gas liquefiers the size of bungalows. It had its own railroad hub to ship raw materials in, and a second hub for transporting its ten thousand workers. But perhaps the most amazing thing about Oppau was this: it worked, and it made ammonia every bit as quickly as Bosch had promised. Within a few years, ammonia production doubled, then doubled again. Profits grew even faster.

      Despite this success, by the mid-1910s Bosch decided that even he had been thinking too small, and he pushed BASF to open a larger and more extravagant plant near the city of Leuna. More steel vats, more workers, more miles of pipes and wiring, more profit. By 1920 the completed Leuna plant stretched two miles long and one mile across—“a machine as big as a town,” one historian marveled.

      Oppau and Leuna launched the modern fertilizer industry, and it has basically never slowed down since. Even today, a century later, the Haber-Bosch process still consumes a full 1 percent of the world’s energy supply. Human beings crank out 175 million tons of ammonia fertilizer each year, and that fertilizer grows half the world’s food. Half. In other words, one of every two people alive today, 3.6 billion of us, would disappear if not for Haber-Bosch. Put another way, half your body would disappear if you looked in a mirror: one of every two nitrogen atoms in your DNA and proteins would still be flitting around uselessly in the air if not for Haber’s spiteful genius and Bosch’s greedy vision.

Report Details Persistent Hostility to Women in Science

[These excerpts are from an article by Meredith Wadman in the June 15, 2018, issue of Science.]

      Ask someone for an example of sexual harassment and they might cite a professor’s insistent requests to a grad student for sex. But such lurid incidents account for only a small portion of a serious and widespread harassment problem in science, according to a report released this week by the National Academies of Sciences, Engineering, and Medicine. Two years in the making, the report describes pervasive and damaging “gender harassment”—behaviors that belittle women and imply that they don’t belong, including sexist comments and demeaning jokes. Between 17% and 50% of female science and medical students reported this kind of harassment in large surveys conducted by two major university systems across 36 campuses….

      Decades of failure to curb sexual harassment, despite civil rights laws that make it illegal, underscore the need for a change in culture, the report says….The authors suggest universities take measures to publicly report the number of harassment complaints they receive and investigations they conduct, use committee-based advising to prevent students from being in the power of a single harasser, and institute alternative, less formal ways for targets to report complaints if they don’t wish to start an official investigation….

      The report says women in science, engineering, or medicine who are harassed may abandon leadership opportunities to dodge perpetrators, leave their institutions, or leave science altogether. It also highlights the ineffectiveness of ubiquitous, online sexual harassment training and notes what is likely massive underreporting of sexual harassment by women who justifiably fear retaliation. To retain the talents of women in science, the authors write, will require true cultural change rather than “symbolic compliance” with civil rights laws.

Seaweed Masses Assault Caribbean Islands

[These excerpts are from an article by Katie Langin in the June 15, 2018, issue of Science.]

      In retrospect, 2011 was just the first wave. That year, massive rafts of Sargassum—a brown seaweed that lives in the open ocean—washed up on beaches across the Caribbean, trapping sea turtles and filling the air with the stench of rotting eggs….Before then, beachgoers had sometimes noticed “little drifty bits on the tideline,” but the 2011 deluge of seaweed was unprecedented,…piling up meters thick in places….

      Locals hoped the episode, a blow to tourism and fisheries, was a one-off….Now, the Caribbean is bracing for what could be the mother of all seaweed invasions, with satellite observations warning of record-setting Sargassum blooms and seaweed already swamping beaches. Last week, the Barbados government declared a national emergency….

      Before 2011, open-ocean Sargassum was mostly found in the Sargasso Sea, a patch of the North Atlantic Ocean enclosed by ocean currents that serves as a spawning ground for eels. So when the first masses hit the Caribbean, scientists assumed they had drifted south from the Sargasso Sea. But satellite imagery and data on ocean currents told a different story….

      Since 2011, tropical Sargassum blooms have recurred nearly every year, satellite imagery showed….

      Yet in satellite data prior to 2011, the region is largely free of seaweed….That sharpens the mystery of the sudden proliferation….Nutrient inputs from the Amazon River, which discharges into the ocean around where blooms were first spotted, may have stimulated Sargassum growth. But other factors, including changes in ocean currents and increased ocean fertilization from iron in airborne dust, are equally plausible….

      In the meantime, the Caribbean is struggling to cope as yearly bouts of Sargassum become “the new normal”….the blooms visible in satellite imagery dwarf those of previous years….

HIV—No Time for Complacency

[These excerpts are from an article by Quarraisha Abdool Karim and Salim S. Abdool Karim in the June 15, 2018, issue of Science.]

      Today, the global HIV epidemic is widely viewed as triumph over tragedy. This stands in stark contrast to the first two decades of the epidemic, when AIDS was synonymous with suffering and death….

      The AIDS response has now become a victim of these successes: As it eases the pain and suffering from AIDS, it creates the impression that the epidemic is no longer important or urgent. Commitment to HIV is slowly dissipating as the world’s attention shifts elsewhere. Complacency is setting in.

      However, nearly 5000 new cases of HIV infection occur each day, defying any claim of a conquered epidemic. The estimated 36.7 million people living with HIV, 1 million AIDS-related deaths, and 1.8 million new infections in 2016 remind us that HIV remains a serious global health challenge. Millions need support for life-long treatment, and millions more still need to start antiretroviral treatment, many of whom do not even know their HIV status. People living with HIV have more than a virus to contend with; they must cope with the stigma and discrimination that adversely affect their quality of life and undermine their human rights.

      A further crucial challenge looms large: how to slow the spread of HIV. The steady decline in the number of new infections each year since the mid-1990s has almost stalled, with little change in the past 5 years. HIV continues to spread at unacceptable levels in several countries, especially in marginalized groups, such as men who have sex with men, sex workers, people who inject drugs, and transgender individuals. Of particular concern is the state of the HIV epidemic in sub-Saharan Africa, where young women aged 15 to 24 years have the highest rates of new infections globally. Their sociobehavioral and biological risks—including age-disparate sexual coupling patterns between teenage girls and men in their 30s, limited ability to negotiate safer sex, genital inflammation, and vaginal dysbiosis—are proving difficult to mitigate. Current HIV prevention technologies, such as condoms and pre-exposure prophylaxis, have had limited impact in young women in Africa, mainly due to their limited access, low uptake, and poor adherence.

      There is no room for complacency when so much more remains to be done for HIV prevention and treatment. The task of breaking down barriers and building bridges needs greater commitment and impetus. Now is not the time to take the foot off the pedal….

Does Tailoring Instruction to “Learning Styles” Help Students Learn?

[These excerpts are from an article by Daniel T. Willingham in the Summer 2018 issue of American Educator.]

      Research has confirmed the basic summary I offered in 2005; using learning-styles theories in the classroom does not bring an advantage to students. But there is one new twist. Researchers have long known that people claim to have learning preferences—they’ll say, “I’m a visual learner” or “I like to think in words.” There's increasing evidence that people act on those beliefs; if given the chance, the visualizer will think in pictures rather than words. But doing so confers no cognitive advantage. People believe they have learning styles, and they try to think in their preferred style, but doing so doesn't help them think….

      It’s fairly obvious that some children learn more slowly or put less effort into schoolwork, and researchers have amply confirmed this intuition. Strategies to differentiate instruction to account for these disparities are equally obvious: teach at the learner’s pace and take greater care to motivate the unmotivated student. But do psychologists know of any nonobvious student characteristics that teachers could use to differentiate instruction?

      Learning-styles theorists think they’ve got one: they believe students vary in the mode of study or instruction from which they benefit most. For example, one theory has it that some students tend to analyze ideas into parts, whereas other students tend to think more holistically. Another theory posits that some students are biased to think verbally, whereas others think visually.

      When we define learning styles, it’s important to be clear that style is not synonymous with ability. Ability refers to how well you can do something. Style is the way you do it. I find an analogy to sports useful: two basketball players might be equally good at the game but have different styles of play; one takes a lot of risks, whereas the other is much more conservative in the shots she takes. To put it another way, you’d always be pleased to have more ability, but one style is not supposed to be valued over another; it’s just the way you happen to do cognitive work. But just as a conservative basketball player wouldn’t play as well if you forced her to take a lot of chancy shots, learning-styles theories hold that thinking will not be as effective outside of your preferred style.

      In other words, when we say someone is a visual learner, we don’t mean they have a great ability to remember visual detail (although that might be true). Some people are good at remembering visual detail, and some people are good at remembering sound, and some people are gifted in moving their bodies. That’s kind of obvious because pretty much every human ability varies across individuals, so some people will have a lot of any given ability and some will have less. There’s not much point in calling variation in visual memory a “style” when we already use the word “ability” to refer to the same thing.

      The critical difference between styles and abilities lies in the idea of style as a venue for processing, a way of thinking that an individual favors. Theories that address abilities hold that abilities are not interchangeable; I can’t use a mental strength (e.g., my excellent visual memory) to make up for a mental weakness (e.g., my poor verbal memory). The independence of abilities shows us why psychologist Howard Gardner's theory of multiple intelligences is not a theory of learning styles. Far from suggesting that abilities are exchangeable, Gardner explicitly posits that different abilities use different “codes” in the brain and therefore are incompatible. You can’t use the musical code to solve math problems, for example….

      In short, recent experiments do not change the conclusion that previous reviewers of this literature have drawn: there is not convincing evidence to support the idea that tailoring instruction according to a learning-styles theory improves student outcomes….

      Research from the last 10 years confirms that matching instruction to learning style brings no benefit. But other research points to a new conclusion: people do have biases about preferred modes of thinking, even though these biases don’t help them think better.

      …In sum, people do appear to have biases to process information one way or another (at least for the verbalizer/visualizer and the intuitive/reflective styles), but these biases do not confer any advantage. Nevertheless, working in your preferred style may make it feel as though you’re learning more.

      But if people are biased to think in certain ways, maybe catering to that bias would confer an advantage to motivation, even if it doesn't help thinking? Maybe honoring learning styles would make students more likely to engage in class activities? I don’t believe either has been tested, but there are a few reasons I doubt we'd see these hypothetical benefits. First, these biases are not that strong, and they are easily overwhelmed by task features; for example, you may be biased to reflect rather than to intuit, but if you feel hurried, you’ll abandon reflection because it’s time-consuming. Second, and more important, there are the task effects. Even if you're a verbalizer, if you're trying to remember sentences, it doesn’t make sense for me to tell you to verbalize (for example, by repeating the sentences to yourself) because visualizing (for example, by creating a visual mental image) will make the task much easier. Making the task more difficult is not a good strategy for motivation….

      One educational implication of this research is obvious: educators need not worry about their students’ learning styles. There's no evidence that adapting instruction to learning styles provides any benefit. Nor does it seem worthwhile to identify students’ learning styles for the purpose of warning them that they may have a pointless bias to process information one way or another. The bias is only one factor among many that determine the strategy an individual will select—the phrasing of the question, the task instructions, and the time allotted all can impact thinking strategies.

      A second implication is that students should be taught fruitful thinking strategies for specific types of problems. Although there’s scant evidence that matching the manner of processing to a student's preferred style brings any benefit, there’s ample evidence that matching the manner of processing to the task helps a lot. Students can be taught useful strategies for committing things to memory, reading with comprehension, overcoming math anxiety, or avoiding distraction, for example. Learning styles do not influence the effectiveness of these strategies.

Bilingual Boost

[These excerpts are from an article by Jane C. Hu in the June 2018 issue of Scientific American.]

      Children growing up in low-income homes score lower than their wealthier peers on cognitive tests and other measures of scholastic success, study after study has found. Now mounting evidence suggests a way to mitigate this disadvantage: learning another language.

      …researchers probed demographic data and intellectual assessments from a subset of more than 18,000 kindergartners and first graders in the U.S. As expected, they found children from families with low socioeconomic status (based on factors such as household income and parents’ occupation and education level) scored lower on cognitive tests. But within this group, kids whose families spoke a second language at home scored better than monolinguals.

      Evidence for a “bilingual advantage”—the idea that speaking more than one language improves mental skills such as attention control or ability to switch between tasks—has been mixed. Most studies have had only a few dozen participants from mid- to high-socioeconomic-status backgrounds perform laboratory-based tasks.

      …sought out a data set of thousands of children who were demographically representative of the U.S. population. It is the largest study to date on the bilingual advantage and captures more socioeconomic diversity than most others….The analysis also includes a real-world measure of children's cognitive skills: teacher evaluations.

      The use of such a sizable data set “constitutes a landmark approach” for language studies….the data did not contain details such as when bilingual subjects learned each language or how often they spoke it. Without this information…it is difficult to draw conclusions about how being bilingual could confer cognitive advantages….

Suffocated Seas

[These excerpts are from an article by Lucas Joel in the June 2018 issue of Scientific American.]

      Earth’s largest mass extinction to date is sometimes called the Great Dying—and for good reason: it wiped out about 70 percent of life on land and 95 percent in the oceans. Researchers have long cited intense volcanism in modern-day Siberia as the main culprit behind the cataclysm, also known as the Permian-Triassic mass extinction, 252 million years ago. A recent study pins down crucial details of the killing mechanism, at least for marine life: oceans worldwide became oxygen-starved, suffocating entire ecosystems.

      Scientists had previously suspected that anoxia, or a lack of oxygen, was responsible for destroying aquatic life. Supporting data came from marine rocks that formed in the ancient Tethys Ocean—but that body of water comprised only about 15 percent of Earth’s seas. That is hardly enough to say anything definitive about the entire marine realm….

      …This approach enabled the researchers to spot clues in rocks from Japan that formed around the time of the extinction in the middle of the Panthalassic Ocean, which then spanned most of the planet and held the majority of its seawater….

      The findings may have special relevance in modern times because the trigger for this ancient anoxia was most likely climate change caused by Siberian volcanoes pumping carbon dioxide into the atmosphere. And today, as human activity warms the planet, the oceans hold less oxygen than they did many decades ago. Brennecka cautions against speculating about the future but adds: “I think it’s pretty clear that when large-scale changes happen in the oceans, things die.”

The Origin of the Earth

[This excerpt is the start of an article by Harold C. Urey in the October 1952 issue of Scientific American.]

      It is probable that as soon as man acquired a large brain and the mind that goes with it he began to speculate on how far the earth extended, on what held it up, on the nature of the sun and moon and stars, and on the origin of all these things. He embodied his speculations in religious writings, of which the first chapter of Genesis is a poetic and beautiful example. For centuries these writings have been part of our culture, so that many of us do not realize that some of the ancient peoples had very definite ideas about the earth and the solar system which are quite acceptable today.

      Aristarchus of the Aegean island of Samos first suggested that the earth and the other planets moved about the sun—an idea that was rejected by astronomers until Copernicus proposed it again 2,000 years later. The Greeks knew the shape and the approximate size of the earth, and the cause of eclipses of the sun. After Copernicus the Danish astronomer Tycho Brahe watched the motions of the planet Mars from his observatory on the Baltic island of Hveen; as a result Johannes Kepler was able to show that Mars and the earth and the other planets move in ellipses about the sun. Then the great Isaac Newton proposed his universal law of gravitation and laws of motion, and from these it was possible to derive an exact description of the entire solar system. This occupied the minds of some of the greatest scientists and mathematicians in the centuries that followed.

      Unfortunately it is a far more difficult problem to describe the origin of the solar system than the motion of its parts. The materials that we find in the earth and the sun must originally have been in a rather different condition. An understanding of the process by which these materials were assembled requires the knowledge of many new concepts of science such as the molecular theory of gases, thermodynamics, radioactivity and quantum theory. It is not surprising that little progress was made along these lines until the 20th century.

Beavers, Rebooted

[These excerpts are from an editorial by Ben Goldfarb in the June 8, 2018, issue of Science.]

      In 1836, an explorer named Stephen Meek wandered down the piney slopes of Northern California’s Klamath Mountains and ended up here, in the finest fur trapping ground he’d ever encountered. This swampy basin would ultimately become known as the Scott Valley, but Meek’s men named it Beaver Valley after its most salient resource: the rodents whose dams shaped its ponds, marshes, and meadows. Meek’s crew caught 1800 beavers here in 1850 alone, shipping their pelts to Europe to be felted into waterproof hats. More trappers followed, and in 1929 one killed and skinned the valley’s last known beaver.

      The massacre spelled disaster not only for the beavers, but also for the Scott River’s salmon, which once sheltered in beaver-built ponds and channels. As old beaver dams collapsed and washed away, wetlands dried up and streams carved into their beds. Gold mining destroyed more habitat. Today, the Scott resembles a postindustrial sacrifice zone, its once lush floodplain buried under heaps of mine tailings…

      All is not lost, however. Beyond one slag heap, a tributary called Sugar Creek has been transformed into a shimmering pond, broad as several tennis courts and fringed with willow and alder. Gilmore tugged up her shorts and waded into the basin, sandals sinking deep into chocolatey mud. Schools of salmon fry flowed like mercury around her ankles. It was as if she had stepped into a time machine and been transported back to the Scott's fecund past. This oasis, Gilmore explained, is the fruit of a seemingly quixotic effort to re-beaver Beaver Valley. At the downstream end of the pond stood the structure that made the resurrection possible: a rodent-human collaboration known as a beaver dam analog (BDA). Human hands felled and peeled Douglas fir logs, pounded them upright into the stream bed, and wove a lattice of willow sticks through the posts. A few beavers that had recently returned to the valley promptly took over, gnawing down nearby trees and reinforcing the dam with branches and mud….

Dig Seeks Site of First English Settlement in the New World

[These excerpts are from an editorial by Andrew Lawler in the June 8, 2018, issue of Science.]

      In 1587, more than 100 men, women, and children settled on Roanoke Island in what is now North Carolina. War with Spain prevented speedy resupply of the colony—the first English settlement in the New World, backed by Elizabethan courtier Sir Walter Raleigh. When a rescue mission arrived 3 years later, the town was abandoned and the colonists had vanished.

      What is commonly called the Lost Colony has captured the imagination of generations of professional and amateur sleuths, but the colonists’ fate is not the only mystery. Despite more than a century of digging, no trace has been found of the colonists’ town—only the remains of a small workshop and an earthen fort that may have been built later, according to a study to be published this year. Now, after a long hiatus, archaeologists plan to resume digging this fall….

      The first colonists arrived in 1585, when a voyage from England landed more than 100 men here, among them a science team including Joachim Gans, a metallurgist from Prague and the first known practicing Jew in the Americas. According to eyewitness accounts, the colonists built a substantial town on the island’s north end. Gans built a small lab where he worked with scientist Thomas Harriot. After the English assassinated a local Native American leader, however, they faced hostility. After less than a year, they abandoned Roanoke and returned to England.

      A second wave of colonists, including women and children, arrived in 1587 and rebuilt the decaying settlement. Their governor, artist John White, returned to England for supplies and more settlers, but war with Spain delayed him in England for 3 years. When he returned here in 1590, he found the town deserted.

      By the time President James Monroe paid a visit in 1819, all that remained was the outline of an earthen fort, presumed to have been built by the 1585 all-male colony. Digs near the earthwork in the 1890s and 1940s yielded little. The U.S. National Park Service (NPS) subsequently reconstructed the earthen mound, forming the centerpiece of today’s Fort Raleigh National Historic Site.

      Then in the 1990s, archaeologists led by Ivor Noël Hume of The Colonial Williamsburg Foundation in Virginia uncovered remains of what archaeologists agree was the workshop where Gans tested rocks for precious metals and Harriot studied plants with medicinal properties, such as tobacco. Crucibles and pharmaceutical jars littered the floor, along with bits of brick from a special furnace. The layout closely resembled those in 16th century woodcuts of German alchemical workshops.

      In later digs Noël Hume determined that the ditch alongside the earthwork cuts across the workshop—suggesting the fort was built after the lab and possibly wasn’t even Elizabethan. NPS refused to publish these controversial results, and Noël Hume died in 2017. But the foundation intends to publish his paper in coming months….

Knowledge Can Be Power

[These excerpts are from an article by Peter Salovey in the June 2018 issue of Scientific American.]

      If knowledge is power, scientists should easily be able to influence the behavior of others and world events. Researchers spend their entire careers discovering new knowledge—from a single cell to the whole human, from an atom to the universe.

      Issues such as climate change illustrate that scientists, even if armed with overwhelming evidence, are at times powerless to change minds or motivate action….

      For many, knowledge about the natural world is superseded by personal beliefs. Wisdom across disciplinary and political divides is needed to help bridge this gap. This is where institutions of higher education can provide vital support. Educating global citizens is one of the most important charges to universities, and the best way we can transcend ideology is to teach our students, regardless of their majors, to think like scientists. From American history to urban studies, we have an obligation to challenge them to be inquisitive about the world, to weigh the quality and objectivity of data presented to them, and to change their minds when confronted with contrary evidence.

      Likewise, STEM majors' college experience must be integrated into a broader model of liberal education to prepare them to think critically and imaginatively about the world and to understand different viewpoints. It is imperative for the next generation of leaders in science to be aware of the psychological, social and cultural factors that affect how people understand and use information.

      Through higher education, students can gain the ability to recognize and remove themselves from echo chambers of ideologically driven narratives and help others do the same. Students at Yale, the California Institute of Technology and the University of Waterloo, for instance, developed an Internet browser plug-in that helps users distinguish bias in their news feeds. Such innovative projects exemplify the power of universities in teaching students to use knowledge to fight disinformation.

      For a scientific finding to find traction in society, multiple factors must be considered. Psychologists, for example, have found that people are sensitive to how information is framed. My research group discovered that messages focused on positive outcomes have more success in encouraging people to adopt illness-prevention measures, such as applying sunscreen to lower their risk for skin cancer, than loss-framed messages, which emphasize the downside of not engaging in such behaviors. Loss-framed messages are better at motivating early-detection behaviors such as mammography screening.

      Scientists cannot work in silos and expect to improve the world, particularly when false narratives have become entrenched in communities…

      Universities are conveners of experts and leaders across disciplinary and political boundaries. Knowledge is power but only if individuals are able to analyze and compare information against their personal beliefs, are willing to champion data-driven decision making over ideology, and have access to a wealth of research findings to inform policy discussions and decisions.

Constitutional Right to Contraception

[This excerpted editorial from the March 8, 2018, issue of The New York Times appeared in the June 2018 issue of Population Connection.]

      Landmark Supreme Court decisions in 1965 and 1972 recognizing a constitutional right to contraception made it more likely that women went to college, entered the work force, and found economic stability. That's all because they were better able to choose when, or whether, to have children.

      A 2012 study from the University of Michigan found that by the 1990s, women who had early access to the birth control pill had wage gains of up to 30 percent, compared with older women.

      It’s mind-boggling that anyone would want to thwart that progress, especially since women still have so far to go in attaining full equality in the United States. But the Trump administration has signaled it may do just that, in a recent announcement about funding for a major family planning program, Title X.

      Since 1970, the federal government has awarded Title X grants to providers of family planning services — including contraception, cervical cancer screenings, and treatment for sexually transmitted infections — to help low-income women afford them. It’s a crucial program.

      Yet the Trump administration appeared to accept the conservatives’ retrograde thinking with a recent announcement from the Department of Health and Human Services’ Office of Population Affairs outlining its priorities for awarding Title X grants. Alarmingly, unlike previous funding announcements, the document makes zero reference to contraception. In setting its standards for grants, it disposes of nationally recognized clinical standards, developed with the federal Centers for Disease Control and Prevention, that have long been guideposts for family planning. Instead, the government says it wants to fund “innovative” services and emphasizes “fertility awareness” approaches, which include the so-called rhythm method. These have long been preferred by the religious right, but are notoriously unreliable.

Trump on Family Planning

[This excerpted editorial from the March 11, 2018, issue of the St. Louis Post-Dispatch appeared in the June 2018 issue of Population Connection.]

      The Trump administration's answer to questions surrounding family planning and safe sex is to give preference for $260 million in grants to groups stressing abstinence and “fertility awareness.” Instead of urging at-risk members of the public to use condoms and other forms of protection, the administration favors far less safe and effective measures such as the rhythm method.

      Effective and accessible contraception has helped lower rates of unplanned pregnancies in the U.S., thereby reducing the number of abortions. The federal Centers for Disease Control and Prevention reported last year that there were fewer abortions in 2014 than at any time since abortion was legalized in 1973. Adolescent pregnancies decreased 55 percent between 1990 and 2011. Birth rates for women between 15 and 19 declined an additional 35 percent between 2011 and 2016, according to the data.

      Much of the goal of family planning and contraception is to reduce the abortion rate by limiting unintended pregnancies and to decrease the number of sexually transmitted infections. There are enormous health, social, and economic benefits for women who control their own reproductive health.

      The administration’s emphasis on abstinence and natural family planning — including the so-called rhythm method — is part of a familiar pattern of shifting away from scientific, evidence-based policies toward non-scientific ideologies. With the current shift, Trump is undermining nearly fifty years of successful family planning efforts.

      Abstinence is 100 percent effective if practiced consistently. That’s a big if. Fertility awareness is effective if practitioners have a nearly medical understanding of hormonal cycles and adhere to them unfailingly.
