Interesting Excerpts
The following excerpts are from articles or books that I have recently read. They caught my interest and I hope that you will find them worth reading. If one does spark an action on your part and you want to learn more or you choose to cite it, I urge you to actually read the article or source so that you better understand the perspective of the author(s).
How Cleaner Air Changes the Climate

[These excerpts are from an article by Elizabeth Pennisi in the April 13, 2018, issue of Science.]

      Human influence on the climate is a tug-of-war, with greenhouse gas-induced warming being held partly in check by cooling from aerosol emissions. In a Faustian bargain, humans have effectively dampened global climate change through air pollution. Increased greenhouse gas concentrations from fossil fuel use are heating the planet by trapping heat radiation. At the same time, emissions of aerosols—particles that make up a substantial fraction of air pollution—have an overall cooling effect by reflecting incoming sunlight. The net effect of greenhouse gases and aerosols is the ~1°C of global warming observed since 1880 CE. The individual contributions of greenhouse gases and aerosols are, however, much more uncertain. Recent climate model simulations indicate that without anthropogenic aerosols, global mean surface warming would be at least 0.5°C higher, and that in their absence there would also be a much greater precipitation change….

      Since 1990, there has been little change in the global volume of anthropogenic aerosol emissions. Regionally, however, there are large differences, with reductions in Europe and the United States balanced by increases in Africa and Asia….

Human Mutation Rate a Legacy from Our Past

[These excerpts are from an article by Elizabeth Pennisi in the April 13, 2018, issue of Science.]

      Kelley Harris wishes humans were more like paramecia. Every newborn’s DNA carries more than 60 new mutations, some of which lead to birth defects and disease, including cancers. “If we evolved paramecium-like replication and DNA repair processes, that would never happen,” says Harris….Researchers have learned that these single-cell protists go thousands of generations without a single DNA error—and they are figuring out why human genomes seem so broken in comparison.

      The answer, researchers reported at the Evolution of Mutation Rate workshop here late last month, is a legacy of our origins. Despite the billions on Earth today, humans numbered just thousands in the early years of our species. In large populations, natural selection efficiently weeds out deleterious genes, but in smaller groups like those early humans, harmful genes that arise—including those that foster mutations—can survive.

      Support comes from data on a range of organisms, which show an inverse relationship between mutation rate and ancient population size. This understanding offers insights into how cancers develop and also has implications for efforts to use DNA to date branches on the tree of life.

      Mutations occur, for example, when cells copy their DNA incorrectly or fail to repair damage from chemicals or radiation. Some mistakes are good, providing variation that enables organisms to adapt. But some of these genetic mistakes cause the mutation rate to rise, thus fostering more mutations.

      For a long time, biologists assumed mutation rates were identical among all species, and so predictable that they could be used as “molecular clocks.” By counting differences between the genomes of two species or populations, evolutionary geneticists could date when they diverged. But now that geneticists can compare whole genomes of parents and their offspring, they can count the actual number of new mutations per generation.

      That has enabled researchers to measure mutation rates in about 40 species, including newly reported numbers for orangutans, gorillas, and African green monkeys. The primates have mutation rates similar to humans….But…bacteria, paramecia, yeasts, and nematodes—all of which have much larger populations than humans—have mutation rates orders of magnitude lower.

      The variation suggests that in some species, genes that cause high mutation rates—for instance, by interfering with DNA repair—go unchecked. In 2016, Michael Lynch detailed a possible reason, which he calls the drift barrier hypothesis.… Genetic drift plays a bigger role in smaller populations. In large populations, harmful mutations are often counteracted by later beneficial mutations. But in a smaller population with fewer individuals reproducing, the original mutation can be preserved and continue to do damage.

      Today, 7.6 billion people inhabit Earth, but population geneticists focus on the effective population size, which is the number of people it took to produce the genetic variation seen today. In humans, that's about 10,000—not so different from that of other primates. Humans tend to form even smaller groups and mate within them. In such small groups, Harris says, “we can’t optimize our biology because natural selection is imperfect”….

      …Among Europeans, the excess cytosine-to-thymine mutations existed in early farmers but not in hunter-gatherers, she reported. She speculates that these farmers’ wheat diet may have led to nutrient deficiencies that predisposed them to a mutation in a gene that in turn favored the cytosine-to-thymine changes, suggesting environment can lead to changes in mutation rate. Drift likely played a role in helping the mutation-promoting gene stick around.

What We All Need to Know about Vaping

[These excerpts are from an article by Susan Leonard in the April 2018 issue of Phi Delta Kappan.]

      …Among these experts’ many concerns was the fact that most delivery devices contain large concentrations of propylene glycol, which is a known irritant when inhaled. Little is known about the effect of long-term inhalation of this chemical. Additionally, because these devices are unregulated, users have no idea what other chemicals they may be inhaling, nor what the short- or long-term effects of that exposure are.

      …Nicotine causes the release of adrenaline, which elevates the heart rate, increases blood pressure, and constricts blood vessels, potentially leading to long-term heart problems. The effect of vaping is almost immediate, but it also wears off quickly and encourages the user to want to vape again and again, needing more and more to feel the same effects.

      Candidly, it is hard for me to believe that this industry is truly trying to do good by helping smokers wean themselves off harmful tobacco cigarettes when they market the devices with fruity flavors and pack their juices with more nicotine than cigarettes are allowed to contain under Food and Drug Administration regulations. Nicotine is highly and quickly addicting, and addiction creates a lucrative product, especially if those users are young.

      During adolescence, the brain is sensitive to novel experiences. Unfortunately, just as young people want most to experiment with new and risky behaviors, their immature brains have very different sensitivities to drugs and alcohol. Exposure to nicotine during this stage can lead to long-term changes in neurology and behavior. Chronic nicotine exposure during adolescence also alters the subsequent response of the serotonin system. Such alterations are often permanent….

      We should also not ignore the gateway effect nicotine has on its users. In 2012, nearly 90% of U.S. adults 18-34 years of age who had used cocaine had smoked cigarettes (i.e., used nicotine) first. Behavioral experiments have proven that nicotine “primes” the brain to enhance the effects of cocaine. So, it's not just that one risky behavior leads to another — nicotine actually affects how the brain works, making other drugs (most commonly marijuana and cocaine) more pleasurable and desirable. And the younger a person is when beginning to use drugs and alcohol, the more likely it is that person will move on to other drugs and/or develop a serious addiction ….

      …Vaping is not just smokeless smoking – it’s a real and present danger to our students.

A Wider Vision of Learning

[These excerpts are from an article by Elliot Washor in the April 2018 issue of Phi Delta Kappan.]

      Today, school leaders tend to be fixated on using big data to crunch a narrow set of numbers, rather than actually thinking big — and deep and broad —about learning. And the more sophisticated the technology they apply (or misapply) to the same handful of indicators, the less clearly they see their students. They use test results to assign learners to groups so that schools can provide “appropriate” interventions, but they don’t actually know very much about the varying talents and interests of the individuals they put into those groups, nor do they know much about the personal struggles those students may be facing that can profoundly affect their performance.

      Further, they may not be able to imagine other ways of assessing students, or — consistent with Abraham Maslow’s observation that a fear of knowing is a fear of doing — they may lack the courage to look more deeply at the talent that sits before them. After all, if they did so, then they might feel obliged to act on what they learn, and this could interfere with their ability to complete the tasks they’ve been assigned. Worse yet, if educators allowed themselves to know more about their students, then they might be forced to acknowledge that the whole system needs to be redesigned.

      But why, given its rapidly expanding ability to collect and analyze seemingly endless streams of data, does the educational system remain so narrowly fixated on the same few indicators? Why can’t we use the power of big data to collect different and better measures that look more broadly and deeply at the things students can and want to do, not just in the classroom but also outside of school?

      …Traditional indicators and measures are at best incomplete; often, they are perniciously inaccurate, even as they delude us into believing we actually know the individual learner. A student might ace an interim standardized test, indicating they are on track to succeed, while also facing significant issues at home that could increase their likelihood of dropping out of school before graduation. Or a student might be engaged in pursuing a passion that presents valuable learning opportunities but does not necessarily improve their grades and test scores. We can’t understand students unless we expand our vision.

Do Students’ High Scores on International Assessments Translate to Low Levels of Creativity?

[This excerpt is from an article by Stefan Johansson in the April 2018 issue of Phi Delta Kappan.]

      Although it’s true that test scores have been misused, Zhao’s critique of PISA — arguing that high PISA scores in East Asian countries are related to low levels of creativity — is difficult, if not impossible, to support. Certainly, nobody, Zhao included, has been able to establish any causal relationships between increasing scores on international large-scale assessments and decreasing levels of innovation across countries.

      To back up his thesis, Zhao compares the mathematics scores from the 2009 PISA with results of the Global Entrepreneurship Monitor (GEM) study, which is a survey of, among other things, perceived levels of entrepreneurial capability (i.e., an individual’s confidence in his or her ability to succeed in entrepreneurship) in a wide range of countries. At first glance, the comparison is striking — countries with high scores on PISA (such as Japan, Korea, and Singapore) all had low scores on their perceived entrepreneurial capacity. At the same time, countries that performed in the middle of the pack on PISA (such as Sweden and the U.S.) ranked fairly high on entrepreneurial capacity, and one of PISA’s lowest performers (the United Arab Emirates) reported the highest entrepreneurial capacity of all.

      Zhao finds this pattern to be evidence of a statistically significant relationship, showing that countries with high PISA scores tend to be less innovative. Moreover, he asserts, many Chinese and Singaporeans themselves “blame their education for their shortage of creative and entrepreneurial talents,” and he states that if they’re correct, then the relationship “could be causal.” That is, Zhao appears to be arguing that the way these countries teach math causes their students not only to do well on tests but also to become less creative. Thus, he concludes, it’s a mistake for U.S. policy makers to pursue reforms that make their schools more like those in East Asia. If they continue to push for more emphasis on standardization, test taking, and highly rigorous academic work, then students’ creativity will be seriously harmed, and the American workforce will become less innovative.

      But on closer inspection, this argument turns out to be pretty weak. It may be true that Singapore and China haven’t produced many world-famous musicians, but other than that, there isn’t much evidence that East Asians suffer from a lack of creativity or, if they do, that it has anything to do with their PISA scores.

      Zhao’s reliance on self-reported levels of entrepreneurial capacity is especially problematic. For one thing, the construct is not well defined, making it difficult to interpret people's self-assessments. For another, it is unclear why entrepreneurial abilities should be equated with creativity, or why creativity should be distinguished from mathematical proficiency. Actually, mathematical reasoning and problem solving are often described as deeply creative activities. For example, Haylock argued that there are at least two major ways in which the term creativity is used in the context of mathematics: 1) thinking that is divergent and overcomes fixation and 2) the thinking behind a product that is perceived as outstanding by a large group of people. Further, creativity is associated with long periods of work and reflection rather than rapid and unique insights.

      Perhaps more important, the context is so different from one country to another that it may not be possible to compare those self-assessments at all, or to know what to make of results that seem to show that people in East Asia are less innovative than their counterparts in the West. For example, it can be very difficult to start a business or pursue other forms of entrepreneurship in a country that is tightly governed by the state, while it is relatively easy to do so in the U.S. or Sweden — one would expect such differences to affect the GEM results.

Avoiding Difficult History

[This excerpt is from a ‘worthy item’ in the April 2018 issue of Phi Delta Kappan.]

      The Southern Poverty Law Center (SPLC) has found that U.S. students are receiving an inadequate education about the role of slavery in American history. Surveys of high school seniors and social studies teachers, analysis of state content standards, and a review of history textbooks revealed what the SPLC considers seven key problems with current practice:

      1. We teach about slavery without context, preferring feel-good stories of heroes like Harriet Tubman over the more difficult story of the role of slave labor in building the nation.

      2. We subscribe to a view of history that acknowledges flaws only to the extent that they have been solved and avoids exploration of the continuing legacy of those flaws.

      3. We teach about slavery as an exclusively southern institution even though it existed in all states when the Declaration of Independence was signed.

      4. We rarely connect slavery to the White supremacist ideology that grew up to protect it.

      5. We rely on pedagogy, such as simulations, that is poorly suited to the subject and potentially traumatizing.

      6. We rarely connect slavery to the present, or even historical events such as the Great Migration and the Harlem Renaissance.

      7. We tend to foreground the White experience by focusing on the political and economic impacts of slavery.

      The report identifies 10 key concepts that should be incorporated into instruction on slavery in the U.S. These include the facts that the slave trade was central to the growth of the U.S. economy and was the chief cause of the Civil War….

Edge of Extinction

[These excerpts are from an article by Sanjay Kumar in the April 6, 2018, issue of Science.]

      The first dinosaur fossils found in Asia, belonging to a kind of sauropod, were unearthed in 1828 in Jabalpur, in central India’s Narmada Valley. Ever since, the subcontinent has yielded a stream of important finds, from some of the earliest plant remains through the reign of dinosaurs to a skull of the human ancestor Homo erectus….

      Much of that fossil richness reflects India’s long, solitary march after it broke loose from the supercontinent Gondwanaland, starting some 150 million years ago. During 100 million years of drifting, the land mass acquired a set of plant and animal species, including many dinosaurs, that mix distinctive features with ones seen elsewhere. Then, 50 million to 60 million years ago, India began colliding with Asia, and along the swampy edges of the vanishing ocean between the land masses, new mammals emerged, including ancestral horses, primates, and whales.

      Now, that rich legacy is colliding with the realities of present-day India. Take a site in Himachal Pradesh state where, in the late 1960s, an expedition by Panjab University and Yale University excavated a trove of hominoid fossils, including the most complete jaw ever found of a colossal extinct ape, Gigantopithecus bilaspurensis. The discovery helped flesh out a species known previously only through teeth and fragmentary jaws. Today’s paleontologists would love to excavate further at the site, Sahni says. But it “has been completely flattened”—turned into farm fields, with many of its fossils lost or sold. To India’s paleontologists, that is a familiar story.

      In the early 1980s, for example, blasting at a cement factory in Balasinor in Gujarat revealed what the workers believed were ancient cannon balls. A team led by Dhananjay Mohabey, a paleontologist then at the Geological Survey of India in Kolkata, realized they were dinosaur eggs. Mohabey and his colleagues soon uncovered thousands more in hundreds of nests, as well as many other fossils. Examining one Cretaceous period clutch in 2010, Jeffrey Wilson of the University of Michigan in Ann Arbor discerned what appeared to be snake bones. He and Mohabey recovered more fossil fragments and confirmed that a rare snake (Sanajeh indicus) had perished while coiled around a dinosaur egg. It was the first evidence, Mohabey says, of snakes preying on dinosaur hatchlings.

      Mohabey and others have since documented seven dinosaur species that nested in the area. (In a separate find in Balasinor, other researchers unearthed the skeleton of a horned carnivore called Rajasaurus narmadensis — the royal Narmada dinosaur.) But locals and visitors soon began pillaging the sites. In the 1980s, dinosaur eggs were sold on the street for pennies.

      In 1997, local authorities designated 29 hectares encompassing the nesting sites as the Balasinor Dinosaur Fossil Park in Raiyoli. But poaching continued largely unabated in the park and outside its boundaries, Mohabey says. Even now, the park is not fully fenced and the museum building, ready since 2011, is still not open….

United States to Ease Car Emission Rules

[This news brief by Jeffrey Brainard is in the April 6, 2018, issue of Science.]

      U.S. President Donald Trump’s administration last week announced it intends to roll back tough auto mileage standards championed by former President Barack Obama to combat climate change. The standards, released in 2012, called for doubling the average fuel economy of new cars and light trucks, to 23.2 kilometers per liter by 2025. The Environmental Protection Agency (EPA) estimated the rules would prevent about 6 billion tons of carbon emissions by 2025. But on 2 April, EPA Administrator Scott Pruitt said the agency would rewrite the standards, arguing that Obama’s EPA “made assumptions ... that didn’t comport with reality, and set the standards too high.” In a formal finding, Pruitt argued the standards downplay costs and are too optimistic about the deployment of new technologies and consumer demand for electric vehicles. Clean car advocates disputed many of Pruitt’s claims, noting that auto sales have been strong despite stiffer tailpipe rules. “Backing off now is irresponsible and unwarranted,” said Luke Tonachel of the Natural Resources Defense Council in Washington, D.C. Pruitt’s move could also set up a legal clash with California state regulators, who have embraced the Obama-era standards and say they want to keep them.

Mist Hardships

[This excerpt is from the first chapter of Caesar’s Last Breath, by Sam Kean.]

      The most deadly gas outburst in history took place in Iceland in 1783, when a volcanic fissure spewed poisonous gas for eight months, ultimately releasing 7 million tons of hydrochloric acid, 15 million tons of hydrofluoric acid, and 122 million tons of sulfur dioxide. Locals called the event the Móðuharðindin, or the “mist hardships,” after the strange, noxious fumes that emerged—“air bitter as seaweed and reeking of rot,” one witness remembered. The mists killed 80 percent of the sheep in Iceland, plus half the cattle and horses. Ten thousand people there also died—one-fifth of the population—mostly of starvation. When the mists wafted over to England, they mixed with water vapor to form sulfuric acid, killing twenty thousand more people. The mists also killed crops across huge swaths of Europe, inducing long-term food shortages that helped spark the French Revolution six years later.

Tiny Dancers

[These excerpts are from a brief article in the Spring 2018 issue of the American Museum of Natural History Rotunda.]

      Relying on sight, scent, and touch, honey bees navigate the world—and even dance to it.

      Their antennae are highly sensitive to vibrations. They also have numerous receptors that respond to odors and other stimuli. Together, these keen senses may explain how worker bees are able to pick up and interpret the so-called “waggle dance,” which they use to share the location of food with fellow worker bees of the colony.

      While the process is not completely understood, it goes a little something like this: a successful forager uses an elaborate dance pattern to indicate both the direction of food in relation to the Sun and its distance from the hive. The dancer adjusts the direction over time to account for the movement of the Sun, as do the foragers in the field.

      The worker bees don’t actually see the waggle dance within the pitch-black hive, perhaps giving new meaning to the phrase “dancing in the dark.” Instead, the bees sense air vibrations through their antennae, which are held close to the dancing, waggling bee….

      The dance is accompanied by an olfactory message, too: pollen brought back by the returning dancing bee or regurgitated nectar conveys the scent of the food at the forage site. Finally, the richness of the nectar source is indicated by the duration of the dance. The bees don’t exactly measure the length of the dance, but the longer the bee dances, the more foragers are recruited—essentially matching the workforce to the harvest at hand.

Volcanic Eruptions

[This excerpt is from the first chapter of Caesar’s Last Breath, by Sam Kean.]

      It took Mount Saint Helens two thousand years to build up its beautiful cone and about two seconds to squander it. It quickly shrank from 9,700 feet to 8,400 feet, shedding 400 million tons of weight in the process. Its plume of black smoke snaked sixteen miles high and created its own lightning as it rose. And the dust it spewed swept across the entire United States and Atlantic Ocean, eventually circling the world and washing over the mountain again from the west seventeen days later. Overall the eruption released an amount of energy equivalent to 27,000 Hiroshima bombs, roughly one per second over its nine-hour eruption.

      With all that in mind, it’s worth noting that Mount Saint Helens was actually small beer as far as eruptions go. Although it vaporized a full cubic mile of rock, that’s only 8 percent of what Krakatoa ejected in 1883 and 3 percent of what Tambora did in 1815. Tambora also decreased sunlight worldwide by 15 percent, disrupted the mighty Asian monsoons, and caused the infamous Year Without a Summer in 1816, when temperatures dropped so much that snow fell in New England in summertime. And Tambora itself would tremble before the truly epic outbursts in history, like the Yellowstone eruption 2.1 million years ago that launched 585 cubic miles of Wyoming into the stratosphere. (This megavolcano will likely have an encore someday and will bury much of the continental United States in ash.)

Continental Drift

[These excerpts are from the first chapter of Caesar’s Last Breath by Sam Kean.]

      To say that geologists didn't embrace Wegener's theory is a bit like saying that General Sherman didn’t receive the warmest welcome in Atlanta. Geologists loathed plate tectonics, even got pleasure out of loathing it. But as more and more evidence trickled in through the 1940s and 1950s, the drifting of continental plates didn't seem so silly anymore. The balance finally tipped in the late 1960s, and in one of the most stunning reversals in science history, pretty much every geologist on earth had accepted Wegener’s ideas by 1980. The rout was so complete that nowadays we have a hard time appreciating the theory’s importance. In the same way that the theory of natural selection shored up biology, plate tectonics took a hodgepodge of facts about earthquakes, mountains, volcanoes, and the atmosphere, and fused them together into one overarching schema.

      Continental plates can sometimes shift all at once in dramatic fashion, jolts we call earthquakes. Much more commonly the plates slowly grind past one another moving at about the rate that fingernails grow. (Think about that next time you clip your nails: we’re that much closer to the Big One.) When one plate slips beneath the other, a process called subduction, the friction of this grinding produces heat, which melts the lower plate and reduces it to magma. Some of this magma disappears into the bowels of Earth; but the lighter fraction of it actually climbs back upward through random cracks in the crust, swimming toward the surface. (That’s why hunks of pumice, a volcanic rock, float when tossed into water, because pumice comes from material with such a low density.) The heat of grinding also liberates carbon dioxide from the melting plate, as well as lesser amounts of hydrogen sulfide, sulfur dioxide, and other gases, including trace amounts of nitrogen.

      Meanwhile, as hot magma pushes upward through cracks in the crust, water in the crust seeps downward through those same cracks. And here’s where things get dangerous. One key fact about gases—it comes up over and over—is that they expand when they get warmer. Related to this, the gaseous version of a substance always takes up far more space than the liquid or solid version. So when that liquid water dribbling downward meets that magma bubbling upward, the water flashes into steam and expands with supernova force, suddenly occupying 1,700 times more volume than before. Firefighters have special reason to fear this phenomenon: when they splash cold water onto hot, hissing fires, the burst of steam in an enclosed space can flash-burn them. So it goes with volcanoes. We ogle the orange lava pouring down the slopes, but it’s gases that cause the explosions and that do most of the damage.

      Around the world roughly six hundred volcanoes are active at any moment. Most of them lie along the famous Ring of Fire around the Pacific Ocean, which rests atop several unstable plates. In the case of Mount Saint Helens, the Juan de Fuca plate off Washington State is grinding away against the North American plate, and doing so roughly a hundred miles beneath the surface. This depth leaves a heavy cap of rock over the pools of magma and thereby prevents constant venting of noxious fumes. But when one of these deep pockets does pop, there’s that much more shrapnel.

Dirty Politics

[These excerpts are from an article by Margaret Talbot in the April 2, 2018, issue of The New Yorker.]

      …[Scott] Pruitt, who is forty-nine, looked cheerful, as he generally does at public appearances. (He declined my requests for an interview.) Unlike many people who have joined the chaotic Trump Administration, he seems unconflicted about his new role, his ideological and career goals fitting together as neatly as Lego blocks. The former attorney general of Oklahoma, Pruitt ascended politically by fighting one regulation after another. In his first year at the E.P.A., he has proposed repealing or delaying more than thirty significant environmental rules. In February, when the White House announced its intention to reduce the E.P.A.'s budget by twenty-five per cent—one of the largest cuts for any federal agency—Pruitt made no objections. His schedule is dominated by meetings and speaking engagements with representatives of the industries he regulates. He has met only a handful of times with environmental groups….

      Under Pruitt, even the dirtiest forms of pollution are getting a reprieve. On February 2, 2014, as much as thirty-nine thousand tons of coal ash began spilling into the Dan River from a Duke Energy power plant in Eden, North Carolina. Like many utilities, the Dan River Steam Station had recently transitioned from coal combustion to natural gas, which is cheaper. But the plant still had waste ponds containing more than a million tons of coal ash; the ponds were separated from the river by an earthen dam. When a guard made his rounds that day, he noticed that the water level in the ponds was rapidly dropping, as though someone had opened a bathtub drain….

      Recent technological changes have caused the wastewater produced by coal-fired power plants to become even more toxic. Some of the worst wastewater is discharged by “wet scrubbers,” which remove pollutants from smokestack emissions. Holleman, the attorney at the Southern Environmental Law Center, said, “I like to say they’re using twenty-first-century technology to take pollutants out of the air and thirteenth-century technology to put it in the water. But someone told me I was insulting the thirteenth century….”

      Pruitt may parrot Trump's views, but he has a far more polished manner. In public appearances, he’s well spoken and unflaggingly polite. When conservative journalists prod him to snipe at the E.P.A.’s “lifelong bureaucrats,” he chuckles and declines the bait. In an interview with a Fort Worth radio station, Pruitt described the E.P.A.’s career employees as “hardworking folks” who, in the Obama years, had lost “their mission.” He told the Daily Signal that he was talking to career employees about “the rule of law and process and federalism,” but emphasized that he was listening to them, too.

      As administrator, Pruitt has become adept at presenting his views with bland jargon. He defends his frequent meetings with industry representatives as time spent with “stakeholders who care about outcomes.” (And he describes them as “farmers and ranchers,” not as “fossil-fuel lobbyists.”) He touts “fuel diversity,” explaining, “It’s not the job of the E.P.A. to say to the utility company in any state of the country, ‘You should choose renewables over natural gas or coal.’ . . . We need more choices, not less.” And Pruitt has adopted a favored term of the anti-regulatory right, “cooperative federalism”: putting more of the onus for environmental rule-making and enforcement on states….

      At the same time that Pruitt has been pledging to clean up some Superfund sites, he has been dismantling important Superfund regulations. In December, he announced that he would eliminate a 2016 rule requiring hard-rock-mining operations, such as gold, silver, and lead mines, to provide evidence that they had the financial resources to clean up any toxic messes that they created. The rule came about after environmental groups sued the E.P.A. over “heap-leach” mining, in which cyanide is used to extract gold from open pits. Multiple companies using this method had caused vast contaminations, then declared bankruptcy or sheltered assets. The 1980 Superfund law states that polluters, not taxpayers, must pay for remediation of disaster sites….

      One of the engineers said that it might take a while to “rebuild capacity” after Pruitt. But it would be done. The public, he reminded everyone, “is expecting us to protect the planet.” He said, “Pruitt is a temporary interloper. We are the real agency.”


[These excerpts are from an article by Jennifer A. Francis in the March 2018 issue of Scientific American.]

      …At the current rate of change, there was a real possibility that within a century, the world could witness a summer Arctic Ocean that would be ice-free, a state not seen for thousands of years. Today I am startled again because it now appears that the ocean will likely be free of summer ice by 2040—a full 60 years earlier than we had predicted little more than a decade ago.

      The Arctic is changing exactly the way scientists thought it would but faster than even the most aggressive predictions. The recent behavior is off the charts. In just three years more than a dozen climate records that had each stood for many decades have crumbled, including those for disappearing summer sea ice, decreasing winter sea ice, warming air and thawing ground.

      These trends signal trouble for people around the world. The last time the Arctic was only slightly warmer than today—about 125,000 years ago—oceans were 13 to 20 feet higher. Goodbye Miami, New Orleans, the naval base in Norfolk, Va., most of New York City and Silicon Valley, as well as Venice, London and Shanghai. New research suggests that rapid Arctic warming also tends to reroute the jet stream in ways that could allow punishing weather patterns to linger across North America, central Europe and Asia longer than usual, subjecting millions of people to unyielding heat waves, droughts or relentless storms. Plankton are increasing throughout the southern Arctic Ocean, which may disrupt food chains that support commercial fisheries. And the massive ice melt is adding to an enormous blob of freshwater south of Greenland that may be slowing the Gulf Stream, which could significantly change weather patterns for continents on both sides of the Atlantic Ocean….

      In only 40 years the extent of ice across the Arctic Ocean in summer has shrunk by half. Yes, half. The volume of sea ice, year-round, is way down, too—about a quarter of what it was in the early 1980s. Until recently, scientists had thought it would take until at least the middle of this century to reach these extremes.

      Summer sea ice is vanishing quickly because of feedbacks—vicious cycles that can amplify a small change. For example, when a bit of extra heat melts bright-white ice, more of the dark ocean surface is exposed, which reflects less of the sun’s energy back to space. That absorbed heat then warms the area further, which melts even more ice, leading to yet more warming….

      Permafrost—soil that usually remains frozen year-round—has been thawing. Buildings constructed atop permafrost are collapsing, trees are toppling and roads are buckling. In addition to disrupting daily life for local residents, thawing soils also can release large quantities of heat-trapping gases into the atmosphere. When the organic matter that has been locked in permafrost for thousands of years thaws, bacteria break it down into carbon dioxide (if oxygen is present) or methane (if it is not). Arctic permafrost contains about twice as much carbon as the atmosphere holds now, so widespread thaw could greatly exacerbate global warming—which would lead to even faster thaw….

      Are these impacts still avoidable? Yes and no. Because the climate’s response lags behind the increases in greenhouse gas concentrations and because carbon dioxide has a very long lifetime in the atmosphere, future change is already baked into the system. But the magnitude and pace can be reduced if society moves quickly to slow emissions and if methods can be developed to extract carbon from the atmosphere in large quantities. Progress on both these fronts is rapid, though likely too little, too late, to preserve the earth and the Arctic as we have known them. Prepare for the unexpected.

Good Gun Policy Needs Research

[These excerpts are from an article by Alan I. Leshner and Victor J. Dzau in the March 16, 2018, issue of Science.]

      The tragic shooting at a school in Parkland, Florida, last month triggered another round of proposals from local, state, and federal policy-makers about controlling firearm-related violence without violating broad interpretations of the rights to keep and bear arms provided by the U.S. Constitution. Unfortunately, there is only very sparse scientific evidence that can help figure out which policies will be effective. Earlier this month, the RAND Corporation released a comprehensive analysis on gun policy in the United States, and among its conclusions is that too few policies and outcomes have been the subject of rigorous scientific investigation. Even the seemingly popular view that violent crime would be reduced by laws prohibiting the purchase or possession of guns by individuals with mental illness was deemed to have only moderate supporting evidence. If the nation is serious about getting firearm-related violence under control, it must rise above its aversion to providing financial support for firearm-related research, and the scientific community will have to expeditiously carry out the needed research.

      There used to be more federally funded research on firearm-related violence than there is now. Although its program was small relative to other public health issues, the U.S. Centers for Disease Control and Prevention (CDC) did support research on firearm violence through its National Center for Injury Prevention and Control. However, the 1996 “Dickey Amendment” prohibited the CDC from funding activities that promoted or advocated for gun control. Although arguably some research might still have been considered acceptable, the amendment was interpreted as an outright prohibition of CDC support for any gun violence research. In 2011, Congress enacted similar restrictions affecting the Department of Health and Human Services, resulting in a dearth of scientific activity on any aspect of the availability and possession of firearms and the violence that might be related to them….

      It’s time to stop the polarized “debates” that lack a science base and turn our energies toward constructive, informed examination. The IOM-NRC report has spelled out a research path that calls for a closer examination of the characteristics of firearm-related violence; the risk and protective factors (like growing up in violence-prone environments) that increase the probability of firearm violence; the effectiveness of diverse violence prevention and other interventions; and the impact of various gun safety technologies. And the RAND analysis calls for research on the effects of specific firearm policies, such as whether background checks that investigate all types of mental health histories do reduce gun injuries, suicides, and homicides and whether raising the minimum age for purchasing firearms (to 21 years old) reduces firearm suicides among youth.

      Without science to drive firearm policy development and implementation, we risk inventing policies based on personal ideology or intuition. If we are serious that gun violence is a major public threat, then let's rise to the moment and take the next science-based steps.

Who Holds the Power?

[These excerpts are from a book review by Sean P. Cornelius in the March 9, 2018, issue of Science.]

      What was the cause of Donald Trump’s stunning victory over Hillary Clinton in the 2016 U.S. presidential election? Was it the peculiarities of the electoral college? Voter resistance to three-term rule by a single party? Anxiety about illegal immigration?

      As Niall Ferguson explains in The Square and the Tower, the answer lies largely in one word: networks. Specifically, without the cyber infrastructure that facilitated Russian interference, the “alt-right” networks that churned out memes and “fake news,” and the social media that gave them wing, history may have turned out very differently….

      The book’s enigmatic title evokes the heart of the archetypical medieval town—the high tower of the state looming over the noisy public square below. It’s an apt metaphor for Ferguson's central point: Networks have always been with us, and their interaction with hierarchies has catalyzed some of the most momentous events in history.

      Effective networks can topple hierarchies, as shown in Luther's Reformation against the Catholic church. But under the right circumstances, the tower can cast its shadow over the square anew. Look no further than the age of empire and colonialism that lasted from Napoleon’s defeat to the First World War.

      The study of networks can be traced to the work of 18th-century mathematician Leonhard Euler. It was Euler who used the mathematical language of graphs to solve a puzzle that vexed the citizens of Königsberg: whether it was possible to walk through the town crossing each of its seven bridges exactly once. (No.)….

      The Square and the Tower offers an enthralling “reboot” of history from a novel perspective, spanning antiquity to the present day. Ferguson, at once insightful and droll, builds his case meticulously. And, like the best historians, he always pauses to learn from the past and anticipate the future. If only for this reason, the book is well worth a read.

      After all, we live in a time when networks appear ascendant. Smartphone usage has penetrated deep into the developing world, Twitter has galvanized revolutions in Tunisia and Egypt, and cryptocurrency has (at times) rivaled Fortune 500 companies in market capitalization. Surely this all signifies the dawn of a new era, the inexorable, final triumph of networks over the ossified hierarchies of the past? If history is any guide, don’t count on it.

Slow Coolant Phaseout Could Worsen Warming

[These excerpts are from an article by April Reese in the March 9, 2018, issue of Science.]

      In the summer of 2016, temperatures in Phalodi, an old caravan town on a dry plain in northwestern India, reached a blistering 51°C—a record high during a heat wave that claimed more than 1600 lives across the country. Wider access to air conditioning (AC) could have prevented many deaths—but only 8% of India’s 249 million households have AC….As the nation’s economy booms, that figure could rise to 50% by 2050….And that presents a dilemma: As India expands access to a life-saving technology, it must comply with international mandates—the most recent imposed just last fall—to eliminate coolants that harm stratospheric ozone or warm the atmosphere.

      “Growing populations and economic development are exponentially increasing the demand for refrigeration and air conditioning,” says Helena Molin Valdes, head of the United Nations’ (UN’s) Climate & Clean Air Coalition Secretariat in Paris. “If we continue down this path,” she says, “we will put great pressure on the climate system.” But a slow start to ridding appliances of the most damaging compounds, hydrofluorocarbons (HFCs), suggests that the pressure will continue to build. HFCs are now “the fastest-growing [source of greenhouse gas] emissions in every country on Earth,” Molin Valdes says.

      HFCs, already widely used in the United States and other developed countries, are up-and-coming replacements for hydrochlorofluorocarbons (HCFCs) found today in most AC units and refrigerators in India and other developing nations. HCFCs are themselves replacements for chlorofluorocarbons (CFCs), ozone-destroying chemicals banned under the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer. But HCFCs are potent greenhouse gases, as well as a threat to ozone, and they are now being phased out under a 2007 amendment to the protocol. Developed countries are to ditch them by 2020; developing countries have until 2030.

      To meet those deadlines, manufacturers have turned to HFCs, which do not destroy ozone. But they are a serious climate threat. The global warming potency of HFC-134a, commonly used in vehicle AC units, is 1300 times that of carbon dioxide. Clamping down on HFCs, a 2014 analysis found, could avoid a full 0.5°C of future warming.

      As with the HCFC phaseout, developed countries agreed to make the first move: They must begin abandoning the production and consumption of HFCs next year and achieve an 85% reduction by 2036….

      Some climate experts are more hopeful, pointing out that developing countries have an opportunity to bypass HFCs altogether. “The alternative when developed countries phased out HCFCs was HFCs. But developing countries are in a different position: They’re at the beginning of phasing out HCFCs and can leap directly past HFCs” to benign alternatives, says Nathan Borgford-Parnell, regional assessment initiative coordinator for the UN’s Climate & Clean Air Coalition.

      India is crafting a National Cooling Action Plan that aims to do just that. It will include better city planning and building design, and it will embrace novel coolants, says Stephen Andersen of the Institute for Governance & Sustainable Development in Washington, D.C., who helped develop the plan.

      Meanwhile, six AC manufacturers in India have already begun “leapfrogging” to hydrocarbon-based coolants such as R-290— refrigerant-grade propane—that have lower warming potential….

Health Security’s Blind Spot

[These excerpts are from an article by Seth Berkley in the March 9, 2018, issue of Science.]

      The severity of this year's influenza virus is a reminder of the daunting task facing the global health community as it struggles to prevent infectious diseases from sparking deadly epidemics. Today, yellow fever and cholera continue to spread in Africa, while Brazil is in the midst of a major yellow fever outbreak. It was only recently that Zika virus and Ebola virus epidemics were in the headlines. The world needs to harness every resource and tool in the battle to catch outbreaks before they catch us. Prevention is always the first line of defense, and nations must maintain vigilant surveillance—and yet, effective and affordable, quick and definitive diagnostics are absent in the countries where they are most needed. This represents one of our most serious global health security blind spots.

      During the 2014 Ebola epidemic in West Africa, the first cases were initially misdiagnosed as cholera, and then later as Lassa fever on the basis of clinical symptoms. It took nearly 3 months before blood samples sent to Europe finally identified the disease as Ebola, during which time it was allowed to spread. Similarly, in Nigeria, a lack of rapid diagnostics is making it difficult to get ahead of the current yellow fever outbreak with targeted vaccination. Throughout 2016 and the first 8 months of 2017, Nigerian laboratories were unable to carry out tests on almost all suspected cases of yellow fever, owing to a shortage of chemicals needed for those diagnostics. When these reagents eventually became available last fall, yellow fever had spread to multiple states. As of last month, there were more than 350 suspected yellow fever cases across 16 states and 45 deaths. The world’s poorest countries simply cannot equip and maintain their limited laboratory facilities.

      But the problem is not just how well-stocked laboratories are, it’s also how quickly and reliably they can respond. For yellow fever, whenever lab tests are positive or inconclusive in Africa, samples are sent to a Regional Reference Laboratory for confirmation. For the whole of Africa there is just one such facility, in Dakar, Senegal. Even under the best conditions, these lab tests are expensive and take at least a month. What’s more, about 40% of samples found to be positive by Nigerian national laboratories have tested negative in Senegal, creating uncertainty about the reliability of the test….

      Ultimately, to achieve sustainable global epidemic preparedness, we need to stimulate the development of cutting-edge diagnostic technologies—both for laboratories and for use in the field in remote locations—and make them available and affordable in low-income countries….Early detection through reliable, available, and efficient testing is essential to stopping outbreaks before they spread. With many diseases presenting similar first symptoms, it’s all too easy to get a diagnosis wrong and potentially miss an outbreak. And given the ease and speed at which pathogens can now travel in the modern urban-dense global village, any delay in diagnosis will inevitably and increasingly be measured in lives lost.

Fever Dilemma

[This excerpt is from an article by Gretchen Vogel in the March 9, 2018, issue of Science.]

      The toddler on her mother's lap is listless, her eyes dull. She has a fever, little appetite, and a cough. Her journey to the health clinic took an hour by bush taxi, and she had to wait two more hours to be examined. When it's finally her turn, the nurse practitioner pricks her finger and blots a drop of blood onto a rapid diagnostic test (RDT) for malaria. In 15 minutes the answer is clear: The child has malaria. She receives antimalarial drugs, which will most likely vanquish the parasites from her bloodstream within days, and she is sent home to recover.

      If the test is negative, however, things get complicated. If malaria isn't making her sick, what is? Is it pneumonia, typhoid, or Lassa fever? Meningitis? Or more than one infection at the same time? If she has bacterial meningitis, the right antibiotic could save her life. If she has Lassa fever, antibiotics won’t help.

      Until recently, nearly every child with a temperature above 38.5°C was treated for malaria in regions where the disease is endemic. It was one of the most common and deadliest causes of fever, and there was no easy way to rule it out: A definitive diagnosis required a microscope and a skilled technician—unavailable in many places. To be safe, health workers were trained to treat most fevers with a dose of antimalarial medicine. Public health campaigns helped spread the word: If your child has a fever, get them treated for malaria!

      In the past decade, malaria RDTs—which use antibodies to detect the parasite’s proteins—have transformed the landscape. The tests help reduce unnecessary prescriptions for malaria medicines, but they have exposed a new problem: the previously hidden prevalence of “negative syndrome”—feverish kids who don’t have malaria. Even in places with the highest rates of malaria, only about half of fevers are actually due to the disease. In many places, that figure is 10% or less. In 2014, the World Health Organization (WHO) estimated that 142 million suspected malaria cases tested negative worldwide.

      Negative test results pose a dilemma for health care workers, who in remote areas may be community volunteers with minimal training. When their one diagnostic test comes up negative, they are left empty-handed, with nothing to offer except some advice: Return if the child gets sicker….

Materials’ Quantum Leap

[This brief article by David Rotman is in the March/April 2018 issue of Technology Review.]

      The prospect of powerful new quantum computers comes with a puzzle. They’ll be capable of feats of computation inconceivable with today’s machines, but we haven't yet figured out what we might do with those powers.

      One likely and enticing possibility: precisely designing molecules.

      Chemists are already dreaming of new proteins for far more effective drugs, novel electrolytes for better batteries, compounds that could turn sunlight directly into a liquid fuel, and much more efficient solar cells.

      We don’t have these things because molecules are ridiculously hard to model on a classical computer. Try simulating the behavior of the electrons in even a relatively simple molecule and you run into complexities far beyond the capabilities of today’s computers.

      But it’s a natural problem for quantum computers, which instead of digital bits representing 1s and 0s use “qubits” that are themselves quantum systems. Recently, IBM researchers used a quantum computer with seven qubits to model a small molecule made of three atoms.

      It should become possible to accurately simulate far larger and more interesting molecules as scientists build machines with more qubits and, just as important, better quantum algorithms.

Genetic Fortune-Telling

[This brief article by Antonio Regalado is in the March/April 2018 issue of Technology Review.]

      One day, babies will get DNA report cards at birth. These reports will offer predictions about their chances of suffering a heart attack or cancer, of getting hooked on tobacco, and of being smarter than average.

      The science making these report cards possible has suddenly arrived, thanks to huge genetic studies—some involving more than a million people.

      It turns out that most common diseases and many behaviors and traits, including intelligence, are a result of not one or a few genes but many acting in concert. Using the data from large ongoing genetic studies, scientists are creating what they call “polygenic risk scores.”

      Though the new DNA tests offer probabilities, not diagnoses, they could greatly benefit medicine. For example, if women at high risk for breast cancer got more mammograms and those at low risk got fewer, those exams might catch more real cancers and set off fewer false alarms.

      Pharmaceutical companies can also use the scores in clinical trials of preventive drugs for such illnesses as Alzheimer’s or heart disease. By picking volunteers who are more likely to get sick, they can more accurately test how well the drugs work.

      The trouble is, the predictions are far from perfect. Who wants to know they might develop Alzheimer’s? What if someone with a low risk score for cancer puts off being screened, and then develops cancer anyway?

      Polygenic scores are also controversial because they can predict any trait, not only diseases. For instance, they can now forecast about 10 percent of a person’s performance on IQ tests. As the scores improve, it’s likely that DNA IQ predictions will become routinely available. But how will parents and educators use that information?

      To behavioral geneticist Eric Turkheimer, the chance that genetic data will be used for both good and bad is what makes the new technology “simultaneously exciting and alarming.”

Zero Carbon Natural Gas

[These excerpts are from an article by James Temple in the March/April 2018 issue of Technology Review.]

      The world is probably stuck with natural gas as one of our primary sources of electricity for the foreseeable future. Cheap and readily available, it now accounts for more than 30 percent of US electricity and 22 percent of world electricity. And although it's cleaner than coal, it’s still a massive source of carbon emissions.

      A pilot power plant just outside Houston, in the heart of the US petroleum and refining industry, is testing a technology that could make clean energy from natural gas a reality. The company behind the 50-megawatt project, Net Power, believes it can generate power at least as cheaply as standard natural-gas plants and capture essentially all the carbon dioxide released in the process.

      If so, it would mean the world has a way to produce carbon-free energy from a fossil fuel at a reasonable cost. Such natural-gas plants could be cranked up and down on demand, avoiding the high capital costs of nuclear power and sidestepping the unsteady supply that renewables generally provide….

      A key part of pushing down the costs depends on selling that carbon dioxide. Today the main use is in helping to extract oil from petroleum wells. That’s a limited market, and not a particularly green one. Eventually, however, Net Power hopes to see growing demand for carbon dioxide in cement manufacturing and in making plastics and other carbon-based materials.

      Net Power’s technology won’t solve all the problems with natural gas, particularly on the extraction side. But as long as we’re using natural gas, we might as well use it as cleanly as possible. Of all the clean-energy technologies in development, Net Power’s is one of the furthest along to promise more than a marginal advance in cutting carbon emissions.

A Contraceptive Gel for Men Is about to Go on Trial

[These excerpts are from an article by Emily Muller in the March/April 2018 issue of Technology Review.]

      After more than a decade of work, government researchers in the US are ready to test an unusual birth control method for men—a topical gel that could prevent the production of sperm.

      And no, gentlemen, you don’t rub it on your genitals.

      The clinical trial, which begins in April and will run for about four years, will be the largest effort in the US to test a hormonal form of birth control for men.

      Currently, the most effective options men have for birth control are condoms or a vasectomy….

      The new gel contains two synthetic hormones, testosterone and a form of progestin. Progestin blocks the testes from making enough testosterone for normal sperm production. The replacement testosterone is needed to counteract the hormone imbalances the progestin causes but won’t make the body produce sperm.

      More than 400 couples will participate in the study, which will take place at sites in the US, the UK, Italy, Sweden, Chile, and Kenya. Men in the trial will take home a pump bottle of the gel and rub about half a teaspoon of it on their upper arms and shoulders every day. The gel dries within a minute….

      The gel can suppress sperm levels for about 72 hours, so if men forget a dose, “there is a bit of forgiveness….”

      Men will use the gel for at least four months while their partners also use some form of female contraception. Researchers will monitor the men's sperm levels, which need to drop to less than one million per milliliter to effectively prevent pregnancy…Once the sperm count is low enough, the women will go off their birth control. The couples will then use the contraceptive gel as their only form of daily birth control for a year….

      Still, the question is: will men use it…?

      Men’s attitudes on their role in contraception vary by country, but a 2010 survey indicated that at least 25 percent of men worldwide would consider using a hormonal contraceptive.

Seeking Resilience in Marine Ecosystems

[These excerpts are from an article by Emily S. Darling and Isabelle M. Cote in the March 2, 2018, issue of Science.]

      Resilience is a popular narrative for conservation and provides an opportunity to communicate optimism that ecosystems can recover and rebound from disturbances. A resilience lens also reinforces the need for continued conservation investments, even in degraded ecosystems. It is probably for these reasons that resilience has become a conceptual cornerstone in the management of tropical coral reefs, which are one of the ecosystems most vulnerable to climate change.

      The term “resilience” captures two dynamic processes: the ability of ecosystems to resist and absorb disturbance, and their ability to recover. Recent observations suggest that coral reef recovery is increasingly unlikely. The time windows for reefs to recover from consecutive mass bleaching events have shrunk from 25 to 30 years a few decades ago to just 6 years—far shorter than the 10 to 15 years that even the fastest-growing corals need to bounce back from catastrophic mortality. There has been some recovery from recent bleaching events on reefs that are isolated or deeper and more structurally complex. Understanding more about how to catalyze such recovery processes is important. However, if climate change continues unabated, resilience may come not from the ability of coral reefs to recover but from their ability to resist.

      The idea of protecting resistant species or resistant areas is not new, but for various reasons, it is not often put into practice. To increase ecosystem resistance to climate change, there are two options: to increase intrinsic resistance, provided by traits that allow species to cope with a changing climate, or to increase extrinsic resistance, provided by locations that are less vulnerable to climate disturbances.

      Individuals or species that survive extreme climate events can have traits that underpin a general ability to cope or adapt to new environmental conditions. For corals, resistant traits include tolerance to warmer and acidified waters, salinity fluctuations, herbicides, diseases, and storms. These traits might be associated with aspects of the coral microbiome, or the ability of corals to draw on energy reserves or flexible feeding strategies. Natural populations of “super corals” that are tolerant of stressful conditions might arise after repeated bleaching events or in more variable thermal environments, such as reefs exposed to tidal heat pulses….

      Whether marine ecosystems resist, recover, restructure, or vanish hinges on how extreme future climate change is. It is almost certain that most of today’s coral reefs will be transformed beyond recognition in the coming decades. Shifts in coral communities toward smaller, weedier species and dominance by other groups, such as algae and sponges, will alter ecosystem functioning and reduce the services that coral reefs provide to society.

      These ecological shifts will force millions of people to adapt and change how they use and depend on the fisheries, tourism, and coastal protection provided by coral reefs. The political will necessary to improve the resilience of coral reefs or marine ecosystems might or might not materialize in time. Regardless, any fight for the remaining “reefs of hope” can, and must, occur alongside improving the resilience of people and communities to help dampen the coming climate shocks.

Energizing Global Environmental Cooperation

[These excerpts are from an article by Michael Blanding in the Winter 2018 issue of Spectrum.]

      China’s rapid economic growth during the 2000s was largely powered by coal, which was causing severe local air pollution and contributing to the global rise in climate-warming greenhouse gases. Valerie Karplus began to wonder: “How could we innovate in a way that would address these two interlinked challenges at the same time?”

      …Karplus’s team developed a detailed model of the country’s economy and energy system that projected the potential long-term impact of different policy initiatives. The team’s results have shown the value of implementing national climate targets by pricing CO2, a system the Chinese government began piloting in 2013.

      At the time, the US and China were in a major stalemate over which nation would lower its emissions first. While the US claimed its own actions would be futile without addressing China’s higher and growing emissions, China maintained that developed nations like the US should take the lead in cutting carbon. Key to arguments on both sides were projections that China’s fossil energy usage would climb with no end in sight. In contrast, results from the MIT-Tsinghua model, shared with policy makers in both nations, suggested China’s energy-related CO2 emissions could peak by 2030 without undermining its economic development.

      The US and China made a historic agreement in 2014 that both sides would limit climate-related emissions—setting the stage for the larger global agreement in Paris in 2015….

Framing Uncertainty

[These excerpts are from an article by Steve Nadis in the Winter 2018 issue of Spectrum.]

      The barrage of hurricanes that in 2017 pummeled the Caribbean islands, United States, and even Ireland—fueled in part by unusually balmy Atlantic Ocean temperatures—provided a glimpse of consequences that may accompany a warming planet. If left unchecked, climate change will act as a “threat-multiplier,” increasing the probability and intensity of such extreme weather events, according to Kerry Emanuel….The situation is exacerbated, Emanuel noted in a Washington Post editorial last fall, by policies that have enabled the number of people living in vulnerable coastal zones to triple, worldwide, since 1970.

      How can we get a handle on both the natural and manmade facets of potential climate-related disasters? Fortunately, researchers…have, for nearly a quarter of a century, been developing a comprehensive tool for dissecting the full range of impacts of climate on people and people on climate….

      “Projecting climate change into the future requires an understanding of both the natural environment and of how human development occurs,” explains Ronald Prinn….

      The Joint Program’s 2016 Outlook report, for example, showed that even if all nations met their pledges to the 2015 Paris climate accord, the Earth's average temperature rise by 2100 would still exceed the established goal of 2 degrees Celsius or less. The authors then presented a set of emissions scenarios that could satisfy the 2 degrees C target, though complicated tradeoffs would arise. Increased energy production from the wind and sun means more land devoted to turbines and solar panels. An enhanced reliance on biomass energy could similarly take up land and water that might otherwise have been reserved for growing crops. Obtaining cooling water for power plants also gets harder as river temperatures rise.

      “It’s becoming ever more apparent that things we used to study separately are all interconnected,” says Prinn. “Meanwhile, environmental changes are occurring far faster than expected, which is another way of saying the need for the IGSM is now greater than ever.”

Squirrels with a Rainy Day Fund

[This brief article by Mitch Leslie is in the March 2, 2018, issue of Science.]

      Scurrying around the South Dakota prairie, 13-lined ground squirrels mark the approach of winter by bingeing. By the time a squirrel holes up to hibernate, its weight will have soared by about 40%, thanks to extra fat that will tide the creature over until spring.

      During droughts, migrations, bleak winters, and other challenges, organisms often face times when resources are scarce. To get by, the ground squirrel, like many other creatures, stockpiles resources to use later. It can gain more than 2% of its body weight in a single day as it gorges on seeds, grasshoppers, and other delicacies.

      But the tactic has downsides. A roly-poly rodent is easy prey for a hawk or coyote. The rainy day fund can also run out prematurely. So once a squirrel is nice and tubby, it enters hibernation, slashing its energy expenditure by 90%. Its body temperature drops to just above freezing and its heart rate falls to as low as 5 beats per minute, down from the usual 350 to 400.

      Packing on the fat requires metabolic and behavioral adjustments. But somehow, the squirrel dodges the health problems that plague obese people. Although it develops some of the metabolic defects of type 2 diabetes, the animal isn’t sick. And by spring, it is lean and spry and ready to begin the cycle again.

Resilience by Regeneration

[This brief article by Elizabeth Pennisi is in the March 2, 2018, issue of Science.]

      Humans should envy the axolotl. Our powers of regeneration are limited. Broken bones knit, wounds heal, and large parts of the liver can regenerate, but that’s about it. But the axolotl—a large salamander also called the Mexican walking fish because it looks like a 20-centimeter eel with stumpy legs—can replace an entire missing limb or even its tail, which means regrowing the spinal cord, backbone, and muscles. About 30 research teams are probing how these salamanders do it. In the axolotl, they’ve found, various tissues work together to detect limb loss and coordinate regrowth. In the process, the animals reactivate the same genetic circuits that guided the formation of those structures during embryonic development, causing generalist stem cells to specialize.

      Axolotls are only one of several regenerators in the animal kingdom. Flatworms called planarians are even more resilient—able to surge back after losing 90% of their bodies. One simple fragment of these 2-centimeter-long aquatic worms can rejuvenate the brain, skin, gut, and all the other functional organs. Again, stem cells are key, and a special set of genes active in muscles tells those stem cells what to do, activating growth and specialization genes in the right cells at the right time. So the planarian can rebuild itself almost from scratch, whereas the axolotl can rebuild only if the main body axis is intact. This year, researchers took another step toward detailing the molecules underlying regeneration by sequencing the genomes of those two species. The ultimate hope: One day, we’ll be able to coax injured humans to execute the same repairs.

Fish that Switch Sex to Survive

[This brief article by Elizabeth Pennisi is in the March 2, 2018, issue of Science.]

      Fish are masters of reproductive resilience. About 450 species switch sexes over their lifetimes to maximize their number of offspring. The fish do so by undergoing hormonal changes that transform their organs from those of one sex to the other. Patterns of sex switching vary by species. Big females produce more eggs than little females, so for some species, such as clownfish, it’s better to be a male early in life while still runty and then switch to a female later on. But among males that fight each other for females or territory—such as groupers, sea breams, and porgies—being a too-small male can mean no offspring at all. In that case, it’s better to stay a small female instead.

      Now, this age-old strategy is allowing fish like the sea bream to adapt to a modern challenge that also disrupts the sex balance: overfishing. Fishers favor the biggest catch. Because one sex is usually bigger than the other, the bigger sex risks being fished out. But researchers have found that sea breams—flavorful, reddish fish common in warmer Atlantic Ocean coastal waters—are ready. Removing big males prompts earlier-than-usual sex changes in some females, so the sex balance is preserved. Still, it’s more a short-term strategy than a long-term solution, researchers say. The fish are switching sex at younger ages, so females don’t have a chance to grow big. That trend translates into fewer offspring and a shrinking population. That resilience strategy keeps them reproducing for now—but the fish can’t save themselves all on their own.

Asia’s Hunger for Sand Takes Toll on Ecology

[These excerpts are from an article by Christina Larson in the March 2, 2018, issue of Science.]

      Across Asia, rampant extraction of sand for construction is eroding coastlines and scouring waterways. “For a resource we think is infinite, we are beginning to realize that it’s not,” says Aurora Torres, an ecologist at the German Centre for Integrative Biodiversity Research in Leipzig. “It’s a global concern, but especially acute in Asia, where all trends show that urbanization and the region's big construction boom are going to continue for many years.” And it is taking an environmental toll that scientists are beginning to assess—and environmentalists hope to reduce.

      Already, scientists have linked poorly regulated and often illegal sand removal to declines in seagrasses in Indonesia and in charismatic species such as the Ganges River dolphin and terrapins in India and Malaysia. In eastern China’s Poyang Lake, dredging boats are sucking up tens of millions of tons of sand a year, altering the hydrology of the country’s largest freshwater lake, a way station for migratory birds. Conservation groups are urging governments to crack down. But the political clout of developers means it will be an uphill—and perilous—battle. Last September, for example, two activists with Mother Nature Cambodia who were filming illegal sand dredging off the Cambodian coast were arrested and convicted of “violation of privacy.” They spent several months in jail before being released last month.

      Used to make concrete and glass, sand is an essential ingredient of nearly every modern highway, airport, dam, windowpane, and solar panel. Although desert sand is plentiful, its wind-tumbled particles are too smooth—and therefore not cohesive enough—for construction material. Instead, builders prize sand from quarries, coastlines, and riverbeds. “The very best sand for construction is river sand; it's the right particle size and shape….”

      Between 1994 and 2012, global cement production—a proxy for concrete use—tripled, from 1.37 billion to 3.7 billion tons, driven largely by Asian construction….sand mining—on an industrial scale and by individual operators—“greatly exceeds natural renewal rates” and “is increasing exponentially.”

      ….colleagues explain how sand mining has driven declines of seagrass meadows off of Indonesia. Sediment plumes stirred up by the dredging block sunlight, impeding photosynthesis…The meadows nourish several species, including the dugong, which is in decline….

      Another sand mining victim is the southern river terrapin, a critically endangered turtle in Southeast Asia….

      Also under siege, in Bangladesh and India, is the northern river terrapin. “Sand mining is one of the biggest problems and reasons why they are so endangered today….When the sand banks are gone, the [terrapin] is gone.” Other creatures directly affected by river sand mining, scientists say, are the gharial—a rare crocodile found in northern India—and the Ganges River dolphin.

      Poyang Lake, a key wintering ground on the East Asian-Australasian Flyway, hosts dozens of migratory species, including almost all of the 4000 or so surviving Siberian cranes. But sand dredging campaigns in the middle Yangtze Basin have expanded rapidly since the early 2000s, when such activities were banned on sections of the lower Yangtze….

      “We are not saying we need to stop sand mining altogether. We are saying we need to minimize the impacts,” says Jack Liu, a biologist at Michigan State University in East Lansing who is spearheading an effort to assemble a comprehensive picture of the damage. Construction standards should be raised to extend building longevity, he says, and building materials should be recycled. Those sand grains on the beach may not be innumerable after all.

How to Get Wyoming Wind to California

[These excerpts are from an article by James Temple in the March/April 2018 issue of Technology Review.]

      Several miles south of Rawlins, Wyoming, on a cattle ranch east of the Continental Divide, construction crews have begun laying down roads and pads that could eventually underpin up to 1,000 wind turbines. Once complete, the Chokecherry and Sierra Madre project could generate around 12 million megawatt-hours of electricity annually, making it the nation’s largest wind farm.

      But how do you get that much wind power to where it's actually needed?

      The Denver-based company behind the project hopes to erect a series of steel transmission towers that would stretch a high-voltage direct-current transmission (HVDC) line 730 miles across the American West. It could carry as much as 3,000 megawatts of Wyoming wind power to the electricity markets of California, Nevada, and Arizona. With the right deals in place, the transmission line could deliver solar-generated electricity back as well, balancing Wyoming’s powerful late-afternoon winds with California’s bright daytime sun….

      Transmission isn’t sexy. It’s basic infrastructure: long wires and tall towers. But a growing body of studies concludes that building out a nationwide network of DC transmission lines could help enable renewable sources to supplant the majority of US energy generation, offering perhaps the fastest, cheapest, and most efficient way of slashing greenhouse-gas emissions.

      Developing these transmission lines, however, is incredibly time-consuming and expensive. The TransWest project was first proposed in 2005, but the developers will be lucky to secure their final permits and begin moving dirt at the end of next year.

      There’s no single agency in charge of overseeing or ushering along such projects, leaving companies to navigate a thicket of overlapping federal, state, county, and city jurisdictions—every one of which must sign off for a project to begin. As a result, few such transmission lines ever get built.

      Direct current, in which electric charges constantly flow in a single direction, is an old technology. DC and AC—alternating current—were the subject of one of the world's first technology standards battles, pitting Thomas Edison against his former protégé Nikola Tesla in the “War of the Currents” starting in the 1880s.

      AC won this early war, mainly because, thanks to the development of transformers, its voltage could be cranked up for long-distance transmission and stepped down for homes and businesses.

      But a series of technological improvements have substantially increased the functionality of DC, opening up new ways of designing and interconnecting the electricity grid….

      With direct-current lines, grid operators have more options for energy sources throughout the day, allowing them to tap into, say, cheap wind two states away during times of peak demand instead of turning to nearby but more expensive natural-gas plants for a few hours. The fact that regions can depend on energy from distant states for their peak demand also means they don't have to build as much high-cost generation locally.

      A national direct-current grid could also help lower emissions to as much as 80 percent below 1990 levels within 15 years, all with commercially available technology and without increasing the costs of electricity, according to a study published earlier in Nature Climate Change.

      The researchers produced an idealized transmission network that connected 32 nodes across the nation, linking hydroelectric power in the Pacific Northwest, solar in California, wind energy in the Southwest, and nuclear on the East Coast. Simply put, the system balances out the intermittency of renewable energy sources over long distances, meaning there’s always reliable generation somewhere. Being able to tap into it from any corner of the nation lowers the cost of supplying energy at peak demand, reduces the amount of generation required in any single area, minimizes excess generation, and eliminates the need to develop expensive grid-scale storage systems….

      There are already a handful of DC transmission lines in the US and a growing number of proposals, including the New England Clean Power Link, which would transport 1,000 megawatts of renewable power from Canada into New England. Houston's Clean Line Energy has at least a half-dozen proposals in various stages, including the Plains and Eastern Clean Line connecting western Oklahoma to markets in the Southeast, and the Grain Belt Express Clean Line stretching from Kansas to Indiana.

      But all of these are moving through the approvals process at a dawdling pace. The TransWest developers, who have secured permission along the two-thirds of the line's path that lies on federal land since taking over the project in 2008, are still working to finalize approvals from states and private landowners.

      Most developers and energy policy experts say what’s needed to accelerate these projects is a federal authority with greater power to push them through….

Tech Companies Should Stop Pretending AI Won’t Destroy Jobs

[This excerpt is from an article by Kai-Fu Lee in the March/April 2018 issue of Technology Review.]

      The rise of China as an AI super-power isn’t a big deal just for China. The competition between the US and China has sparked intense advances in AI that will be impossible to stop anywhere. The change will be massive, and not all of it good. Inequality will widen. As my Uber driver in Cambridge has already intuited, AI will displace a large number of jobs, which will cause social discontent. Consider the progress of Google DeepMind’s AlphaGo software, which beat the best human players of the board game Go in early 2016. It was subsequently bested by AlphaGo Zero, introduced in 2017, which learned by playing games against itself and within 40 days was superior to all the earlier versions. Now imagine those improvements transferring to areas like customer service, telemarketing, assembly lines, reception desks, truck driving, and other routine blue-collar and white-collar work. It will soon be obvious that half of our job tasks can be done better at almost no cost by AI and robots. This will be the fastest transition humankind has experienced, and we're not ready for it.

      Not everyone agrees with my view. Some people argue that it will take longer than we think before jobs disappear, since many jobs will be only partially replaced, and companies will try to redeploy those displaced internally. But even if true, that won't stop the inevitable. Others remind us that every technology revolution has created new jobs as it displaced old ones. But it’s dangerous to assume this will be the case again.

      Then there are the symbiotic optimists, who think that AI combined with humans should be better than either one alone. This will be true for certain professions—doctors, lawyers—but most jobs won’t fall in that category. Instead they are routine, single-domain jobs where AI excels over the human by a large margin.

      Others think we’ll be saved by a universal basic income. “Take the extra money made by AI and distribute it to the people who lost their jobs,” they say. “This additional income will help people find their new path, and replace other types of social welfare.” But UBI doesn’t address people’s loss of dignity or meet their need to feel useful. It’s just a convenient way for a beneficiary of the AI revolution to sit back and do nothing.

      And finally, there are those who deny that AI has any downside at all—which is the position taken by many of the largest AI companies. It’s unfortunate that AI experts aren’t trying to solve the problem. What’s worse, and unbelievably selfish, is that they actually refuse to acknowledge the problem exists in the first place.

      These changes are coming, and we need to tell the truth and the whole truth. We need to find the jobs that AI can’t do and train people to do them. We need to reinvent education. These will be the best of times and the worst of times. If we act rationally and quickly, we can bask in what’s best rather than wallow in what’s worst.

50 Years Ago: Repeal of Tennessee’s “Monkey Law”

[This excerpt is from an article by Glenn Branch and Ann Reid in the Winter 2017/2018 special edition of Scientific American.]

      Even before the legal defeat of intelligent design in Pennsylvania, evolution’s opponents were already beginning to try yet a different tack: requiring, or more commonly allowing, science teachers to misrepresent evolution as scientifically controversial, often under the rubric of academic freedom. More than 70 such bills have been introduced since 2004, with three enacted, in Mississippi, Louisiana and Tennessee.

      It is unclear to what extent science teachers in those three states take advantage of those laws. But according to a national survey conducted in 2007, about one in eight public high school biology teachers present creationism as scientifically credible in their classrooms, despite the unconstitutionality of the practice. Scientific, educational and legal concerns are often overridden by personal or community attitudes of doubt and denial.

      Indeed, the same survey revealed that six in 10 of the instructors were teaching evolution less than forthrightly—compromising on the accuracy, completeness or rigor of their treatment, often for fear of provoking a creationist backlash. Such fears, sadly, appear to be warranted: more than one in five of the teachers reported experiencing community resistance to their teaching of evolution.

      Fortunately, the treatment of evolution in state science standards is getting better on the whole, which means that textbooks, curricula and, ideally, teachers are following suit. But scientific knowledge and pedagogical know-how are not the only equipment that educators need to teach evolution. They also need the confidence to persist, even in the face of doubt and denial.

      Creationists are as active as ever, with a few even in the bully pulpits afforded by high political office. And the legal rulings that established the obstacles that have so far thwarted attacks on evolution education could be overturned by a reactionary Supreme Court or circumvented by public support of religious schools not subject to constitutional strictures. So the evolution wars are by no means over.

Post-Truth: A Guide for the Perplexed

[These excerpts are from an article by Kathleen Higgins in the Winter 2017/2018 special edition of Scientific American.]

      Post-truth refers to blatant lies being routine across society, and it means that politicians can lie without condemnation. This is different from the cliché that all politicians lie and make promises they have no intention of keeping—this still expects honesty to be the default position. In a post-truth world, this expectation no longer holds.

      This can explain the current political situation in the U.S. and elsewhere. Public tolerance of inaccurate and undefended allegations, non sequiturs in response to hard questions and outright denials of facts is shockingly high. Repetition of talking points passes for political discussion, and serious interest in issues and options is treated as the idiosyncrasy of wonks. The lack of public indignation when political figures claim disbelief in response to growing scientific evidence of the reality of climate change is part of this larger pattern. “Don’t bother me with facts” is no longer a punchline. It has become a political stance. It’s worth remembering that it has not always been this way: the exposure of former U.S. president Richard Nixon’s lies was greeted with outrage….

      When political leaders make no effort to ensure that their “facts” will bear scrutiny, we can only conclude that they take an arrogant view of the public. They take their right to lie as given, perhaps particularly when the lies are transparent. Many among the electorate seem not to register the contempt involved, perhaps because they would like to think that their favored candidate is at least well intentioned and would not deliberately mislead them.

      Much of the public hears what it wants to hear because many people get their news exclusively from sources whose bias they agree with. But contemptuous leaders and voters who are content with hand-waving and entertaining bluster undermine the democratic idea of rule by the people. The irony is that politicians who benefit from post-truth tendencies rely on truth, too, but not because they adhere to it. They depend on most people's good-natured tendency to trust that others are telling the truth, at least the vast majority of the time.

      Scientists and philosophers should be shocked by the idea of post-truth, and they should speak up when scientific findings are ignored by those in power or treated as mere matters of faith. Scientists must keep reminding society of the importance of the social mission of science—to provide the best information possible as the basis for public policy. And they should publicly affirm the intellectual virtues that they so effectively model: critical thinking, sustained inquiry and revision of beliefs on the basis of evidence. Another line from Nietzsche is especially pertinent now: “Three cheers for physics!—and even more for the motive that spurs us toward physics—our honesty!”

Misleading Semantics of Creationism

[This short essay by John Rennie is in the Winter 2017/2018 special edition of Scientific American.]

      “Creation science” is a contradiction in terms. A central tenet of modern science is methodological naturalism—it seeks to explain the universe purely in terms of observed or testable natural mechanisms. Thus, physics describes the atomic nucleus with specific concepts governing matter and energy, and it tests those descriptions experimentally. Physicists introduce new particles, such as quarks, to flesh out their theories only when data show that the previous descriptions cannot adequately explain observed phenomena. The new particles do not have arbitrary properties, moreover—their definitions are tightly constrained, because the new particles must fit within the existing framework of physics.

      In contrast, intelligent-design theorists invoke shadowy entities that conveniently have whatever unconstrained abilities are needed to solve the mystery at hand. Rather than expanding scientific inquiry, such answers shut it down. (How does one disprove the existence of omnipotent intelligences?)

      Intelligent design offers few answers. For instance, when and how did a designing intelligence intervene in life’s history? By creating the first DNA? The first cell? The first human? Was every species designed, or just a few early ones? Proponents of intelligent-design theory frequently decline to be pinned down on these points. They do not even make real attempts to reconcile their disparate ideas about intelligent design. Instead they pursue argument by exclusion—that is, they belittle evolutionary explanations as farfetched or incomplete and then imply that only design-based alternatives remain.

      Logically, this is misleading: even if one naturalistic explanation is flawed, it does not mean that all are. Moreover, it does not make one intelligent-design theory more reasonable than another. Listeners are essentially left to fill in the blanks for themselves, and some will undoubtedly do so by substituting their religious beliefs for scientific ideas.

      Time and again, science has shown that methodological naturalism can push back ignorance, finding increasingly detailed and informative answers to mysteries that once seemed impenetrable: the nature of light, the causes of disease, how the brain works. Evolution is doing the same with the riddle of how the living world took shape. Creationism, by any name, adds nothing of intellectual value to the effort.

Reason (and Science) for Hope

[These excerpts are from a book review by Michael Shermer in the February 23, 2018, issue of Science.]

      For most of us, it is easier to imagine the world going to hell in a handbasket than it is to picture a rosy future. We can readily conjure up incremental improvements such as increased Internet bandwidth, improved automobile navigation systems, or another year added to our average life span. But what really gets our imaginations roiling are images of nuclear Armageddon, robots run amok, and terrorists in trucks mowing down pedestrians. The reason for this asymmetry is an evolved feature of human cognition called the negativity bias, a phenomenon explored in depth by the Harvard psychologist and linguist Steven Pinker in his magisterial new book, Enlightenment Now.

      Pinker begins with the Enlightenment because the scientists and scholars who drove that movement took the methods developed in the Scientific Revolution and applied them to solving problems in all fields of knowledge. “Dare to know” was Immanuel Kant’s oft-quoted one-line summary of the age he helped launch, and with knowledge comes power over nature, starting with the second law of thermodynamics, which Pinker fingers as the cause of our natural-born pessimism.

      In the world in which our ancestors evolved the cognition and emotions that we inherited, entropy dictated that there were more ways for things to go bad than good. Thus, our modern psychology is tuned to a world that was more dangerous than it is today, he argues. Because our ancestors’ survival depended on vigilantly scanning for negative stimuli, good experiences (e.g., a pain-free day) often go unnoticed, as we attempt to respond to the failures that could spell the end of our existence. But instead of interpreting accidents, plagues, famine, and disease as the wrath of angry gods, vengeful demons, or bewitching women like our medieval ancestors did, we enlightened thinkers now know that’s just entropy taking its course.

      In 75 charts and graphs and thousands of statistics, Pinker documents how we have systematically applied knowledge to problems in order to propel ourselves to unimaginable levels of progress. Since the Enlightenment, he reveals, more people live longer, healthier, happier, and more meaningful lives filled with enriching works of art, music, literature, science, technology, and medicine. This is not to mention improvements to food, drink, clothes, transportation, and houses, nor the ever-increasing ease of international travel or instant access to much of the world’s knowledge that many of us enjoy today.

      Exceptions are no counter to Pinker’s massive data set. Follow the trend lines, not the headlines, he urges. For example, although military engagements make the news daily, “[w]ar between countries is obsolescent, and war within countries is absent from five-sixths of the world’s surface.” “In most times and places, homicides kill far more people than wars,” he adds, “and homicide rates have been falling as well.”

      We’re not just less likely to fall victim to an intentional death. On the whole, we are safer than ever. “Over the course of the 20th century, Americans became 96 percent less likely to be killed in a car accident, 88 percent less likely to be mowed down on the sidewalk, 99 percent less likely to die in a plane crash, 59 percent less likely to fall to their deaths, 92 percent less likely to die by fire, 90 percent less likely to drown, 92 percent less likely to be asphyxiated, and 95 percent less likely to be killed on the job.”

      Each area of improvement has specific causes that Pinker carefully identifies, but he attributes our overall progress to “Enlightenment humanism,” a secular worldview that values science and reason over superstition and dogma. It is a heroic journey, Pinker concludes with rhetorical flair: “We are born into a pitiless universe, facing steep odds against life-enabling order and in constant jeopardy of falling apart.” Nevertheless, “human nature has also been blessed with resources that open a space for a kind of redemption. We are endowed with the power to combine ideas recursively, to have thoughts about our thoughts. We have an instinct for language, allowing us to share the fruits of our experience and ingenuity. We are deepened with the capacity for sympathy—for pity, imagination, compassion, commiseration.”

      This is our story, not vouchsafed to any one tribe but “to any sentient creature With the power of reason and the urge to persist in its being.” With this fact, there is reason (and science) for hope.


[These excerpts are from an article by Michael Shermer in the March 2018 issue of Scientific American.]

      …Far from lurching backward, Pinker notes, today’s fact-checking ethic “would have served us well in earlier decades when false rumors regularly set off pogroms, riots, lynchings, and wars (including the Spanish-American War in 1898, the escalation of the Vietnam War in 1964, the Iraq invasion of 2003, and many others).” And contrary to our medieval ancestors, he says, “few influential people today believe in werewolves, unicorns, witches, alchemy, astrology, bloodletting, miasmas, animal sacrifice, the divine right of kings, or supernatural omens in rainbows and eclipses.”

      Ours is called the Age of Science for a reason, and that reason is reason itself, which in recent decades has come under fire by cognitive psychologists and behavioral economists who assert that humans are irrational by nature and by postmodernists who aver that reason is a hegemonic weapon of patriarchal oppression. Balderdash! Call it “factiness,” the quality of seeming to be factual when it is not. All such declarations are self-refuting, inasmuch as “if humans were incapable of rationality, we could never have discovered the ways in which they were irrational, because we would have no benchmark of rationality against which to assess human judgment, and no way to carry out the assessment,” Pinker explains. “The human brain is capable of reason, given the right circumstances; the problem is to identify those circumstances and put them more firmly in place.”

      Despite the backfire effect, in which people double down on their core beliefs when confronted with contrary facts to reduce cognitive dissonance, an “affective tipping point” may be reached when the counterevidence is overwhelming and especially when the contrary belief becomes accepted by others in one’s tribe. This process is helped along by “debiasing” programs in which people are introduced to the numerous cognitive biases that plague our species, such as the confirmation bias and the availability heuristic, and the many ways not to argue: appeals to authority, circular reasoning, ad hominem and especially ad Hitlerum. Teaching students to think critically about issues by having them discuss and debate all sides, especially articulating their own and another’s position, is essential, as is asking, “What would it take for you to change your mind?” This is an effective thinking tool employed by Portland State University philosopher Peter Boghossian.

      “However long it takes,” Pinker concludes, “we must not let the existence of cognitive and emotional biases or the spasms of irrationality in the political arena discourage us from the Enlightenment ideal of relentlessly pursuing reason and truth.” That’s a fact.

Paleo Diets, GMOs and Food Taboos

[These excerpts are from an article by Michael Shermer in the Winter 2017/2018 special edition of Scientific American.]

      …In its essence, the see-food diet was the first so-called Paleo diet, not today’s popular fad, premised on the false idea that there is a single set of natural foods—and a correct ratio of them—that our Paleolithic ancestors ate. Anthropologists have documented a wide variety of foods consumed by traditional peoples, from the Masai diet of mostly meat, milk and blood to New Guineans’ fare of yams, taro and sago. As for food ratios, according to a 2000 study entitled “Plant-Animal Subsistence Ratios and Macronutrient Energy Estimations in Worldwide Hunter-Gatherer Diets,” published in the American Journal of Clinical Nutrition, the range for carbohydrates is 22 to 40 percent, for protein 19 to 56 percent, and for fat 23 to 58 percent.

      And what constitutes “natural” anyway? Humans have been genetically modifying foods through selective breeding for more than 10,000 years. Were it not for these original genetically modified organisms—and today’s more engineered GMOs designed for resistance to pathogens and herbicides and for better nutrient profiles—the planet could sustain only a tiny fraction of its current population. Golden rice, for example, was modified to enhance vitamin A levels, in part, to help Third World children with nutritional deficiencies that have caused millions to go blind. As for health and safety concerns, according to A Decade of EU-Funded GMO Research, a 2010 report published by the European Commission: “The main conclusion to be drawn from the efforts of more than 130 research projects, covering a period of more than 25 years of research, and involving more than 500 independent research groups, is that biotechnology, and in particular GMOs, are not per se more risky than e.g. conventional plant breeding technologies.”

      …Given the importance of food for survival and flourishing, I suspect GMOs—especially in light of their association with large corporations such as Monsanto that operate on the market-pricing model—feel like an infringement of communal sharing and equality matching. Moreover, the elevation of “natural foods” to near-mythic status, coupled with the taboo many genetic-modification technologies are burdened with—remember when in vitro fertilization was considered unnatural?—makes GMOs feel like a desecration. It need not be so. GMOs are scientifically sound, nutritionally valuable and morally noble in helping humanity during a period of rising population. Until then, eat, drink and be merry.

The “True” Human Diet

[These excerpts are from an article by Peter S. Ungar in the Winter 2017/2018 special edition of Scientific American.]

      People have been debating the natural human diet for thousands of years, often framed as a question of the morality of eating other animals. The lion has no choice, but we do. Take the ancient Greek philosopher Pythagoras, for example: “Oh, how wrong it is for flesh to be made from flesh!” The argument hasn’t changed much for ethical vegetarians in 2,500 years, but today we also have Sarah Palin, who wrote in Going Rogue: An American Life, “If God had not intended for us to eat animals, how come He made them out of meat?” Have a look at Genesis 9:3—“Every moving thing that liveth shall be meat for you.”

      Although humans don’t have the teeth or claws of a mammal evolved to kill and eat other animals, that doesn’t mean we aren’t “supposed” to eat meat. Our early Homo ancestors invented weapons and cutting tools in lieu of sharp carnivorelike teeth. There is no explanation other than meat eating for the fossil animal bones riddled with stone tool cut marks at fossil sites. It also explains our simple guts, which look little like those evolved to process large quantities of fibrous plant foods.

      But gluten isn’t unnatural either. Despite the pervasive call to cut carbs, there is plenty of evidence that cereal grains were staples, at least for some, long before domestication. People at Ohalo II on the shore of the Sea of Galilee ate wheat and barley during the peak of the last ice age, more than 10,000 years before these grains were domesticated. Paleobotanists have even found starch granules trapped in the tartar on 40,000-year-old Neandertal teeth with the distinctive shapes of barley and other grains and the telltale damage that comes from cooking. There is nothing new about cereal consumption.

      This leads us to the so-called Paleolithic Diet. As a paleoanthropologist I’m often asked for my thoughts about it. I’m not really a fan—I like pizza and French fries and ice cream too much. Nevertheless, diet gurus have built a strong case for discordance between what we eat today and what our ancestors evolved to eat. The idea is that our diets have changed too quickly for our genes to keep up, and the result is said to be “metabolic syndrome,” a cluster of conditions that include elevated blood pressure, high blood sugar level, obesity and abnormal cholesterol levels. It’s a compelling argument. Think about what might happen if you put diesel in an automobile built for regular gasoline. The wrong fuel can wreak havoc on the system, whether you’re filling a car or stuffing your face.

      It makes sense, and it’s no surprise that Paleolithic diets remain hugely popular. There are many variants on the general theme, but foods rich in protein and omega-3 fatty acids show up again and again. Grass-fed cow meat and fish are good, and carbohydrates should come from nonstarchy fresh fruits and vegetables. On the other hand, cereal grains, legumes, dairy, potatoes, and highly refined and processed foods are out. The idea is to eat like our Stone Age ancestors—you know, spinach salads with avocado, walnuts, diced turkey, and the like.

      I am not a dietician and cannot speak with authority about the nutritional costs and benefits of Paleolithic diets, but I can comment on their evolutionary underpinnings. From the standpoint of paleoecology, the Paleolithic diet is a myth. Food choice is as much about what is available to be eaten as it is about what a species evolved to eat. And just as fruits ripen, leaves flush and flowers bloom predictably at different times of the year, foods available to our ancestors varied over deep time as the world changed around them from warm and wet to cool and dry and back again. Those changes are what drove our evolution….

      Many paleoanthropologists today believe that increasing climate fluctuation through the Pleistocene sculpted our ancestors—whether their bodies or their wit, or both—for the dietary flexibility that has become a hallmark of humanity. The basic idea is that our ever-changing world winnowed out the pickier eaters among us. Nature has made us a versatile species, which is why we can find something to satiate us on nearly all its myriad biospheric buffet tables. It’s also why we have been able to change the game, transition from forager to farmer, and really begin to consume our planet.

Are Engineered Foods Evil?

[These excerpts are from an article by David H. Freedman in the Winter 2017/2018 special edition of Scientific American.]

      …The bulk of the science on GM safety points in one direction. Take it from David Zilberman, a U.C. Berkeley agricultural and environmental economist and one of the few researchers considered credible by both agricultural chemical companies and their critics. He argues that the benefits of GM crops greatly outweigh the health risks, which so far remain theoretical. The use of GM crops “has lowered the price of food,” Zilberman says. “It has increased farmer safety by allowing them to use less pesticide. It has raised the output of corn, cotton and soy by 20 to 30 percent, allowing some people to survive who would not have without it. If it were more widely adopted around the world, the price [of food] would go lower, and fewer people would die of hunger.”

      In the future, Zilberman says, those advantages will become all the more significant. The United Nations Food and Agriculture Organization estimates that the world will have to grow 70 percent more food by 2050 just to keep up with population growth. Climate change will make much of the world’s arable land more difficult to farm. GM crops, Zilberman says, could produce higher yields, grow in dry and salty land, withstand high and low temperatures, and tolerate insects, disease and herbicides.

      Despite such promise, much of the world has been busy banning, restricting and otherwise shunning GM foods. Nearly all the corn and soybeans grown in the U.S. are genetically modified, but only two GM crops, Monsanto’s MON810 maize and BASF’s Amflora potato, are accepted in the European Union. Ten E.U. nations have banned MON810, and although BASF withdrew Amflora from the market in 2012, four E.U. nations have taken the trouble to ban that, too. Approval of a few new GM corn strains has been proposed there, but so far it has been repeatedly and soundly voted down. Throughout Asia, including in India and China, governments have yet to approve most GM crops, including an insect-resistant rice that produces higher yields with less pesticide. In Africa, where millions go hungry, several nations have refused to import GM foods in spite of their lower costs (the result of higher yields and a reduced need for water and pesticides). Kenya has banned them altogether amid widespread malnutrition. No country has definite plans to grow Golden Rice, a crop engineered to deliver more vitamin A than spinach (rice normally has no vitamin A), even though vitamin A deficiency causes more than one million deaths annually and half a million cases of irreversible blindness in the developing world.

      Globally, only a tenth of the world’s cropland includes GM plants. Four countries—the U.S., Canada, Brazil and Argentina—grow 90 percent of the planet’s GM crops. Other Latin American countries are pushing away from the plants. And even in the U.S., voices decrying genetically modified foods are becoming louder. In 2016 the U.S. federal government passed a law requiring labeling of GM ingredients in food products, replacing GM-labeling laws in force or proposed in several dozen states.

      The fear fueling all this activity has a long history. The public has been worried about the safety of GM foods since scientists at the University of Washington developed the first genetically modified tobacco plants in the 1970s. In the mid-1990s, when the first GM crops reached the market, Greenpeace, the Sierra Club, Ralph Nader, Prince Charles and a number of celebrity chefs took highly visible stands against them. Consumers in Europe became particularly alarmed: a survey conducted in 1997, for example, found that 69 percent of the Austrian public saw serious risks in GM foods, compared with only 14 percent of Americans.

      In Europe, skepticism about GM foods has long been bundled with other concerns, such as a resentment of American agribusiness. Whatever it is based on, however, the European attitude reverberates across the world, influencing policy in countries where GM crops could have tremendous benefits. “In Africa, they don’t care what us savages in America are doing,” Zilberman says. “They look to Europe and see countries there rejecting GM, so they don’t use it.” Forces fighting genetic modification in Europe have rallied support for “the precautionary principle,” which holds that given the kind of catastrophe that would emerge from loosing a toxic, invasive GM crop on the world, GM efforts should be shut down until the technology is proved absolutely safe.

      But as medical researchers know, nothing can really be “proved safe.” One can only fail to turn up significant risk after trying hard to find it—as is the case with GM crops….

      The human race has been selectively breeding crops, thus altering plants’ genomes, for millennia. Ordinary wheat has long been strictly a human-engineered plant; it could not exist outside of farms, because its seeds do not scatter. For some 60 years scientists have been using “mutagenic” techniques to scramble the DNA of plants with radiation and chemicals, creating strains of wheat, rice, peanuts and pears that have become agricultural mainstays. The practice has inspired little objection from scientists or the public and has caused no known health problems.

      The difference is that selective breeding or mutagenic techniques tend to result in large swaths of genes being swapped or altered. GM technology, in contrast, enables scientists to insert into a plant’s genome a single gene (or a few of them) from another species of plant or even from a bacterium, virus or animal. Supporters argue that this precision makes the technology much less likely to produce surprises. Most plant molecular biologists also say that in the highly unlikely case that an unexpected health threat emerged from a new GM plant, scientists would quickly identify and eliminate it. “We know where the gene goes and can measure the activity of every single gene around it,” Goldberg says. “We can show exactly which changes occur and which don’t.”

      And although it might seem creepy to add virus DNA to a plant, doing so is, in fact, no big deal, proponents say. Viruses have been inserting their DNA into the genomes of crops, as well as humans and all other organisms, for millions of years. They often deliver the genes of other species while they are at it, which is why our own genome is loaded with genetic sequences that originated in viruses and nonhuman species. “When GM critics say that genes don’t cross the species barrier in nature, that’s just simple ignorance,” says Alan McHughen, a plant molecular geneticist at U.C. Riverside. Pea aphids contain fungi genes. Triticale is a century-plus-old hybrid of wheat and rye found in some flours and breakfast cereals. Wheat itself, for that matter, is a cross-species hybrid. “Mother Nature does it all the time, and so do conventional plant breeders,” McHughen says.

      Could eating plants with altered genes allow new DNA to work its way into our own? It is possible but hugely improbable. Scientists have never found genetic material that could survive a trip through the human gut and make it into cells. Besides, we are routinely exposed to—and even consume—the viruses and bacteria whose genes end up in GM foods. The bacterium Bacillus thuringiensis, for example, which produces proteins fatal to insects, is sometimes enlisted as a natural pesticide in organic farming. “We’ve been eating this stuff for thousands of years,” Goldberg says.

      In any case, proponents say, people have consumed as many as trillions of meals containing genetically modified ingredients over the past few decades. Not a single verified case of illness has ever been attributed to the genetic alterations. Mark Lynas, a prominent anti-GM activist who in 2013 publicly switched to strongly supporting the technology, has pointed out that every single news-making food disaster on record has been attributed to non-GM crops, such as the Escherichia coli-infected organic bean sprouts that killed 53 people in Europe in 2011.

      Critics often disparage U.S. research on the safety of genetically modified foods, which is often funded or even conducted by GM companies, such as Monsanto. But much research on the subject comes from the European Commission, the administrative body of the E.U., which cannot be so easily dismissed as an industry tool. The European Commission has funded 130 research projects, carried out by more than 500 independent teams, on the safety of GM crops. None of those studies found any special risks from GM crops….

      Some scientists say the objections to GM food stem from politics rather than science—that they are motivated by an objection to large multinational corporations having enormous influence over the food supply; invoking risks from genetic modification just provides a convenient way of whipping up the masses against industrial agriculture. “This has nothing to do with science,” Goldberg says. “It’s about ideology.” Former anti-GM activist Lynas agrees. He has gone as far as labeling the anti-GM crowd “explicitly an antiscience movement….”

      There is a middle ground in this debate. Many moderate voices call for continuing the distribution of GM foods while maintaining or even stepping up safety testing on new GM crops. They advocate keeping a close eye on the health and environmental impact of existing ones. But they do not single out GM crops for special scrutiny, the Center for Science in the Public Interest’s Jaffe notes: all crops could use more testing. “We should be doing a better job with food oversight altogether,” he says.

      Even Schubert agrees. In spite of his concerns, he believes future GM crops can be introduced safely if testing is improved. “Ninety percent of the scientists I talk to assume that new GM plants are safety-tested the same way new drugs are by the FDA,” he says. “They absolutely aren’t, and they absolutely should be.”

      Stepped-up testing would pose a burden for GM researchers, and it could slow down the introduction of new crops. “Even under the current testing standards for GM crops, most conventionally bred crops wouldn’t have made it to market,” McHughen says. “What’s going to happen if we become even more strict?”

      That is a fair question. But with governments and consumers increasingly coming down against GM crops altogether, additional testing may be the compromise that enables the human race to benefit from those crops’ significant advantages.

How to Break the Climate Deadlock

[These excerpts are from an article by Naomi Oreskes in the Winter 2017/2018 special edition of Scientific American.]

      The main claim of politicians, lobbyists and CEOs who lead the charge to minimize the government’s role in addressing climate change is that the world should rely on the marketplace to fix the problem. Greenhouse gas emissions are part of the world’s economy, so if they are a problem, markets will respond, for instance, by offering technologies to prevent climate change or allow us to adapt to it.

      In truth, however, energy markets do not account for the “external,” or social, costs of using fossil fuels. These are not reflected in the price we pay at the pump, the wellhead or the electricity meter. For example, pollution from coal causes disease, damages buildings and contributes to climate change. When we buy electricity generated from coal, we pay for the electricity, but we do not pay for these other real, measurable costs.

      In a properly functioning market, people pay the true cost of the goods and services they use. If I dump my garbage in your backyard, you are right to insist that I pay for that privilege, assuming you are willing to let me do it at all. And if you do not insist, you can be pretty sure that I will keep on dumping my garbage there. In our markets today, people are dumping carbon dioxide into the atmosphere without paying for that privilege. This is a market failure. To correct that failure, carbon emissions must have an associated cost that reflects the toll they take on people and the environment. A price on carbon encourages individuals, innovators and investors to seek alternatives, such as solar and wind power, that do not cause carbon pollution. When economist Hoesung Lee became the new head of the Intergovernmental Panel on Climate Change in 2015, he named carbon pricing as the world’s top climate change priority.

      Several countries and regions have implemented carbon prices. In British Columbia, a carbon tax has helped cut fuel consumption and carbon emissions without harming economic growth. To prevent taxes from rising overall, the government also lowered personal and corporate income taxes; the province now has the lowest personal income tax rate and one of the lowest corporate tax rates in Canada.

      Another way to remedy the market failure of pollution is to create a trading system where people can buy the right to pollute—a right that they can use, save or sell. A company that can reduce its emissions more than the law requires can sell any unused credits, whereas a company that cannot meet the standards can buy credits until it figures out how to solve its pollution problem….
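The credit-trading mechanism described above can be sketched in a few lines of code. This is only a toy illustration of the buy/sell logic; the firm names, the cap, and the credit price are hypothetical, not figures from the article.

```python
# Toy sketch of a cap-and-trade market: each firm is granted the same
# emissions cap; firms under the cap hold surplus credits to sell, firms
# over the cap must buy credits to cover the difference.
cap = 100  # hypothetical credits granted to each firm

firms = {"FirmA": 80, "FirmB": 120}  # hypothetical actual emissions

# Positive surplus means unused credits to sell; negative means a shortfall.
surplus = {name: cap - emitted for name, emitted in firms.items()}
print(surplus)  # {'FirmA': 20, 'FirmB': -20}

price_per_credit = 15  # hypothetical market price
for name, s in surplus.items():
    if s > 0:
        print(f"{name} can sell {s} credits for ${s * price_per_credit}")
    else:
        print(f"{name} must buy {-s} credits for ${-s * price_per_credit}")
```

The point of the design is the incentive: the firm that cleans up cheaply profits from its surplus, while the laggard pays until it solves its pollution problem.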

      Some people, pointing to the recent rapid drop in the cost of solar power, argue that the market is responding and thus proves that we can solve climate change without Paris, without a big international treaty or even without government intervention at all. Others say that if we just implement a carbon tax, the market will do the rest. Neither position captures the whole story.

      To stop climate change, we need a new energy system. This means large-scale renewable power, coupled with dramatic increases in energy efficiency, demand management and energy storage. Solar and wind power work and in many domains are now cost-competitive, but they are not at the scale needed to replace enough fossil-fuel power plants to stop the ongoing rise in atmospheric CO2 levels. After half a century of heavy and sustained public investment, nuclear power remains costly and controversial, and we still lack an accepted means of nuclear waste disposal. Carbon capture and storage—which collects emissions and puts them underground—is a great idea but has yet to be implemented.

      A price on carbon will push demand in the right direction, but it needs to be reinforced by the pull of public investment in innovation. The most likely way we will get the innovation we need, at the scale we need, in the time frame we need and at a retail price that people can afford, is if the public sector plays a significant role….

      Carbon capture and storage requires special attention. The emissions-reduction goals being promised by many countries assume that these nations will be capturing carbon and storing it in the ground. The dirty secret is that a proved system does not exist, not to mention a cost-effective one….

Exploration before Exploitation

[These excerpts are from an editorial by Erik E. Cordes and Lisa A. Levin in the February 16, 2018, issue of Science.]

      The current U.S. administration has proposed to open up 90% of the U.S. continental shelf to oil and gas drilling as part of a new Bureau of Ocean Energy Management (BOEM) Draft Proposed Program. Although there is a clear need to move beyond fossil fuels for America’s energy needs and energy security, there are also a number of immediate existential threats posed by an increase in offshore drilling.

      The sites that would be opened include vast areas of the ocean floor that remain unexplored, even unmapped. The maps that exist of our oceans, which make up about 70% of the surface of Earth, are at a resolution of about 5 kilometers. If Manhattan were under water, a map of it would be 4 pixels by 1 pixel across—easy enough to miss in a casual survey. Only about 5% of the ocean floor has been mapped in the level of detail equivalent to the high-resolution maps of the Moon and Mars. Of this, only about 0.01% has been seen by humans through photo or video surveys.

      This is noteworthy because the continental margins are not just featureless, muddy plains that could be drilled at any location with little disturbance. They contain submarine canyons, deep-water coral reefs and mounds, and natural hydrocarbon seep communities. All of these habitats interact with the surface ocean and provide services on which humans rely. As an example, many small fish that feed the larger fish we rely on for food migrate every day to the deep sea and interact with the unusual habitats there. The pharmaceutical industry spends untold funds on drug discovery, and many promising finds have come from deep-sea corals and sponges. At a global scale, the ocean absorbs one-third of the carbon that humans emit and more than 90% of the excess heat that has recently entered the atmosphere. Much of this ends up in the deep sea….

      The probability of an offshore drilling accident increases with the depth of the industrial activity, and a single isolated incident may require decades to centuries for recovery because of the slow growth and longevity of the deep-sea fauna. Even in well-studied areas, long-term observation and monitoring, both pre- and postdrilling, will be necessary to distinguish the impacts of drilling from the effects of climate change, pollutants, fishing, and other human disturbances.

      The regions now considered for drilling have not been subject to the decades of deep-sea exploration and research that are essential to make sound decisions for siting of new drilling and the effective management of deep-sea resources. If existing maps are insufficient, or our understanding of the basic life-history traits of the fauna are incomplete, we could be losing the next generation of anticancer drugs or the recycling of nutrients essential to support fisheries while we exploit energy reserves that are best left in the ground. A clear national commitment to science-based management of offshore drilling would demonstrate the leadership of the United States on this issue, which has broad relevance globally as the industrialization of the open ocean proceeds.

Fact or Fiction?: Vaccines are Dangerous

[This excerpt is from an article by Dina Fine Maron in the Winter 2017/2018 special edition of Scientific American.]

      We live in a crowded, fast-moving world, and disease travels easily. The data are clear: failure to immunize a child comes with a much more formidable risk—leaving children vulnerable to contracting a potentially debilitating or lethal illness. Some children are too sick or too young to receive inoculations, so they remain at risk. If those children or other unvaccinated kids come into contact with someone else who was not protected against certain microbes, that can set off a wave of disease such as the measles outbreak in the U.S. in the summer of 2017. Maladies that have become uncommon, such as polio and measles, can also quickly reappear if we stop vaccinating against them, particularly when they are unintentionally imported across geographic borders. The 2015 measles outbreak that rippled through the U.S., for example, had genetic markers that suggest it came from an overseas traveler. Protecting kids actually helps protect everyone.

Answers to Climate Contrarian Nonsense

[This excerpt is from an article by John Rennie in the Winter 2017/2018 special edition of Scientific American.]

      Although CO2 makes up only 0.04 percent of the atmosphere, that small number says nothing about its significance in climate dynamics. Even at that low concentration, CO2 absorbs infrared radiation and acts as a greenhouse gas, as physicist John Tyndall demonstrated in 1859. Chemist Svante Arrhenius went further in 1896 by estimating the impact of CO2 on the climate; after painstaking hand calculations, he concluded that doubling its concentration might cause almost six degrees Celsius of warming—an answer not much out of line with recent, far more rigorous computations.

      Contrary to the contrarians, human activity is by far the largest contributor to the observed increase in atmospheric CO2. According to the Global Carbon Project, anthropogenic CO2 amounts to about 35 billion tons annually—more than 130 times as much as volcanoes produce. True, 95 percent of the releases of CO2 to the atmosphere are natural, but natural processes such as plant growth and absorption into the oceans pull the gas back out of the atmosphere and almost precisely offset them, leaving the human additions as a net surplus. Moreover, several sets of experimental measurements, including analyses of the shifting ratio of carbon isotopes in the air, further confirm that fossil-fuel burning and deforestation are the primary reasons that CO2 levels have risen 40 percent since 1832, from 284 parts per million (ppm) to more than 400 ppm—a remarkable jump to the highest levels seen in millions of years.
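The arithmetic in the excerpt above is easy to check using only the figures it quotes:

```python
# Sanity-check the quoted figures: the 40 percent rise in CO2 concentration
# and the implied scale of volcanic emissions.
ppm_then, ppm_now = 284.0, 400.0  # parts per million, as quoted
rise = (ppm_now - ppm_then) / ppm_then
print(f"CO2 rise: {rise:.0%}")  # prints "CO2 rise: 41%", consistent with the quoted 40 percent

human_emissions = 35.0            # billion tons CO2 per year, as quoted
volcanic = human_emissions / 130  # "more than 130 times as much as volcanoes produce"
print(f"Implied volcanic CO2: ~{volcanic:.2f} billion tons per year")
```

So volcanoes account for well under a billion tons a year, a small fraction of the human contribution.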

      Contrarians frequently object that water vapor, not CO2, is the most abundant and powerful greenhouse gas; they insist that climate scientists routinely leave it out of their models. The latter is simply untrue: from Arrhenius on, climatologists have incorporated water vapor into their models. In fact, water vapor is why rising CO2 has such a big effect on climate. CO2 absorbs some wavelengths of infrared that water does not, so it independently adds heat to the atmosphere. As the temperature rises, more water vapor enters the atmosphere and multiplies CO2’s greenhouse effect; the Intergovernmental Panel on Climate Change notes that water vapor may “approximately double the increase in the greenhouse effect due to the added CO2 alone.”

      Nevertheless, within this dynamic, the CO2 remains the main driver (what climatologists call a “forcing”) of the greenhouse effect. As NASA climatologist Gavin Schmidt has explained, water vapor enters and leaves the atmosphere much more quickly than CO2 and tends to preserve a fairly constant level of relative humidity, which caps off its greenhouse effect. Climatologists therefore categorize water vapor as a feedback rather than a forcing factor. (Contrarians who don’t see water vapor in climate models are looking for it in the wrong place.)

Education as an American Right?

[This excerpt is from an article by Julie Underwood in the February 2018 issue of Phi Delta Kappan.]

      Since their inception, public schools have served to develop children’s ability to participate as active citizens in our nation's democracy. In addition to helping them secure personal gain, prepare for productive work, and accomplish individual goals, education has been expected to create responsible and active participants in American democratic society.

      Our nation’s founders believed that the success of American democracy depended on the capacity of its citizens to make informed and reasonable decisions about the public good. In those days, the primary rationale for providing public funds for education was to develop a populace that understood political and social issues, could vote wisely, and could serve on juries and deliberative bodies. Even today, the goal is to afford all children an adequate education for equal access to participation in our democratic society.

      Currently, we treat students as citizens of the state in which they reside and not as citizens of the larger country. Thus, we allow each state to set the parameters of its investment in children and to set the limits of those children's opportunities. In today's world, though, it is becoming increasingly difficult to argue that the education of children in Alabama or Arizona has no significant effect on the people of Mississippi or Michigan, not only insofar as they participate in the larger national economy but also to the extent that their political decisions have implications for the rest of the country. Do we really believe that we can ignore the differences in opportunities provided to individuals just because of their states of residence? Or, given the realities of 21st-century life — at a time when citizens frequently relocate for higher education or work, drive across state borders for dinner, share the same cyberspace, and consume the same media — can we acknowledge that we all have a stake in ensuring that our fellow citizens have the right, as Americans, to an education that allows them to participate in the political process?

My Climate Change Crisis

[These excerpts are from an article by Paul C. Rogge in the February 9, 2018, issue of Science.]

      Reading former Vice President Al Gore’s An Inconvenient Truth in college awakened me to the widespread threat of climate change. Spurred to action, I joined a lab to develop alternative energy technologies. The result was an undergraduate project studying magnetic materials, which are important for electric vehicles and wind turbines. It got me hooked on research and left me wanting to make a bigger impact. I wanted my work to lead straight to solutions—but in the process, I veered off course.

      While visiting prospective graduate programs, I found just the project I was looking for: creating a radically new type of solar cell using low-cost electrochemistry techniques. The chemistry my new project would require was fundamentally different from the methods I had learned while working on magnetic materials, and I hadn’t really thought about chemistry since taking Chem 101 as a college freshman. But I was so driven by the potential social impact of the work that these realities did not worry me.

      In my first week, it was clear that I was starting from scratch. Mixing my first solution, I dumped powdered copper sulfate into a dry beaker before adding water—a big no-no, as any chemist knows. The extreme heat given off nearly caused the beaker to explode. But I quickly learned the correct procedures and threw myself into my research.

      However, 2 years of intense work did not lead to much progress. I found myself in the lab less, and my patience for troubleshooting experiments waned. I began to doubt my project and even my ability to graduate. To my adviser, the unexpected failure of the technique I was using offered a chance to dig deeper into the underlying chemistry. He was excited about what he saw as a silver lining, which made my enthusiasm sink even further. I didn’t want to study electrochemistry techniques; I wanted to solve climate change.

      In hindsight, the source of my dissatisfaction was clear. In choosing my research area, I had been blinded by my enthusiasm to tackle a pressing social and environmental problem. I hadn’t critically considered my own scientific interests and whether I would enjoy working in an electrochemistry lab. I should have remembered how much I struggled in Chem 101. And I should have realized that the concepts and techniques I mastered in my undergraduate research on magnetic materials—the very things that got me hooked on research in the first place—were not particularly relevant for electrochemistry….

      Just as I came to see my research as part of a larger scientific ecosystem, today I understand that scientific advancements are just one part of the needed response to climate change. I’ve reduced my environmental impact by taking public transportation to work, significantly cutting my meat intake, and resisting my consumerist impulses. My lifestyle changes won’t single-handedly reverse climate change, and neither will my individual scientific contributions. But we all need to work together to address such challenges, each of us contributing in the best way we can.

An Indoor Chemical Cocktail

[These excerpts are from an article by Sasho Gligorovski and Jonathan P.D. Abbatt in the February 9, 2018, issue of Science.]

      In the past 50 years, many of the contaminants and chemical transformations that occur in outdoor waters, soils, and air have been elucidated. However, the chemistry of the indoor environment in which we live most of the time—up to 90% in some societies—is not nearly as well studied. Recent work has highlighted the wealth of chemical transformations that occur indoors. This chemistry is associated with 3 of the top 10 risk factors for negative health outcomes globally: household air pollution from solid fuels, tobacco smoking, and ambient particulate matter pollution. Assessments of human exposure to indoor pollutants must take these reactive processes into consideration.

      A few studies illustrate the nature of multi-phase chemistry in the indoor environment….a highly carcinogenic class of compounds—the tobacco-specific nitrosamines—forms via the reaction of gas-phase nitrous acid (HONO) with cigarette nicotine that is adsorbed onto indoor surfaces similar to those in a typical smoker’s room. HONO is also produced indoors directly by other combustion sources such as gas stoves and by the gas-surface reactions of gaseous nitrogen oxides on walls, ceilings, and carpets. Likewise, carcinogenic polycyclic aromatic hydrocarbons (PAHs) and their often more toxic oxidation products are mobile, existing both on the walls of most dwellings and in the air; PAHs arise from combustion sources such as smoking and inefficient cookstoves. This is a particularly important issue in developing countries, where the adverse health effects from cooking with solid fuels are a leading cause of disease. As another example, use of chlorine bleach to wash indoor surfaces promotes oxidizing conditions not just on the surfaces being washed but throughout the indoor space. Reactive chlorinated gases (such as HOCl and Cl2) evaporate from the washed surface, can oxidize other surfaces in a room, and may be broken apart by ultraviolet (UV) light to form reactive radicals….

      The building science research community has long identified the importance of ventilation for the state of indoor environments. Open windows expose us to outdoor air, whereas well-sealed houses are subject to emissions from furnishings, building materials, chemical reactions, and people and their activities. Climate change and outdoor air pollution are leading to efforts to better seal off indoor spaces, slowing down exchange of outdoor air. The purpose may be to improve air conditioning, build more energy-efficient homes, or prevent the inward migration of outdoor air pollution. As exposure to indoor environments increases, we need to know more about the chemical transformations in our living and working spaces, and the associated impacts on human health.

As Polar Ozone Mends, UV Shield Closer to Equator Thins

[These excerpts are from an article by April Reese in the February 9, 2018, issue of Science.]

      Thirty years after nations banded together to phase out chemicals that destroy stratospheric ozone, the gaping hole in Earth's ultraviolet (UV) radiation shield above Antarctica is shrinking. But new findings suggest that at midlatitudes, where most people live, the ozone layer in the lower stratosphere is growing more tenuous—for reasons that scientists are struggling to fathom.

      “I don’t want people to panic or get overly worried,” says William Ball, an atmospheric physicist…. “But there is something happening in the lower stratosphere that's important to understand….”

      Ball and his colleagues suspect the culprit is “very short-lived substances,” or VSLSs: ozone-eating chemicals such as dichloromethane that break down within 6 months after escaping into the air. Researchers had long assumed that VSLSs’ short lifetime would keep them from reaching the stratosphere, but a 2015 study suggested that the substances may account for as much as 25% of the lower stratosphere’s ozone losses. Whereas many VSLSs are of natural origin—marine organisms produce dibromomethane, for example—use of human-made dichloromethane, an ingredient in solvents and paint removers, has doubled in recent years. “We should study [VSLSs] more completely,” says Richard Rood, an atmospheric scientist at the University of Michigan in Ann Arbor. But because the compounds are released in small quantities, he says, “They’re going to be difficult to measure.”

      He and others say it's vital to determine what's destroying ozone over the populous midlatitudes. “The potential for harm ... may actually be worse than at the poles,” says Joanna Haigh, co-director of the Grantham Institute at Imperial College London. “The decreases in ozone are less than we saw at the poles before the Montreal Protocol was enacted, but UV radiation is more intense in these regions.”

      Ball and others emphasize that the Montreal Protocol has been a resounding success. “I don’t think it in any way says there’s something fundamentally wrong with how we've been dealing with the ozone problem,” Rood says. “What it says to me is that we’re now looking at effects that are more subtle than that original problem we were taking on” when the Montreal Protocol was adopted.

Gun Science

[These excerpts are from an article by Michael Shermer in the Winter 2017/2018 special edition of Scientific American.]

      According to the Centers for Disease Control and Prevention, 33,594 people died by guns in 2014 (the most recent year for which U.S. figures are available), a staggering number that is orders of magnitude higher than that of comparable Western democracies. What can we do about it? National Rifle Association executive vice president Wayne LaPierre believes he knows: “The only thing that stops a bad guy with a gun is a good guy with a gun.” If LaPierre means professionally trained police and military who routinely practice shooting at ranges, this observation would at least be partially true. If he means armed private citizens with little to no training, he could not be more wrong….

      For example, of the 1,082 women and 267 men killed in 2010 by their intimate partners, 54 percent were shot by guns. Over the past quarter of a century, guns were involved in a greater number of intimate partner homicides than all other causes combined. When a woman is murdered, it is most likely by her intimate partner with a gun. Regardless of what really caused Olympic track star Oscar Pistorius to shoot his girlfriend, Reeva Steenkamp (whether he mistook her for an intruder or he snapped in a lover’s quarrel), her death is only the latest such headline. Recall, too, the fate of Nancy Lanza, killed by her own gun in her own home in Connecticut by her son, Adam Lanza, before he went to Sandy Hook Elementary School to murder some two dozen children and adults. As an alternative to arming women against violent men, legislation can help: data show that in states that prohibit gun ownership by men who have received a domestic violence restraining order, gun-caused homicides of intimate female partners have been reduced by 25 percent.

      Another myth to fall to the facts is that gun-control laws disarm good people and leave the crooks with weapons. Not so, say the Johns Hopkins authors: “Strong regulation and oversight of licensed gun dealers—defined as having a state law that required state or local licensing of retail firearm sellers, mandatory record keeping by those sellers, law enforcement access to records for inspection, regular inspections of gun dealers, and mandated reporting of theft or loss of firearms—was associated with 64 percent less diversion of guns to criminals by in-state gun dealers.”

      Finally, before we concede civilization and arm everyone to the teeth, pace the NRA, consider the primary cause of the centuries-long decline of violence as documented by Steven Pinker in his 2011 book The Better Angels of Our Nature: the rule of law by states that turned over settlement of disputes to judicial courts and curtailed private self-help justice through legitimate use of force by police and military trained in the proper use of weapons.

Journey to Gunland

[These excerpts are from an article by Melinda Wenner Moyer in the Winter 2017/2018 special edition of Scientific American.]

      …A growing body of research suggests that violence is a contagious behavior that exists independent of weapon or means. In this framework, guns are accessories to infectious violence rather than fountainheads. But this does not mean guns don’t matter. Guns intensify violent encounters, upping the stakes and worsening the outcomes—which explains why there are more deaths and life-threatening injuries where firearms are common. Violence may be primarily triggered by other violence, but these deadly weapons make all this violence worse….

      The frequency of self-defense gun use rests at the heart of the controversy over how guns affect our country. Progun enthusiasts argue that it happens all the time. In 1995 Gary Kleck, a criminologist at Florida State University, and his colleague Marc Gertz published a study that elicited what has become one of the gun lobby's favorite numbers. They randomly surveyed 5,000 Americans and asked if they, or another member of the household, had used a gun for self-protection in the past year. A little more than 1 percent of the participants answered yes, and when Kleck and Gertz extrapolated their results, they concluded that Americans use guns for self-defense as many as 2.5 million times a year.
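
As a minimal sketch of the extrapolation described above: a small fraction of "yes" answers in a sample is scaled up to the whole population. The sample counts and the adult-population figure here are illustrative assumptions of mine, not numbers taken from the article.

```python
# Illustrative survey-extrapolation arithmetic (assumed figures).
sample_size = 5_000          # respondents, as in the Kleck-Gertz survey
yes_answers = 66             # "a little more than 1 percent" (assumed count)
us_adults = 200_000_000      # assumed adult population for illustration

yes_rate = yes_answers / sample_size
estimated_uses = yes_rate * us_adults
print(f"Yes rate: {yes_rate:.2%}; extrapolated uses per year: {estimated_uses:,.0f}")
```

With these assumptions the estimate lands in the millions, which shows how a roughly 1 percent response rate scales to a number like the quoted 2.5 million, and also why even a small share of mistaken "yes" answers to a rare-event question inflates the national total dramatically.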

      This estimate is, however, vastly higher than numbers from government surveys, such as the National Crime Victimization Survey (NCVS), which is conducted in tens of thousands of households. It suggests that victims use guns for self-defense only 65,000 times a year. In 2015 Hemenway and his colleagues studied five years’ worth of NCVS data and concluded that guns are used for self-defense in less than 1 percent of all crimes that occur in the presence of a victim. They also found that self-defense gun use is about as effective as other defensive maneuvers, such as calling for help. “It’s not as if you look at the data, and it says people who defend themselves with a gun are much less likely to be injured,” says Philip Cook, an economist at Duke University, who has been studying guns since the 1970s….

      A closer look at the who, what, where and why of gun violence also sheds some light on the self-defense claim. Most Americans with concealed carry permits are white men living in rural areas, yet it is young black men in urban areas who disproportionately encounter violence. Violent crimes are also geographically concentrated: Between 1980 and 2008, half of all of Boston’s gun violence occurred on only 3 percent of the city’s streets and intersections. And in Seattle, over a 14-year period, every single juvenile crime incident took place on less than 5 percent of street segments. In other words, most people carrying guns have only a small chance of encountering situations in which they could use them for self-defense….

      The popular gun-advocacy bumper sticker says that “guns don’t kill people, people kill people”—and it is, in fact, true. People, all of us, lead complicated lives, misinterpret situations, get angry, make mistakes. And when a mistake involves pulling a trigger, the damage can’t be undone….

Girls Lead in Solving Problems with Others

[These excerpts are from a report by OECD (Organization for Economic Co-operation and Development) in the February 2018 issue of Phi Delta Kappan.]

      Girls are much better than boys at working together to solve problems, according to an international assessment of collaborative problem solving.

      About 125,000 15-year-olds in 52 countries and economies took part in the assessment, which analyzes for the first time how well students work together as a group, their attitudes toward collaboration, and the influence of factors such as gender, after-school activities, and social background….

      • Students with stronger reading or math skills tend to be better at collaborative problem solving because managing and interpreting information, and the ability to reason, are required to solve problems.

      • Girls do better than boys in every country and economy that participated in the assessment, by the equivalent of a half year of schooling on average….

      • Exposure to diversity in the classroom tends to be associated with better collaboration skills. For example, in some countries, students without an immigrant background perform better in the collaboration-specific aspects of the test when they attend schools with a larger proportion of immigrant students.

      • Students who attend physical education lessons or play sports generally are more positive about collaboration. Students who play video games outside of school score slightly lower in collaborative problem solving than students who do not play video games. But students who access the internet or social networks outside of school score slightly higher than other students.

North American Waterways Are Becoming Saltier

[These excerpts are from an item from the University of Maryland in the February 2018 issue of The Science Teacher.]

      Across America, streams and rivers are becoming saltier, thanks to road deicers, fertilizers, and other salty compounds that humans indirectly release into waterways. At the same time, freshwater supplies are becoming more alkaline.

      Salty, alkaline freshwater can create big problems for drinking water supplies, urban infrastructure, and natural ecosystems. A new study is the first to assess long-term changes in freshwater salinity and pH at the continental scale. Drawn from data recorded at 232 U.S. Geological Survey monitoring sites across the country over the past 50 years, the analysis shows significant increases in both salinization and alkalinization. The study results also suggest a close link between the two properties, with different salt compounds combining to do more damage than any one salt on its own….

      According to the researchers, most freshwater salinization research has focused on sodium chloride, better known as table salt, which is also the dominant chemical in road deicers. But in terms of chemistry, salt has a much broader definition, encompassing any combination of positively and negatively charged ions that dissociate in water. Some of the most common positive ions found in salts—including sodium, calcium, magnesium, and potassium—can have damaging effects on freshwater at higher concentrations.

      …Alkalinization, which is influenced by a number of different factors in addition to salinity, increased by 90%.

      The root causes of increased salt in waterways vary from region to region, according to researchers. In the snowy Mid-Atlantic and New England, road salt applied to maintain roadways in winter is a primary culprit. In the heavily agricultural Midwest, fertilizers, particularly those with high potassium content, also make major contributions. In other regions, mining waste and weathering of concrete, rocks, and soils releases salts into adjacent waterways.

It’s Time for EPA’s Scott Pruitt to Go

[This excerpt is from an editorial by Fred Krupp in the Winter 2018 issue of Solutions, the newsletter of the Environmental Defense Fund.]

      This month marks the one-year anniversary of President Trump’s inauguration and the convening of the 115th Congress. And what a year it has been—a perfect storm of extreme weather and extreme politics. The president is surrendering America's climate leadership, undermining the government's ability to enforce the law and demolishing environmental safeguards.

      The administration's point man for environmental assaults and climate denial has been EPA Administrator Scott Pruitt. As Oklahoma attorney general, he sued EPA 14 times trying to block clean air and water protections. This year he led what amounted to a hostile takeover of the agency, rolling back climate standards even as historic hurricanes and wildfires drove home the need for urgent action.

      Pruitt has ruled EPA under a cloak of secrecy, suppressing climate web pages, silencing scientists and keeping his schedule secret, until Freedom of Information Act requests from EDF and others forced its release. It showed Pruitt meeting regularly with executives from the mining, fossil fuel and auto industries, sometimes shortly before making decisions that put their interests above those of the American people. His frequent travel to Oklahoma at taxpayers’ expense prompted EPA’s Inspector General to open an investigation.

Our Science, Our Society

[These excerpts are from an editorial by Susan Hockfield in the February 3, 2018, issue of Science.]

      We live in a scientific golden age. Never has the pace of discovery been so rapid, the range of achievements so broad, and the changing nature of our understanding so revolutionary. Science today has extraordinary powers. It reveals fundamental phenomena of our universe, catalyzes new technologies, powers new businesses, fosters new industries, and improves lives….Today’s advances and innovations presage a future that most of us have not yet imagined.

      Lamentably, we also live in a new heyday of anti-science activism. Fake news and “alternative facts” abound. Climate-change deniers occupy political office and determine environmental policy. Fears of unsubstantiated dangers delay the deployment of genetically modified foods in starving nations. The risks of nuclear power are overstated rather than carefully weighed. The anti-vaccination movement endures, and there are claims that science is as culturally determined and subjective as any other endeavor. Public figures cynically dismiss scientific findings, fostering a popular distrust of expertise and experts. All this, too, presages a different future that most of us would not want to imagine.

      In this environment, how can we ensure that science prevails and continues to flourish? What can be done to get the most from this scientific golden age? We can start by recognizing the critical role of institutions in nurturing the scientific enterprise….

      When the focus of science is placed on individual achievement, it can neglect the importance of the institutions that make the work of science possible. That leaves our institutions open to attack. And, indeed, both science and its institutions are under attack today, with rampant skepticism about the utility of the research enterprise and higher education. Also under attack are the core principles that unite scientists and science enthusiasts: that objective reality can be discovered; that anyone can compete in a game governed by ideas; that disagreements are best resolved by assembling facts to test competing views; and that science and the application of scientific principles have the capacity to improve lives. What's more, science's universal truths call together people from any background, any nation, any phenotype or genotype. These principles have guided us for centuries along the road to discovery and understanding.

A Tale of Two Cultures

[These excerpts are from an editorial by Rush Holt in the January 26, 2018, issue of Science.]

      It is the best of times. It is the worst of times. We are witnessing major advances in almost every field of science, leading to a better understanding of the world and improvements in the quality of people’s lives. Yet, scattered distrust of science, neglect of science by public officials, and frequent denial of scientific thinking in many quarters seem to call into question that rosy view of scientific progress. The inconsistency indicates widespread misunderstanding of what science is and how it works. It is up to scientists to fix this.

      …For example, it is troubling to scientists that in the United States, the president has failed to appoint a science adviser. But even more troubling is that the public has reacted with a yawn.

      …A principle of science is that all findings are provisional. Some seem to think this means science is so uncertain that any opinion or political assertion is as valid as evidence.

      Somehow, scientists must rebuild public understanding and appreciation of science and evidence-based thinking. Clearly, it will not be accomplished simply by decrying the lack of trust or failure to appoint science advisers. It must be achieved by demonstrating trustworthiness and the extraordinary effectiveness of science in confronting questions and problems. Scientists must show that evidence-based thinking leads to more reliable policies to create jobs, maintain a healthy environment, or improve teaching. Rather than denouncing the absence of scientists in policy-making positions, the scientific community must raise public understanding to the level where no public official of any party would ever want to be without a science adviser. Scientists must build the recognition that despite occasional errors, and even blunders, scientific thinking has a strong record of success over centuries. Scientists must demonstrate that science and evidence-based thinking are relevant to everyone, and that science is not an arcane practice under the control of a remote, self-interested priesthood.

      Science practiced by those who neither make their work accessible to all people, nor make clear their work is for the benefit of all, becomes an impoverished enterprise and risks being unsustainable. It comes down to good science communication—not simply choosing the right words to explain one's research, but actually earning the public’s trust that the whole enterprise is intended for societal good. If scientists fail to rebuild the public's understanding and appreciation, this could indeed become the worst of times.

When Facts Backfire

[This excerpt is from an article by Michael Shermer in the Winter 2017/2018 special edition of Scientific American.]

      Have you ever noticed that when you present people with facts that are contrary to their deepest held beliefs they always change their minds? Me neither. In fact, people seem to double down on their beliefs in the teeth of overwhelming evidence against them. The reason is related to the worldview perceived to be under threat by the conflicting data.

      Creationists, for example, dispute the evidence for evolution in fossils and DNA because they are concerned about secular forces encroaching on religious faith. Antivaxxers distrust big pharma and think that money corrupts medicine, which leads them to believe that vaccines cause autism despite the inconvenient truth that the one and only study claiming such a link was retracted and its lead author accused of fraud. The 9/11 truthers focus on minutiae like the melting point of steel in the World Trade Center buildings that caused their collapse because they think the government lies and conducts “false flag” operations to create a New World Order. Climate deniers study tree rings, ice cores and the ppm of greenhouse gases because they are passionate about freedom, especially that of markets and industries to operate unencumbered by restrictive government regulations….

The 1918 Flu, 100 Years Later

[These excerpts are from an editorial by Jessica A. Belser and Terrence M. Tumpey in the January 19, 2018, issue of Science.]

      Combating a disease of unknown cause is a daunting task. One hundred years ago, a pandemic of poorly understood etiology and transmissibility spread worldwide, causing an estimated 50 million deaths. Initially attributed to Haemophilus influenzae, it was not until the 1930s that an H1 subtype was identified as the causative strain. Subsequent influenza pandemics in 1957, 1968, and 2009 did not approach levels of morbidity and mortality comparable to those of the 1918 “Spanish flu,” leaving unanswered for almost a century questions regarding the extraordinary virulence and transmissibility of this unique strain. Technological advances made reconstruction of the 1918 virus possible; now continued research, vaccine development, and preparedness are essential to ensure that such a devastating public health event is not repeated.

      Over the past 20 years, studies of individual genes and the fully reconstructed live 1918 virus have identified numerous features that likely contributed to its robustness and rapid global spread. Importantly, this research has often been conducted in tandem with viral isolates from recent human and zoonotic sources, enabling insights from the 1918 virus to inform evaluations of current pandemic risk. As we now know, wild birds are the natural reservoir for influenza A viruses. With extensive antigenic and genetic diversity inherent among influenza virus surface proteins, a strain to which humans are immunologically naive could jump the species barrier at any time….

      Philosopher George Santayana pointed out, “Those who cannot remember the past are condemned to repeat it.” We are no doubt more prepared in 2018 for an infectious disease threat than in 1918. But it is critical to remember that preparation only stems from a global commitment to share data about viral isolates, support innovative research, and dedicate resources to assess the pandemic risk of new and emerging influenza viruses from zoonotic reservoirs.

Alvy’s Error and the Meaning of Life

[This excerpt is from an article by Michael Shermer in the February 2018 issue of Scientific American.]

      …we are sentient beings designed by evolution to survive and flourish in the teeth of entropy and death. The second law of thermodynamics (entropy) is the first law of life. If you do nothing, entropy will take its course, and you will move toward a higher state of disorder that ends in death. So our most basic purpose in life is to combat entropy by doing something “extropic”—expending energy to survive and flourish. Being kind and helping others has been one successful strategy, and punishing Paleolithic Stalins was another, and from these actions, we evolved morality. In this sense, evolution bestowed on us a moral and purpose-driven life by dint of the laws of nature. We do not need any source higher than that to find meaning or morality.

Why Fake Operations Are a Good Thing

[These excerpts are from an article by Claudia Wallis in the February 2018 issue of Scientific American.]

      …Take arthroscopic knee surgery, the most common orthopedic operation. More than two million are done annually to tidy up ragged cartilage in people with arthritis and degenerative wear and tear in their knees, including a torn meniscus. Yet sham surgery studies and other research have shown it offers no advantages for the vast majority of such patients. They would do just as well with physical therapy, weight loss and exercise.

      Consider this: before a new drug is approved for marketing, researchers must show that it is more effective than a sugar pill. Not so for a new operation. And yet surgeries have a much bigger placebo effect than drugs. To quantify the difference, a 2013 meta-analysis looked at placebo effects in 79 studies of migraine prevention: sugar pills reduced headache frequency for 22 percent of patients, fake acupuncture helped 38 percent, and sham surgery was a hit for a remarkable 58 percent….

      And yet sham surgery studies are rarely done, especially in the U.S., where ethics boards resist subjecting patients to incisions, anesthesia and other risks without delivering an actual treatment. Redberg, who has written about the value of these studies, takes the opposite view: “I think it’s unethical not to do them.” Otherwise you may be exposing millions of people to the risks and the financial costs of surgery for a placebo effect that will not likely last.

Porpoise-Driven Life

[These excerpts are from an article by Clara Moskowitz in the February 2018 issue of Scientific American.]

      The best military sonar technology pales in comparison with the echolocation porpoises use to track prey, predators and obstacles. The marine mammals can find objects a few centimeters wide from 100 meters away—akin to spotting a walnut from across a football field—by releasing clicks from their blowholes. Sonar-equipped ships, in contrast, must emit sound waves from multiple sources spread out over at least a few meters. A recent study suggests porpoises’ ultraefficient echolocation is made possible by adjustable structures in their heads—a finding that may help humans improve our own sonar technology.

      Sonar works by bouncing sound waves off objects and detecting the signals’ return time. Normally if the source of a sonar pulse is smaller than the wavelength of the sound, it releases sound signals in all directions, like light scattering from a disco ball. To send a targeted beam in a specific direction, the source must be much larger than the wavelength. But porpoises manage to evade this requirement….

      The work suggests that porpoises share some tricks with another mammal famous for echolocation: bats….Next to human technology, it seems bats and porpoises really are a few flaps or laps ahead.

My Second Life as a Teacher

[These excerpts are from an editorial by William H. Walter in the January 12, 2018, issue of Science.]

      …I am glad my career path took this unexpected turn. Making the adjustment felt like a new and exciting challenge, not a downgrade in my prospects. In some ways, teaching high school students has been even more rewarding than my college-level teaching. I teach physics and occasional astronomy-related courses at levels that are typically more rigorous than the introductory courses I was teaching at Tufts. I also get to interact with my high school students in a more personal way, as the class sizes are smaller and I see the students more often over the course of the year. Yes, they are still teenagers who are prone to lapses in their executive functioning and who can manifest a fair degree of silliness and drama. But they also can be delightful, especially if given a chance to express themselves. I particularly enjoy mentoring students as they conceive and carry out research projects that they can then present in competition….

      Teaching also leaves enough room in my life for my own intellectual interests, including reading scientific journals and magazines….

Taming the Monsters of Tomorrow

[These excerpts are from an article by Kai Kupferschmidt in the January 12, 2018, issue of Science.]

      Philosopher Nick Bostrom believes it’s entirely possible that artificial intelligence (AI) could lead to the extinction of Homo sapiens….Bostrom paints a dark scenario in which researchers create a machine capable of steadily improving itself. At some point, it learns to make money from online transactions and begins purchasing goods and services in the real world. Using mail-ordered DNA, it builds simple nanosystems that in turn create more complex systems, giving it ever more power to shape the world….

      For Bostrom and a number of other scientists and philosophers, such scenarios are more than science fiction. They’re studying which technological advances pose “existential risks” that could wipe out humanity or at least end civilization as we know it—and what could be done to stop them….

      The idea of science eliminating the human race can be traced all the way back to Frankenstein….

      “I think Frankenstein illustrates the point beautifully,” says physicist Max Tegmark of the Massachusetts Institute of Technology (MIT)…. “We humans gradually develop ever-more-powerful technology, and the more powerful the tech becomes, the more careful we have to be, so we don’t screw up with it.”

      The study of existential risks is still a tiny field, with at most a few dozen people at three centers. Not everyone is convinced it’s a serious academic discipline. Most civilization-ending scenarios—which include humanmade pathogens, armies of nanobots, or even the idea that our world is a simulation that might be switched off—are wildly unlikely….

      Harvard University psychologist Steven Pinker calls existential risks a “useless category” and warns that “Frankensteinian fantasies” could distract from real, solvable threats such as climate change and nuclear war….

Trust Me, I’m a Scientist

[These excerpts are from an article by Daniel T. Willingham in the Winter 2017-2018 special edition issue of Scientific American titled “The Science Behind the Debates.”]

      Most individuals blame the poor quality of science education in the U.S. If kids got more science in school, the thinking goes, they would learn to appreciate scientific opinion on vaccines, climate, evolution and other policy issues. But this is a misconception. Those who know more science have only a slightly greater propensity to trust scientists. The science behind many policy issues is highly specialized and evaluating it requires deep knowledge—deeper than students are going to get in elementary and high school science classes. A more direct approach would be to educate people about why they are prone to accept inaccurate beliefs in the first place….

      Asking science teachers to impart enough content to understand all the issues may be unrealistic, but they might be able to improve people’s appreciation for the accuracy of scientific knowledge. Through the study of the history of science, students might gain an understanding both of their own motivations for belief and of science as a method of knowledge. If a student understands how a medieval worldview could have made a geocentric theory of the solar system seem correct, it is a short step to seeing similar influences in oneself….

      Science may not be the only way of organizing and understanding our experience, but for accuracy it fares better than religion, politics and art. That’s the lesson.

Lessons Learned in School Reform

[This excerpt is from an article by Frederick M. Hess in the Winter 2017-2018 issue of American Educator.]

      Now, a few reformers will deny that education reform has disappointed. They’ll argue that dozens of new teacher-evaluation systems have delivered, never mind the growing piles of paperwork, dubious scoring systems, or lack of evidence that they’ve led to any changes in how many teachers are deemed effective or in need of improvement. They’ll insist that the conception and rollout of the Common Core State Standards went swimmingly, never mind the politicized mess, half-baked implementation, or fractured testing regime. They’ll tell you it doesn’t matter that the U.S. Department of Education’s School Improvement Grants didn’t move test scores or that Education Next reports that charter schools are less popular today than they’ve been in 15 years.

      …Why have good intentions and energetic efforts so often disappointed? What exactly have we learned from all of this?

      …Our schools and systems were never designed for what we’re asking them to do today—to rigorously educate every child in a diverse nation. Making that possible will indeed require big changes to policies governing staffing, spending, and much else…

      …Schools can turn around—we just don’t have a clue about how to make this happen via policy.

      Policy is a blunt tool, one that works best when simply making people do things is enough. In schooling, it's most likely to work as intended when it comes to straightforward directives—like mandating testing or the length of a school year. Policy tends to stumble when it comes to more complex questions—when how things are done matters more than whether they’re done.

      …Far too often, in fact, policy unfolds like a children’s game of telephone. In Washington, D.C., federal officials have a clear vision of what they think a change in guidance on Title I spending should mean. But when officials in 50 states read that new guidance, they don’t all understand it the same way. Those officials have to explain it to thousands of district Title I coordinators, who then provide direction to school leaders and teachers. By that point, bureaucracy, confusion, and nervous compliance can start to become the law of the land. Now, multiply that a hundredfold for the deluge of state and federal rules that rain down. When all this doesn’t work out as hoped, there’s a tendency for those responsible to insist that the policy is sound and any issues are just “implementation problems….”

      We’ve mucked up the relationship between parents and educators. We’ve lost the confidence to insist that parents have to do their part. Now, it’s important here to remember that the conviction that every child can learn—and that schools should be expected to teach every child—was not always the norm….can testify that it wasn’t unusual to hear educators declare that certain students were unteachable and that it was their parents’ fault.

      Today, that mindset is regarded as unacceptable. Teachers are expected to teach every child. That’s a wonderful thing. I fear, though, that the insistence that parents do their part has been lost along the way. Talk of parental responsibility has come to be seen as little more than a case of blaming the victim. The result is that we just don’t talk very much anymore, at least in public, about whether parents insist that their kids do their homework or respect their teachers. When students are truant, we hesitate to say anything that would imply parents are at fault. When only a handful of parents show up at parent-teacher meetings, reformers are conspicuously mum. If they do take note, it’s usually only to lament that parents are overworked and overburdened.

      Obviously, these are thorny questions. Parents frequently are overburdened. But there’s a necessary balance here, and we’ve managed to tip from one extreme to the other….

      …Talk of parental responsibility is greeted with resistance and even accusations of bias. Yet parents have an outsized impact on their children’s academic future. Children whose parents read to them, talk to them, and teach them self-discipline are more likely to succeed academically.

      The point is decidedly not to scapegoat parents or to judge them….The point is to clarify for parents what they should be doing and help them do those things well. Today, we ask educators to accept responsibility for the success of all their students. Good. How students fare, though, is also a product of whether they do their work and take their studies seriously. Some of that truly is beyond the reach of educators. So, by all means, let's call teachers to account—let’s just be sure to do it for parents, too.

      …Educators are deeply versed in the fabric of schooling and experience the unintended consequences of reforms. This is why it’s easy for them to get so frustrated with self-styled reformers.

      Educators are right to be skeptical. Reformers and practitioners will inevitably see things differently. But what frustrated teachers can miss is that this is OK, even healthy. Educators are looking from the inside out, and reformers from the outside in. In all walks of life, there are doers and there are talkers. Doers are the people who teach students, attend to patients, and fix plumbing. Talkers are free to survey the sweep of what’s being done and explore ways to do it better.

      Ultimately, serious and sustainable school reform needs to be profoundly pro-doer. When talkers wax eloquent about students trapped in dysfunctional systems, they often forget that many teachers feel equally stymied. The bureaucracy that reformers decry can also infuriate and demoralize the teachers who live with it every day. Educators see when policies misfire and where existing practices come up short. Talkers have the time to examine the big picture, learn from lots of locales, and forge relationships with policymakers. Talkers have the distance to raise hard truths that can be tough for educators to address simply because they strike so close to home. But it’s ultimately the doers—the educators—who have to do the work, which means talkers need to pay close attention to what educators have to say. There's a crucial symbiosis here: teachers and talkers need each other….

Moving Beyond the Failure of Test-Based Accountability

[This excerpt is from an article by Daniel Koretz in the Winter 2017-2018 issue of American Educator.]

      Examples abound of how extreme—often simply absurd—this focus on testing has become. In 2012, two California high schools in the Anaheim Union High School District issued ID cards and day planners to students that were color-coded based on the students’ performance on the previous year’s standardized tests: platinum for those who scored at the “advanced” level, gold for those who scored “proficient,” and white for everyone else. Students with premium ID cards were allowed to use a shorter lunch line and received discounts on entry to football games and other school activities.

      Newspapers are replete with reports of students who are so stressed by testing that they become ill during testing or refuse to come to school. In 2013, for example, eight New York school principals jointly sent a letter to parents that included this: “We know that many children cried during or after testing, and others vomited or lost control of their bowels or bladders. Others simply gave up. One teacher recorded that a student kept banging his head on the desk, and wrote ‘This is too hard’ and ‘I can’t do this,’ throughout his test booklet.”

      In many schools it is not just testing itself that stresses students; they are also stressed by the unrelenting focus on scores and on the degree of preparation for the end-of-year accountability tests. Test-based accountability has become an end in itself in American education, unmoored from clear thinking about what should be measured, how it should be measured, or how testing can fit into a rational plan for evaluating and improving our schools.

      The rationale for these policies is deceptively simple. American schools are not performing as well as we would like. They do not fare well in international comparisons, and there are appalling inequities across schools and districts in both opportunities for students and student performance. These problems have been amply documented. The prescription that has been imposed on educators and children in response is seductively simple: measure student performance using standardized tests and use those measurements to create incentives for higher performance. If we reward people for producing what we want, the logic goes, they will produce more of it. Schools will get better, and students will learn more.

      However, this reasoning isn’t just simple, it’s simplistic—and the evidence is overwhelming that this approach has failed.

      Ironically, our heavy-handed use of tests for accountability has also undermined precisely the function that testing is best designed to serve: providing trustworthy information about student achievement. It has led to “score inflation”—that is, increases in scores much higher than the actual improvements in achievement they are supposedly measuring. The result is illusions of progress; student performance appears to be improving far more than it really is. This cheats parents, students, and the public at large, who are being given a steady stream of seriously misleading good news. Perhaps even worse, these bogus score gains are more severe in some schools than in others.…And an increasing amount of evidence suggests that, on average, schools that serve disadvantaged students engage in more test preparation and therefore inflate scores more, creating an illusion that the gap in achievement between disadvantaged and advantaged children is shrinking more than it is. This is another irony, as one of the primary justifications for the current test-based accountability programs has been to improve equity.

      …Many critics of our current system blame standardized tests, but for all the damage that test-based accountability has caused, the problem has not been testing itself but rather the rampant misuses of testing….

      So, I should be more precise: we ought to start with standardized tests if and only if we take steps to dramatically reduce bad test prep and inflated scores.

      What’s the solution? Precisely what the designers of standardized tests have been telling us to do for more than half a century, and what the Finnish, Dutch, and Singaporean systems do routinely: use local measures of student achievement—that is, measures not imposed from afar. These local measures include both the quality of students’ work and their performance on tests designed by educators in their schools, both of which go into the grades that teachers assign. In addition to providing a far more complete view of students’ learning, using these local measures—along with standardized tests when we have good ones—would give teachers more of an incentive to focus on the quality of assignments and schoolwork rather than just preparing students for a single end-of-year test.

      …attributes such as persistence, the ability to work well in groups, and so on. E. F. Lindquist, the same pioneer of achievement testing who warned that tests must be used in conjunction with local measures of learning, also cautioned—more than half a century ago—that skills of this sort that can’t be captured by standardized tests are a critically important goal of education. This may strike some hardheaded advocates of accountability as “soft,” but recent research has begun to confirm the wisdom of Lindquist’s advice: soft skills affect how well students do long term, even after they leave school….

      Finally, targets have to be reasonable: the goals facing educators have to be ones that they can reach by legitimate means. This requires practical targets for both the amount of improvement and the time allowed to accomplish it…

      There is room to argue about how best to determine what is reasonable, but the principle is inescapable. If we demand more than educators can deliver by teaching better, they will have to choose between failing and cutting corners—or worse, simply cheating. This may sound obvious as a general principle, but, in practice, it will be both controversial and difficult to implement. Demanding big and rapid gains makes for good press and often good politics, so persuading policymakers to be realistic won’t always be easy….

      We shouldn’t rely on tests when we don't have appropriate and sufficiently high-quality tests to use. As much as is practical, we need to avoid relying on arbitrary performance standards, and we need to set realistic goals for improvement. We need to use test scores in conjunction with a wide variety of other measures, and we need to balance the incentives to raise scores. We need to take steps to reduce inappropriate test prep.

      We must stop pretending that one test can do everything. It’s now common to claim that a test designed and used for accountability can also provide honest monitoring of progress and good diagnostic information for teachers. The fact that some are making this claim is hardly surprising; accountability testing has already swallowed a great deal of school time, and with our current incentives, few people want a second measure that might distract from the all-important goal of ratcheting up scores on the accountability test. However, it just isn’t so, particularly given the pressures in our system to raise scores….

      Finally, a recommendation for a truly fundamental shift: we should consider turning the current approach on its head and treating scores as the starting point rather than the end of evaluation. I’ve stressed repeatedly that scores alone, whether high or low, aren't enough to tell us why students are performing as they do. Low scores, however, are an indication of likely problems. Rather than treating these low scores as sufficient to label a school a failure, we could use them to target other resources used for evaluation.

      Teachers can't do it all—especially teachers in many low-performing schools. This fact is widely accepted in principle, but it is often ignored in practice. We will need to take this far more seriously than we have if we are to achieve the large gains in student learning and, in particular, the big improvements in equity that reformers have promised us for years.

      The supports we should provide are of three types. The first is better initial training and ongoing support for teachers already in the workplace….

      The second category is in-school supports: supplementary classes, longer school days, smaller classes, and the like. The third is out-of-school supports; one that has received a great deal of attention in recent years is high-quality preschool, which can improve the long-term prospects of disadvantaged kids.

      Why are recommendations for more support controversial? One reason is money. It is vastly cheaper to buy a test, set arbitrary targets, and pretend that the problem is solved. A second is timing….

      And we need to face up to two basic facts about interventions in complex systems such as education: most interventions, even very good ones, will have side effects we don’t want, and none will work exactly as planned. The implications of this are clear. We need to monitor—routinely—the effects of any new interventions, and we need to be prepared to face the music and make mid-course corrections when warranted. We expect this in fields like medicine and auto safety, and we ought to demand it in education as well.

      No matter how large, however, these difficulties don’t provide an excuse to continue on the current path. The strategy of test-based accountability has failed, and tinkering around the edges won’t change that. Everyone with a stake in our educational system—including parents, employers, educators, and most importantly students—deserves better.

The Profession Speaks

[This excerpt is from an article by Brett Gardiner Murphy in the Winter 2017-2018 issue of American Educator.]

      …Almost everything in public education—from evaluating teachers, to choosing which schools needed to be improved or shut down, to deciding whether new charter schools were successful or not—became connected to these exams. By 2016, 42 states relied on test scores in their teacher evaluation systems, but the systems were notoriously flawed. Today, teachers report the lowest morale in decades, fewer college students plan to become educators, and teacher shortages are rampant in nearly every state. Schools with the highest numbers of low-income students and students of color are still served by the most inexperienced teachers with fewer credentials, and they experience the highest rates of teacher turnover.

      …It is confounding how people so removed from schools can create policies that are no more than large-scale experiments, often tried out on our nation’s most vulnerable children.

Are We Headed Toward a Sixth Extinction?

[This excerpt is from an article by Jennifer Chu in the January/February 2018 issue of Technology Review.]

      In the past 540 million years, Earth has endured five mass extinctions that led to the widespread extermination of marine species around the world. Each involved processes that upended the normal cycling of carbon through the atmosphere and oceans, unfolding over thousands to millions of years.

      The question for many scientists is whether the carbon cycle is now experiencing a significant jolt that could tip the planet toward a sixth mass extinction. In the modern era, carbon dioxide emissions have risen steadily since the 19th century, but deciphering whether the recent spike could lead to mass extinction has been challenging. That’s mainly because it’s difficult to relate ancient carbon anomalies, occurring over thousands to millions of years, to today’s disruptions, which have taken place over a little more than a century….

      Taking this reasoning forward in time, Rothman concludes that, given the recent rise in carbon dioxide emissions over a relatively short time scale, the key question is whether a critical amount of carbon will be added to the oceans. That amount, he calculates, is about 310 gigatons, which he estimates is roughly equivalent to the amount of carbon that human activities will have added to the world’s oceans by the year 2100.

      Does this mean mass extinction will soon follow? Rothman says it would take some time—about 10,000 years—for such ecological disasters to play out. However, he says that by 2100 the world may have tipped into “unknown territory.”

True Grit

[This excerpt is from an article by Anne Trafton in the January/February 2018 issue of Technology Review.]

      If at first you don’t succeed, try, try again.

      MIT researchers have found that babies as young as 15 months can learn to follow this advice. In their study, babies who watched an adult struggle at two different tasks before succeeding tried harder at their own difficult task than babies who saw an adult succeed effortlessly.

      In the experiment, the babies first watched an adult perform two tasks: removing a toy frog from a container and removing a key chain from a carabiner. Half saw the adult quickly succeed three times within 30 seconds; the other half saw her struggle for 30 seconds before succeeding.

      The experimenter then showed the baby a musical toy with a nonfunctioning button that looked as if it should turn the toy on. Out of the baby’s sight, the researcher turned the toy on, using a concealed, functional button, to demonstrate that it played music, and then turned it off and gave it to the baby.

      Each baby was allowed two minutes to play with the toy. Those who had seen the experimenter struggle before succeeding pressed the button nearly twice as many times overall as those who saw the adult succeed easily. They also pressed it nearly twice as many times before asking for help or tossing the toy. This suggests that people can learn, from an early age, how to make decisions about allocating effort.

These Are Not Your Father’s GMOs

[These excerpts are from an article by Antonio Regalado in the January/February 2018 issue of Technology Review.]

      To many scientists, the potential of gene editing seems nearly limitless, offering a new way to rapidly create plants that are drought-resistant, immune to disease, or improved in flavor. A supermarket tomato that tastes good? That could happen if scientists restore the flavor-making genes that make heirloom varieties delicious. What about a corn plant with twice as many kernels? If nature allows it, scientists believe, gene editing could let them build it.

      There is another reason gene editing is causing excitement in industry. The U.S. Department of Agriculture has concluded that the new plants are not “regulated articles.” The reason is a legal loophole: its regulations apply only to GMOs constructed using plant pathogens like bacteria, or their DNA. That means Calyxt can commercialize its beans without going through the process of permits, inspections, and safety tests required for other genetically modified crops. It's counting on that to cut at least half the 13 years and $130 million that companies have, on average, invested in order to create a new GMO and get it into farmers’ hands.

      To GMO opponents, the new, unregulated plants are a source of alarm. For years, they have argued that GMOs should be opposed because they might be unsafe. What if they cause allergies or poison butterflies? Now the battle lines are shifting because companies like Calyxt can create plants without DNA from a different species in them. They can argue that gene editing is merely “accelerated breeding technology.”

      To the critics, any attempt to reclassify engineered plants as natural is a dangerous fiction. “If they don't have to go through the regulatory requirements, then it is game on again for genetic modification in agriculture,” says Jim Thomas, head of a nonprofit called the ETC Group that lobbies on environmental issues. “That is the prize. They are constructing a definition of a GMO so that gene editing falls outside it.”

      Already, the effort to persuade governments and food groups is reaching a planetary scale. New Zealand decided that the new plants are GMOs after all, and so did the USDA’s own organic council. The Netherlands and Sweden don’t think they are. China hasn’t said. The European Union still has to make up its mind. Billions in global grain exports could ultimately hang in the balance.

      Opponents say they’re ready to fight for rules, regulations, and labels…

      Some significant obstacles remain. Drug companies working on gene therapy have learned it is easier to design and make DNA strands than to get them inside a person’s cells. That is also true of many plants, where delivery of the gene-editing ingredients is still difficult. Understanding which genes should be edited is yet another roadblock. Scientists know a lot about how oils are synthesized and why fruit turns brown. But the list of valuable plant traits whose genetic causes are both well understood and easy to alter drops off quickly after that….

      What’s missing, then, is enough scrutiny of whether the plants could harm insects, spread their genetic enhancements to wild cousins, or create super-weeds like the ones resistant to Roundup. Companies do typically consult with the U.S. Food and Drug Administration to confirm that their plants are safe to eat. But that process is voluntary….

Americas Peopled in a Single Wave, Ancient Genome Reveals

[These excerpts are from an article by Michael Price in the January 5, 2018, issue of Science.]

      A rare smidgen of ancient DNA has sharpened the picture of one of humanity’s greatest migrations. Some 15,000 to 25,000 years ago, people wandered from Asia to North America across a now-submerged land called Beringia, which once connected Siberia and Alaska. But exactly when these ancient settlers crossed and how many migrations occurred are hotly debated. Now, the oldest full genome to be sequenced from the Americas suggests that some settlers stayed in Beringia while another group headed south and formed the population from which all living Native Americans descend….

      The genome comes from an 11,500-year-old infant found in 2013 at the site of Upward Sun River in central Alaska’s Tanana River Basin, a part of Beringia that's still above sea level. The infant, one of two from the site, belonged to a population that likely numbered in the low thousands, who hunted Beringia’s abundant herds and gathered plants….

      The infant’s group was most closely related to modern Native Americans—but it wasn’t a direct ancestor. Instead, it and modern Native Americans shared common ancestors who must have entered Beringia some 25,000 years ago….Perhaps 21,000 years ago, those ancient settlers branched into at least two groups: one that included the infant and another that gave rise to Native Americans.

      That supports the idea that Asian migrants lingered in Beringia and became genetically isolated—the so-called Beringian standstill model….

      The researchers also found that the ancient Beringian infant is equally related to both the northern and southern genetic subgroups of Native Americans, implying that both descend from a single migration. The team suggests that a group headed south into North America about 20,000 years ago and only afterward split into distinct subpopulations, perhaps between 14,500 and 17,000 years ago, dates that fit with previous studies.

CRISPR 2.0 Is Here, and It’s Way More Precise

[This excerpt is from an article by Emily Mullin in the January/February 2018 issue of Technology Review.]

      The human genome contains six billion DNA letters, or chemical bases known as A, C, G, and T. These letters pair off—A with T, and C with G—to form DNA’s double helix. Base editing, which uses a modified version of CRISPR, is able to change a single one of these letters at a time without making breaks to the DNA’s structure.

      That’s useful because sometimes just one base pair in a long strand of DNA gets swapped, deleted, or inserted—a phenomenon called a point mutation. Point mutations make up 32,000 of the 50,000 changes in the human genome known to be associated with diseases.
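
      The pairing rule and the idea of a point mutation described above can be sketched in a few lines of Python. This is purely illustrative—the function names and the example sequence are my own, not anything from the article:

```python
# Watson-Crick pairing: A pairs with T, and C pairs with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the complementary strand under A-T / C-G pairing."""
    return "".join(PAIR[base] for base in strand)

def point_substitution(strand, position, new_base):
    """Swap the single base at `position` -- one kind of point mutation,
    analogous to what base editing can correct one letter at a time."""
    return strand[:position] + new_base + strand[position + 1:]

original = "GATTACA"
paired = complement(original)                    # "CTAATGT"
mutated = point_substitution(original, 3, "G")   # "GATGACA"
```

      Note that complementing a strand twice returns the original sequence, which is what lets one strand serve as the template for repairing or copying the other.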

The Drifters

[These excerpts are from an article by Linnea Rundgren and Dennis Kunkel in the Winter 2018 issue of Rotunda, published by the American Museum of Natural History.]

      If you’re looking to understand what makes the incredible life in Earth’s vast oceans possible, you have to start small—very small. You have to start with plankton.

      Plankton isn’t a term for animals, nor a genus or family. It's a catch-all for a staggering variety of marine organisms that share one important trait: they’re drifters. In other words, if it lives in the world's oceans and can’t swim against a current, then it's plankton.

      There’s phytoplankton, plant-like organisms that can be found near the ocean’s surface. And then there’s zooplankton, animals that come in a range of sizes from remarkably tiny to easily observed with the naked eye.

      Plankton is the ultimate source of nutrition for the world's oceans—the food source that makes everything else possible. And many species don't just depend on plankton for a meal. They actually start out as plankton themselves.

      Some of the world’s most recognizable fishes and other marine animals begin life as tiny larvae. These larvae spend some time floating passively before either joining the ranks of active swimmers or drifting down to live out life on the seafloor….

      Among the soon-to-be-swimmers: the blue marlin, Makaira nigricans, one of the world’s most iconic game fishes, which can grow to weigh more than 1,000 pounds. Blue marlins start their lives as humble, millimeter-long eggs that, when fertilized, develop into slightly less tiny larvae and spend their early days floating among other zooplankton.

      If they survive long enough—and avoid being eaten—another subset of part-time plankton settle down—way, way down. These benthic species, as they're known, sink out of the water column and stick to the seafloor. Starfishes and sea urchins, for example, get their start as drifting planktonic larvae before moving to a more sedate maturity.

      For some planktonic life forms, though, it’s not just a phase—it’s who they are. Innumerable microscopic species, including bacteria and viruses as well as algae, tiny water fleas, and copepods, will spend their entire existence riding the currents—and feeding the rest of the ocean’s residents….

      Phytoplankton have another crucial role on Earth: they produce about half of the oxygen we breathe and soak up excess carbon dioxide from the atmosphere, transferring it to the deep ocean in a crucial carbon cycle.

      And just because many planktonic species are small, don't think that they are simple. Consider the diatom, represented by tens of thousands of living species. Despite being single-celled, many species of diatoms craft cell walls called frustules. While they’re invisible to the naked eye, these cellular armors are often intricate and beautiful pieces of engineering when viewed through a microscope.

      On the other end of the spectrum, there’s megaplankton: any species that measures over 2 mm. Here you’ll find comb jellies, which use rows of cilia along their bodies to propel themselves through the water, and the Portuguese man o’ war, which uses its venom-loaded tentacles to paralyze and kill prey.

      So the next time you go for a swim at the beach, look a little closer—whether you see them or not, you’re taking a swim with plankton.

What Can Machine Learning Do? Workforce Implications

[These excerpts are from an article by Erik Brynjolfsson and Tom Mitchell in the December 22, 2017, issue of Science.]

      Digital computers have transformed work in almost every sector of the economy over the past several decades. We are now at the beginning of an even larger and more rapid transformation due to recent advances in machine learning (ML), which is capable of accelerating the pace of automation itself. However, although it is clear that ML is a “general purpose technology” like the steam engine and electricity, which spawns a plethora of additional innovations and capabilities, there is no widely shared agreement on the tasks where ML systems excel, and thus little agreement on the specific expected impacts on the workforce and on the economy more broadly…

      Any discussion of what ML can and cannot do, and how this might affect the economy, should first recognize two broad, underlying considerations. We remain very far from artificial general intelligence. Machines cannot do the full range of tasks that humans can do. In addition, although innovations generally have been important for overall improvements in income and living standards, and the first wave of pre-ML information technology (IT) systems in particular has created trillions of dollars of economic value, “The case that technological advances have contributed to wage inequality is strong….”

      As the philosopher Polanyi observed, we know more than we can tell. Recognizing a face, riding a bike, and understanding speech are tasks humans know very well how to do, but our ability to reflect on how we perform them is poor. We cannot codify many tasks easily, or perhaps at all, into a set of formal rules. Thus, prior to ML, Polanyi’s paradox limited the set of tasks that could be automated by programming computers. But today, in many cases, ML algorithms have made it possible to train computer systems to be more accurate and more capable than those that we can manually program.

      Until recently, creating a new computer program involved a labor-intensive process of manual coding. But this expensive process is increasingly being augmented or replaced by a more automated process of running an existing ML algorithm on appropriate training data. The importance of this shift is two-fold. First, in a growing subset of applications, this paradigm can produce more accurate and reliable programs than human programmers (e.g., face recognition and credit card fraud detection). Second, this paradigm can dramatically lower costs for creating and maintaining new software. This lowered cost reduces the barrier to experimenting with and exploring potential computerization of tasks, and encourages development of computer systems that automate many types of routine workflows with little or no human intervention.
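
      The contrast the authors draw—hand-coded rules versus programs fit to training data—can be made concrete with a toy sketch. Instead of a programmer hard-coding a rule like “flag any charge over $500,” a tiny learner derives its decision boundary from labeled examples. The data, threshold scheme, and fraud-detection framing here are invented for illustration, not taken from the article:

```python
def train_threshold(examples):
    """Learn a decision threshold from (amount, is_fraud) training pairs
    by taking the midpoint between the two classes' mean amounts."""
    fraud = [amt for amt, label in examples if label]
    legit = [amt for amt, label in examples if not label]
    return (sum(fraud) / len(fraud) + sum(legit) / len(legit)) / 2

# Hypothetical labeled transactions: (dollar amount, known fraud?)
training_data = [(12.0, False), (40.0, False), (25.0, False),
                 (900.0, True), (1500.0, True), (1100.0, True)]

# The threshold is *learned* from data rather than hand-coded.
threshold = train_threshold(training_data)

def predict(amount):
    """Classify a new transaction using the learned threshold."""
    return amount > threshold
```

      Changing the behavior of this “program” requires only new training data, not new code—which is the cost reduction the passage describes.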

      Such progress in ML has been particularly rapid in the past 6 to 8 years due in large part to the sheer volume of training data available for some tasks, which may be large enough to capture highly valuable and previously unnoticed regularities—perhaps impossibly large for a person to examine or comprehend, yet within the processing ability of ML algorithms. When large enough training data sets are available, ML can sometimes produce computer programs that outperform the best humans at the task (e.g., dermatology diagnosis, the game of Go, detecting potential credit card fraud)….

Trump and Scientists: An Epic Estrangement

[These excerpts are from an article by Jeffrey Mervis in the December 22, 2017, issue of Science.]

      As President Donald Trump nears the end of his first year in office, the relationship between the maverick Republican and the U.S. research community is deeply dysfunctional. It’s a breakdown of epic proportions, with no obvious fix.

      One reason for the estrangement is Trump’s action on science-related issues: He has renounced the 2015 Paris climate accord, rolled back many environmental rules, and called for deep budget cuts at key research agencies. In addition, many scientists are alarmed by research-related appointments he has—and has not—made. At press time, there was still no White House science adviser, and Trump has chosen several people to oversee federal research programs who lack serious scientific credentials.

      These developments have fueled perceptions that the president and his top advisers don’t care about science or value its contributions to improving the nation’s health, prosperity, and security. If true, that would be a marked reversal from the support that science generally has received from generations of policymakers of both parties.

      Combined with the personal antipathy that many scientists feel toward the president, his apparent disregard for science appears to have soured the appetite of many scientific leaders for a role in this administration. An informal Science survey of 66 prominent U.S. scientists found that half would refuse an offer to work for Trump. That’s a surprisingly high percentage, given the historically positive attitude among the community for public service. At the same time, four in five said they would consider an invitation to serve on a high-level scientific advisory panel. (Democrats make up half the respondents; 40% are independents, and 10% are Republicans.)

      Many of the scientists surveyed report being torn between a desire to provide the government with the best possible advice on scientific issues and a concern that their efforts would be for naught….

      Of course, no president calls all the shots in Washington, D.C. Congress has largely dismissed Trump’s 2018 request for significantly smaller budgets at NSF, NIH, and several other science agencies. And that response should be a call to action….

      Although some see increased activism as a key to repairing science’s broken relationship with the White House, researchers are still debating the impact of the unprecedented March for Science this spring. Some say it was an effective way to mobilize public support, whereas others believe it has exacerbated the breach….

Preparing for the Next Pandemic

[These excerpts are from an article by Sharon Guynup in the Scientific American Custom Media entitled Next Frontiers.]

      Recently, the U.S. government and the private sector have focused specifically on expanding the nation's vaccine supply and improving effectiveness. Advances include vaccines targeted for people 65 and older—who are among those at greatest risk—and a ‘quadrivalent’ vaccine that protects against four flu strains. Some researchers are trying to create the holy grail: a universal vaccine. It would offer broad immunity against all influenza infections rather than targeting the constantly changing surface antigens of a virus, as current vaccines do. That appears to be years away….

      Beyond preventing outbreaks in the first place, rapidly detecting them is the next best thing….

      When a threat is detected, it should be met with a robust public health response. If not, a few infections could blossom into an epidemic….

      However, influenza is harder to control than emerging diseases like Ebola since it can be transmitted before a patient even shows symptoms….

Defeating Diseases with Energy

[These excerpts are from an article by Renee Morad in the Scientific American Custom Media entitled Next Frontiers.]

      Scientists often refer to the mitochondria as the energy factories of the cell, and for good reason. These intracellular organelles transform the food we eat and the air we breathe into an electric potential that drives processes like DNA replication or protein building. Individually, the impact of any given mitochondrion is small. The potential energy within a cell is about 0.2 volts. But add all those cells up, and the potential energy within a human body is roughly equivalent to a lightning bolt….

      And yet mitochondria, for all the energy they contribute, have been largely overlooked in medicine…. “Since different organs rely on mitochondrial energy to different extents, partial mitochondrial defects produce organ-specific symptoms.”

      …scientists are now digging deeper into the role mitochondrial disease might play in many of our most pervasive diseases and even into aging itself. The idea is nothing short of a paradigm shift, viewing energy broadly rather than organs specifically. And if Wallace is right, that idea could change millions of lives for the better.

      In the past, scientists have frequently looked to nuclear DNA for answers about disease and aging, but Wallace believes mitochondrial DNA could be the missing link that can steer them toward new understandings about disease, and potentially, new therapies.

      Most DNA, called nuclear DNA, is located in the nucleus of our cells, and contains two copies of each gene, of which there are more than 20,000. For each gene, one copy is inherited from the mother, and another is inherited from the father. But mitochondrial DNA, inherited exclusively from the mother, is much smaller and is located outside the nucleus, inside the mitochondrion. The mitochondria turn food’s hydrogen and carbon and inhaled oxygen into carbon dioxide (CO2), water and adenosine triphosphate, or ATP, which stores energy.

      …Recent studies suggest that mitochondrial DNA variations could lead to everything from autism, cancer and inflammation to neurodegenerative diseases….

      Mitochondrial DNA also plays a role in aging. “Over time, we accumulate more and more mitochondrial-DNA mutations that slowly erode our energetic capacity,” Wallace says. “As people get older, this is why they often complain that they don’t have the energy that they used to.”

      …For large, energy-intensive organs—the brain uses 20% of the body’s energy, for example—any change in energy output can have a profound effect on function….

What Do Men Have to Do with Women’s Reproductive Rights?

[This excerpt is from an article by Gary Barker in the December 2017 issue of Population Connection.]

      On his third day in office, President Trump signed the new and worse Global Gag Rule, a restriction on international organizations that receive U.S. global health assistance that blocks them from using their own, non-U.S. funds to provide or refer women to abortion services. And lest we forget: He signed that presidential memorandum with seven men and zero women standing behind him.

      The disturbing image of a group of men literally blocking women’s access to abortion conveys the narrative of centuries of men controlling women’s bodies and lives. So, to the question, “What do men have to do with women’s reproductive rights?” the obvious answer in these political times seems to be: Stay out. It might be that we want men to have little or nothing to do with women’s sexual and reproductive health and rights.

      But would women be better off? Excluding all men from discussions around sexual and reproductive rights is a disservice to women. It keeps the burden for contraception on women. It halts efforts that encourage men to support the reproductive choices of their female partners, and perpetuates a culture in which no man is perceived to be, or engaged to be, an ally in ensuring reproductive rights of all people.

      Clearly, men matter in this discussion. There is the obvious point that, in the context of heterosexual relationships, men are half of the human reproductive process. However, they represent only about one-quarter of total contraceptive use, including withdrawal, vasectomy, and male condoms. That proportion has remained virtually unchanged since the 1980s, despite the fact that vasectomy is cheaper and safer than female sterilization. And, while condoms may not be the long-term contraceptive solution for many couples, they have the added protection of STI and HIV prevention.

      There are other male contraceptive methods in various stages of development. The most recent trial of a male hormonal contraceptive method was halted in 2016 due to negative side effects. Some women’s health advocates pointed out that the decision represented a double standard, given that trials for women’s hormonal contraceptives have continued despite multiple side effects experienced by women.

      Here’s the other reason we need men on board: Millions of women report not using contraceptives because of their husbands….

Five Ways the Trump Administration Is Attacking Our Environment

[These excerpts are from an article by Benjamin Schreiber in the Fall 2017 issue of the Friends of the Earth Newsmagazine.]

      In the year since the election of Donald Trump to the Presidency of the United States, the list of his attacks on the environment has grown longer by the day.

      From gutting protections for our water and air, to handing key oversight positions to people in the polluting oil and gas industry, and backing the U.S. out of the Paris Climate Agreement, the Trump administration seems bent on destroying every bit of progress we've made over the last eight years.

      …Trump has vowed to slash the EPA budget by 30%, and he's appointed Scott Pruitt to undermine the agency from within.

      Within days of taking office, Pruitt ordered the removal of the EPA's climate change website—one of the world's top resources for climate change information. He then forced out almost everyone on the Board of Scientific Counselors, and now he is banning any scientist receiving EPA funding from serving on it, while allowing fossil fuel industry representatives to replace them. Later, reports revealed that Pruitt has been wasting taxpayer dollars on private flights to meet with his buddies in the oil industry.

      It gets worse. Pruitt recently announced that the EPA is withdrawing from the Clean Power Plan. This move would give fossil fuel companies a green light to pour even more pollution into our air and could dramatically slow our transition toward clean, renewable energy.

      The fossil fuel industry isn't the only one benefitting from Pruitt's presence at the EPA. Shortly after being sworn in, Pruitt met privately with Andrew Liveris, CEO of Dow Chemical, manufacturer of the bee-toxic insecticide chlorpyrifos. The Obama administration had recommended revoking chlorpyrifos’ use in agriculture, but days after his meeting with Liveris, Pruitt announced that his EPA would reverse the decision….

      And in the House, Republicans have introduced a bill to allow drilling in Grand Teton, the Everglades, and more than 40 other national parks. Another bill would gut the Antiquities Act, which gives presidents the power to stop Big Polluters from destroying our public lands.

      The administration isn’t playing coy about its designs for our public lands, either. In a speech to the National Petroleum Council, Secretary of the Interior Ryan Zinke pledged to completely restructure his agency to make it easier to help his Big Oil cronies. “It’s going to be huge,” he told the room full of industry executives.

      The Trump administration’s vision for our environment is clear. They want to give polluters free rein over our national parks and monuments and our other precious public lands and waters.

      …The Arctic Refuge is the largest national wildlife refuge in the United States, with nearly 20 million acres of protected public lands that are home to rare wildlife like polar bears, porcupine caribou and muskox. It’s one of our last truly wild places in the U.S., and the ecosystem is fragile.

      But that’s not stopping the Trump administration from pushing to open the area to oil and gas drilling. And Congress is perfectly willing to help them, passing a budget that paves the way to opening the Arctic Refuge in Alaska to extractive industries.

Top 10 Reasons Students Plagiarize & What Teachers Can Do about It

[This list of items is extracted from an article by Michelle Navarre Cleary in the December 2017 issue of Phi Delta Kappan.]

      #10. They are lazy.

      #9. They panic.

      #8. They lack confidence.

      #7. They think they’re supposed to reproduce what the experts have said.

      #6. They have difficulty integrating source material into their own exposition or argument.

      #5. They do not understand why people make such a fuss about sources.

      #4. They are sloppy.

      #3. They do not understand that they need to cite facts, figures, and ideas, not just quotations.

      #2. They are learning.

      #1. They are used to a collaborative model of knowledge production.

Teacher Professionalism from the Superintendent’s Perspective

[This list of items is extracted from an article by Joshua P. Starr in the December 2017 issue of Phi Delta Kappan.]

      #1. Increase teacher pay.

      #2. Support professional learning communities.

      #3. Find ways for teachers to keep learning and growing.

      #4. Make teacher preparation programs tougher and more prestigious.

      #5. Don't forget the principal.

      #6. Use tests as diagnostic tools.

      #7. Encourage teachers to observe each other.

      #8. Bring more men and teachers of color into the profession.

      #9. Renegotiate seniority rules for teachers.

      #10. Encourage experimentation.

      #11. Be realistic.

School Districts Control Teachers’ Classroom Speech

[These excerpts are from an article by Julie Underwood in the December 2017 issue of Phi Delta Kappan.]

      Teachers face particular challenges when they are teaching political or controversial topics in classrooms. They must navigate a narrow passage between delivering the curriculum as required by their local board of education and sharing their own personal views and other information, while also abiding by board regulations regarding content and delivery. In addition, they must deliver the curriculum without attempting to indoctrinate students with their own personal beliefs, particularly on religious, political, and controversial topics.

      In K-12 public schools, the local school board has the authority to set the curriculum, and teachers must adhere to it, as well as following all state and school board regulations. Simply put, K-12 teachers do not have the broad academic freedom that is usually afforded to their counterparts in higher education. Courts have made a distinction between university faculty and K-12 teachers in the area of free speech….

      …There the court held that when public employees make statements pursuant to their official duties, the employees are not speaking as citizens for First Amendment purposes; as such, the Constitution does not protect them from employer discipline. For example, when talking to students during classroom instruction, teachers cannot assume their speech is protected….

      Teachers cannot let their personal beliefs interfere with their obligation to deliver the school’s curriculum, and they may not hijack the curriculum or use their position as teacher as an opportunity to inculcate students to their personal beliefs….

      Simply put, the court held that the “First Amendment does not entitle primary and secondary teachers . . . to cover topics or advocate viewpoints that depart from the curriculum adopted by the school system.”

      …The takeaway message here is that as much as teachers should be respected for their expertise and experience in providing curriculum in the classroom, the school district has the legal right to delineate and limit that curriculum, and as employees, teachers must adhere to those policy decisions. As stated by the Sixth Circuit Court of Appeals in a decision upholding a teacher's dismissal for not doing so, “Only the school board has ultimate responsibility for what goes on in the classroom, legitimately giving it a say over what teachers may (or may not) teach in the classroom.”

Stand Up for Good Research

[These excerpts are from an article by Maria Ferguson in the December 2017 issue of Phi Delta Kappan.]

      …To be filed under the category of “what a difference a year can make,” consider President Barack Obama’s efforts to ensure that federal policy decisions would be well-informed by research. In 2016, Obama joined with Congress to pass a law that created the Evidence-Based Policymaking Commission, which was given about a year to come up with a plan for improving the government’s capacity in this area. It was no easy task. Multiple federal agencies collect all kinds of data, much of which is highly departmentalized and subject to intense privacy concerns. Still, despite these and other challenges, the commission succeeded in developing a comprehensive plan that focuses on improving capacity, promoting transparency, and protecting privacy.

      The commission's work just ended in September, so it is too soon to assess its effect. However, while there is broad bipartisan support in Congress for the commission’s recommendations for improving the quality and use of evidence in policy making, it is hard to imagine the current administration leading any substantive and nonpartisan effort to do so. Other than selectively citing studies that support President Donald Trump’s approach to school choice, the new administration has been entirely silent on the importance of grounding federal decisions in empirical research.

      For those of us who care deeply about research and the appropriate use of data, this is alarming. Although many organizations support the development and sharing of education research evidence, the federal government’s role remains critical, both financially and symbolically….

      Looking ahead, the path is not so clear….But for now, let’s agree on one thing at least: Without evidence, policy making is a time-consuming, resource-sucking black hole. Policy decisions based solely on a desire to tap into the next new thing are neither new nor innovative, and we have seen far too much of that in education already.

A Controversy over Controversial Issues

[These excerpts are from an article by Jonathan Zimmerman and Emily Robertson in the December 2017 issue of Phi Delta Kappan.]

      …the most significant restriction on public school teachers has come from the public itself. At the simplest level, most citizens have neither wanted nor trusted teachers to handle controversial questions. A survey of Californians in the late 1930s found that one-third approved teaching such questions at the junior high school level and two-thirds at the secondary level. But more than half said they would exclude lessons that “might cause pupils to doubt the justice of our social order and government;” two-thirds said teachers should be fired for “giving arguments in favor of Communism” even if the teacher only offered them “for the sake of argument.” Others condemned schools for contradicting or challenging their own points of view. “The basic question is whether educators are to be our servants or our masters,” one respondent explained. “I am not at all ready to turn over to the educators the training of my children along political, religious, and social lines . . . It is rather distasteful to find the school working at cross purposes with the parent.”

      Today, our society — and our schools — would appear much more open to debate about controversial questions. Cable-news channels and internet chat rooms blare with discussions of every conceivable public issue, from same-sex marriage and human-made climate change to gun control and police brutality. Meanwhile, many school districts and state education agencies have official policies that seek to promote — not to prevent — classroom instruction about controversial issues. Indeed, controversy has become a central hallmark of modern America. We live in a roiling, rough-and-tumble political culture marked by endless debate and discussion. And we ostensibly prepare future citizens for that dialogue in our schools where there is a strong consensus in support of teaching about the questions that divide us.

      But a closer look clouds this sunny picture. Too many of the “debates” on our airwaves devolve into screaming matches in which combatants exchange insults rather than ideas. In our school classrooms, meanwhile, controversial issues arise far less frequently than our official policies and prescriptions would suggest. Part of the problem lies in the lowly status of American teachers, who often lack the professional training — and, in some cases, the legal protection — to engage in discussions of hotly contested public questions. Thanks to poor preparation, some teachers have not acquired the background knowledge or the pedagogical skills, or both, to lead in-depth discussions of hot-button political questions.

      Nor do they have much time for these discussions in their daily routines, which are increasingly dominated by test preparation and the other demands of federal and state accountability laws. Despite our overall consensus on teaching controversial issues, moreover, we have little agreement on which issues are legitimate topics for school classrooms. Should we debate recent “religious freedom” initiatives that would give citizens the right to discriminate against gay couples — even though some students might have gay parents or might be gay themselves? Should we ask whether human activity alters the earth’s climate when nearly every known expert on the subject confirms that it does?

      …But we strongly reject the idea that schools should ask whether human beings have changed the earth’s climate, which is simply not subject to reasonable debate…Minnesota Sen. Hubert Humphrey — a former teacher as well as a future vice president — insisted that schools should address public issues to prepare young people for “mature and intelligent citizenship.” But he also cautioned that schools should limit themselves to “arguable” questions about which reasonable and knowledgeable people disagreed. “I know from my own teaching experience how much heat is expended in classrooms when the debate rages over a fact as if its existence were a matter of opinion….”

      …As New York Sen. Daniel Patrick Moynihan famously quipped several decades ago, each of us is entitled to our own opinions — but not to our own facts. That is especially true in our so-called information age, when disinformation can gain millions of adherents from a few strategic clicks of a mouse. Agreement on a set of verified facts is actually the sine qua non of democracy, providing the shared assumptions for reasoned discussion. So teachers have a duty to share these facts with students instead of pretending that the facts themselves are subject to debate….

A Lesson in Civility

[These excerpts are from an article by Marcelo Suarez-Orozco, Carola Suarez-Orozco and Adam Strom in the December 2017 issue of Phi Delta Kappan.]

      American classrooms today reflect extraordinary diversity. Children originating in every country and every continent on earth are learning to become American. Today, a quarter of our students come from immigrant families. They are our littlest and newest Americans. Yet, though they pledge allegiance to the American flag, these millions of children find their place in our country challenged.

      White supremacists are marching at universities, politicians are spewing anti-immigrant rhetoric, social media are amplifying divisive messages, and hate crimes against minorities are up. But to what extent are children from immigrant families aware of what's going on? Are our schools immune from the hatred we see in the public square? How should they respond?

      …in today’s climate, too many immigrant children are being made to feel invisible and, in effect, have been silenced. Of course, they have much to say, and we all have a lot to learn from them — if we care to listen. Immigrant stories are narratives of resilience, grit, and optimism. They are quintessentially American stories that invite classroom dialogue about themes that can be found throughout our nation's history and literature. Indeed, the story of migration is the story of our shared experience of humanity.

      As politicians and social media embody an ethos of discord, divisiveness, and even hatred, classroom teachers must model an ethic of civility. Simply put, bullying and intolerance are anathema to the give-and-take required for students to flourish. Young people can thrive only in classrooms where the basic ground rules include empathy, respect, and a willingness to listen to one another.

      In the American tradition, the obligation of every school is to foster a democratic ethos where immigrant children come to feel they are full members in the community. As John Dewey once argued, schools are places where democratic ideals come to life. In the broadest sense, citizenship is about our responsibilities to each other, the rights and rules of engagement, and the public good. Research has shown that immigrant youth themselves describe citizenship as a shared obligation to society, a responsibility to give back, and above all to be kind. These are lofty but essential goals, and they are more necessary than ever in our uncivil times. At a time when it seems everyone is screaming, the classroom must be a place for listening.

When the President is a Liar

[These excerpts are from an editorial by Joan Richardson in the December 2017 issue of Phi Delta Kappan.]

      …day in and day out, I am confronted with a president who routinely and deliberately lies and doubles down on a lie when challenged about it. He runs an administration that regularly engages in fabricated “truths.” Remember that it was White House counselor Kellyanne Conway who blithely introduced “alternative facts” into our lexicon after commentators challenged the president’s assertions about the size of the crowd at his inauguration.

      But the lie that really pushed me over the edge to write this column was not one of Trump’s vanity lies. Instead, it was the mid-October report that the White House was circulating so-called fact sheets tying free trade to increases in infertility, abortions, single-parent households, spousal abuse, opioid addiction, and more. Here was an effort to assemble spurious “facts” to buttress an argument about a policy of significant importance to this nation. What rational leader would engage in such behavior?

      Of course, that pack of lies was quickly pushed to the side when the president lied about what he said to the widow of a fallen soldier. Then he claimed that no prior president had called grieving Gold Star families. Another blatant lie to soothe the vain beast.

      Lie upon lie upon lie.

      I don’t see an end to this. As someone who prefers life in a fact-based universe, I am cringing at the horror of surviving 1,460 lie-filled days before the Trump term ends. I know I am not alone in imagining the damage that it will do to this nation. Like many of you, I am also wondering how you raise and educate children in an environment in which the leader of your country has literally no regard for the truth. (I won’t even go into my concerns about having a leader who belittles opponents with reckless and childish taunts and the message such behavior telegraphs to all observers.)

      …Teaching students how to identify trustworthy sources of information and how to marshal a series of facts into a coherent argument — all of that is part of preparing the critical thinkers of tomorrow. But how do teachers convince students that facts matter when social media routinely exposes them to lies from their president?

      When information moves through the ether in a nanosecond, students must learn how to fact-check in real time so they can verify, analyze, and respond to every morsel of information coming at them, whether through social media or the official messages of leaders. This is no longer an Encyclopedia Britannica world in which someone can thumb through a reference book to check the veracity of someone’s assertion. Everyone must be equipped with the skills of an analyst.

      Most of us want to make decisions based on facts, especially when those decisions involve life, money, or property. I want to know that money managers are plugging facts into their analysis of my retirement accounts. Would any employer hire someone who can’t separate fact from fiction? Imagine the reactions of corporate leaders who learn that employees have shared a set of “alternative facts” when seeking approval for a new project or later reporting the results of that project.

      As we’re working to help students become critical thinkers, let's also prepare them to stand up with backbone to defend their assertions. Let's teach them how to identify their own values and use those to guide their decisions. Then, let's arm them with strategies for deflecting onerous assaults on their integrity.

      To prepare students for citizenship has never been more important — and probably never more difficult. It's ironic that teachers — who have been maligned for decades by folks who also had a loose hold on the truth — have become the last best hope for setting the course right again. The future is literally in your hands.

Two Schools of Thought

[These excerpts are from an article by Rebecca Mead in the December 8, 2017, issue of The New Yorker.]

      As a charter school, Success Academy is required to admit children by lottery. But prominent critics, such as Diane Ravitch, the historian and public-education advocate, have alleged that Success Academy essentially weeds out students, by maintaining unreasonably high expectations of behavior and academic achievement. Similarly, critics claim that the program reduces class size by not accepting new students beyond fourth grade, whereas zoned public schools must accept all comers. To Moskowitz’s detractors, Success’s celebration of standardized test-taking—students attend “Slam the Exam” rallies—is a cynical capitulation to a bureaucratic mode of learning. Success Academy has attracted large donations—in the past two years, the hedge-fund manager Julian Robertson has given forty-five million dollars to the group—and Moskowitz’s opponents say that such gifts erode the principle that a quality education should be provided by the government….

      …Success students can expect to be called to answer a teacher's question at any moment, not just when they raise their hand, and must keep their eyes trained on the speaker at all times, a practice known as “tracking.” Staring off into space, or avoiding eye contact, is not acceptable. “Sometimes when kids look like they're daydreaming, it’s because they are, and we can't allow that possibility,” Moskowitz wrote a few years ago, in an editorial for the Wall Street Journal. Students who stop tracking are prodded both by their teachers and by their peers, who are expected to point out classmates who aren’t looking at them when they are speaking.

      …Some teachers use kitchen timers with beeping alarms that notify students when the ten seconds allotted for finding a space on the rug, or retrieving a book from a backpack, are up….

      For nearly a century, public education in America has been influenced by two opposing pedagogical approaches: traditionalism and progressivism. Broadly speaking, in the traditional approach to education a teacher imparts knowledge to students through direct instruction, and embodies a disciplinary culture in which obedience is both prized and rewarded. The purpose of the classroom is to equip all students to meet measurable academic standards. At a progressive institution, a teacher develops a curriculum but urges students to treat it as a staging ground for their own intellectual discoveries, often through hands-on activities and group work. Allowances are made for differences in the way individual students learn. Progressivism was inspired, in large part, by the work of John Dewey, the American philosopher and educational theorist, who died in 1952. For Dewey, the classroom was not simply a place for acquiring academic credentials; it was also a venue in which students learned crucial values about being citizens in a democracy. Traditionalism is easily caricatured as rote learning—or, in the contemporary classroom, as endless test prep. Progressivism, in its most exaggerated form, can look like an absence of standards and discipline, and an unhelpful abdication of authority on the part of the teacher.

      Many effective contemporary public-school classrooms exist somewhere between these extremes….

      A Success Academy classroom is a highly controlled, even repressive, place. In some classrooms that I observed, there were even expectations for how pencils should be laid down when not in use….The atmosphere can be tense, and sometimes tips over into abuse, as was documented by the Times last year. The newspaper obtained a video that had been recorded secretly by an assistant teacher. It showed a teacher berating a first-grade girl who had made an error on her math worksheet, ripping up the sheet, and sending the child to sit in a “Calm Down” chair….

      …complaining about an excessively punitive atmosphere. Children, they claimed, were being given detention for not clasping their hands when seated, or for burping accidentally….

      …According to data from the New York State Education Department, three years ago, when Success Academy Springfield Gardens was starting up and had only kindergartners and first graders, eighteen per cent of the students were suspended at least once. It’s entirely believable that lots of children between the ages of four and seven found it impossible to meet the school’s stringent behavioral expectations. But it’s also fair to wonder whether, if one out of five young children cannot comply with the rules, there might not be something wrong with the rules.

      …teachers typically stay with Success for just three years. This may be consistent with the job-hopping habits of millennials, but according to veteran educators it generally takes at least three years to become a decent teacher….The system compensates for the inexperience of many of its teachers by having a highly centralized organization. Teachers do not develop their own lesson plans; rather, they teach precisely what the network demands. Like the students in their classrooms, Success’s teachers operate within tightly defined boundaries, with high expectations and frequent assessment.

      …Progressive educators hold that, in early childhood, play is not a distraction from learning but the very means of learning itself.

      But in recent years kindergarten teachers have become increasingly focussed on imparting academic skills—largely in response to pressure to achieve measurable, testable results….

      In college, of course, students have to flourish without constant supervision. Although charter students are admitted to college at higher rates than students from comparable public schools, their graduation rates are dispiritingly low. Seventy per cent of charter-school students who enroll in college fail to complete their degrees within six years. While there are many reasons for this problem—most notably, insufficient money for food and housing—charter-school leaders, including those at Success, are also considering the impact of their own teaching precepts….getting students to succeed at standardized tests isn’t enough; they must prepare students for a future in which their professors—and employers—won't be providing their parents with weekly updates….How can a highly supervised child be transformed into an independent learner? Do you allow students the freedom to fail, or do you continue to provide constant hand-holding?...

      One of the core tenets of John Dewey’s educational philosophy was the belief that, in school, children learn not only the explicit content of lessons but also an implicit message about the ideal organization of society. A school, he argued, was a civilization in microcosm. “I believe that the school must represent present life—life as real and vital to the child as that which he carries on in the home, or the neighborhood, or on the playground,” Dewey wrote….

      “A school should be a model of what democratic adult culture is about,” Deborah Meier, a veteran progressive educator, and a theorist in the tradition of Dewey, told me. “Most of what we learn in life we learn from the company we keep. What is taught didactically is often forgotten.” A corollary of Dewey’s belief is that, if children are exposed in school to an authoritarian model of society, that is the kind of society in which they may prefer to live….

Ice on the Run

[These excerpts are from an article by Jane Qiu in the December 1, 2017, issue of Science.]

      Thousands of glaciers perch near human settlements, and in recent decades, dozens of surges have claimed lives. One of the worst calamities occurred in 2002, in the Caucasus Mountains of southern Russia, when Kolka Glacier rumbled into a valley, killing 140 people….

      Most surges, broadly defined as a flow at least 10 and often hundreds of times faster than a glacier's usual pace of advance, are quieter affairs. Many are imperceptibly slow….Besides overwhelming settlements, glacier surges can threaten distant communities. They can block rivers, creating lakes that can later unleash floods, and by depleting glacier mass, they can threaten the flow of meltwater that downstream towns and farms may depend on….

      Studying surging glaciers could also offer insights into grander-scale ice flows with global consequences: the movements of the ice sheets in Antarctica and Greenland, which can change abruptly, altering the ice discharges that affect sea level….

      Just over 1% of our planet’s glaciers—some 2300 in all—are known to undergo these precipitous movements, though the number is likely to rise as glaciers come under closer surveillance by remote sensing….

Outlawing War

[This excerpt is from an article by Michael Shermer in the December 2017 issue of Scientific American.]

      In 1917, with the carnage of the First World War evident to all, a Chicago corporate lawyer named Salmon Levinson reasoned, “We should have, not as now, laws of war, but laws against war; just as there are no laws of murder or of poisoning, but laws against them.” With the championing of philosopher John Dewey and support of Foreign Minister Aristide Briand of France, Foreign Minister Gustav Stresemann of Germany and U.S. Secretary of State Frank B. Kellogg, Levinson’s dream of war outlawry came to fruition with the General Pact for the Renunciation of War (otherwise known as the Peace Pact or the Kellogg-Briand Pact), signed in Paris in 1928. War was outlawed.

      Given the number of wars since, what happened? The moralization bias was dialed up to 11, of course, but there was also a lack of enforcement. That began to change after the ruinous Second World War, when the concept of “outcasting” took hold, the most common example being economic sanctions. “Instead of doing something to the rule breakers,” Hathaway and Shapiro explain, “outcasters refuse to do something with the rule breakers.” This principle of exclusion doesn’t always work (Cuba, Russia), but sometimes it does (Turkey, Iran), and it is almost always better than war. The result, the researchers show, is that “interstate war has declined precipitously, and conquests have almost completely disappeared.”

Hydrogen Cars for the Masses

[These excerpts are from an article by Donna J. Nelson in the December 2017 issue of Scientific American.]

      Battery-powered electric vehicles that give off no carbon dioxide are about to become mainstream. Today they constitute less than 1 percent of all rolling stock on the road globally, but multiple innovations in features such as the battery's cost and lifetime have made prices so competitive that Tesla has more than 400,000 advance orders for its $35,000 Model 3, which is slated to hit the road in the middle of 2018.

      Unfortunately, the other great hope for vehicles that exhaust no carbon—those powered by hydrogen-fed fuel cells—remains too pricey for broad sales….Many commercial versions contain the precious metal platinum, which aside from being pricey, is too rare to support ubiquitous use in vehicles.

      Investigators are pursuing several lines of attack to shrink the platinum content: using it more efficiently, replacing some or all of it with palladium (which performs similarly and is somewhat less expensive), replacing either of those precious metals with inexpensive metals, such as nickel or copper, and forgoing metals altogether. Commercial catalysts tend to consist of thin layers of platinum nanoparticles deposited on a carbon film; researchers are also testing alternative substrates.

Fuel from an Artificial Leaf

[These excerpts are from an article by Javier Garcia Martinez in the December 2017 issue of Scientific American.]

      The notion of an artificial leaf makes so much sense. Leaves, of course, harness energy from the sun to turn carbon dioxide into the carbohydrates that power a plant’s cellular activities. For decades scientists have been working to devise a process similar to photosynthesis to generate a fuel that could be stored for later use. This could solve a major challenge of solar and wind power—providing a way to stow the energy when the sun is not shining and the air is still.

      Many, many investigators have contributed over the years to the development of a form of artificial photosynthesis in which sunlight-activated catalysts split water molecules to yield oxygen and hydrogen—the latter being a valuable chemical for a wide range of sustainable technologies. A step closer to actual photosynthesis would be to employ this hydrogen in a reduction reaction that converts CO2 into hydrocarbons. Like a real leaf, this system would use only CO2, water and sunlight to produce fuels. The achievement could be revolutionary, enabling creation of a closed system in which carbon dioxide emitted by combustion was transformed back into fuel instead of adding to the greenhouse gases in the atmosphere.

      Several researchers are pursuing this goal. Recently one group has demonstrated that it is possible to combine water splitting and CO2 conversion into fuels in one system with high efficiency….A plant uses just 1 percent of the energy it receives from the sun to make glucose, whereas the artificial system achieved roughly 10 percent efficiency in converting carbon dioxide to fuel, the equivalent of pulling 180 grams of carbon dioxide from the air per kilowatt-hour of electricity generated.

      The investigators paired inorganic, solar water-splitting technology (designed to use only biocompatible materials and to avoid creating toxic compounds) with microbes specially engineered to produce fuel, all in a single container. Remarkably, these metabolically engineered bacteria generated a wide variety of fuels and other chemical products even at low CO2 concentrations. The approach is ready for scaling up to the extent that the catalysts already contain cheap, readily obtainable metals. But investigators still need to greatly increase fuel production….

      …he initially ran the fertilizer test just to see if the idea would work. He envisions a time, however, when bacteria will “breathe in hydrogen” produced by water splitting and ultimately use the hydrogen to produce products ranging from fuels to fertilizers, plastics and drugs, depending on the specific metabolic alterations designed for the bugs.

New Glue Seals Wounds in 60 Seconds

[These excerpts are from a current news entry in the December 2017 issue of The Science Teacher.]

      A highly elastic and adhesive surgical glue that quickly seals wounds without the need for common staples or sutures could transform how surgeries are performed.

      The glue is called MeTro.

      MeTro’s high elasticity makes it ideal for sealing wounds in body tissues that continually expand and relax—such as lungs, hearts and arteries—that are otherwise at risk of reopening.

      The material also works on internal wounds that are often in hard-to-reach areas and have typically required staples or sutures due to surrounding body fluid hampering the effectiveness of other sealants.

      MeTro sets in just 60 seconds once treated with UV light, and the technology has a built-in degrading enzyme that can be modified to determine how long the sealant lasts—from hours to months, to allow adequate time for the wound to heal.

      The liquid or gel-like material has quickly and successfully sealed incisions in the arteries and lungs of rodents and the lungs of pigs without the need for sutures and staples. …

      The next stage for the technology is clinical testing….

Marijuana and the Teen Brain

[These excerpts are from an article by Claudia Wallis in the December 2017 issue of Scientific American.]

      American parents have been warning teenagers about the dangers of marijuana for about 100 years. Teenagers have been ignoring them for just as long….

      Exaggerating the perils of cannabis—the risks of brain damage, addiction, psychosis—has not helped. Any whiff of Reefer Madness hyperbole is perfectly calibrated to trigger an adolescent’s instinctive skepticism for whatever an adult suggests. And the unvarnished facts are scary enough.

      We know that being high impairs attention, memory and learning. Some of today's stronger varieties can make you physically ill and delusional. But whether marijuana can cause lasting damage to the brain is less clear.

      …During adolescence the brain matures in several ways believed to make it more efficient and to strengthen executive functions such as emotional self-control. Various lines of research suggest that cannabis use could disrupt such processes.

      For one thing, recent studies show that cannabinoids manufactured by our own nerve cells play a crucial role in wiring the brain, both prenatally and during adolescence. Throughout life they regulate appetite, sleep, emotion, memory and movement—which makes sense when you consider the effects of marijuana. There are “huge changes” in the concentration of these endocannabinoids during the teenage years….

      Brain-imaging studies reinforce this concern. A number of smallish studies have seen differences in the brains of habitual weed smokers, including altered connectivity between the hemispheres, inefficient cognitive processing in adolescent users, and a smaller amygdala and hippocampus—structures involved in emotional regulation and memory, respectively.

      More evidence comes from research in animals. Rats given THC, the chemical that puts the high in marijuana, show persistent cognitive difficulties if exposed around the time of puberty—but not if they are exposed as adults….

      But even if it turns out that weed does not pose a direct danger for most teens, it's hardly benign. If, like those kids outside my window, you frequently show up high in class, you will likely miss the intellectual and social stimulation to which the adolescent brain is perfectly tuned….On average, adolescents who partake heavily wind up achieving less in life and are unhappier. And those are things a teenager might care about.

Livestock Drove Ancient Old World Inequality

[These excerpts are from an article by Lizzie Wade in the November 17, 2017, issue of Science.]

      Today, 2% of the world’s people own more than half its wealth. This rise of the superrich has economists, politicians, and citizens alike wondering how much inequality societies can—or should—accept. But economic inequality has deep roots….its ancient hotbed was the Old World: Societies there tended to be less equal than those in the New World, likely because of the use of draft animals.

      …Just as striking: Every ancient society studied was much more equal than the United States is today.

      …The authors propose that domestic animals may explain the difference between the New World and the Old World: Whereas North American and Mesoamerican societies depended on human labor, Old World societies had oxen and cattle to plow fields and horses to carry goods and people. Livestock were an investment in future enterprises, allowing people to cultivate more land and stockpile food surpluses, as well as build trade caravans and armies to control huge territories….

Beyond Plastic Waste

[These excerpts are from an editorial by Dame Ellen MacArthur in the November 17, 2017, issue of Science.]

      With more than 8 million tons of plastic entering the ocean each year, humanity must urgently rethink the way we make and use plastics, so that they do not become waste in the first place.

      Cheap, light, and versatile, plastics are the dominant materials of our modern economy. Their production is expected to double over the next two decades. Yet, only 14% of all plastic packaging is collected for recycling after use, and vast quantities escape into the environment. This not only results in a loss of $80 billion to $120 billion per year to the global economy, but if the current trend continues, there could be more plastic than fish by weight in the oceans by 2050.

      Some companies have started changing their habits. Unilever, for example, has promised that by 2025, all its plastic packaging will be fully reusable, recyclable, or compostable in a commercially viable manner. Given that up to a third of all plastic packaging items are too small (such as straws and sachets) or too complex (such as multimaterial films and take-away coffee cups) to be economically recycled, achieving these commitments will require a great degree of redesign and innovation….

      Policy-makers may, for example, regulate the use of certain polymers, other chemicals, or particular applications of plastic. Such action can be effective, cost little, and garner public support. Bans on or charges for single-use shopping bags have, for example, led to rapid reductions in their use in France, Rwanda, and the United Kingdom. A few uncommon types of plastic used in packaging are too expensive to recycle and should be phased out. A science-based approach is needed to replace chemicals such as endocrine disruptors that are found in some plastics and pose a risk to human health.

      Such restrictions need to be complemented by mechanisms that foster innovation. Policy-makers can connect the design of plastic packaging with its collection, sorting, and subsequent reuse, recycling, or composting by supporting deposit-refund schemes for drinks bottles, as in Germany and Denmark, or by requiring producers to consider what happens to their packaging products after use….

      However, the most potent tool for policy-makers remains the setting of a clear common vision and credible high-level ambitions that drive investment decisions. In the case of plastics, a crucial pillar of such a policy ambition must be stimulating scientific breakthroughs in the development of materials that can be economically reused, recycled, or composted.

      Public- and private-sector financial commitments to combat ocean pollution totaled 7.2 billion euros at the Our Ocean conference this year alone. The task now is to harness this goodwill to make sure that plastics stay in the economy and out of the oceans.

North Atlantic Right Whales Face Extinction

[These excerpts are from an article by Elizabeth Pennisi in the November 10, 2017, issue of Science.]

      In a sad reversal of fortune, the North Atlantic right whale is in deep trouble again after rebounding in recent decades from centuries of hunting. Recent population trends are so dire that experts predict the whale could vanish within 20 years, making it the first great whale to go extinct in modern times.

      …whale experts reported that roughly 100 reproductively mature females remain, but they are not living long enough or reproducing quickly enough for the species to survive. Ship strikes have long been a threat, and fatal entanglements in commercial fishing gear are taking an increasing toll. And researchers have found that even when an entangled female doesn’t die, dragging ropes, buoys, or traps can exhaust her, making her less likely to reproduce.

      …Eubalaena glacialis, the North Atlantic right whale—so called by 18th-century whalers because it was easy to kill and rich in valuable blubber—is one of three right whale species. It is found along North America’s east coast, breeding in the winter in waters off Florida and migrating to summer feeding waters off New England and northeastern Canada. Its accessible habitat has made it one of the world’s best-documented large whales. But its range is also in one of the most industrialized stretches of ocean in the world, crowded with threats including ships, fishing operations, and energy infrastructure.

      Over the past few decades, right whale numbers appeared to be slowly climbing, from roughly 300 to about 500. Governments helped it along by taking steps to prevent ship strikes, such as imposing speed limits on or rerouting larger vessels in some waters, and installing sensors that can warn mariners when the whales are nearby….

      Entanglement, however, appears to be taking a growing toll because of increased fishing in areas where the whales are foraging….

Science for Global Understanding

[This excerpt is from an editorial by Flavia Schlegel in the November 10, 2017, issue of Science.]

      Resilience is also at the core of the debate at the UN Climate Change Conference (COP23) currently taking place in Bonn. The growing pressures of climate change and stress on natural resources through pollution, overuse, and mismanagement are fueling conflicts and violent extremism and forcing an increasing number of people to flee their homes. This calls for sound, inclusive science, technology, and innovation (STI), cooperative approaches between the sciences and among different knowledge systems, and standing up to climate change deniers among scientists and policy-makers.

      The United Nations’ Agenda 2030 for sustainable development recognizes the central role of STI in enabling the international community to respond to global challenges….

Going Negative

[These excerpts are from an article by Elizabeth Kolbert in the November 20, 2017, issue of The New Yorker.]

      This past April, the concentration of carbon dioxide in the atmosphere reached a record four hundred and ten parts per million. The amount of CO2 in the air now is probably greater than it’s been at any time since the mid-Pliocene, three and a half million years ago, when there was a lot less ice at the poles and sea levels were sixty feet higher. This year’s record will be surpassed next year, and next year’s the year after that. Even if every country fulfills the pledges made in the Paris climate accord—and the United States has said that it doesn’t intend to—carbon dioxide could soon reach levels that, it’s widely agreed, will lead to catastrophe, assuming it hasn’t already done so.

      Carbon-dioxide removal is, potentially, a trillion-dollar enterprise because it offers a way not just to slow the rise in CO2 but to reverse it. The process is sometimes referred to as “negative emissions”: instead of adding carbon to the air, it subtracts it. Carbon-removal plants could be built anywhere, or everywhere. Construct enough of them and, in theory at least, CO2 emissions could continue unabated and still we could avert calamity. Depending on how you look at things, the technology represents either the ultimate insurance policy or the ultimate moral hazard.

      …In fact, fossil fuels currently provide about eighty per cent of the world's energy. Proportionally, this figure hasn’t changed much since the mid-eighties, but, because global energy use has nearly doubled, the amount of coal, oil, and natural gas being burned today is almost two times greater.

      …they argued that self-replicating machines could solve the world’s energy problem and, more or less at the same time, clean up the mess humans have made by burning fossil fuels. The machines would be powered by solar panels, and as they multiplied they’d produce more solar panels, which they’d assemble using elements, like silicon and aluminum, extracted from ordinary dirt. The expanding collection of panels would produce ever more power, at a rate that would increase exponentially. An array covering three hundred and eighty-six thousand square miles—an area larger than Nigeria but, as Lackner and Wendt noted, “smaller than many deserts”—could supply all the world’s electricity many times over.

      This same array could be put to use scrubbing carbon dioxide from the atmosphere….

      …Carbon dioxide should be regarded the same way we view other waste products, like sewage or garbage. We don’t expect people to stop producing waste….At the same time, we don’t let them shit on the sidewalk or toss their empty yogurt containers into the street….

      One of the reasons we’ve made so little progress on climate change, he contends, is that the issue has acquired an ethical charge, which has polarized people. To the extent that emissions are seen as bad, emitters become guilty….If CO2 is treated as just another form of waste, which has to be disposed of, then people can stop arguing about whether it’s a problem and finally start doing something.

      Carbon dioxide was “discovered,” by a Scottish physician named Joseph Black, in 1754. A decade later, another Scotsman, James Watt, invented a more efficient steam engine, ushering in what is now called the age of industrialization but which future generations may dub the age of emissions. It is likely that by the end of the nineteenth century human activity had raised the average temperature of the earth by a tenth of a degree Celsius (or nearly two-tenths of a degree Fahrenheit).

      As the world warmed, it started to change, first gradually and then suddenly. By now, the globe is at least one degree Celsius (1.8 degrees Fahrenheit) warmer than it was in Black’s day, and the consequences are becoming ever more apparent. Heat waves are hotter, rainstorms more intense, and droughts drier. The wildfire season is growing longer, and fires, like the ones that recently ravaged Northern California, more numerous. Sea levels are rising, and the rate of rise is accelerating. Higher sea levels exacerbated the damage from Hurricanes Harvey, Irma, and Maria, and higher water temperatures probably also made the storms more ferocious….

      Meanwhile, still more warming is locked in. There’s so much inertia in the climate system, which is as vast as the earth itself, that the globe has yet to fully adjust to the hundreds of billions of tons of carbon dioxide that have been added to the atmosphere in the past few decades. It’s been calculated that to equilibrate to current CO2 levels the planet still needs to warm by half a degree. And every ten days another billion tons of carbon dioxide are released….
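A quick back-of-the-envelope check of the release rate quoted above (the annualization is mine, based only on the figure in the excerpt):

```python
# The article cites roughly one billion tons of CO2 released every ten days.
# Annualizing that rate gives the implied yearly total.

def annual_emissions_gt(tons_per_period: float, period_days: float) -> float:
    """Convert a tons-per-period rate into billions of tons per year."""
    return tons_per_period * (365 / period_days) / 1e9

print(f"{annual_emissions_gt(1e9, 10):.1f} billion tons per year")  # ~36.5
```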

      No one can say exactly how warm the world can get before disaster—the inundation of low-lying cities, say, or the collapse of crucial ecosystems, like coral reefs—becomes inevitable. Officially, the threshold is two degrees Celsius (3.6 degrees Fahrenheit) above preindustrial levels. Virtually every nation signed on to this figure at a round of climate negotiations held in Cancun in 2010.

      Meeting in Paris in 2015, world leaders decided that the two-degree threshold was too high; the stated aim of the climate accord is to hold “the increase in the global average temperature to well below 2°C” and to try to limit it to 1.5°C. Since the planet has already warmed by one degree and, for all practical purposes, is committed to another half a degree, it would seem impossible to meet the latter goal and nearly impossible to meet the former. And it is nearly impossible, unless the world switches course and instead of just adding CO2 to the atmosphere also starts to remove it….

      BECCS, which stands for “bioenergy with carbon capture and storage,” takes advantage of the original form of carbon engineering: photosynthesis. Trees and grasses and shrubs, as they grow, soak up CO2 from the air. (Replanting forests is a low-tech form of carbon removal.) Later, when the plants rot or are combusted, the carbon they have absorbed is released back into the atmosphere. If a power station were to burn wood, say, or cornstalks, and use C.C.S. to sequester the resulting CO2, this cycle would be broken. Carbon would be sucked from the air by the green plants and then forced underground. BECCS represents a way to generate negative emissions and, at the same time, electricity. The arrangement, at least as far as the models are concerned, could hardly be more convenient.

      …Photovoltaic cells have been around since the nineteen-fifties, but for decades they were prohibitively expensive. Then the prices started to drop, which increased demand, which led to further price drops, to the point where today, in many parts of the world, the cost of solar power is competitive with the cost of power from new coal plants. …

      BECCS doesn’t make big technological demands; instead, it requires vast tracts of arable land. Much of this land would, presumably, have to be diverted from food production, and at a time when the global population—and therefore global food demand—is projected to be growing. (It’s estimated that to do BECCS on the scale envisioned by some below-two-degrees scenarios would require an area larger than India.)….

      One of the peculiarities of climate discussions is that the strongest argument for any given strategy is usually based on the hopelessness of the alternatives: this approach must work, because clearly the others aren’t going to. This sort of reasoning rests on a fragile premise—what might be called solution bias. There has to be an answer out there somewhere, since the contrary is too horrible to contemplate.

      Early last month, the Trump Administration announced its intention to repeal the Clean Power Plan, a set of rules aimed at cutting power plants’ emissions. The plan, which had been approved by the Obama Administration, was eminently achievable. Still, according to the current Administration, the cuts were too onerous. The repeal of the plan is likely to result in hundreds of millions of tons of additional emissions.

      A few weeks later, the United Nations Environment Programme released its annual Emissions Gap Report. The report labeled the difference between the emissions reductions needed to avoid dangerous climate change and those which countries have pledged to achieve as “alarmingly high.”…

      As a technology of last resort, carbon removal is, almost by its nature, paradoxical. It has become vital without necessarily being viable. It may be impossible to manage and it may also be impossible to manage without.

The Seven Deadly Sins of AI Predictions

[This list is excerpted from an article by Rodney Brooks in the November/December 2017 issue of Technology Review.]

      1. Overestimating and underestimating

      2. Imagining magic

      3. Performance versus competence

      4. Suitcase words

      5. Exponentials

      6. Hollywood scenarios

      7. Speed of deployment

We Need Computers with Empathy

[These excerpts are from an article by Rana el Kaliouby in the November/December 2017 issue of Technology Review.]

      …But Alexa was oblivious to my annoyance. Like the majority of virtual assistants and other technology out there, she’s clueless about what we’re feeling.

      We’re now surrounded by hyper-connected smart devices that are autonomous, conversational, and relational, but they're completely devoid of any ability to tell how annoyed or happy or depressed we are. And that's a problem.

      What if, instead, these technologies—smart speakers, autonomous vehicles, television sets, connected refrigerators, mobile phones—were aware of your emotions? What if they sensed nonverbal behavior in real time? Your car might notice that you look tired and offer to take the wheel. Your fridge might work with you on a healthier diet. Your wearable fitness tracker and TV might team up to get you off the couch. Your bathroom mirror could sense that you’re stressed and adjust the lighting while turning on the right mood-enhancing music. Mood-aware technologies would make personalized recommendations and encourage people to do things differently, better, or faster.

      Today, an emerging category of AI—artificial emotional intelligence, or emotion AI—is focused on developing algorithms that can identify not only basic human emotions such as happiness, sadness, and anger but also more complex cognitive states such as fatigue, attention, interest, confusion, distraction, and more….

      …In online learning environments, it is often hard to tell whether a student is struggling. By the time test scores are lagging, it’s often too late—the student has already quit. But what if intelligent learning systems could provide a personalized learning experience? These systems would offer a different explanation when the student is frustrated, slow down in times of confusion, or just tell a joke when it’s time to have some fun….

Using Bricks to Store Electricity

[These excerpts are from an article by David L. Chandler in the November/December 2017 issue of Technology Review.]

      Firebricks have been part of humanity's technological arsenal for at least three millennia, since the era of the Hittites. Created with special heat-resistant clays fired at high temperatures, firebricks can withstand temperatures of up to 1,600 °C. Now a proposal from MIT researchers shows that this ancient invention could play a key role in helping the world shift away from fossil fuels.

      The idea is to store excess electricity produced when demand is low—for example, from wind farms at night—by using electric resistance heaters, the same kind found in electric ovens or clothes dryers, which convert electricity into heat. The devices would use the excess electricity to heat up a large mass of firebricks, which can retain the heat for many hours, given sufficient insulation. Later, the heat could be used directly for industrial processes, or it could feed generators that convert it back to electricity.
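The storage capacity at stake can be sketched with the standard sensible-heat formula, E = m·c·ΔT. The numbers below are illustrative assumptions, not figures from the article (a specific heat of roughly 1.0 kJ/kg·K is typical of firebrick; only the 1,600 °C ceiling comes from the excerpt):

```python
# Sensible heat stored in a mass of firebrick, converted to kWh.
# Assumed: specific heat ~1.0 kJ/(kg*K); bricks cycled between
# 200 C and the 1,600 C limit the article mentions.

def stored_energy_kwh(mass_kg: float, c_kj_per_kg_k: float, delta_t_k: float) -> float:
    """E = m * c * dT, with kJ converted to kWh (1 kWh = 3,600 kJ)."""
    return mass_kg * c_kj_per_kg_k * delta_t_k / 3600

# One tonne of brick across a 1,400 K temperature swing:
print(f"{stored_energy_kwh(1000, 1.0, 1400):.0f} kWh")  # ~389 kWh per tonne
```

Even under these rough assumptions, a few hundred tonnes of brick stores grid-scale quantities of heat, which is why such a simple material is attractive for absorbing cheap off-peak electricity.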

      The technology itself is old, but its usefulness in this context is due to the rapid recent rise of intermittent renewable energy sources and the peculiarities of the way electricity prices are set….

      Forsberg says the demand for industrial heat is actually larger than the total demand for electricity, and unlike the demand for electricity, it is constant. Factories can make use of extra heat whenever it’s available, providing an almost limitless market for it….

The Long View

[These excerpts are from an article by Robert L. Hampel in the November 2017 issue of Phi Delta Kappan.]

      Spurning the option of admission by certificate, several prestigious colleges and universities created the College Board [in 1899], which gave examinations meant to ensure that high school students met particular requirements in various academic fields. By spelling out those requirements in detail, the College Board shaped the corresponding high school courses. For instance, the eight-part syllabus for physics stipulated 51 different experiments.

      Furthermore, prominent university leaders and other educators convened blue-ribbon groups (most notably the Committee of Ten in 1893) to sketch what a true high school should and should not teach. And when the business mogul/philanthropist Andrew Carnegie endowed pensions for college faculty, it became important to decide the true meaning of “college” as well — the foundation needed to decide which faculty would be eligible for a pension. To the foundation officers, a genuine college had several attributes, including a four-year time span and admission restricted to graduates of four-year secondary schools. They even specified how often a course should meet in a real high school and college – hence the familiar Carnegie unit and the emphasis on time served and credits accumulated as the basis for promotion and graduation.

      …By the 1920s, it was much easier to differentiate secondary education and higher education.

      …there were disadvantages to making the boundaries less blurry. I point out three, in particular.

      First, the prestige of high school declined. The word “secondary” began to convey not just a part of a sequence but a lesser status. By the mid-20th century, high school teachers were rarely called professor, and their salaries lagged behind what college faculty earned. The growing reliance on SAT and ACT tests also downplayed the importance of what students achieved in high school — what mattered was their apparent potential for future success. Rising enrollments meant that high school graduation was no longer the badge of merit for a small fraction of teenagers (in contrast, the status of college graduation eroded less severely when enrollments soared after the mid 20th century).

      Second, getting an education took longer. Neither high schools nor colleges showed much interest in reducing their traditional four-year time spans….Instead, Americans came to believe that more time in school and college was not only academically useful but also psychologically crucial. Close friendships, athletic glory, extracurricular success, and opportunities to date attractive young men and women rivaled the formal curriculum….

      Third, the new boundaries could be illusory. By the 1920s, the messy array of 19th-century schools and colleges had been cleaned up, and the system looked relatively logical, rational, and streamlined. But the overlap between school and college was still substantial. In the 1930s, for instance, many college students knew less than high school students. In one study, 22% of high school seniors surpassed the test score of the average college sophomore, and 10% did better than the average college senior. Within each grade, the variations were vast….

People Are Not Dying Because of Opioids

[This excerpt is from an article by Carl L. Hart in the November 2017 issue of Scientific American.]

      I am concerned that declaring the opioid crisis a national emergency will serve primarily to increase law-enforcement budgets, precipitating an escalation of this same sort of routine racial discrimination. Recent federal data show that more than 80 percent of those who are convicted for heroin trafficking are either black or Latino, even though whites use opioids at higher rates than other groups and tend to buy drugs from individuals within their racial group.

      The president also claimed that the opioid crisis “is a world-wide problem.” It isn’t. Throughout Europe and other regions where opioids are readily available, people are not dying at comparable rates as those in the U.S., largely because addiction is treated not as a crime but as a public health problem.

      It is certainly possible to die from an overdose of an opioid alone, but this accounts for a minority of the thousands of opioid-related deaths. Many are caused when people combine an opioid with another sedative (such as alcohol), an antihistamine (such as promethazine) or a benzodiazepine (such as Xanax or Klonopin). People are not dying because of opioids; they are dying because of ignorance.

      There is now one more opioid in the mix—fentanyl, which produces a heroinlike high but is considerably more potent. To make matters worse, according to some media reports, illicit heroin is sometimes adulterated with fentanyl. This, of course, can be problematic—even fatal—for unsuspecting heroin users who ingest too much of the substance thinking that it is heroin alone. One simple solution is to offer free, anonymous drug-purity testing services. If a sample contains adulterants, users would be informed. These services already exist in places such as Belgium, Portugal, Spain and Switzerland, where the first goal is to keep users safe. Law-enforcement officers should also do such testing whenever they confiscate street drugs, and they should notify the community whenever potentially dangerous adulterants are found. In addition, the opioid overdose antidote naloxone should be made more affordable and readily available not just to first responders but also to opioid users and to their family and friends.

Nip Misinformation in the Bud

[This excerpt is from an editorial by Rick Weiss in the October 27, 2017, issue of Science.]

      The democratization of journalism through crowd sourcing, blogging, and social media has proven to be a sharp, double-edged sword. The internet has vastly expanded the sourcing of news and information, capturing stories that might otherwise go untold and delivering a diversity of perspectives that no single media outlet could hope to offer. At the same time, this new and open model has given anyone with web access a global platform to propagate information that is mistakenly or intentionally false. This is especially problematic when it comes to scientific information, which is critical to rational policy-making in areas like health, environmental protection, and national security, and at its best is often misinterpreted by the lay public. Yet recent years have seen a reduction in specialized science pages and reporters in the nation’s newsrooms in favor of reliance on general assignment staffers, even as deadlines have grown shorter—reducing opportunities to ensure accuracy and clarity before publication.

      Postpublication fact checking is helping. From the “Pinocchios” that the Washington Post awards to those caught stretching the truth, to the day-to-day debunkings posted by organizations like FactCheck, Snopes, and PolitiFact, the recent explosion in fact-checking initiatives is a welcome response to this bubbling new environment. But memes take root quickly and die hard. So, in the fight against misinformation, fact checking is often too little, too late. When it comes to stories about science—or about legislation, economics, or other domains where science can be informative—it would be far better to help journalists and the public get it right before having to call in the truth squads….

The Exercise Pill

[These excerpts are from an article by Nicola Twilley in the November 6, 2017, issue of The New Yorker.]

      …the biologist Ron Evans introduced me to two specimens: Couch Potato Mouse and Lance Armstrong Mouse.

      Couch Potato Mouse had been raised to serve as a proxy for the average American. Its daily exercise was limited to an occasional waddle toward a bowl brimming with pellets of laboratory standard “Western Diet,” which consists almost entirely of fat and sugar and is said to taste like cookie dough. The mouse was lethargic, lolling in a fresh layer of bedding, rolls of fat visible beneath thinning, greasy-looking fur. Lance Armstrong Mouse had been raised under exactly the same conditions, yet, despite its poor diet and lack of exercise, it was lean and taut, its eyes and coat shiny as it snuffled around its cage. The secret to its healthy appearance and youthful energy, Evans explained, lay in a daily dose of GW501516: a drug that confers the beneficial effects of exercise without the need to move a muscle. [Referred to henceforth as 516.]

      …The drug works by mimicking the effect of endurance exercise on one particular gene: PPAR-delta. Like all genes, PPAR-delta issues instructions in the form of chemicals—protein-based signals that tell cells what to be, what to burn for fuel, which waste products to excrete, and so on. By binding itself to the receptor for this gene, 516 reconfigures it in a way that alters the messages the gene sends—boosting the signal to break down and burn fat and simultaneously suppressing instructions related to breaking down and burning sugar….

      …Evans refers to the compound as “exercise in a pill.” But although Evans understands the mechanism behind 516’s effects at the most minute level, he doesn't know what molecule triggers that process naturally during exercise….For all the known benefits of a short loop around the park, scientists are, for the most part, incapable of explaining how exercise does what it does.

      …The company was about to embark on Phase III trials—the large, expensive, double-blind, placebo-controlled trials that are required for F.D.A. approval—when the results of a long-term-toxicity test came in. Mice that had been given large doses of the drug over the course of two years (a lifetime for a lab rodent) developed cancer at a higher rate than their dope-free peers. Tumors appeared all over their bodies….the only way to conclusively prove that even a lower dose would not have a similar effect on humans would be to run a seventy-year trial….

      …516 is not the only “exercise pill” in development…. Compound 14 caused the blood-glucose levels of obese, sedentary mice on a high-fat diet to approach normal levels in just a week, while melting away five per cent of their body weight. It works, he explained, by fooling cells into thinking that they are running out of energy, causing them to burn through more of the body's fuel reserves.

      …Bruce Spiegelman, a Harvard cell biologist, has discovered two potent exercise hormones. One of them, irisin, turns metabolically inert white fat in mice into mitochondria-packed, energy-burning brown fat, and Spiegelman said that he's seen evidence that it may also boost levels of healthy proteins in the area of the brain associated with learning and memory….

      Even if everything goes smoothly, however, 516 is multiple trials and several years away from reaching the market. And although Evans is convinced that his improved version of the drug is safe, any molecule that affects metabolic processes is necessarily interacting with a variety of other molecules throughout the body, in ways that we don’t yet understand. Nonetheless, Evans, James, and Spiegelman are all confident that legal drugs mimicking some of the effects of exercise are on their way, sometime within the next ten to fifteen years….

      Although 516 has not been approved as a drug, plenty of people are taking it. Once the structure of a new compound has been published, chemical-supply laboratories are free to synthesize it for sale, “for research purposes only.” 516 is easy and relatively cheap to make, and it is readily available online….

Get Toxic Chemicals Out of Cosmetics

[This editorial by the editors is from the November 2017 issue of Scientific American.]

      Earlier this year a group of more than a dozen health advocacy groups and individuals petitioned the U.S. Food and Drug Administration to ban lead acetate from hair dyes. The compound, a suspected neurotoxin, is found in many hair products—Grecian Formula, for example. Lead acetate has been outlawed for nearly a decade in Canada and Europe. Studies show it is readily absorbed through the skin and can cause toxic levels of lead to accumulate in the blood.

      How is it possible that this chemical is still being sold to U.S. consumers in cosmetic products? The main reason is that petitions such as the one calling out lead acetate are one of the few ways, under current law, that the agency charged with ensuring food, drug and cosmetic safety can even start to limit dangerous chemicals used on our faces and in our bodies. We need to do better.

      Under the Federal Food, Drug, and Cosmetic Act and the Fair Packaging and Labeling Act, the FDA can regulate cosmetic chemicals. But it only steps in if it has “reliable information” that there is a problem. In practice, that has often meant that nothing is done before a public outcry. Years can pass while the FDA investigates and deliberates. Aside from these situations, the safety of cosmetics and personal care products is the responsibility of the companies that make them. The law requires no specific tests before a company brings a new product with a new chemical to market, and it does not require companies to release whatever safety data they may collect.

      The result is that several chemicals with realistic chances of causing toxic effects can be found in everything from shampoo to toothpaste. One is formaldehyde, a carcinogenic by-product released by the preservatives used in cosmetics. In 2011 the National Toxicology Program at the Department of Health and Human Services declared formaldehyde a known human carcinogen, demonstrated by human and animal studies to cause cancer of the nose, head, neck and lymphatic system. Other research indicates it can be dangerous at the levels found in cosmetics, and nearly one fifth of cosmetic products contain the chemical. Other risky substances include phthalates, parabens (often found in moisturizers, makeup and hair products) and triclosan, which the FDA banned from hand soaps in 2016 yet is still allowed in other cosmetics. At exposures typical of cosmetic users, several of these chemicals have been linked to cancer, impaired reproductive ability and compromised neurodevelopment in children.

      A recent study published online by Ami R. Zota of George Washington University and Bhavna Shamasunder of Occidental College in the American Journal of Obstetrics & Gynecology showed that women of color are at especially high risk of exposure. In an attempt to adhere to Caucasian beauty ideals, the researchers found, women of color are more likely to use chemical hair straighteners and skin lighteners, which disproportionately expose them to high doses of phthalates, parabens, mercury and other toxic substances.

      The U.S. should protect its citizens. One worthwhile approach is to emulate the European Union's directive on cosmetics, which has banned more than 1,300 chemicals from personal health or cosmetic products. In some cases, the E.U. has acted after seeing only preliminary toxicity data. This is a prime example of the “precautionary principle” that has guided U.S. health agencies in setting acceptable levels of exposure to other potentially hazardous substances, such as lead.

      Right now the number of studies on cosmetics is limited, and the FDA does not have the resources or directive to initiate broad tests. This past May senators Dianne Feinstein of California and Susan Collins of Maine reintroduced the Personal Care Products Safety Act in Congress. The bill would require, among other things, that all cosmetics makers pay annual fees to the agency to help finance new safety studies and enforcement—totaling approximately $20 million a year. With that money, the FDA must assess the safety of at least five cosmetics chemicals a year. The bill also gives the agency the authority to pull products off the shelves immediately when customers have reported bad reactions, without waiting for a review that can take multiple years.

      Consumers should not be forced to scrutinize the ingredient lists in their medicine cabinets and report adverse reactions. That should be the FDA’s job. The Feinstein-Collins bill empowers the agency to make efficient determinations from sound science.

The Seventh Sense

[This excerpt is from the introduction to Eats, Shoots & Leaves by Lynne Truss.]

      Punctuation has been defined many ways. Some grammarians use the analogy of stitching: punctuation as the basting that holds the fabric of language in shape. Another writer tells us that punctuation marks are the traffic signals of language: they tell us to slow down, notice this, take a detour, and stop. I have even seen a rather fanciful reference to the full stop and comma as “the invisible servants in fairy tales — the ones who bring glasses of water and pillows, not storms of weather or love”. But best of all, I think, is the simple advice given by the style book of a national newspaper: that punctuation is “a courtesy designed to help readers to understand a story without stumbling”.

      Isn’t the analogy with good manners perfect? Truly good manners are invisible: they ease the way for others, without drawing attention to themselves. It is no accident that the word “punctilious” (“attentive to formality or etiquette”) comes from the same original root word as punctuation. As we shall see, the practice of “pointing” our writing has always been offered in a spirit of helpfulness, to underline meaning and prevent awkward misunderstandings between writer and reader. In 1644 a schoolmaster from Southwark, Richard Hodges, wrote in his The English Primrose that “great care ought to be had in writing, for the due observing of points: for, the neglect thereof will pervert the sense”, and he quoted as an example, “My Son, if sinners intise [entice] thee consent thou, not refraining thy foot from their way.” Imagine the difference to the sense, he says, if you place the comma after the word “not”: “My Son, if sinners intise thee consent thou not, refraining thy foot from their way.” This was the 1644 equivalent of Ronnie Barker in Porridge, reading the sign-off from a fellow lag's letter from home, “Now I must go and get on my lover”, and then pretending to notice a comma, so hastily changing it to, “Now I must go and get on, my lover.”

      To be fair, many people who couldn’t punctuate their way out of a paper bag are still interested in the way punctuation can alter the sense of a string of words. It is the basis of all “I’m sorry, I’ll read that again” jokes. Instead of “What would you with the king?” you can have someone say in Marlowe’s Edward II, “What? Would you? With the king?” The consequences of mispunctuation (and re-punctuation) have appealed to both great and little minds, and in the age of the fancy-that email a popular example is the comparison of two sentences:

      A woman, without her man, is nothing.

      A woman: without her, man is nothing.

      Which, I don’t know, really makes you think, doesn’t it? Here is a popular “Dear Jack” letter that works in much the same fundamentally pointless way:

      Dear Jack,

      I want a man who knows what love is all about. You are generous, kind, thoughtful. People who are not like you admit to being useless and inferior. You have ruined me for other men. I yearn for you. I have no feelings whatsoever when we're apart. I can be forever happy — will you let me be yours?


      Dear Jack,

      I want a man who knows what love is. All about you are generous, kind, thoughtful people, who are not like you. Admit to being useless and inferior. You have ruined me. For other men I yearn! For you I have no feelings whatsoever. When we're apart I can be forever happy. Will you let me be?



      But just to show there is nothing very original about all this, five hundred years before email a similarly tiresome puzzle was going round:

      Every Lady in this Land

      Hath 20 Nails on each Hand;

      Five & twenty on Hands and Feet;

      And this is true, without deceit.

      (Every lady in this land has twenty nails. On each hand, five; and twenty on hands and feet.)

Our Next Billion Years

[This excerpt is from an article by Max Tegmark in the November 2017 issue of Discover.]

      Seth Lloyd, an MIT quantum computer pioneer, showed that computing speed is limited by energy. This means that a 1-kilogram computer, equivalent to a small laptop, can perform at most 5×10^50 operations per second — that’s a whopping 36 orders of magnitude more than the computer on which I’m typing these words. We’ll get there in a couple of centuries if computational power keeps doubling every couple of years. He also showed that a 1 kg computer can store up to 10^31 bits, which is about one billion billion times better than my laptop.

      Actually attaining these limits may be challenging, even for superintelligent life. However, Lloyd is optimistic that the practical limits aren’t that far from the ultimate ones. Indeed, existing quantum computer prototypes have already miniaturized their memory by storing 1 bit per atom. Scaling that up would allow storing about 10^25 bits per kilogram — a trillion times better than my laptop. Moreover, using electromagnetic radiation to communicate between these atoms would permit about 5×10^40 operations per second — 31 orders of magnitude better than my CPU.

      The potential for future life to compute and figure things out is truly mind-boggling: In terms of orders of magnitude, today's best supercomputers are much further from the ultimate 1 kg computer than they are from the blinking turn signal on a car, a device that stores merely 1 bit of information, flipping it between on and off about once per second.
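Tegmark’s orders-of-magnitude bookkeeping can be spot-checked in a few lines. This is only a sketch of the arithmetic implied by the excerpt; the “laptop” figures below are back-calculated from the stated gaps, not taken from any real benchmark.

```python
import math

# Lloyd's limits for a 1 kg computer, as quoted in the excerpt.
LLOYD_OPS_PER_SEC = 5e50   # maximum operations per second
LLOYD_BITS = 1e31          # maximum storage in bits

# The excerpt calls the 5e50 figure "36 orders of magnitude" beyond
# the author's laptop, which implies a laptop speed of:
implied_laptop_ops = LLOYD_OPS_PER_SEC / 10**36   # 5e14 ops/sec

# Closing a 10^36 gap by doubling every ~2 years:
doublings = math.log2(10**36)   # about 119.6 doublings needed
years = 2 * doublings           # about 239 years: "a couple of centuries"

# Storage: "one billion billion" = 10^18, implying a laptop that holds:
implied_laptop_bits = LLOYD_BITS / 1e18           # 1e13 bits, ~1.25 TB

print(f"implied laptop speed: {implied_laptop_ops:.1e} ops/sec")
print(f"time to close the gap: ~{years:.0f} years")
print(f"implied laptop storage: {implied_laptop_bits / 8 / 1e12:.2f} TB")
```

The numbers hang together: a 10^36 speed gap closed at one doubling every two years does indeed take roughly two and a half centuries, matching the excerpt’s “couple of centuries.”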

Untangling Spider Biology

[These excerpts are from an article by Elizabeth Pennisi in the October 20, 2017, issue of Science.]

      For a display of nature’s diabolical inventiveness, it’s hard to beat spiders. Take the reclusive ogre-faced spider, with its large fangs and bulging, oversized middle eyes. Throughout the tropics these eight-legged monsters hang from twigs, an expandable silk net stretched between their front legs so they can cast it, lightning-fast, over their victims. Showy peacock spiders, in contrast, flaunt rainbow-colored abdomens to attract mates, while their outsized eyes discern fine detail and color—the better to see both strutting mates and unsuspecting prey. Bolas spiders, named for the South American weapon made of cord and weights, specialize in mimicry. By night, the female bolas swings a silken line with a sticky ball at its end while emitting the scent of a female moth to lure and nab male moths.

      …Spiders’ universal ability to make silk helps explain their global success—an estimated 90,000 species thrive on every continent except Antarctica. This material, used for capturing prey, rappelling from high places, and building egg cases and dwellings, is itself fantastically diverse, its makeup varying from species to species. The same goes for venom, another universal spider attribute—each species makes a different concoction of up to 1000 different compounds.

      …Based mainly on fossil evidence and specimens preserved in amber, biologists concluded long ago that spiders descended from a many-legged, scorpionlike ancestor that by 380 million years ago had a long tail but looked quite spiderlike and may even have had silk glands. By 300 million years ago, fossils show, eight-legged creatures with spiderlike mouth parts, primitive silk glands, and stumpy abdomens had emerged. Those abdomens were still segmented, not fused as in today’s spiders. But what happened afterward to produce the explosion of spider diversity seen now has been mysterious.

      Today, taxonomists recognize three spider groups. The Mygalomorphae—ground-dwelling creatures characterized by fangs that point straight down—include about 2500 species, including tarantulas and so-called trapdoor and funnel-web spiders. Another group, Liphistiidae, consists of 97 species, many of which also build trapdoors to capture prey. The third group, the Araneomorphae, includes 5500 jumping spiders, 4500 dwarf spiders, 2400 wolf spiders, and thousands of web spinners.

Evolution Accelerated when Life Set Foot on Land

[These excerpts are from an article by Elizabeth Pennisi in the October 13, 2017, issue of Science.]

      Life probably originated in water, but nature did some of its best work once organisms made landfall. That's what Geerat Vermeij has concluded after surveying fossils and family trees to discover where and when some of life's greatest modern advances evolved. Almost all of these seemingly out-of-the-blue innovations, from fungus farming by insects to the water transport systems that made tall trees possible, came about after plants and animals learned how to survive on land some 440 million years ago….

      Many researchers have focused on how newly land-based organisms coped with gravity and the threat of desiccation. But Vermeij wondered instead how the move to land might have changed the pace of evolution. He compiled a list of key innovations that showed up in several groups of organisms and provided a big competitive edge, such as herbivory by vertebrates, flight, echolocation, and warm-bloodedness. Existing fossil evidence enabled him to date the origin of a dozen of these adaptations.

      Nine appeared first on land and later in the sea….

From Students to Scientists

[This excerpt is from an article by Olivia Ho-Shing in the Fall 2017 issue of American Educator.]

      What does it mean to be a scientist? In the most basic of terms, a scientist is someone who does scientific research. But what personal qualities does it take to do scientific research?

      In his book Letters to a Young Scientist, renowned biologist Edward O. Wilson recounts his own coming-of-age story as a scientist, and distills the motivating qualities of science down to curiosity and creativity. Individuals become scientists when they are curious about a phenomenon in the world around them and ask about the real nature of that phenomenon: What are its origins, its causes, or its consequences? Scientists then employ some creativity to answer their questions through a systematic testing of hypotheses (the scientific method), and form some conclusion based on their findings.

      This explanation of how scientists approach research highlights something very powerful: anybody with curiosity and creativity, by subscribing to the scientific method, can do science and discover something new about our natural world. From an early age, children brim with questions and sometimes come up with overly creative methods to test a hypothesis (say, using a magnifying glass to start a fire). It becomes incumbent upon teachers, then, to continually help foster students’ curiosity and creativity as critical aspects of their learning, particularly in science.

      Wilson describes the broad field of science as a “culture of illuminations dedicated to the most effective way ever conceived of acquiring factual knowledge.” His description points to another critical aspect in becoming a scientist: not only acquiring some knowledge but contributing that knowledge to a shared culture and community. Scientists engage with others in their field through collaborations, presentations, and publication, thereby strengthening their own findings and assessing information within a broader context of knowledge….

Message Control

[These excerpts are from an article by Brooke Borel in the October 2017 issue of Scientific American.]

      Science doesn’t happen in a vacuum. But historically, many researchers haven't done a great job of confronting—or even acknowledging—the entangled relation between their work and how it is perceived once it leaves the lab….When communication breaks down between science and the society it serves, the resulting confusion and distrust muddies everything from research to industry investment to regulation.

      In the emerging era of CRISPR and gene drives, scientists don’t want to repeat the same mistakes. These new tools give researchers an unprecedented ability to edit the DNA of any living thing—and, in the case of gene drives, to alter the DNA of wild populations. The breakthroughs could address big global problems, curtailing health menaces such as malaria and breeding crops that better withstand climate change. Even if the expectations of CRISPR and gene drives do come to fruition—and relevant products are safe for both people and the environment—what good is the most promising technology if the public rejects it?

      …To avoid that outcome, some researchers are taking a new tack. Rather than dropping fully formed technology on the public, they are proactively seeking comments and reactions, sometimes before research even starts….By opening an early dialogue with regulators, environmental groups and communities where the tools may be deployed, scientists are actually tweaking their research plans while wresting more control over the narrative of their work.

The Roots of Science Denial

[These excerpts are from an article by Katharine Hayhoe, as told to Jen Schwartz, in the October 2017 issue of Scientific American.]

      Science denial is basically anti-intellectualism. It’s a thread that has run though American society for decades, possibly even centuries. Back in 1980, Isaac Asimov said that it’s “nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’” Today we’re dealing with its most recent manifestation, at its peak.

      Climate change is a special case of science denial, which of course goes back to Galileo. The Catholic Church didn’t push back on Galileo until he stuck his head out of the ivory tower and published in Italian rather than in Latin, so that he could tell the common people something that was in direct opposition to the church’s official program. Same with Darwin. The church didn’t have a problem with his theory of evolution until he published a popular book that everyone could read.

      Similarly, we’ve known about the relationship between carbon dioxide and global warming since the 1890s. It’s been about 50 years since scientists warned President Lyndon B. Johnson about the dangers of a changing climate. But scientists back then didn’t get the deluge of hate mail that I get now. So what shifted? It started, possibly, with [Columbia University climate scientist] James Hansen’s testimony before Congress in 1988. He announced that a resource we all rely on—and that makes many of the world’s biggest companies rich—is harming not just the environment but all of humanity. I think it’s no accident that Hansen is the most vilified and attacked climate scientist in the U.S., because he was the first person to emerge from the ivory tower and start talking about global warming in a sphere where its implications became apparent for policy and politics.

      So you can see that the problem people have with science is never the actual science. People have a problem with the implications of science for their worldview and, even more important, for their ideology. When anti-intellectualism rises to the surface, it's because there are new, urgent results coming out of the scientific community that challenge the perspective and status quo of people with power. Renewable energy is now posing a very significant threat to them. The more viable the technologies, the greater the pushback. It’s a last-ditch effort to resist change, which is why denial is at a fever pitch.

      Today, although many of the objections to climate science are couched in science-y terms—it’s just a natural cycle, scientists aren’t sure, global cooling, could it be volcanoes—or even religious-y terms—God is in control—99 percent of the time, that language is just a smokescreen. If you refuse to engage these arguments and push through for even five minutes, the conversation will naturally lead to profound objections to climate change solutions....

      Even in the science community, there’s so much confusion over how to communicate. The deficit model—just give them the facts!—does not work in public discourse unless everybody is politically neutral. That’s why social science is increasingly important. I was the experimental method in a recent paper where a researcher asked me to speak at an evangelical Christian college. He asked the students about global warming before and after my talk and found statistically significant differences in their perspectives. Many people are now doing this kind of message testing. How humans interact with information is an emerging area of research that’s desperately important.

      Scientists also tend to understate the impact of climate change….

      Look, we can’t fix all these issues—cultural, political, psychological—before we take necessary action on climate change. People say to me, “Well, if you could just get everyone onboard with the science....” I’m like, good luck with that! How did that work out the past few centuries? This climate problem is urgent. The window is closing. We have to fix it with the flawed, imperfect society we have today….

      My number-one piece of advice for people doing climate—or any science—outreach is, Don’t focus on the dismissive people. They’re really a very small part of the population, and they're primarily older white men….

Better Batteries

[This excerpt is from an article by Matthew Sedacca in the October 2017 issue of Scientific American.]

      Some owners of Samsung Galaxy Note7 smartphones learned the hard way last year that lithium-ion batteries, commonly used in many consumer electronics, can be flammable and even explosive. Such batteries typically rely on liquid electrolytes, which are made up of an organic solvent and dissolved salts. These liquids enable ions to flow between electrodes separated by a porous membrane, thus creating a current. But the fluid is prone to forming dendrites—microscopic lithium fibers that can cause batteries to short-circuit and heat up rapidly. Now research suggests that gas-based electrolytes could yield a more powerful and safer battery.

      …recently tested electrolytes composed of liquefied fluoromethane gas solvents, which can absorb lithium salts as well as their conventional liquid-based counterparts do. After the experimental battery was fully charged and drained 400 times, it held a charge nearly as long as it did when new; a conventional lithium-ion battery tends to last nearly 20 percent as long. The condensed-gas battery also generated no dendrites. The findings were published earlier this year in Science.

      If a standard lithium-ion battery is punctured—and the membrane separating the electrodes is pierced—the electrodes can come into contact and short-circuit. This causes the battery to overheat in the presence of its reactive lithium electrolyte and possibly catch fire (which is exacerbated by oxygen entering from outside). But fluoromethane liquefies only under pressure, so if the new batteries are punctured, the pressure releases, the liquid reverts to a gas and the gas can escape…

      The batteries perform well in temperatures as low as −60 degrees Celsius, unlike standard lithium-ion batteries, so they could power instruments in high-altitude drones and long-range spacecraft….

      Donald Sadoway, a professor of materials chemistry at the Massachusetts Institute of Technology, who was not involved in the study, says the new concept “opens our eyes to a class of liquids that has been understudied.” But, he adds, the researchers need to ensure that excessive heat does not cause the batteries’ liquefied gas to expand rapidly and lead to a dangerous increase in pressure.

The Distracted Student Mind – Enhancing Its Focus and Attention

[These excerpts are from an article by Larry D. Rosen in the October 2017 issue of Kappan.]

      For more than three decades….my research team and I have watched Americans move from an initial fear of computers to a state of wary acceptance to eager adaptation to what has become more or less an obsession with the tiny devices we now carry in our purses and pockets.

      What does this obsession mean for today’s students? Recent research findings are sobering:

      • Typically, college students unlock their phones 50 times a day, using them for close to 4½ hours out of every 24-hour cycle. Put another way, they check their phones every 15 minutes — all day long (and sometimes all night) — and they look at them for about five minutes each time.

      • Teenagers are almost always attempting to multitask, even when they know full well that they cannot do so effectively.

      • When teenagers have their phones taken away, they become highly anxious (and visibly agitated within just a few minutes).

      • The average adolescent or young adult finds it difficult to study for 15 minutes at a time; when forced to do so, they will spend at least five of those minutes in a state of distraction.

      …consider how many decades it took for wired telephones to fully penetrate American society. Cell phones took hold much more quickly, but even so, it took a couple of decades before cell phone use reached 50 million users (the benchmark for penetrating society, according to consumer scientists). Then came the World Wide Web, which hit 50 million users in just four years. More recently, MySpace took 2.5 years to do so, Facebook did it in two years, YouTube took just a single year, and Instagram hit the mark in a matter of months. If that seems fast, consider that both Angry Birds and Pokemon GO took just one month to garner 50 million users….

      The question is, what does this increasingly rapid influx of media and technologies do to us mentally, physically, and neurologically? More specifically…as young people are buffeted by one new communications technology after another, what happens to their ability to focus on the present?...

      This is a huge problem, given that sleep plays an absolutely critical role in learning, allowing us to consolidate important information, rid ourselves of unwanted information, and dispose of stray toxic molecules left in the brain during the day. The human body includes hormonal mechanisms that ensure that it gets the sleep it needs — as day turns to dusk, the pineal gland starts to secrete melatonin, which is a hormone that gradually makes people sleepy. However, most electronic devices emit light in the blue part of that spectrum, which tells the pineal gland to shut down the melatonin and orders the adrenal gland to secrete cortisol, which wakes people up. The closer one holds the device to one’s eyes, the more blue light is absorbed and the more difficult it is to get a good night's sleep. The upshot is that 80% of today's teens say they rarely or never sleep well. The National Sleep Foundation recommends nine hours per night, but most teens now get far less than that. Most weeks, they accrue 12 hours of sleep debt, which can only be repaid by sleeping during the day (often in class)….

      What can educators do for students who’ve become used to accessing their smartphones all day long, are constantly distracted by texts and alerts, spend countless hours on social media, use their phones right up to bedtime, and rarely get a good night’s sleep? There’s no simple solution. For example, studies suggest that if we take away their phones, that only makes them anxious, impeding their learning. Plus, online conversations are their lifeblood, accounting for much, if not most, of their social lives. However, some simple strategies can help. Drawing from my own and others’ research, here’s what I recommend:

      #1. Make sure students understand that their brains need the occasional "reset."…

      #2. Help students build stamina for studying with tech breaks.…

      #3. Advise students to treat sleep as sacred….

      #4. Tell students to minimize the alerts and notifications….

      #5. Advise parents to create specific tech-free zones….

      I am hopeful, though, that with conscious effort we can help students strengthen their powers of attention. I’ve heard from many educators who have implemented the strategies described above and have seen students become less distracted by fears of missing the latest text or update. While these strategies require diligence, they are not difficult or complicated. And if you’re skeptical that they can help students, then try them on yourself and your own family first — it shouldn’t take long before you begin to feel better able to control your “human-ware” and less like your hardware and software are controlling you.

Giant Shape-Shifters

[These excerpts are from an article by Annie Sneed in the October 2017 issue of Scientific American.]

      Paleontologists unearthed a strange sight in Newfoundland in the early 2000s: an ancient fossil bed of giant, frond-shaped marine organisms. Researchers had discovered these mysterious extinct creatures—called rangeomorphs—before, but they continue to defy categorization. Now scientists believe the Newfoundland fossils and their brethren could help answer key questions about life on Earth.

      Rangeomorphs date back to the Ediacaran period, which lasted from about 635 million to 541 million years ago. They had stemlike bodies that sprouted fractal-like branches and were soft like jellyfish. Scientists think these creatures grew to sizes until then unseen among animals—up to two meters long. After they went extinct, the planet saw an explosion of diverse large animal life during the Cambrian. “Rangeomorphs are part of the broader context of what was going on at this time in Earth’s history,” says study coauthor Jennifer Hoyal Cuthill, a paleobiology research fellow at the Tokyo Institute of Technology. Figuring out how rangeomorphs grew to such great sizes could help provide context for understanding how big, diverse animals originated and how conditions on Earth—which were shifting around this time—may have affected the evolution of life….

      The researchers examined various aspects of the rangeomorphs’ stems and branches, then used mathematical models to investigate the relation between the fossils’ surface areas and volumes. Their models, combined with the fossil observations, revealed that the organisms’ size and shape appeared to be governed by the amount of available nutrients….This may explain why they could reach such large sizes during a period when Earth's geochemistry was changing.

      But other experts are hesitant to generalize in this way….

Angels and Men

[These excerpts are from an article by Claudia Roth Pierpont in the October 16, 2017, issue of The New Yorker.]

      …There is little doubt that Leonardo was arrested, although any time he may have spent in jail was brief, and the case was dismissed….It is impossible to know if this affected the artist’s habit, later cited as a mark of his character, of buying caged birds from the market, just to set them free. But it does seem connected with the drawings he made, during the next few years, of two fantastical inventions: a machine that he explained was meant “to open a prison from the inside,” and another for tearing bars off windows.

      These drawings are part of a vast treasure of texts and images, amounting to more than seven thousand surviving pages, now dispersed across several countries and known collectively as “Leonardo’s notebooks”—which is precisely what they were. Private notebooks of all sizes, some carried about for quick sketches and on-the-spot observations, others used for long-term, exacting studies in geology, botany, and human anatomy, to specify just a few of the areas in which he posed fundamental questions, and reached answers that were often hundreds of years ahead of his time. Why is the sky blue? How does the heart function? What are the differences in air pressure above and beneath a bird’s wing, and how might this knowledge enable man to make a flying machine? Music, engineering, astronomy. Fossils and the doubt they cast on the Biblical story of creation….He intended publication, but never got around to it; there was always something more to learn. In the following centuries, at least half the pages were lost. What survives is an unparalleled record of a human mind at work, as fearless and dogged as it was brilliant….

      A chariot fitted with enormous whirling blades, slicing men in half or cutting off their legs, leaving pieces scattered; guns with multiple barrels arranged like organ pipes to increase the speed and intensity of firing; a colossal missile-launching crossbow. Leonardo made many such frightening drawings…. He had never demonstrated any military skills before, and his intention in these drawings remains a matter of dispute. Was he an unworldly visionary or a conscienceless inventor?...


[These excerpts are from an article by Jason G. Goldman in the October 2017 issue of Scientific American.]

      Humans kill large carnivores—a category of animals that includes wolves, bears, lions, tigers and pumas—at more than nine times their mortality rate in the wild. Although they may not be our prey in the traditional sense, new research shows that some of the world’s biggest carnivores are responding to humans in a way that resembles how prey animals react to predators. Biologists at the Santa Cruz Puma Project, an ongoing research effort in the mountains of California’s central coast, report that even the formidable puma, or mountain lion, shows its fearful side when people are around.

      In a recent study, the researchers followed 17 mountain lions outfitted with GPS collars to the animals’ deer kill sites. Once the cats naturally left the scene between feedings, ecologist Justine A. Smith…and her team trained motion-activated cameras on the prey carcasses. On the animals’ return, the cameras triggered nearby speakers, which broadcast recordings of either frogs croaking or humans conversing.

      The pumas almost always fled immediately on hearing the human voices, and many never returned to resume feeding or took a long time to do so. But they only rarely stopped eating or fled when they heard the frogs. They also spent less than half as much time feeding during the 24 hours after first hearing human chatter, compared with hearing the frogs….

      The human presence in such a situation has far-reaching consequences. A previous study found that Santa Cruz pumas living near residential areas killed 36 percent more deer than those in less populated places. The new finding could explain why: if the cats are scared away from their kills before they finish feeding, they may be taking more prey to compensate. And fewer deer could mean more plants go uneaten, according to Chris Darimont, a professor of conservation science at the University of Victoria in British Columbia, who was not involved in the study. Thus, fear of humans may alter the entire food chain.

      “Humans are the most significant source of mortality for pumas in this population even though [the cats are] not [legally] hunted” for food or sport, Smith says. Many are hunted illegally, struck by vehicles or legally killed by governmental agencies as a means of protecting livestock. “So they have good reason to be fearful of us,” she adds. Darimont predicts other large carnivores would show similar responses because humans have effectively become the planet's apex predators—even if we often do not eat what we kill. ”I expect this to be common because the human predator preys on just about every medium-to-large vertebrate on the planet,” he says. “And at very high rates.”

Put Science back in Congress

[This excerpt is from an editorial in the October 2017 issue of Scientific American.]

      The White House and Congress have lost their way when it comes to science. Notions unsupported by evidence are informing decisions about environmental policy and other areas of national interest, including public health, food safety, mental health and biomedical research. The president has not asked for much advice from his Office of Science and Technology Policy, evidently.

      The congressional committees that craft legislation on these matters do not even have formal designated science advisers. That’s a big problem. Take the House Committee on Science, Space, and Technology. Its leader, Republican Representative Lamar Smith of Texas, clearly misunderstands the scientific process, which includes assessment by independent peer reviewers prior to publication. The result has been a nakedly antiscience agenda. The committee has packed its hearings with industry members as witnesses instead of independent researchers. Democratic members have felt compelled to hold alternative hearings because they feel Smith has not allowed the real experts to speak. Smith’s misinformed leadership has made it clear that congressional science committees need to be guided by genuinely objective experts.

      So far this year, Smith and fellow committee member Representative Frank Lucas of Oklahoma have each introduced bills that would seriously weaken the Environmental Protection Agency. Lucas’s bill would help stack the EPA’s Science Advisory Board with industry representatives and supporters. Smith’s—the Honest and Open New EPA Science Treatment (HONEST) Act—would make it harder for the EPA to create rules based on good research. As Rush Holt, CEO of the American Association for the Advancement of Science, a former representative and a nuclear physicist, said of an earlier version of the bill, this sort of legislation is nothing less than an attempt to “fundamentally substitute a [political] process for the scientific process.”

      This is lunacy. We should not allow elected officials—especially the heads of congressional science committees—to interfere with the scientific process, bully researchers or deny facts that fit poorly with their political beliefs. Instead of seeing science as a threat, officials should recognize it as an invaluable tool for improving legislation.

The Most Effective Climate Change Solutions Are Rarely Discussed

[These excerpts are from a current news article in the October 2017 issue of The Science Teacher.]

      Governments and schools are not communicating the most effective ways for individuals to reduce their carbon footprints, according to new research. The four actions that most substantially decrease an individual’s carbon footprint are eating a plant-based diet, avoiding air travel, living car-free, and having smaller families.

      The study found that the incremental changes advocated by governments are unlikely to reduce greenhouse gas emissions to the levels needed to keep climate warming below 2°C…

      Lead author Seth Wynes said: “We found that [the four actions] could result in substantial decreases in an individual’s carbon footprint. For example, living car-free saves about 2.4 tons of CO2 equivalent per year, while eating a plant-based diet saves 0.8 tons of CO2 equivalent a year.

      “These actions have much greater potential to reduce emissions than commonly promoted strategies like comprehensive recycling (which is four times less effective than a plant-based diet) or changing household lightbulbs (eight times less effective).”

      The researchers also found that neither Canadian school textbooks nor government resources from the E.U., U.S., Canada, or Australia highlight these actions, instead focusing on incremental changes with much smaller potential to reduce emissions.

      Study co-author Kimberly Nicholas said: “We recognize these are deeply personal choices. But we can't ignore the climate effect our lifestyle actually has. Personally, I've found it really positive to make many of these changes. It's especially important for young people establishing lifelong patterns to be aware which choices have the biggest impact. We hope this information sparks discussion and empowers individuals.”…

Shellfish Secrets Could Help Save Soldiers

[These excerpts are from an article by David L. Chandler in the September/October 2017 issue of Technology Review.]

      The shells of marine organisms take a beating as they get propelled onto rocky shores by storms and tides or chomped by sharp-toothed predators. But recent research has shown that one type of shell stands above all others in its toughness: the conch. Now, an MIT team has explored the secrets behind these shells’ extraordinary impact resilience—and they've shown that this strength could be reproduced in engineered materials, leading to superior protective headgear and body armor.

      Conch shells “have this really unique architecture….” This internal structure makes the material 10 times as resistant to fractures as nacre, or mother-of-pearl—the shell’s basic material. The key is that it forms a “zigzag matrix….”

      Protective helmets and other impact-resistant gear require a combination of strength and toughness….Strength refers to a material’s ability to resist damage, which steel does well, for example. Toughness, on the other hand, refers to a material's ability to dissipate energy, as rubber does. Traditional helmets use a metal shell for strength and a flexible liner for both comfort and energy dissipation. But in the new composite material, this combination of qualities is distributed through the whole material.

      The printing technology would make it possible to form the conch-inspired material into individualized helmets or body armor….

  Website by Avi Ornstein, "The Blue Dragon" – 2016 All Rights Reserved