Interesting Excerpts
The following excerpts are from articles or books that I have recently read. They caught my interest and I hope that you will find them worth reading. If one does spark an action on your part and you want to learn more or you choose to cite it, I urge you to actually read the article or source so that you better understand the perspective of the author(s).
Cinderella and Equations

[This excerpt is from page 43 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      It is needless to add that it is easier to write equations, whether differential, integral or integro-differential, than to solve them. Only a small class of such equations has been solved explicitly. In some cases, when, due to its importance in physics or elsewhere, we cannot do without an equation of the insoluble variety, we use the equation itself to define the function, just as Prince Charming used the glass slipper to define Cinderella as the girl who could wear it. Very often the artifice works; it suffices to isolate the function from other undesirables in much the same way as the slipper sufficed to distinguish Cinderella from her ugly sisters.

An Earlier Perception of Computers

      [This excerpt is from pages 24-26 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      Now, as Norbert Wiener has remarked, the human and animal nervous systems, which too are capable of the work of a computation system, contain elements—the nerve cells or neurons—which are ideally suited to act as relays:

      'While they show rather complicated properties under the influence of electrical currents, in their ordinary physiological action they conform very nearly to the "all-or-none" principle; that is, they are either at rest, or when they "fire" they go through a series of changes almost independent of the nature and intensity of the stimulus.' This fact provides the link between the art of calculation and the new science of Cybernetics, recently created by Norbert Wiener and his collaborators.

      This science (cybernetics) is the study of the 'mechanism of control and communication in the animal and the machine', and bids fair to inaugurate a new social revolution likely to be quite as profound as the earlier Industrial Revolution inaugurated by the invention of the steam engine. While the steam engine devalued brawn, cybernetics may well devalue brain—at least, brain of a certain sort. For the new science is already creating machines that imitate certain processes of thought and do some kinds of mental work with a speed, skill and accuracy far beyond the capacity of any living human being.

      The mechanism of control and communication between the brain and various parts of an animal is not yet clearly understood. We still do not know very much about the physical process of thinking in the animal brain, but we do know that the passage of some kind of physico-chemical impulse through the nerve-fibres between the nuclei of the nerve cells accompanies all thinking, feeling, seeing, etc. Can we reproduce these processes by artificial means? Not exactly, but it has been found possible to imitate them in a rudimentary manner by substituting wire for nerve-fibre, hardware for flesh, and electro-magnetic waves for the unknown impulse in the living nerve-fibre. For example, the process whereby flatworms exhibit negative phototropism—that is, a tendency to avoid light—has been imitated by means of a combination of photocells, a Wheatstone bridge and certain devices to give an adequate phototropic control for a little boat. No doubt it is impossible to build this apparatus on the scale of the flatworm, but this is only a particular case of the general rule that the artificial imitations of living mechanisms tend to be much more lavish in the use of space than their prototypes. But they more than make up for this extravagance by being enormously faster. For this reason, rudimentary as these artificial reproductions of cerebral processes still are, the thinking machines already produced achieve their respective purposes for which they are designed incomparably better than any human brain.
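The light-avoiding behavior described above is, at bottom, a simple feedback rule: compare two light readings and steer toward the darker side. The following is a minimal illustrative sketch of that rule, not the actual photocell-and-Wheatstone-bridge circuit the excerpt describes; the function name and sensor values are invented for the example.

```python
# Minimal sketch of negative phototropism as a control rule: given two
# light-sensor readings, steer the "boat" toward the darker side.
# The sensor scale and steering commands are illustrative assumptions.

def steer_away_from_light(left_sensor: float, right_sensor: float) -> str:
    """Return a steering command that turns away from the brighter side."""
    if left_sensor > right_sensor:
        return "turn right"  # light is stronger on the left, so veer right
    if right_sensor > left_sensor:
        return "turn left"   # light is stronger on the right, so veer left
    return "straight"        # equal readings: no correction needed

# Example: a bright source on the left makes the vehicle veer right.
print(steer_away_from_light(0.9, 0.2))  # turn right
```

Run repeatedly in a loop against live sensor readings, a rule this simple is enough to produce the flatworm-like avoidance behavior the excerpt attributes to the artificial boat.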

      As the study of cybernetics advances—and it must be remembered that this science is just an infant barely ten years old—there is hardly any limit to what these thinking-machines may not do for man. Already the technical means exist for producing automatic typists, stenographers, multilingual interpreters, librarians, psychopaths, traffic regulators, factory-planners, logical truth calculators, etc. For instance, if you had to plan a production schedule for your factory, you would need only to put into a machine a description of the orders to be executed, and it would do the rest. It would know how much raw material is necessary and what equipment and labour are required to produce it. It would then turn out the best possible production schedule showing who should do what and when.

      Or again, if you were a logician concerned with evaluating the logical truth of certain propositions deducible from a set of given premises, a thinking machine like the Kalin-Burkhart Logical Truth Calculator could work it out for you very much faster and with much less risk of error than any human being. Before long we may have mechanical devices capable of doing almost anything from solving equations to factory planning. Nevertheless, no machine can create more thought than is put into it in the form of the initial instructions. In this respect it is very definitely limited by a sort of conservation law, the law of conservation of thought or instruction. For none of these machines is capable of thinking anything new.

      A 'thinking machine' merely works out what has already been thought of beforehand by the designer and supplied to it in the form of instructions. In fact, it obeys these instructions as literally as the unfortunate Casabianca boy, who remained on the burning deck because his father had told him to do so. For instance, if in the course of a computation the machine requires the quotient of two numbers of which the divisor happens to be zero, it will go on, Sisyphus-wise, trying to divide by zero for ever unless expressly forbidden by prior instruction. A human computer would certainly not go on dividing by zero, whatever else he might do. The limitation imposed by the aforementioned conservation law has made it necessary to bear in mind what Hartree has called the 'machine-eye view' in designing such machines. In other words, it is necessary to think out in advance every possible contingency that might arise in the course of the work and give the machine appropriate instructions for each case, because the machine will not deviate one whit from what the 'Moving Finger' of prior instructions has already decreed. Although the limitation imposed by this conservation law on the power of machines to produce original thinking is probably destined to remain forever, writers have never ceased to speculate on the danger to man from robot machines of his own creation. This, for example, is the moral of stories as old as those of Faustus and Frankenstein, and as recent as those of Karel Capek's play R.U.R. and Olaf Stapledon's Last and First Men.
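The division-by-zero example above translates directly into the modern practice of defensive programming: the designer supplies the "prior instruction" for the contingency in advance. A minimal sketch, with an invented function name, of what such an instruction looks like:

```python
# Sketch of the excerpt's point: the machine does only what its prior
# instructions decree, so the designer must anticipate contingencies
# such as a zero divisor ahead of time. Names are illustrative.

from typing import Optional

def guarded_divide(dividend: float, divisor: float) -> Optional[float]:
    """Divide, but obey a 'prior instruction' for the zero-divisor case."""
    if divisor == 0:
        # The contingency the designer thought of in advance; without
        # this branch the machine would simply fail at run time.
        return None
    return dividend / divisor

assert guarded_divide(10, 4) == 2.5
assert guarded_divide(10, 0) is None
```

Hartree's 'machine-eye view' is exactly this habit of mind: enumerating every case the machine might encounter and deciding its response before the computation begins.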

      It is true that as yet there is no possibility whatsoever of constructing Frankenstein monsters, Rossum robots or Great Brains—that is, artificial beings possessed of a 'free will' of their own. This, however, does not mean that the new developments in this field are without danger to mankind. The danger from the robot machines is not technical but social. It is not that they will disobey man but that if introduced on a large enough scale, they are liable to lead to widespread unemployment.

Irrational Numbers

[This excerpt is from pages 15-16 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      The discovery of magnitudes which, like the diagonal of a unit square, cannot be measured by any whole number or rational fraction, that is, by means of integers, singly or in couples, was first made by Pythagoras some 2500 years ago. This discovery was a great shock to him. For he was a number mystic who looked upon integers…as the essence and principle of all things in the universe. When, therefore, he found that the integers did not suffice to measure even the length of the diagonal of a unit square, he must have felt like a Titan cheated by the gods. He swore his followers to a secret vow never to divulge the awful discovery to the world at large and turned the Greek mind once for all away from the idea of applying numbers to measure geometrical lengths. He thus created an impassable chasm between algebra and geometry that was not bridged till the time of Descartes nearly 2000 years later.
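The incommensurability that so shocked Pythagoras is the classical fact that the diagonal of a unit square, of length the square root of 2 by his own theorem, cannot be a ratio of integers. The standard proof sketch runs as follows:

```latex
% Classical proof that the diagonal of a unit square is irrational.
Suppose $\sqrt{2} = p/q$ with $p, q$ integers sharing no common factor.
Squaring gives $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even;
write $p = 2r$. Substituting yields $4r^2 = 2q^2$, i.e. $q^2 = 2r^2$,
so $q$ is even as well, contradicting the assumption that $p$ and $q$
have no common factor. Hence $\sqrt{2}$ is not a ratio of integers.
```

No single integer or pair of integers, in Singh's phrase "singly or in couples," can therefore measure the diagonal.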

The Value of Mathematics

[This excerpt is from page 3 of Great Ideas of Modern Mathematics: Their Nature and Use by Jagjit Singh (1959).]

      Mathematics is too intimately associated with science to be explained away as a mere game. Science is serious work serving social ends. To isolate mathematics from the social conditions which bring mathematics…into existence is to do violence to history.

A Quantum Future Awaits

[This excerpt is from an article by Jacob M. Taylor in the July 27, 2018, issue of Science.]

      A century ago, the quantum revolution quietly began to change our lives. A deeper understanding of the behavior of matter and light at atomic and subatomic scales sparked a new field of science that would vastly change the world's technology landscape. Today, we rely upon the science of quantum mechanics for applications ranging from the Global Positioning System to magnetic resonance imaging to the transistor. The advent of quantum computers presages yet another new chapter in this story that will enable us to not only predict and improve chemical reactions and new materials and their properties, for example, but also to provide insights into the emergence of spacetime and our universe. Remarkably, these advances may begin to be realized in a few years.

      From initial steps in the 1980s to today, science and defense agencies around the world have supported the basic research in quantum information science that enables advanced sensing, communication, and computational systems. Recent improvements in device performance and quantum bit (“qubit”) approaches show the possibility of moderate-scale quantum computers in the near future. This progress has focused the scientific community on, and engendered substantial new industrial investment for, developing machines that produce answers we cannot simulate even with the world’s fastest supercomputer (currently the Summit supercomputer at the U.S. Department of Energy’s Oak Ridge National Laboratory in Tennessee).

      Achieving such quantum computational supremacy is a natural first goal. It turns out, however, that devising a classical computer to approximate quantum systems is sometimes good enough for the purposes of solving certain problems. Furthermore, most quantum devices have errors and produce correct results with a decreasing probability as problems become more complicated. Only with substantial math from quantum complexity theory can we actually separate “stupendously hard” problems to solve from just “really hard” ones. This separation of classical and quantum computation is typically described as approaching quantum supremacy. A device that demonstrates a separation may rightly deserve to be called the world’s first quantum computer and will represent a leap forward for theoretical computer science and even for our understanding of the universe.

Astro Worms

[This excerpt is from an article by Katherine Kornei in the August 2018 issue of Scientific American.]

      Caenorhabditis elegans would make an ace fighter pilot. That's because the roughly one-millimeter-long roundworm, a type of nematode that is widely used in biological studies, is remarkably adept at tolerating acceleration. Human pilots lose consciousness when they pull only 4 or 5 g’s (1 g is the force of gravity at Earth’s surface), but C. elegans emerges unscathed from 400,000 g’s, new research shows.

      This is an important benchmark; rocks have been theorized to experience similar forces when blasted off planet surfaces and into space by volcanic eruptions or asteroid impacts. Any hitchhiking creatures that survive could theoretically seed another planet with life, an idea known as ballistic panspermia.

Capture that Carbon

[These excerpts are from an article by Madison Freeman and David Yellen in the August 2018 issue of Scientific American.]

      The conclusion of the Paris Agreement in 2015, in which almost every nation committed to reduce their carbon emissions, was supposed to be a turning point in the fight against climate change. But many countries have already fallen behind their goals, and the U.S. has now announced it will withdraw from the agreement. Meanwhile emissions worldwide continue to rise.

      The only way to make up ground is to aggressively pursue an approach that takes advantage of every possible strategy to reduce emissions. The usual suspects, such as wind and solar energy and hydropower, are part of this effort, but it must also include investing heavily in carbon capture, utilization and storage (CCUS)—a cohort of technologies that pull carbon dioxide from smokestacks, or even from the air, and convert it into useful materials or store it underground….

      Without CCUS, the level of cuts needed to keep global warming to two degrees Celsius (3.6 degrees Fahrenheit)—the upper limit allowed in the Paris Agreement—probably cannot be achieved, according to the International Energy Agency. By 2050 carbon capture and storage must provide at least 13 percent of the reductions needed to keep warming in check, the agency calculates….

      CCUS technologies can also help decarbonize emissions in heavy industry—including production of cement, refined metals and chemicals—which accounts for almost a quarter of U.S. emissions. In addition, direct carbon-removal technology—which captures and converts carbon dioxide from the air rather than from a smokestack—can offset emissions from industries that cannot readily implement other clean technology, such as agriculture.

      The basic idea of carbon capture has faced a lot of opposition. Skepticism has come from climate deniers, who see it as a waste of money, and from passionate supporters of climate action, who fear that it would be used to justify continued reliance on fossil fuels. Both groups are ignoring the recent advances and the opportunity they present. By limiting investment in decarbonization, the world will miss a major avenue for reducing emissions both in the electricity sector and in a variety of industries. CCUS can also create jobs and profits from what was previously only a waste material by creating a larger economy around carbon.

      For CCUS to succeed, the federal government must kick in funding for basic research and development and offer incentives such as tax breaks for carbon polluters who adopt the technology. The Trump administration has repeatedly tried to slash energy technology R&D, with the Department of Energy’s CCUS R&D cut by as much as 76 percent in proposed budgets. But this funding must be protected….

      The transition to clean energy has become inevitable. But that transition’s ability to achieve deep decarbonization will falter without this wide range of solutions, which must include CCUS.

Physics Makes Rules, Evolution Rolls the Dice

[These excerpts are from a book review by Chico Camargo in the July 20, 2018, issue of Science.]

      Picture a ladybug in motion. The image that came into your head is probably one of a small, round red-and-black insect crawling up a leaf. After reading Charles Cockell’s The Equations of Life, however, you may be more likely to think of this innocuous organism as a complex biomechanical engine, every detail honed and operating near thermodynamic perfection.

      In a fascinating journey across physics and biology Cockell builds a compelling argument for how physical principles constrain the course of evolution. Chapter by chapter, he aims his lens at all levels of biological organization, from the molecular machinery of electron transport to the social organisms formed by ant colonies. In each instance, Cockell shows that although these structures might be endless in their detail, they are bounded in their form. If organisms were pawns in a game of chess, physics would be the board and its rules, limiting how the game unfolds.

      Much of the beauty of this book is in the diversity of principles it presents. In the chapter dedicated to the physics of the ladybug, for example, Cockell first describes an unassuming assignment in which students are asked to study the properties of the insect. Physical principles emerge naturally: from the surface tension and viscous forces between the ladybug’s feet and vertical surfaces, to the diffusion-driven pattern formation on its back, to the thermodynamics of surviving as a small insect at water-freezing temperatures. These discussions are accompanied by a series of equations that one would probably not expect to see in a single textbook, as various branches of physics—from physical chemistry to optics—are discussed side by side.

      Physics itself is different at different scales. A drop of water, for example, is inconsequential to a human being. If you are a ladybug, however, water surface tension is a potential problem: Having a drop of water on your back might become as burdensome as a heavy backpack that can’t be discarded. For a tiny ant, a droplet large enough can turn into a watery prison because the molecular forces in play are too strong for the insect to escape….

      At the end of every chapter, the reader is reminded of how the laws of physics nudge, narrow, mold, shape, and restrict the “endless forms most beautiful” that Charles Darwin once described. Cockell’s persistence pays off as he gears up for his main argument: If life exists on other planets, it has to abide by the same laws as on Earth.

      Because the atoms in the Milky Way behave the same as in any other galaxy, Cockell argues that water in other galaxies will still be an abundant solvent, carbon should still be the preferred choice for self-replicating complex molecules, and the thermodynamics of life should still be the same. Sure, a cow on a hypothetical planet 10 times the diameter of Earth would need wider, stronger legs, but there is no reason to believe that replaying evolution on another planet would lead to unimaginable life forms. Rather, one should expect to see variations on the same theme.

      Cockell ends the book by celebrating the elegant equations that represent the relations between form and function. Rather than being a lifeless form of reductionism, equations, he argues, are our window into what physics renders possible (or impossible) for life to achieve. In equations, we express how our biosphere is full of symmetry, pattern, and law. Within them, we express the boldest claim of them all: that these limitations should be no less than universal.

Space, Still the Final Frontier

[These excerpts are from an editorial by Daniel N. Baker and Amal Chandran in the July 20, 2018, issue of Science.]

      …At the height of the Cold War in the 1960s and 1970s, space science and human space exploration offered a channel for citizens from the East and West to communicate and share ideas. Space has continued to be a domain of collaboration and cooperation among nations. The International Space Station has been a symbol of this notion for the past 20 years, and it is expected to be used by many nations until 2028. By contrast, there have been recent trends toward increased militarization of space with more—not less—fractionalization among nations. As well, the commercial sector is becoming a key player in exploring resource mining, tourism, colonization, and national security operations in space. Thus, space is becoming an arena for technological shows of economic and military force. However, nations are realizing that the Outer Space Treaty of 1967 needs to be reexamined in light of today’s new space race—a race that now includes many more nations. No one nation or group of nations has ever claimed sovereignty over the “high frontier” of space, and, simply put, this should never be allowed to happen….

      As was true during the Cold War, there are still political differences on Earth, but in space we should together seek to push forward the frontiers of knowledge with a common sense of purpose and most certainly in a spirit of peaceful cooperation.

A Path to Clean Water

[These excerpts are from an article by Klaus Kummerer, Dionysios D. Dionysiou, Oliver Olsson and Despo Fatta-Kassinos in the July 20, 2018, issue of Science.]

      Chemicals, including pharmaceuticals, are necessary for health, agriculture and food production, industrial production, economic welfare, and many other aspects of modern life. However, their widespread use has led to the presence of many different chemicals in the water cycle, from which they may enter the food chain. The use of chemicals will further increase with growth, health, age, and living standard of the human population. At the same time, the need for clean water will also increase, including treated wastewater for food production and high-purity water for manufacturing electronics and pharmaceuticals. Climate change is projected to further reduce water availability in sufficient quantity and quality. Considering the limits of effluent treatment, there is an urgent need for input prevention at the source and for the development of chemicals that degrade rapidly and completely in the environment.

      Conventional wastewater treatment has contributed substantially to the progress in health and environmental protection. However, as the diversity and volume of chemicals used have risen, water pollution levels have increased, and conventional treatment of wastewater and potable water has become less efficient. Even advanced wastewater and potable water treatments, such as extended filtration and activated carbon or advanced oxidation processes, have limitations, including increased demand for energy and additional chemicals; incomplete or, for some pollutants, no removal from the wastewater; and generation of unwanted products from parent compounds, which may be more toxic than their parent compounds. Microplastics are also not fully removed, and advanced treatment such as ozonation can lead to the increased transfer of antibiotic resistance genes, preferential enhancement of opportunistic bacteria, and strong bacterial population shifts.

      Furthermore, water treatment is far from universal. Sewer pipes can leak, causing wastewater and its constituents to infiltrate groundwater. During and after heavy rain events, wastewater and urban stormwater runoff are redirected to protect sewage treatment plants; this share of wastewater is not treated. Such events, as well as urban flooding, are likely to increase in the future because of climate change. Globally, 80% or more of wastewater is not treated.

      … given the ever-increasing list of chemicals that are introduced into the aquatic environment, attempts to assess harm and introduce thresholds will tend to lag new introductions. A preventive approach is therefore also needed. For example, giving companies relief from effluent charges if they use compounds from a list proven to be of low toxicity and readily mineralized—such as the abovementioned cellulose microbeads—could provide strong incentives for creating more sustainable products.

Prepare for Water Day Zero

[These excerpts are from an editorial by the editors in the August 2018 issue of Scientific American.]

      Earlier this year ominous headlines blared that Cape Town, South Africa, was headed for Day Zero—the date when the city’s taps would go dry because its reservoirs would become dangerously low on water. That day—originally expected in mid-April—has been postponed until at least 2019 as of this writing, thanks to water rationing and a welcome rainy season. But the conditions that led to this desperate situation will inevitably occur again, hitting cities all over the planet.

      As the climate warms, extreme droughts and vanishing water supplies will likely become more common. But even without the added impact of climate change, normal rainfall variation plays an enormous role in year-to-year water availability. These ordinary patterns now have extraordinary effects because urban populations have had a tremendous growth spurt: by 2050 the United Nations projects that two thirds of the world’s people will live in cities. Urban planners and engineers need to learn from past rainfall variability to improve their predictions and take future demand into account to build more resilient infrastructure.

      …since 2015 the region has been suffering from the worst drought in a century, and the water in those reservoirs dwindled perilously. Compounding the problem, Cape Town's population has grown substantially, increasing demand. The city actually did a pretty good job of keeping demand low by reducing leaks in the system, a major cause of water waste….But the government of South Africa was slow to declare a national disaster in the areas hit hardest by the drought, paving the way for the recent crisis.

      Cape Town is not alone. Since 2014 southeastern Brazil has been suffering its worst water shortage in 80 years, resulting from decreased rainfall, climate change, poor water management, deforestation and other factors. And many cities in India do not have access to municipal water for more than a few hours a day, if at all….

      In the U.S., the situation is somewhat better, but many urban centers still face water problems. California’s recent multiyear drought led to some of the state's driest years on record. Fortunately, about half of the state's urban water usage is for landscaping, so it was able to cut back on that fairly easily. But cities that use most of their water for more essential uses, such as drinking water, may not be so adaptable. In addition to the problems that drought, climate change and population growth bring, some cities face threats of contamination; crises such as the one in Flint, Mich., arose because the city changed the source of its water, causing lead to leach into it from pipes. If other cities are forced to change their water suppliers, they could face similar woes.

      Fortunately, steps can be taken to avoid urban water crises. In general, a “portfolio approach” that relies on multiple water sources is probably most effective. Cape Town has already begun implementing a number of water-augmentation projects, including tapping groundwater and building water-recycling plants. Many other cities will need to repair existing water infrastructure to cut down on leakage….

      The global community has an opportunity right now to take action to prevent a series of Day Zero crises. If we don’t act, many cities may soon face a time when there isn’t a drop to drink.

It’s Critical

[These excerpts are from an editorial by Steve Metz in the August 2018 issue of The Science Teacher.]

      One of the most important things students can learn in their science classes is the ability to think critically. Science content is important, of course. Our future scientists and engineers need deep understanding of the big ideas of science, as do all citizens. But students must also develop the life-long habit of critical, analytical thinking and evidence-based reasoning. Scientific facts and ideas are not enough. The ability to think critically gives these ideas meaning and is required for assessment of truth and falsehood.

      On the internet and social media the importance of critical thinking—and the notable lack thereof—are on full display. Clickbait promotes outrageous headlines untethered to reality. Statements are made with no concern for supporting evidence. We often accept a claim or counterclaim based on personal belief, reliance on authority, or downright prejudice rather than on evidence-based critical thinking.

      A recent study by researchers at Stanford found that students from middle school to college could not distinguish between a real and false news source, concluding that “Overall, young people’s ability to reason about the information on the Internet can be summed up in one word: bleak”….This is particularly troubling in a world where people—especially young people—increasingly depend on social media as their primary source of news and information.

      The unfortunate fact is that if left to itself, human thinking can be biased, distorted, uninformed, and incomplete. We often believe what we want to believe. Confirmation bias is real, especially in a world where media allow us to choose to view only what uncritically supports our own beliefs. The results include acceptance of fantastic conspiracy theories, pseudoscience, angry rhetoric, and untested assumptions—often leading to poor decision-making, both at the personal and societal level.

      Critical thinking is a difficult, higher-order skill that, like all such skills, requires intensive, deliberate practice. At its core is a healthy skepticism that questions everything, treats all conclusions as tentative, and sets aside interpretations that are not supported by multiple lines of reliable evidence….

Bringing Darwin Back

[These excerpts are from an article by Adam Piore in the August 2018 issue of Scientific American.]

      Straight talk about evolution in classrooms is less common than one might think. According to the most comprehensive study of public school biology teachers, just 28 percent implement the major recommendations and conclusions of the National Research Council, which call for them to “unabashedly introduce evidence that evolution has occurred and craft lesson plans so that evolution is a theme that unifies disparate topics in biology,” according to a 2011 Science article by Pennsylvania State University political scientists Michael Berkman and Eric Plutzer.

      Conversely, 13 percent of teachers (found in virtually every state in the Union and the District of Columbia) reported explicitly advocating creationism or intelligent design by spending at least an hour of class time presenting it in a positive light. Another 5 percent said they endorsed creationism in passing while answering student questions.

      The majority—60 percent of teachers—either attempted to avoid the topic of evolution altogether, quickly blew past it, allowed students to debate evolution, or “watered down” their lessons, Plutzer says. Many said they feared the reaction of students, parents and religious members of their community. And although only 2 percent of teachers reported avoiding the topic entirely, 17 percent, or roughly one in six teachers, avoided discussing human evolution. Many others simply raced through it.

      To confront these challenges, several organizations have launched new kinds of training sessions that are aimed at better preparing teachers for what they will face in the classroom. Moreover, a growing number of researchers have begun to examine the causes of these teaching failures and new ways to overcome them.

      Among many educators, a new idea has begun to take root: perhaps it is time to rethink the way evolution teachers grapple with religion (or choose not to grapple with it) in the classroom….

      For decades the most high-stakes, high-profile battles over evolution education were fought in the courts and state legislatures. The debate centered on, among other things, whether the subject itself could be banned or whether lawmakers could require that equal time be given to the biblical account of creation or the idea of “intelligent design.” Now, with those questions largely resolved—courts have overwhelmingly sided with those pushing to keep evolution in the classroom and creationism out of it—the battle lines have moved into the schools themselves….

      Today, many are now realizing, the far larger obstacle for the vast majority of ordinary science teachers is the legacy of acrimony left over from the decades of legal battles. In many communities, evolution education remains so charged with controversy that teachers either water down their lesson plans, devote as little time as possible to the subject or attempt to avoid it altogether.

      Meanwhile teachers in deeply religious communities such as Baconton face an additional challenge. Often they lack tools and methods that allow them to teach evolution in a way that does not force those students to take sides—a choice that usually does not go well for the scientists perceived to be at war with their community.

      Without such tools, even those teachers who do feel confident with the material often have trouble convincing students to listen to their lesson plans with an open mind—or to listen to them at all….

STEMM Education Should Get “HACD”

[These excerpts are from an article by Robert Root-Bernstein in the July 6, 2018, issue of Science.]

      If you’ve ever had a medical procedure, chances are you benefited from the arts. The stethoscope was invented by a French flautist/physician named René Laennec, who recorded his first observations of heart sounds in musical notation. The suturing techniques used for organ transplants were adapted from lace-making by another Frenchman, Nobel laureate Alexis Carrel. The methods (and some of the tools) required to perform the first open-heart surgeries were invented by an African-American innovator named Vivien Thomas, whose formal training was as a master carpenter.

      But perhaps you’re more of a technology lover. The idea of instantaneous electronic communication was the invention of one of America's most famous artists, Samuel Morse, who built his first telegraph on a canvas stretcher. Actress Hedy Lamarr collaborated with the avant-garde composer George Antheil to invent modern encryption of electronic messages. Even the electronic chips that run our phones and computers are fabricated using artistic inventions: etching, silk-screen printing, and photolithography.

      On 7 May 2018, the Board on Higher Education and Workforce of the U.S. National Academies of Sciences, Engineering, and Medicine (NASEM) released a report recommending that humanities, arts, crafts, and design (HACD) practices be integrated with science, technology, engineering, mathematics, and medicine (STEMM) in college and post-graduate curricula. The motivation for the study is the growing divide in American educational systems between traditional liberal arts curricula and job-related specialization….

      Because the ecology of education is so complex, the report concludes that there is no single best way to integrate arts and humanities with STEMM learning, nor any single type of pedagogical experiment or set of data that proves incontrovertibly that integration is the definitive answer to improved job preparedness. Nonetheless, a preponderance of evidence converges on the conclusion that incorporating HACD into STEMM pedagogies can improve STEMM performance.

      Large-scale statistical studies have demonstrated significant correlations between the persistent practice of HACD and various measures of STEMM achievement….STEMM professionals with avocations such as wood- and metalworking, printmaking, painting, and music composition are more likely to file and license patents and to found companies than those who lack such experience. Likewise, authors who publish high-impact papers are more likely to paint, sculpt, act, engage in wood- or metalworking, or pursue creative writing….

      Every scientist knows that correlation is not causation, but many STEMM professionals report that they actively integrate their HACD and STEMM practices….

The Power of Many

[These excerpts are from an article by Elizabeth Pennisi in the June 29, 2018, issue of Science.]

      Billions of years ago, life crossed a threshold. Single cells started to band together, and a world of formless, unicellular life was on course to evolve into the riot of shapes and functions of multicellular life today, from ants to pear trees to people. It’s a transition as momentous as any in the history of life, and until recently we had no idea how it happened.

      The gulf between unicellular and multicellular life seems almost unbridgeable. A single cell's existence is simple and limited. Like hermits, microbes need only be concerned with feeding themselves; neither coordination nor cooperation with others is necessary, though some microbes occasionally join forces. In contrast, cells in a multicellular organism, from the four cells in some algae to the 37 trillion in a human, give up their independence to stick together tenaciously; they take on specialized functions, and they curtail their own reproduction for the greater good, growing only as much as they need to fulfill their functions. When they rebel, cancer can break out….

      Multicellularity brings new capabilities. Animals, for example, gain mobility for seeking better habitat, eluding predators, and chasing down prey. Plants can probe deep into the soil for water and nutrients; they can also grow toward sunny spots to maximize photosynthesis. Fungi build massive reproductive structures to spread their spores….

      …The evolutionary histories of some groups of organisms record repeated transitions from single-celled to multicellular forms, suggesting the hurdles could not have been so high. Genetic comparisons between simple multicellular organisms and their single-celled relatives have revealed that much of the molecular equipment needed for cells to band together and coordinate their activities may have been in place well before multicellularity evolved. And clever experiments have shown that in the test tube, single-celled life can evolve the beginnings of multicellularity in just a few hundred generations—an evolutionary instant.

      Evolutionary biologists still debate what drove simple aggregates of cells to become more and more complex, leading to the wondrous diversity of life today. But embarking on that road no longer seems so daunting….

      …Some have argued that 2-billion-year-old, coil-shaped fossils of what may be blue-green or green algae—found in the United States and Asia and dubbed Grypania spiralis—or 2.5-billion-year-old microscopic filaments recorded in South Africa represent the first true evidence of multicellular life. Other kinds of complex organisms don’t show up until much later in the fossil record. Sponges, considered by many to be the most primitive living animal, may date back to 750 million years ago, but many researchers consider a group of frondlike creatures called the Ediacarans, common about 570 million years ago, to be the first definitive animal fossils. Likewise, fossil spores suggest multicellular plants evolved from algae at least 470 million years ago.

      Plants and animals each made the leap to multicellularity just once. But in other groups, the transition took place again and again. Fungi likely evolved complex multicellularity in the form of fruiting bodies—think mushrooms—on about a dozen separate occasions….The same goes for algae: Red, brown, and green algae all evolved their own multicellular forms over the past billion years or so….

See-through Solar Cells Could Power Offices

[These excerpts are from an article by Robert F. Service in the June 29, 2018, issue of Science.]

      Lance Wheeler looks at glassy skyscrapers and sees untapped potential. Houses and office buildings, he says, account for 75% of electricity use in the United States, and 40% of its energy use overall. Windows, because they leak energy, are a big part of the problem….

      A series of recent results points to a solution, he says: Turn the windows into solar panels. In the past, materials scientists have embedded light-absorbing films in window glass. But such solar windows tend to have a reddish or brown tint that architects find unappealing. The new solar window technologies, however, absorb almost exclusively invisible ultraviolet (UV) or infrared light. That leaves the glass clear while blocking the UV and infrared radiation that normally leak through it, sometimes delivering unwanted heat. By cutting heat gain while generating power, the windows “have huge prospects,” Wheeler says, including the possibility that a large office building could power itself.

      Most solar cells, like the standard crystalline silicon cells that dominate the industry, sacrifice transparency to maximize their efficiency, the percentage of the energy in sunlight converted to electricity. The best silicon cells have an efficiency of 25%. Meanwhile, a new class of opaque solar cell materials, called perovskites, is closing in on silicon with top efficiencies of 22%. Not only are the perovskites cheaper than silicon, they can also be tuned to absorb specific frequencies of light by tweaking their chemical recipe.

Tomorrow’s Earth

[These excerpts are from an editorial by Jeremy Berg in the June 29, 2018, issue of Science.]

      Our planet is in a perilous state. The combined effects of climate change, pollution, and loss of biodiversity are putting our health and well-being at risk. Given that human actions are largely responsible for these global problems, humanity must now nudge Earth onto a trajectory toward a more stable, harmonious state. Many of the challenges are daunting, but solutions can be found….

      Many of today’s challenges can be traced back to the “Tragedy of the Commons” identified by Garrett Hardin in his landmark essay, published in Science 50 years ago. Hardin warned of a coming population-resource collision based on individual self-interested actions adversely affecting the common good. In 1968, the global population was about 3.5 billion; since then, the human population has more than doubled, a rise that has been accompanied by large-scale changes in land use, resource consumption, waste generation, and societal structures….

      Through collective action, we can indeed achieve planetary-scale mitigation of harm. A case in point is the Montreal Protocol on Substances that Deplete the Ozone Layer, the first treaty to achieve universal ratification by all countries in the world. In the 1970s, scientists had shown that chemicals used as refrigerants and propellants for aerosol cans could catalyze the destruction of ozone. Less than a decade later, these concerns were exacerbated by the discovery of seasonal ozone depletion over Antarctica. International discussions on controlling the use of these chemicals culminated in the Montreal Protocol in 1987. Three decades later, research has shown that ozone depletion appears to be decreasing in response to industrial and domestic reforms that the regulations facilitated.

      More recent efforts include the Paris Agreement of 2015, which aims to keep a global temperature rise this century well below 2°C and to strengthen the ability of countries to deal with the impacts of climate change, and the United Nations Sustainable Development Goals. As these examples show, there is widespread recognition that we must reverse damaging planetary change for the sake of the next generation. However, technology alone will not rescue us. For changes to be willingly adopted by a majority of people, technology and engineering will have to be integrated with social sciences and psychology….Although human population growth is escalating, we have never been so affluent. Along with affluence comes increasing use of energy and materials, which puts more pressure on the environment. How can humanity maintain high living standards without jeopardizing the basis of our survival?

      As our “Tomorrow’s Earth” series…will highlight, rapid research and technology developments across the sciences can help to facilitate the implementation of potentially corrective options. There will always be varying expert opinions on what to do and how to do it. But as long as there are options, we can hope to find the right paths forward.

Greenhouse Gases

[This excerpt is from chapter nine of Caesar’s Last Breath by Sam Kean.]

      Greenhouse gases got their name because they trap incoming sunlight, albeit not directly. Most incoming sunlight strikes the ground first and warms it. The ground then releases some of that heat back toward space as infrared light. (Infrared light has a longer wavelength than visible light; for our purposes, it's basically the same as heat.) Now, if the atmosphere consisted of nothing but nitrogen and oxygen, this infrared heat would indeed escape into space, since diatomic molecules like N2 and O2 cannot absorb infrared light. Gases like carbon dioxide and methane, on the other hand, which have more than two atoms, can and do absorb infrared heat. And the more of these many-atomed molecules there are, the more heat they absorb. That’s why scientists single them out as greenhouse gases: they’re the only fraction of the air that can trap heat this way.

      Scientists define the greenhouse effect as the difference between a planet’s actual temperature and the temperature it would be without these gases. On Mars, the sparse CO2 coverage raises its temp by less than 10°F. On Venus, greenhouse gases add a whopping 900°F. Earth sits between these extremes. Without greenhouse gases, our average global temperature would be a chilly 0°F, below the freezing point of water. With greenhouse gases, the average temp remains a balmy 60°F. Astronomers often talk about how Earth orbits at a perfect distance from the sun — a “Goldilocks distance” where water neither freezes nor boils. Contra that cliché, it’s actually the combination of distance and greenhouse gases that gives us liquid H2O. Based on orbiting distance alone, we’d be Hoth.

      By far the most important greenhouse gas on Earth, believe it or not, is water vapor, which raises Earth’s temperature 40 degrees all by itself. Carbon dioxide and other trace gases contribute the remaining 20. So if water actually does more, why has CO2 become such a bogeyman? Mostly because carbon dioxide levels are rising so quickly. Scientists can look back at the air in previous centuries by digging up small bubbles trapped beneath sheets of ice in the Arctic. From this work, they know that for most of human history the air contained 280 molecules of carbon dioxide for every million particles overall. Then the Industrial Revolution began, and we started burning ungodly amounts of hydrocarbons, which release CO2 as a by-product. To give you a sense of the scale here, in an essay he wrote for his grandchildren in 1882, steel magnate Henry Bessemer boasted that Great Britain alone burned fifty-five Giza pyramids’ worth of coal each year. Put another way, he said, this coal could “build a wall round London of 200 miles in length, 100 feet high, and 41 feet 11 inches in thickness”—a mass not only equal to the whole cubic contents of the Great Wall of China, but sufficient to add another 346 miles to its length. And remember, this was decades before automobiles and modern shipping and the petroleum industry. Carbon dioxide levels reached 312 parts per million in 1950 and have since zoomed past 400.

      People who pooh-pooh climate change often point out, correctly, that CO2 concentrations have been fluctuating for millions of years, long before humans existed, sometimes peaking at levels a dozen times higher than those of today. It’s also true that Earth has natural mechanisms for removing excess carbon dioxide—a nifty negative feedback loop whereby ocean water absorbs excess CO2, converts it to minerals, and stores it underground. But when seen from a broader perspective, these truths deteriorate into half-truths. Concentrations of CO2 varied in the past, yes — but they’ve never spiked as quickly as in the past two centuries. And while geological processes can sequester CO2 underground, that work takes millions of years. Meanwhile, human beings have dumped roughly 2,500 trillion pounds of extra CO2 into the air in the past fifty years alone. (That's over 1.6 million pounds per second. Think about how little gases weigh, and you can appreciate how staggeringly large these figures are.) Open seas and forests will gobble up roughly half that CO2, but nature simply can’t bail fast enough to keep up.

      Things look even grimmer when you factor in other greenhouse gases. Molecule for molecule, methane absorbs twenty-five times more heat than carbon dioxide. One of the main sources of methane on Earth today is domesticated cattle: each cow burps up an average of 570 liters of methane per day and farts 30 liters more; worldwide, that adds up to 175 billion pounds of CH4 annually — some of which degrades due to natural processes, but much of which doesn’t. Other gases do even more damage. Nitrous oxide (laughing gas) sponges up heat three hundred times more effectively than carbon dioxide. Worse still are CFCs, which not only kill ozone but trap heat several thousand times better than carbon dioxide. Collectively CFCs account for one-quarter of human-induced global warming, despite having a concentration of just a few parts per billion in the air.

      And CFCs aren’t even the worst problem. The worst problem is a positive feedback loop involving water. Positive feedback—like the screech you hear when two microphones get acquainted—involves a self-perpetuating cycle that spirals out of control. In this case, excess heat from greenhouse gases causes ocean water to evaporate at a faster rate than normal. Water, remember, is one of the best (i.e., worst) greenhouse gases around, so this increased water vapor traps more heat. This causes temperatures to inch up a bit more, which causes more evaporation. This traps still more heat, which leads to more evaporation, and so on. Pretty soon it's Venus outside. The prospect of a runaway feedback loop shows why we should care about things like a small increase in CFC concentrations. A few parts per billion might seem too small to make any difference, but if chaos theory teaches us anything, it’s that tiny changes can lead to huge consequences.
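
[A quick sanity check on the arithmetic in the excerpt above: taking the stated 2,500 trillion pounds of CO2 over fifty years at face value, the per-second rate can be recomputed directly. This is only a sketch of the excerpt's own figures, not independent data.]

```python
# Recompute the excerpt's per-second CO2 rate from its stated totals.
total_pounds = 2_500e12              # "2,500 trillion pounds" of CO2
seconds = 50 * 365.25 * 24 * 3600    # fifty years, in seconds

rate = total_pounds / seconds
# Comes out to roughly 1.6 million pounds per second,
# consistent with the figure quoted in the excerpt.
print(f"{rate:,.0f} pounds of CO2 per second")
```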


[This excerpt is from chapter eight of Caesar’s Last Breath by Sam Kean.]

      …But Lorenz made us confront the fact that we might never be able to lift our veil of ignorance—that no matter how hard we stare into the eye of a hurricane, we might never understand its soul. Over the long run, that may be even harder to accept than our inability to bust up storms. Three centuries ago we christened ourselves Homo sapiens, the wise ape. We exult in our ability to think, to know, and the weather seems well within our grasp—it’s just pockets of hot and cold gases, after all. But we’d do well to remember our etymology: gas springs from chaos, and in ancient mythology chaos was something that not even the immortals could tame.


[This excerpt is from chapter seven of Caesar’s Last Breath by Sam Kean.]

      The world had never known a threat quite like fallout. Fritz Haber during World War I had also weaponized the air, but after a good stiff breeze, Haber's gases generally couldn't harm you. Fallout could— it lingered for days, months, years. One writer at the time commented about the anguish of staring at every passing cloud and wondering what dangers it might hold. “No weather report since the one given to Noah,” he said, “has carried such foreboding for the human race.”

      More than any other danger, fallout shook people out of their complacency about nuclear weapons. By the early 1960s, radioactive atoms (from both Soviet and American tests) had seeded every last square inch on Earth; even penguins in Antarctica had been exposed. People were especially horrified to learn that fallout hit growing children hardest. One fission product, strontium-90, tended to settle onto breadbasket states in the Midwest, where plants sucked it up into their roots. It then began traveling up the food chain when cows ate contaminated grass. Because strontium sits below calcium on the periodic table, it behaves similarly in chemical reactions. Strontium-90 therefore ended up concentrated in calcium-rich milk—which then got concentrated further in children’s bones and teeth when they drank it. One nuclear scientist who had worked at Oak Ridge and then moved to Utah, downwind of Nevada, lamented that his two children had absorbed more radioactivity from a few years out West than he had in eighteen years of fission research.

      Even ardent patriots, even hawks who considered the Soviet Union the biggest threat to freedom and apple pie the world had ever seen, weren’t exactly pro-putting-radioactivity-into-children's-teeth. Sheer inertia allowed nuclear tests to continue for a spell, but by the late 1950s American citizens began protesting en masse. The activist group SANE ran ads that read “No contamination without representation,” and within a year of its founding in 1957, SANE had twenty-five thousand members. Detailed studies of weather patterns soon bolstered their case, since scientists now realized just how quickly pollutants could spread throughout the atmosphere. Pop culture weighed in as well, with Spiderman and Hulk and Godzilla — each the victim of a nuclear accident—debuting during this era. The various protests culminated in the United States, the Soviet Union, and Great Britain signing a treaty to stop all atmospheric nuclear testing in 1963. (China continued until 1974, France until 1980.) And while this might seem like ancient history—JFK signed the test-ban treaty, after all—we're still dealing with the fallout of that fallout today, in several ways.

Nuclear Bombs

[This excerpt is from chapter seven of Caesar’s Last Breath by Sam Kean.]

      The Manhattan Project wasn’t a scientific breakthrough as much as an engineering triumph. All the essential physics had been worked out before the war even started, and the truly heroic efforts involved not blackboards and eurekas but elbow grease and backbreaking labor. Consider the refinement of uranium. Among other steps, workers had to convert over 20,000 pounds of raw uranium ore into a gas (uranium hexafluoride) and then whittle it down, almost atom by atom, to 112 pounds of fissionable uranium-235. This required building a $500 million plant ($6.6 billion today) in Oak Ridge, Tennessee, that sprawled across forty-four acres and used three times as much electricity as all of Detroit. All that fancy theorizing about bombs would have gone for naught if not for this unprecedented investment.

      Plutonium was no picnic, either. Making plutonium (it doesn’t exist in nature) proved every bit as challenging and costly as refining uranium. Detonating the stuff was an even bigger hassle. Although plutonium is quite radioactive —inhaling a tenth of a gram of it will kill most adults—the small amount of plutonium that scientists at Los Alamos were working with wouldn’t undergo a chain reaction and explode unless they increased its density dramatically. Plutonium metal is already pretty dense, though, so the only plausible way to do this was by crunching it together with a ring of explosives. Unfortunately, while it's easy to blow something apart with explosives, it's well-nigh impossible to collapse it into a smaller shape in a coherent way. Los Alamos scientists spent many hours screaming at one another over the details.

      By spring 1945, they’d finally sketched out a plausible setup for the explosives. But the idea needed confirming, so they scheduled the famous Trinity test for July 16, 1945. Responsibility for arming the device—nicknamed the Gadget—fell to Louis Slotin, a young Canadian physicist who had a reputation for being foolhardy (perfect for bomb work). After he'd climbed the hundred-foot Trinity tower and assembled the bomb, Slotin and his bosses accepted a $2 billion receipt for it and drove off to watch from the base camp ten miles distant.

      At 5:30 a.m. the ring of explosives went off and crushed the Gadget’s grapefruit-sized plutonium core into a ball the size of a peach pit. A tiny dollop in the middle— beryllium mixed with polonium—then kicked out a few subatomic particles called neutrons, which really got things hopping. These neutrons stuck to nearby plutonium atoms, rendering them unstable and causing them to fission, or split. This splitting released loads of energy: the kick from a single plutonium atom can make a grain of sand jump visibly, even though a plutonium atom is a hundred thousand million billion times smaller. Crucially, each split also released more neutrons. These neutrons then glommed onto other plutonium atoms, rendered them unstable, and caused more fissioning.

      Within a few millionths of a second, eighty generations of plutonium atoms had fissioned, releasing an amount of energy equal to fifty million pounds of TNT. What happened next gets complicated, but all that energy vaporized everything within a chip shot of the bomb — the metal tower, the sand below, every lizard and scorpion. More than vaporized, actually. The temperature near the core spiked so high, to tens of millions of degrees, that electrons within the vapor were torn loose from their atoms and began to roam around on their own, like fireflies. This produced a new state of matter called a plasma, a sort of ubergas most commonly found inside the nuclear furnace of stars.

      Given the incredible energies here, even sober scientists like Robert Oppenheimer (director of the Manhattan Project) had seriously considered the possibility that Trinity would ignite the atmosphere and fry everything on Earth's surface. That didn’t happen, obviously, but each of the several hundred men who watched that morning — some of whom slathered their faces in sunscreen, and shaded their eyes behind sunglasses—knew they’d unleashed a new type of hell on the world. After Trinity quieted down Oppenheimer famously recalled a line from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” Less famously, Oppenheimer also recalled something that Alfred Nobel once said, about how dynamite would render war so terrible that humankind would surely give it up. How quaint that wish seemed now, in the shadow of a mushroom cloud.

      After the attacks on Hiroshima and Nagasaki in early August, most Manhattan Project scientists felt a sense of triumph. Over the next few months, however, the stories that emerged from Japan left them with a growing sense of revulsion. They'd known their marvelously engineered bombs would kill tens of thousands of people, obviously. But the military had already killed comparable numbers of civilians during the firebombings of Dresden and Tokyo. (Some historians estimate that more human beings died during the six hours of the Tokyo firebombing—at least 100,000—than in any attack in history, then or since.)

      What appalled most scientists about Hiroshima and Nagasaki, then, wasn’t the immediate body count but the lingering radioactivity. Most physicists before this had a rather cavalier attitude about radioactivity; stories abound about their macho disdain for the dangers involved. Japan changed that. Fallout from the bomb continued to poison people for months afterward—killing their cells, ulcerating their skin, turning even the salt in their blood and the fillings in their teeth into tiny radioactive bombs.

Night Light

[This excerpt is from the interlude between the sixth and seventh chapter of Caesar’s Last Breath by Sam Kean.]

      For some context here, recall that Joseph Priestley’s Lunar Society met on the Monday nearest the full moon because its members needed moonlight to find their way home. But Priestley’s generation was among the last to have to worry about such problems. Several of the gases that scientists discovered in the late 1700s burned with uncanny brightness, and within a half century of Priestley’s death in 1804, gas lighting had become standard throughout Europe. Edison’s lightbulb gets all the historical headlines, but it was coal gas that first eradicated darkness in the modern world.

      Human beings had artificial lighting before 1800, of course—wood fires, candles, oil lamps. But however romantic bonfires and candlelit dinners seem nowadays, those are actually terrible sources of light. Candles especially throw off a sickly, feeble glow that, as one historian joked, did little but “make darkness visible.” (A French saying from the time captured the sentiment in a different way: “By candlelight, a goat is ladylike.”) Not everyone could afford candles on a daily basis anyway—imagine if all your lightbulbs needed replacing every few nights. Larger households and businesses might go through 2,500 candles per year. To top it off, candles released noxious smoke indoors, and it was all too easy to knock one over and set your house or factory ablaze.

      In retrospect coal gas seems the obvious solution to these problems. Coal gas is a heterogeneous mix of methane, hydrogen, and other gases that emerge when coal is slowly heated. Both methane and hydrogen burn brilliantly alone, and when burned together, they produce light dozens of times bolder and brighter than candlelight. But as with laughing gas, people considered coal gas little more than a novelty at first. Hucksters would pack crowds into dark rooms for a halfpenny each and dazzle them with gas pyrotechnics. And it wasn’t just the brilliance that impressed. Because they didn’t depend on wicks, coal-gas flames could defy gravity and leap out sideways or upside down. Some showmen even combined different flames to make flowers and animal shapes, somewhat like balloon animals.

      Gradually, people realized that coal gas would make fine interior lighting. Gas jets burned steadily and cleanly, without a candle’s flickering and smoking, and you could secure gas fixtures to the wall, decreasing the odds of things catching fire. An eccentric engineer named William Murdoch—the same man who invented a steam locomotive in James Watt's factory, before Watt told him to knock it off—installed the world’s first gas-lighting system in his home in Birmingham in 1792. Several local businessmen were impressed enough to install gas lighting in their factories shortly thereafter.

      After these early adopters, city governments began using coal gas to light their streets and bridges. Cities usually stored the gas inside giant tanks (called gasometers) and piped it through underground mains, much like water today. London alone had forty thousand gas street lamps by 1823, and other cities in Europe followed suit. (Paris didn't want bloody London usurping its reputation as the city of light, after all.) For the first time in history, human settlements would have been visible from space by night.

      Public buildings came online next, including railway stations, churches, and especially theaters, which benefitted more than probably any other institution. With more light available, theater directors could position actors farther back onstage, allowing for more depth of movement. A related technology called limelight—which involved streaming oxygen and hydrogen over burning quicklime—provided even brighter light and led to the first spotlights. Because the audience could see them clearly now, actors could also get by with less makeup and could gesture in more realistic, less histrionic ways.

      Even villages in rural England had rudimentary gas mains by the mid-1800s, and the spread of cheap, consistent lighting changed society in several ways. Crime dropped, since thugs and lowlifes could no longer hide under the cloak of darkness. Nightlife exploded as taverns and restaurants began staying open later. Factories instituted regular working hours since they no longer had to shut down after sunset in the winter, and some manufacturers operated all night to churn out goods.

Science in 1900

[This excerpt is from chapter six of Caesar’s Last Breath by Sam Kean.]

      Chemists in the 1780s fulfilled maybe the oldest dream of humankind, to snap the tethers of gravity and take flight. A century later a physicist solved one of humankind’s most enduring mysteries, why the sky is blue. So you can forgive scientists for having a pretty lofty view of themselves circa 1900, and for assuming that they had a full reckoning of how air worked. Thanks to chemists from Priestley to Ramsay, they now knew all its major components. Thanks to the ideal gas law, they now knew how air responded to almost any change in temperature and pressure you could throw at it. Thanks to Charles and Gay-Lussac and other balloonists, they now knew what the air was like even miles above our heads. There were a few loose ends, sure, in fields like atomic physics and meteorology. But all scientists needed to do was extrapolate from known gas laws to cover those cases. They must have felt achingly close to figuring out their world.

      Guess what. Scientists not only ran into difficulties tying those loose ends together, they eventually had to construct whole new laws of nature, out of sheer desperation, to make sense of what was going on. Atomic physics of course led to the absurdities of quantum mechanics and the horrors of nuclear warfare. And tough as it is to believe, meteorology, one of the sleepiest branches of science around, stirred to life chaos theory, one of the most profound and troubling currents in twentieth-century thought.


[This excerpt is from the interlude between the fifth and sixth chapters of Caesar’s Last Breath by Sam Kean.]

      The story of Bessemer’s discoveries in this field is long and convoluted, and there’s no room to get into it all here. (It also involved several other chemists and engineers, most of whom he stingily refused to credit in later years.) Suffice it to say that through a series of happy accidents and shrewd deductions, Bessemer figured out two shortcuts to making steel.

      He’d start by melting down cast iron, same as most smelters. He then added oxygen to the mix, to strip out the carbon. But instead of using iron ore to supply the oxygen atoms, like everyone else, Bessemer used blasts of air—a cheaper, faster substitute. The next shortcut was even more important. Rather than mix in lots and lots of oxygen gas and strip all the carbon out of his molten cast iron, Bessemer decided to stop the air flow partway through. As a result, instead of carbon-free wrought iron, he was left with somewhat-carbon-infused steel. In other words, Bessemer could make steel directly, without all the extra steps and expensive material.

      He’d first investigated this process by bubbling air into molten cast iron with a long blowpipe. When this worked, he arranged for a larger test at a local foundry: seven hundred pounds of molten iron in a three-foot-wide cauldron. Rather than rely on his own lungs, this time he had several steam engines blast compressed air through the mixture. The workers at the foundry gave Bessemer pitying looks when he explained that he wanted to make steel with puffs of air. And indeed, nothing happened for ten long minutes that afternoon. All of a sudden, he later recalled, “a succession of mild explosions” rocked the room. White flames erupted from the cauldron and molten iron whooshed out in “a veritable volcano,” threatening to set the ceiling on fire.

      After waiting out the pyrotechnics, Bessemer peered into the cauldron. Because of the sparks, he hadn't been able to shut down the blasts of air in time, and the batch was pure wrought iron. He grinned anyway: here was proof his process worked. All he had to do now was figure out exactly when to cut the airflow off, and he’d have steel.

      At this point things moved quickly for Bessemer. He went on a patent binge over the next few years, and the foundry he set up managed to screw down the cost of steel production from around £40 per ton to £7. Even better, he could make steel in under an hour, rather than weeks. These improvements finally made steel available for large-scale engineering projects—a development that, some historians claim, ended the three-thousand-year-old Iron Age in one stroke, and pushed humankind into the Age of Steel.

      Of course, that’s a retrospective judgment. At the time, things weren’t so rosy, and Bessemer actually had a lot of trouble persuading people to trust his steel. The problem was, each batch of steel varied significantly in quality, since it proved quite tricky to judge when to stop the flow of air. Worse, the excess phosphorus in English iron ore left most batches brittle and prone to fracturing at cold temperatures. (Bessemer, the lucky devil, had run his initial tests on phosphorus-free ore from Wales; otherwise they too would have failed.) Other impurities introduced other structural problems, and each snafu sapped the public's confidence in Bessemer steel a little more. Like Thomas Beddoes with gaseous medicine, colleagues and competitors accused Bessemer of overselling steel, even of perpetrating a fraud.

      Over the next decade Bessemer and others labored with all the fervor of James Watt to eliminate these problems, and by the 1870s steel was, objectively, a superior metal compared to cast iron—stronger, lighter, more reliable. But you can’t really blame engineers for remaining wary. Steel seemed too good to be true—it seemed impossible that puffs of air could really toughen up a metal so much—and years of problems with steel had corroded their faith anyway….

The Final Mysterians

[These excerpts are from an article by Michael Shermer in the July 2018 issue of Scientific American.]

      …For millennia, the greatest minds of our species have grappled to gain purchase on the vertiginous ontological cliffs of three great mysteries—consciousness, free will and God—without ascending anywhere near the thin air of their peaks. Unlike other inscrutable problems, such as the structure of the atom, the molecular basis of replication and the causes of human violence, which have witnessed stunning advancements of enlightenment, these three seem to recede ever further away from understanding, even as we race ever faster to catch them in our scientific nets.

      …I contend that not only consciousness but also free will and God are mysterian problems—not because we are not yet smart enough to solve them but because they can never be solved, not even in principle, relating to how the concepts are conceived in language. Call those of us in this camp the “final mysterians.”

      …It is not possible to know what it is like to be a bat (in philosopher Thomas Nagel's famous thought experiment), because if you altered your brain and body from humanoid to batoid, you would just be a bat, not a human knowing what it feels like to be a bat….

      …We are not inert blobs of matter bandied about the pinball machine of life by the paddles of nature's laws; we are active agents within the causal net of the universe, both determined by it and helping to determine it through our choices….

      If the creator of the universe is supernatural—outside of space and time and nature's laws—then by definition, no natural science can discover God through any measurements made by natural instruments. By definition, this God is an unsolvable mystery. If God is part of the natural world or somehow reaches into our universe from outside of it to stir the particles (to, say, perform miracles like healing the sick), we should be able to quantify such providential acts. This God is scientifically soluble, but so far all claims of such measurements have yet to exceed statistical chance. In any case, God as a natural being who is just a whole lot smarter and more powerful than us is not what most people conceive of as deific.

      Although these final mysteries may not be solvable by science, they are compelling concepts nonetheless, well deserving of our scrutiny if for no other reason than it may lead to a deeper understanding of our nature as sentient, volitional, spiritual beings.

The Science of Anti-Science Thinking

[These excerpts are from an article by Douglas T. Kenrick, Adam B. Cohen, Steven L. Neuberg and Robert B. Cialdini in the July 2018 issue of Scientific American.]

      On a regular basis, government decision makers enact policies that fail to heed decades of evidence on climate change. In public opinion surveys, a majority of Americans choose not to accept more than a century of evidence on evolution by natural selection. Academic intellectuals put the word “science” in quotes, and members of the lay public reject vaccinations for their children.

      Scientific findings have long met with ambivalent responses: A welcome mat rolls out instantly for horseless buggies or the latest smartphones. But hostility arises just as quickly when scientists’ findings challenge the political or religious status quo. Some of the British clergy strongly resisted Charles Darwin’s theory of evolution by natural selection. Samuel Wilberforce, bishop of Oxford, asked natural selection proponent Thomas Huxley, known as “Darwin’s bulldog,” on which side of his family Huxley claimed descent from an ape.

      In Galileo’s time, officials of the Roman Catholic Church, well-educated and progressive intellectuals in most respects, expressed outrage when the Renaissance scientist reported celestial observations that questioned the prevailing belief that Earth was the center of the universe. Galileo was placed under house arrest and forced to recant his views as heresy.

      In principle, scientific thinking should lead to decisions based on consideration of all available information on a given question. When scientists encounter arguments not firmly grounded in logic and empirical evidence, they often presume that purveyors of those alternative views either are ignorant of the facts or are attempting to discourage their distribution for self-serving reasons—tobacco company executives suppressing findings linking tobacco use to lung cancer, for instance. Faced with irrational or tendentious opponents, scientists often grow increasingly strident. They respond by stating the facts more loudly and clearly in the hope that their interlocutors will make more educated decisions.

      Several lines of research, however, reveal that simply presenting a litany of facts does not always lead to more objective decision making. Indeed, in some cases, this approach might actually backfire. Human beings are intelligent creatures, capable of masterful intellectual accomplishments. Unfortunately, we are not completely rational decision makers….

      Although natural selection stands out as one of the most solidly supported scientific theories ever advanced, the average citizen has not waded through textbooks full of evidence on the topic. In fact, many of those who have earned doctorates in scientific fields, even for medical research, have never taken a formal course in evolutionary biology. In the face of these challenges, most people rely on mental shortcuts or the pronouncements of experts, both strategies that can lead them astray. They may also rely—at their own peril—on intuition and gut instinct….

      Fear increases the tendency toward conformity. If you wish to persuade others to reduce carbon emissions, take care whom you scare: a message that arouses fear of a dystopian future might work well for an audience that accepts the reality of climate change but is likely to backfire for a skeptical audience….

Fish Bombs

[These excerpts are from an article by Katherine Kornei in the July 2018 issue of Scientific American.]

      Rogue fishers around the world toss explosives into the sea and scoop up bucketloads of stunned or dead fish, a practice that is illegal in many nations and can destroy coral reefs and wreak havoc on marine biodiversity. Catching perpetrators amid the vastness of the ocean has long proved almost impossible, but researchers working in Malaysia have now adapted acoustic sensors—originally used to locate urban gunfire—to pinpoint these marine blasts within tens of meters.

      Growing human populations and international demand for seafood are pushing fishers to increase their catches….Shock waves from the explosions rupture the fishes’ swim bladders, immobilizing the fish and causing some to float to the surface. And the bombs themselves are easy to make: ammonium nitrate (a common fertilizer) and diesel fuel are mixed in an empty bottle and topped with a detonator and waterproof fuse….

      Malaysian officials are proposing an initiative to promote fish farming….

Auto Mileage Rollback Is a Sick Idea

[These excerpts are from an article by Rob Jackson in the July 2018 issue of Scientific American.]

      Seven years ago representatives from General Motors, Ford, Chrysler and other car manufacturers joined President Barack Obama to announce historic new vehicle mileage standards. The industry-supported targets would have doubled the fuel efficiency of cars and light trucks in the U.S. to 54.5 miles per gallon by 2025.

      But in April the Environmental Protection Agency announced plans to roll back part or all of the new standards, saying they were “wrong” and based on “politically charged expediency.” Let me explain why this terrible idea should unify Republicans and Democrats in opposition. The rollback is going to harm us economically and hurt us physically.

      The Obama-era standards made sense for many reasons, starting with our wallets. It is true that each vehicle would initially cost $1,000 to $2,000 more as manufacturers researched lighter materials and built stronger vehicles. In return, though, we would save about $3,000 to $5,000 in gas over the life of each vehicle, according to a 2016 report by Consumers Union. (Because gas prices were higher in 2011 and 2012, when the standards were proposed, estimated savings back then were significantly higher—about $8,000 per car. Prices have risen somewhat since 2016.) This research will also help auto companies compete internationally.

      National security and trade deficits are also reasons to keep the existing standards. Despite a growing domestic oil industry, the U.S. imported more than 10 million barrels of oil daily last year, about a third of it coming from OPEC nations. Imports added almost $100 billion to our trade deficit, sending hard-earned dollars to Canada, Saudi Arabia, Venezuela, Iraq and Colombia. Better gas mileage could eliminate half of our OPEC imports. It would also make our country safer and more energy-independent.

      The biggest reason to support the fuel-efficiency standards, however, is the link between vehicle exhaust and human health. More than four in 10 Americans—some 134 million of us—live in regions with unhealthy particulate pollution and ozone in the air. That dirty air makes people sick and can even kill them. A 2013 study by the Massachusetts Institute of Technology estimated that about 200,000 Americans now die every year from air pollution. The number-one cause of those deaths—more than 50,000 of them—is air pollution from road traffic….

      Here is what a rollback in mileage standards would mean: Thousands of Americans would die unnecessarily from cardiovascular and other diseases every year. Our elderly would face more bronchitis and emphysema. More children would develop asthma—a condition that, according to an estimate by the Centers for Disease Control and Prevention, affects more than one in 12. Millions of your sons and daughters have it. My son does, too.

      Rarely in my career have I seen a proposal more shortsighted and counterproductive than this one. Please say there is still time to change our minds.

How Did Homo sapiens Evolve?

[This excerpt is from an editorial by Julia Galway-Witham and Chris Stringer in the June 22, 2018, issue of Science.]

      Over the past 30 years, understanding of Homo sapiens evolution has advanced greatly. Most research has supported the theory that modern humans had originated in Africa by about 200,000 years ago, but the latest findings reveal more complexity than anticipated. They confirm interbreeding between H. sapiens and other hominin species, provide evidence for H. sapiens in Morocco as early as 300,000 years ago, and reveal a seemingly incremental evolution of H. sapiens cranial shape. Although the cumulative evidence still suggests that all modern humans are descended from African H. sapiens populations that replaced local populations of archaic humans, models of modern human origins must now include substantial interactions with those populations before they went extinct. These recent findings illustrate why researchers must remain open to challenging the prevailing theories of modern human origins.

      Although living humans vary in traits such as body size, shape, and skin color, they clearly belong to a single species, H. sapiens, characterized by shared features such as a narrow pelvis, a large brain housed in a globular braincase, and reduced size of the teeth and surrounding skeletal architecture. These traits distinguish modern humans from other now-extinct humans (members of the genus Homo), such as the Neandertals in western Eurasia (often classified as H. neanderthalensis) and, by inference, from the Denisovans in eastern Eurasia (a genetic sister group of Neandertals). How did H. sapiens relate to these other humans in evolutionary and taxonomic terms, and how do those relationships affect evolving theories of modern human origins?

      By the 1980s, the human fossil record had grown considerably, but it was still insufficient to demonstrate whether H. sapiens had evolved from local ancestors across much of the Old World (multiregional evolution) or had originated in a single region and then dispersed from there (single origin). In 1987, a study using mitochondrial DNA from living humans indicated a recent and exclusively African origin for modern humans. In the following year one of us coauthored a review of the fossil and genetic data, expanding on that discovery and supporting a recent African origin (RAO) for our species.

      The RAO theory posits that by 60,000 years ago, the shared features of modern humans had evolved in Africa and, via population dispersals, began to spread from there across the world. Some paleoanthropologists have resisted this single-origin view and the narrow definition of H. sapiens to exclude fossil humans such as the Neandertals. In subsequent decades, genetic and fossil evidence supporting the RAO theory continued to accumulate, such as in studies of the genetic diversity of African and non-African modern humans and the geographic distribution of early H. sapiens fossils, and this model has since become dominant within mainstream paleoanthropology. In recent years, however, new fossil discoveries, the growth of ancient DNA research, and improved dating techniques have raised questions about whether the RAO theory of H. sapiens evolution needs to be revised or even abandoned.

      Different views on the amount of genetic and skeletal shape variation that is reasonably subsumed within a species definition directly affect developing models of human origins. For many researchers, the anatomical distinctiveness of modern humans and Neandertals has been sufficient to place them in separate species; for example, variation in traits such as cranial shape and the anatomy of the middle and inner ears are greater between Neandertals and H. sapiens than between well-recognized species of apes. Yet, Neandertal genome sequences and the discovery of past interbreeding between Neandertals and H. sapiens provide support for their belonging to the same species under the biological species concept, and this finding has revived multiregionalism. The recent recognition of Neandertal art further narrows—or for some researchers removes—the perceived behavioral gap between the two supposed species.

      These challenges to the uniqueness of H. sapiens were a surprise to many and question assignments of hominin species in the fossil record. However, the limitations of the biological species concept have long been recognized. If it were to be implemented rigorously, many taxa within mammals—such as those in Equus, a genus that includes horses, donkeys, and zebras—would have to be merged into a single species. Nevertheless, in our view, species concepts need to have a basis in biology. Hence, the sophisticated abilities of Neandertals, however interesting, are not indicative of their belonging to H. sapiens. The recently recognized interbreeding between the late Pleistocene lineages of H. sapiens, Neandertals, and Denisovans is nonetheless important, and the discovery of even more compelling evidence to support Neandertals and modern humans belonging to the same species would have a profound effect on models of the evolution of H. sapiens.

Students Report Less Sex, Drugs

[This brief article by Jeffrey Brainard is in the June 22, 2018, issue of Science.]

      Fewer U.S. high school students report having sex and taking illicit drugs, but other risky activity remains alarmingly high, according to a biennial report released last week by the U.S. Centers for Disease Control and Prevention. Even as sexual activity declined, fewer reported using condoms during their most recent intercourse, increasing their risks of HIV and other sexually transmitted diseases. And nearly one in seven students reported misusing prescription opioids, for example by taking them without a prescription—a behavior that can lead to future injection drug use and risks of overdosing and contracting HIV. Gay, lesbian, and bisexual students reported experiencing significantly higher levels of violence in school, including bullying and sexual violence, and higher risks for suicide, depression, substance use, and poor academic performance than other students did. Nearly 15,000 students took the survey.

Emerging Stem Cell Ethics

[These excerpts are from an editorial by Douglas Sipp, Megan Munsie and Jeremy Sugarman in the June 22, 2018, issue of Science.]

      It has been 20 years since the first derivation of human embryonic stem cells. That milestone marked the start of a scientific and public fascination with stem cells, not just for their biological properties but also for their potentially transformative medical uses. The next two decades of stem cell research animated an array of bioethical debates, from the destruction of embryos to derive stem cells to the creation of human-animal hybrids. Ethical tensions related to stem cell clinical translation and regulatory policy are now center stage….Care must be taken to ensure that entry of stem cell-based products into the medical marketplace does not come at too high a human or monetary price.

      Despite great strides in understanding stem cell biology, very few stem cell-based therapeutics are as yet used in standard clinical practice. Some countries have responded to patient demand and the imperatives of economic competition by promulgating policies to hasten market entry of stem cell-based treatments. Japan, for example, created a conditional approvals scheme for regenerative medicine products and has already put one stem cell treatment on the market based on preliminary evidence of efficacy. Italy provisionally approved a stem cell product under an existing European Union early access program. And last year, the United States introduced an expedited review program to smooth the path for investigational stem cell-based applications, at least 16 of which have been granted already. However, early and perhaps premature access to experimental interventions has uncertain consequences for patients and health systems.

      A staggering amount of public money has been spent on stem cell research globally. Those seeking to develop stem cell products may now not only leverage that valuable body of resulting scientific knowledge but also find that their costs for clinical testing are markedly reduced by deregulation. How should this influence affordability and access? The state and the taxpaying public's interests should arguably be reflected in the pricing of stem cell products that were developed through publicly funded research and the regulatory subsidies. Detailed programs for recouping taxpayers' investments in stem cell research and development must be established.

      Rushing new commercial stem cell products into the market also entails considerations inherent to the ethics of using pharmaceuticals and medical devices. For example, once a product is approved for a given indication, it becomes possible for physicians to prescribe it for “off-label use.” We have already witnessed the untoward effects of the elevated expectations that stem cells can serve as a kind of cellular panacea, a misconception that underlies the direct-to-consumer marketing of unproven uses of stem cells. Once off-label use of approved products becomes an option, there may be a new flood of untested therapeutic claims with which to contend….

      The new frontiers of stem cell-based medicine also raise questions about the use of fast-tracked products. In countries where healthcare is not considered a public good, who should pay for post-market efficacy testing? Patients already bear a substantial burden of risk when they volunteer for experimental interventions. Frameworks that ask them to pay to participate in medical research warrant much closer scrutiny than has been seen thus far.

      …For stem cell treatments, attaining this balance will require frank and open discussion between all stakeholders, including the patients it seeks to benefit and the taxpayers who make it possible.

A Good Day’s Work

[This excerpt is from an editorial by Steve Metz in the June 2018 issue of The Science Teacher.]

      …It is easy to drift through science and math classes wondering, “Why do I need to learn this?” Many do not see a college science major or science career in their future, making the need to learn science less than obvious. For underrepresented students of color and young women—who often lack exposure to STEM role models, especially in engineering and the physical sciences—a vision of a STEM career can seem even more remote.

      Of course, the best reason for learning science is that knowing science is important in and of itself, as part of humankind’s search for understanding. Scientific knowledge makes everything—a walk in the woods, reading a newspaper, a family visit to a science museum or beach—simply more interesting. The skepticism and critical thinking that are part of the scientific world view are essential for informed civic participation and evidence-based social discourse.

      But the next-best answer to the question—why do I need to learn this?—may strike students as more practical and persuasive: STEM careers can provide excellent employment opportunities with good salaries and better-than-average job security. The Bureau of Labor Statistics reports that among occupations surveyed, all 20 of the fastest growing and 18 of the 20 highest paying are STEM fields.

      As science teachers, we need to get the word out: Learning science, engineering, and mathematics can lead to a life's work that is interesting, rewarding, and meaningful. Our classes must encourage students to pursue these fields and provide them with the skills necessary for success.

Copper Hemispheres

[This excerpt is from the second chapter of Caesar’s Last Breath by Sam Kean.]

      In general, a summons to stand in judgment before the Holy Roman Emperor was not an occasion for celebration. But Otto Gericke, the mayor of Magdeburg, Germany, felt his confidence soar as his cart rattled south. After all, he was about to put on perhaps the greatest science experiment in history.

      Gericke, a classic gentleman-scientist, was obsessed with the idea of vacuums, enclosed spaces with nothing inside. All most people knew about vacuums then was Aristotle’s dictum that nature abhors and will not tolerate them. But Gericke suspected nature was more open-minded than that, and he set about trying to create a vacuum in the early 1650s. His first attempt involved evacuating water from a barrel, using the local fire brigade’s water pump. The barrel was initially full of water and perfectly sealed, so that no air could get in. Pumping out the water should therefore have left only empty space behind. Alas, after a few minutes of pumping, the barrel staves leaked and air rushed in anyway. He next tried evacuating a hollow copper sphere using a similar setup. It held up longer, but halfway through the process the sphere imploded, collapsing with a bang that left his ears ringing.

      The violence of the implosion startled Gericke — and set his mind racing. Somehow, mere air pressure—or more precisely, the difference in air pressure between the inside and outside of the sphere—had crumpled it. Was a gas really strong enough to crunch metal? It didn't seem likely. Gases are so soft, after all, so pillowy. But Gericke could see no other answer, and when his mind made that leap, it proved to be a turning point in the human relationship with gases. For perhaps the first time in history, someone realized just how strong gases are, how muscular, how brawny. Conceptually, it was but a short step from there to steam power and the Industrial Revolution.

      Before the revolution could begin, however, Gericke had to convince his contemporaries how powerful gases were, and luckily he had the scientific skills to pull this off. In fact, a demo he put together over the next decade began stirring up such wild rumors in central Europe that Emperor Ferdinand III eventually summoned Gericke to court to see for himself.

      On the 220-mile trip south, Gericke carried two copper hemispheres; they fit together to form a twenty-two-inch spherical shell. The walls of the hemispheres were thick enough to withstand crumpling this time, and each half had rings welded to it where he could affix a rope. Most important, Gericke had bored a hole into one hemisphere and fitted it with an ingenious air valve, which allowed air to flow through in one direction only.

      Gericke arrived at court to find thirty horses and a sizable crowd awaiting him. These were the days of drawing and quartering convicts, and Gericke announced to the crowd that he had a similar ordeal planned for his copper sphere: he believed that Ferdinand’s best horses couldn’t tear the two halves apart. You could forgive the assembled for laughing: without someone holding them together, the hemispheres fell apart from their own weight. Gericke ignored the naysayers and reached into his cart for the key piece of equipment, a sort of cylinder on a tripod. It had a tube coming off it, which he attached to the one-way valve on the copper sphere. He then had several local blacksmiths—the burliest men he could find—start cranking the machine’s levers and pistons. Every few seconds it wheezed. Gericke called the contraption an “air pump.”

      It worked like this. Inside the air pump’s cylinder was a special airtight chamber fitted with a piston that moved up and down. At the start of the process the piston was depressed, meaning the chamber had no air inside. Step one involved a blacksmith hoisting the piston up. Because the chamber and copper sphere were connected via the tube, air inside the sphere could now flow into the chamber. Just for argument's sake, let's say there were 800 molecules of air inside the sphere to start. (Which is waaaaay too low, but it's a nice round number.) After the piston got hoisted up, maybe half that air would flow out. This left 400 molecules in the sphere and 400 in the chamber.

      Now came the key step. Gericke closed the one-way valve on the sphere, trapping 400 molecules in the sphere and isolating the other 400 in the chamber. He then opened another, separate valve on the chamber and had the blacksmith stamp the piston down. This recollapsed the chamber and expelled all 400 molecules. The net result was that Gericke had now pumped out half the original air in the sphere and expelled it to the outside world.

      Equally important, Gericke was now back at the starting point, with the piston depressed. As a result he could reopen the valve on the sphere and repeat the process. This time 200 molecules (half the remaining 400) would flow into the chamber, with the other 200 remaining in the sphere. By closing the one-way valve a second time he could once again isolate those 200 molecules in the chamber and expel them. Next round, he’d expel an additional 100 molecules, then 50, and so on. It got harder each round to hoist the piston up—hence the stout blacksmiths—but each cycle of the pump removed half the air from the sphere.
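The halving cycle described above amounts to geometric decay, and can be sketched in a few lines of Python (the 800 starting molecules are, as the text says, just a convenient round number):

```python
# Toy model of Gericke's pump: each cycle moves half of the sphere's
# remaining air into the chamber, then expels it to the outside.
def pump_cycles(molecules, cycles):
    """Return how many molecules remain in the sphere after each cycle."""
    remaining = []
    for _ in range(cycles):
        molecules //= 2        # half flows into the chamber and is expelled
        remaining.append(molecules)
    return remaining

print(pump_cycles(800, 5))     # [400, 200, 100, 50, 25]
```

Each pass removes half of what is left, which is why the sphere approaches, but never quite reaches, a perfect vacuum.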

      As more and more air disappeared from inside, the copper sphere started to feel a serious squeeze from the outside. That’s because gazillions of air molecules were pinging its surface every second. Each molecule was minuscule, of course, but collectively they added up to thousands of pounds of force (really). Normally air from inside the sphere would balance this pressure by pushing outward. But as the blacksmiths evacuated the inside, a pressure imbalance arose, and the air on the outside began squeezing the hemispheres together tighter and tighter: given the sphere’s size, there would have been 5,600 pounds of net force at perfect vacuum. Gericke couldn't have known all these details, and it's not clear how close he got to a perfect vacuum. But after watching that first copper shell crumple, he knew that air was pretty brawny. Even brawnier, he was gambling, than thirty horses.
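The 5,600-pound figure can be checked with quick back-of-the-envelope arithmetic; this sketch assumes the relevant area is the sphere's circular cross section and standard sea-level pressure of 14.7 pounds per square inch:

```python
import math

diameter_in = 22.0      # the twenty-two-inch copper sphere
pressure_psi = 14.7     # standard sea-level air pressure

# At a perfect vacuum, nothing pushes back from inside, so the net
# squeeze on the hemispheres is the outside pressure times the
# sphere's cross-sectional area.
cross_section_in2 = math.pi * (diameter_in / 2) ** 2
net_force_lb = pressure_psi * cross_section_in2

print(round(net_force_lb))    # 5588, close to the quoted 5,600 pounds
```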

      After the blacksmiths had exhausted the air (and themselves), Gericke detached the copper sphere from the pump, wound a rope through the rings on each side, and secured it to a team of horses. The crowd hushed. Perhaps some maiden raised a silk handkerchief and dropped it. When the tug-of-war began, the ropes snapped taut and the sphere shuddered. The horses snorted and dug in their hooves; veins bulged on their necks. But the sphere held—the horses could not tear it apart. Afterward, Gericke picked up the sphere and flicked a secondary valve open with his finger. Hissss. Air rushed in, and a second later the hemispheres fell apart in his hands; like the sword in the stone, only the chosen one could perform the feat. The stunt so impressed the emperor that he soon elevated plain Otto Gericke to Otto von Guericke, official German royalty.

      In later years von Guericke and his acolytes came up with several other dramatic experiments involving vacuums and air pressure. They showed that bells in evacuated glass jars made no noise when rung, proving that you need air to transmit sound. Similarly, they found that butter exposed to red-hot irons inside a vacuum would not melt, proving that vacuums cannot transmit heat convectively. They also repeated the hemisphere stunt at other sites, spreading far and wide von Guericke’s discovery about the strength of air. And it’s this last discovery that would have the most profound impact on the world at large. Our planet's normal, ambient air pressure of 14.7 pounds per square inch might not sound impressive, but that works out to one ton of force per square foot. It's not just copper hemispheres that feel this, either. For an average adult, twenty tons of force are pressing inward on your body at all times. The reason you don't notice this crushing burden is that there's another twenty tons of pressure pushing back from inside you. But even when you know the forces balance here, it all still seems precarious. I mean, in theory a piece of aluminum foil, if perfectly balanced between the blasts from two fire hoses, would survive intact. But who would risk it? Our skin and organs face the same predicament vis-à-vis air, suspended inside and out between two torrential forces.
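The unit conversions in the paragraph above check out; the sketch below redoes the arithmetic, assuming a typical adult skin area of about 2 square meters (a standard physiology estimate, not a figure from the text):

```python
pressure_psi = 14.7                 # pounds per square inch at sea level

# One square foot is 144 square inches; a short ton is 2,000 pounds.
tons_per_sqft = pressure_psi * 144 / 2000
print(round(tons_per_sqft, 2))      # 1.06, roughly "one ton per square foot"

# Assumed adult skin area: about 2 m^2, and 1 m^2 is 1,550 in^2.
skin_area_in2 = 2.0 * 1550.0
body_load_tons = pressure_psi * skin_area_in2 / 2000
print(round(body_load_tons, 1))     # 22.8, in the ballpark of "twenty tons"
```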

      Luckily, our scientific forefathers didn’t tremble in fear of such might. They absorbed the lesson of von Guericke— that gases are shockingly strong—and raced ahead with new ideas. Some of the projects they took on were practical, like steam engines. Some shaded frivolous, like hot-air balloons. Some, like explosives, chastised us with their deadly force. But all relied on the raw physical power of gases.


[These excerpts are from the second chapter of Caesar’s Last Breath by Sam Kean.]

      The alchemy of air started with an insult. Fritz Haber was born into a middle-class German Jewish family in 1868, and despite an obvious talent for science, he ended up drifting between several different industries as a young man—dye manufacturing, alcohol production, cellulose harvesting, molasses production—without distinguishing himself in any of them. Finally, in 1905 an Austrian company asked Haber—by then a balding fellow with a mustache and pince-nez glasses—to investigate a new way to manufacture ammonia gas (NH3).

      The idea seemed straightforward. There’s plenty of nitrogen gas in the air (N2), and you can get hydrogen gas (H2) by splitting water molecules with electricity. To make ammonia, then, simply mix and heat the gases: N2 + 3H2 → 2NH3. Voilà. Except Haber ran into a heckuva catch-22. It took enormous heat to crack the nitrogen molecules in half so they could react; yet that same heat tended to destroy the product of the reaction, the fragile ammonia molecules. Haber spent months going in circles before finally issuing a report that the process was futile.
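Haber's catch-22 is a standard result of equilibrium thermodynamics. The sketch below uses textbook values for the reaction's enthalpy and entropy changes (roughly −92.2 kJ/mol and −198.7 J/(mol·K); these numbers are not from the excerpt) to show how the heat needed to crack N2 wrecks the equilibrium yield of ammonia:

```python
import math

R = 8.314        # gas constant, J/(mol*K)
dH = -92.2e3     # enthalpy change for N2 + 3H2 -> 2NH3 (textbook value)
dS = -198.7      # entropy change, J/(mol*K) (textbook value)

def equilibrium_K(temp_k):
    """Equilibrium constant K = exp(-dG/RT), with dG = dH - T*dS."""
    dG = dH - temp_k * dS
    return math.exp(-dG / (R * temp_k))

# Ammonia is strongly favored at room temperature, but at the high
# temperatures needed to break N2's triple bond, K collapses.
print(equilibrium_K(298) > 1)    # True
print(equilibrium_K(800) < 1)    # True
```

Because both dH and dS are negative, raising the temperature eventually flips dG positive, which is exactly the bind Haber found himself in.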

      The report would have languished in obscurity—negative results win no prizes—if not for the vanity of a plump chemist named Walther Nernst. Nernst had everything Haber coveted. He worked in Berlin, the hub of German life, and he’d made a fortune by inventing a new type of electric lightbulb. Most important, Nernst had earned scientific prestige by discovering a new law of nature, the Third Law of Thermodynamics. Nernst’s work in thermodynamics also allowed chemists to do something unprecedented: examine any reaction—like the conversion of nitrogen into ammonia—and estimate the yield at different temperatures and pressures. This was a huge shortcut. Rather than grope blindly, chemists could finally predict the optimum conditions for reactions.

      Still, chemists had to confirm those predictions in the lab, and here’s where the conflict arose. Because when Nernst examined the data in Haber’s report, he declared that the yields for ammonia were impossible—50 percent too high, according to his predictions.

      Haber swooned upon hearing this. He was already a high-strung sort—he had a weak heart and tended to suffer nervous breakdowns. Now Nernst was threatening to destroy the one thing he had going for himself, his reputation as a solid experimentalist. Haber carefully redid his experiments and published new data more in line with Nernst’s predictions. But the numbers remained stubbornly higher, and when Nernst ran into Haber at a conference in May 1907, he dressed down his younger colleague in front of everyone.

      Honestly, this was a stupid dispute. Both men agreed that the industrial production of ammonia via nitrogen gas was impossible; they just disagreed over the exact degree of impossibility. But Nernst was a petty man, and Haber—who had a chivalrous streak—could not let this insult to his honor stand. Contradicting everything he’d said before, Haber now decided to prove that you could make ammonia from nitrogen gas after all. Not only could he rub Nernst’s fat nose in it if he succeeded, he could perhaps patent the process and grow rich. Best of all, unlocking nitrogen would make Haber a hero throughout Germany, because doing so would provide Germany with the one thing it lacked to become a world power—a steady supply of fertilizer….

      Beyond fiddling with temperatures and pressures, Haber focused on a third factor, a catalyst. Catalysts speed up reactions without getting consumed themselves; the platinum in your car’s muffler that breaks down pollutants is an example. Haber knew of two metals, manganese and nickel, that boosted the nitrogen-hydrogen reaction, but they worked only above 1300°F, which fried the ammonia. So he scoured around for substitute catalysts, streaming these gases over dozens of different metals to see what happened. He finally hit upon osmium, element 76, a brittle metal once used to make lightbulbs. It lowered the necessary temperature to “only” 1100°F, which gave ammonia a fighting chance.

      Using his nemesis Nernst’s equations, Haber calculated that osmium, if used in combination with the high-pressure jackets, might boost the yield of ammonia to 8 percent, an acceptable result at last. But before he could lord his triumph over Nernst, he had to confirm that figure in the lab. So in July 1909— after several years of stomach pains, insomnia, and humiliation— Haber daisy-chained several quartz canisters together on a tabletop. He then flipped open a few high-pressure valves to let the N2 and H2 mix, and stared anxiously at the nozzle at the far end.

      It took a while: even with osmium's encouragement, nitrogen breaks its bonds only reluctantly. But eventually a few milky drops of ammonia began to trickle out of the nozzle. The sight sent Haber racing through the halls of his department, shouting for everyone to “Look! Come look!” By the end of the run, they had a whole quarter of a teaspoon.

      They eventually cranked that up into a real gusher— a cup of ammonia every two hours. But even that modest output persuaded BASF to purchase the technology and fast-track it. As he often did to celebrate a triumph, Haber threw his team an epic party. “When it was over,” one assistant recalled, “we could only walk home in a straight line by following the streetcar tracks.”

      Haber’s discovery proved to be an inflection point in history—right up there with the first time a human being diverted water into an irrigation canal or smelted iron ore into tools. As people said back then, Haber had transformed the very air into bread.

      Still, Haber’s advance was as much theoretical as anything: he proved you could make ammonia (and therefore fertilizer) from nitrogen gas, but the output from his apparatus could barely have nourished your tomatoes, much less fed a nation like Germany. Scaling Haber’s process up to make tons of ammonia at once would require a different genus of genius—the ability to turn promising ideas into real, working things. This was not a genius that most BASF executives possessed. They saw ammonia as just another chemical to add to their portfolio, a way to pad their profits a little. But the thirty-five-year-old engineer they put in charge of their new ammonia division, Carl Bosch, had a grander vision. He saw ammonia as potentially the most important—and lucrative—chemical of the new century, capable of transforming food production worldwide. As with most visions worth having, it was inspiring and dicey all at once.

      Bosch decided to tackle each of the many subproblems with ammonia production independently. One issue was getting pure enough nitrogen, since regular air contains oxygen and other “impurities.” For help here, Bosch turned to an unlikely source, the Guinness Brewing company. Fifteen years earlier Guinness had developed the most powerful refrigeration devices on the planet, so powerful they could liquefy air. (As with any substance, if you chill the gases in the air enough, they'll condense into puddles of liquid.) Bosch was more interested in the reverse process —taking cold liquid air and boiling it. Curiously, although liquid air contains many different substances mixed together, each substance within it boils off at a separate temperature when heated. Liquid nitrogen happens to boil at –320°F. So all Bosch had to do was liquefy some air with the Guinness refrigerators, warm the resulting pool of liquid to -319°F, and collect the nitrogen fumes. Every time you see a sack of fertilizer today, you can thank Guinness stout.
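Bosch's separation trick relies on each component of liquid air boiling away at its own temperature. A minimal sketch, using standard boiling points at atmospheric pressure (reference values, not figures from the excerpt):

```python
# Boiling points of liquid air's main components, in degrees Fahrenheit.
boiling_f = {"nitrogen": -320.4, "argon": -302.5, "oxygen": -297.3}

def boiled_off(temp_f):
    """Gases that leave the liquid-air pool when it is warmed to temp_f."""
    return [gas for gas, bp in boiling_f.items() if bp <= temp_f]

# Warm the pool to -319 F: nitrogen boils away while argon and oxygen
# stay liquid, so the fumes collected are essentially pure N2.
print(boiled_off(-319))    # ['nitrogen']
```

Warming the pool just one degree past nitrogen's boiling point, but well short of argon's and oxygen's, is what let Bosch skim off pure nitrogen.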

      The second issue was the catalyst. Although effective at kick-starting the reaction, osmium would never work in industry: as an ore it makes gold look cheap and plentiful, and buying enough osmium to produce ammonia at the scales Bosch envisioned would have bankrupted the firm. Bosch needed a cheap substitute, and he brought the entire periodic table to bear on the problem, testing metal after metal after metal. In all, his team ran twenty thousand experiments before finally settling on aluminum oxide and calcium mixed with iron. Haber the scientist had sought perfection—the best catalyst. Bosch the engineer settled for a mongrel.

      Pristine nitrogen and cut-rate catalysts meant nothing, however, if Bosch couldn’t overcome the biggest obstacle, the enormous pressures involved. A professor in college once told me that the ideal piece of equipment for an experiment falls apart the moment you take the last data point: that means you wasted the least possible amount of time maintaining it. (Typical scientist.) Bosch’s equipment had to run for months without fail, at temperatures hot enough to make iron glow and at pressures twenty times higher than in locomotive steam engines. When BASF executives first heard those figures, they gagged: one protested that an oven in his department running at a mere seven times atmospheric pressure—one-thirtieth of what was proposed—had exploded the day before. How could Bosch ever build a reaction vessel strong enough?

      Bosch replied that he had no intention of building the vessel himself. Instead he turned to the Krupp armament company, makers of legendarily large cannons and field artillery. Intrigued by the challenge, Krupp engineers soon built him the chemistry equivalent of the Big Bertha: a series of eight-foot-tall, one-inch-thick steel vats. Bosch then jacketed the vessels in concrete to further protect against explosions. Good thing, because the first one burst after just three days of testing. But as one historian commented, “The work could not be allowed to stop because of a little shrapnel.” Bosch’s team rebuilt the vessels, lining them with a chemical coating to prevent the hot gases from corroding the insides, then invented tough new valves, pumps, and seals to withstand the high-pressure beatings.

      Beyond introducing these new technologies, Bosch also helped introduce a new approach to doing science. Traditional science had always relied on individuals or small groups, with each person providing input into the entire process. Bosch took an assembly-line approach, running dozens of small projects in parallel, much like the Manhattan Project three decades later. Also like the Manhattan Project, he got results amazingly quickly—and on a scale most scientists had never considered possible. Within a few years of Haber's first drips, the BASF ammonia division had erected one of the largest factories in the world, near the city of Oppau. The plant contained several linear miles of pipes and wiring, and used gas liquefiers the size of bungalows. It had its own railroad hub to ship raw materials in, and a second hub for transporting its ten thousand workers. But perhaps the most amazing thing about Oppau was this: it worked, and it made ammonia every bit as quickly as Bosch had promised. Within a few years, ammonia production doubled, then doubled again. Profits grew even faster.

      Despite this success, by the mid-1910s Bosch decided that even he had been thinking too small, and he pushed BASF to open a larger and more extravagant plant near the city of Leuna. More steel vats, more workers, more miles of pipes and wiring, more profit. By 1920 the completed Leuna plant stretched two miles wide and one mile across—“a machine as big as a town,” one historian marveled.

      Oppau and Leuna launched the modern fertilizer industry, and it has basically never slowed down since. Even today, a century later, the Haber-Bosch process still consumes a full 1 percent of the world’s energy supply. Human beings crank out 175 million tons of ammonia fertilizer each year, and that fertilizer grows half the world’s food. Half. In other words, one of every two people alive today, 3.6 billion of us, would disappear if not for Haber-Bosch. Put another way, half your body would disappear if you looked in a mirror: one of every two nitrogen atoms in your DNA and proteins would still be flitting around uselessly in the air if not for Haber’s spiteful genius and Bosch’s greedy vision.

Report Details Persistent Hostility to Women in Science

[These excerpts are from an article by Meredith Wadman in the June 15, 2018, issue of Science.]

      Ask someone for an example of sexual harassment and they might cite a professor’s insistent requests to a grad student for sex. But such lurid incidents account for only a small portion of a serious and widespread harassment problem in science, according to a report released this week by the National Academies of Sciences, Engineering, and Medicine. Two years in the making, the report describes pervasive and damaging “gender harassment”—behaviors that belittle women and imply that they don’t belong, including sexist comments and demeaning jokes. Between 17% and 50% of female science and medical students reported this kind of harassment in large surveys conducted by two major university systems across 36 campuses….

      Decades of failure to curb sexual harassment, despite civil rights laws that make it illegal, underscore the need for a change in culture, the report says….The authors suggest universities take measures to publicly report the number of harassment complaints they receive and investigations they conduct, use committee-based advising to prevent students from being in the power of a single harasser, and institute alternative, less formal ways for targets to report complaints if they don’t wish to start an official investigation….

      The report says women in science, engineering, or medicine who are harassed may abandon leadership opportunities to dodge perpetrators, leave their institutions, or leave science altogether. It also highlights the ineffectiveness of ubiquitous, online sexual harassment training and notes what is likely massive underreporting of sexual harassment by women who justifiably fear retaliation. To retain the talents of women in science, the authors write, will require true cultural change rather than “symbolic compliance” with civil rights laws.

Seaweed Masses Assault Caribbean Islands

[These excerpts are from an article by Katie Langin in the June 15, 2018, issue of Science.]

      In retrospect, 2011 was just the first wave. That year, massive rafts of Sargassum—a brown seaweed that lives in the open ocean—washed up on beaches across the Caribbean, trapping sea turtles and filling the air with the stench of rotting eggs….Before then, beachgoers had sometimes noticed “little drifty bits on the tideline,” but the 2011 deluge of seaweed was unprecedented,…piling up meters thick in places….

      Locals hoped the episode, a blow to tourism and fisheries, was a one-off….Now, the Caribbean is bracing for what could be the mother of all seaweed invasions, with satellite observations warning of record-setting Sargassum blooms and seaweed already swamping beaches. Last week, the Barbados government declared a national emergency….

      Before 2011, open-ocean Sargassum was mostly found in the Sargasso Sea, a patch of the North Atlantic Ocean enclosed by ocean currents that serves as a spawning ground for eels. So when the first masses hit the Caribbean, scientists assumed they had drifted south from the Sargasso Sea. But satellite imagery and data on ocean currents told a different story….

      Since 2011, tropical Sargassum blooms have recurred nearly every year, satellite imagery showed….

      Yet in satellite data prior to 2011, the region is largely free of seaweed....That sharpens the mystery of the sudden proliferation….Nutrient inputs from the Amazon River, which discharges into the ocean around where blooms were first spotted, may have stimulated Sargassum growth. But other factors, including changes in ocean currents and increased ocean fertilization from iron in airborne dust, are equally plausible….

      In the meantime, the Caribbean is struggling to cope as yearly bouts of Sargassum become “the new normal”….the blooms visible in satellite imagery dwarf those of previous years….

HIV—No Time for Complacency

[These excerpts are from an article by Quarralisha Abdool Karim and Salim S. Abdool in the June 15, 2018, issue of Science.]

      Today, the global HIV epidemic is widely viewed as triumph over tragedy. This stands in stark contrast to the first two decades of the epidemic, when AIDS was synonymous with suffering and death….

      The AIDS response has now become a victim of these successes: As it eases the pain and suffering from AIDS, it creates the impression that the epidemic is no longer important or urgent. Commitment to HIV is slowly dissipating as the world’s attention shifts elsewhere. Complacency is setting in.

      However, nearly 5000 new cases of HIV infection occur each day, defying any claim of a conquered epidemic. The estimated 36.7 million people living with HIV, 1 million AIDS-related deaths, and 1.8 million new infections in 2016 remind us that HIV remains a serious global health challenge. Millions need support for life-long treatment, and millions more still need to start antiretroviral treatment, many of whom do not even know their HIV status. People living with HIV have more than a virus to contend with; they must cope with the stigma and discrimination that adversely affect their quality of life and undermine their human rights.

      A further crucial challenge looms large: how to slow the spread of HIV. The steady decline in the number of new infections each year since the mid-1990s has almost stalled, with little change in the past 5 years. HIV continues to spread at unacceptable levels in several countries, especially in marginalized groups, such as men who have sex with men, sex workers, people who inject drugs, and transgender individuals. Of particular concern is the state of the HIV epidemic in sub-Saharan Africa, where young women aged 15 to 24 years have the highest rates of new infections globally. Their sociobehavioral and biological risks—including age-disparate sexual coupling patterns between teenage girls and men in their 30s, limited ability to negotiate safer sex, genital inflammation, and vaginal dysbiosis—are proving difficult to mitigate. Current HIV prevention technologies, such as condoms and pre-exposure prophylaxis, have had limited impact in young women in Africa, mainly due to their limited access, low uptake, and poor adherence.

      There is no room for complacency when so much more remains to be done for HIV prevention and treatment. The task of breaking down barriers and building bridges needs greater commitment and impetus. Now is not the time to take the foot off the pedal….

Does Tailoring Instruction to “Learning Styles” Help Students Learn?

[These excerpts are from an article by Daniel T. Willingham in the Summer 2018 issue of American Educator.]

      Research has confirmed the basic summary I offered in 2005; using learning-styles theories in the classroom does not bring an advantage to students. But there is one new twist. Researchers have long known that people claim to have learning preferences—they’ll say, “I’m a visual learner” or “I like to think in words.” There's increasing evidence that people act on those beliefs; if given the chance, the visualizer will think in pictures rather than words. But doing so confers no cognitive advantage. People believe they have learning styles, and they try to think in their preferred style, but doing so doesn't help them think….

      It’s fairly obvious that some children learn more slowly or put less effort into schoolwork, and researchers have amply confirmed this intuition. Strategies to differentiate instruction to account for these disparities are equally obvious: teach at the learner’s pace and take greater care to motivate the unmotivated student. But do psychologists know of any nonobvious student characteristics that teachers could use to differentiate instruction?

      Learning-styles theorists think they’ve got one: they believe students vary in the mode of study or instruction from which they benefit most. For example, one theory has it that some students tend to analyze ideas into parts, whereas other students tend to think more holistically. Another theory posits that some students are biased to think verbally, whereas others think visually.

      When we define learning styles, it’s important to be clear that style is not synonymous with ability. Ability refers to how well you can do something. Style is the way you do it. I find an analogy to sports useful: two basketball players might be equally good at the game but have different styles of play; one takes a lot of risks, whereas the other is much more conservative in the shots she takes. To put it another way, you’d always be pleased to have more ability, but one style is not supposed to be valued over another; it’s just the way you happen to do cognitive work. But just as a conservative basketball player wouldn’t play as well if you forced her to take a lot of chancy shots, learning-styles theories hold that thinking will not be as effective outside of your preferred style.

      In other words, when we say someone is a visual learner, we don’t mean they have a great ability to remember visual detail (although that might be true). Some people are good at remembering visual detail, and some people are good at remembering sound, and some people are gifted in moving their bodies. That’s kind of obvious because pretty much every human ability varies across individuals, so some people will have a lot of any given ability and some will have less. There’s not much point in calling variation in visual memory a “style” when we already use the word “ability” to refer to the same thing.

      The critical difference between styles and abilities lies in the idea of style as a venue for processing, a way of thinking that an individual favors. Theories that address abilities hold that abilities are not interchangeable; I can’t use a mental strength (e.g., my excellent visual memory) to make up for a mental weakness (e.g., my poor verbal memory). The independence of abilities shows us why psychologist Howard Gardner's theory of multiple intelligences is not a theory of learning styles. Far from suggesting that abilities are exchangeable, Gardner explicitly posits that different abilities use different “codes” in the brain and therefore are incompatible. You can’t use the musical code to solve math problems, for example….

      In short, recent experiments do not change the conclusion that previous reviewers of this literature have drawn: there is not convincing evidence to support the idea that tailoring instruction according to a learning-styles theory improves student outcomes….

      Research from the last 10 years confirms that matching instruction to learning style brings no benefit. But other research points to a new conclusion: people do have biases about preferred modes of thinking, even though these biases don’t help them think better.

      …In sum, people do appear to have biases to process information one way or another (at least for the verbalizer/visualizer and the intuitive/reflective styles), but these biases do not confer any advantage. Nevertheless, working in your preferred style may make it feel as though you’re learning more.

      But if people are biased to think in certain ways, maybe catering to that bias would confer an advantage to motivation, even if it doesn't help thinking? Maybe honoring learning styles would make students more likely to engage in class activities? I don’t believe either has been tested, but there are a few reasons I doubt we'd see these hypothetical benefits. First, these biases are not that strong, and they are easily overwhelmed by task features; for example, you may be biased to reflect rather than to intuit, but if you feel hurried, you’ll abandon reflection because it’s time-consuming. Second, and more important, there are the task effects. Even if you're a verbalizer, if you're trying to remember sentences, it doesn’t make sense for me to tell you to verbalize (for example, by repeating the sentences to yourself) because visualizing (for example, by creating a visual mental image) will make the task much easier. Making the task more difficult is not a good strategy for motivation….

      One educational implication of this research is obvious: educators need not worry about their students’ learning styles. There’s no evidence that adapting instruction to learning styles provides any benefit. Nor does it seem worthwhile to identify students’ learning styles for the purpose of warning them that they may have a pointless bias to process information one way or another. The bias is only one factor among many that determine the strategy an individual will select—the phrasing of the question, the task instructions, and the time allotted all can impact thinking strategies.

      A second implication is that students should be taught fruitful thinking strategies for specific types of problems. Although there’s scant evidence that matching the manner of processing to a student's preferred style brings any benefit, there’s ample evidence that matching the manner of processing to the task helps a lot. Students can be taught useful strategies for committing things to memory, reading with comprehension, overcoming math anxiety, or avoiding distraction, for example. Learning styles do not influence the effectiveness of these strategies.

Bilingual Boost

[These excerpts are from an article by Jane C. Hu in the June 2018 issue of Scientific American.]

      Children growing up in low-income homes score lower than their wealthier peers on cognitive tests and other measures of scholastic success, study after study has found. Now mounting evidence suggests a way to mitigate this disadvantage: learning another language.

      …researchers probed demographic data and intellectual assessments from a subset of more than 18,000 kindergartners and first graders in the U.S. As expected, they found children from families with low socioeconomic status (based on factors such as household income and parents’ occupation and education level) scored lower on cognitive tests. But within this group, kids whose families spoke a second language at home scored better than monolinguals.

      Evidence for a “bilingual advantage”—the idea that speaking more than one language improves mental skills such as attention control or ability to switch between tasks—has been mixed. Most studies have had only a few dozen participants from mid- to high-socioeconomic-status backgrounds perform laboratory-based tasks.

      …sought out a data set of thousands of children who were demographically representative of the U.S. population. It is the largest study to date on the bilingual advantage and captures more socioeconomic diversity than most others….The analysis also includes a real-world measure of children's cognitive skills: teacher evaluations.

      The use of such a sizable data set “constitutes a landmark approach” for language studies….the data did not contain details such as when bilingual subjects learned each language or how often they spoke it. Without this information…it is difficult to draw conclusions about how being bilingual could confer cognitive advantages….

Suffocated Seas

[These excerpts are from an article by Lucas Joel in the June 2018 issue of Scientific American.]

      Earth’s largest mass extinction to date is sometimes called the Great Dying—and for good reason: it wiped out about 70 percent of life on land and 95 percent in the oceans. Researchers have long cited intense volcanism in modern-day Siberia as the main culprit behind the cataclysm, also known as the Permian-Triassic mass extinction, 252 million years ago. A recent study pins down crucial details of the killing mechanism, at least for marine life: oceans worldwide became oxygen-starved, suffocating entire ecosystems.

      Scientists had previously suspected that anoxia, or a lack of oxygen, was responsible for destroying aquatic life. Supporting data came from marine rocks that formed in the ancient Tethys Ocean—but that body of water comprised only about 15 percent of Earth’s seas. That is hardly enough to say anything definitive about the entire marine realm….

      …This approach enabled the researchers to spot clues in rocks from Japan that formed around the time of the extinction in the middle of the Panthalassic Ocean, which then spanned most of the planet and held the majority of its seawater….

      The findings may have special relevance in modern times because the trigger for this ancient anoxia was most likely climate change caused by Siberian volcanoes pumping carbon dioxide into the atmosphere. And today, as human activity warms the planet, the oceans hold less oxygen than they did many decades ago. Brennecka cautions against speculating about the future but adds: “I think it’s pretty clear that when large-scale changes happen in the oceans, things die.”

The Origin of the Earth

[This excerpt is the start of an article by Harold C. Urey in the October 1952 issue of Scientific American.]

      It is probable that as soon as man acquired a large brain and the mind that goes with it he began to speculate on how far the earth extended, on what held it up, on the nature of the sun and moon and stars, and on the origin of all these things. He embodied his speculations in religious writings, of which the first chapter of Genesis is a poetic and beautiful example. For centuries these writings have been part of our culture, so that many of us do not realize that some of the ancient peoples had very definite ideas about the earth and the solar system which are quite acceptable today.

      Aristarchus of the Aegean island of Samos first suggested that the earth and the other planets moved about the sun—an idea that was rejected by astronomers until Copernicus proposed it again 2,000 years later. The Greeks knew the shape and the approximate size of the earth, and the cause of eclipses of the sun. After Copernicus the Danish astronomer Tycho Brahe watched the motions of the planet Mars from his observatory on the Baltic island of Hveen; as a result Johannes Kepler was able to show that Mars and the earth and the other planets move in ellipses about the sun. Then the great Isaac Newton proposed his universal law of gravitation and laws of motion, and from these it was possible to derive an exact description of the entire solar system. This occupied the minds of some of the greatest scientists and mathematicians in the centuries that followed.

      Unfortunately it is a far more difficult problem to describe the origin of the solar system than the motion of its parts. The materials that we find in the earth and the sun must originally have been in a rather different condition. An understanding of the process by which these materials were assembled requires the knowledge of many new concepts of science such as the molecular theory of gases, thermodynamics, radioactivity and quantum theory. It is not surprising that little progress was made along these lines until the 20th century.

Beavers, Rebooted

[These excerpts are from an editorial by Ben Goldfarb in the June 8, 2018, issue of Science.]

      In 1836, an explorer named Stephen Meek wandered down the piney slopes of Northern California’s Klamath Mountains and ended up here, in the finest fur trapping ground he’d ever encountered. This swampy basin would ultimately become known as the Scott Valley, but Meek’s men named it Beaver Valley after its most salient resource: the rodents whose dams shaped its ponds, marshes, and meadows. Meek’s crew caught 1800 beavers here in 1850 alone, shipping their pelts to Europe to be felted into waterproof hats. More trappers followed, and in 1929 one killed and skinned the valley’s last known beaver.

      The massacre spelled disaster not only for the beavers, but also for the Scott River’s salmon, which once sheltered in beaver-built ponds and channels. As old beaver dams collapsed and washed away, wetlands dried up and streams carved into their beds. Gold mining destroyed more habitat. Today, the Scott resembles a postindustrial sacrifice zone, its once lush floodplain buried under heaps of mine tailings…

      All is not lost, however. Beyond one slag heap, a tributary called Sugar Creek has been transformed into a shimmering pond, broad as several tennis courts and fringed with willow and alder. Gilmore tugged up her shorts and waded into the basin, sandals sinking deep into chocolatey mud. Schools of salmon fry flowed like mercury around her ankles. It was as if she had stepped into a time machine and been transported back to the Scott's fecund past. This oasis, Gilmore explained, is the fruit of a seemingly quixotic effort to re-beaver Beaver Valley. At the downstream end of the pond stood the structure that made the resurrection possible: a rodent-human collaboration known as a beaver dam analog (BDA). Human hands felled and peeled Douglas fir logs, pounded them upright into the stream bed, and wove a lattice of willow sticks through the posts. A few beavers that had recently returned to the valley promptly took over, gnawing down nearby trees and reinforcing the dam with branches and mud….

Dig Seeks Site of First English Settlement in the New World

[These excerpts are from an editorial by Andrew Lawler in the June 8, 2018, issue of Science.]

      In 1587, more than 100 men, women, and children settled on Roanoke Island in what is now North Carolina. War with Spain prevented speedy resupply of the colony—the first English settlement in the New World, backed by Elizabethan courtier Sir Walter Raleigh. When a rescue mission arrived 3 years later, the town was abandoned and the colonists had vanished.

      What is commonly called the Lost Colony has captured the imagination of generations of professional and amateur sleuths, but the colonists’ fate is not the only mystery. Despite more than a century of digging, no trace has been found of the colonists’ town—only the remains of a small workshop and an earthen fort that may have been built later, according to a study to be published this year. Now, after a long hiatus, archaeologists plan to resume digging this fall….

      The first colonists arrived in 1585, when a voyage from England landed more than 100 men here, among them a science team including Joachim Gans, a metallurgist from Prague and the first known practicing Jew in the Americas. According to eyewitness accounts, the colonists built a substantial town on the island’s north end. Gans built a small lab where he worked with scientist Thomas Harriot. After the English assassinated a local Native American leader, however, they faced hostility. After less than a year, they abandoned Roanoke and returned to England.

      A second wave of colonists, including women and children, arrived in 1587 and rebuilt the decaying settlement. Their governor, artist John White, returned to England for supplies and more settlers, but war with Spain delayed him in England for 3 years. When he returned here in 1590, he found the town deserted.

      By the time President James Monroe paid a visit in 1819, all that remained was the outline of an earthen fort, presumed to have been built by the 1585 all-male colony. Digs near the earthwork in the 1890s and 1940s yielded little. The U.S. National Park Service (NPS) subsequently reconstructed the earthen mound, forming the centerpiece of today’s Fort Raleigh National Historic Site.

      Then in the 1990s, archaeologists led by Ivor Noel Hume of The Colonial Williamsburg Foundation in Virginia uncovered remains of what archaeologists agree was the workshop where Gans tested rocks for precious metals and Harriot studied plants with medicinal properties, such as tobacco. Crucibles and pharmaceutical jars littered the floor, along with bits of brick from a special furnace. The layout closely resembled those in 16th century woodcuts of German alchemical workshops.

      In later digs Noel Hume determined that the ditch alongside the earthwork cuts across the workshop—suggesting the fort was built after the lab and possibly wasn’t even Elizabethan. NPS refused to publish these controversial results, and Noel Hume died in 2017. But the foundation intends to publish his paper in coming months….

Knowledge Can Be Power

[These excerpts are from an article by Peter Salovey in the June 2018 issue of Scientific American.]

      If knowledge is power, scientists should easily be able to influence the behavior of others and world events. Researchers spend their entire careers discovering new knowledge—from a single cell to the whole human, from an atom to the universe.

      Issues such as climate change illustrate that scientists, even if armed with overwhelming evidence, are at times powerless to change minds or motivate action….

      For many, knowledge about the natural world is superseded by personal beliefs. Wisdom across disciplinary and political divides is needed to help bridge this gap. This is where institutions of higher education can provide vital support. Educating global citizens is one of the most important charges to universities, and the best way we can transcend ideology is to teach our students, regardless of their majors, to think like scientists. From American history to urban studies, we have an obligation to challenge them to be inquisitive about the world, to weigh the quality and objectivity of data presented to them, and to change their minds when confronted with contrary evidence.

      Likewise, STEM majors' college experience must be integrated into a broader model of liberal education to prepare them to think critically and imaginatively about the world and to understand different viewpoints. It is imperative for the next generation of leaders in science to be aware of the psychological, social and cultural factors that affect how people understand and use information.

      Through higher education, students can gain the ability to recognize and remove themselves from echo chambers of ideologically driven narratives and help others do the same. Students at Yale, the California Institute of Technology and the University of Waterloo, for instance, developed an Internet browser plug-in that helps users distinguish bias in their news feeds. Such innovative projects exemplify the power of universities in teaching students to use knowledge to fight disinformation.

      For a scientific finding to find traction in society, multiple factors must be considered. Psychologists, for example, have found that people are sensitive to how information is framed. My research group discovered that messages focused on positive outcomes have more success in encouraging people to adopt illness-prevention measures, such as applying sunscreen to lower their risk for skin cancer, than loss-framed messages, which emphasize the downside of not engaging in such behaviors. Loss-framed messages are better at motivating early-detection behaviors such as mammography screening.

      Scientists cannot work in silos and expect to improve the world, particularly when false narratives have become entrenched in communities…

      Universities are conveners of experts and leaders across disciplinary and political boundaries. Knowledge is power but only if individuals are able to analyze and compare information against their personal beliefs, are willing to champion data-driven decision making over ideology, and have access to a wealth of research findings to inform policy discussions and decisions.

Constitutional Right to Contraception

[This excerpted editorial from the March 8, 2018, issue of The New York Times appeared in the June 2018 issue of Population Connection.]

      Landmark Supreme Court decisions in 1965 and 1972 recognizing a constitutional right to contraception made it more likely that women went to college, entered the work force, and found economic stability. That's all because they were better able to choose when, or whether, to have children.

      A 2012 study from the University of Michigan found that by the 1990s, women who had early access to the birth control pill had wage gains of up to 30 percent, compared with older women.

      It’s mind-boggling that anyone would want to thwart that progress, especially since women still have so far to go in attaining full equality in the United States. But the Trump administration has signaled it may do just that, in a recent announcement about funding for a major family planning program, Title X.

      Since 1970, the federal government has awarded Title X grants to providers of family planning services — including contraception, cervical cancer screenings, and treatment for sexually transmitted infections — to help low-income women afford them. It’s a crucial program.

      Yet the Trump administration appeared to accept the conservatives’ retrograde thinking with a recent announcement from the Department of Health and Human Services’ Office of Population Affairs outlining its priorities for awarding Title X grants. Alarmingly, unlike previous funding announcements, the document makes zero reference to contraception. In setting its standards for grants, it disposes of nationally recognized clinical standards, developed with the federal Centers for Disease Control and Prevention, that have long been guideposts for family planning. Instead, the government says it wants to fund “innovative” services and emphasizes “fertility awareness” approaches, which include the so-called rhythm method. These have long been preferred by the religious right, but are notoriously unreliable.

Trump on Family Planning

[This excerpted editorial from the March 11, 2018, issue of the St. Louis Post-Dispatch appeared in the June 2018 issue of Population Connection.]

      The Trump administration's answer to questions surrounding family planning and safe sex is to give preference for $260 million in grants to groups stressing abstinence and “fertility awareness.” Instead of urging at-risk members of the public to use condoms and other forms of protection, the administration favors far-less safe and effective measures such as the rhythm method.

      Effective and accessible contraception has helped lower rates of unplanned pregnancies in the U.S., thereby reducing the number of abortions. The federal Centers for Disease Control and Prevention reported last year that there were fewer abortions in 2014 than at any time since abortion was legalized in 1973. Adolescent pregnancies decreased 55 percent between 1990 and 2011. Birth rates for women between 15 and 19 declined an additional 35 percent between 2011 and 2016, according to the data.

      Much of the goal of family planning and contraception is to reduce the abortion rate by limiting unintended pregnancies and to decrease the number of sexually transmitted infections. There are enormous health, social, and economic benefits for women who control their own reproductive health.

      The administration’s emphasis on abstinence and natural family planning — including the so-called rhythm method — is part of a familiar pattern of shifting away from scientific, evidence-based policies toward non-scientific ideologies. With the current shift, Trump is undermining nearly fifty years of successful family planning efforts.

      Abstinence is 100 percent effective if practiced consistently. That’s a big if. Fertility awareness is effective if practitioners have a nearly medical understanding of hormonal cycles and adhere to them unfailingly.

When Facts Are Not Enough

[These excerpts are from an article by Katherine Hayhoe in the June 1, 2018, issue of Science.]

      …Scientists furthermore assume that disagreements can be resolved by more facts. So when people object to the reality of climate change with science-y sounding arguments—“the data is wrong,” or “it’s just a natural cycle,” or even, “we need to study it longer”—the natural response of scientists is simple and direct: People need more data. But this approach often doesn't work and can even backfire. Why? Because when it comes to climate change, science-y sounding objections are a mere smokescreen to hide the real reasons, which have much more to do with identity and ideology than data and facts.

      For years, climate change has been one of the most politically polarized issues in the United States. Today, the best predictor of whether the public agrees with the reality of anthropogenic climate change is not how much scientific information there is. It’s where each person falls on the political spectrum. That’s why the approach of bombarding the unconvinced with more data doesn’t work—people see it as an attack on their identity and an attempt to change their way of life.

      …As uncomfortable as this is for a scientist in today’s world, the most effective thing I’ve done is to let people know that I am a Christian. Why? Because it’s essential to connect the impacts of a changing climate directly to what's already meaningful in one's life, and for many people, faith is central to who they are. Scientists can be effective communicators by bonding over a value that they genuinely share with the people with whom they’re speaking. It doesn't have to be a shared faith. It could be that both are parents, or live in the same place, or are concerned about water resources or national security, or enjoy the same outdoor activities. Instead of beginning with what most divides scientists from others, start the conversation from a place of agreement and mutual respect. Then, scientists can connect the dots: share from their head and heart why they care.

      Talking about impacts isn't enough, though. Sadly, the most dangerous myth that many people have bought into is, “it doesn’t matter to me,” and the second most dangerous myth is, “there’s nothing I can do about it.” If scientists describe the daunting challenge of climate change but can’t offer an engaging solution, then people's natural defense mechanism is to disassociate from the reality of the problem. That's why changing minds also requires providing practical, viable, and attractive solutions that someone can get excited about. Concerned homeowner? Mention the amazing benefits of energy conservation. Worried parent? Bring up the practical steps to take to make outdoor play spaces safer for kids, even in the hot summer. Business executive? Talk about the economic benefits of renewables.

      We all live on the same planet, and we all want the same things. By connecting our heads to our hearts, we all can talk about—and tackle—the problem of climate change together.

What I Learned from Teaching

[These excerpts are from an article by Moamen Elmassry in the May 25, 2018, issue of Science.]

      …I tried my best to help my students learn, but my inexperience was apparent.

      I could have carried on as a mediocre teacher. But I recalled how some of my own teachers had inspired me over the years. I felt I owed my students the same—which, I realized, would require time and training. It was my responsibility to make that happen, even if it meant taking a little more time and focus away from my research.

      …I introduced my students to epidemiology by asking them to write short stories about an epidemic spreading on campus, hoping to incorporate more creativity into their learning. This unconventional assignment surprised the students at first. But some of them got so into it that they wrote much more than the half page I had assigned. I loved seeing my students so engaged with an activity I had designed. In my end-of-semester evaluations, some students said that I was their favorite TA, and others asked me to write recommendation letters for them, which was both humbling and rewarding.

      …teaching has provided me with some unexpected benefits. Knowing that I have teaching commitments pushes me to conduct efficient, well-designed experiments. Answering undergraduate students’ fundamental “why” questions helps keep me intellectually stimulated and forces me to think about science in new ways….

The Unlikely Triumph of Dinosaurs

[These excerpts are from an article by Stephen Brusatte in the May 2018 issue of Scientific American.]

      …Like many successful organisms, dinosaurs were born of catastrophe. Around 252 million years ago, at the tail end of the Permian Period, a pool of magma began to rumble underneath Siberia. The animals living at the surface—an exotic menagerie of large amphibians, knobby-skinned reptiles and flesh-eating forerunners of mammals—had no inkling of the carnage to come. Streams of liquid rock snaked through the mantle and then the crust, before flooding out through mile-wide cracks in the earth’s surface. For hundreds of thousands, maybe millions, of years the eruptions continued, spewing heat, dust, noxious gases and enough lava to drown several million square miles of the Asian landscape. Temperatures spiked, oceans acidified, ecosystems collapsed and up to 95 percent of the Permian species went extinct. It was the worst mass extinction in our planet's history. But a handful of survivors staggered into the next period of geologic time, the Triassic. As the volcanoes quieted and ecosystems stabilized, these plucky creatures now found themselves in a largely empty world. Among them were various small amphibians and reptiles, which diversified as the earth healed and which later diverged into today’s frogs, salamanders, turtles, lizards and mammals…

      The Prorotodactylus tracks date to about 250 million years ago, just one or two million years after the volcanic eruptions that brought the Permian to a close. Early on it was clear from the narrow distance between the left and right tracks that they belonged to a specialized group of reptiles called archosaurs that emerged after the Permian extinction with a newly evolved upright posture that helped them run faster, cover longer distances and track down prey with greater ease. The fact that the tracks came from an early archosaur meant that they could potentially bear on questions about the origins of dinosaurs. Almost as soon as the archosaurs originated, they branched into two major lineages, which would grapple with each other in an evolutionary arms race over the remainder of the Triassic: the pseudosuchians, which led to today's crocodiles, and the avemetatarsalians, which developed into dinosaurs. Which branch did Prorotodactylus belong to?

      …Prorotodactylus is therefore a dinosauromorph: not a dinosaur per se but a primitive member of the avemetatarsalian subgroup that includes dinosaurs and their very closest cousins. Members of this group had long tails, big leg muscles, and hips with extra bones connecting the legs to the trunk, which allowed them to move even faster and more efficiently than other archosaurs.

      These earliest dinosauromorphs were hardly fearsome, however. Fossils indicate that they were only about the size of a house cat, with long, skinny legs….

      Over the next 10 million to 15 million years the dinosauromorphs continued to diversify. The fossil record from this time period shows an increasing number of track types in Poland and then around the world. The tracks get larger and develop a greater variety of shapes. Some trackways stop showing impressions of the hand, a sign the makers were walking only on their hind legs. Skeletons start to turn up as well. Then, at some point between 240 million and 230 million years ago, one of these primitive dinosauromorph lineages evolved into true dinosaurs. It was a radical change in name only—the transition involved just a few subtle anatomical innovations: a long scar on the upper arm that anchored bigger muscles, some tablike flanges on the neck vertebrae that supported stronger ligaments, and an open, windowlike joint where the thighbone meets the pelvis that stabilized upright posture. Still, modest though these changes were, they marked the start of something big….

      But then, just when it seemed that dinosaurs would never escape their rut, they received two lucky breaks. First, in the humid zone, the dominant large herbivores of the time—reptiles called rhynchosaurs and mammal cousins called dicynodonts—went into decline, disappearing entirely in some areas for reasons still unknown. Their fall from grace between 225 million and 215 million years ago gave primitive plant-eating sauropodomorphs such as Saturnalia, a dog-size species with a slightly elongated neck, the opportunity to claim an important niche. Before long these sauropod precursors were the main herbivores in the humid parts of the Northern and Southern Hemispheres. Second, around 215 million years ago dinosaurs finally broke into the deserts of the Northern Hemisphere, probably because shifts in the monsoons and the amount of carbon dioxide in the atmosphere made differences between the humid and arid regions less severe, allowing dinosaurs to migrate between them more easily….

      No matter which interval you look at in the Triassic, from the time the first dinosaurs appeared around 230 million years ago until the period ended 201 million years ago, the story is the same. Only some dinosaurs were able to live in some parts of the world, and wherever they lived—humid forests or parched deserts—they were surrounded by all kinds of bigger, more common, more diverse animals….

      More than anything, however, Triassic dinosaurs were being outgunned by their close cousins the so-called pseudosuchians, on the crocodile side of the archosaur family….

      Our statistical analysis led us to an iconoclastic conclusion: the first dinosaurs were not particularly special, at least compared with the variety of other animals they were evolving alongside during the Triassic. If you were around back then to survey the Triassic scene, you probably would have considered the dinosaurs a fairly marginal group. And if you were of a gambling persuasion, you would probably have bet on some of the other animals, most likely those hyperdiverse pseudosuchians, to eventually become dominant, grow to massive sizes and conquer the world. But of course, we know that it was the dinosaurs that became ascendant and even persist today as more than 10,000 species of birds. In contrast, only two dozen or so species of modern crocodilians have survived to the present day.

      How did dinosaurs eventually wrestle the crown from their crocodile-line cousins? The biggest factor appears to have been another stroke of good fortune outside the dinosaurs’ control. Toward the end of the Triassic, great geologic forces pulled on Pangea from both the east and west, causing the supercontinent to fracture. Today the Atlantic Ocean fills that gap, but back then it was a conduit for magma. For more than half a million years tsunamis of lava flooded across much of central Pangea, eerily similar to the enormous volcanic eruptions that closed out the Permian 50 million years prior. Like those earlier eruptions, the end-Triassic ones also triggered a mass extinction. The crocodile-line archosaurs were decimated, with only a few species—the ancestors of today’s crocodiles and alligators—able to endure.

      Dinosaurs, on the other hand, seemed to have barely noticed this fire and brimstone. All the major subgroups—the theropods, sauropodomorphs and ornithischians—sailed into the next interval of geologic time, the Jurassic Period. As the world was going to hell, dinosaurs were thriving, somehow taking advantage of the chaos around them. I wish I had a good answer for why—was there something special about dinosaurs that gave them an edge over the pseudosuchians, or did they simply walk away from the plane crash unscathed, saved by sheer luck when so many others perished? This is a riddle for the next generation of paleontologists to solve.

      Whatever the reason dinosaurs survived that disaster, there is no mistaking the consequences. Once on the other side, freed from the yoke of their pseudosuchian rivals, these dinosaurs had the opportunity to prosper in the Jurassic. They became more diverse, more abundant and bigger than ever before. Completely new dinosaur species evolved and migrated widely, taking pride of place in terrestrial ecosystems the world over. Among these newcomers were the first dinosaurs with plates on their backs and armor covering their bodies; the first truly colossal sauropods that shook the earth as they walked; carnivorous ancestors of T rex that began to get much bigger; and an assortment of other theropods that started to get smaller, lengthen their arms and cover themselves in feathers—predecessors of birds. Dinosaurs were now dominant. It took more than 30 million years, but they had, at long last, arrived.

The Battle of the Belt

[These excerpts are from an article by Claudia Wallis in the May 2018 issue of Scientific American.]

      Among the indignities of aging is a creeping tendency to put on weight, as our resting metabolism slows down—by roughly 1 to 2 percent every decade. But what's worse, at least for women, is a shift, around menopause, in where this excess flab accumulates. Instead of thickening the hips and thighs, it starts to add rolls around the belly—a pattern more typical of men—which notoriously reshapes older women from pears into apples.

      The change is not just cosmetic. A high waist-to-hip ratio portends a greater risk of heart disease, stroke, diabetes, metabolic syndrome and even certain cancers—for both men and women. The shift helps to explain why, after menopause, women begin to catch up to men in their rates of cardiovascular disease. And those potbellies are costly. A 2008 Danish study found that for every inch added to a healthy waistline, annual health care costs rose by about 3 percent for women and 5 percent for men.

      Researchers have been investigating “middle-aged spread” for decades, but there is still debate about why it happens, whether it is a cause or merely an indicator of health risks, and what can be done to avoid it. As we grow older, we deposit relatively more excess fat around our abdominal organs as opposed to under the skin—where most of our body fat sits. There are some ethnic and racial differences, however….For a given waist circumference, African-Americans tend to have less of this “visceral fat,” and Asians tend to have more. Visceral fat differs from subcutaneous fat in that it releases fatty acids and inflammatory substances directly into the liver rather than into the general circulation. Some experts believe this may play a direct role in causing the diseases linked to abdominal obesity.

      But not everyone agrees….

      Another area of uncertainty is why we pack on visceral fat with aging. Clearly, sex hormones are involved, given that the change occurs in women around menopause. But it is more complicated than just a drop in estrogen. Consider, for instance, that young women with polycystic ovary syndrome tend to have the apple shape and insulin resistance, although their bodies produce plenty of estrogen. Such women do, however, have high levels of androgens. Or consider that when transgender males—who are biologically female—take androgens to masculinize their body, they, too, develop more visceral fat and glucose intolerance. Both examples suggest that “a relative imbalance” of male and female hormones may be at work…. The same might also be true of healthy women at menopause.

      But this isn’t settled science. A newer theory made a splash last year after researchers reported in Nature that they could radically reduce body fat—including visceral fat—and raise metabolic rates in mice by blocking the action of follicle-stimulating hormone (FSH), a substance better known for its role in reproduction. Could FSH be the key to the midlife weight puzzle? The researchers had previously shown that blocking FSH could halt bone loss, raising the intriguing prospect of a medical twofer: one drug to combat obesity and osteoporosis. “The next step is to take this to humans,” says senior author Mone Zaidi of the Icahn School of Medicine at Mount Sinai.

      Of course, many a thrilling discovery in mice has fizzled in humans, and combating the evolutionary programming for storing fat is particularly difficult….

      As far as we know, there’s only one way to fight nature's plan for a thickening middle and its attendant risks—and you know where this is going. Eat less or exercise more as you age, or do both….

Results Roll in from the Dinosaur Renaissance

[These excerpts are from a book review by Victoria Arbour in the May 11, 2018, issue of Science.]

      …Steve Brusatte’s The Rise and Fall of the Dinosaurs takes readers on a tour of the new fossils and discoveries that are shedding light on the dinosaurs’ evolutionary story.

      The dawn of the dinosaurs, the Triassic period, is still one of the most poorly understood periods in dinosaur history, but it’s also where some of the information gaps are being filled most rapidly and most surprisingly. Whereas the end of the age of dinosaurs was abruptly cut short by a meteor, their ascent was complex and drawn-out. New finds from Poland, New Mexico, and Argentina show that dinosaurs were uncommon and relatively unspecialized for the first 30 million years of their existence, and they lived alongside relatives of today’s crocodiles that looked much like dinosaurs themselves.

      Elsewhere in the Mesozoic era, we meet a variety of newly discovered dinosaurs alongside old favorites. As the giant supercontinent Pangaea split apart during the Jurassic and Cretaceous periods, dinosaurs on newly drifting continents were isolated from each other and began to evolve their own characteristic features. South America was home to the snub-snouted, tiny-armed abelisaurs, Africa to the shark-toothed carcharodontosaurs, and in Transylvania, a bizarre set of dinosaurs includes relatives of Velociraptor with not one but two sets of killer sickle claws on their feet.

      …The veritable flood of fluffy and feathery fossils from China has revealed an amazing diversity of winged dinosaurs. These specimens indicate that feathers evolved long before flight but also suggest that powered, flapping flight may have evolved multiple times in dinosaurs. (We need look no further than the totally weird bat-winged Yi qi to see that dinosaurs experimented with many ways to get airborne.)

      …Tyrannosaurs weren’t all giant bone-crunchers with tiny arms: the earliest members of the group started out as small, lightly built, long-armed predators with fancy crests on their heads. Many were feathered, as evidenced by those found in China, where the right kind of conditions preserved soft tissues such as skin.

      …Recent advances in understanding dinosaur growth, biogeography, extinction dynamics, and fine-scale evolutionary changes through time, for example, have only been possible because of the comparatively abundant fossil record of duck-billed hadrosaurs and horned ceratopsians….

Finding the First Horse Tamers

[These excerpts are from an article by Michael Price in the May 11, 2018, issue of Science.]

      Taming horses opened a new world, allowing prehistoric people to travel farther and faster than ever before, and revolutionizing military strategy. But who first domesticated horses—and the genetic and cultural impact of the early riders—has long been a puzzle.

      The “steppe hypothesis” suggested that Bronze Age pastoralists known as the Yamnaya, or their close relatives, first domesticated the horse. Aided by its fleet transport, they migrated out from the Eurasian steppe and spread their genes, as well as precursors of today’s Indo-European languages, across much of Eurasia. But a new study of ancient genomes…suggests that the Yamnaya’s effect on Asia was limited, and that another culture domesticated the horse first….

      The first signs of horse domestication—pottery containing traces of mares’ milk and horse teeth with telltale wear from a riding bit—come from Botai hunter-gatherers, who lived in modern Kazakhstan from about 3700 B.C.E. to 3100 B.C.E. Yet some researchers thought the Botai were unlikely to have invented horse husbandry because they lingered as hunter-gatherers long after their neighbors had adopted farming and herding. These researchers assumed the Botai learned to handle horses from nearby cultures on the steppe, perhaps even the Yamnaya, who were already herding sheep and goats.

      Genetic data suggest the Yamnaya migrated both east and west during the Bronze Age, and mixed with locals. Some researchers hypothesize that they also spread early branches of a Proto-Indo-European (PIE) language, which later diversified into today’s many Indo-European languages, including English, Italian, Hindi, Russian, and Persian.

      …sequenced the whole genomes of 74 ancient Eurasians, most of whom lived between 3500 B.C.E. and 1500 B.C.E. The researchers devised a rough family tree and timeline for these samples and those from later civilizations and modern people.

      The team found no Yamnaya DNA in the three Botai individuals, suggesting the two groups hadn’t mixed. That implies the Botai domesticated horses on their own….

      The new work fits with the archaeological evidence and a recent study of DNA from ancient horses themselves….That work showed that Botai horses were not related to modern horses, hinting at separate domestications by the Botai and other steppe dwellers….

NASA Cancels Carbon Monitoring Research Program

[These excerpts are from an article by Paul Voosen in the May 11, 2018, issue of Science.]

      You can’t manage what you don’t measure. The adage is especially relevant for climate-warming greenhouse gases, which are crucial to manage—and challenging to measure. In recent years, though, satellite and aircraft instruments have begun monitoring carbon dioxide and methane remotely, and NASA’s Carbon Monitoring System (CMS), a $10-million-a-year research line, has helped stitch together observations of sources and sinks into high-resolution models of the planet’s flows of carbon. Now, President Donald Trump’s administration has quietly killed the CMS, Science has learned.

      The move jeopardizes plans to verify the national emission cuts agreed to in the Paris climate accords….

      The White House has mounted a broad attack on climate science, repeatedly proposing cuts to NASA's earth science budget, including the CMS, and cancellations of climate missions such as the Orbiting Carbon Observatory 3 (OCO-3). Although Congress fended off the budget and mission cuts, a spending deal signed in March made no mention of the CMS. That allowed the administration’s move to take effect….

      The agency declined to provide a reason for the cancellation beyond “budget constraints and higher priorities within the science budget.” But the CMS is an obvious target for the Trump administration because of its association with climate treaties and its work to help foreign nations understand their emissions….

      Many of the 65 projects supported by the CMS since 2010 focused on understanding the carbon locked up in forests. For example, the U.S. Forest Service has long operated the premier land-based global assessment of forest carbon, but the labor-intensive inventories of soil and timber did not extend to the remote interior of Alaska. With CMS financing, NASA scientists worked with the Forest Service to develop an aircraft-based laser imager to tally up forest carbon stocks….

      The program has also supported research to improve tropical forest carbon inventories. Many developing nations have been paid to prevent deforestation through mechanisms like the United Nations’s REDD+ program, which is focused on reducing emissions from deforestation and forest degradation. But the limited data and tools for monitoring tropical forest change often meant that claimed reductions were difficult to trust….The end of the CMS is disappointing and “means we're going to be less capable of tracking changes in carbon….”

      The CMS improved other carbon monitoring as well. It supported efforts by the city of Providence to combine multiple data sources into a picture of its greenhouse gas emissions, and identify ways to reduce them. It has tracked the dissolved carbon in the Mississippi River as it flows out into the ocean….

Fossils Reveal How Ancient Birds Got Their Beaks

[These excerpts are from an article by Gretchen Vogel in the May 4, 2018, issue of Science.]

      As every schoolchild now knows, birds are dinosaurs, linked to their extinct relatives by feathers and anatomy. But birds’ beaks—splendidly versatile adaptations that allow their owners to grasp, pry, preen, and tear—are nothing like stiff dinosaurian snouts, and how they evolved has been a mystery. Now, 3D scans of new fossils of an iconic ancient bird capture the beak just as it took form.

      …By bringing details from multiple specimens together, the new scans offer an early glimpse of key features of bird skulls, including a big brain and the movable upper jaw that helps make beaks so nimble.

      Ichthyornis, an ancient seabird from about 90 million years ago, has long been famous for having a body like a modern bird, with a snout lined with teeth like a dinosaur. Paleontologists studying the first Ichthyornis fossil, discovered in the 1870s in Kansas, initially thought the body came from a small bird and the jaw from a marine reptile. Further excavation convinced them that the pieces belonged to the same animal. In 1880, Charles Darwin wrote that Ichthyornis was among “the best support for the theory of evolution” since On the Origin of Species was published 2 decades earlier.

      But in the original Ichthyornis fossil, the upper jaw is missing, and the toothed lower jaw resembles that of other dinosaurs. So paleontologists assumed that early birds made do with a fixed upper jaw, like most other vertebrates.

      In 2014, paleontologists in Kansas found a new specimen of Ichthyornis….

      Instead of extracting the fossil from the limestone in which it is embedded, the researchers used computerized tomography to scan the entire block of rock. Then they scanned three previously unrecognized specimens that they found in museum collections, and combined all the scans into a complete model of Ichthyornis’s skull. They also re-examined the original fossil from the 1870s, housed at Yale's Peabody Museum of Natural History. Among unidentified pieces stored with the fossil, they found a small fragment that, when scanned, turned out to contain two key bones from the upper snout—bones that were missing in the new specimens.

      The resulting 3D model captures Ichthyornis’s transitional position between modern birds and other dinosaurs….Despite its dinosaurlike teeth, Ichthyornis had a hooked beak, likely covered by a hard layer of keratin, on the tip of its snout. It also could move both top and bottom jaws independently like modern birds.

      That means beaks appeared earlier than thought, perhaps around the same time as wings….

Critics See Hidden Goal in EPA Data Access Rule

[These excerpts are from an article by Warren Cornwall in the May 4, 2018, issue of Science.]

      When Scott Pruitt, administrator of the U.S. Environmental Protection Agency (EPA) in Washington, D.C., announced last week that the agency plans to bar regulators from considering studies that have not made their underlying data public, he said it was to ensure the quality of the research used to shape new rules. “The era of secret science at EPA is coming to an end,” Pruitt said at a 24 April event (which was closed to the press) unveiling the proposed “transparency” rule.

      But longtime observers of EPA, including former senior agency officials, see a more troubling and targeted goal: undermining key studies that have helped justify stricter limits on air pollution. In particular, they say, the new policy is aimed at blocking EPA consideration of large epidemiological studies that have highlighted the health dangers of tiny particles of soot and other chemicals less than 2.5 microns in diameter. Those studies, which rest in part on confidential health information that is difficult to make public, have been under attack for decades from some industry groups and Republican lawmakers in Congress, who argue that the confidentiality masks flaws in the studies. The same interests lobbied heavily for the new EPA rule, and critics of the policy say it is just new clothing for an old—and largely discredited—argument….

      At the heart of the fight is a type of pollution scientists believe is particularly lethal, but relatively costly to control: tiny particles of soot and other chemicals produced by burning oil, coal, gasoline, wood, and other fuels, which can lodge deep in the lungs. In the mid-1990s, two major epidemiological studies—known as the Harvard Six Cities and American Cancer Society (ACS) studies—tracked the medical histories of thousands of people exposed to different levels of air pollution. The studies found that exposure to even relatively low particulate levels increased premature deaths. Further studies have linked the pollution to other problems including asthma, heart disease, and heart attacks.

      In response, EPA began tightening clean air regulations—and affected industries began to attack the findings. Industry representatives also urged Congress to pass legislation that would bar EPA from using nonpublic data in crafting regulations. In recent years that legislation, championed by Representative Lamar Smith (R-TX), head of the House of Representatives’s science committee, failed to gain approval. But after the election of President Donald Trump, Smith and his allies found a receptive audience in Pruitt, who agreed to implement similar policies as an EPA rule.

      In the meantime, an array of studies, including a government-sponsored reanalysis of the original particulate data, has generally validated the findings….

      …Lowering the standard to 11 micrograms would increase pollution-control costs by as much as $1.35 billion in 2020, analysts estimated, but the health gains and lives saved would be worth as much as $20 billion a year.

      …The timing of the rule—which observers expect EPA to adopt once a public comment period closes—is no coincidence….The agency is about to embark on a periodic review of key air pollution limits, including those governing particulates. Even seemingly modest changes in how the agency evaluates the science could lead to lower estimates of the health benefits of tighter standards….

Orangutan Medicine

[These excerpts are from an article by Doug Main in the May issue of Scientific American.]

      Medicine is not exclusively a human invention. Many other animals, from insects to birds to nonhuman primates, have been known to self-medicate with plants and minerals for infections and other conditions. Behavioral ecologist Helen Morrogh-Bernard of the Borneo Nature Foundation has spent decades studying the island's orangutans and says she has now found evidence they use plants in a previously unseen medicinal way.

      …watched 10 orangutans occasionally chew a particular plant (which is not part of their diet) into a foamy lather and then rub it into their fur. The apes spent up to 45 minutes at a time massaging the concoction onto their upper arms or legs. The researchers believe this behavior is the first known example of a non-human animal using a topical analgesic.

      Local people use the same plant—Dracaena cantleyi, an unremarkable-looking shrub with stalked leaves—to treat aches and pains….

      …That behavior may then have been passed on to other orangutans. Because this type of self-medication is seen only in south-central Borneo, Morrogh-Bernard says, it was probably learned locally.

Watchful Plants

[These excerpts are from an article by Erica Tennenhouse in the May 2018 issue of Scientific American.]

      Plants cannot run or hide, so they need other strategies to avoid being eaten. Some curl up their leaves; others churn out chemicals to make themselves taste bad if they sense animals drooling on them, chewing them up or laying eggs on them—all surefire signals of an attack. New research now shows some flora can detect an herbivorous animal well before it launches an assault, letting a plant mount a preemptive defense that even works against other pest species.

      When ecologist John Orrock of the University of Wisconsin-Madison squirted snail slime—a lubricating mucus the animals ooze as they slide along—into soil, nearby tomato plants appeared to notice. They increased their levels of an enzyme called lipoxygenase, which is known to deter herbivores….

      Initially Orrock found this defense worked against snails; in the latest study, his team measured the slimy warning’s impact on another potential threat. The investigators found that hungry caterpillars, which usually gorge on tomato leaves, had no appetite for them after the plants were exposed to snail slime and activated their chemical resistance. This nonspecific defense may be a strategy that gets the plants more bang for their buck by further improving their overall odds of survival, says Orrock….

Batty Schedules

[These excerpts are from an article by Inga Vesper in the May 2018 issue of Scientific American.]

      Every year migratory bats travel from Mexico to Bracken Cave near San Antonio, Tex., where they spend the summer consuming insects that would otherwise devour common food crops. But the bats have been showing up far earlier than they did two decades ago, possibly because of a warming climate, new research suggests.

      This trend creates a risky situation in which bats may not find enough food for themselves and their young, as the insects they prey on may not yet have arrived or hatched. If bat colonies shrink as a result of this schedule snafu, their pest control effect could fall out of sync with crop-growing seasons—potentially causing hefty losses, scientists say….

      Mexican (also called Brazilian) free-tailed bats, the migratory species that inhabits Bracken Cave, feast on 20 different moth species and more than 40 other agricultural pests. One favorite is the corn earworm moth, which eats plants such as corn, soybean, potato and pumpkin—costing U.S. farmers millions of dollars a year in ruined crops. A 2011 study estimated that bats indirectly contribute around $23 billion to the U.S. economy by keeping plant-eating insects in check and by hunting bugs that prey on pollinator insects….

      Changing bat migration times can also clash with rainfall patterns. Many insects that bats eat breed in seasonal lakes and puddles. If the bats arrive too early to benefit from summer rainfall and the resulting abundance of bugs, they may struggle to feed their pups or skip reproduction altogether, O'Keefe says. She fears this shift could cause Midwestern bats to dwindle toward extinction, which would be bad news for humans. “Declines in bat populations could have severe implications for crop success,” she says, adding that bats also “control significant disease vectors, such as mosquitoes.”

Incentivize Responsible Antibiotic Use

[These excerpts are from a book review by Ramana Laxminarayan in the April 27, 2018, issue of Science.]

      Ever since the advent of antibiotics, scientists and clinicians have warned of the potential for widespread antibiotic resistance. Indeed, the first in vitro study of resistance to penicillin was published in 1940, 2 years before the first patient was even treated with the drug. In the ensuing decades, experts and the media continued to warn about an impending crisis of resistance but were largely ignored by the public and policy-makers.

      Public opinion changed when resistance became clinically relevant….

      The U.S. Centers for Disease Control and Prevention warned of “nightmare bacteria,” and global leaders started talking about the problem. Yet the problem of resistance lacked an effective global spokesperson….

      Antibiotic resistance has much in common with climate change, in that actions in any single country have the potential to affect the rest of the world. No matter where the next strain of multidrug-resistant S. aureus arises, it will become a problem in rich and poor countries alike. And, as with climate change, a key problem is the lack of incentives—for individuals, organizations, and countries—to preserve a global common resource.

      The metaphor of an “arms race” against bacteria is outdated. One could argue that the idea that bacteria are our enemy is what got us to the problem of antibiotic overuse in the first place….

Plastics Recycling with a Difference

[These excerpts are from an article by Haritz Sardon and Andrew P. Dove in the April 27, 2018, issue of Science.]

      Since the synthesis of the first synthetic polymer in 1907, the low cost, durability, safety, and processability of polymers have led to ever-expanding uses throughout the global economy. Polymers, commonly called plastics, have become so widely used that global production is expected to exceed 500 million metric tons by 2050. This rising production, combined with rapid disposal and poor mechanisms for recycling, has led to the prediction that, by 2050, there will be more plastic in the sea than fish….

      The production of synthetic plastics is far from being sustainable. Most plastics are produced for single-use applications, and their intended use life is typically less than 1 year. Yet the materials commonly persist in the environment for centuries. More than 40 years after the launch of the recycling symbol, only 5% of plastics that are manufactured are recycled, mainly mechanically into lower-value secondary products that are not recycled again and that ultimately find their way to landfills or pollute the environment. With these materials being lost from the system, there is a constant need for the generation of new plastics, mostly from petrochemical sources, thus further depleting natural resources. Although there has been a substantial effort to develop biodegradable plastics, with polylactide arguably the most successful example, the mechanical and thermal properties of these materials still need to be improved for them to serve as substitutes for a wider range of existing materials.

      In the past decade, an alternative sustainable strategy has been proposed in which the plastic never becomes waste. Instead, once used, it is collected and chemically recycled into raw materials for the production of new virgin plastics with the same properties as the original but without the need for further new monomer feedstocks. This strategy not only helps to address the environmental issues related to the continual growth of disposed plastics over the world but may also reduce the demand for finite raw materials by providing a circular materials economy….

      Plastics will continue to be critical for addressing the continuing demands of our society. New polymeric materials will, for example, be needed for energy generation and storage, to address healthcare needs, for food conservation, and for providing clean water….

      Studies…in which disposed plastics can be infinitely recycled without deleterious effects on their properties, can lead to a world in which plastics at the end of their life are not considered as waste but as raw materials to generate high-value products and virgin plastics. This will both incentivize recycling and encourage sustainability by reducing the requirement for new monomer feedstocks. Current chemical recycling processes are expensive and energetically unfavorable, and further advances in monomer and polymer development and catalyst design are required to facilitate the implementation of economically viable sustainable polymers.

Searching for a Stone Age Odysseus

[These excerpts are from an article by Andrew Lawler in the April 27, 2018, issue of Science.]

      Odysseus, who voyaged across the wine-dark seas of the Mediterranean in Homer's epic, may have had some astonishingly ancient forerunners. A decade ago, when excavators claimed to have found stone tools on the Greek island of Crete dating back at least 130,000 years, other archaeologists were stunned—and skeptical. But since then, at that site and others, researchers have quietly built up a convincing case for Stone Age seafarers—and for the even more remarkable possibility that they were Neandertals, the extinct cousins of modern humans.

      The finds strongly suggest that the urge to go to sea, and the cognitive and technological means to do so, predates modern humans….

      Scholars long thought that the capability to construct and victual a watercraft and then navigate it to a distant coast arrived only with the advent of agriculture and animal domestication. The earliest known boat, found in the Netherlands, dates back only 10,000 years or so, and convincing evidence of sails only shows up in Egypt’s Old Kingdom around 2500 B.C.E. Not until 2000 B.C.E. is there physical evidence that sailors crossed the open ocean, from India to Arabia.

      But a growing inventory of stone tools and the occasional bone scattered across Eurasia tells a radically different story. (Wooden boats and paddles don't typically survive the ages.) Early members of the human family such as Homo erectus are now known to have crossed several kilometers of deep water more than a million years ago in Indonesia, to islands such as Flores and Sulawesi. Modern humans braved treacherous waters to reach Australia by 65,000 years ago. But in both cases, some archaeologists say early seafarers might have embarked by accident, perhaps swept out to sea by tsunamis.

      In contrast, the recent evidence from the Mediterranean suggests purposeful navigation. Archaeologists had long noted ancient-looking stone tools on several Mediterranean islands including Crete, which has been an island for more than 5 million years, but they were dismissed as oddities.

      …The picks, cleavers, scrapers, and bifaces were so plentiful that a one-off accidental stranding seems unlikely, Strasser says. The tools also offered a clue to the identity of the early seafarers: The artifacts resemble Acheulean tools developed more than a million years ago by H. erectus and used until about 130,000 years ago by Neandertals as well.

      …the tools may represent a sea-borne migration of Neandertals from the Near East to Europe….

In Blockchain We Trust

[These excerpts are from an article by Michael J. Casey and Paul Vigna in the May/June 2018 issue of Technology Review.]

      …we need to go back to the 14th century.

      That was when Italian merchants and bankers began using the double-entry bookkeeping method. This method, made possible by the adoption of Arabic numerals, gave merchants a more reliable record-keeping tool, and it let bankers assume a powerful new role as middlemen in the international payments system. Yet it wasn’t just the tool itself that made way for modern finance. It was how it was inserted into the culture of the day.

      In 1494 Luca Pacioli, a Franciscan friar and mathematician, codified their practices by publishing a manual on math and accounting that presented double-entry bookkeeping not only as a way to track accounts but as a moral obligation. The way Pacioli described it, for everything of value that merchants or bankers took in, they had to give something back. Hence the use of offsetting entries to record separate, balancing values — a debit matched with a credit, an asset with a liability.
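Pacioli’s balancing rule—every debit matched by an equal and offsetting credit—can be sketched in a few lines of Python. The account names and amounts below are purely illustrative, not drawn from the article:

```python
# Each transaction records two offsetting entries, so the ledger
# balances by construction: total debits always equal total credits.
ledger = []

def record(debit_account, credit_account, amount):
    """Post one transaction as a matched debit/credit pair."""
    ledger.append({"account": debit_account, "debit": amount, "credit": 0})
    ledger.append({"account": credit_account, "debit": 0, "credit": amount})

record("Inventory", "Cash", 100)       # buy goods: asset up, cash down
record("Cash", "Loans Payable", 250)   # borrow: cash up, liability up

total_debits = sum(e["debit"] for e in ledger)
total_credits = sum(e["credit"] for e in ledger)
assert total_debits == total_credits   # the books balance
```

The invariant is exactly Pacioli’s moral framing in mechanical form: nothing of value is taken in without something being given back.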

      Pacioli’s morally upright accounting bestowed a form of religious benediction on these previously disparaged professions. Over the next several centuries, clean books came to be regarded as a sign of honesty and piety, clearing bankers to become payment intermediaries and speeding up the circulation of money. That funded the Renaissance and paved the way for the capitalist explosion that would change the world.

      Yet the system was not impervious to fraud. Bankers and other financial actors often breached their moral duty to keep honest books, and they still do—just ask Bernie Madoff’s clients or Enron’s shareholders. Moreover, even when they are honest, their honesty comes at a price. We've allowed centralized trust managers such as banks, stock exchanges, and other financial middlemen to become indispensable, and this has turned them from intermediaries into gatekeepers. They charge fees and restrict access, creating friction, curtailing innovation, and strengthening their market dominance.

      The real promise of blockchain technology, then, is not that it could make you a billionaire overnight or give you a way to shield your financial activities from nosy governments. It's that it could drastically reduce the cost of trust by means of a radical, decentralized approach to accounting—and, by extension, create a new way to structure economic organizations.

      A new form of bookkeeping might seem like a dull accomplishment. Yet for thousands of years, going back to Hammurabi’s Babylon, ledgers have been the bedrock of civilization. That's because the exchanges of value on which society is founded require us to trust each other’s claims about what we own, what we’re owed, and what we owe. To achieve that trust, we need a common system for keeping track of our transactions, a system that gives definition and order to society itself. How else would we know that Jeff Bezos is the world's richest human being, that the GDP of Argentina is $620 billion, that 71 percent of the world's population lives on less than $10 a day, or that Apple's shares are trading at a particular multiple of the company’s earnings per share?

      A blockchain (though the term is bandied about loosely, and often misapplied to things that are not really blockchains) is an electronic ledger—a list of transactions. Those transactions can in principle represent almost anything. They could be actual exchanges of money, as they are on the blockchains that underlie cryptocurrencies like Bitcoin. They could mark exchanges of other assets, such as digital stock certificates. They could represent instructions, such as orders to buy or sell a stock. They could include so-called smart contracts, which are computerized instructions to do something (e.g., buy a stock) if something else is true (the price of the stock has dropped below $10).

      What makes a blockchain a special kind of ledger is that instead of being managed by a single centralized institution, such as a bank or government agency, it is stored in multiple copies on multiple independent computers within a decentralized network. No single entity controls the ledger. Any of the computers on the network can make a change to the ledger, but only by following rules dictated by a “consensus protocol,” a mathematical algorithm that requires a majority of the other computers on the network to agree with the change.

      Once a consensus generated by that algorithm has been achieved, all the computers on the network update their copies of the ledger simultaneously. If any of them tries to add an entry to the ledger without this consensus, or to change an entry retroactively, the rest of the network automatically rejects the entry as invalid.

      Typically, transactions are bundled together into blocks of a certain size that are chained together (hence “blockchain”) by cryptographic locks, themselves a product of the consensus algorithm. This produces an immutable, shared record of the “truth,” one that—if things have been set up right—cannot be tampered with….
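The hash-chaining described above can be sketched in a few lines of Python. This toy ledger (an illustration, not any real blockchain's code) omits the consensus protocol and the network entirely, keeping only the property the excerpt emphasizes: each block's cryptographic lock commits to the previous block, so a retroactive edit invalidates everything after it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block whose hash commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev, "transactions": transactions}
    chain.append({**body, "hash": block_hash(body)})

def verify(chain: list) -> bool:
    """Re-derive every hash; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for b in chain:
        body = {"prev_hash": b["prev_hash"], "transactions": b["transactions"]}
        if b["prev_hash"] != prev or block_hash(body) != b["hash"]:
            return False
        prev = b["hash"]
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
assert verify(chain)

# Tampering with an earlier entry is detected immediately.
chain[0]["transactions"][0]["amount"] = 500
assert not verify(chain)
```

In a real system, the missing piece is exactly what the excerpt describes: a consensus protocol by which the network's computers agree on which chain of hashes is the valid one.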

      The benefits of this decentralized model emerge when weighed against the current economic system's cost of trust. Consider this: In 2007, Lehman Brothers reported record profits and revenue, all endorsed by its auditor, Ernst & Young. Nine months later, a nosedive in those same assets rendered the 158-year-old business bankrupt, triggering the biggest financial crisis in 80 years. Clearly, the valuations cited in the preceding years’ books were way off. And we later learned that Lehman’s ledger wasn’t the only one with dubious data. Banks in the US and Europe paid out hundreds of billions of dollars in fines and settlements to cover losses caused by inflated balance sheets. It was a powerful reminder of the high price we often pay for trusting centralized entities’ internally devised numbers.

      The crisis was an extreme example of the cost of trust. But we also find that cost ingrained in most other areas of the economy. Think of all the accountants whose cubicles fill the skyscrapers of the world. Their jobs, reconciling their company’s ledgers with those of its business counterparts, exist because neither party trusts the other’s record. It is a time-consuming, expensive, yet necessary process.

      …Might this blind spot explain why some prominent economists are quick to dismiss blockchain technology? Many say they can’t see the justification for its costs. Yet their analyses typically don't weigh those costs against the far-reaching societal cost of trust that the new models seek to overcome….

      Although there are still major obstacles to overcome before blockchains can fulfill the promise of a more robust system for recording and storing objective truth, these concepts are already being tested in the field. Companies such as IBM and Foxconn are exploiting the idea of immutability in projects that seek to unlock trade finance and make supply chains more transparent….

      What makes these programmable money contracts “smart” is not that they’re automated; we already have that when our bank follows our programmed instructions to autopay our credit card bill every month. It’s that the computers executing the contract are monitored by a decentralized blockchain network. That assures all signatories to a smart contract that it will be carried out fairly.

      With this technology, the computers of a shipper and an exporter, for example, could automate a transfer of ownership of goods once the decentralized software they both use sends a signal that a digital-currency payment—or a cryptographically unbreakable commitment to pay—has been made. Neither party necessarily trusts the other, but they can nonetheless carry out that automatic transfer without relying on a third party. In this way, smart contracts take automation to a new level—enabling a much more open, global set of relationships.
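The "if X then Y" core of such a contract can be sketched as below. The `EscrowContract` class and its threshold logic are illustrative inventions, not any real platform's API; on an actual blockchain, every node in the network would run this settlement logic and cross-check the result.

```python
from dataclasses import dataclass

@dataclass
class EscrowContract:
    """Toy escrow: release goods only once full payment is observed."""
    price: float
    paid: float = 0.0
    goods_released: bool = False

    def record_payment(self, amount: float) -> None:
        """Record an observed payment, then re-run settlement."""
        self.paid += amount
        self._settle()

    def _settle(self) -> None:
        # The conditional heart of a smart contract:
        # if payment condition is met, execute the transfer.
        if self.paid >= self.price and not self.goods_released:
            self.goods_released = True

contract = EscrowContract(price=100.0)
contract.record_payment(40.0)
assert not contract.goods_released   # partial payment: nothing happens
contract.record_payment(60.0)
assert contract.goods_released       # threshold met: transfer executes
```

The design point is that no single party's computer decides when `_settle` fires; the decentralized network monitoring the contract does.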

      Programmable money and smart contracts constitute a powerful way for communities to govern themselves in pursuit of common objectives. They even offer a potential breakthrough in the “Tragedy of the Commons,” the long-held notion that people can't simultaneously serve their self-interest and the common good….

Evidence for Opportunity

[These excerpts are from an article by Michael J. Feuer in the April 27, 2018, issue of Science.]

      “Our nation is moving toward two societies, one black, one white—separate and unequal.” So concluded a 1968 report by the Kerner Commission, established by U.S. President Lyndon Johnson to investigate the race riots of 1967. Not only did the report shine a spotlight on America’s unfulfilled promises, it spurred action by politicians and policy-makers. Fifty years later, it is fair—and necessary—to ask if anything has changed. Healing Our Divided Society, the 2018 sequel to the Kerner report, argues sadly that gains of the 1970s and early 1980s are evaporating or reversing. But, noting the role of empirical evidence in bolstering past reforms, the new report suggests hopefully that “the quantity and sophistication of scientific information available today far exceeds what was available [in 1968].”

      To be sure, there was important progress after the 1968 report: Education achievement gaps narrowed (mostly in the early grades), college participation and degree attainment rose for all groups, and average family wealth for black and Hispanic Americans increased. But the current picture is alarming: Income inequality has exploded; child poverty is unacceptably high, especially in racially concentrated neighborhoods; and black children face considerably lower chances of upward mobility than their white peers.

      …The U.S. now has more and better policy-relevant research and evidence that can help move the needle….investments in programs such as Head Start, coupled with sustained K-12 funding, can break the cycle of poverty.

      A key takeaway from such examples is that political will is necessary but insufficient without empirical evidence. The question is whether we can be confident in the supply of good research, in renewed political commitment, and in a revived appetite for evidence-informed policy at all levels of government.

      Let’s hope so. To be prepared, we must address worrisome trends. After decades of federal funding, the U.S. has a robust supply of doctoral-level scientists in education and related fields, but federal resources for their continuing work are meager and politically vulnerable….Private foundations mostly advance the public good, but few support general education research and some put advocacy ahead of evidence…

      …Congress should increase funding for behavioral and social sciences, and governments at all levels should consider new approaches to accessing evidence….

      If there is hope for restoring economic and educational opportunity, research is essential. The proven tradition of relying on science to make the world better cannot end on our watch.

False News Flies Faster

[This article by Peter Dizikes is in the May/June 2018 issue of Technology Review.]

      “We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude,” says Sinan Aral, a professor at the MIT Sloan School of Management and coauthor of a paper detailing the results in Science.

      “These findings shed new light on fundamental aspects of our online communication ecosystem,” says study coauthor Deb Roy, an associate professor of media arts and sciences at the MIT Media Lab and director of the Media Lab’s Laboratory for Social Machines (LSM). Roy, who served as Twitter’s chief media scientist from 2013 to 2017, adds that the researchers were “somewhere between surprised and stunned” at the different trajectories of true and false news on Twitter.

      To conduct the study, the researchers tracked roughly 126,000 “cascades,” or unbroken retweet chains, of news stories cumulatively tweeted over 4.5 million times by about three million people from 2006 to 2017. To determine whether stories were true or false, they used the assessments of six fact-checking organizations.

      The researchers found that false news stories are 70 percent more likely to be retweeted than true stories are. It also takes true stories about six times as long to reach 1,500 people as it does for false stories to reach the same number of people. And falsehoods reach a “cascade depth” of 10 about 20 times faster than real facts do.
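The "cascade depth" metric can be illustrated with a toy retweet tree. The accounts and the `cascade_depth` helper below are hypothetical, purely for illustration, and are not the study's actual code: the original tweet sits at depth 0, and depth counts the longest unbroken chain of retweets below it.

```python
def cascade_depth(parent: dict, root: str) -> int:
    """Longest retweet chain from the original tweet (depth 0) to a leaf.

    `parent` maps each retweeting account to the account it retweeted from.
    """
    # Invert the parent map into children lists.
    children: dict = {}
    for child, p in parent.items():
        children.setdefault(p, []).append(child)

    def depth(node: str) -> int:
        kids = children.get(node, [])
        return 0 if not kids else 1 + max(depth(k) for k in kids)

    return depth(root)

# Hypothetical cascade: A posts; B and C retweet A; D retweets C.
retweets = {"B": "A", "C": "A", "D": "C"}
assert cascade_depth(retweets, "A") == 2
```

So a "cascade depth of 10" means a story was passed along a chain of at least ten successive retweeters, not merely seen by ten people.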

      Moreover, the scholars found, bots are not the principal reason inaccurate stories get around so much faster and farther than real news. Instead, inaccurate news items spread faster around Twitter because people are retweeting them.

      “When we removed all of the bots in our data set, [the] differences between the spread of false and true news stood,” says LSM postdoc and paper coauthor Soroush Vosoughi, whose PhD research with Roy on the spread of rumors led to the current study.

      So why do falsehoods spread more quickly than the truth on Twitter? The scholars suggest the answer may reside in human psychology: we like new things, and false news is often accompanied by reactions of surprise.

      “False news is more novel, and people are more likely to share novel information,” Aral says.

Biases in Forensic Experts

[These excerpts are from an article by Itiel E. Dror in the April 20, 2018, issue of Science.]

      Forensic evidence plays a critical role in court proceedings and the administration of justice. It is a powerful tool that can help convict the guilty and avoid wrongful conviction of the innocent. Unfortunately, flaws in forensic evidence are increasingly becoming apparent. Assessments of forensic science have too often focused only on the data and the underlying science, as if they exist in isolation, without sufficiently addressing the process by which forensic experts evaluate and interpret the evidence. After all, it is the forensic expert who observes the data and makes interpretations, and therefore forensic evidence is mediated by human and cognitive factors. A U.S. National Research Council examination of forensic science in 2009, followed by a 2016 evaluation by a presidential panel, along with a U.K. inquiry into fingerprinting in 2011 and a 2015 guidance by the U.K. Forensic Science Regulator, have all expressed concerns about biases in forensic expert decision-making. Where does forensic bias come from, and how can we minimize it?

      Forensic experts are too often exposed to irrelevant contextual information, largely because they work with the police and prosecution. Extraneous information—from a suspect's ethnicity or criminal record to eyewitness identifications, confessions, and other lines of evidence—can potentially cause bias. This can give rise to conclusions that are incorrect or overstated, rather than what forensic decisions should be: impartial decisions, appropriately circumscribed by what the evidence actually supports. A consequence of cognitive biases is that science is misused, and sometimes even abused, in court. Not only can irrelevant information bias a particular aspect of an investigation, it often causes “bias cascade” from one component of an investigation to another and “bias snowball,” whereby the bias increases in strength and momentum as different components of an investigation influence one another. Bias also arises when forensic experts work backward: Rather than having the evidence drive the forensic decision-making process, experts work from the target suspect to the evidence.

      …many forensic experts have a “bias blind spot” to these implicit biases and therefore tend to deny their existence. Forensic experts frequently present their decisions to the court with great confidence and then incorrectly take the court's acceptance of their findings as confirmation that they have not been biased or made a mistake. Acknowledging that bias can influence forensic science experts would be a substantial step toward implementing countermeasures that could greatly improve forensic evidence and the fair administration of justice.

      If we want science to serve society, then it must be properly used in the halls of justice.

Plant Responses to CO2 Are a Question of Time

[These excerpts are from an article by Mark Hovenden and Paul Newton in the April 20, 2018, issue of Science.]

      Rising carbon dioxide (CO2) concentrations in the atmosphere as a result of fossil fuel burning are expected to fertilize plants, resulting in faster growth. However, this change is not expected to be the same for all plants. Rather, scientists believe that differences in photosynthetic mechanism favor one plant group—the C3 plants—over the other, the C4 plants….

      In 1966, Hatch and Slack found that some plants have a distinct mechanism for assimilating CO2 from the atmosphere. This has profound ecological consequences. The ancestral method of photosynthesis combines CO2 with a five-carbon molecule to produce two identical three-carbon molecules. However, the enzyme that catalyzes this reaction also combines the same five-carbon compound with dioxygen, thus reducing the carbon assimilation rate and, hence, growth. Hatch and Slack discovered that some plants can avoid this by first combining CO2 from the atmosphere with a three-carbon molecule, producing a four-carbon molecule as the first stable product in photosynthesis. Plants with this pathway are known as C4 plants, distinguishing them from those with the ancestral pathway, termed C3 plants.

      Although only about 3% of the global plant species are C4 plants, they play a crucial role in many ecosystems, particularly savannas and grasslands. C4 species contribute 25% of land biomass globally, provide forage for animals in both natural (for example, Serengeti) and managed (for example, Great Plains) grasslands, and contribute 14 of the world’s 18 worst weeds. It is clearly important, therefore, to predict the future distribution and abundance of C4 plants.

      …The C4 method of photosynthesis appears to have evolved during a period of declining atmospheric CO2 concentration and allows the plants to use CO2 more efficiently than C3 species. Today, the atmospheric CO2 concentration is higher than at any other time in the past 500,000 years and continues to rise. Most scientists expect C3 plants to benefit from this additional CO2 and outcompete C4 species, because C3 photosynthesis increases in efficiency with increasing CO2 concentration to a far greater extent than does C4 photosynthesis….

The Suns in Our Daughters

[These excerpts are from an article by Lisa Einstein in the May 2018 issue of Scientific American.]

      …Aissatou is a designer: she builds, plays and imagines. I observe her ingenuity with awe.

      I see Aissatou the way my parents saw me: filled with unlimited potential. My parents called their four kids “their greatest collaboration” and helped us grow into our fullest selves. Knowing the challenges facing young women in physics, Dad went out of his way to fuel my passion. Once he drove me six hours to a lecture by a female physicist. His encouragement emboldened me to dive into a challenging field dominated by men.

      Aissatou, on the other hand, has been taught that she should be dominated by men. When male visitors arrive at her house, the jubilant builder I know transforms into a meek and submissive servant, bowing as she acquiesces to their every request.

      The difference? I won the lottery at birth: time, place and parents who gave me the chance to develop my passions. I am on a mission to give Aissatou and Binta the chance to do the same.

      I think about the untapped potential of millions of girls like Aissatou and Binta, who lack opportunities because of custom, poverty, laws or terrorist threats. The gifted young women I’ve taught as a Peace Corps volunteer implementing the Let Girls Learn program have strengthened my conviction that it is possible for them to fulfill their promise through education. And educating girls is not only morally right but also provides a cornerstone of achieving a peaceful and prosperous future….

      Do you want to know something exciting I learned? Mass-energy equivalence means that the solar energy striking the earth each second equals only four pounds of mass. That means a small girl of 40 pounds could unleash the energy of 10 suns shining on the earth in a second. Take the 132 million girls who are not in school, and we have 1.32 billion suns in our daughters.
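The arithmetic here checks out under standard assumptions (solar constant of about 1361 W/m² at the top of the atmosphere, Earth radius of about 6371 km), using mass-energy equivalence, E = mc²:

```python
import math

# Assumed physical inputs (not from the article itself).
C = 2.998e8                 # speed of light, m/s
SOLAR_CONSTANT = 1361.0     # W/m^2 at top of atmosphere
EARTH_RADIUS = 6.371e6      # m
KG_PER_LB = 0.4536

# Sunlight intercepted by Earth's cross-section, per second.
power = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2   # ~1.7e17 J each second

# Mass equivalent of one second of intercepted sunlight: E / c^2.
mass_equivalent_lb = power / C**2 / KG_PER_LB        # ~4.3 lb

# Rest-mass energy of a 40-lb child, in "seconds of sunlight on Earth."
girl_energy = 40 * KG_PER_LB * C**2
suns = girl_energy / power                           # ~9.4
```

So "four pounds" and "ten suns" are round numbers, but they are the right order of magnitude: roughly 4.3 lb of mass-energy strikes Earth each second, and a 40-lb girl's rest mass is equivalent to about nine to ten seconds of that.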

      How will we help them rise?

End the War on Weed

[These excerpts are from an editorial in the May 2018 issue of Scientific American.]

      Like the failed Nixon-era War on Drugs, this resurgent war on marijuana is ill informed and misguided. Evidence suggests that cannabis—though not without its risks—is less harmful than legal substances such as alcohol and nicotine. And despite similar marijuana use among blacks and whites, a disproportionate number of blacks are arrested for it. By allowing states to regulate marijuana without federal interference, we can ensure better safety and control while allowing for greater research into its possible harms and benefits….

      That does not mean that marijuana is entirely benign. Studies suggest it can impair driving, and a subset of users develops a form of dependence called marijuana use disorder. Other research indicates that teenage marijuana use may adversely impact the developing brain: it has been linked to changes in neural structure and function, including lower IQ, as well as an increased risk of psychosis in vulnerable individuals. But some of these findings have been challenged. A pair of longitudinal twin studies, for example, found no significant link between marijuana use and IQ. Moreover, people with these brain characteristics may simply be more likely to use marijuana in the first place.

      We are not advocating for unfettered access to marijuana, especially by adolescents. More large-scale, randomized controlled studies are needed to tease out the risks and benefits. But to do these kinds of studies, scientists must have access to the drug, and until very recently, the federal government has had a monopoly on growing cannabis for research purposes. We also need more research on the various, often more potent, marijuana strains grown for recreational use. As long as the federal government continues to crack down on state-level legal marijuana, it will be difficult to carry out such studies….

      It is time to stop treating marijuana like a deadly drug, when science and public opinion agree that it is relatively safe for adult recreational use. The last thing we need is another expensive and ineffective war on a substance like cannabis—especially when there are far more serious drug problems to tackle.

Adapting to Life in the Big City

[This excerpt of a book review by Arne Mooers is in the April 13, 2018, issue of Science.]

      Metal-excreting pigeons, pigeon-eating catfish, cigarette-wielding sparrows, soprano-voiced great tits: The modern city is a fantastical menagerie of the odd and unexpected. Through a series of 20 short but connected chapters that mix natural history vignettes, interviews with visionary scientists, and visits to childhood haunts, science journalist and biology professor Menno Schilthuizen introduces readers to the striking facts of ongoing urban evolution in Darwin Comes to Town. But while the prose may be playful (“Cut to the Hollywood bobcats”), the underlying message may cause discomfort.

      Two cross-cutting ideas permeate the book. The first is the notion of rampaging sameness. Because we are incessant, but messy, busybodies, Schilthuizen argues, we scatter species across countries and continents. And we move most among cities.

      The author does a fine job of conveying this urban sameness when describing a scene along an estuary in Singapore: the house crows and the mynas feeding in the cow grass, the apple snails laying eggs among the mimosa, the red-eared slider turtles dipping into the water, and the peacock bass breaking the surface for a gulp of air. Every one of the species he describes is a non-native, every one is found in countless other cities the world over, and every one is at home in its new habitat. Schilthuizen has even borrowed a name from parasitology for them: anthropophiles.

      And the reason for this biological sameness is urban sameness. Cities around the world produce the same sorts of garbage and the same sorts of noise, house the same sorts of skyscrapers, and produce the same fragmented landscapes. They can even generate the same sort of weather via particulate pollution and the heat-island effect.

      The book’s second major theme is that rapid change is an enduring part of the urban environment. Urban plants and animals evolve and adapt to their novel surroundings at remarkable speed. The city pigeons’ darker, more melanic feathers, for example, sequester poisonous metals; the great tit’s new soprano notes are better heard above the city din; and city moths in Europe have become less attracted to deadly artificial lights.

      Indeed, the realization that adaptive evolutionary change occurring on human time scales in multicellular species is common, rather than rare, is both fairly new and fairly profound. The ubiquity of the phenomenon has even given rise to a new field known as eco-evolutionary dynamics.

      It is now clear that adaptation can be so fast as to affect the very environment that sets the stage for those adaptations, leading to possible merry-go-rounds of organism-environment-organism changes through time. The implications of this are still not fully known, but it’s safe to assume that this is not what Darwin envisioned from his seat in the Kent countryside. (Perhaps he should have come up to the city more often.)….

How Cleaner Air Changes the Climate

[These excerpts are from an article by Elizabeth Pennisi in the April 13, 2018, issue of Science.]

      Human influence on the climate is a tug-of-war, with greenhouse gas-induced warming being held partly in check by cooling from aerosol emissions. In a Faustian bargain, humans have effectively dampened global climate change through air pollution. Increased greenhouse gas concentrations from fossil fuel use are heating the planet by trapping heat radiation. At the same time, emissions of aerosols—particles that make up a substantial fraction of air pollution—have an overall cooling effect by reflecting incoming sunlight. The net effect of greenhouse gases and aerosols is the ~1°C of global warming observed since 1880 CE. The individual contributions of greenhouse gases and aerosols are, however, much more uncertain. Recent climate model simulations indicate that without anthropogenic aerosols, global mean surface warming would be at least 0.5°C higher, and that in their absence there would also be a much greater precipitation change….

      Since 1990, there has been little change in the global volume of anthropogenic aerosol emissions. Regionally, however, there are large differences, with reductions in Europe and the United States balanced by increases in Africa and Asia….

Human Mutation Rate a Legacy from Our Past

[These excerpts are from an article by Elizabeth Pennisi in the April 13, 2018, issue of Science.]

      Kelley Harris wishes humans were more like paramecia. Every newborn’s DNA carries more than 60 new mutations, some of which lead to birth defects and disease, including cancers. “If we evolved parameciumlike replication and DNA repair processes, that would never happen,” says Harris….Researchers have learned that these single-cell protists go thousands of generations without a single DNA error—and they are figuring out why human genomes seem so broken in comparison.

      The answer, researchers reported at the Evolution of Mutation Rate workshop here late last month, is a legacy of our origins. Despite the billions on Earth today, humans numbered just thousands in the early years of our species. In large populations, natural selection efficiently weeds out deleterious genes, but in smaller groups like those early humans, harmful genes that arise—including those that foster mutations—can survive.

      Support comes from data on a range of organisms, which show an inverse relationship between mutation rate and ancient population size. This understanding offers insights into how cancers develop and also has implications for efforts to use DNA to date branches on the tree of life.

      Mutations occur, for example, when cells copy their DNA incorrectly or fail to repair damage from chemicals or radiation. Some mistakes are good, providing variation that enables organisms to adapt. But some of these genetic mistakes cause the mutation rate to rise, thus fostering more mutations.

      For a long time, biologists assumed mutation rates were identical among all species, and so predictable that they could be used as “molecular clocks.” By counting differences between the genomes of two species or populations, evolutionary geneticists could date when they diverged. But now that geneticists can compare whole genomes of parents and their offspring, they can count the actual number of new mutations per generation.
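A back-of-the-envelope version of such a molecular clock can make the logic concrete. The sequences and the per-site rate below are invented purely for illustration; the factor of 2 appears because mutations accumulate along both diverging lineages.

```python
def divergence_generations(seq_a: str, seq_b: str, mu_per_site: float) -> float:
    """Generations since two aligned sequences shared a common ancestor.

    Assumes equal-length aligned sequences, a constant per-site mutation
    rate per generation, and no site mutating more than once.
    """
    assert len(seq_a) == len(seq_b)
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    p = diffs / len(seq_a)          # per-site divergence
    return p / (2 * mu_per_site)    # changes accrue on both lineages

# Toy alignment: 2 differences over 10 sites, rate 0.01 per site per generation.
t = divergence_generations("ACGTACGTAC", "ACGAACGTTC", 1e-2)
assert t == 10.0   # 0.2 / (2 * 0.01)
```

The workshop's point is that the `mu_per_site` term in such calculations is not a universal constant: if mutation rates themselves evolved with population size, clock-based dates need revisiting.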

      That has enabled researchers to measure mutation rates in about 40 species, including newly reported numbers for orangutans, gorillas, and green African monkeys. The primates have mutation rates similar to humans….But…bacteria, paramecia, yeasts, and nematodes—all of which have much larger populations than humans—have mutation rates orders of magnitude lower.

      The variation suggests that in some species, genes that cause high mutation rates—for instance, by interfering with DNA repair—go unchecked. In 2016, Lynch detailed a possible reason, which he calls the drift barrier hypothesis.… Genetic drift plays a bigger role in smaller populations. In large populations, harmful mutations are often counteracted by later beneficial mutations. But in a smaller population with fewer individuals reproducing, the original mutation can be preserved and continue to do damage.

      Today, 7.6 billion people inhabit Earth, but population geneticists focus on the effective population size, which is the number of people it took to produce the genetic variation seen today. In humans, that's about 10,000—not so different from that of other primates. Humans tend to form even smaller groups and mate within them. In such small groups, Harris says, “we can’t optimize our biology because natural selection is imperfect”….

      …Among Europeans, the excess cytosine to thymine mutations existed in early farmers but not in hunter-gatherers, she reported. She speculates that these farmers’ wheat diet may have led to nutrient deficiencies that predisposed them to a mutation in a gene that in turn favored the cytosine-to-thymine changes, suggesting environment can lead to changes in mutation rate. Drift likely played a role in helping the mutation-promoting gene stick around.

What We All Need to Know about Vaping

[These excerpts are from an article by Susan Leonard in the April 2018 issue of Phi Delta Kappan.]

      …Among these experts’ many concerns was the fact that most delivery devices contain large concentrations of propylene glycol, which is a known irritant when inhaled. Little is known about the effect of long-term inhalation of this chemical. Additionally, because these devices are unregulated, users have no idea what other chemicals they may be inhaling, nor what the short- or long-term effects of that exposure are.

      …Nicotine causes the release of adrenaline, which elevates the heart rate, increases blood pressure, and constricts blood vessels, potentially leading to long-term heart problems. The effect of vaping is almost immediate, but it also wears off quickly and encourages the user to want to vape again and again, needing more and more to feel the same effects.

      Candidly, it is hard for me to believe that this industry is truly trying to do good by helping smokers wean themselves off harmful tobacco cigarettes when they market the devices with fruity flavors and pack their juices with more nicotine than cigarettes are allowed to contain under Food and Drug Administration regulations. Nicotine is highly and quickly addicting, and addiction creates a lucrative product, especially if those users are young.

      During adolescence, the brain is sensitive to novel experiences. Unfortunately, just as young people want most to experiment with new and risky behaviors, their immature brains have very different sensitivities to drugs and alcohol. Exposure to nicotine during this stage can lead to long-term changes in neurology and behavior. Chronic nicotine exposure during adolescence also alters the subsequent response of the serotonin system. Such alterations are often permanent….

      We should also not ignore the gateway effect nicotine has on its users. In 2012, nearly 90% of U.S. adults 18-34 years of age who had used cocaine had smoked cigarettes (i.e., used nicotine) first. Behavioral experiments have proven that nicotine “primes” the brain to enhance the effects of cocaine. So, it's not just that one risky behavior leads to another — nicotine actually affects how the brain works, making other drugs (most commonly marijuana and cocaine) more pleasurable and desirable. And the younger a person is when beginning to use drugs and alcohol, the more likely it is that person will move on to other drugs and/or develop a serious addiction ….

      …Vaping is not just smokeless smoking – it’s a real and present danger to our students.

A Wider Vision of Learning

[These excerpts are from an article by Elliot Washor in the April issue of Phi Delta Kappan.]

      Today, school leaders tend to be fixated on using big data to crunch a narrow set of numbers, rather than actually thinking big — and deep and broad —about learning. And the more sophisticated the technology they apply (or misapply) to the same handful of indicators, the less clearly they see their students. They use test results to assign learners to groups so that schools can provide “appropriate” interventions, but they don’t actually know very much about the varying talents and interests of the individuals they put into those groups, nor do they know much about the personal struggles they may be facing that can profoundly affect their performance.

      Further, they may not be able to imagine other ways of assessing students, or — consistent with Abraham Maslow’s observation that a fear of knowing is a fear of doing — they may lack the courage to look more deeply at the talent that sits before them. After all, if they did so, then they might feel obliged to act on what they learn, and this could interfere with their ability to complete the tasks they’ve been assigned. Worse yet, if educators allowed themselves to know more about their students, then they might be forced to acknowledge that the whole system needs to be redesigned.

      But why, given its rapidly expanding ability to collect and analyze seemingly endless streams of data, does the educational system remain so narrowly fixated on the same few indicators? Why can’t we use the power of big data to collect different and better measures that look more broadly and deeply at the things students can and want to do, not just in the classroom but also outside of school?

      …Traditional indicators and measures are at best incomplete; often, they are perniciously inaccurate, even as they delude us into believing we actually know the individual learner. A student might ace an interim standardized test, indicating they are on track to succeed, while also facing significant issues at home that could increase their likelihood of dropping out of school before graduation. Or a student might be engaged in pursuing a passion that presents valuable learning opportunities but does not necessarily improve their grades and test scores. We can’t understand students unless we expand our vision.

Do Students’ High Scores on International Assessments Translate to Low Levels of Creativity?

[This excerpt is from an article by Stefan Johansson in the April 2018 issue of Phi Delta Kappan.]

      Although it’s true that test scores have been misused, Zhao’s critique of PISA — arguing that high PISA scores in East Asian countries are related to low levels of creativity — is difficult, if not impossible, to support. Certainly, nobody other than Zhao has been able to establish any causal relationships between increasing scores on international large-scale assessments and decreasing levels of innovation across countries.

      To back up his thesis, Zhao compares the mathematics scores from the 2009 PISA with results of the Global Entrepreneurship Monitor (GEM) study, which is a survey of, among other things, perceived levels of entrepreneurial capability (i.e., an individual’s confidence in his or her ability to succeed in entrepreneurship) in a wide range of countries. At first glance, the comparison is striking — countries with high scores on PISA (such as Japan, Korea, and Singapore) all had low scores on their perceived entrepreneurial capacity. At the same time, countries that performed in the middle of the pack on PISA (such as Sweden and the U.S.) ranked fairly high on entrepreneurial capacity, and one of PISA’s lowest performers (the United Arab Emirates) reported the highest entrepreneurial capacity of all.

      Zhao finds this pattern to be evidence of a statistically significant relationship, showing that countries with high PISA scores tend to be less innovative. Moreover, he asserts, many Chinese and Singaporeans themselves “blame their education for their shortage of creative and entrepreneurial talents,” and states that if they’re correct, then the relationship “could be causal.” That is, Zhao appears to be arguing that the way these countries teach math causes their students not only to do well on tests but also to become less creative. Thus, he concludes, it’s a mistake for U.S. policy makers to pursue reforms that make their schools more like those in East Asia. If they continue to push for more emphasis on standardization, test taking, and highly rigorous academic work, then students’ creativity will be seriously harmed, and the American workforce will become less innovative.

      But on closer inspection, this argument turns out to be pretty weak. It may be true that Singapore and China haven’t produced many world-famous musicians, but other than that, there isn’t much evidence that East Asians suffer from a lack of creativity or, if they do, that it has anything to do with their PISA scores.

      Zhao’s reliance on self-reported levels of entrepreneurial capacity is especially problematic. For one thing, the construct is not well defined, making it difficult to interpret people's self-assessments. For another, it is unclear why entrepreneurial abilities should be equated with creativity, or why creativity should be distinguished from mathematical proficiency. Actually, mathematical reasoning and problem solving are often described as deeply creative activities. For example, Haylock argued that there are at least two major ways in which the term creativity is used in the context of mathematics: 1) thinking that is divergent and overcomes fixation and 2) the thinking behind a product that is perceived as outstanding by a large group of people. Further, creativity is associated with long periods of work and reflection rather than rapid and unique insights.

      Perhaps more important, the context is so different from one country to another that it may not be possible to compare those self-assessments at all, or to know what to make of results that seem to show that people in East Asia are less innovative than their counterparts in the West. For example, it can be very difficult to start a business or pursue other forms of entrepreneurship in a country that is tightly governed by the state, while it is relatively easy to do so in the U.S. or Sweden — one would expect such differences to affect the GEM results.

Avoiding Difficult History

[This excerpt is from a ‘worthy item’ in the April 2018 issue of Phi Delta Kappan.]

      The Southern Poverty Law Center (SPLC) has found that U.S. students are receiving an inadequate education about the role of slavery in American history. Surveys of high school seniors and social studies teachers, analysis of state content standards, and a review of history textbooks revealed what the SPLC considers seven key problems with current practice:

      1. We teach about slavery without context, preferring feel-good stories of heroes like Harriet Tubman over the more difficult story of the role of slave labor in building the nation.

      2. We subscribe to a view of history that acknowledges flaws only to the extent that they have been solved and avoids exploration of the continuing legacy of those flaws.

      3. We teach about slavery as an exclusively southern institution even though it existed in all states when the Declaration of Independence was signed.

      4. We rarely connect slavery to the White supremacist ideology that grew up to protect it.

      5. We rely on pedagogy, such as simulations, that is poorly suited to the subject and potentially traumatizing.

      6. We rarely connect slavery to the present, or even historical events such as the Great Migration and the Harlem Renaissance.

      7. We tend to foreground the White experience by focusing on the political and economic impacts of slavery.

      The report identifies 10 key concepts that should be incorporated into instruction on slavery in the U.S. These include the facts that the slave trade was central to the growth of the U.S. economy and was the chief cause of the Civil War….

Edge of Extinction

[These excerpts are from an article by Sanjay Kumar in the April 6, 2018, issue of Science.]

      The first dinosaur fossils found in Asia, belonging to a kind of sauropod, were unearthed in 1828 in Jabalpur, in central India’s Narmada Valley. Ever since, the subcontinent has yielded a stream of important finds, from some of the earliest plant remains through the reign of dinosaurs to a skull of the human ancestor Homo erectus….

      Much of that fossil richness reflects India’s long, solitary march after it broke loose from the supercontinent Gondwanaland, starting some 150 million years ago. During 100 million years of drifting, the land mass acquired a set of plant and animal species, including many dinosaurs, that mix distinctive features with ones seen elsewhere. Then, 50 million to 60 million years ago, India began colliding with Asia, and along the swampy edges of the vanishing ocean between the land masses, new mammals emerged, including ancestral horses, primates, and whales.

      Now, that rich legacy is colliding with the realities of present-day India. Take a site in Himachal Pradesh state where, in the late 1960s, an expedition by Panjab University and Yale University excavated a trove of hominoid fossils, including the most complete jaw ever found of a colossal extinct ape, Gigantopithecus bilaspurensis. The discovery helped flesh out a species known previously only through teeth and fragmentary jaws. Today’s paleontologists would love to excavate further at the site, Sahni says. But it “has been completely flattened”—turned into farm fields, with many of its fossils lost or sold. To India’s paleontologists, that is a familiar story.

      In the early 1980s, for example, blasting at a cement factory in Balasinor in Gujarat revealed what the workers believed were ancient cannon balls. A team led by Dhananjay Mohabey, a paleontologist then at the Geological Survey of India in Kolkata, realized they were dinosaur eggs. Mohabey and his colleagues soon uncovered thousands more in hundreds of nests, as well as many other fossils. Examining one Cretaceous period clutch in 2010, Jeffrey Wilson of the University of Michigan in Ann Arbor discerned what appeared to be snake bones. He and Mohabey recovered more fossil fragments and confirmed that a rare snake (Sanajeh indicus) had perished while coiled around a dinosaur egg. It was the first evidence, Mohabey says, of snakes preying on dinosaur hatchlings.

      Mohabey and others have since documented seven dinosaur species that nested in the area. (In a separate find in Balasinor, other researchers unearthed the skeleton of a horned carnivore called Rajasaurus narmadensis — the royal Narmada dinosaur.) But locals and visitors soon began pillaging the sites. In the 1980s, dinosaur eggs were sold on the street for pennies.

      In 1997, local authorities designated 29 hectares encompassing the nesting sites as the Balasinor Dinosaur Fossil Park in Raiyoli. But poaching continued largely unabated in the park and outside its boundaries, Mohabey says. Even now, the park is not fully fenced and the museum building, ready since 2011, is still not open….

United States to Ease Car Emission Rules

[This news brief by Jeffrey Brainard is in the April 6, 2018, issue of Science.]

      U.S. President Donald Trump’s administration last week announced it intends to roll back tough auto mileage standards championed by former President Barack Obama to combat climate change. The standards, released in 2012, called for doubling the average fuel economy of new cars and light trucks, to 23.2 kilometers per liter by 2025. The Environmental Protection Agency (EPA) estimated the rules would prevent about 6 billion tons of carbon emissions by 2025. But on 2 April, EPA Administrator Scott Pruitt said the agency would rewrite the standards, arguing that Obama’s EPA “made assumptions ... that didn’t comport with reality, and set the standards too high.” In a formal finding, Pruitt argued the standards downplay costs and are too optimistic about the deployment of new technologies and consumer demand for electric vehicles. Clean car advocates disputed many of Pruitt’s claims, noting that auto sales have been strong despite stiffer tailpipe rules. “Backing off now is irresponsible and unwarranted,” said Luke Tonachel of the Natural Resources Defense Council in Washington, D.C. Pruitt’s move could also set up a legal clash with California state regulators, who have embraced the Obama-era standards and say they want to keep them.
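[As an illustrative aside, not part of the news brief: the 23.2 kilometers per liter figure can be converted to the miles-per-gallon units usually quoted in U.S. coverage. The conversion below uses standard unit factors; it is a sketch, and the function name is mine, not from the article.]

```python
# Convert the 2025 target of 23.2 km/L into miles per gallon.
KM_PER_MILE = 1.60934       # kilometers in one statute mile
LITERS_PER_GALLON = 3.78541  # liters in one U.S. gallon

def km_per_liter_to_mpg(kmpl):
    # miles/gallon = (km/L) * (L/gal) / (km/mile)
    return kmpl * LITERS_PER_GALLON / KM_PER_MILE

print(round(km_per_liter_to_mpg(23.2), 1))  # -> 54.6
```

The result, about 54.6 miles per gallon, matches the roughly 54.5 mpg fleet-average standard widely reported for the Obama-era rules.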

Mist Hardships

[This excerpt is from the first chapter of Caesar’s Last Breath, by Sam Kean.]

      The most deadly gas outburst in history took place in Iceland in 1783, when a volcanic fissure spewed poisonous gas for eight months, ultimately releasing 7 million tons of hydrochloric acid, 15 million tons of hydrofluoric acid, and 122 million tons of sulfur dioxide. Locals called the event the Moduhardindin, or the “mist hardships,” after the strange, noxious fumes that emerged —"air bitter as seaweed and reeking of rot," one witness remembered. The mists killed 80 percent of the sheep in Iceland, plus half the cattle and horses. Ten thousand people there also died—one-fifth of the population—mostly of starvation. When the mists wafted over to England, they mixed with water vapor to form sulfuric acid, killing twenty thousand more people. The mists also killed crops across huge swaths of Europe, inducing long-term food shortages that helped spark the French Revolution six years later.

Tiny Dancers

[These excerpts are from a brief article in the Spring 2018 issue of the American Museum of Natural History Rotunda.]

      Relying on sight, scent, and touch, honey bees navigate the world—and even dance to it.

      Their antennae are highly sensitive to vibrations. They also have numerous receptors that respond to odors and other stimuli. Together, these keen senses may explain how worker bees are able to pick up and interpret the so-called “waggle dance,” which they use to share the location of food with fellow worker bees of the colony.

      While the process is not completely understood, it goes a little something like this: a successful forager uses an elaborate dance pattern to indicate both the direction of food in relation to the Sun and its distance from the hive. The dancer adjusts the direction over time to account for the movement of the Sun, as do the foragers in the field.

      The worker bees don’t actually see the waggle dance within the pitch-black hive, perhaps giving new meaning to the phrase “dancing in the dark.” Instead, the bees sense air vibrations through their antennae, which are held close to the dancing, waggling bee….

      The dance is accompanied by an olfactory message, too: pollen brought back by the returning dancing bee or regurgitated nectar conveys the scent of the food at the forage site. Finally, the richness of the nectar source is indicated by the duration of the dance. The bees don’t exactly measure the length of the dance, but the longer the bee dances, the more foragers are recruited—essentially matching the workforce to the harvest at hand.

Volcanic Eruptions

[This excerpt is from the first chapter of Caesar’s Last Breath, by Sam Kean.]

      It took Mount Saint Helens two thousand years to build up its beautiful cone and about two seconds to squander it. It quickly shrank from 9,700 feet to 8,400 feet, shedding 400 million tons of weight in the process. Its plume of black smoke snaked sixteen miles high and created its own lightning as it rose. And the dust it spewed swept across the entire United States and Atlantic Ocean, eventually circling the world and washing over the mountain again from the west seventeen days later. Overall the eruption released an amount of energy equivalent to 27,000 Hiroshima bombs, roughly one per second over its nine-hour eruption.

      With all that in mind, it’s worth noting that Mount Saint Helens was actually small beer as far as eruptions go. Although it vaporized a full cubic mile of rock, that’s only 8 percent of what Krakatoa ejected in 1883 and 3 percent of what Tambora did in 1815. Tambora also decreased sunlight worldwide by 15 percent, disrupted the mighty Asian monsoons, and caused the infamous Year Without a Summer in 1816, when temperatures dropped so much that snow fell in New England in summertime. And Tambora itself would tremble before the truly epic outbursts in history, like the Yellowstone eruption 2.1 million years ago that launched 585 cubic miles of Wyoming into the stratosphere. (This megavolcano will likely have an encore someday and will bury much of the continental United States in ash.)

Continental Drift

[These excerpts are from the first chapter of Caesar’s Last Breath by Sam Kean.]

      To say that geologists didn't embrace Wegener's theory is a bit like saying that General Sherman didn’t receive the warmest welcome in Atlanta. Geologists loathed plate tectonics, even got pleasure out of loathing it. But as more and more evidence trickled in through the 1940s and 1950s, the drifting of continental plates didn't seem so silly anymore. The balance finally tipped in the late 1960s, and in one of the most stunning reversals in science history, pretty much every geologist on earth had accepted Wegener’s ideas by 1980. The rout was so complete that nowadays we have a hard time appreciating the theory’s importance. In the same way that the theory of natural selection shored up biology, plate tectonics took a hodgepodge of facts about earthquakes, mountains, volcanoes, and the atmosphere, and fused them together into one overarching schema.

      Continental plates can sometimes shift all at once in dramatic fashion, jolts we call earthquakes. Much more commonly the plates slowly grind past one another moving at about the rate that fingernails grow. (Think about that next time you clip your nails: we’re that much closer to the Big One.) When one plate slips beneath the other, a process called subduction, the friction of this grinding produces heat, which melts the lower plate and reduces it to magma. Some of this magma disappears into the bowels of Earth; but the lighter fraction of it actually climbs back upward through random cracks in the crust, swimming toward the surface. (That’s why hunks of pumice, a volcanic rock, float when tossed into water, because pumice comes from material with such a low density.) The heat of grinding also liberates carbon dioxide from the melting plate, as well as lesser amounts of hydrogen sulfide, sulfur dioxide, and other gases, including trace amounts of nitrogen.

      Meanwhile, as hot magma pushes upward through cracks in the crust, water in the crust seeps downward through those same cracks. And here’s where things get dangerous. One key fact about gases — it comes up over and over—is that they expand when they get warmer. Related to this, the gaseous version of a substance always takes up far more space than the liquid or solid version. So when that liquid water dribbling downward meets that magma bubbling upward, the water flashes into steam and expands with supernoval force, suddenly occupying 1,700 times more volume than before. Firefighters have special reason to fear this phenomenon: when they splash cold water onto hot, hissing fires, the burst of steam in an enclosed space can flash-burn them. So it goes with volcanoes. We ogle the orange lava pouring down the slopes, but it’s gases that cause the explosions and that do most of the damage.
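[A back-of-envelope check of the 1,700-fold figure, assuming steam behaves as an ideal gas at 100°C and 1 atm; this sketch is mine, not the book’s:]

```latex
% One mole of liquid water occupies about 18 mL (molar mass 18 g/mol,
% density about 1 g/mL). Treated as an ideal gas at 373 K and 1 atm:
V_{\text{steam}} = \frac{RT}{P}
  = \frac{(0.0821\ \mathrm{L\,atm\,mol^{-1}K^{-1}})(373\ \mathrm{K})}{1\ \mathrm{atm}}
  \approx 30.6\ \mathrm{L},
\qquad
\frac{V_{\text{steam}}}{V_{\text{liquid}}}
  \approx \frac{30{,}600\ \mathrm{mL}}{18\ \mathrm{mL}} \approx 1{,}700.
```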

      Around the world roughly six hundred volcanoes are active at any moment. Most of them lie along the famous Ring of Fire around the Pacific Ocean, which rests atop several unstable plates. In the case of Mount Saint Helens, the Juan de Fuca plate off Washington State is grinding away against the North American plate, and doing so roughly a hundred miles beneath the surface. This depth leaves a heavy cap of rock over the pools of magma and thereby prevents constant venting of noxious fumes. But when one of these deep pockets does pop, there’s that much more shrapnel.

Dirty Politics

[These excerpts are from an article by Margaret Talbot in the April 2, 2018, issue of The New Yorker.]

      …[Scott] Pruitt, who is forty-nine, looked cheerful, as he generally does at public appearances. (He declined my requests for an interview.) Unlike many people who have joined the chaotic Trump Administration, he seems unconflicted about his new role, his ideological and career goals fitting together as neatly as Lego blocks. The former attorney general of Oklahoma, Pruitt ascended politically by fighting one regulation after another. In his first year at the E.P.A., he has proposed repealing or delaying more than thirty significant environmental rules. In February, when the White House announced its intention to reduce the E.P.A.'s budget by twenty-five per cent—one of the largest cuts for any federal agency—Pruitt made no objections. His schedule is dominated by meetings and speaking engagements with representatives of the industries he regulates. He has met only a handful of times with environmental groups….

      Under Pruitt, even the dirtiest forms of pollution are getting a reprieve. On February 2, 2014, as much as thirty-nine thousand tons of coal ash began spilling into the Dan River from a Duke Energy power plant in Eden, North Carolina. Like many utilities, the Dan River Steam Station had recently transitioned from coal combustion to natural gas, which is cheaper. But the plant still had waste ponds containing more than a million tons of coal ash; the ponds were separated from the river by an earthen dam. When a guard made his rounds that day, he noticed that the water level in the ponds was rapidly dropping, as though someone had opened a bathtub drain….

      Recent technological changes have caused the wastewater produced by coal-fired power plants to become even more toxic. Some of the worst wastewater is discharged by “wet scrubbers,” which remove pollutants from smokestack emissions. Holleman, the attorney at the Southern Environmental Law Center, said, “I like to say they’re using twenty-first-century technology to take pollutants out of the air and thirteenth-century technology to put it in the water. But someone told me I was insulting the thirteenth century….”

      Pruitt may parrot Trump's views, but he has a far more polished manner. In public appearances, he’s well spoken and unflaggingly polite. When conservative journalists prod him to snipe at the E.P.A.’s “lifelong bureaucrats,” he chuckles and declines the bait. In an interview with a Fort Worth radio station, Pruitt described the E.P.A.’s career employees as “hardworking folks” who, in the Obama years, had lost “their mission.” He told the Daily Signal that he was talking to career employees about “the rule of law and process and federalism,” but emphasized that he was listening to them, too.

      As administrator, Pruitt has become adept at presenting his views with bland jargon. He defends his frequent meetings with industry representatives as time spent with “stakeholders who care about outcomes.” (And he describes them as “farmers and ranchers,” not as “fossil-fuel lobbyists.”) He touts “fuel diversity,” explaining, “It’s not the job of the E.P.A. to say to the utility company in any state of the country, ‘You should choose renewables over natural gas or coal.’ . . . We need more choices, not less.” And Pruitt has adopted a favored term of the anti-regulatory right, “cooperative federalism”: putting more of the onus for environmental rule-making and enforcement on states….

      At the same time that Pruitt has been pledging to clean up some Superfund sites, he has been dismantling important Superfund regulations. In December, he announced that he would eliminate a 2016 rule requiring hard-rock-mining operations, such as gold, silver, and lead mines, to provide evidence that they had the financial resources to clean up any toxic messes that they created. The rule came about after environmental groups sued the E.P.A. over “heap-leach” mining, in which cyanide is used to extract gold from open pits. Multiple companies using this method had caused vast contaminations, then declared bankruptcy or sheltered assets. The 1980 Superfund law states that polluters, not taxpayers, must pay for remediation of disaster sites….

      One of the engineers said that it might take a while to “rebuild capacity” after Pruitt. But it would be done. The public, he reminded everyone, “is expecting us to protect the planet.” He said, “Pruitt is a temporary interloper. We are the real agency.”


[These excerpts are from an article by Jennifer A. Francis in the March 2018 issue of Scientific American.]

      …At the current rate of change, there was a real possibility that within a century, the world could witness a summer Arctic Ocean that would be ice-free, a state not seen for thousands of years. Today I am startled again because it now appears that the ocean will likely be free of summer ice by 2040—a full 60 years earlier than we had predicted little more than a decade ago.

      The Arctic is changing exactly the way scientists thought it would but faster than even the most aggressive predictions. The recent behavior is off the charts. In just three years more than a dozen climate records that had each stood for many decades have crumbled, including those for disappearing summer sea ice, decreasing winter sea ice, warming air and thawing ground.

      These trends signal trouble for people around the world. The last time the Arctic was only slightly warmer than today—about 125,000 years ago—oceans were 13 to 20 feet higher. Goodbye Miami, New Orleans, the naval base in Norfolk, Va., most of New York City and Silicon Valley, as well as Venice, London and Shanghai. New research suggests that rapid Arctic warming also tends to reroute the jet stream in ways that could allow punishing weather patterns to linger across North America, central Europe and Asia longer than usual, subjecting millions of people to unyielding heat waves, droughts or relentless storms. Plankton are increasing throughout the southern Arctic Ocean, which may disrupt food chains that support commercial fisheries. And the massive ice melt is adding to an enormous blob of freshwater south of Greenland that may be slowing the Gulf Stream, which could significantly change weather patterns for continents on both sides of the Atlantic Ocean….

      In only 40 years the extent of ice across the Arctic Ocean in summer has shrunk by half. Yes, half. The volume of sea ice, year-round, is way down, too—about a quarter of what it was in the early 1980s. Until recently, scientists had thought it would take until at least the middle of this century to reach these extremes.

      Summer sea ice is vanishing quickly because of feedbacks—vicious cycles that can amplify a small change. For example, when a bit of extra heat melts bright-white ice, more of the dark ocean surface is exposed, which reflects less of the sun’s energy back to space. That absorbed heat then warms the area further, which melts even more ice, leading to yet more warming….
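[The amplifying loop described above can be sketched with a toy linear-feedback model; the symbols and the sample feedback fraction are my illustration, not the article’s:]

```latex
% If an initial warming \Delta T_0 is returned as additional warming
% at a fraction f per cycle (0 <= f < 1), the total warming is the
% geometric series
\Delta T \;=\; \Delta T_0 \,(1 + f + f^2 + \cdots)
        \;=\; \frac{\Delta T_0}{1 - f}.
% For example, f = 0.5 doubles the direct warming; as f approaches 1,
% the amplification grows without bound.
```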

      Permafrost—soil that usually remains frozen year-round—has been thawing. Buildings constructed atop permafrost are collapsing, trees are toppling and roads are buckling. In addition to disrupting daily life for local residents, thawing soils also can release large quantities of heat-trapping gases into the atmosphere. When the organic matter that has been locked in permafrost for thousands of years thaws, bacteria break it down into carbon dioxide (if oxygen is present) or methane (if it is not). Arctic permafrost contains about twice as much carbon as the atmosphere holds now, so widespread thaw could greatly exacerbate global warming—which would lead to even faster thaw….

      Are these impacts still avoidable? Yes and no. Because the climate’s response lags behind the increases in greenhouse gas concentrations and because carbon dioxide has a very long lifetime in the atmosphere, future change is already baked into the system. But the magnitude and pace can be reduced if society moves quickly to slow emissions and if methods can be developed to extract carbon from the atmosphere in large quantities. Progress on both these fronts is rapid, though likely too little, too late, to preserve the earth and the Arctic as we have known them. Prepare for the unexpected.

Good Gun Policy Needs Research

[These excerpts are from an article by Alan I. Leshner and Victor J. Dzau in the March 16, 2018, issue of Science.]

      The tragic shooting at a school in Parkland, Florida, last month triggered another round of proposals from local, state, and federal policy-makers about controlling firearm-related violence without violating broad interpretations of the rights to keep and bear arms provided by the U.S. Constitution. Unfortunately, there is only very sparse scientific evidence that can help figure out which policies will be effective. Earlier this month, the RAND Corporation released a comprehensive analysis on gun policy in the United States, and among its conclusions is that too few policies and outcomes have been the subject of rigorous scientific investigation. Even the seemingly popular view that violent crime would be reduced by laws prohibiting the purchase or possession of guns by individuals with mental illness was deemed to have only moderate supporting evidence. If the nation is serious about getting firearm-related violence under control, it must rise above its aversion to providing financial support for firearm-related research, and the scientific community will have to expeditiously carry out the needed research.

      There used to be more federally funded research on firearm-related violence than there is now. Although its program was small relative to other public health issues, the U.S. Centers for Disease Control and Prevention (CDC) did support research on firearm violence through its National Center for Injury Prevention and Control. However, the 1996 “Dickey Amendment” prohibited the CDC from funding activities that promoted or advocated for gun control. Although arguably some research might still have been considered acceptable, the amendment was interpreted as an outright prohibition of CDC support for any gun violence research. In 2011, Congress enacted similar restrictions affecting the Department of Health and Human Services, resulting in a dearth of scientific activity on any aspect of the availability and possession of firearms and the violence that might be related to them….

      It’s time to stop the polarized “debates” that lack a science base and turn our energies toward constructive, informed examination. The IOM-NRC report has spelled out a research path that calls for a closer examination of the characteristics of firearm-related violence; the risk and protective factors (like growing up in violence-prone environments) that increase the probability of firearm violence; the effectiveness of diverse violence prevention and other interventions; and the impact of various gun safety technologies. And the RAND analysis calls for research on the effects of specific firearm policies, such as whether background checks that investigate all types of mental health histories do reduce gun injuries, suicides, and homicides and whether raising the minimum age for purchasing firearms (to 21 years old) reduces firearm suicides among youth.

      Without science to drive firearm policy development and implementation, we risk inventing policies based on personal ideology or intuition. If we are serious that gun violence is a major public threat, then let's rise to the moment and take the next science-based steps.

Who Holds the Power?

[These excerpts are from a book review by Sean P. Cornelius in the March 9, 2018, issue of Science.]

      What was the cause of Donald Trump’s stunning victory over Hillary Clinton in the 2016 U.S. presidential election? Was it the peculiarities of the electoral college? Voter resistance to three-term rule by a single party? Anxiety about illegal immigration?

      As Niall Ferguson explains in The Square and the Tower, the answer lies largely in one word: networks. Specifically, without the cyber infrastructure that facilitated Russian interference, the “alt-right” networks that churned out memes and “fake news,” and the social media that gave them wing, history may have turned out very differently….

      The book’s enigmatic title evokes the heart of the archetypical medieval town—the high tower of the state looming over the noisy public square below. It’s an apt metaphor for Ferguson's central point: Networks have always been with us, and their interaction with hierarchies has catalyzed some of the most momentous events in history.

      Effective networks can topple hierarchies, as shown in Luther's Reformation against the Catholic church. But under the right circumstances, the tower can cast its shadow over the square anew. Look no further than the age of empire and colonialism that lasted from Napoleon’s defeat to the First World War.

      The study of networks can be traced to the work of 18th-century mathematician Leonhard Euler. It was Euler who used the mathematical language of graphs to solve a puzzle that vexed the citizens of Konigsberg: whether it was possible to walk through the town crossing each of its seven bridges exactly once. (No.)….
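[Euler’s negative answer can be verified mechanically: a connected multigraph admits a walk crossing every edge exactly once only if it has zero or two vertices of odd degree, and all four Königsberg land masses have odd degree. The sketch below is my illustration; the vertex labels are arbitrary.]

```python
from collections import Counter

def has_euler_walk(edges):
    """In a connected multigraph, a walk that uses each edge exactly
    once exists iff the number of odd-degree vertices is 0 or 2."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# The seven bridges of Konigsberg: north bank (N), south bank (S),
# Kneiphof island (I), and the eastern land mass (E).
bridges = [("N", "I"), ("N", "I"), ("S", "I"), ("S", "I"),
           ("N", "E"), ("S", "E"), ("I", "E")]
print(has_euler_walk(bridges))  # all four vertices have odd degree -> False
```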

      The Square and the Tower offers an enthralling “reboot” of history from a novel perspective, spanning antiquity to the present day. Ferguson, at once insightful and droll, builds his case meticulously. And, like the best historians, he always pauses to learn from the past and anticipate the future. If only for this reason, the book is well worth a read.

      After all, we live in a time when networks appear ascendant. Smartphone usage has penetrated deep into the developing world, Twitter has galvanized revolutions in Tunisia and Egypt, and cryptocurrency has (at times) rivaled Fortune 500 companies in market capitalization. Surely this all signifies the dawn of a new era, the inexorable, final triumph of networks over the ossified hierarchies of the past? If history is any guide, don’t count on it.

Slow Coolant Phaseout Could Worsen Warming

[These excerpts are from an article by April Reese in the March 9, 2018, issue of Science.]

      In the summer of 2016, temperatures in Phalodi, an old caravan town on a dry plain in northwestern India, reached a blistering 51°C—a record high during a heat wave that claimed more than 1600 lives across the country. Wider access to air conditioning (AC) could have prevented many deaths—but only 8% of India’s 249 million households have AC….As the nation’s economy booms, that figure could rise to 50% by 2050….And that presents a dilemma: As India expands access to a life-saving technology, it must comply with international mandates—the most recent imposed just last fall—to eliminate coolants that harm stratospheric ozone or warm the atmosphere.

      “Growing populations and economic development are exponentially increasing the demand for refrigeration and air conditioning,” says Helena Molin Valdes, head of the United Nations’s (UN’s) Climate & Clean Air Coalition Secretariat in Paris. “If we continue down this path,” she says, “we will put great pressure on the climate system.” But a slow start to ridding appliances of the most damaging compounds, hydrofluorocarbons (HFCs), suggests that the pressure will continue to build. HFCs are now “the fastest-growing [source of greenhouse gas] emissions in every country on Earth,” Molin Valdes says.

      HFCs, already widely used in the United States and other developed countries, are up-and-coming replacements for hydrochlorofluorocarbons (HCFCs) found today in most AC units and refrigerators in India and other developing nations. HCFCs are themselves replacements for chlorofluorocarbons (CFCs), ozone-destroying chemicals banned under the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer. But HCFCs are potent greenhouse gases, as well as a threat to ozone, and they are now being phased out under a 2007 amendment to the protocol. Developed countries are to ditch them by 2020; developing countries have until 2030.

      To meet those deadlines, manufacturers have turned to HFCs, which do not destroy ozone. But they are a serious climate threat. The global warming potency of HFC-134a, commonly used in vehicle AC units, is 1300 times that of carbon dioxide. Clamping down on HFCs, a 2014 analysis found, could avoid a full 0.5°C of future warming.
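
The “1300 times” figure is a global warming potential (GWP): over a 100-year horizon, a kilogram of HFC-134a traps roughly as much heat as 1300 kilograms of CO2. A quick illustrative calculation (the half-kilogram refrigerant charge is my assumption, not a figure from the article):

```python
# CO2-equivalent emissions are the mass of the gas times its global warming
# potential (GWP). The article gives HFC-134a a GWP of 1300.
GWP_HFC_134A = 1300

def co2_equivalent_kg(mass_kg, gwp):
    """Convert a mass of greenhouse gas to its CO2-equivalent mass."""
    return mass_kg * gwp

# A car AC charge is on the order of 0.5 kg of refrigerant (assumed figure).
# Venting one such charge is equivalent to emitting 650 kg of CO2.
print(co2_equivalent_kg(0.5, GWP_HFC_134A))  # 650.0
```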

      As with the HCFC phaseout, developed countries agreed to make the first move: They must begin abandoning the production and consumption of HFCs next year and achieve an 85% reduction by 2036….

      Some climate experts are more hopeful, pointing out that developing countries have an opportunity to bypass HFCs altogether. “The alternative when developed countries phased out HCFCs was HFCs. But developing countries are in a different position: They’re at the beginning of phasing out HCFCs and can leap directly past HFCs” to benign alternatives, says Nathan Borgford-Parnell, regional assessment initiative coordinator for the UN’s Climate & Clean Air Coalition.

      India is crafting a National Cooling Action Plan that aims to do just that. It will include better city planning and building design, and it will embrace novel coolants, says Stephen Andersen of the Institute for Governance & Sustainable Development in Washington, D.C., who helped develop the plan.

      Meanwhile, six AC manufacturers in India have already begun “leapfrogging” to hydrocarbon-based coolants such as R-290— refrigerant-grade propane—that have lower warming potential….

Health Security’s Blind Spot

[These excerpts are from an article by Seth Berkley in the March 9, 2018, issue of Science.]

      The severity of this year's influenza virus is a reminder of the daunting task facing the global health community as it struggles to prevent infectious diseases from sparking deadly epidemics. Today, yellow fever and cholera continue to spread in Africa, while Brazil is in the midst of a major yellow fever outbreak. It was only recently that Zika virus and Ebola virus epidemics were in the headlines. The world needs to harness every resource and tool in the battle to catch outbreaks before they catch us. Prevention is always the first line of defense, and nations must maintain vigilant surveillance—and yet, effective and affordable, quick and definitive diagnostics are absent in the countries where they are most needed. This represents one of our most serious global health security blind spots.

      During the 2014 Ebola epidemic in West Africa, the first cases were initially misdiagnosed as cholera, and then later as Lassa fever on the basis of clinical symptoms. It took nearly 3 months before blood samples sent to Europe finally identified the disease as Ebola, during which time it was allowed to spread. Similarly, in Nigeria, a lack of rapid diagnostics is making it difficult to get ahead of the current yellow fever outbreak with targeted vaccination. Throughout 2016 and the first 8 months of 2017, Nigerian laboratories were unable to carry out tests on almost all suspected cases of yellow fever, owing to a shortage of chemicals needed for those diagnostics. When these reagents eventually became available last fall, yellow fever had spread to multiple states. As of last month, there were more than 350 suspected yellow fever cases across 16 states and 45 deaths. The world’s poorest countries simply cannot equip and maintain their limited laboratory facilities.

      But the problem is not just how well-stocked laboratories are; it’s also how quickly and reliably they can respond. For yellow fever, whenever lab tests are positive or inconclusive in Africa, samples are sent to a Regional Reference Laboratory for confirmation. For the whole of Africa there is just one such facility in Dakar, Senegal. Even under the best conditions, these lab tests are expensive and take at least a month. What’s more, about 40% of samples found to be positive by Nigerian national laboratories have tested negative in Senegal, creating uncertainty about the reliability of the test….

      Ultimately, to achieve sustainable global epidemic preparedness, we need to stimulate the development of cutting-edge diagnostic technologies—both for laboratories and for use in the field in remote locations—and make them available and affordable in low-income countries….Early detection through reliable, available, and efficient testing is essential to stopping outbreaks before they spread. With many diseases presenting similar first symptoms, it’s all too easy to get a diagnosis wrong and potentially miss an outbreak. And given the ease and speed at which pathogens can now travel in the modern urban-dense global village, any delay in diagnosis will inevitably and increasingly be measured in lives lost.

Fever Dilemma

[This excerpt is from an article by Gretchen Vogel in the March 9, 2018, issue of Science.]

      The toddler on her mother's lap is listless, her eyes dull. She has a fever, little appetite, and a cough. Her journey to the health clinic took an hour by bush taxi, and she had to wait two more hours to be examined. When it's finally her turn, the nurse practitioner pricks her finger and blots a drop of blood onto a rapid diagnostic test (RDT) for malaria. In 15 minutes the answer is clear: The child has malaria. She receives antimalarial drugs, which will most likely vanquish the parasites from her bloodstream within days, and she is sent home to recover.

      If the test is negative, however, things get complicated. If malaria isn't making her sick, what is? Is it pneumonia, typhoid, or Lassa fever? Meningitis? Or more than one infection at the same time? If she has bacterial meningitis, the right antibiotic could save her life. If she has Lassa fever, antibiotics won’t help.

      Until recently, nearly every child with a temperature above 38.5°C was treated for malaria in regions where the disease is endemic. It was one of the most common and deadliest causes of fever, and there was no easy way to rule it out: A definitive diagnosis required a microscope and a skilled technician—unavailable in many places. To be safe, health workers were trained to treat most fevers with a dose of antimalarial medicine. Public health campaigns helped spread the word: If your child has a fever, get them treated for malaria!

      In the past decade, malaria RDTs—which use antibodies to detect the parasite’s proteins—have transformed the landscape. The tests help reduce unnecessary prescriptions for malaria medicines, but they have exposed a new problem: the previously hidden prevalence of “negative syndrome”—feverish kids who don’t have malaria. Even in places with the highest rates of malaria, only about half of fevers are actually due to the disease. In many places, that figure is 10% or less. In 2014, the World Health Organization (WHO) estimated that 142 million suspected malaria cases tested negative worldwide.
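
The arithmetic behind the “negative syndrome” is simple: where malaria causes only a small share of fevers, most tests must come back negative even when the test itself is excellent. A sketch (the 95% sensitivity and specificity are assumptions for illustration; the 10% share of fevers due to malaria is from the article):

```python
def negative_fraction(prevalence, sensitivity, specificity):
    """Fraction of all tested patients whose rapid test comes back negative."""
    false_negatives = prevalence * (1 - sensitivity)       # sick, test missed
    true_negatives = (1 - prevalence) * specificity        # not sick, test right
    return false_negatives + true_negatives

# Article: in many places only ~10% of fevers are actually due to malaria.
# Sensitivity and specificity below are illustrative assumptions.
print(negative_fraction(prevalence=0.10, sensitivity=0.95, specificity=0.95))
# ~0.86: roughly six in seven feverish patients are left without a diagnosis
```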

      Negative test results pose a dilemma for health care workers, who in remote areas may be community volunteers with minimal training. When their one diagnostic test comes up negative, they are left empty-handed, with nothing to offer except some advice: Return if the child gets sicker….

Materials’ Quantum Leap

[This brief article by David Rotman is in the March/April 2018 issue of Technology Review.]

      The prospect of powerful new quantum computers comes with a puzzle. They’ll be capable of feats of computation inconceivable with today’s machines, but we haven't yet figured out what we might do with those powers.

      One likely and enticing possibility: precisely designing molecules.

      Chemists are already dreaming of new proteins for far more effective drugs, novel electrolytes for better batteries, compounds that could turn sunlight directly into a liquid fuel, and much more efficient solar cells.

      We don’t have these things because molecules are ridiculously hard to model on a classical computer. Try simulating the behavior of the electrons in even a relatively simple molecule and you run into complexities far beyond the capabilities of today’s computers.

      But it’s a natural problem for quantum computers, which instead of digital bits representing 1s and 0s use “qubits” that are themselves quantum systems. Recently, IBM researchers used a quantum computer with seven qubits to model a small molecule made of three atoms.

      It should become possible to accurately simulate far larger and more interesting molecules as scientists build machines with more qubits and, just as important, better quantum algorithms.

Genetic Fortune-Telling

[This brief article by Antonio Regalado is in the March/April 2018 issue of Technology Review.]

      One day, babies will get DNA report cards at birth. These reports will offer predictions about their chances of suffering a heart attack or cancer, of getting hooked on tobacco, and of being smarter than average.

      The science making these report cards possible has suddenly arrived, thanks to huge genetic studies—some involving more than a million people.

      It turns out that most common diseases and many behaviors and traits, including intelligence, are a result of not one or a few genes but many acting in concert. Using the data from large ongoing genetic studies, scientists are creating what they call “polygenic risk scores.”
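
A polygenic risk score is, at bottom, a weighted sum: for each of many variants, the number of risk alleles a person carries (0, 1, or 2) times that variant’s effect size as estimated in a large genetic study. A toy sketch with made-up numbers (real scores draw on thousands to millions of variants):

```python
def polygenic_score(allele_counts, effect_sizes):
    """Weighted sum of risk-allele counts; higher means higher estimated risk."""
    assert len(allele_counts) == len(effect_sizes)
    return sum(count * effect for count, effect in zip(allele_counts, effect_sizes))

effect_sizes = [0.12, -0.05, 0.30, 0.08]   # hypothetical per-allele effects
genotype = [1, 2, 0, 2]                    # risk-allele counts at four variants

print(polygenic_score(genotype, effect_sizes))  # ~0.18
```

The score only ranks people by estimated risk; turning it into a probability requires calibration against a reference population, which is one reason these tests offer probabilities rather than diagnoses.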

      Though the new DNA tests offer probabilities, not diagnoses, they could greatly benefit medicine. For example, if women at high risk for breast cancer got more mammograms and those at low risk got fewer, those exams might catch more real cancers and set off fewer false alarms.

      Pharmaceutical companies can also use the scores in clinical trials of preventive drugs for such illnesses as Alzheimer’s or heart disease. By picking volunteers who are more likely to get sick, they can more accurately test how well the drugs work.

      The trouble is, the predictions are far from perfect. Who wants to know they might develop Alzheimer’s? What if someone with a low risk score for cancer puts off being screened, and then develops cancer anyway?

      Polygenic scores are also controversial because they can predict any trait, not only diseases. For instance, they can now forecast about 10 percent of a person’s performance on IQ tests. As the scores improve, it’s likely that DNA IQ predictions will become routinely available. But how will parents and educators use that information?

      To behavioral geneticist Eric Turkheimer, the chance that genetic data will be used for both good and bad is what makes the new technology “simultaneously exciting and alarming.”

Zero Carbon Natural Gas

[These excerpts are from an article by James Temple in the March/April 2018 issue of Technology Review.]

      The world is probably stuck with natural gas as one of our primary sources of electricity for the foreseeable future. Cheap and readily available, it now accounts for more than 30 percent of US electricity and 22 percent of world electricity. And although it's cleaner than coal, it’s still a massive source of carbon emissions.

      A pilot power plant just outside Houston, in the heart of the US petroleum and refining industry, is testing a technology that could make clean energy from natural gas a reality. The company behind the 50-megawatt project, Net Power, believes it can generate power at least as cheaply as standard natural-gas plants and capture essentially all the carbon dioxide released in the process.

      If so, it would mean the world has a way to produce carbon-free energy from a fossil fuel at a reasonable cost. Such natural-gas plants could be cranked up and down on demand, avoiding the high capital costs of nuclear power and sidestepping the unsteady supply that renewables generally provide….

      A key part of pushing down the costs depends on selling that carbon dioxide. Today the main use is in helping to extract oil from petroleum wells. That’s a limited market, and not a particularly green one. Eventually, however, Net Power hopes to see growing demand for carbon dioxide in cement manufacturing and in making plastics and other carbon-based materials.

      Net Power’s technology won’t solve all the problems with natural gas, particularly on the extraction side. But as long as we’re using natural gas, we might as well use it as cleanly as possible. Of all the clean-energy technologies in development, Net Power’s is one of the furthest along in promising more than a marginal advance in cutting carbon emissions.

A Contraceptive Gel for Men Is about to Go on Trial

[These excerpts are from an article by Emily Muller in the March/April 2018 issue of Technology Review.]

      After more than a decade of work, government researchers in the US are ready to test an unusual birth control method for men—a topical gel that could prevent the production of sperm.

      And no, gentlemen, you don’t rub it on your genitals.

      The clinical trial, which begins in April and will run for about four years, will be the largest effort in the US to test a hormonal form of birth control for men.

      Currently, the most effective options men have for birth control are condoms or a vasectomy….

      The new gel contains two synthetic hormones, testosterone and a form of progestin. Progestin blocks the testes from making enough testosterone for normal sperm production. The replacement testosterone is needed to counteract the hormone imbalances the progestin causes but won’t make the body produce sperm.

      More than 400 couples will participate in the study, which will take place at sites in the US, the UK, Italy, Sweden, Chile, and Kenya. Men in the trial will take home a pump bottle of the gel and rub about half a teaspoon of it on their upper arms and shoulders every day. The gel dries within a minute….

      The gel can suppress sperm levels for about 72 hours, so if men forget a dose, “there is a bit of forgiveness….”

      Men will use the gel for at least four months while their partners also use some form of female contraception. Researchers will monitor the men's sperm levels, which need to drop to less than one million per milliliter to effectively prevent pregnancy…Once the sperm count is low enough, the women will go off their birth control. The couples will then use the contraceptive gel as their only form of daily birth control for a year….

      Still, the question is: will men use it...?

      Men’s attitudes on their role in contraception vary by country, but a 2010 survey indicated that at least 25 percent of men worldwide would consider using a hormonal contraceptive.

Seeking Resilience in Marine Ecosystems

[These excerpts are from an article by Emily S. Darling and Isabelle M. Cote in the March 2, 2018, issue of Science.]

      Resilience is a popular narrative for conservation and provides an opportunity to communicate optimism that ecosystems can recover and rebound from disturbances. A resilience lens also reinforces the need for continued conservation investments, even in degraded ecosystems. It is probably for these reasons that resilience has become a conceptual cornerstone in the management of tropical coral reefs, which are one of the ecosystems most vulnerable to climate change.

      The term “resilience” captures two dynamic processes: the ability of ecosystems to resist and absorb disturbance, and their ability to recover. Recent observations suggest that coral reef recovery is increasingly unlikely. The time windows for reefs to recover from consecutive mass bleaching events have shrunk from 25 to 30 years a few decades ago to just 6 years—far shorter than the 10 to 15 years that even the fastest-growing corals need to bounce back from catastrophic mortality. There has been some recovery from recent bleaching events on reefs that are isolated or deeper and more structurally complex. Understanding more about how to catalyze such recovery processes is important. However, if climate change continues unabated, resilience may come not from the ability of coral reefs to recover but from their ability to resist.

      The idea of protecting resistant species or resistant areas is not new, but for various reasons, it is not often put into practice. To increase ecosystem resistance to climate change, there are two options: to increase intrinsic resistance, provided by traits that allow species to cope with a changing climate, or to increase extrinsic resistance, provided by locations that are less vulnerable to climate disturbances.

      Individuals or species that survive extreme climate events can have traits that underpin a general ability to cope or adapt to new environmental conditions. For corals, resistant traits include tolerance to warmer and acidified waters, salinity fluctuations, herbicides, diseases, and storms. These traits might be associated with aspects of the coral microbiome, or the ability of corals to draw on energy reserves or flexible feeding strategies. Natural populations of “super corals” that are tolerant of stressful conditions might arise after repeated bleaching events or in more variable thermal environments, such as reefs exposed to tidal heat pulses….

      Whether marine ecosystems resist, recover, restructure, or vanish hinges on how extreme future climate change is. It is almost certain that most of today’s coral reefs will be transformed beyond recognition in the coming decades. Shifts in coral communities toward smaller, weedier species and dominance by other groups, such as algae and sponges, will alter ecosystem functioning and reduce the services that coral reefs provide to society.

      These ecological shifts will force millions of people to adapt and change how they use and depend on the fisheries, tourism, and coastal protection provided by coral reefs. The political will necessary to improve the resilience of coral reefs or marine ecosystems might or might not materialize in time. Regardless, any fight for the remaining “reefs of hope” can, and must, occur alongside improving the resilience of people and communities to help dampen the coming climate shocks.

Energizing Global Environmental Cooperation

[These excerpts are from an article by Michael Blanding in the Winter 2018 issue of Spectrum.]

      China’s rapid economic growth during the 2000s was largely powered by coal, which was causing severe local air pollution and contributing to the global rise in climate-warming greenhouse gases. She began to wonder: “How could we innovate in a way that would address these two interlinked challenges at the same time?”

      …Karplus’s team developed a detailed model of the country’s economy and energy system that projected the potential long-term impact of different policy initiatives. The team’s results have shown the value of implementing national climate targets by pricing CO2, a system the Chinese government began piloting in 2013.

      At the time, the US and China were in a major stalemate over which nation would lower its emissions first. While the US claimed its own actions would be futile without addressing China’s higher and growing emissions, China maintained that developed nations like the US should take the lead in cutting carbon. Key to arguments on both sides were projections that China’s fossil energy usage would climb with no end in sight. In contrast, results from the MIT-Tsinghua model, shared with policy makers in both nations, suggested China’s energy-related CO2 emissions could peak by 2030 without undermining its economic development.

      The US and China made a historic agreement in 2014 that both sides would limit climate-related emissions—setting the stage for the larger global agreement in Paris in 2015….

Framing Uncertainty

[These excerpts are from an article by Steve Nadis in the Winter 2018 issue of Spectrum.]

      The barrage of hurricanes that in 2017 pummeled the Caribbean islands, United States, and even Ireland—fueled in part by unusually balmy Atlantic Ocean temperatures—provided a glimpse of consequences that may accompany a warming planet. If left unchecked, climate change will act as a “threat-multiplier,” increasing the probability and intensity of such extreme weather events, according to Kerry Emanuel….The situation is exacerbated, Emanuel noted in a Washington Post editorial last fall, by policies that have enabled the number of people living in vulnerable coastal zones to triple, worldwide, since 1970.

      How can we get a handle on both the natural and manmade facets of potential climate-related disasters? Fortunately, researchers…have, for nearly a quarter of a century, been developing a comprehensive tool for dissecting the full range of impacts of climate on people and people on climate….

      “Projecting climate change into the future requires an understanding of both the natural environment and of how human development occurs,” explains Ronald Prinn….

      The Joint Program’s 2016 Outlook report, for example, showed that even if all nations met their pledges to the 2015 Paris climate accord, the Earth's average temperature rise by 2100 would still exceed the established goal of 2 degrees Celsius or less. The authors then presented a set of emissions scenarios that could satisfy the 2 degrees C target, though complicated tradeoffs would arise. Increased energy production from the wind and sun means more land devoted to turbines and solar panels. An enhanced reliance on biomass energy could similarly take up land and water that might otherwise have been reserved for growing crops. Obtaining cooling water for power plants also gets harder as river temperatures rise.

      “It’s becoming ever more apparent that things we used to study separately are all interconnected,” says Prinn. “Meanwhile, environmental changes are occurring far faster than expected, which is another way of saying the need for the IGSM is now greater than ever.”

Squirrels with a Rainy Day Fund

[This brief article by Mitch Leslie is in the March 2, 2018, issue of Science.]

      Scurrying around the South Dakota prairie, 13-lined ground squirrels mark the approach of winter by bingeing. By the time a squirrel holes up to hibernate, its weight will have soared by about 40%, thanks to extra fat that will tide the creature over until spring.

      During droughts, migrations, bleak winters, and other challenges, organisms often face times when resources are scarce. To get by, the ground squirrel, like many other creatures, stockpiles resources to use later. It can gain more than 2% of its body weight in a single day as it gorges on seeds, grasshoppers, and other delicacies.

      But the tactic has downsides. A roly-poly rodent is easy prey for a hawk or coyote. The rainy day fund can also run out prematurely. So once a squirrel is nice and tubby, it enters hibernation, slashing its energy expenditure by 90%. Its body temperature drops to just above freezing and its heart rate falls to as low as 5 beats per minute, down from the usual 350 to 400.

      Packing on the fat requires metabolic and behavioral adjustments. But somehow, the squirrel dodges the health problems that plague obese people. Although it develops some of the metabolic defects of type 2 diabetes, the animal isn’t sick. And by spring, it is lean and spry and ready to begin the cycle again.

Resilience by Regeneration

[This brief article by Elizabeth Pennisi is in the March 2, 2018, issue of Science.]

      Humans should envy the axolotl. Our powers of regeneration are limited. Broken bones knit, wounds heal, and large parts of the liver can regenerate, but that’s about it. But the axolotl—a large salamander also called the Mexican walking fish because it looks like a 20-centimeter eel with stumpy legs—can replace an entire missing limb or even its tail, which means regrowing the spinal cord, backbone, and muscles. About 30 research teams are probing how these salamanders do it. In the axolotl, they’ve found, various tissues work together to detect limb loss and coordinate regrowth. In the process, the animals reactivate the same genetic circuits that guided the formation of those structures during embryonic development, causing generalist stem cells to specialize.

      Axolotls are only one of several regenerators in the animal kingdom. Flatworms called planarians are even more resilient—able to surge back after losing 90% of their bodies. One simple fragment of these 2-centimeter-long aquatic worms can rejuvenate the brain, skin, gut, and all the other functional organs. Again, stem cells are key, and a special set of genes active in muscles tells those stem cells what to do, activating growth and specialization genes in the right cells at the right time. So the planarian can rebuild itself almost from scratch, whereas the axolotl can rebuild only if the main body axis is intact. This year, researchers took another step toward detailing the molecules underlying regeneration by sequencing the genomes of those two species. The ultimate hope: One day, we’ll be able to coax injured humans to execute the same repairs.

Fish that Switch Sex to Survive

[This brief article by Elizabeth Pennisi is in the March 2, 2018, issue of Science.]

      Fish are masters of reproductive resilience. About 450 species switch sexes over their lifetimes to maximize their number of offspring. The fish do so by undergoing hormonal changes that transform their organs from those of one sex to the other. Patterns of sex switching vary by species. Big females produce more eggs than little females, so for some species, such as clownfish, it’s better to be a male early in life, while still runty, and then switch to a female later on. But among males that fight each other for females or territory—such as groupers, sea breams, and porgies—being a too-small male can mean no offspring at all. In that case, it’s better to stay a small female instead.

      Now, this age-old strategy is allowing fish like the sea bream to adapt to a modern challenge that also disrupts the sex balance: overfishing. Fishers favor the biggest catch. Because one sex is usually bigger than the other, the bigger sex risks being fished out. But researchers have found that sea breams—flavorful, reddish fish common in warmer Atlantic Ocean coastal waters—are ready. Removing big males prompts earlier-than-usual sex changes in some females, so the sex balance is preserved. Still, it’s more a short-term strategy than a long-term solution, researchers say. The fish are switching sex at younger ages, so females don’t have a chance to grow big. That trend translates into fewer offspring and a shrinking population. That resilience strategy keeps them reproducing for now—but the fish can’t save themselves all on their own.

Asia’s Hunger for Sand Takes Toll on Ecology

[These excerpts are from an article by Christina Larson in the March 2, 2018, issue of Science.]

      Across Asia, rampant extraction of sand for construction is eroding coastlines and scouring waterways. “For a resource we think is infinite, we are beginning to realize that it’s not,” says Aurora Torres, an ecologist at the German Centre for Integrative Biodiversity Research in Leipzig. “It’s a global concern, but especially acute in Asia, where all trends show that urbanization and the region's big construction boom are going to continue for many years.” And it is taking an environmental toll that scientists are beginning to assess—and environmentalists hope to reduce.

      Already, scientists have linked poorly regulated and often illegal sand removal to declines in seagrasses in Indonesia and in charismatic species such as the Ganges River dolphin and terrapins in India and Malaysia. In eastern China’s Poyang Lake, dredging boats are sucking up tens of millions of tons of sand a year, altering the hydrology of the country’s largest freshwater lake, a way station for migratory birds. Conservation groups are urging governments to crack down. But the political clout of developers means it will be an uphill—and perilous—battle. Last September, for example, two activists with Mother Nature Cambodia who were filming illegal sand dredging off the Cambodian coast were arrested and convicted of “violation of privacy.” They spent several months in jail before being released last month.

      Used to make concrete and glass, sand is an essential ingredient of nearly every modern highway, airport, dam, windowpane, and solar panel. Although desert sand is plentiful, its wind-tumbled particles are too smooth—and therefore not cohesive enough—for construction material. Instead, builders prize sand from quarries, coastlines, and riverbeds. “The very best sand for construction is river sand; it's the right particle size and shape….”

      Between 1994 and 2012, global cement production—a proxy for concrete use—tripled, from 1.37 billion to 3.7 billion tons, driven largely by Asian construction....sand mining—on an industrial scale and by individual operators—“greatly exceeds natural renewal rates” and “is increasing exponentially.”

      ….colleagues explain how sand mining has driven declines of seagrass meadows off of Indonesia. Sediment plumes stirred up by the dredging block sunlight, impeding photosynthesis…The meadows nourish several species, including the dugong, which is in decline….

      Another sand mining victim is the southern river terrapin, a critically endangered turtle in Southeast Asia….

      Also under siege, in Bangladesh and India, is the northern river terrapin. “Sand mining is one of the biggest problems and reasons why they are so endangered today….When the sand banks are gone, the [terrapin] is gone.” Other creatures directly affected by river sand mining, scientists say, are the gharial—a rare crocodile found in northern India—and the Ganges River dolphin.

      Poyang Lake, a key wintering ground on the East Asian-Australasian Flyway, hosts dozens of migratory species, including almost all of the 4000 or so surviving Siberian cranes. But sand dredging campaigns in the middle Yangtze Basin have expanded rapidly since the early 2000s, when such activities were banned on sections of the lower Yangtze….

      “We are not saying we need to stop sand mining altogether. We are saying we need to minimize the impacts,” says Jack Liu, a biologist at Michigan State University in East Lansing who is spearheading an effort to assemble a comprehensive picture of the damage. Construction standards should be raised to extend building longevity, he says, and building materials should be recycled. Those sand grains on the beach may not be innumerable after all.

How to Get Wyoming Wind to California

[These excerpts are from an article by James Temple in the March/April 2018 issue of Technology Review.]

      Several miles south of Rawlins, Wyoming, on a cattle ranch east of the Continental Divide, construction crews have begun laying down roads and pads that could eventually underpin up to 1,000 wind turbines. Once complete, the Chokecherry and Sierra Madre project could generate around 12 million megawatt-hours of electricity annually, making it the nation’s largest wind farm.

      But how do you get that much wind power to where it's actually needed?

      The Denver-based company behind the project hopes to erect a series of steel transmission towers that would stretch a high-voltage direct-current (HVDC) transmission line 730 miles across the American West. It could carry as much as 3,000 megawatts of Wyoming wind power to the electricity markets of California, Nevada, and Arizona. With the right deals in place, the transmission line could deliver solar-generated electricity back as well, balancing Wyoming’s powerful late-afternoon winds with California’s bright daytime sun….

      Transmission isn’t sexy. It’s basic infrastructure: long wires and tall towers. But a growing body of studies concludes that building out a nationwide network of DC transmission lines could help enable renewable sources to supplant the majority of US energy generation, offering perhaps the fastest, cheapest, and most efficient way of slashing greenhouse-gas emissions.

      Developing these transmission lines, however, is incredibly time-consuming and expensive. The TransWest project was first proposed in 2005, but the developers will be lucky to secure their final permits and begin moving dirt at the end of next year.

      There’s no single agency in charge of overseeing or ushering along such projects, leaving companies to navigate a thicket of overlapping federal, state, county, and city jurisdictions—every one of which must sign off for a project to begin. As a result, few such transmission lines ever get built.

      Direct current, in which electric charges constantly flow in a single direction, is an old technology. DC and AC—alternating current—were the subject of one of the world's first technology standards battles, pitting Thomas Edison against his former protégé Nikola Tesla in the “War of the Currents” starting in the 1880s.

      AC won this early war, mainly because, thanks to the development of transformers, its voltage could be cranked up for long-distance transmission and stepped down for homes and businesses.

      But a series of technological improvements have substantially increased the functionality of DC, opening up new ways of designing and interconnecting the electricity grid….

      With direct-current lines, grid operators have more options for energy sources throughout the day, allowing them to tap into, say, cheap wind two states away during times of peak demand instead of turning to nearby but more expensive natural-gas plants for a few hours. The fact that regions can depend on energy from distant states for their peak demand also means they don't have to build as much high-cost generation locally.

      A national direct-current grid could also help lower emissions to as much as 80 percent below 1990 levels within 15 years, all with commercially available technology and without increasing the costs of electricity, according to a study published earlier in Nature Climate Change.

      The researchers produced an idealized transmission network that connected 32 nodes across the nation, linking hydroelectric power in the Pacific Northwest, solar in California, wind energy in the Southwest, and nuclear on the East Coast. Simply put, the system balances out the intermittency of renewable energy sources over long distances, meaning there’s always reliable generation somewhere. Being able to tap into it from any corner of the nation lowers the cost of supplying energy at peak demand, reduces the amount of generation required in any single area, minimizes excess generation, and eliminates the need to develop expensive grid-scale storage systems….

      There are already a handful of DC transmission lines in the US and a growing number of proposals, including the New England Clean Power Link, which would transport 1,000 megawatts of renewable power from Canada into New England. Houston's Clean Line Energy has at least a half-dozen proposals in various stages, including the Plains and Eastern Clean Line connecting western Oklahoma to markets in the Southeast, and the Grain Belt Express Clean Line stretching from Kansas to Indiana.

      But all of these are moving through the approvals process at a dawdling pace. The TransWest developers, who have secured permission along the two-thirds of the line's path that lies on federal land since taking over the project in 2008, are still working to finalize approvals from states and private landowners.

      Most developers and energy policy experts say what’s needed to accelerate these projects is a federal authority with greater power to push them through….

Tech Companies Should Stop Pretending AI Won’t Destroy Jobs

[This excerpt is from an article by Kai-Fu Lee in the March/April 2018 issue of Technology Review.]

      The rise of China as an AI superpower isn’t a big deal just for China. The competition between the US and China has sparked intense advances in AI that will be impossible to stop anywhere. The change will be massive, and not all of it good. Inequality will widen. As my Uber driver in Cambridge has already intuited, AI will displace a large number of jobs, which will cause social discontent. Consider the progress of Google DeepMind’s AlphaGo software, which beat the best human players of the board game Go in early 2016. It was subsequently bested by AlphaGo Zero, introduced in 2017, which learned by playing games against itself and within 40 days was superior to all the earlier versions. Now imagine those improvements transferring to areas like customer service, telemarketing, assembly lines, reception desks, truck driving, and other routine blue-collar and white-collar work. It will soon be obvious that half of our job tasks can be done better at almost no cost by AI and robots. This will be the fastest transition humankind has experienced, and we're not ready for it.

      Not everyone agrees with my view. Some people argue that it will take longer than we think before jobs disappear, since many jobs will be only partially replaced, and companies will try to redeploy those displaced internally. But even if true, that won't stop the inevitable. Others remind us that every technology revolution has created new jobs as it displaced old ones. But it’s dangerous to assume this will be the case again.

      Then there are the symbiotic optimists, who think that AI combined with humans should be better than either one alone. This will be true for certain professions—doctors, lawyers—but most jobs won't fall in that category. Instead they are routine, single-domain jobs where AI excels over the human by a large margin.

      Others think we’ll be saved by a universal basic income. “Take the extra money made by AI and distribute it to the people who lost their jobs,” they say. “This additional income will help people find their new path, and replace other types of social welfare.” But UBI doesn’t address people’s loss of dignity or meet their need to feel useful. It’s just a convenient way for a beneficiary of the AI revolution to sit back and do nothing.

      And finally, there are those who deny that AI has any downside at all—which is the position taken by many of the largest AI companies. It’s unfortunate that AI experts aren’t trying to solve the problem. What’s worse, and unbelievably selfish, is that they actually refuse to acknowledge the problem exists in the first place.

      These changes are coming, and we need to tell the truth and the whole truth. We need to find the jobs that AI can’t do and train people to do them. We need to reinvent education. These will be the best of times and the worst of times. If we act rationally and quickly, we can bask in what’s best rather than wallow in what’s worst.

50 Years Ago: Repeal of Tennessee’s “Monkey Law”

[This excerpt is from an article by Glenn Branch and Ann Reid in the Winter 2017/2018 special edition of Scientific American.]

      Even before the legal defeat of intelligent design in Pennsylvania, evolution’s opponents were already beginning to try yet a different tack: requiring, or more commonly allowing, science teachers to misrepresent evolution as scientifically controversial, often under the rubric of academic freedom. More than 70 such bills have been introduced since 2004, with three enacted, in Mississippi, Louisiana and Tennessee.

      It is unclear to what extent science teachers in those three states take advantage of those laws. But according to a national survey conducted in 2007, about one in eight public high school biology teachers present creationism as scientifically credible in their classrooms, despite the unconstitutionality of the practice. Scientific, educational and legal concerns are often overridden by personal or community attitudes of doubt and denial.

      Indeed, the same survey revealed that six in 10 of the instructors were teaching evolution less than forthrightly—compromising on the accuracy, completeness or rigor of their treatment, often for fear of provoking a creationist backlash. Such fears, sadly, appear to be warranted: more than one in five of the teachers reported experiencing community resistance to their teaching of evolution.

      Fortunately, the treatment of evolution in state science standards is getting better on the whole, which means that textbooks, curricula and, ideally, teachers are following suit. But scientific knowledge and pedagogical know-how are not the only equipment that educators need to teach evolution. They also need the confidence to persist, even in the face of doubt and denial.

      Creationists are as active as ever, with a few even in the bully pulpits afforded by high political office. And the legal rulings that established the obstacles that have so far thwarted attacks on evolution education could be overturned by a reactionary Supreme Court or circumvented by public support of religious schools not subject to constitutional strictures. So the evolution wars are by no means over.

Post-Truth: A Guide for the Perplexed

[These excerpts are from an article by Kathleen Higgins in the Winter 2017/2018 special edition of Scientific American.]

      Post-truth refers to blatant lies being routine across society, and it means that politicians can lie without condemnation. This is different from the cliché that all politicians lie and make promises they have no intention of keeping—this still expects honesty to be the default position. In a post-truth world, this expectation no longer holds.

      This can explain the current political situation in the U.S. and elsewhere. Public tolerance of inaccurate and undefended allegations, non sequiturs in response to hard questions and outright denials of facts is shockingly high. Repetition of talking points passes for political discussion, and serious interest in issues and options is treated as the idiosyncrasy of wonks. The lack of public indignation when political figures claim disbelief in response to growing scientific evidence of the reality of climate change is part of this larger pattern. “Don’t bother me with facts” is no longer a punchline. It has become a political stance. It’s worth remembering that it has not always been this way: the exposure of former U.S. president Richard Nixon’s lies was greeted with outrage….

      When political leaders make no effort to ensure that their “facts” will bear scrutiny, we can only conclude that they take an arrogant view of the public. They take their right to lie as given, perhaps particularly when the lies are transparent. Many among the electorate seem not to register the contempt involved, perhaps because they would like to think that their favored candidate is at least well intentioned and would not deliberately mislead them.

      Much of the public hears what it wants to hear because many people get their news exclusively from sources whose bias they agree with. But contemptuous leaders and voters who are content with hand-waving and entertaining bluster undermine the democratic idea of rule by the people. The irony is that politicians who benefit from post-truth tendencies rely on truth, too, but not because they adhere to it. They depend on most people's good-natured tendency to trust that others are telling the truth, at least the vast majority of the time.

      Scientists and philosophers should be shocked by the idea of post-truth, and they should speak up when scientific findings are ignored by those in power or treated as mere matters of faith. Scientists must keep reminding society of the importance of the social mission of science—to provide the best information possible as the basis for public policy. And they should publicly affirm the intellectual virtues that they so effectively model: critical thinking, sustained inquiry and revision of beliefs on the basis of evidence. Another line from Nietzsche is especially pertinent now: “Three cheers for physics!—and even more for the motive that spurs us toward physics—our honesty!”

Misleading Semantics of Creationism

[This short essay by John Rennie is in the Winter 2017/2018 special edition of Scientific American.]

      “Creation science” is a contradiction in terms. A central tenet of modern science is methodological naturalism—it seeks to explain the universe purely in terms of observed or testable natural mechanisms. Thus, physics describes the atomic nucleus with specific concepts governing matter and energy, and it tests those descriptions experimentally. Physicists introduce new particles, such as quarks, to flesh out their theories only when data show that the previous descriptions cannot adequately explain observed phenomena. The new particles do not have arbitrary properties, moreover—their definitions are tightly constrained, because the new particles must fit within the existing framework of physics.

      In contrast, intelligent-design theorists invoke shadowy entities that conveniently have whatever unconstrained abilities are needed to solve the mystery at hand. Rather than expanding scientific inquiry, such answers shut it down. (How does one disprove the existence of omnipotent intelligences?)

      Intelligent design offers few answers. For instance, when and how did a designing intelligence intervene in life’s history? By creating the first DNA? The first cell? The first human? Was every species designed, or just a few early ones? Proponents of intelligent-design theory frequently decline to be pinned down on these points. They do not even make real attempts to reconcile their disparate ideas about intelligent design. Instead they pursue argument by exclusion—that is, they belittle evolutionary explanations as farfetched or incomplete and then imply that only design-based alternatives remain.

      Logically, this is misleading: even if one naturalistic explanation is flawed, it does not mean that all are. Moreover, it does not make one intelligent-design theory more reasonable than another. Listeners are essentially left to fill in the blanks for themselves, and some will undoubtedly do so by substituting their religious beliefs for scientific ideas.

      Time and again, science has shown that methodological naturalism can push back ignorance, finding increasingly detailed and informative answers to mysteries that once seemed impenetrable: the nature of light, the causes of disease, how the brain works. Evolution is doing the same with the riddle of how the living world took shape. Creationism, by any name, adds nothing of intellectual value to the effort.

Reason (and Science) for Hope

[These excerpts are from a book review by Michael Shermer in the February 23, 2018, issue of Science.]

      For most of us, it is easier to imagine the world going to hell in a handbasket than it is to picture a rosy future. We can readily conjure up incremental improvements such as increased Internet bandwidth, improved automobile navigation systems, or another year added to our average life span. But what really gets our imaginations roiling are images of nuclear Armageddon, robots run amok, and terrorists in trucks mowing down pedestrians. The reason for this asymmetry is an evolved feature of human cognition called the negativity bias, a phenomenon explored in depth by the Harvard psychologist and linguist Steven Pinker in his magisterial new book, Enlightenment Now.

      Pinker begins with the Enlightenment because the scientists and scholars who drove that movement took the methods developed in the Scientific Revolution and applied them to solving problems in all fields of knowledge. “Dare to know” was Immanuel Kant’s oft-quoted one-line summary of the age he helped launch, and with knowledge comes power over nature, starting with the second law of thermodynamics, which Pinker fingers as the cause of our natural-born pessimism.

      In the world in which our ancestors evolved the cognition and emotions that we inherited, entropy dictated that there were more ways for things to go bad than good. Thus, our modern psychology is tuned to a world that was more dangerous than it is today, he argues. Because our ancestors’ survival depended on vigilantly scanning for negative stimuli, good experiences (e.g., a pain-free day) often go unnoticed, as we attempt to respond to the failures that could spell the end of our existence. But instead of interpreting accidents, plagues, famine, and disease as the wrath of angry gods, vengeful demons, or bewitching women like our medieval ancestors did, we enlightened thinkers now know that's just entropy taking its course.

      In 75 charts and graphs and thousands of statistics, Pinker documents how we have systematically applied knowledge to problems in order to propel ourselves to unimaginable levels of progress. Since the Enlightenment, he reveals, more people live longer, healthier, happier, and more meaningful lives filled with enriching works of art, music, literature, science, technology, and medicine. This is not to mention improvements to food, drink, clothes, transportation, and houses, nor the ever-increasing ease of international travel or instant access to much of the world’s knowledge that many of us enjoy today.

      Exceptions are no counter to Pinker’s massive data set. Follow the trend lines, not the headlines, he urges. For example, although military engagements make the news daily, “[w]ar between countries is obsolescent, and war within countries is absent from five-sixths of the world’s surface.” “In most times and places, homicides kill far more people than wars,” he adds, “and homicide rates have been falling as well.”

      We’re not just less likely to fall victim to an intentional death. On the whole, we are safer than ever. “Over the course of the 20th century, Americans became 96 percent less likely to be killed in a car accident, 88 percent less likely to be mowed down on the sidewalk, 99 percent less likely to die in a plane crash, 59 percent less likely to fall to their deaths, 92 percent less likely to die by fire, 90 percent less likely to drown, 92 percent less likely to be asphyxiated, and 95 percent less likely to be killed on the job.”

      Each area of improvement has specific causes that Pinker carefully identifies, but he attributes our overall progress to “Enlightenment humanism,” a secular worldview that values science and reason over superstition and dogma. It is a heroic journey, Pinker concludes with rhetorical flair: “We are born into a pitiless universe, facing steep odds against life-enabling order and in constant jeopardy of falling apart.” Nevertheless, “human nature has also been blessed with resources that open a space for a kind of redemption. We are endowed with the power to combine ideas recursively, to have thoughts about our thoughts. We have an instinct for language, allowing us to share the fruits of our experience and ingenuity. We are deepened with the capacity for sympathy—for pity, imagination, compassion, commiseration.”

      This is our story, not vouchsafed to any one tribe but “to any sentient creature With the power of reason and the urge to persist in its being.” With this fact, there is reason (and science) for hope.


[These excerpts are from an article by Michael Shermer in the March 2018 issue of Scientific American.]

      …Far from lurching backward, Pinker notes, today’s fact-checking ethic “would have served us well in earlier decades when false rumors regularly set off pogroms, riots, lynchings, and wars (including the Spanish-American War in 1898, the escalation of the Vietnam War in 1964, the Iraq invasion of 2003, and many others).” And contrary to our medieval ancestors, he says, “few influential people today believe in werewolves, unicorns, witches, alchemy, astrology, bloodletting, miasmas, animal sacrifice, the divine right of kings, or supernatural omens in rainbows and eclipses.”

      Ours is called the Age of Science for a reason, and that reason is reason itself, which in recent decades has come under fire by cognitive psychologists and behavioral economists who assert that humans are irrational by nature and by postmodernists who aver that reason is a hegemonic weapon of patriarchal oppression. Balderdash! Call it “factiness,” the quality of seeming to be factual when it is not. All such declarations are self-refuting, inasmuch as “if humans were incapable of rationality, we could never have discovered the ways in which they were irrational, because we would have no benchmark of rationality against which to assess human judgment, and no way to carry out the assessment,” Pinker explains. “The human brain is capable of reason, given the right circumstances; the problem is to identify those circumstances and put them more firmly in place.”

      Despite the backfire effect, in which people double down on their core beliefs when confronted with contrary facts to reduce cognitive dissonance, an “affective tipping point” may be reached when the counterevidence is overwhelming and especially when the contrary belief becomes accepted by others in one’s tribe. This process is helped along by “debiasing” programs in which people are introduced to the numerous cognitive biases that plague our species, such as the confirmation bias and the availability heuristic, and the many ways not to argue: appeals to authority, circular reasoning, ad hominem and especially ad Hitlerum. Teaching students to think critically about issues by having them discuss and debate all sides, especially articulating their own and another's position, is essential, as is asking, “What would it take for you to change your mind?” This is an effective thinking tool employed by Portland State University philosopher Peter Boghossian.

      “However long it takes,” Pinker concludes, “we must not let the existence of cognitive and emotional biases or the spasms of irrationality in the political arena discourage us from the Enlightenment ideal of relentlessly pursuing reason and truth.” That’s a fact.

Paleo Diets, GMOs and Food Taboos

[These excerpts are from an article by Michael Shermer in the Winter 2017/2018 special edition of Scientific American.]

      …In its essence, the see-food diet was the first so-called Paleo diet, not today’s popular fad, premised on the false idea that there is a single set of natural foods—and a correct ratio of them—that our Paleolithic ancestors ate. Anthropologists have documented a wide variety of foods consumed by traditional peoples, from the Masai diet of mostly meat, milk and blood to New Guineans’ fare of yams, taro and sago. As for food ratios, according to a 2000 study entitled “Plant-Animal Subsistence Ratios and Macronutrient Energy Estimations in Worldwide Hunter-Gatherer Diets,” published in the American Journal of Clinical Nutrition, the range for carbohydrates is 22 to 40 percent, for protein 19 to 56 percent, and for fat 23 to 58 percent.

      And what constitutes “natural” anyway? Humans have been genetically modifying foods through selective breeding for more than 10,000 years. Were it not for these original genetically modified organisms—and today’s more engineered GMOs designed for resistance to pathogens and herbicides and for better nutrient profiles—the planet could sustain only a tiny fraction of its current population. Golden rice, for example, was modified to enhance vitamin A levels, in part, to help Third World children with nutritional deficiencies that have caused millions to go blind. As for health and safety concerns, according to A Decade of EU-Funded GMO Research, a 2010 report published by the European Commission: “The main conclusion to be drawn from the efforts of more than 130 research projects, covering a period of more than 25 years of research, and involving more than 500 independent research groups, is that biotechnology, and in particular GMOs, are not per se more risky than e.g. conventional plant breeding technologies.”

      …Given the importance of food for survival and flourishing, I suspect GMOs—especially in light of their association with large corporations such as Monsanto that operate on the market-pricing model—feel like an infringement of communal sharing and equality matching. Moreover, the elevation of “natural foods” to near-mythic status, coupled with the taboo many genetic-modification technologies are burdened with—remember when in vitro fertilization was considered unnatural?—makes GMOs feel like a desecration. It need not be so. GMOs are scientifically sound, nutritionally valuable and morally noble in helping humanity during a period of rising population. Until then, eat, drink and be merry.

The “True” Human Diet

[These excerpts are from an article by Peter S. Unger in the Winter 2017/2018 special edition of Scientific American.]

      People have been debating the natural human diet for thousands of years, often framed as a question of the morality of eating other animals. The lion has no choice, but we do. Take the ancient Greek philosopher Pythagoras, for example: “Oh, how wrong it is for flesh to be made from flesh!” The argument hasn’t changed much for ethical vegetarians in 2,500 years, but today we also have Sarah Palin, who wrote in Going Rogue: An American Life, “If God had not intended for us to eat animals, how come He made them out of meat?” Have a look at Genesis 9:3—“Every moving thing that liveth shall be meat for you.”

      While humans don't have the teeth or claws of a mammal evolved to kill and eat other animals, that doesn't mean we aren’t “supposed” to eat meat, though. Our early Homo ancestors invented weapons and cutting tools in lieu of sharp carnivorelike teeth. There is no explanation other than meat eating for the fossil animal bones riddled with stone tool cut marks at fossil sites. It also explains our simple guts, which look little like those evolved to process large quantities of fibrous plant foods.

      But gluten isn’t unnatural either. Despite the pervasive call to cut carbs, there is plenty of evidence that cereal grains were staples, at least for some, long before domestication. People at Ohalo II on the shore of the Sea of Galilee ate wheat and barley during the peak of the last ice age, more than 10,000 years before these grains were domesticated. Paleobotanists have even found starch granules trapped in the tartar on 40,000-year-old Neandertal teeth with the distinctive shapes of barley and other grains and the telltale damage that comes from cooking. There is nothing new about cereal consumption.

      This leads us to the so-called Paleolithic Diet. As a paleoanthropologist I’m often asked for my thoughts about it. I’m not really a fan—I like pizza and French fries and ice cream too much. Nevertheless, diet gurus have built a strong case for discordance between what we eat today and what our ancestors evolved to eat. The idea is that our diets have changed too quickly for our genes to keep up, and the result is said to be “metabolic syndrome,” a cluster of conditions that include elevated blood pressure, high blood sugar level, obesity and abnormal cholesterol levels. It’s a compelling argument. Think about what might happen if you put diesel in an automobile built for regular gasoline. The wrong fuel can wreak havoc on the system, whether you're filling a car or stuffing your face.

      It makes sense, and it’s no surprise that Paleolithic diets remain hugely popular. There are many variants on the general theme, but foods rich in protein and omega-3 fatty acids show up again and again. Grass-fed cow meat and fish are good, and carbohydrates should come from nonstarchy fresh fruits and vegetables. On the other hand, cereal grains, legumes, dairy, potatoes, and highly refined and processed foods are out. The idea is to eat like our Stone Age ancestors—you know, spinach salads with avocado, walnuts, diced turkey, and the like.

      I am not a dietician and cannot speak with authority about the nutritional costs and benefits of Paleolithic diets, but I can comment on their evolutionary underpinnings. From the standpoint of paleoecology, the Paleolithic diet is a myth. Food choice is as much about what is available to be eaten as it is about what a species evolved to eat. And just as fruits ripen, leaves flush and flowers bloom predictably at different times of the year, foods available to our ancestors varied over deep time as the world changed around them from warm and wet to cool and dry and back again. Those changes are what drove our evolution….

      Many paleoanthropologists today believe that increasing climate fluctuation through the Pleistocene sculpted our ancestors—whether their bodies or their wit, or both—for the dietary flexibility that has become a hallmark of humanity. The basic idea is that our ever changing world winnowed out the pickier eaters among us. Nature has made us a versatile species, which is why we can find something to satiate us on nearly all its myriad biospheric buffet tables. It's also why we have been able to change the game, transition from forager to farmer, and really begin to consume our planet.

Are Engineered Foods Evil?

[These excerpts are from an article by David H. Freedman in the Winter 2017/2018 special edition of Scientific American.]

      …The bulk of the science on GM safety points in one direction. Take it from David Zilberman, a U.C. Berkeley agricultural and environmental economist and one of the few researchers considered credible by both agricultural chemical companies and their critics. He argues that the benefits of GM crops greatly outweigh the health risks, which so far remain theoretical. The use of GM crops “has lowered the price of food,” Zilberman says. “It has increased farmer safety by allowing them to use less pesticide. It has raised the output of corn, cotton and soy by 20 to 30 percent, allowing some people to survive who would not have without it. If it were more widely adopted around the world, the price [of food] would go lower, and fewer people would die of hunger.”

      In the future, Zilberman says, those advantages will become all the more significant. The United Nations Food and Agriculture Organization estimates that the world will have to grow 70 percent more food by 2050 just to keep up with population growth. Climate change will make much of the world's arable land more difficult to farm. GM crops, Zilberman says, could produce higher yields, grow in dry and salty land, withstand high and low temperatures, and tolerate insects, disease and herbicides.

      Despite such promise, much of the world has been busy banning, restricting and otherwise shunning GM foods. Nearly all the corn and soybeans grown in the U.S. are genetically modified, but only two GM crops, Monsanto's MON810 maize and BASF's Amflora potato, are accepted in the European Union. Ten E.U. nations have banned MON810, and although BASF withdrew Amflora from the market in 2012, four E.U. nations have taken the trouble to ban that, too. Approval of a few new GM corn strains has been proposed there, but so far it has been repeatedly and soundly voted down. Throughout Asia, including in India and China, governments have yet to approve most GM crops, including an insect-resistant rice that produces higher yields with less pesticide. In Africa, where millions go hungry, several nations have refused to import GM foods in spite of their lower costs (the result of higher yields and a reduced need for water and pesticides). Kenya has banned them altogether amid widespread malnutrition. No country has definite plans to grow Golden Rice, a crop engineered to deliver more vitamin A than spinach (rice normally has no vitamin A), even though vitamin A deficiency causes more than one million deaths annually and half a million cases of irreversible blindness in the developing world.

      Globally, only a tenth of the world's cropland includes GM plants. Four countries—the U.S., Canada, Brazil and Argentina—grow 90 percent of the planet's GM crops. Other Latin American countries are pushing away from the plants. And even in the U.S., voices decrying genetically modified foods are becoming louder. In 2016 the U.S. federal government passed a law requiring labeling of GM ingredients in food products, replacing GM-labeling laws in force or proposed in several dozen states.

      The fear fueling all this activity has a long history. The public has been worried about the safety of GM foods since scientists at the University of Washington developed the first genetically modified tobacco plants in the 1970s. In the mid-1990s, when the first GM crops reached the market, Greenpeace, the Sierra Club, Ralph Nader, Prince Charles and a number of celebrity chefs took highly visible stands against them. Consumers in Europe became particularly alarmed: a survey conducted in 1997, for example, found that 69 percent of the Austrian public saw serious risks in GM foods, compared with only 14 percent of Americans.

      In Europe, skepticism about GM foods has long been bundled with other concerns, such as a resentment of American agribusiness. Whatever it is based on, however, the European attitude reverberates across the world, influencing policy in countries where GM crops could have tremendous benefits. “In Africa, they don’t care what us savages in America are doing,” Zilberman says. “They look to Europe and see countries there rejecting GM, so they don't use it.” Forces fighting genetic modification in Europe have rallied support for “the precautionary principle,” which holds that given the kind of catastrophe that would emerge from loosing a toxic, invasive GM crop on the world, GM efforts should be shut down until the technology is proved absolutely safe.

      But as medical researchers know, nothing can really be “proved safe.” One can only fail to turn up significant risk after trying hard to find it—as is the case with GM crops….

      The human race has been selectively breeding crops, thus altering plants’ genomes, for millennia. Ordinary wheat has long been strictly a human-engineered plant; it could not exist outside of farms, because its seeds do not scatter. For some 60 years scientists have been using “mutagenic” techniques to scramble the DNA of plants with radiation and chemicals, creating strains of wheat, rice, peanuts and pears that have become agricultural mainstays. The practice has inspired little objection from scientists or the public and has caused no known health problems.

      The difference is that selective breeding or mutagenic techniques tend to result in large swaths of genes being swapped or altered. GM technology, in contrast, enables scientists to insert into a plant's genome a single gene (or a few of them) from another species of plant or even from a bacterium, virus or animal. Supporters argue that this precision makes the technology much less likely to produce surprises. Most plant molecular biologists also say that in the highly unlikely case that an unexpected health threat emerged from a new GM plant, scientists would quickly identify and eliminate it. “We know where the gene goes and can measure the activity of every single gene around it,” Goldberg says. “We can show exactly which changes occur and which don’t.”

      And although it might seem creepy to add virus DNA to a plant, doing so is, in fact, no big deal, proponents say. Viruses have been inserting their DNA into the genomes of crops, as well as humans and all other organisms, for millions of years. They often deliver the genes of other species while they are at it, which is why our own genome is loaded with genetic sequences that originated in viruses and nonhuman species. “When GM critics say that genes don’t cross the species barrier in nature, that’s just simple ignorance,” says Alan McHughen, a plant molecular geneticist at U.C. Riverside. Pea aphids contain fungi genes. Triticale is a century-plus-old hybrid of wheat and rye found in some flours and breakfast cereals. Wheat itself, for that matter, is a cross-species hybrid. “Mother Nature does it all the time, and so do conventional plant breeders,” McHughen says.

      Could eating plants with altered genes allow new DNA to work its way into our own? It is possible but hugely improbable. Scientists have never found genetic material that could survive a trip through the human gut and make it into cells. Besides, we are routinely exposed to—and even consume—the viruses and bacteria whose genes end up in GM foods. The bacterium Bacillus thuringiensis, for example, which produces proteins fatal to insects, is sometimes enlisted as a natural pesticide in organic farming. “We’ve been eating this stuff for thousands of years,” Goldberg says.

      In any case, proponents say, people have consumed as many as trillions of meals containing genetically modified ingredients over the past few decades. Not a single verified case of illness has ever been attributed to the genetic alterations. Mark Lynas, a prominent anti-GM activist who in 2013 publicly switched to strongly supporting the technology, has pointed out that every single news-making food disaster on record has been attributed to non-GM crops, such as the Escherichia coli-infected organic bean sprouts that killed 53 people in Europe in 2011.

      Critics often disparage U.S. research on the safety of genetically modified foods, which is often funded or even conducted by GM companies, such as Monsanto. But much research on the subject comes from the European Commission, the administrative body of the E.U., which cannot be so easily dismissed as an industry tool. The European Commission has funded 130 research projects, carried out by more than 500 independent teams, on the safety of GM crops. None of those studies found any special risks from GM crops….

      Some scientists say the objections to GM food stem from politics rather than science—that they are motivated by an objection to large multinational corporations having enormous influence over the food supply; invoking risks from genetic modification just provides a convenient way of whipping up the masses against industrial agriculture. “This has nothing to do with science,” Goldberg says. “It’s about ideology.” Former anti-GM activist Lynas agrees. He has gone as far as labeling the anti-GM crowd “explicitly an antiscience movement….”

      There is a middle ground in this debate. Many moderate voices call for continuing the distribution of GM foods while maintaining or even stepping up safety testing on new GM crops. They advocate keeping a close eye on the health and environmental impact of existing ones. But they do not single out GM crops for special scrutiny, the Center for Science in the Public Interest’s Jaffe notes: all crops could use more testing. “We should be doing a better job with food oversight altogether,” he says.

      Even Schubert agrees. In spite of his concerns, he believes future GM crops can be introduced safely if testing is improved. “Ninety percent of the scientists I talk to assume that new GM plants are safety-tested the same way new drugs are by the FDA,” he says. “They absolutely aren’t, and they absolutely should be.”

      Stepped-up testing would pose a burden for GM researchers, and it could slow down the introduction of new crops. “Even under the current testing standards for GM crops, most conventionally bred crops wouldn’t have made it to market,” McHughen says. “What’s going to happen if we become even more strict?”

      That is a fair question. But with governments and consumers increasingly coming down against GM crops altogether, additional testing may be the compromise that enables the human race to benefit from those crops’ significant advantages.

How to Break the Climate Deadlock

[These excerpts are from an article by Naomi Oreskes in the Winter 2017/2018 special edition of Scientific American.]

      The main claim of politicians, lobbyists and CEOs who lead the charge to minimize the government’s role in addressing climate change is that the world should rely on the marketplace to fix the problem. Greenhouse gas emissions are part of the world’s economy, so if they are a problem, markets will respond, for instance, by offering technologies to prevent climate change or allow us to adapt to it.

      In truth, however, energy markets do not account for the “external,” or social, costs of using fossil fuels. These are not reflected in the price we pay at the pump, the wellhead or the electricity meter. For example, pollution from coal causes disease, damages buildings and contributes to climate change. When we buy electricity generated from coal, we pay for the electricity, but we do not pay for these other real, measurable costs.

      In a properly functioning market, people pay the true cost of the goods and services they use. If I dump my garbage in your backyard, you are right to insist that I pay for that privilege, assuming you are willing to let me do it at all. And if you do not insist, you can be pretty sure that I will keep on dumping my garbage there. In our markets today, people are dumping carbon dioxide into the atmosphere without paying for that privilege. This is a market failure. To correct that failure, carbon emissions must have an associated cost that reflects the toll they take on people and the environment. A price on carbon encourages individuals, innovators and investors to seek alternatives, such as solar and wind power, that do not cause carbon pollution. When economist Hoesung Lee became the new head of the Intergovernmental Panel on Climate Change in 2015, he named carbon pricing as the world’s top climate change priority.

      Several countries and regions have implemented carbon prices. In British Columbia, a carbon tax has helped cut fuel consumption and carbon emissions without harming economic growth. To prevent taxes from rising overall, the government also lowered personal and corporate income taxes; the province now has the lowest personal income tax rate and one of the lowest corporate tax rates in Canada.

      Another way to remedy the market failure of pollution is to create a trading system where people can buy the right to pollute—a right that they can use, save or sell. A company that can reduce its emissions more than the law requires can sell any unused credits, whereas a company that cannot meet the standards can buy credits until it figures out how to solve its pollution problem….

      Some people, pointing to the recent rapid drop in the cost of solar power, argue that the market is responding and thus proves that we can solve climate change without Paris, without a big international treaty or even without government intervention at all. Others say that if we just implement a carbon tax, the market will do the rest. Neither position captures the whole story.

      To stop climate change, we need a new energy system. This means large-scale renewable power, coupled with dramatic increases in energy efficiency, demand management and energy storage. Solar and wind power work and in many domains are now cost-competitive, but they are not at the scale needed to replace enough fossil-fuel power plants to stop the ongoing rise in atmospheric CO2 levels. After half a century of heavy and sustained public investment, nuclear power remains costly and controversial, and we still lack an accepted means of nuclear waste disposal. Carbon capture and storage—which collects emissions and puts them underground—is a great idea but has yet to be implemented.

      A price on carbon will push demand in the right direction, but it needs to be reinforced by the pull of public investment in innovation. The most likely way we will get the innovation we need, at the scale we need, in the time frame we need and at a retail price that people can afford, is if the public sector plays a significant role….

      Carbon capture and storage requires special attention. The emissions-reduction goals being promised by many countries assume that these nations will be capturing carbon and storing it in the ground. The dirty secret is that a proved system does not exist, not to mention a cost-effective one….

Exploration before Exploitation

[These excerpts are from an editorial by Erik E. Cordes and Lisa A. Levin in the February 16, 2018, issue of Science.]

      The current U.S. administration has proposed to open up 90% of the U.S. continental shelf to oil and gas drilling as part of a new Bureau of Ocean Energy Management (BOEM) Draft Proposed Program. Although there is a clear need to move beyond fossil fuels for America’s energy needs and energy security, there are also a number of immediate existential threats posed by an increase in offshore drilling.

      The sites that would be opened include vast areas of the ocean floor that remain unexplored, even unmapped. The maps that exist of our oceans, which make up about 70% of the surface of Earth, are at a resolution of about 5 kilometers. If Manhattan were under water, a map of it would be 4 pixels by 1 pixel across—easy enough to miss in a casual survey. Only about 5% of the ocean floor has been mapped in the level of detail equivalent to the high-resolution maps of the Moon and Mars. Of this, only about 0.01% has been seen by humans through photo or video surveys.
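The excerpt's Manhattan comparison is a matter of simple division: at a map resolution of about 5 kilometers per pixel, an island the size of Manhattan nearly vanishes. The arithmetic below is a rough check, and Manhattan's approximate dimensions (about 21.6 km long by 3.7 km wide) are an assumption not given in the excerpt:

```python
# Rough check of the map-resolution claim: at ~5 km per pixel,
# how many pixels would Manhattan occupy on existing ocean-floor maps?
# Manhattan's dimensions below are assumed values, not from the excerpt.

RESOLUTION_KM = 5.0          # stated resolution of existing ocean maps
MANHATTAN_LENGTH_KM = 21.6   # assumed length of Manhattan
MANHATTAN_WIDTH_KM = 3.7     # assumed width of Manhattan

length_px = MANHATTAN_LENGTH_KM / RESOLUTION_KM  # ~4.3 pixels
width_px = MANHATTAN_WIDTH_KM / RESOLUTION_KM    # ~0.7 pixels

print(f"Manhattan at 5 km/pixel: about {length_px:.1f} x {width_px:.1f} pixels")
```

Under those assumed dimensions, the island spans roughly four pixels along its length and less than one across its width, consistent with the "4 pixels by 1 pixel" figure in the excerpt.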

      This is noteworthy because the continental margins are not just featureless, muddy plains that could be drilled at any location with little disturbance. They contain submarine canyons, deep-water coral reefs and mounds, and natural hydrocarbon seep communities. All of these habitats interact with the surface ocean and provide services on which humans rely. As an example, many small fish that feed the larger fish we rely on for food migrate every day to the deep sea and interact with the unusual habitats there. The pharmaceutical industry spends untold funds on drug discovery, and many promising finds have come from deep-sea corals and sponges. At a global scale, the ocean absorbs one-third of the carbon that humans emit and more than 90% of the excess heat that has recently entered the atmosphere. Much of this ends up in the deep sea….

      The probability of an offshore drilling accident increases with the depth of the industrial activity, and a single isolated incident may require decades to centuries for recovery because of the slow growth and longevity of the deep-sea fauna. Even in well-studied areas, long-term observation and monitoring, both pre- and postdrilling, will be necessary to distinguish the impacts of drilling from the effects of climate change, pollutants, fishing, and other human disturbances.

      The regions now considered for drilling have not been subject to the decades of deep-sea exploration and research that are essential to make sound decisions for siting of new drilling and the effective management of deep-sea resources. If existing maps are insufficient, or our understanding of the basic life-history traits of the fauna is incomplete, we could be losing the next generation of anticancer drugs or the recycling of nutrients essential to support fisheries while we exploit energy reserves that are best left in the ground. A clear national commitment to science-based management of offshore drilling would demonstrate the leadership of the United States on this issue, which has broad relevance globally as the industrialization of the open ocean proceeds.

Fact or Fiction?: Vaccines are Dangerous

[This excerpt is from an article by Dina Fine Maron in the Winter 2017/2018 special edition of Scientific American.]

      We live in a crowded, fast-moving world, and disease travels easily. The data are clear: failure to immunize a child comes with a much more formidable risk—leaving children vulnerable to contracting a potentially debilitating or lethal illness. Some children are too sick or too young to receive inoculations, so they remain at risk. If those children or other unvaccinated kids come into contact with someone else who was not protected against certain microbes, that can set off a wave of disease such as the measles outbreak in the U.S. in the summer of 2017. Maladies that have become uncommon, such as polio and measles, can also quickly reappear if we stop vaccinating against them, particularly when they are unintentionally imported across geographic borders. The 2015 measles outbreak that rippled through the U.S., for example, had genetic markers that suggest it came from an overseas traveler. Protecting kids actually helps protect everyone.

Answers to Climate Contrarian Nonsense

[This excerpt is from an article by John Rennie in the Winter 2017/2018 special edition of Scientific American.]

      Although CO2 makes up only 0.04 percent of the atmosphere, that small number says nothing about its significance in climate dynamics. Even at that low concentration, CO2 absorbs infrared radiation and acts as a greenhouse gas, as physicist John Tyndall demonstrated in 1859. Chemist Svante Arrhenius went further in 1896 by estimating the impact of CO2 on the climate; after painstaking hand calculations, he concluded that doubling its concentration might cause almost six degrees Celsius of warming—an answer not much out of line with recent, far more rigorous computations.

      Contrary to the contrarians, human activity is by far the largest contributor to the observed increase in atmospheric CO2. According to the Global Carbon Project, anthropogenic CO2 amounts to about 35 billion tons annually—more than 130 times as much as volcanoes produce. True, 95 percent of the releases of CO2 to the atmosphere are natural, but natural processes such as plant growth and absorption into the oceans pull the gas back out of the atmosphere and almost precisely offset them, leaving the human additions as a net surplus. Moreover, several sets of experimental measurements, including analyses of the shifting ratio of carbon isotopes in the air, further confirm that fossil-fuel burning and deforestation are the primary reasons that CO2 levels have risen 40 percent since 1832, from 284 parts per million (ppm) to more than 400 ppm—a remarkable jump to the highest levels seen in millions of years.
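The 40 percent rise quoted above follows directly from the two concentration figures in the excerpt, as a quick check shows (using exactly the 284 ppm and 400 ppm values given; actual modern readings run somewhat higher):

```python
# Verify the quoted ~40 percent rise in atmospheric CO2,
# from the pre-industrial 284 ppm to the modern 400+ ppm.
PREINDUSTRIAL_PPM = 284.0  # concentration circa 1832, per the excerpt
MODERN_PPM = 400.0         # "more than 400 ppm" today, per the excerpt

rise = (MODERN_PPM - PREINDUSTRIAL_PPM) / PREINDUSTRIAL_PPM
print(f"Rise since pre-industrial times: {rise:.0%}")
```

The ratio works out to about 41 percent, matching the roughly 40 percent figure in the text; since 400 ppm is a floor ("more than 400 ppm"), the true rise is at least that large.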

      Contrarians frequently object that water vapor, not CO2, is the most abundant and powerful greenhouse gas; they insist that climate scientists routinely leave it out of their models. The latter is simply untrue: from Arrhenius on, climatologists have incorporated water vapor into their models. In fact, water vapor is why rising CO2 has such a big effect on climate. CO2 absorbs some wavelengths of infrared that water does not, so it independently adds heat to the atmosphere. As the temperature rises, more water vapor enters the atmosphere and multiplies CO2’s greenhouse effect; the Intergovernmental Panel on Climate Change notes that water vapor may “approximately double the increase in the greenhouse effect due to the added CO2 alone.”

      Nevertheless, within this dynamic, the CO2 remains the main driver (what climatologists call a “forcing”) of the greenhouse effect. As NASA climatologist Gavin Schmidt has explained, water vapor enters and leaves the atmosphere much more quickly than CO2 and tends to preserve a fairly constant level of relative humidity, which caps off its greenhouse effect. Climatologists therefore categorize water vapor as a feedback rather than a forcing factor. (Contrarians who don’t see water vapor in climate models are looking for it in the wrong place.)

Education as an American Right?

[This excerpt is from an article by Julie Underwood in the February 2018 issue of Phi Delta Kappan.]

      Since their inception, public schools have served to develop children’s ability to participate as active citizens in our nation's democracy. In addition to helping them secure personal gain, prepare for productive work, and accomplish individual goals, education has been expected to create responsible and active participants in American democratic society.

      Our nation’s founders believed that the success of American democracy depended on the capacity of its citizens to make informed and reasonable decisions about the public good. In those days, the primary rationale for providing public funds for education was to develop a populace that understood political and social issues, could vote wisely, and serve on juries and deliberative bodies. Even today, the goal is to afford all children an adequate education for equal access to participation in our democratic society.

      Currently, we treat students as citizens of the state in which they reside and not as citizens of the larger country. Thus, we allow each state to set the parameters of its investment in children and to set the limits of those children's opportunities. In today's world, though, it is becoming increasingly difficult to argue that the education of children in Alabama or Arizona has no significant effect on the people of Mississippi or Michigan, not only insofar as they participate in the larger national economy but also to the extent that their political decisions have implications for the rest of the country. Do we really believe that we can ignore the differences in opportunities provided to individuals just because of their states of residence? Or, given the realities of 21st-century life — at a time when citizens frequently relocate for higher education or work, drive across state borders for dinner, share the same cyberspace, and consume the same media — can we acknowledge that we all have a stake in ensuring that our fellow citizens have the right, as Americans, to an education that allows them to participate in the political process?

My Climate Change Crisis

[These excerpts are from an article by Paul C. Rogge in the February 9, 2018, issue of Science.]

      Reading former Vice President Al Gore’s An Inconvenient Truth in college awakened me to the widespread threat of climate change. Spurred to action, I joined a lab to develop alternative energy technologies. The result was an undergraduate project studying magnetic materials, which are important for electric vehicles and wind turbines. It got me hooked on research and left me wanting to make a bigger impact. I wanted my work to lead straight to solutions—but in the process, I veered off course.

      While visiting prospective graduate programs, I found just the project I was looking for: creating a radically new type of solar cell using low-cost electrochemistry techniques. The chemistry my new project would require was fundamentally different from the methods I had learned while working on magnetic materials, and I hadn’t really thought about chemistry since taking Chem 101 as a college freshman. But I was so driven by the potential social impact of the work that these realities did not worry me.

      In my first week, it was clear that I was starting from scratch. Mixing my first solution, I dumped powdered copper sulfate into a dry beaker before adding water—a big no-no, as any chemist knows. The extreme heat given off nearly caused the beaker to explode. But I quickly learned the correct procedures and threw myself into my research.

      However, 2 years of intense work did not lead to much progress. I found myself in the lab less, and my patience for troubleshooting experiments waned. I began to doubt my project and even my ability to graduate. To my adviser, the unexpected failure of the technique I was using offered a chance to dig deeper into the underlying chemistry. He was excited about what he saw as a silver lining, which made my enthusiasm sink even further. I didn’t want to study electrochemistry techniques; I wanted to solve climate change.

      In hindsight, the source of my dissatisfaction was clear. In choosing my research area, I had been blinded by my enthusiasm to tackle a pressing social and environmental problem. I hadn’t critically considered my own scientific interests and whether I would enjoy working in an electrochemistry lab. I should have remembered how much I struggled in Chem 101. And I should have realized that the concepts and techniques I mastered in my undergraduate research on magnetic materials—the very things that got me hooked on research in the first place—were not particularly relevant for electrochemistry….

      Just as I came to see my research as part of a larger scientific ecosystem, today I understand that scientific advancements are just one part of the needed response to climate change. I’ve reduced my environmental impact by taking public transportation to work, significantly cutting my meat intake, and resisting my consumerist impulses. My lifestyle changes won’t single-handedly reverse climate change, and neither will my individual scientific contributions. But we all need to work together to address such challenges, each of us contributing in the best way we can.

An Indoor Chemical Cocktail

[These excerpts are from an article by Sasho Gligorovski and Jonathan P.D. Abbatt in the February 9, 2018, issue of Science.]

      In the past 50 years, many of the contaminants and chemical transformations that occur in outdoor waters, soils, and air have been elucidated. However, the chemistry of the indoor environment in which we live most of the time—up to 90% in some societies—is not nearly as well studied. Recent work has highlighted the wealth of chemical transformations that occur indoors. This chemistry is associated with 3 of the top 10 risk factors for negative health outcomes globally: household air pollution from solid fuels, tobacco smoking, and ambient particulate matter pollution. Assessments of human exposure to indoor pollutants must take these reactive processes into consideration.

      A few studies illustrate the nature of multi-phase chemistry in the indoor environment….a highly carcinogenic class of compounds—the tobacco-specific nitrosamines—forms via the reaction of gas-phase nitrous acid (HONO) with cigarette nicotine that is adsorbed onto indoor surfaces similar to those in a typical smoker’s room. HONO is also produced indoors directly by other combustion sources such as gas stoves and by the gas-surface reactions of gaseous nitrogen oxides on walls, ceilings, and carpets. Likewise, carcinogenic polycyclic aromatic hydrocarbons (PAHs) and their often more toxic oxidation products are mobile, existing both on the walls of most dwellings and in the air; PAHs arise from combustion sources such as smoking and inefficient cookstoves. This is a particularly important issue in developing countries, where the adverse health effects from cooking with solid fuels are a leading cause of disease. As another example, use of chlorine bleach to wash indoor surfaces promotes oxidizing conditions not just on the surfaces being washed but throughout the indoor space. Reactive chlorinated gases (such as HOCl and Cl2) evaporate from the washed surface, can oxidize other surfaces in a room, and may be broken apart by ultraviolet (UV) light to form reactive radicals….

      The building science research community has long identified the importance of ventilation for the state of indoor environments. Open windows expose us to outdoor air, whereas well-sealed houses are subject to emissions from furnishings, building materials, chemical reactions, and people and their activities. Climate change and outdoor air pollution are leading to efforts to better seal off indoor spaces, slowing down exchange of outdoor air. The purpose may be to improve air conditioning, build more energy-efficient homes, or prevent the inward migration of outdoor air pollution. As exposure to indoor environments increases, we need to know more about the chemical transformations in our living and working spaces, and the associated impacts on human health.

As Polar Ozone Mends, UV Shield Closer to Equator Thins

[These excerpts are from an article by April Reese in the February 9, 2018, issue of Science.]

      Thirty years after nations banded together to phase out chemicals that destroy stratospheric ozone, the gaping hole in Earth's ultraviolet (UV) radiation shield above Antarctica is shrinking. But new findings suggest that at midlatitudes, where most people live, the ozone layer in the lower stratosphere is growing more tenuous—for reasons that scientists are struggling to fathom.

      “I don’t want people to panic or get overly worried,” says William Ball, an atmospheric physicist…. “But there is something happening in the lower stratosphere that's important to understand….”

      Ball and his colleagues suspect the culprit is “very short-lived substances,” or VSLSs: ozone-eating chemicals such as dichloromethane that break down within 6 months after escaping into the air. Researchers had long assumed that VSLSs’ short lifetime would keep them from reaching the stratosphere, but a 2015 study suggested that the substances may account for as much as 25% of the lower stratosphere’s ozone losses. Whereas many VSLSs are of natural origin—marine organisms produce dibromomethane, for example—use of humanmade dichloromethane, an ingredient in solvents and paint removers, has doubled in recent years. “We should study [VSLSs] more completely,” says Richard Rood, an atmospheric scientist at the University of Michigan in Ann Arbor. But because the compounds are released in small quantities, he says, “They’re going to be difficult to measure.”

      He and others say it's vital to determine what's destroying ozone over the populous midlatitudes. “The potential for harm ... may actually be worse than at the poles,” says Joanna Haigh, co-director of the Grantham Institute at Imperial College London. “The decreases in ozone are less than we saw at the poles before the Montreal Protocol was enacted, but UV radiation is more intense in these regions.”

      Ball and others emphasize that the Montreal Protocol has been a resounding success. “I don’t think it in any way says there’s something fundamentally wrong with how we've been dealing with the ozone problem,” Rood says. “What it says to me is that we’re now looking at effects that are more subtle than that original problem we were taking on” when the Montreal Protocol was adopted.

Gun Science

[These excerpts are from an article by Michael Shermer in the Winter 2017/2018 special edition of Scientific American.]

      According to the Centers for Disease Control and Prevention, 33,594 people died by guns in 2014 (the most recent year for which U.S. figures are available), a staggering number that is orders of magnitude higher than that of comparable Western democracies. What can we do about it? National Rifle Association executive vice president Wayne LaPierre believes he knows: “The only thing that stops a bad guy with a gun is a good guy with a gun.” If LaPierre means professionally trained police and military who routinely practice shooting at ranges, this observation would at least be partially true. If he means armed private citizens with little to no training, he could not be more wrong….

      For example, of the 1,082 women and 267 men killed in 2010 by their intimate partners, 54 percent were shot by guns. Over the past quarter of a century, guns were involved in a greater number of intimate partner homicides than all other causes combined. When a woman is murdered, it is most likely by her intimate partner with a gun. Regardless of what really caused Olympic track star Oscar Pistorius to shoot his girlfriend, Reeva Steenkamp (whether he mistook her for an intruder or he snapped in a lover’s quarrel), her death is only the latest such headline. Recall, too, the fate of Nancy Lanza, killed by her own gun in her own home in Connecticut by her son, Adam Lanza, before he went to Sandy Hook Elementary School to murder some two dozen children and adults. As an alternative to arming women against violent men, legislation can help: data show that in states that prohibit gun ownership by men who have received a domestic violence restraining order, gun-caused homicides of intimate female partners have been reduced by 25 percent.

      Another myth to fall to the facts is that gun-control laws disarm good people and leave the crooks with weapons. Not so, say the Johns Hopkins authors: “Strong regulation and oversight of licensed gun dealers—defined as having a state law that required state or local licensing of retail firearm sellers, mandatory record keeping by those sellers, law enforcement access to records for inspection, regular inspections of gun dealers, and mandated reporting of theft or loss of firearms—was associated with 64 percent less diversion of guns to criminals by in-state gun dealers.”

      Finally, before we concede civilization and arm everyone to the teeth pace the NRA, consider the primary cause of the centuries-long decline of violence as documented by Steven Pinker in his 2011 book The Better Angels of Our Nature: the rule of law by states that turned over settlement of disputes to judicial courts and curtailed private self-help justice through legitimate use of force by police and military trained in the proper use of weapons.

Journey to Gunland

[These excerpts are from an article by Melinda Wenner Moyer in the Winter 2017/2018 special edition of Scientific American.]

      …A growing body of research suggests that violence is a contagious behavior that exists independent of weapon or means. In this framework, guns are accessories to infectious violence rather than fountainheads. But this does not mean guns don’t matter. Guns intensify violent encounters, upping the stakes and worsening the outcomes—which explains why there are more deaths and life-threatening injuries where firearms are common. Violence may be primarily triggered by other violence, but these deadly weapons make all this violence worse….

      The frequency of self-defense gun use rests at the heart of the controversy over how guns affect our country. Progun enthusiasts argue that it happens all the time. In 1995 Gary Kleck, a criminologist at Florida State University, and his colleague Marc Gertz published a study that elicited what has become one of the gun lobby's favorite numbers. They randomly surveyed 5,000 Americans and asked if they, or another member of the household, had used a gun for self-protection in the past year. A little more than 1 percent of the participants answered yes, and when Kleck and Gertz extrapolated their results, they concluded that Americans use guns for self-defense as many as 2.5 million times a year.

      This estimate is, however, vastly higher than numbers from government surveys, such as the National Crime Victimization Survey (NCVS), which is conducted in tens of thousands of households. It suggests that victims use guns for self-defense only 65,000 times a year. In 2015 Hemenway and his colleagues studied five years’ worth of NCVS data and concluded that guns are used for self-defense in less than 1 percent of all crimes that occur in the presence of a victim. They also found that self-defense gun use is about as effective as other defensive maneuvers, such as calling for help. “It’s not as if you look at the data, and it says people who defend themselves with a gun are much less likely to be injured,” says Philip Cook, an economist at Duke University, who has been studying guns since the 1970s….

      A closer look at the who, what, where and why of gun violence also sheds some light on the self-defense claim. Most Americans with concealed carry permits are white men living in rural areas, yet it is young black men in urban areas who disproportionately encounter violence. Violent crimes are also geographically concentrated: Between 1980 and 2008, half of all of Boston’s gun violence occurred on only 3 percent of the city’s streets and intersections. And in Seattle, over a 14-year period, every single juvenile crime incident took place on less than 5 percent of street segments. In other words, most people carrying guns have only a small chance of encountering situations in which they could use them for self-defense….

      The popular gun-advocacy bumper sticker says that “guns don’t kill people, people kill people”—and it is, in fact, true. People, all of us, lead complicated lives, misinterpret situations, get angry, make mistakes. And when a mistake involves pulling a trigger, the damage can’t be undone….

Girls Lead in Solving Problems with Others

[These excerpts are from a report by OECD (Organization for Economic Co-operation and Development) in the February 2018 issue of Phi Delta Kappan.]

      Girls are much better than boys at working together to solve problems, according to an international assessment of collaborative problem solving.

      About 125,000 15-year-olds in 52 countries and economies took part in the assessment, which analyzes for the first time how well students work together as a group, their attitudes toward collaboration, and the influence of factors such as gender, after-school activities, and social background….

      • Students with stronger reading or math skills tend to be better at collaborative problem solving because managing and interpreting information, and the ability to reason, are required to solve problems.

      • Girls do better than boys in every country and economy that participated in the assessment, by the equivalent of a half year of schooling on average….

      • Exposure to diversity in the classroom tends to be associated with better collaboration skills. For example, in some countries, students without an immigrant background perform better in the collaboration-specific aspects of the test when they attend schools with a larger proportion of immigrant students.

      • Students who attend physical education lessons or play sports generally are more positive about collaboration. Students who play video games outside of school score slightly lower in collaborative problem solving than students who do not play video games. But students who access the internet or social networks outside of school score slightly higher than other students.

North American Waterways Are Becoming Saltier

[These excerpts are from an item from the University of Maryland in the February 2018 issue of The Science Teacher.]

      Across America, streams and rivers are becoming saltier, thanks to road deicers, fertilizers, and other salty compounds that humans indirectly release into waterways. At the same time, freshwater supplies are becoming more alkaline.

      Salty, alkaline freshwater can create big problems for drinking water supplies, urban infrastructure, and natural ecosystems. A new study is the first to assess long-term changes in freshwater salinity and pH at the continental scale. Drawn from data recorded at 232 U.S. Geological Survey monitoring sites across the country over the past 50 years, the analysis shows significant increases in both salinization and alkalinization. The study results also suggest a close link between the two properties, with different salt compounds combining to do more damage than any one salt on its own….

      According to the researchers, most freshwater salinization research has focused on sodium chloride, better known as table salt, which is also the dominant chemical in road deicers. But in terms of chemistry, salt has a much broader definition, encompassing any combination of positively and negatively charged ions that dissociate in water. Some of the most common positive ions found in salts—including sodium, calcium, magnesium, and potassium—can have damaging effects on freshwater at higher concentrations.

      …Alkalinization, which is influenced by a number of different factors in addition to salinity, increased by 90%.

      The root causes of increased salt in waterways vary from region to region, according to researchers. In the snowy Mid-Atlantic and New England, road salt applied to maintain roadways in winter is a primary culprit. In the heavily agricultural Midwest, fertilizers, particularly those with high potassium content, also make major contributions. In other regions, mining waste and weathering of concrete, rocks, and soils releases salts into adjacent waterways.

It’s Time for EPA’s Scott Pruitt to Go

[This excerpt is from an editorial by Fred Krupp in the Winter 2018 issue of Solutions, the newsletter of the Environmental Defense Fund.]

      This month marks the one-year anniversary of President Trump’s inauguration and the convening of the 115th Congress. And what a year it has been—a perfect storm of extreme weather and extreme politics. The president is surrendering America's climate leadership, undermining the government's ability to enforce the law and demolishing environmental safeguards.

      The administration's point man for environmental assaults and climate denial has been EPA Administrator Scott Pruitt. As Oklahoma attorney general, he sued EPA 14 times trying to block clean air and water protections. This year he led what amounted to a hostile takeover of the agency, rolling back climate standards even as historic hurricanes and wildfires drove home the need for urgent action.

      Pruitt has ruled EPA under a cloak of secrecy, suppressing climate web pages, silencing scientists and keeping his schedule secret, until Freedom of Information Act requests from EDF and others forced its release. It showed Pruitt meeting regularly with executives from the mining, fossil fuel and auto industries, sometimes shortly before making decisions that put their interests above those of the American people. His frequent travel to Oklahoma at taxpayers’ expense prompted EPA’s Inspector General to open an investigation.

Our Science, Our Society

[These excerpts are from an editorial by Susan Hockfield in the February 3, 2018, issue of Science.]

      We live in a scientific golden age. Never has the pace of discovery been so rapid, the range of achievements so broad, and the changing nature of our understanding so revolutionary. Science today has extraordinary powers. It reveals fundamental phenomena of our universe, catalyzes new technologies, powers new businesses, fosters new industries, and improves lives….Today’s advances and innovations presage a future that most of us have not yet imagined.

      Lamentably, we also live in a new heyday of anti-science activism. Fake news and “alternative facts” abound. Climate-change deniers occupy political office and determine environmental policy. Fears of unsubstantiated dangers delay the deployment of genetically modified foods in starving nations. The risks of nuclear power are overstated rather than carefully weighed. The anti-vaccination movement endures, and there are claims that science is as culturally determined and subjective as any other endeavor. Public figures cynically dismiss scientific findings, fostering a popular distrust of expertise and experts. All this, too, presages a different future that most of us would not want to imagine.

      In this environment, how can we ensure that science prevails and continues to flourish? What can be done to get the most from this scientific golden age? We can start by recognizing the critical role of institutions in nurturing the scientific enterprise….

      When the focus of science is placed on individual achievement, it can neglect the importance of the institutions that make the work of science possible. That leaves our institutions open to attack. And, indeed, both science and its institutions are under attack today, with rampant skepticism about the utility of the research enterprise and higher education. Also under attack are the core principles that unite scientists and science enthusiasts: that objective reality can be discovered; that anyone can compete in a game governed by ideas; that disagreements are best resolved by assembling facts to test competing views; and that science and the application of scientific principles have the capacity to improve lives. What's more, science's universal truths call together people from any background, any nation, any phenotype or genotype. These principles have guided us for centuries along the road to discovery and understanding.

A Tale of Two Cultures

[These excerpts are from an editorial by Rush Holt in the January 26, 2018, issue of Science.]

      It is the best of times. It is the worst of times. We are witnessing major advances in almost every field of science, leading to a better understanding of the world and improvements in the quality of people’s lives. Yet, scattered distrust of science, neglect of science by public officials, and frequent denial of scientific thinking in many quarters seem to call into question that rosy view of scientific progress. The inconsistency indicates widespread misunderstanding of what science is and how it works. It is up to scientists to fix this.

      …For example, it is troubling to scientists that in the United States, the president has failed to appoint a science adviser. But even more troubling is that the public has reacted with a yawn.

      …A principle of science is that all findings are provisional. Some seem to think this means science is so uncertain that any opinion or political assertion is as valid as evidence.

      Somehow, scientists must rebuild public understanding and appreciation of science and evidence-based thinking. Clearly, it will not be accomplished simply by decrying the lack of trust or failure to appoint science advisers. It must be achieved by demonstrating trustworthiness and the extraordinary effectiveness of science in confronting questions and problems. Scientists must show that evidence-based thinking leads to more reliable policies to create jobs, maintain a healthy environment, or improve teaching. Rather than denouncing the absence of scientists in policy-making positions, the scientific community must raise public understanding to the level where no public official of any party would ever want to be without a science adviser. Scientists must build the recognition that despite occasional errors, and even blunders, scientific thinking has a strong record of success over centuries. Scientists must demonstrate that science and evidence-based thinking are relevant to everyone, and that science is not an arcane practice under the control of a remote, self-interested priesthood.

      Science practiced by those who neither make their work accessible to all people, nor make clear their work is for the benefit of all, becomes an impoverished enterprise and risks being unsustainable. It comes down to good science communication—not simply choosing the right words to explain one's research, but actually earning the public’s trust that the whole enterprise is intended for societal good. If scientists fail to rebuild the public's understanding and appreciation, this could indeed become the worst of times.

When Facts Backfire

[This excerpt is from an article by Michael Shermer in the Winter 2017/2018 special edition of Scientific American.]

      Have you ever noticed that when you present people with facts that are contrary to their deepest held beliefs they always change their minds? Me neither. In fact, people seem to double down on their beliefs in the teeth of overwhelming evidence against them. The reason is related to the worldview perceived to be under threat by the conflicting data.

      Creationists, for example, dispute the evidence for evolution in fossils and DNA because they are concerned about secular forces encroaching on religious faith. Antivaxxers distrust big pharma and think that money corrupts medicine, which leads them to believe that vaccines cause autism despite the inconvenient truth that the one and only study claiming such a link was retracted and its lead author accused of fraud. The 9/11 truthers focus on minutiae like the melting point of steel in the World Trade Center buildings that caused their collapse because they think the government lies and conducts “false flag” operations to create a New World Order. Climate deniers study tree rings, ice cores and the ppm of greenhouse gases because they are passionate about freedom, especially that of markets and industries to operate unencumbered by restrictive government regulations….

  Website by Avi Ornstein, "The Blue Dragon" – 2016 All Rights Reserved