Interesting Excerpts
The following excerpts are from articles or books that I have recently read. They caught my interest and I hope that you will find them worth reading. If one sparks your interest and you want to learn more, or if you choose to cite it, I urge you to read the full article or source so that you better understand the perspective of the author(s).
Beliefs in Aliens, Atlantis Are on the Rise

[These excerpts are from an article by Lizzie Wade in the 12 April 2019 issue of Science.]

      …Common beliefs include that aliens helped build the Egyptian and Mayan pyramids, that refugees escaping Atlantis brought technology to cultures around the world, and that European immigrants were the original inhabitants of North America.

      …41% of Americans believed that aliens visited Earth in the ancient past, and 57% believed that Atlantis or other advanced ancient civilizations existed. Those numbers are up from 2016, when the survey found that 27% of Americans believed in ancient aliens and 40% believed in Atlantis.

      …He can’t say exactly what is driving the rise in such ideas, but cable TV shows like Ancient Aliens (which has run for 13 seasons) propagate them, as does the internet.

      …Almost all such claims assume that ancient non-European societies weren’t capable of inventing sophisticated architecture, calendars, math, and sciences like astronomy on their own. “It’s racist at its core,” says Kenneth Feder, an archaeologist at Central Connecticut State University in New Britain….

New Species of Ancient Human Unearthed

[These excerpts are from an article by Lizzie Wade in the 12 April 2019 issue of Science.]

      A strange new species may have joined the human family. Human fossils found in a cave on Luzon, the largest island in the Philippines, include tiny molars suggesting their owners were small; curved finger and toe bones hint that they climbed trees. Homo luzonensis, as the species has been christened, lived some 50,000 to 80,000 years ago, when the world hosted multiple archaic humans, including Neanderthals and Denisovans, and when H. sapiens may have been making its first forays into Southeast Asia….

      The discovery echoes that of another unusual ancient hominin—the diminutive H. floresiensis, or “hobbit,” found on the island of Flores in Indonesia. “One is interesting. Two is a pattern,” says Jeremy DeSilva, an expert on Homo foot bones at Dartmouth College. He and others suspect the islands of Southeast Asia may have been a cradle of diversity for ancient humans, and that H. luzonensis, like H. floresiensis, may have evolved small body size in isolation on an island….

      The teeth show a unique mosaic of traits found separately in other Homo species. The premolars are about the size of ours, but instead of a single root they have two or three—a primitive feature. The molars are much more modern, with single roots, but “incredibly small” at only 10 millimeters across and 8 millimeters long, says Florent Détroit, a paleoanthropologist at the Museum of Man in Paris who worked with Mijares. That’s even smaller than those of H. floresiensis. Tooth size tends to correlate with body size, so it’s possible that H. luzonensis itself was tiny, Détroit says. But only a complete arm or leg bone will say for sure.

      The long, curved fingers and toes resemble those of australopithecines like Lucy, an early human ancestor thought to have both walked upright and swung through the trees….

      Not everyone is ready to embrace these teeth and skeletal fragments as a separate species, rather than a locally adapted population of, say, H. erectus, an older hominin that lived in Asia for millennia….

      Regardless of whether H. luzonensis was its own species, it may have evolved in isolation for hundreds of thousands of years. Butchered rhino bones on Luzon date to 700,000 years ago, though researchers don’t yet know which human species was responsible….

Reverse Global Vaccine Dissent

[This editorial by Heidi J. Larson and William S. Schulz is in the 12 April 2019 issue of Science.]

      This year, the World Health Organization named vaccine hesitancy as one of the top 10 global health threats, alongside threats as grave as climate change, antimicrobial resistance, Ebola virus, and the next influenza pandemic. What happened? How did vaccine reluctance and refusal become such a major risk?

      The concerns driving antivaccine sentiment today are diverse. For example, from 2003 to 2004, a vaccine boycott in Nigeria's Kano State sparked the retransmission of polio across multiple countries as far as Indonesia. Rumors of vaccine contamination with antifertility agents contributed to distrust and reinforced the boycott, costing the Global Polio Eradication Initiative over U.S. $500 million to regain the progress that was lost. In Japan, vaccination against human papillomavirus plummeted to almost zero after young women complained of movement disorders and chronic pain, causing the government to suspend proactive recommendation of the vaccine nearly 6 years ago. Similar episodes occurred in Denmark, Ireland, and Colombia as YouTube videos of the girls' symptoms spread anxiety, despite evidence of the vaccine's safety.

      The global surge in measles outbreaks has been exacerbated by vaccine refusers. In 2015, the measles strain that sparked the Disneyland outbreak came from visitors from the Philippines, infecting people who had refused vaccination. And in Indonesia, Muslim leaders issued a fatwa against a measles vaccine containing “haram” porcine compounds, while naturopathic “cupping” methods were promoted on Facebook as an alternative to vaccination. In 2018, a mix of political, religious, and alternative health antivaccine messages circulated on WhatsApp and Facebook in Southern India, disrupting a local measles-rubella vaccination campaign.

      The phenomenon of vaccine dissent is not new. The pages of 18th-century London antivaccination pamphlets bristle with many of today’s memes, but these ideas now spread over unprecedented distances with remarkable speed, clustering in online neighborhoods of shared beliefs. This clustering can tear the protective fabric—the “herd (community) immunity”—that the majority of vaccine acceptors have woven. As the portion of the community that is vaccinated decreases, there is less protection for others who may be too young or unable to be vaccinated, or who choose not to be. For some diseases, it only takes a small minority to disrupt the protective cover.
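
[A note on the arithmetic behind that last sentence: the standard epidemiological rule of thumb is that blocking sustained transmission of a pathogen with basic reproduction number R0 requires an immune fraction of at least 1 - 1/R0. The sketch below uses approximate textbook R0 values, which are my own illustrative assumptions, not figures from the editorial.]

def herd_immunity_threshold(r0):
    """Fraction of a population that must be immune to block sustained spread."""
    return 1 - 1 / r0

# Approximate textbook R0 values (illustrative assumptions, not from the editorial):
for disease, r0 in [("measles", 15), ("polio", 6), ("influenza", 1.5)]:
    print(f"{disease}: R0 ~ {r0}, immune fraction needed ~ {herd_immunity_threshold(r0):.0%}")

# measles: ~93% immune -- so a refusing minority of only ~7% can break community protection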

      It is just over 20 years since British physician Andrew Wakefield sowed seeds of doubt about the safety of the MMR (measles, mumps, rubella) vaccine, suggesting a link between the vaccine and autism. Suspicions around the vaccine traveled globally, instilling anxiety among the most and least educated alike. The discredited Wakefield alone, though, cannot be blamed for today’s waves of vaccine discontent. He seeded a message on the eve of a technological revolution that disrupted business, politics, societies, and global health. The same year that Wakefield published his research, Google opened its doors. The launches of Facebook, YouTube, Twitter, and Instagram soon followed. These social media platforms have magnified individual sentiments that might have stayed local. Emotions are particularly contagious on social media, where personal narrative, images, and videos are shared easily.

      Today’s tech companies are now being called to account for their role in spreading vaccine dissent. Last month, the American Medical Association urged the chief executives of key technology companies to “ensure that users have access to scientifically valid information on vaccinations.” But this is not merely an issue of correcting misinformation. There are social networks in which vaccine views and information are circulating in online communities, where vaccine choices become part of one’s overall identity.

      To mitigate the globalization of vaccine dissent, while respecting legitimate sharing of concerns and genuine questions, a mix of relevant expertise is needed. Technology experts, social scientists, vaccine and public health experts, and ethicists must convene and take a hard look at the different roles each group has in addressing this challenge. It needs everyone’s attention.

Does Fossil Site Record Dino-killing Impact?

[These excerpts are from an article by Colin Barras in the 5 April 2019 issue of Science.]

      A fossil site in North Dakota records a stunningly detailed picture of the devastation minutes after an asteroid slammed into Earth about 66 million years ago, a group of researchers argues in a paper published online this week. Geologists have theorized that the impact, near what is now the town of Chicxulub on Mexico’s Yucatan Peninsula, played a role in the mass extinction at the end of the Cretaceous period, when all the dinosaurs (except birds) and much other life on Earth vanished.

      The team, led by Robert DePalma…, says it has uncovered a record of apocalyptic destruction 3000 kilometers from Chicxulub. At the site, called Tanis, the researchers say they have discovered the chaotic debris left when tsunamilike waves surged up a river valley. Trapped in the debris is a jumbled mess of fossils, including freshwater sturgeon that apparently choked to death on glassy particles raining down from the fireball lofted by the impact.

      …The deposit may also provide some of the strongest evidence yet that nonbird dinosaurs were still thriving on impact day….

      But not everyone has fully embraced the find, perhaps in part because it was first announced to the world last week in an article in The New Yorker. The paper, published in the Proceedings of the National Academy of Sciences (PNAS), does not include all the scientific claims mentioned in The New Yorker, including that numerous dinosaurs as well as fish were buried at the site….

      In the early 1980s, the discovery of a clay layer rich in iridium, an element found in meteorites, at the very end of the rock record of the Cretaceous at sites around the world led researchers to link an asteroid to the end-Cretaceous mass extinction. A wealth of other evidence has persuaded most researchers that the impact played some role in the extinctions. But no one has found direct evidence of its lethal effects.

      DePalma and his colleagues say the killing is captured in forensic detail in the 1.3-meter-thick Tanis deposit, which they say formed in just a few hours, beginning perhaps 13 minutes after impact. Although fish fossils are normally deposited horizontally, at Tanis, fish carcasses and tree trunks are preserved haphazardly, some in near vertical orientations, suggesting they were caught up in a large volume of mud and sand that was dumped nearly instantaneously. The mud and sand are dotted with glassy spherules—many caught in the gills of the fish—isotopically dated to 65.8 million years ago. They presumably formed from droplets of molten rock launched into the atmosphere at the impact site, which cooled and solidified as they plummeted back to Earth. A 2-centimeter-thick layer rich in telltale iridium caps the deposit.

      Tanis at the time was located on a river that may have drained into the shallow sea covering much of what is now the eastern and southern United States. DePalma’s team argues that as seismic waves from the distant impact reached Tanis minutes later, the shaking generated 10-meter waves that surged from the sea up the river valley, dumping sediment and both marine and freshwater organisms there. Such waves are called seiches: The 2011 Tohoku earthquake near Japan triggered 1.5-meter-tall seiches in Norwegian fjords 8000 kilometers away….
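
[The “13 minutes” figure in the preceding passage squares with simple travel-time arithmetic: seismic surface waves move through crustal rock at a few kilometers per second, so covering 3,000 kilometers takes on the order of ten minutes. The wave speed below is a typical value I have assumed; it is not given in the article.]

distance_km = 3000               # Chicxulub to Tanis, from the article
surface_wave_speed_km_s = 3.9    # assumed typical speed of seismic surface waves in crust

travel_time_min = distance_km / surface_wave_speed_km_s / 60
print(f"~{travel_time_min:.0f} minutes")  # ~13 minutes, matching the paper's estimate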

      Until a few years ago, some researchers had suspected the last dinosaurs vanished thousands of years before the catastrophe. If Tanis is all it is claimed to be, that debate—and many others about this momentous day in Earth’s history—may be over.

A Recharge Revolution

[These excerpts are from an article by Jennifer Marcus in the Spring 2019 issue of the USC Trojan Family Magazine.]

      …One hour of sunlight provides more than all of the energy consumed on the planet in a year. Solar panels are one way for us to tap into some of this universal, free power source—but what happens on a rainy day?
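
[The one-hour claim holds up to a back-of-the-envelope check. The sketch below uses rounded physical constants and a rough figure of 600 exajoules for annual global primary energy consumption; these numbers are my assumptions, not values from the article.]

import math

SOLAR_CONSTANT_W_M2 = 1361      # solar irradiance at the top of the atmosphere
EARTH_RADIUS_M = 6.371e6        # mean Earth radius
WORLD_ANNUAL_ENERGY_J = 6.0e20  # ~600 EJ, rough annual global primary energy use

# Power in the sunlight intercepted by Earth's cross-sectional disk
intercepted_power_w = SOLAR_CONSTANT_W_M2 * math.pi * EARTH_RADIUS_M**2
one_hour_energy_j = intercepted_power_w * 3600

print(f"One hour of sunlight: {one_hour_energy_j:.2e} J")  # ~6.2e20 J
print(f"Ratio to annual consumption: {one_hour_energy_j / WORLD_ANNUAL_ENERGY_J:.1f}")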

      Solar panels can only generate power when the sun shines on them, and wind turbines can only generate power when the wind blows. The ups and downs in supply from these renewable sources make it difficult for power companies to rely on them to meet customer demand in real time….

      If batteries could store surplus energy to keep a consistent supply on hand, that sporadic unreliability could cease to be a problem. That’s why Prakash and Narayan have developed a water-based organic battery that is long-lasting and built from inexpensive, eco-friendly components. This new design uses no metals or toxic materials and is intended for use in solar and wind power plants, where its large-scale storage capacity could make the energy grid more resilient and efficient.

      Their design differs from the conventional batteries familiar to consumers. It’s called a redox flow battery and consists of two tanks of fluid, which store the energy. The fluids are pumped through electrodes that are separated by a membrane. The fluids contain electrolytes; ions pass through the membrane from one fluid to the other while electrons travel through the electrodes and the external circuit, creating an electric current….
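
[One reason this architecture suits grid storage: energy capacity scales with tank volume while power scales with electrode area, so the two can be sized independently, and doubling the tanks doubles capacity without touching the electrode stack. Below is a minimal sketch of the standard theoretical-capacity estimate; the concentration, tank size, electron count, and cell voltage are illustrative assumptions, not specifications of the USC design.]

FARADAY_C_PER_MOL = 96485  # charge carried by one mole of electrons

def flow_battery_capacity_wh(conc_mol_per_l, tank_volume_l,
                             electrons_per_molecule, cell_voltage_v):
    """Theoretical energy capacity stored in a flow battery's tanks.

    charge (coulombs) = moles of active species * electrons transferred * Faraday constant
    energy (joules) = charge * cell voltage; divide by 3600 to get watt-hours.
    """
    moles = conc_mol_per_l * tank_volume_l
    charge_coulombs = moles * electrons_per_molecule * FARADAY_C_PER_MOL
    return charge_coulombs * cell_voltage_v / 3600

# Illustrative: 1 mol/L organic electrolyte, 1,000-liter tanks, 1 electron, 1.0 V cell
print(f"{flow_battery_capacity_wh(1.0, 1000, 1, 1.0) / 1000:.1f} kWh")  # ~26.8 kWh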

      Resembling a small building, the redox flow battery Prakash envisions would act as a battery farm of sorts, storing surplus energy generated from nearby solar panels or wind turbines….

      The new water-based organic flow batteries last for about 5,000 recharge cycles—five times longer than traditional lithium-ion batteries—giving them about a 15-year life span. At about one-tenth the cost of lithium-ion batteries, they’re also much cheaper to manufacture thanks to their use of abundant, sustainable materials.

      Narayan and Prakash have tested a 1-kilowatt flow battery capable of powering the basic electricity needs of a small house….

      With annual global energy consumption projected to continue increasing by about 50 percent in the next 30 years, relying on renewable resources is one of the most important motivators driving sustainable technology research forward. The world can’t continue to rely on fossil fuels to meet energy demands without devastating environmental consequences….

      Marinescu focuses on gathering energy harvested from sunlight and storing it as chemical energy—much like plants do through photosynthesis. She and her team are working on a way to convert that stored energy into electricity by using what are called metal-organic frameworks. These flexible, ultra-thin and highly porous crystalline structures have unique properties that have been used by scientists primarily to absorb and separate different types of gas. Their use for energy applications seemed like a lost cause because researchers believed they couldn't conduct electricity. But Marinescu’s work has changed that.

      In the lab, her team experimented with the materials. Typically, electrons were localized in bonds (which prevents them from conducting any electricity). But the team created new materials with the electrons spread over multiple bonds, developing solids that could now carry electric current the same way that metals do….The frameworks developed by her research group contain inexpensive elements and can transform acidic water into hydrogen. This represents a huge advance, as these materials could one day be used in technologies like those for hydrogen-powered vehicles. They can also be spread thin across a huge area: It only takes 10 grams of the material to coat a surface the size of a football field.
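
[A quick check on that last figure: spreading 10 grams over an American football field including end zones, roughly 5,350 square meters (the field size is my assumption; the article does not specify), works out to about 2 milligrams of material per square meter.]

material_g = 10.0
field_area_m2 = 5350.0  # assumed: American football field including end zones

areal_density_mg_per_m2 = material_g / field_area_m2 * 1000
print(f"~{areal_density_mg_per_m2:.1f} mg per square meter")  # ~1.9 mg/m^2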

      The technology opens the door for storing renewable energy at a huge, almost unthinkable scale….

Science during Crisis

[These excerpts are from an editorial by Rita R. Colwell and Gary E. Machlis in the 5 April 2019 issue of Science.]

      In April 1902, on the Caribbean island of Martinique, La Commission sur le Vulcan convened to make a fateful decision. Mt. Pelée was sending smoke aloft and spreading ash across the capital city of Saint-Pierre. Comprising physicians, pharmacists, and science teachers, the commission debated the danger of an eruption and the burden of evacuation, and judged the safety of the city’s population to be “absolutely assured.” Weeks later, Mt. Pelée erupted and approximately 30,000 residents died within minutes, leaving only two survivors. Environmental crises require pivotal decisions, and such decisions need timely, credible scientific information and science-based advice. This requirement is the focus of a report released last month by the American Academy of Arts and Sciences, calling attention to improvements in the operation and delivery of science during crises.

      Science has provided essential data and insight during disaster responses in the United States, including the World Trade Center attack (2001), Deepwater Horizon oil spill (2010), Hurricane Sandy (2012), and the Zika virus epidemic (2016). The context of scientific work done during such major disasters differs from that of routine science in several ways. Conditions change rapidly—wildfires spread swiftly, hurricanes intensify within hours, and aftershocks render buildings unsafe. In such scenarios, scientists must respond within tightly constrained time frames to collect data, do analyses, and provide findings that normally would involve months or years of work. Decision-makers need actionable information (such as risk assessments or mitigation techniques), yet scientific information is only one of many inputs to disaster response. Because communication networks may be severely disrupted, as occurred in Puerto Rico during Hurricane Maria (2017), delivery of science becomes even more difficult.

      Thus, science during crisis involves specialized actions such as heightened attention to coupled human-natural systems and cascading consequences. Important responses include rapid establishment of interdisciplinary scientific teams, local knowledge quickly integrated into scientific work, clear and compelling visualization of results, and concise communication to decision-makers, disaster-response specialists, and the public….

      In 2018, the United States experienced 14 weather and climate disasters with losses exceeding $1 billion each and a total of 247 lives lost. The summer wildfire season in the American West will soon again begin, followed by the start of the 2019 hurricane season in the Atlantic Ocean. There will be new disasters and science will play a critical role, informing and guiding decisions governing disaster response and recovery. Science during a crisis must be as effective as possible….

A Deadly Amphibian Disease Goes Global

[These excerpts are from an article by Dan A. Greenberg and Wendy J. Palen in the 29 March 2019 issue of Science.]

      …Three decades ago, biologists began to report the decline or extinction of amphibian populations around the world. This concerted global phenomenon spawned a proliferation of hypotheses, especially to explain “enigmatic” declines in remote places, where neither habitat loss nor direct exploitation was apparent. The suspected culprit was a species new to science: Batrachochytrium dendrobatidis (Bd), a fungal pathogen found in amphibian skin that belongs to the chytrids, a group of otherwise benign soil and water fungi.

      Bd is one of two species responsible for chytridiomycosis, the disease that appeared to be causing mass die-offs of amphibians. Bd is present in much of the world, but in the past century, a group of pathogenic strains originated in and spread from Asia; this spread coincided with the expansion of the global trade in live amphibians….Scientists have only been able to guess at the scale of damage caused by Bd to amphibian populations across the world, mostly because the baseline population data needed to decipher where and when species were lost through this disease have not been available.

      Scheele et al. overcame these data limitations to reconstruct the biodiversity impact of the global spread of pathogenic Bd. They compiled a detailed dataset of chytridiomycosis-associated declines from both published records and interviews with regional experts around the world. They estimate that chytridiomycosis has contributed to the decline or extinction of at least 501 amphibian species, earning Bd the inauspicious title of the most destructive pathogen for biodiversity ever recorded. The analysis suggests that of the 501 species, 90 are presumably extinct, with another 124 suffering severe declines. Many of these species belong to a few particularly susceptible frog lineages.

      Because ecology and life history shape the susceptibility of species to Bd, the authors used their dataset to test for commonalities in species loss across six continents. The results suggest that large-bodied, range-restricted, and aquatic-associated species are most at risk of severe declines from chytridiomycosis. This information is vital for identifying regions that have the right environmental conditions for Bd and many potential hosts and where pathogen introduction could thus trigger extinctions. It is particularly relevant given concern about the spread of pathogenic Bd strains to amphibian evolutionary hotspots where the pathogen is thought to be absent or rare, mainly oceanic islands like Papua New Guinea and Madagascar.

      The authors’ data also suggest that chytridiomycosis-associated declines peaked in the 1980s in many regions, just as scientists were beginning to note these enigmatic losses (3). They conclude that the severity of declines may be dwindling over time. It remains unclear whether this is a sign of hosts and pathogen achieving a stable coexistence or merely a lull after the first of many waves of outbreaks….

      Scientific understanding of Bd is still unfolding decades after its discovery, raising the ominous possibility that our ability to react to complex cases of biodiversity decline may always lag behind the emergence of threats. As studies such as that by Scheele et al. reconstruct what is already lost, there is a critical need to leverage these data into proactive management that considers multiple threats.

      Bd is but one more nail in the coffin for the state of amphibians globally. Habitat loss, exploitation, and climate change remain the main threats for thousands of species. These stressors often act in concert, but clear management actions exist to address at least some of them: protect habitat, limit collection of wild populations, and restrict trade. By contrast, there appear to be few viable management actions available once pathogenic Bd strains have established; if trade restrictions fail, then the only hope will be that evolutionary rescue can save at least some species. Moving forward, conservationists must carefully consider what management solutions are going to be most effective for each region, with habitat loss, climate change, and pathogen introduction all simultaneously threatening amphibian diversity….

Natural History Museums Face Their Own Past

[These excerpts are from an article by Gretchen Vogel in the 29 March 2019 issue of Science.]

      Step into the main hall of the Natural History Museum here and you’ll be greeted by a towering dinosaur skeleton, the tallest ever mounted. Nearly four stories high and twice as long as a school bus, the sauropod Giraffatitan brancai was the largest dinosaur known for more than a half-century. It has been a crowd magnet since it was first displayed in 1937.

      But the tidal flats Giraffatitan bestrode 150 million years ago weren’t in Europe. It lived in eastern Africa, today’s Tanzania, much of which was a German colony when the fossil was unearthed in the early 1900s. Now some Tanzanian politicians argue the fossils should return to Africa.

      Berlin’s Natural History Museum isn’t the only one facing calls for the return of fossils, which echo repatriation demands for human remains and cultural artifacts. Many specimens were collected under conditions considered unethical today, such as brutal colonial rule that ignored the ownership rights and knowledge of indigenous people….

      Although German paleontologists have traditionally gotten credit for discovering Giraffatitan, it was in fact local residents, who knew the bones and used them in religious rites, who guided the foreigners to the find….

      The Natural History Museum in London is now facing at least three repatriation requests for prominent specimens: Gibraltar has asked for two Neanderthal skulls; Chile has requested exquisitely preserved skin, fur, and bones from a 12,000-year-old giant ground sloth (Mylodon darwinii); and Zambia has asked for the Broken Hill skull, a famous early hominin about 300,000 years old that’s usually classified as Homo heidelbergensis.

      Many of these objects were sent to Europe without much thought given to who might own them. The first Neanderthal discovered, for example, was unearthed in 1848 by a British lieutenant stationed in what was then the U.K. military base of Gibraltar. Some 70 years later, British archaeologist Dorothy Garrod found a Neanderthal child’s skull in a Gibraltar cave and also sent it to England for study….

      Chile is making a similar claim for the Mylodon remains. European explorers found them in the 1890s and shipped them home without permission from local authorities. They have ended up distributed among half a dozen museums in Europe. Chile retains a few smaller Mylodon bones and dung, but the spectacularly preserved skin specimens are a key part of its natural heritage….

      Giraffatitan, for its part, will likely stay put. Rather than press for its return, the Tanzanian government has said it would prefer support for excavating new fossils, training local paleontologists, and strengthening its museums….

Integrating Tactics on Opioids

[These excerpts are from an editorial by Alan I. Leshner in the 29 March 2019 issue of Science.]

      Many parts of the world are in the middle of an opioid addiction crisis. It is an equal opportunity destroyer, affecting rich and poor, urban and rural people alike. The current epidemic differs from the long-standing heroin addiction problem in its broader demographic and in that it has resulted from inappropriate marketing and overprescription of pain medicines and the intrusion of powerful and lethal synthetic opioids. The magnitude of the crisis is also unprecedented: In the United States alone, more than 2 million people are estimated to have “opioid use disorder,” and 47,000 people died of an opioid overdose in 2017. Traditional strategies for dealing with addiction have had limited success. They have primarily used parallel tactics of “supply control” (limiting availability) and “demand control” (trying to prevent or reduce use), which might be considered as criminal justice and public health approaches. But this side-by-side approach may be counterproductive. Last week, the U.S. National Academies of Sciences, Engineering, and Medicine released a report on the state of medication-based treatments for opioid addiction. What is clear is that in addition to the need for more research, the nature of the epidemic requires new approaches that integrate public health, regulatory, and criminal justice strategies.

      Although additional studies would be useful in refining strategies, there already exists a body of evidence that should be used to improve current tactics. Take the case, for example, of addicted individuals who are accused of or have committed crimes. If these individuals are not treated while they are under criminal justice control, the rates of recidivism to both crime and drug use upon release are extremely high. If, however, they are treated while incarcerated and after release, recidivism rates fall substantially, as do post-release mortality rates. In addition, only 25 to 30% of prison and jail inmates are estimated to receive any drug abuse treatment, whereas over 50% suffer from substance use disorders. Even more dramatic, only 5% of justice-referred individuals receive any of the medications approved by the U.S. Food and Drug Administration (FDA) for their addiction, despite substantial evidence of their effectiveness. The conclusion from that effectiveness research is obvious: It is foolish and borders on being unethical to withhold medical treatment from people with opioid use disorder who are under criminal justice control. There are extensive data that could provide better guidance for refining prevention, prescription, and regulatory policies as well.

      The lack of successful strategies to address opioid addiction results in part from other barriers to progress on this epidemic. These include misunderstandings about the nature of addiction and the lack of medications used to treat it, as well as the insidious ideology and stigma that have long surrounded the issue of drug use and addiction. A large body of scientific evidence has established that addiction is a chronic disease of the brain that requires medical intervention. It is not a moral weakness or a failure of will….

      To make real progress in tackling the opioid epidemic, people on all sides of the issue will have to give up many of their long-held biases and beliefs. Progress will require more researchers working across fields and more informed public health, regulatory, and criminal justice officials, as well as members of the public, agreeing on the actual nature of the opioid crisis and science-based, integrated strategies to deal with it.

Big Floods Highlight Prediction Needs

[This excerpt is from a news article in the 29 March 2019 issue of Science.]

      Two major floods in March wrought destruction in the United States and Mozambique, highlighting that scientists are falling short in accurately predicting high water so communities around the world can prepare. Above-average winter rainfall helped swell the Missouri River to record levels, flooding thousands of homes and destroying stored crops. U.S. government forecasters predict more than 200 million Americans and 25 states may be affected by “unprecedented” flooding later this spring. The inundation has shocked many residents, which likely reflects shortcomings in U.S. floodplain maps that predict where large floods will strike….The maps don’t include many small streams, and many are dated, resulting in big underestimates of the population now at risk….41 million Americans live in the path of a once-in-a-century flood, rather than the 13 million indicated by existing maps. Meanwhile in Mozambique, hundreds have died since a cyclone’s torrential rain flooded more than 2000 square kilometers. Scientists recently reported progress in using high-resolution satellite data and enhanced computing power to create global flood models; these could improve emergency flood warnings and long-term planning in Mozambique and other developing countries that lack these tools.
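
[For context on “once-in-a-century”: a 100-year flood has a 1 percent chance of occurring in any given year, and that risk compounds over time. The calculation below is standard probability arithmetic, not a figure from the article.]

annual_prob = 0.01  # a "100-year" flood has a 1% chance in any single year
years = 30          # e.g., the life of a typical mortgage

risk_at_least_one = 1 - (1 - annual_prob) ** years
print(f"{risk_at_least_one:.0%}")  # ~26% chance of at least one such flood in 30 years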

Rapid Apple Decline Has Researchers Stumped

[These excerpts are from an article by Erik Stokstad in the 22 March 2019 issue of Science.]

      Six years ago, an unpleasant surprise greeted plant pathologist Kari Peter as she inspected a research orchard in Pennsylvania. Young apple trees were dying—and rapidly. At first, she suspected a common pathogen, but chemical treatments didn’t help. The next year, she began to hear reports of sudden deaths from across the United States and Canada. In North Carolina, up to 80% of orchards have shown suspicious symptoms….

      Now, as their trees prepare to blossom, North America’s apple producers are bracing for new losses, and scientists are probing possible causes. Apples are one of the continent’s most valuable fruit crops, worth some $4 billion last year in the United States alone. Growers are eager to understand whether rapid or sudden apple decline, as it is known, poses a serious new threat to the industry.

      Weather-related stress—drought and severe cold—could be an underlying cause….Early freezes are becoming more common across the eastern United States, for example. But that doesn’t appear to be the whole story, and scientists are examining an array of other factors, including pests, pathogens, and the growing use of high-density orchards….

      One common symptom in trees struck by rapid decline is dead tissue at the graft union, the part of the trunk where the fruit-bearing budwood of an apple variety is joined to hardy rootstock to create new trees. The union is vulnerable to late-season freezes because the tissue is the last to go dormant.

      A team led by plant pathologist Awais Khan of Cornell found dead tissue just below the graft union in trees from an affected orchard in New York. They suspect the cause was the extremely cold winter of 2014-15, which was followed by a drought. The dying tissue could have weakened the trees, allowing pests or pathogens to invade. But Khan and colleagues could not locate any known culprits in the affected trees or nearby soil….

      Observations from other apple-growing regions suggest extreme weather isn’t entirely to blame. In Canada, rapid decline “exploded” in British Columbia in the summer of 2018, after a string of unusually mild winters….These orchards are irrigated, suggesting drought was not a factor.

      Some scientists wonder whether certain rootstocks or exposure to herbicides might make trees more susceptible. Decline seems to be more common in trees with a popular rootstock, called M9, which can be slower to go dormant in fall….decline appears to be more common in orchards with fewer weeds, leading him to suspect herbicides play a role.

      Meanwhile, the search for new pathogens is accelerating….But getting an answer could take up to 5 years….

      In hard-hit North Carolina, researchers have found ambrosia beetles infesting the graft union of dying trees. These stubby insects burrow into weakened trees and cultivate fungus for their larvae to eat. Those fungi or stowaway fungi might harm the trees….

      Modern apple farming methods could also be a factor. Rapid decline is most common in dense orchards, which are increasingly planted because they are efficient to manage. Instead of about 250 trees per hectare, high-density orchards can have 1200 or more. Tightly packed trees must compete for nutrition and moisture. They also have shallow roots, which make them easier to trellis but more vulnerable to drought….

The Need to Catalyze Changes in High School Mathematics

[These excerpts are from an article by Robert Q. Berry III and Matthew R. Larson in the March 2019 issue of Phi Delta Kappan.]

      In the last few decades, policy makers and reformers have described the goal of boosting students’ college and career readiness (and, by extension, preparing those students to contribute to national defense and economic prosperity) as more or less the only purpose for learning high school mathematics, while other purposes, such as teaching students to think critically and participate actively in civic life, have been given short shrift….But in fact, mathematics can serve multiple purposes, and should be taught in ways that prepare students to “flourish as human beings….”

      Mathematics underlies much of the fabric of society, from polling and data mining in politics, to algorithms used in targeting advertisements, to complex mathematical models of financial instruments and policies that affect the lives of millions of people. Students should leave high school with the quantitative literacy and critical-thinking processes necessary to determine the validity of claims made in scientific, economic, social, and political arenas….Students should have an appreciation for the beauty and usefulness of mathematics and statistics. And students should see themselves as capable lifelong learners and confident doers of mathematics and statistics….

      Mathematics education at the high school level is part of a complex system of policies, traditions, and societal expectations. This system and its structures (school district policies, practices, and conditions) must be critically examined, changed, and improved. All stakeholders — school, district, and state administrators; instructional leaders and coaches; classroom teachers; counselors; curriculum and assessment developers; higher education administration and faculty; and policy makers at all levels — will need to be part of the process of reexamining long-standing beliefs, practices, and policies. This work is critical for all of us to undertake. It is also long overdue.

      Francis Su, past president of the Mathematical Association of America, has argued that answering the “why we teach mathematics” question is critical because the answer will have a strong influence on who we think should learn mathematics and how we think mathematics should be taught….The work of making this happen will not be easy because the challenges are real and long-standing. But we owe this effort not only to our students but also to ourselves as we work together to create and nurture the society we wish to inhabit.

Standards, Instructional Objectives, and Curriculum Design: A Complex Relationship

[These excerpts are from an article by David A. Gamson, Sarah Anne Eckert, and Jeremy Anderson in the March 2019 issue of Phi Delta Kappan.]

      Since the formation of the American republic and the nation’s first halting steps toward building state public school systems, educators and policy makers have debated which kinds of knowledge and skills schoolchildren should acquire and the levels of proficiency they should reach. Today, we often call these expectations standards, and the most recent effort at defining these types of academic objectives is the Common Core State Standards, an outgrowth of the standards-based reform movement that has been with us now for a quarter century, with no signs of dissipating. For instance, the most recent reauthorization of the Elementary and Secondary Education Act — the 2015 Every Student Succeeds Act — reinforces the movement by requiring “that all students in America be taught to high academic standards that will prepare them to succeed in college and careers.”

      The vocabulary used to describe what students should know and be able to do when they complete a lesson, a unit, or course has shifted over the past 100 years, but the underlying belief in the necessity of standards has not….

      The origins of the current standards-based reform movement are usually pegged to events of the late 1980s and early 1990s, when state legislatures and governors began to assert a new level of authority over the details of student learning. But while these developments marked an important shift in reform strategy, the obsession with standards in our national educational experience actually goes back much further. Over the past century, policy makers have made many efforts to specify the learning objectives toward which the K-12 curriculum should lead.

      Beginning in the 1890s, and with greater regularity than most scholars have recognized, prominent educational leaders dedicated themselves to identifying the core curricular material necessary for American students to thrive in and out of school. The first major national undertaking, sponsored by the National Education Association (NEA), was the Committee of Ten’s (1893) attempt to determine what knowledge high school pupils require.

      The Committee of Ten was successful in sketching out core curricular requirements, such as five weekly periods of Latin each year in grades 9-12, five weekly periods of chemistry in grade 11, and one year of natural history (botany, zoology, etc.) at some point during high school. At the same time, though, its recommendations grated against the zeitgeist of the times. Many progressives thought the notion of a common curriculum for all students was hopelessly anachronistic in an era when pupils could (or should) be scientifically sorted using IQ tests and divided into separate tracks of coursework. Moreover, young educators and reformers wanted to escape the rigidity that characterized their experience with the 19th-century curriculum….

      In virtually every period of American educational history, but especially in times of national crisis, critics have argued that American students were floundering academically due to intellectually feeble and flabby academic objectives….

      That assertion was repeated in 1958 after the Soviets launched Sputnik (and again in the 1980s and 1990s after states had been shocked into action by A Nation at Risk). Critics of public schools in the 1950s argued that schools had become “educational wastelands” and so often forsook fundamentals that “Johnny” couldn’t read. A Life magazine series (1958) argued that standards had become “shockingly low” and documented examples of failure within public schools, in part by contrasting the rigor of Soviet schools with the weak academics and leisure-oriented offerings of their American counterparts. Other developments of the time, including the work of educational scholars, reinforced the quest for strong and clear objectives….

      One vocal critic of an overreliance on objectives or other predetermined “ends” of education was Elliott Eisner (1967), who explained that “the outcomes of education are far more numerous and complex for educational objectives to encompass”….He believed that curricula driven by predetermined outcomes prohibited the development of “curiosity, inventiveness, and insight”….In other words, heavy reliance on objectives in curriculum design not only eliminated the potential for unanticipated learnings, but also had the potential to limit the development of creativity and critical thinking in students — which, to Eisner, were the true goals of schooling.

      …Behavioral objectives, argued Ebel, rarely match the real intent of instruction. Furthermore, simply stating an objective in behavioral terms does not inherently make it worth striving for. For example, it is quite difficult to specify an observable outcome when the intent of instruction is for students “to respond adaptively and effectively to unique future problem situations”…, and clearly stating that a child should be able to name all of the rivers in South America doesn’t necessarily make this a worthwhile skill….

      …In part because clearly articulating and measuring higher-level thinking skills was more difficult, only a handful of states adopted such standards for all students….

      For well over a century, educational leaders and policy makers have turned to standards as a kind of safety zone, a common ground whereupon (they hoped) all educators could meet and agree on clear, consistent, and rationally developed objectives. Americans find comfort in knowing precisely what students should know and be able to do, especially at times of national uncertainty or economic transition — whether represented by World War I, the Great Depression, the Space Race, or anxiety about global competitiveness. Even skeptics of standards acknowledge that some skills are amenable to concise articulation and demonstrable achievement and that some proficiencies can be measured; their main concern is with universal adoption of standards as the primary tool for curriculum development.

      Nevertheless, American educators have trod this terrain before, as it turns out, and we would do well to listen to the echoes of previous experiences. Several enduring dilemmas are posed by the persistent desire for measurable objectives, and policy makers and practitioners can benefit from the lessons gleaned by a close reading of the educational past. Aside from the controversies that will attend the development of any new learning goals, policy makers must acknowledge the constraints that standards are likely to place on classroom instruction.

      More specifically, state and local leaders will need to remain watchful of the riptide of confusion and consternation that will follow on the publication of large, potentially overwhelming, lists of objectives. Whether standards can be unpacked and translated into realistic and vibrant classroom activities will depend on the support and resources made available to teachers. Otherwise, the aims of academic disciplines and the knowledge embedded within core subject areas will all too easily be shattered into the thin, disconnected shards of minor objectives, which in turn will ultimately narrow any broader vision of education….

      The history of reform movements demonstrates that one reform regime is often pushed aside by a wave of novel educational innovations. Whether the standards-based reform movement can regularly upgrade itself enough to escape such a fate remains to be seen. Finally, then, there persists the danger that Bode once voiced about Bobbitt’s objectives: Standards that stagnate, drifting on unrevised or unevolved, will do more to perpetuate the status quo than to prepare students for the future. In an increasingly unequal society, that hazard is well worth avoiding.

Zombie Spiders

[These excerpts are from an article by Joshua Rapp Learn in the March 2019 issue of Scientific American.]

      Talk about a raw deal: deadly parasitic wasps ruin the lives of adolescent spiders by taking over their minds, forcing them to become hermits and then eating them alive.

      A remarkable species of social spider lives in parts of Latin America, in colonies of thousands. Anelosimus eximius spiders dwell in basket-shaped webs up to 25 feet wide attached to vegetation near the jungle floor, where they protect their eggs and raise broods cooperatively. A colony works together to take down much larger prey, such as grasshoppers, which sometimes fall into a web after blundering into silk lines that stick out of it vertically….

      But Fernandez-Fournier recently observed a wasp species—not previously named or described in the scientific literature—that can bend these social spiders to its will in an even more nightmarish way. This parasitic puppet master camps out beside the web, apparently waiting for a young spider to stray from its colony. The wasp may prefer juveniles because of their softer shells and “less feisty” nature….

      Scientists do not know how a wasp larva ends up on the spider—but once there it starts feeding on the arachnid’s abdomen. As the larva grows, it starts to control the spider’s brain, inducing it to leave the safety of its colony. Then the young spider weaves a ball of silk that seals it off from the outside world. The larva completes its life cycle by eating the rest of the spider, using the conveniently surrounding web to build its own cocoon and pupate into an adult wasp.

      Fernandez-Fournier believes the wasp larvae most likely release a chemical that activates specific genes in their hosts, triggering antisocial behavior. Other related spiders are less social, leaving their colonies when they are young….the mind-controlling wasp larvae may be tapping into this latent genetic pathway. The spiders may have evolved toward social living for protection from predators, but the parasites could be pulling the genetic strings in their favor….

Feverish Planet

[These excerpts are from an article by Tanya Lewis in the March 2019 issue of Scientific American.]

      A devastating heat wave swept across Europe in 2003, killing tens of thousands of people, scientists estimate. Many were elderly, with limited mobility, and some already suffered from chronic diseases. But climate change is making such extreme weather more common—and the effects will not be limited to the old and sick. Warming temperatures do not only threaten lives directly. They also cause billions of hours of lost labor, enhance conditions for the spread of infectious diseases and reduce crop yields, according to a recent report….

      The report found that millions of people worldwide are vulnerable to heat-related disease and death and that populations in Europe and the eastern Mediterranean are especially susceptible—most likely because they have more elderly people living in urban areas. Adults older than 65 are particularly at risk, as are those with chronic illnesses such as heart disease or diabetes. Places where humans tend to live are exposed to an average temperature change that is more than twice the global average (0.8 versus 0.3 degree Celsius)….

      Sweltering temperatures also affect productivity. A staggering 153 billion hours of labor—80 percent of them in agriculture—were lost to excessive heat in 2017, the new report found, with the most vulnerable areas being in India, Southeast Asia, sub-Saharan Africa and South America. The first stage of heat’s impact is discomfort….But there comes a point at which it is simply too hot for the body to function. For example, sweating heavily without replenishing water can result in chronic kidney disease….News reports have documented farm workers in Central America dying from kidney problems after years of working in the hot fields. Richer countries such as the U.S. may avoid the worst effects because of better access to drinking water and, in the case of indoor work, air-conditioning. But these solutions can be expensive….

      Climate change also threatens food security. Our planet still produces more than enough food for the world, but 30 countries have seen crop yields decline as a result of extreme weather….

      Among the biggest steps countries can take to mitigate these health effects are phasing out coal-fired power and shifting to greener forms of transportation….Electric vehicles are making inroads in places…and “active” transport, such as walking or cycling, is also important….

The Weather Amplifier

[These excerpts are from an article by Michael E. Mann in the March 2019 issue of Scientific American.]

      Consider the following summer extremes: In 2003 Europe’s worst heat wave in history killed more than 30,000 citizens. In 2010 wildfires in Russia and floods in Pakistan caused unprecedented damage and death. The 2011 U.S. heat wave and drought caused ranchers in Oklahoma to lose a quarter of their cattle. The 2016 Alberta wildfires constituted the costliest disaster in Canadian history. And the summer of 2018 in the U.S. was notorious: temperatures flared above 100 degrees Fahrenheit for days on end across the desert Southwest, heavy rains and floods inundated the mid-Atlantic states, and California had a shocking wildfire season. Extreme heat waves, floods and wildfires raged across Europe and Asia, too.

      …All these events had a striking feature in common: a very unusual pattern in the jet stream. The jet stream is a narrow band of strong wind that blows west to east around the Northern Hemisphere, generally along the U.S.-Canada border, continuing across the Atlantic Ocean, Europe and Asia. The band is sometimes fairly straight, but it can take on big bends—shaped like an S lying on its side. It typically curls northward from the Pacific Ocean into western Canada, then turns southward across the U.S. Midwest, then back up toward Nova Scotia. This shape usually proceeds west to east across the U.S. in a few days, bringing warm air north or cool air south and creating areas of rain or snow, especially near the bends. The jet stream controls our daily weather.

      During the extreme events I noted, the jet stream acted strangely. The bends went exceptionally far north and south, and they stalled—they did not progress eastward. The larger these bends, the more punishing the weather gets near the northern peak and southern trough. And when they stall—as they did over the U.S. in the summer of 2018—those regions can receive heavy rain day after day or get baked by the sun day after day. Record floods, droughts, heat waves and wildfires occur….

Don’t Let Bots Pull the Trigger

[These excerpts are from an editorial by the editors of the March 2019 issue of Scientific American.]

      The killer machines are coming. Robotic weapons that target and destroy without human supervision are poised to start a revolution in warfare comparable to the invention of gunpowder or the atomic bomb. The prospect poses a dire threat to civilians—and could lead to some of the bleakest scenarios in which artificial intelligence runs amok. A prohibition on killer robots, akin to bans on chemical and biological weapons, is badly needed. But some major military powers oppose it.

      The robots are no technophobic fantasy. In July 2017, for example, Russia’s Kalashnikov Group announced that it had begun development of a camera-equipped 7.62-millimeter machine gun that uses a neural network to make “shoot/no-shoot” decisions. An entire generation of self-controlled armaments, including drones, ships and tanks, is edging toward varying levels of autonomous operation. The U.S. appears to hold a lead in R&D on autonomous systems—with $18 billion slated for investment from 2016 to 2020. But other countries with substantial arms industries are also making their own investments.

      …The inability to read behavioral subtleties to distinguish civilian from combatant or friend versus foe should call into question whether AIs should replace GIs in a foreseeable future mission. A killer robot of any kind would be a trained assassin, not unlike Arnold Schwarzenegger in The Terminator. After the battle is done, moreover, who would be held responsible when a machine does the killing? The robot? Its owner? Its maker?

      With all these drawbacks, a fully autonomous robot fashioned using near-term technology could create a novel threat wielded by smaller nations or terrorists with scant expertise or financial resources. Swarms of tiny, weaponized drones, perhaps even made using 3-D printers, could wreak havoc in densely populated areas. Prototypes are already being tested: the U.S. Department of Defense demonstrated a nonweaponized swarm of more than 100 micro drones in 2016….

      …Because of opposition from the U.S., Russia and a few others, the discussions have not advanced to the stage of drafting formal language for a ban. The U.S., for one, has argued that its policy already stipulates that military personnel retain control over autonomous weapons and that premature regulation could put a damper on vital AI research.

      A ban need not be overly restrictive. The Campaign to Stop Killer Robots, a coalition of 89 nongovernmental organizations from 50 countries that has pressed for such a prohibition, emphasizes that it would be limited to offensive weaponry and not extend to antimissile and other defensive systems that automatically fire in response to an incoming warhead….

      Since it was first presented at the International Joint Conference on Artificial Intelligence in Stockholm in July, 244 organizations and 3,187 individuals have signed a pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” The rationale for making such a pledge was that laws had yet to be passed to bar killer robots. Without such a legal framework, the day may soon come when an algorithm makes the fateful decision to take a human life.

Nowhere to Hide

[These excerpts are from an article by Amy Yee in the 15 March 2019 issue of Science.]

      The small pangolin tucked its head toward its belly and curled its tail around its body. Clad in large scales, it resembled a pine cone. After a moment, the creature—a mammal, despite appearances—uncoiled and raised its slender head. Currantlike eyes blinked and a pointy nose trembled inquisitively. Its feet had tender pink soles tipped with long, curved claws, but it did not scratch or fight.

      This animal, a white-bellied pangolin (Phataginus tricuspis), was lucky. It had most likely been illegally caught in a nearby forest not long ago; a tip had led the Uganda Wildlife Authority (UWA) to rescue it. One of its brown scales had been ripped off, perhaps for use in a local witchcraft remedy. But after a long, jarring car ride on bumpy dirt roads, the pangolin was being released back into the wild in a national park….Weighing just 2.5 kilograms, the pangolin heaved as if panting.

      The rescue and release was part of a growing global effort to save pangolins, which face a bleak future as the world’s most poached and trafficked animal. They are in demand for both their meat and their scales, believed in some Asian countries to have medicinal properties. The past 2 months have seen record-setting seizures of pangolin body parts both in Asia and Africa….

      In 2014, the International Union for Conservation of Nature (IUCN) classified all eight pangolin species—four of which live in Asia and four in Africa—as threatened with extinction. And in 2017, the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) banned international trade in pangolins. Several groups and government agencies, including UWA, are now intensifying conservation efforts both in Asia and Africa.

      Yet it’s an uphill battle. Large endangered wildlife, such as elephants and rhinos, attract tourist dollars, giving policymakers an incentive to save them. But pangolins are small, shy, and believed to be mostly nocturnal. Patchy understanding of their population size, breeding behavior, migratory patterns, and physiology also hampers conservation efforts….

      Pangolins are unique among mammals because of their scales, which are made of keratin, the stuff of hair and fingernails. They live on a diet of ants and termites—hence their nickname “scaly anteater.” The scales are their major defense mechanism; when faced with danger, they curl up in an immobile, hard ball. Sadly, that also makes them easy for hunters and poachers to catch.

      Pangolins have long been on the menu for people in Asia and Africa. According to a 2018 study, 400,000 to 2.7 million pangolins are hunted annually in the forests of six Central African countries for bushmeat. (The range is so wide because researchers used three different ways of estimating the harvest, says the paper's first author, Daniel Ingram of University College London, but the lower figure is more likely.)

      Demand for pangolin scales seems to be surging, particularly in China and Vietnam, where they are believed to cure ailments ranging from poor circulation and skin diseases to asthma. With Asian species in sharp decline, poachers are increasingly turning to Africa for scales, adding to the bushmeat toll….

      Pangolins are slow-breeding and give birth to one offspring a year at most, which means depleted populations are slow to bounce back. They are easily stressed and tend to die in captivity, which makes studying their physiology and behavior difficult. So far, efforts to breed them have largely failed. Biologists also know little about their movements and population sizes, which could guide efforts to protect them….

The Meat without the Cow

[These excerpts are from an article by Niall Firth in the March/April 2019 issue of Technology Review.]

      …A little over five years later, startups around the world are racing to produce lab-grown meat that tastes as good as the traditional kind and costs about as much.

      They’re already playing catch-up: “plant-based” meat, made of a mix of non-animal products that mimic the taste and texture of real meat, is already on the market. The biggest name in this area: Impossible Foods, whose faux meat sells in more than 5,000 restaurants and fast food chains in the US and Asia and should be in supermarkets later this year. Impossible’s research team of more than 100 scientists and engineers uses techniques such as gas chromatography and mass spectrometry to identify the volatile molecules released when meat is cooked.

      The key to their particular formula is the oxygen-carrying molecule heme, which contains iron that gives meat its color and metallic tang. Instead of using meat, Impossible uses genetically modified yeast to make a version of heme that is found in the roots of certain plants.

      Impossible has a few competitors, particularly Beyond Meat, which uses pea protein (among other ingredients) to replicate ground beef. Its product is sold in supermarket chains like Tesco in the UK and Whole Foods in the US, alongside real meat and chicken. Both Impossible and Beyond released new, improved versions of their burgers in mid-January.

      In contrast, none of the lab-grown-meat start-ups has yet announced a launch date for its first commercial product. But when that happens—some claim as early as the end of this year—the lab-grown approach could turn the traditional meat industry on its head.

      …The answer is that our meat consumption habits are, in a very literal sense, not sustainable. Livestock raised for food already contribute about 15% of the world’s greenhouse-gas emissions. (You may have heard that if cows were a country, it would be the world’s third biggest emitter.) A quarter of the planet’s ice-free land is used to graze them, and a third of all cropland is used to grow food for them. A growing population will make things worse. It’s estimated that with the population expected to rise to 10 billion, humans will eat 70% more meat by 2050. Greenhouse gases from food production will rise by as much as 92%.

      In January a commission of 37 scientists reported in The Lancet that meat’s damaging effects not only on the environment but also on our health make it “a global risk to people and the planet.” In October 2018 a study in Nature found that we will need to change our diets significantly if we’re not to irreparably wreck our planet’s natural resources….

      The good news is that a growing number of people now seem to be rethinking what they eat. A recent report from Nielsen found that sales of plant-based foods intended to replace animal products were up 20% in 2018 compared with a year earlier. Veganism, which eschews not just meat but products that come from greenhouse-gas-emitting dairy livestock too, is now considered relatively mainstream.

      That doesn’t necessarily equate to more vegans. A recent Gallup poll found that the number of people in the US who say they are vegan has barely changed since 2012 and stands at around just 3%. Regardless, Americans are eating less meat, even if they're not cutting it out altogether….

      The traditional meat industry doesn’t see it that way. The National Cattlemen's Beef Association in the US dismissively dubs these new approaches “fake meat.” In August 2018, Missouri enacted a law that bans labeling any such alternative products as meat. Only food that has been “derived from harvested production of livestock or poultry” can have the word “meat” on the label in any form. Breaking that law could lead to a fine or even a year’s jail time.

      The alternative-meat industry is fighting back….to get the law overturned….

      …But the Missouri battle is just the start of a struggle that could last years. In February 2018, the US Cattlemen's Association launched a petition that calls on the US Department of Agriculture (USDA) to enact a similar federal law.

      Traditional meat-industry groups have also been very vocal on how cultured meat and plant-based meats are to be regulated….

      But there are other issues, says Datar, of New Harvest. She says we still don’t understand the fundamental processes well enough. While we have quite a deep understanding of animals used in medical research, such as lab mice, our knowledge of agricultural animals at a cellular level is rather thin….

      Lab-grown meat has another—more tangible—problem. Growing muscle cells from scratch creates pure meat tissue, but the result lacks a vital component of any burger or steak: fat. Fat is what gives meat its flavor and moisture, and its texture is hard to replicate. Plant-based meats are already getting around the problem—to some extent—by using shear cell technology that forces the plant protein mixture into layers to produce a fibrous meat-like texture. But if you want to create a meat-free “steak” from scratch, some more work needs to be done. Cultured meat will need a way to grow fat cells and somehow mesh them with the muscle cells for the end result to be palatable. That has proved tricky so far, which is the main reason that first burger was so mouth-puckeringly dry….

      As it stands, lab-grown meat is not quite as virtuous as you might think. While its greenhouse emissions are below those associated with the biggest villain, beef, it is more polluting than chicken or the plant-based alternatives, because of the energy currently required to produce it. A World Economic Forum white paper on the impact of alternative meats found that lab-grown meat as it is made now would produce only about 7% less in greenhouse-gas emissions than beef. Other replacements, such as tofu or plants, produced reductions of up to 25%....

      Expecting the whole world to go vegan is unrealistic. But a report in Nature in October 2018 suggested that if everyone moved to the flexitarian lifestyle (eating mostly vegetarian but with a little poultry and fish and no more than one portion of red meat a week), we could halve the greenhouse-gas emissions from food production and also reduce other harmful effects of the meat industry, such as the overuse of fertilizers and the waste of fresh water and land. (It could also reduce premature mortality by about 20%, according to a study in The Lancet in October, thanks to fewer deaths from ailments such as coronary heart disease, stroke, and cancer.)…

Is Carbon Removal Crazy or Critical?

[These excerpts are from an article by Spencer Lowell in the March/April 2019 issue of Technology Review.]

      …Lackner…has now been working on the problem for two decades. In 1999, as a particle physicist at Los Alamos National Laboratory, he wrote the first scientific paper exploring the feasibility of combating climate change by pulling carbon dioxide out of the air. His was a lonely voice for years. But a growing crowd has come around to his thinking as the world struggles to slash climate emissions fast enough to prevent catastrophic warming. Lackner’s work has helped inspire a handful of direct-air-capture startups, including one of his own, and a growing body of scientific literature….

      No one, including Lackner, really knows whether the scheme will work. The chemistry is easy enough. But can we really construct anywhere near enough carbon removal machines to make a dent in climate change? Who will pay for them? And what are we going to do with all the carbon dioxide they collect?

      Lackner readily acknowledges the unknowns but believes that the cheaper the process gets, the more feasible it becomes….

      The concentration of carbon dioxide in the atmosphere is approaching 410 parts per million. That has already driven global temperatures nearly 1 °C above pre-industrial levels and intensified droughts, wildfires, and other natural disasters. Those dangers will only compound as emissions continue to rise.

      The latest assessment from the UN’s Intergovernmental Panel on Climate Change found that there’s no way to limit or return global warming to 1.5 °C without removing somewhere between 100 billion and a trillion metric tons of carbon dioxide by the end of the century. On the high end, that means reversing nearly three decades of global emissions at the current rate.
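
      [To see where “nearly three decades” comes from, here is a back-of-envelope check in Python; the current-emissions figure of roughly 37 billion metric tons of carbon dioxide per year is an assumption for illustration, not a number given in the article.]

      # Back-of-envelope check of the "three decades" claim.
      # The annual-emissions figure is an assumption, not from the article.
      current_emissions_tons_per_year = 37e9  # ~37 billion metric tons of CO2 per year
      high_end_removal_tons = 1e12            # one trillion metric tons by 2100

      years_of_current_emissions = high_end_removal_tons / current_emissions_tons_per_year
      print(round(years_of_current_emissions))  # -> 27, i.e., nearly three decades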

      There are a handful of ways to draw carbon dioxide out of the atmosphere. They include planting lots of trees, restoring grasslands and other areas that naturally hold carbon in soils, and using carbon dioxide-sucking plants and other forms of biomass as a fuel source but capturing any emissions when they’re used (a process known as bio-energy with carbon capture and storage).

      But a report from the US National Academies in October found that these approaches alone probably won’t be enough to prevent 2 °C of warming—at least, not if we want to eat. That's because the amount of land required to capture that much carbon dioxide would come at the cost of a huge amount of agricultural food production.

      The appeal of direct-air-capture devices like the ones Lackner and others are developing is that they can suck out the same amount of carbon dioxide on far less land. The big problem is that right now it’s much cheaper to plant a tree. At the current cost of around $600 per ton, capturing a trillion tons would run some $600 trillion, more than seven times the world's annual GDP….
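
      [The cost arithmetic is easy to reproduce. A minimal sketch follows; the world-GDP value of roughly $80 trillion per year is an assumption for illustration, while the cost and tonnage come from the article.]

      # Reproducing the direct-air-capture cost estimate.
      cost_per_ton_usd = 600   # current cost of capture, per metric ton (from the article)
      tons_to_capture = 1e12   # one trillion metric tons (from the article)
      world_gdp_usd = 80e12    # ~$80 trillion per year (assumption)

      total_cost = cost_per_ton_usd * tons_to_capture
      print(total_cost / 1e12)           # -> 600.0 (trillion dollars)
      print(total_cost / world_gdp_usd)  # -> 7.5, more than seven times annual GDP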

      However, selling carbon dioxide isn’t an easy proposition.

      Global demand is relatively small: on the order of a few hundred million tons per year, a fraction of the tens of billions that eventually need to be removed annually, according to the National Academies study. Moreover, most of that demand is for enhanced oil recovery, a technique that forces compressed carbon dioxide into wells to free up the last drips of oil, which only makes the climate problem worse….

How We’ll Invent the Future

[These excerpts are from an article by Bill Gates in the March/April 2019 issue of Technology Review.]

      …My mind went to—of all things—the plow. Plows are an excellent embodiment of the history of innovation. Humans have been using them since 4000 BCE, when Mesopotamian farmers aerated soil with sharpened sticks. We’ve been slowly tinkering with and improving them ever since, and today’s plows are technological marvels.

      But what exactly is the purpose of a plow? It’s a tool that creates more: more seeds planted, more crops harvested, more food to go around. In places where nutrition is hard to come by, it’s no exaggeration to say that a plow gives people more years of life. The plow—like many technologies, both ancient and modern—is about creating more of something and doing it more efficiently, so that more people can benefit.

      Contrast that with lab-grown meat, one of the innovations I picked for this year’s 10 Breakthrough Technologies list. Growing animal protein in a lab isn’t about feeding more people. There’s enough livestock to feed the world already, even as demand for meat goes up. Next-generation protein isn’t about creating more — it’s about making meat better. It lets us provide for a growing and wealthier world without contributing to deforestation or emitting methane. It also allows us to enjoy hamburgers without killing any animals.

      Put another way, the plow improves our quantity of life, and lab-grown meat improves our quality of life. For most of human history, we’ve put most of our innovative capacity into the former. And our efforts have paid off: worldwide life expectancy rose from 34 years in 1913 to 60 in 1973 and has reached 71 today….

      To be clear, I don’t think humanity will stop trying to extend life spans anytime soon. We’re still far from a world where everyone everywhere lives to old age in perfect health, and it’s going to take a lot of innovation to get us there. Plus, “quantity of life” and “quality of life” are not mutually exclusive. A malaria vaccine would both save lives and make life better for children who might otherwise have been left with developmental delays from the disease.

      We’ve reached a point where we’re tackling both ideas at once, and that’s what makes this moment in history so interesting….

      The 30 minutes you used to spend reading e-mail could be spent doing other things. I know some people would use that time to get more work done—but I hope most would use it for pursuits like connecting with a friend over coffee, helping your child with homework, or even volunteering in your community.

      That, I think, is a future worth working toward.

Sanitation without Sewers

[These excerpts are from an article by Erin Winick in the March/April 2019 issue of Technology Review.]

      About 2.3 billion people don’t have good sanitation. The lack of proper toilets encourages people to dump fecal matter into nearby ponds and streams, spreading bacteria, viruses, and parasites that can cause diarrhea and cholera. Diarrhea causes one in nine child deaths worldwide.

      Now researchers are working to build a new kind of toilet that’s cheap enough for the developing world and can not only dispose of waste but treat it as well….

      Most of the prototypes are self-contained and don’t need sewers, but they look like traditional toilets housed in small buildings or storage containers. The NEWgenerator toilet, designed at the University of South Florida, filters out pollutants with an anaerobic membrane, which has pores smaller than bacteria and viruses. Another project, from Connecticut-based Biomass Controls, is a refinery the size of a shipping container; it heats the waste to produce a carbon-rich material that can, among other things, fertilize soil….

      So the challenge now is to make these toilets cheaper and more adaptable to communities of different sizes….

The Cow-free Burger

[These excerpts are from an article by Markkus Rovito in the March/April 2019 issue of Technology Review.]

      The UN expects the world to have 9.8 billion people by 2050. And those people are getting richer. Neither trend bodes well for climate change—especially because as people escape poverty, they tend to eat more meat.

      By that date, according to the predictions, humans will consume 70% more meat than they did in 2005. And it turns out that raising animals for human consumption is among the worst things we do to the environment.

      Depending on the animal, producing a pound of meat protein with Western methods requires 4 to 25 times more water, 6 to 17 times more land, and 6 to 20 times more fossil fuels than producing a pound of plant protein.

      The problem is that people aren’t likely to stop eating meat anytime soon. Which means lab-grown and plant-based alternatives might be the best way to limit the destruction.

      Making lab-grown meat involves extracting muscle tissue from animals and growing it in bioreactors. The end product looks much like what you’d get from an animal, although researchers are still working on the taste….One drawback of lab-grown meat is that the environmental benefits are still sketchy at best—a recent World Economic Forum report says the emissions from lab-grown meat would be only around 7% less than emissions from beef production.

      The better environmental case can be made for plant-based meats from companies like Beyond Meat and Impossible Foods (Bill Gates is an investor in both companies), which use pea proteins, soy, wheat, potatoes, and plant oils to mimic the texture and taste of animal meat….a Beyond Meat patty would probably generate 90% less greenhouse-gas emissions than a conventional burger made from a cow.

Sun in a Box

[These excerpts are from an article by Jennifer Chu in the March/April 2019 issue of MIT News.]

      MIT engineers have come up with a conceptual design for a system that could store renewable energy and deliver it back into an electric grid on demand. Such a system could power a small city not just when the sun is up or the wind is high, but around the clock.

      The new design stores heat generated by excess electricity from solar or wind power in large tanks of molten silicon, and then converts the light from the glowing metal back into electricity when it’s needed. The researchers expect it would be vastly more affordable than lithium-ion storage systems.

      The system consists of two large, heavily insulated, 10-meter-wide tanks made from graphite. One is filled with liquid silicon, kept at a “cold” temperature of almost 3,500 °F (1,927 °C). A bank of tubes, exposed to heating elements, then connects this cold tank to the second, “hot” tank. When electricity from the town’s solar cells comes into the system, this energy is converted to heat in the heating elements. Meanwhile, liquid silicon is pumped out of the cold tank, collects heat from the heating elements as it passes through the tubes, and enters the hot tank, where it is now stored at a much higher temperature of about 4,300 °F (2,371 °C).

      When electricity is needed (say, after the sun has set), the hot liquid silicon—so hot that it’s glowing white—is pumped through an array of tubes that emit that light. Specialized solar cells, known as multi-junction photovoltaics, then turn that light into electricity, which can be supplied to the town’s grid. The now-cooled silicon can be pumped back into the cold tank until the next round of storage—so the system effectively acts as a large rechargeable battery….
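
      [The storage principle is ordinary sensible heat: the energy banked scales with the silicon’s mass, its specific heat, and the temperature swing between the two tanks. The sketch below estimates the capacity implied by the article’s temperatures; the density, specific-heat, and spherical-tank values are rough assumptions for illustration, not MIT’s figures, and converting the stored heat back to electricity through the photovoltaics would deliver substantially less.]

      import math

      # Rough sensible-heat estimate for one 10-meter tank of liquid silicon.
      # The physical constants below are approximate assumptions, not from the article.
      diameter_m = 10.0              # tank width given in the article
      density_kg_m3 = 2570.0         # liquid silicon, approximate
      specific_heat_j_kg_k = 1000.0  # liquid silicon, approximate
      t_cold_c = 1927.0              # "cold" tank temperature from the article
      t_hot_c = 2371.0               # "hot" tank temperature from the article

      volume_m3 = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3  # treat the tank as a sphere
      mass_kg = density_kg_m3 * volume_m3
      energy_j = mass_kg * specific_heat_j_kg_k * (t_hot_c - t_cold_c)  # E = m * c * dT
      print(round(energy_j / 3.6e9))  # -> ~166 MWh of heat per charge-discharge cycle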

      Henry says the system could be sited anywhere, regardless of a location's landscape. This is in contrast to pumped hydroelectric systems, currently the cheapest form of energy storage, which require locations that can accommodate large waterfalls and dams to store energy from falling water….

The Climate Optimist

[These excerpts are from an article by Amanda Schaffer in the March/April 2019 issue of MIT News.]

      …the data she gathered that night and over two months in Antarctica would change our understanding of how chlorofluorocarbons, released into the atmosphere from refrigerants and a range of other consumer products, damage the ozone layer, which helps protect Earth from ultraviolet radiation. In response to this and other scientific work, an international agreement limited and then banned the use of CFCs. Thirty years later, [Susan] Solomon was also the first to clearly demonstrate that, thanks to this change, the Antarctic ozone hole has slowly begun to heal….

      Solomon is quick to acknowledge that climate change poses tougher political challenges than ozone depletion, because fossil-fuel consumption is so integral to the world economy. Still, she argues that by studying past environmental successes, her students will come to understand “what is actually going to have to happen to make progress….”

      The spectral data that Solomon and her team gathered provided support for the theory that chlorine originally locked up in chlorofluorocarbons is released by way of a surface reaction between hydrochloric acid and chlorine nitrate on polar stratospheric clouds. That chlorine in turn reacts with ozone to produce chlorine monoxide and goes on to deplete the ozone. Further buttressing her theory, Solomon’s expedition found that stratospheric concentrations of hydrochloric acid were low and those of chlorine monoxide and chlorine dioxide were high. (They also used balloons to measure concentrations of ozone directly and found that it, too, was severely depleted in the stratosphere, as expected.)….

      In her role on the IPCC report, Solomon worked with some 150 of the best climate scientists in the world, helping to synthesize current research related to climate change. Finding areas of consensus required a mix of science and diplomacy….

      In 2007, the group produced a textbook-size tome, which Solomon keeps above her desk, and which garnered attention for stating, for the first time, that “warming is unequivocal.” (Solomon herself came up with that wording, based on the researchers' thorough examination of the research.) It also said that most of the warming of the past 50 years was “very likely due to human activity.” (The report defined “very likely” as meaning that the conclusion was 90% certain.) The year the report was published, the IPCC and former vice president Al Gore shared the Nobel Peace Prize for their efforts….

      In the years that followed, Solomon continued to work on climate change. In 2009, she published a paper showing that some of the effects of carbon dioxide, which takes a long time to dissipate from the atmosphere, are probably irreversible. Other researchers had conducted a set of experiments modeling where the carbon currently in the atmosphere would end up if emissions came to a halt. After examining this work, Solomon noticed that in all the models, Earth’s surface temperature did not significantly decrease for a thousand years, even in the absence of new carbon dioxide emissions. “On a human time scale, a thousand years is pretty close to irreversible,” she says. “I realized that that story had not been told clearly enough in a simple paper. I also wanted to understand why it was true.”

      She ultimately concluded that the key factor was the ocean, which is slow to warm but stores heat—and therefore warms the atmosphere—for long periods of time. In addition, her team found that even without further emissions of carbon dioxide, sea levels would also continue to rise for hundreds of years. And her lab at MIT would later reach a similar conclusion for even short-lived greenhouse gases like methane. Her research raises questions about whether vulnerable island nations and coastal populations can still be saved from the consequences of climate change….

      In other climate-related work, Solomon and her team explored how volcanic eruptions affect ozone depletion. In 2011, they found that even eruptions that appear to cause damage only at the local level can still fling enough sulfur into the stratosphere to put it into circulation there, where it can have global consequences….

      Recent news related to global climate change has been bleak—from the rapid rise in sea levels to catastrophic wildfires in California. The US government’s fourth National Climate Assessment, released by the Trump administration on the Friday of Thanksgiving weekend in 2018, warned that if significant steps are not taken to reduce emissions, the world will experience ever more dangerous heat waves, fires, floods, and hurricanes. These events are likely to cause crop failures, more frequent wildfires, and severe disruptions in supply chains and trade. Together, the changes could reduce the GDP of the United States alone by 10% by 2100.

      But even in the face of dispiriting data, Solomon continues to focus on figuring out how to fix things. She tells her students that making headway on climate change depends on getting people to understand how the problem will affect them personally—and to believe there are practical solutions. And she’s hopeful about the progress she’s seeing.

      “Climate change won’t be solved until alternative energies become more widely adopted, but they are already becoming adopted at an incredibly astonishing pace,” she says. “Admittedly, it may not be as fast as it would need to be to hold temperatures to one and a half degrees—that would really be Herculean. But we can bend the warming curve.”

      Solomon notes that even though solar and wind power are not yet competitive in many parts of the United States, in many other places, clean-energy alternatives are becoming cheaper than fossil fuels. Certainly, there's a good deal more work to be done. Innovation must continue to bring down the costs of clean energy and improve the battery technology needed to store it. And the challenge of converting existing infrastructure to cleaner power sources will be massive….

More Screen Time, Lower Well-Being

[This brief newsworthy item appeared in the February 2019 issue of Phi Delta Kappan.]

      A study published in Preventive Medicine Reports finds that as screen time goes up among children ages 2 to 17, psychological well-being goes down.

      Jean Twenge and W. Keith Campbell surveyed caregivers about the amount of time children in their care spent watching TV and videos, playing video games, and using computers and mobile devices for activities other than schoolwork. The survey also asked caregivers to respond to questions about children's emotional stability, relationships with caregivers, self-control, diagnoses of mood disorders, and mental health treatments.

      Screen time averaged 3.2 hours per day, with the amount of time going up as children got older. In general, young people spending up to an hour a day with screens had similar results on measures of psychological well-being as those who did not spend time with screens. However, once screen time exceeded an hour, and increasingly so as it rose beyond that, young people were rated lower on their ability to stay calm, finish tasks, learn new things, make friends, and avoid conflict with caregivers. In addition, children with higher levels of screen time were more likely to be diagnosed with anxiety and depression. The associations were largest among adolescents.

      The researchers note, however, that it is not clear from their study whether screen time leads to lower levels of well-being or whether lower levels of well-being lead to more screen time.

The Myth of de facto Segregation

[These excerpts are from an article by Richard Rothstein in the February 2019 issue of Phi Delta Kappan.]

      For nearly 30 years, the nation’s education policy makers have proceeded from the assumption that disadvantaged children would have much greater success in school if not for educators’ low expectations of them. In theory, more regular achievement testing and tougher accountability practices would force teachers to pursue higher academic standards for all children, resulting in improved instruction and greater student proficiency.

      However, there never was any evidence to support this theory, and even its most eager proponents have come to realize that it was flawed all along. In fact, there are a host of reasons why disadvantaged children often struggle to succeed academically. Undeniably, one is that some schools in low-income neighborhoods fall short in their traditional instructional roles. Another is that many schools have failed to embrace effective out-of-classroom programs — such as health clinics or early childhood centers — that might enable students to be more successful in the classroom. Perhaps most important, however, is the influence of children’s out-of-school social and economic conditions, which predict academic outcomes to a far greater extent than what goes on in the classroom. Researchers have long known that only about one-third of the Black-White academic achievement gap results from variations in school quality. The rest stems from social and economic factors that render some children unable to take full advantage of what even the highest-quality schools can offer.

      Racial segregation exacerbates achievement gaps between Black and White children because it concentrates students with the most serious social and economic challenges in the same classrooms and schools. Consider childhood asthma, for example: Largely because of poorly maintained housing and environmental pollution, urban African-American children have asthma at as much as four times the rate of White middle-class children. Asthmatic children often come to school drowsy and inattentive from sleeplessness, or they don’t come to school at all. Indeed, asthma is the single most important cause of chronic absenteeism. No matter how good the teacher, or their instruction, children who are frequently absent will see less benefit than children who come to school well rested and regularly. Certainly, some asthmatic children will excel — there is a distribution of outcomes for every human condition — but on average, children in poorer health will fall short.

      Children from disadvantaged families suffer disproportionately from a number of other such problems, including lead poisoning that diminishes cognitive and behavioral capacity; toxic stress, from experiencing or witnessing violence; irregular sleep or meal times, related to their parents' working multiple jobs with contingent work schedules; housing instability or homelessness; parental incarceration, and many others. A teacher can give special attention to a few who come to school with challenges that impede learning, but if an entire class has such problems, average achievement inevitably declines.

      We cannot expect to address our most serious educational issues if the most disadvantaged of the nation’s children are concentrated in separate neighborhoods and schools. Today, though, racial segregation characterizes every metropolitan area in the United States and bears responsibility for our most serious social and economic problems: Not only does it produce achievement gaps but it predicts lower life expectancies and higher disease rates for African Americans who reside in less healthy neighborhoods, and it corrupts our criminal justice system when police engage in violent altercations with young men who are concentrated in neighborhoods with inferior access to good jobs in the formal economy and without transportation to access those jobs (and for the same reason, segregation exacerbates economic inequality, too).

      Racial segregation also undermines our ability to succeed, economically and politically, as a diverse society. Some might argue that “a Black child does not have to sit next to a White child to learn.” They are wrong: Not only should Black children sit next to White children, but White children should sit next to Black children. A diverse adult society is inevitable; failing to prepare children for it invites disastrous conflict. This has become readily apparent, as our growing political polarization — which maps closely onto racial lines — threatens our very existence as a democratic society….

      …But if segregation has been created by government's explicit racial policies — that is, if residential segregation itself is a civil rights violation — then not only are we permitted to remedy it, we are required to do so.

      And we are so required. Not only did local police forces organize and support mob violence to drive Black families out of homes on the White side of racial boundaries, the federal government purposefully placed public housing in high-poverty, racially isolated neighborhoods to concentrate the Black population. It created a Whites-only mortgage insurance program to shift the White population from urban neighborhoods to exclusively White suburbs. The Internal Revenue Service granted tax exemptions to nonprofit institutions that openly sought neighborhood racial homogeneity. State government licensing agencies enforced a real estate brokers’ “code of ethics” that prohibited the sale of homes to African Americans in White neighborhoods. Federal and state regulators allowed the banking, thrift, and insurance industries to deny loans to homeowners in other-race communities.

      When the federal government first constructed civilian public housing during the Great Depression, it built separate projects for White and Black families, often segregating previously integrated communities. For instance, the great African-American poet, Langston Hughes, described in his autobiography how, in early-20th-century Cleveland, he went to an integrated neighborhood high school where his best friend was Polish and he dated a Jewish girl. However, the Public Works Administration — a federal agency created under the New Deal — demolished housing in that integrated neighborhood to build racially segregated public housing, creating residential patterns that persisted long into the future. This was the case even in places that today consider themselves racially progressive. In Cambridge, Mass., for example, the Central Square neighborhood between Harvard and the Massachusetts Institute of Technology was integrated in the 1930s, about half Black and half White. But the federal government razed integrated housing to create segregated projects that, with other projects elsewhere in the region, established a pattern of segregation throughout the Boston metropolitan area.

      During World War II, hundreds of thousands of White and African-American migrants flocked to war plants in search of jobs, and federal agencies systematically segregated the war workers’ housing. In many cases, officials did so in places where few African Americans lived before the war and little previous pattern of segregation existed. Richmond, Calif., a suburb of Berkeley, was one such case. It was the largest shipbuilding center on the West Coast, employing 100,000 workers by war's end. In Berkeley, African-American workers were housed in separate buildings along the railroad tracks in an industrial area, while White workers were housed adjacent to a shopping area and White neighborhoods.

      Residents of even the most segregated communities couldn’t count on staying put, however. At the end of the war, local housing agencies in most parts of the country assumed responsibility for such projects and maintained their racial boundaries. However, Berkeley and the University of California (which owned some of the land on which war workers had been housed) refused to permit the public housing to remain, arguing not only that it would change the “character” of the community but also that the site wasn’t suitable for housing. The war projects were demolished and African-American residents were placed in public housing in Oakland. Then, the university reconsidered the site's suitability for housing and used the property for graduate student apartments.

      To be sure, some public officials fought against such policies and practices. In 1949, for instance, the U.S. Congress considered a proposal to prohibit racial discrimination in public housing. It was voted down, however, and federal agencies went on to cite this vote as justification for segregating all federal housing programs for at least another decade.

      Thus, during the years after World War II, the Federal Housing Administration (FHA) and Veterans Administration (VA) subsidized the development of entire subdivisions to house returning veterans and other working-class families on a Whites-only basis. Communities like Levittown (east of New York City), Lakewood (south of Los Angeles), and hundreds of others in between could be built only because the FHA and VA guaranteed the builders’ bank loans for purchase of land and construction of houses. The FHA's Underwriting Manual for appraisers who investigated applications for such suburbs required that projects could be approved only for “the same racial and social classes” and prohibited developments close enough to Black neighborhoods that they might risk “infiltration of inharmonious racial” groups….

      By 1962, when the federal government renounced its policy of subsidizing segregation, and by 1968, when the Fair Housing Act banned private discrimination, the residential patterns of major metropolitan areas had already been set in concrete. White suburbs that had previously been affordable to the Black working class were no longer so, both because of the increase in suburban housing prices and because other federal policies had depressed Black incomes while supporting those of Whites.

      …Further, when researchers have looked closely at the handful of experimental programs that have assisted low-income families with young children to move to integrated housing, they have observed positive effects on those children’s performance in school.

      …That’s why it's so critical, for example, to challenge those who would misinform young people about the country's recent past. Even today, the most widely used middle and high school history textbooks neglect to mention the role of public housing in creating segregation, and they portray the FHA as an agency that made home ownership possible for working-class Americans, with no mention of those who were excluded. Likewise, they describe state-sponsored segregation as a strictly Southern phenomenon, and they portray discrimination in the North as the result of private prejudice alone, saying nothing about the active participation of local, state, and federal governments….

Confronting our Beliefs about Poverty and Discipline

[These excerpts are from an article by Edward Fergus in the February 2019 issue of Phi Delta Kappan.]

      Dating back to Brown v. Board of Education, Mendez v. Westminster, and other landmark court decisions of the mid-20th century, civil rights advocates have prioritized efforts to desegregate school systems and ensure the equitable distribution of educational resources.

      However, the civil rights struggle has always focused not just on passing laws and securing resources but also on challenging the beliefs that underlie segregation and worsen its effects. And the more researchers have learned about the psychology of racial discrimination, the more obvious the need to tackle certain biases that continue to be prevalent among educators, resulting in deficit-based thinking…, low academic expectations for particular students…, and misguided claims of “colorblindness”….

      An additional form of bias — poverty-disciplining belief — has received somewhat less attention from equity advocates, but it appears to be quite common in schools. Poverty-disciplining belief is the assumption that poverty itself is a kind of “culture,” characterized by dysfunctional behaviors that prevent success in school….In effect, it pathologizes children who live (or whose parents lived) in low-income communities. And while it doesn't focus on race per se, it is often used as a proxy for race and to justify racial disparities in disciplinary referrals, achievement, and enrollment in gifted, AP, and honors courses, as well as to justify harsh punishments for “disobedience” or “disorderly conduct” or “disrespect….”

      The belief that poor people are in need of discipline rests, in turn, on a highly debatable premise, the idea that the economic status of a community determines the value of its cultural practices: The poorer the community, the more impoverished and dysfunctional its culture; the richer the community, the more culturally refined it must be.

      That’s hardly a new idea; elites in every society tend to assert the superiority of their chosen customs, norms, and behaviors. But in recent decades it has been given an academic sheen, beginning with the cultural deprivation theory (also known as the “culture of poverty” argument) of the 1960s, which maintained that the low academic performance of racial and ethnic minorities stemmed from their deficient cultural practices….Supposedly, parents in certain cultures tend to suppress the development of linguistic, cognitive, and affective skills their children need to succeed in school.

      …Among the nearly 1,600 practitioners surveyed, nearly a third agreed (ranging from somewhat to strongly agree) that the values students learn growing up in disadvantaged neighborhoods conflict with school values; more than a quarter agreed that such students do not value education, and roughly one in six believed that poor kids lack the abilities necessary to succeed in school. In short, a significant percentage of school practitioners appear to believe that the values and behaviors learned in low-income communities conflict with those taught in school.

      Increasingly, I’ve found also that educators are framing their assumptions about poor and minority children in terms borrowed from the biological and cognitive sciences, especially research into the effects of long-term exposure to lead paint, food insecurity, violence, and other environmental dangers, and a lack of exposure to certain positive influences, such as frequent reading time at home….

      In short, not only do significant numbers of school practitioners believe that when students from low-income backgrounds struggle it must be the fault of their culture, but some practitioners are tempted to dress up that belief in “scientific” evidence about what it means to grow up in poverty. Supposedly, it is the nature of low-income families to expose their children to trauma and to deny them appropriate support for language development.

      But in fact, mountains of research findings suggest that while poverty may put children at somewhat elevated risk for trauma and other negative influences on development, poverty is far from a deterministic condition….If an individual student has trouble learning to read, behaving appropriately in class, or meeting other expectations, it is for complex reasons having to do with that individual and the specific people and institutions in their lives. It is not simply because they are poor….

      The challenge for educators is to get over the habit of pathologizing entire populations of young people, as though the struggles of individual students could ever be explained merely by pointing out that they're poor. We need to get it into our heads that poverty is not a deterministic condition; it doesn't tell us anything about the ways in which any particular kid — from any particular race or ethnicity — will develop, the kinds of instruction they’ll need, or the level of “discipline” they require.

      Further, given the inevitability of our own blind spots, we have a responsibility to seek out regular feedback on the racial and economic ecology of our schools. As Eduardo Bonilla-Silva…has argued, even if there are no out-and-out racists among us, racism is still often woven tightly into the social fabric of our schools, subtly influencing our assumptions about ability, intelligence, behavior, and more….

Why We Need a Diverse Teacher Workforce

[These excerpts are from an article by Dan Goldhaber, Roddy Theobald and Christopher Tien in the February 2019 issue of Phi Delta Kappan.]

      …A significant body of literature argues that a match between the race and ethnicity of teachers and students leads to better student outcomes, particularly in high-poverty environments with significant at-risk student populations….At least three commonly cited theoretical rationales suggest why racially matched teacher role models have positive educational benefits for students of color in particular. The first is that students of color, particularly those living and attending schools in disadvantaged settings, benefit from seeing role models of their race in a position of authority….In particular, some scholars have suggested that having an adult role model who exemplifies academic success could alleviate the stigma of “acting White” among some students of color….

      Second, some researchers argue that teachers of color are more likely to have high expectations for students of color….This is important because students of color, especially Black students, appear to be more sensitive to teacher expectations than middle-class White students….And when teachers allow negative stereotypes to lower expectations, a “self-fulfilling prophecy” takes hold to perpetuate poor performance of students of color….

      Finally, some argue that teachers of different backgrounds are able to draw on their own cultural contexts when determining instructional strategies and interpreting students’ behavior. A vast literature finds that Black students are more likely to be disciplined and suspended from school than other students, even after accounting for the nature of students’ misconduct….These disparities in disciplinary actions could be based in part on teacher interpretation of student behavior, which may be informed by negative stereotypes….

      These theoretical arguments suggest several ways that increasing the diversity of the teacher workforce might improve outcomes for students of color. When empirical researchers have considered the effects of teacher diversity, they have generally found that, all else being equal (and, importantly, all else is often not equal), students of color do appear to benefit when they are taught by a teacher of the same race or ethnicity. Much of this empirical evidence focuses on student test performance, but we also discuss empirical evidence related to other important outcomes such as subjective evaluations and discipline….

      The theoretical arguments and empirical evidence generally support the notion that improving the diversity of the teacher workforce would help close racial achievement gaps in public schools. However, teacher workforce diversity is just one of many ways to improve the education system, and diversifying the teacher workforce may present substantial challenges and potential unintended consequences. One challenge is that we know very little about what contributes to the lack of diversity in the teaching workforce. We must understand the answer to that question before we can design effective strategies to recruit more teachers of color. Another challenge is that, while the empirical evidence is consistent with the three theoretical arguments about the importance of teacher workforce diversity discussed above, we don’t have conclusive evidence for why students of color appear to benefit from assignment to a teacher of the same race….

Voluntary Integration in Uncertain Times

[These excerpts are from an article by Jeremy Anderson and Erica Frankenberg in the February 2019 issue of Phi Delta Kappan.]

      The U.S. Supreme Court’s 1954 decision in Brown v. Board of Education — declaring state laws segregating students by race to be “inherently unequal” — stands as one of the most important decisions in the nation’s history. Over the following 15 years, and with help from other branches of the federal government, courageous Black lawyers and plaintiffs, and Black educators working behind the scenes…, the South’s schools were transformed. Despite initial protests and fierce resistance in many local communities, the region’s public school systems became the most integrated in the country and remained so for several decades.

      Since the 1990s, however, many judges have chosen to release school districts from court-ordered desegregation plans, which has prompted a wave of resegregation, especially in the South….Further, the decline in court supervision means that any new integration efforts must be carried out voluntarily by school districts. However, even voluntary school integration has been dealt a setback, thanks to the U.S. Supreme Court’s 2007 decision in Parents Involved in Community Schools v. Seattle School District #1, which limited the ways in which districts can choose to promote diversity and reduce racial isolation….

      This seeming reversal of Brown comes despite growing evidence that students from all backgrounds, White students included, tend to benefit both academically and socially from racial integration….Additionally, students who have attended desegregated schools tend to be more comfortable with those of other racial and ethnic backgrounds, and better prepared to participate in our increasingly diverse democracy…than those who were educated in more racially isolated schools….

      Convinced of these benefits, many school district officials continue to pursue school integration even though they are not legally required to do so, and even though it can be very challenging politically for them to change existing student assignment policies….However, since the Parents Involved decision, those officials have received mixed and confusing signals about which methods of voluntary integration they are and are not permitted to use.

      In 2009, recognizing that many district leaders were confused about their options, Congress authorized a small grant program to provide some districts with assistance. Further, in 2011, the Obama administration published additional legal and policy guidance, including examples of ways in which schools could legally pursue voluntary integration, both through methods that incorporate race as a factor in school assignment decisions and through methods not involving race….And in 2016, the U.S. Department of Education proposed a larger program to assist districts in designing or implementing voluntary integration strategies. However, that program was cancelled by newly appointed Secretary of Education Betsy DeVos in March 2017….

      This back-and-forth over what is and isn’t permissible has had a chilling effect on school districts’ voluntary integration plans. While some districts have forged ahead, others have given up on their plans, fearing that whatever approach they choose would run into legal challenges….

School Segregation: A Realist’s View

[These excerpts are from an article by Jerry Rosiek in the February 2019 issue of Phi Delta Kappan.]

      As a nation, we often think of racial segregation in schools as an unjust form of social organization that we put behind us long ago, like aristocratic monarchies or the denial of women's right to vote. The inequity of these arrangements is so obvious, it feels indisputable that we should never return to them. The truth, however, is that racial segregation has incrementally returned to U.S. schools over the last 30 years. Like a disease that was never fully cured, school segregation has come out of remission and returned in a form that is more pervasive and harder to treat.

      This relapse should not be surprising. The U.S. Justice Department actively desegregated public schools for only five years. Although the Supreme Court made its landmark Brown v. Board of Education decision in 1954, it was not until 1968 that the Green v. County School Board decision enabled federal enforcement of desegregation orders. That same year Richard Nixon was elected president and soon ordered the Justice Department to reduce enforcement of desegregation rulings….As the cases already in the system ran their course, the momentum of the judicial desegregation movement dissipated. By 1980, desegregation of the schools had peaked.

      This reduction in enforcement was followed by an organized effort by conservative activists to roll back desegregation gains in the courts. In 1992, the Supreme Court’s Freeman v. Pitts decision made it easier to get desegregation orders lifted, and since that time, more than half of the desegregation orders issued by federal courts have indeed been rescinded. In almost every case where such orders have been lifted, school districts have moved back in the direction of greater segregation….As a result, 50 years after the Green decision, our schools are more racially segregated by some measures now than they were in 1968….

      The theory of change underlying court-mandated desegregation was that a generation of citizens educated in racially desegregated schools would normalize racial integration. The hope was that communities would achieve what the courts called “unitary status,” a condition in which racism would dissipate enough that communities could be trusted not to racially segregate their schools once court orders were no longer in place. Given the rapidity and consistency with which school districts have resegregated, we are forced to conclude that this was a false hope.

      …The 1974 U.S. Supreme Court ruling Milliken v. Bradley struck down efforts to desegregate schools across district lines. In demographically diverse districts, segregation often involves creating schools with few or no White students, so as to maintain relatively high percentages of White students at the remaining schools….In other cases, we see secessionist movements where wealthy predominantly White neighborhoods break away from more diverse school systems and form their own school districts….

      The new segregation also combines race and class segregation. Because of persistent patterns of race and class segregation in housing, as well as racial disparities in wealth accumulation, students of color and low-income students of all races are concentrated in the same schools. Wealthier households (in which White families are overrepresented) can afford to relocate to residential zones with more political clout, and once there, these families invest their political capital in securing advanced curriculum and other educational resources for their schools, but not for others. Voucher systems, open enrollment, and charter schools have all been offered as means of disrupting the influence of residential segregation on school enrollment; however, the evidence indicates such school choice plans either increase school segregation or leave it unaltered….

      Efforts to rezone schools to address racial segregation increasingly are met with objections that such efforts violate the principle of color-blind jurisprudence, objections delivered without the slightest sense of irony….

      Racial segregation in schools today is unequivocally a national phenomenon. From 1970 to the early 1980s, federal desegregation orders focused mainly on the Southeastern states, resulting in the lowest levels of racial segregation in the country. However, while segregation is now on the rise again in the South, it is no longer concentrated in that region, having increased dramatically in major urban centers in the North, Midwest, and West. New York City, for example, currently has the most racially segregated schools in the United States….

      Further, even where federal desegregation orders have remained in force, racial segregation has quietly reappeared at the classroom level….

      The tendency of White citizens to hoard educational resources for themselves has proven more resilient than civil rights era desegregationists anticipated. Denied the instruments of explicit law and policy, the desire for White majority educational spaces has found other means of enactment. It has used residential housing patterns and school zoning policy. It has bent the courts to its defense. It has camouflaged itself with new rhetoric and new rationales. It has moved into the capillaries of our education system, segregating students at the classroom level through tracked curriculum. What it has not done is go away….

      Although the form of racial segregation has evolved, its odious effects have remained consistent. Black children in racially isolated schools perform less well on standardized tests, their graduation rates are lower, and college attendance is lower. Income levels and wealth accumulation across generations are lower for Black people who attend racially segregated schools, and worse health outcomes across a person's life span are correlated with racially isolated schooling….Careful research suggests that these effects are not the consequence of characteristics found within students or their families. Instead, they are strongly correlated with the greater per-pupil spending that comes with enrollment in racially integrated or predominantly White schools….In other words, they are a consequence of the way resources follow White students….

      Anti-racist professional development and activism are already practiced in many places, to be sure. Unfortunately, these practices remain largely on the margins of our education system. The fact that we are sending our children to racially resegregating schools means that anti-racist policy and practice must become mainstream. Anything less constitutes a failure to face the persistent reality of racism in our schools.

United States Alone in Opposing UN Resolution

[This news clip by Marian Starkey is in the March 2019 issue of Population Connection.]

      At the 55th plenary meeting of the UN General Assembly in December, the United States was alone in voting against a nonbinding resolution on the “Intensification of efforts to prevent and eliminate all forms of violence against women and girls: sexual harassment.” The rest of the attending countries either voted yes (130) or abstained from voting (31). Countries that voted yes included Afghanistan, Congo, Myanmar, North Korea, Saudi Arabia, South Sudan, and Yemen.

      In another vote that same day, on “Child, early, and forced marriage,” the United States was one of two countries to vote no. The other country was Nauru, a small Pacific island nation that serves as a detention camp for refugees and migrants to Australia. In that vote, 134 countries voted yes and 32 abstained from voting.

      References to sexual and reproductive health in both resolutions raised concerns among American delegates that voting yes could be interpreted as supporting or promoting abortion.

Climate Change Impacts on Fisheries

[These excerpts are from an article by Eva Plaganyi in the 1 March 2019 issue of Science.]

      Food security, climate change, and their complex and uncertain interactions are a major challenge for societies and ecologies. Global assessments of predicted changes in crop yield under climate change, combined with international trade dynamics, suggest that disparities between nations in production and food availability will escalate. But climate change has already affected productivity. For example, weather-related factors caused declines in global maize and wheat production of 3.8% and 5.5%, respectively, between 1980 and 2008….that indicates a 4.1% decline between 1930 and 2010 in the global productivity of marine fisheries, with some of the largest fish-producing ecoregions experiencing losses of up to 35%. Their spatial mapping can help to inform future planning and adaptation strategies….

      Losses in seafood production are of concern because seafood has unique nutritional value and provides almost 20% of the average per-person intake of animal protein for 3.2 billion people. However, wild capture fisheries yield has likely already reached its natural limits. Coupled with projected human population increases, this means that per-person seafood availability is certain to decline. This shortfall will be exacerbated by future losses in fisheries production as a result of warming….Aquaculture, which parallels the intensification of crop production on land, has been proposed as one potential solution. However, this sector’s initial high growth rates have declined, and its future scope for growth is uncertain and susceptible to environmental extremes. Given the widening gap between current and projected per-person fisheries yield…, other protein sources and solutions will be needed to avoid food shortages.

      Future fisheries production may be at even greater risk considering that, owing to anthropogenic climate change, the oceans are continuing to warm even faster than originally predicted. Moreover, extreme temperature events are on the increase, with profound negative consequences for fisheries and aquaculture. Regional declines in some species will thus increasingly not be counterbalanced by increases, as populations exceed their thermal optima or become subject to other environmental stressors such as reduced oxygen concentration and ocean acidification. These additional effects could not be accounted for….

      Regional fishery managers and stakeholders can influence future sustainable fisheries production and food security through the development, adoption, and enforcement of sustainable management strategies and practices. These strategies should be pretested for robustness to temperature-driven changes in productivity. However, global efforts are needed to contain the rise in global mean temperature to no more than 2°C, beyond which the integrity of marine and terrestrial ecological systems, and hence our food supplies, becomes compromised.

Mobilize for Peace

[This editorial by Jonathan A. King and Aron Bernstein is in the 1 March 2019 issue of Science.]

      Fifty years ago, on 4 March 1969, research and teaching at the Massachusetts Institute of Technology (MIT) came to a halt as students, faculty, and staff held a “research strike” for peace. The strike protested United States involvement in the Vietnam War and MIT's complicity in this engagement through its Instrumentation Laboratory (now the Draper Laboratory), a major contracting lab for the U.S. Department of Defense. The anniversary of this activism by scientists…is a reminder that the scientific community must continue to recognize its social responsibilities and promote science as a benefit for all people and for a peaceful world.

      By the end of 1968, nearly 37,000 American soldiers had died in the Vietnam conflict and the draft pulled about 300,000 men into the military in that year alone. In the years following the MIT strike, opposition to the U.S. role in Vietnam grew across the nation, particularly on college campuses. Academia condemned the nation's hand in the war, including the American Association for the Advancement of Science…, which, in 1972, passed a resolution denouncing U.S. involvement in Vietnam, stating that scientists and engineers did not endorse such “wanton destruction of man and environment.” These activities arguably helped to end the Vietnam War. In addition, the March 4th strike elevated the voices of scientists urging the use of research and technology for peaceful purposes. Indeed, some years later, as nuclear weapons proliferated and tensions escalated between the United States and the Soviet Union, scientists protested again. They helped initiate the national Nuclear Weapons Freeze campaign, which brought more than 1,000,000 people to New York’s Central Park in protest in 1982. The event galvanized public support to curb the nuclear arms race, and limitations were eventually negotiated between the two nations.

      Over the past 50 years, science has seen rapid growth in biotechnology, computer science, and telecommunications, among other fields. These advances have opened up rich arenas for applications of research and technology, and consequently, have raised ethical concerns. For example, advances in epidemiology and biochemistry identified environmental and occupational toxins and carcinogens, but engendered strong resistance from manufacturers. And in documenting accelerating climate change and its potentially devastating social and economic consequences, the scientific community has informed national and global climate policies, but has been met by resistance from sectors of the energy industry.

      Such commitment and engagement must now be extended once again to the renewed danger of nuclear war. Last month, President Trump announced that the United States would pull out of the Intermediate-Range Nuclear Forces Treaty, threatening a new nuclear arms race with Russia. In return, last week, President Putin issued a warning about bolstering Russia’s nuclear arms. The 1987 pact bans the development of certain ground-based missiles and has been a model treaty for arms control agreements between major powers. Withdrawing from this treaty is a threat to U.S. national security and to Europe. Congress has also supported spending $1.7 trillion over the next 25 years to upgrade land-launched nuclear missiles, nuclear missile-armed submarines, and aircraft armed with nuclear missiles and bombs.

      The scientific community needs to remind the Trump administration, Congress, and the public of the destructive power of atomic bombs and to communicate how federal investment in housing, health care, education, environmental protection, sustainable energy development, and basic and biomedical research will be sacrificed to build up a nuclear weapons arsenal. Young scientists, in particular, need to fight for a peaceful future and speak out against profound misuses of resources for war. It is time to mobilize for peace in ways that effectively promote social responsibility in science.

Is Antarctica Collapsing?

[These excerpts are from an article by Richard B. Alley in the February 2019 issue of Scientific American.]

      Glaciers are melting, seas are rising. We already know ocean water will move inland along the Eastern Seaboard, the Gulf of Mexico and coastlines around the world. What scientists are urgently trying to figure out is whether the inundation will be much worse than anticipated—many feet instead of a few. The big question is: Are we entering an era of even faster ice melt? If so, how much and how fast? The answer depends greatly on how the gigantic Thwaites Glacier in West Antarctica responds to human decisions. It will determine whether the stingrays cruising seaside streets are sports cars or stealthy creatures with long, ominous tails.

      Global warming is melting glaciers up in mountainous areas and expanding ocean water, while shrinking ice at both poles. Averaged over the planet's oceans for the past 25 years, sea level has risen just over a tenth of an inch per year, or about a foot per century. Melting the rest of the globe's mountain glaciers would raise the sea a little more than another foot. But the enormous ice sheets on land in the Arctic and Antarctic hold more than 200 feet of sea-level rise; a small change to them can create big changes to our coasts. Ice cliffs many miles long and thousands of feet high could steadily break off and disappear, raising seas significantly.

      Well-reasoned projections for additional sea-level rise this century have remained modest—maybe two feet for moderate warming and less than four feet even with strong warming. Scientists have solid evidence that long-term, sustained heating will add a lot to that over ensuing centuries. But the world might be entering an era of even more rapid ice melt if the front edges of the ice sheets retreat.

      To learn whether this could happen, we look for clues from changes underway today, aided by insights gained about Earth's past and from the physics of ice. Many of the clues have come from dramatic changes that started about two decades ago on Jakobshavn Glacier, an important piece of the Greenland Ice Sheet. Glaciers spread under their own weight toward the sea, where the front edges melt or fall off, to be replaced by ice flowing from behind. When the loss is faster than the flow from behind, the leading edge retreats backward, shrinking the ice sheet on land and raising sea level.

      During the 1980s Jakobshavn was among the fastest-moving glaciers known, racing toward Baffin Bay, even though it was being held back by an ice shelf—an extension of the ice floating on top of the sea. In the 1990s ocean warming of about 1.8 degrees Fahrenheit (one degree Celsius) dismantled the ice shelf, and the glacier on land behind it responded by more than doubling its speed toward the shore. Today Jakobshavn is retreating and thinning extensively and is one of the largest single contributors to global sea-level rise. Geologic records in rocks there show that comparable events have occurred in the past. Our current observations reveal similar actions transforming other Greenland glaciers.

      If Thwaites, far larger, unzips the way Jakobshavn did, it and adjacent ice could crumble, perhaps in as little as a few decades, raising sea level 11 feet. So are we risking catastrophic sea-level rise in the near future? Or is the risk overhyped? How will we know how Thwaites will behave? Data are coming in right now….

      Warming air can create lakes on top of the ice shelves. When the lakes break through crevasses, a shelf can fall apart. For example, the Larsen B Ice Shelf in the Antarctic Peninsula, north of Thwaites, disintegrated almost completely in a mere five weeks in 2002, with icebergs breaking off and toppling like dominoes. That did not immediately raise sea level—the shelf was floating already—but the loss of the shelf allowed the ice sheet on land behind it to flow faster into the ocean—like pulling a spatula away, allowing the batter to run. The ice flowed as much as six to eight times quicker than it had been moving earlier. Fortunately, there was not a lot of ice behind the Larsen B Ice Shelf in the narrow Antarctic Peninsula, so it has raised sea level only a little. But the event put society on notice that ice shelves can disintegrate quickly, releasing the glaciers they had been holding back. Ice shelves can also be melted from below by warming seawater, as happened to Jakobshavn.

      When shelves are lost, icebergs calve directly from ice-sheet cliffs that face the sea. Although this delights passengers on cruise ships in Alaska and elsewhere, it speeds up the ice sheet's demise. At Jakobshavn today, the icebergs calve from a cliff that towers more than 300 feet above the ocean's edge—a 30-story building—and extends about nine times that much below the water. As these icebergs roll over, they make splashes 50 stories high and earthquakes that can be monitored from the U.S.

      So far ice-shelf loss and ice-cliff calving are contributing moderately to sea-level rise. But at Thwaites, this process could make the rise much more dramatic because a geologic accident has placed the glacier near a “tipping point” into the great Bentley Subglacial Trench….

Ghosts of Wildlife Past

[These excerpts are from an article by Rachel Nuwer in the February 2019 issue of Scientific American.]

      Kenya’s national parks serve as oases in an increasingly human-crowded world, but they are not a conservation panacea. As in much of East Africa, a striking two thirds of the country's wildlife resides outside of national parks—and these animals are not welcome visitors for many landowners, who see them as competition for livestock. But in a rare win-win situation for humans and nature, researchers have now shown that livestock and wildlife can benefit from each other’s presence. A study published last October in Nature Sustainability found that wildlife can boost bottom lines by providing opportunities for tourism, and livestock improve the quality of grass for all grazing species.

      Recent history explains this symbiosis. Animals and savanna grasses evolved together for millennia—but Kenya’s wildlife population dropped by about 70 percent between 1977 and 2016, according to a 2016 PLOS ONE study. With fewer animals around to encourage new growth by removing old and dead grass stems, it seems livestock have stepped in to fill that ecological role.

      …To the team members’ surprise, they found benefits only in combining moderate numbers of cattle and wildlife. At mixed properties, livestock treated for ticks reduced the overall number of those pathogen-carrying parasites by 75 percent—and grass quality was higher than in livestock- or wildlife-only areas, which tended to be overgrazed or undergrazed, respectively….

Reengineering the Colorado River

[These excerpts are from an article by Heather Hansman in the February 2019 issue of Scientific American.]

      …wanted to see if holding the river at a consistent level would aid the struggling native bug population, 85 percent of which lay their eggs in the intertidal zone. Those eggs can get wet, but they cannot get dry; eggs laid at high tides desiccate within an hour of the water dropping.

      Bugs might seem like a lowly thing to focus on. But they form the basis of a complex food web. When their numbers drop, that reduction affects species, such as bats and endangered humpback chub, that feed on them. In a national park held up as an iconic wilderness, Kennedy and his group are trying to figure out why, according to their research published in 2016 in BioScience, the Grand Canyon section of the Colorado has one of the lowest insect diversities in the country….

      Last summer the researchers were testing whether adjusting dam releases so that the Colorado runs closer to its natural course might help insect populations recover. In those tests, they artificially created the kind of flow patterns that allowed life to flourish before the dam went in—without removing the dam itself.

      Nearly 40 million people depend on the Colorado for the necessities of daily life, including electricity, tap water and the irrigation of 10 percent of land used for U.S. food production. Ever since Glen Canyon Dam opened in 1963, the river has been engineered to accommodate these demands. Doing so changed the ecosystem balance, which was dependent on ingredients such as sediment, snowmelt and seasonal flows. For more than 30 years researchers have been trying to figure out how to help the ecosystem coexist with human needs, and they are finally beginning to test some solutions. By working out an experimental flow schedule that minimally impacted power generation, the 2018 bug tests marked one of the first times that dam operations were adjusted for species health in the Grand Canyon.

      Meanwhile, though, the river is dwindling. The Colorado River Basin has been in a drought for almost two decades; 2018 was the third-driest year ever recorded. Since 2000 ambient temperatures in the basin have been 1.6 degrees Fahrenheit warmer than 20th-century averages, and researchers predict they will reach up to 9.5 degrees F hotter still by 2100. The effects of climate change could decrease river flow by as much as half by the end of the century. With earlier snowmelt and more evaporation, the Bureau of Reclamation has predicted that it may have to cut the amount of water it sends downstream for the first time—as soon as 2020. That will stress every part of the system, from hydropower and city water supplies to native fish populations. It will also mean less room for experimental flows, a tool the scientists think is critical for understanding how to protect the canyon.

      The insect research is a meaningful step toward sustaining the river for habitat as well as for humans. It also runs straight into a core conflict between science and Colorado River policy: scientists want the flexibility to experiment, whereas power and water managers want stability. As the Colorado dries up, this conflict will intensify. And yet if Kennedy and others can show that changing the flow can bring back insect populations, it could make ecosystem health a bigger priority for those who manage the most used river in the West….

      Giving scientists a voice in water management has led to new insights about how the Colorado River reservoirs are suffering from climate change. Much of the news is alarming. A 2017 study…found that average Colorado River flows in the 21st century were 19 percent lower than in the 20th. They predicted flows could drop by up to 55 percent by 2100 as a result of the effects of global warming….

      There is also problematic math at Lake Powell. Because the Colorado's water is allocated down to virtually the last drop, the lake level is crucial to a 1922 legal compact that guarantees 8.23 million acre-feet of water will flow past Lees Ferry every year. The Bureau of Reclamation built reservoirs—including Powell—starting in the 1950s. But these storage systems only work if they are replenished. Lake Powell is considered “full” at 3,700 feet above sea level; the last time that happened was 1986. In 2002, the driest year on record, only a fraction of the usual flow entered Powell from upstream on the Colorado. Because Lake Powell’s entire purpose is to keep the downstream water supply consistent even when it does not rain or snow, the legally obligated 8.23 million acre-feet still went out.

      This logic, however, is fundamentally flawed. The compact water was allocated based on calculations done by the Bureau of Reclamation in the early 1900s, the wettest period in measured history, which concluded that 18 million acre-feet of water flowed through the river basin every year. Data collected from USGS river gauges installed at Lees Ferry in 1922 have shown that average yearly flows are actually 14.8 million. Because the compact is federal law, the 8.23 million acre-feet of downstream obligations still stand. Water managers call this a structural deficit, and it means that if every state claims its entire share—a near-future scenario thanks to projects like the Lake Powell Pipeline—there will not be enough to go around.
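
[To make the compact arithmetic above concrete, the following is a minimal back-of-the-envelope sketch in Python. The figures are the ones quoted in the excerpt; the variable names and the calculation are my own illustration, not an official Bureau of Reclamation accounting.]

# Back-of-the-envelope sketch of the "structural deficit" described above.
# All figures are in million acre-feet per year (Maf) and come from the
# excerpt; nothing here is an official water-management model.

COMPACT_ASSUMED_FLOW = 18.0    # early-1900s estimate used to allocate shares
MEASURED_AVERAGE_FLOW = 14.8   # average yearly flow from Lees Ferry gauges
DOWNSTREAM_OBLIGATION = 8.23   # compact-guaranteed flow past Lees Ferry

# The deficit is the gap between the flow the compact assumed and the flow
# the river actually delivers on average.
structural_deficit = COMPACT_ASSUMED_FLOW - MEASURED_AVERAGE_FLOW
print(f"Structural deficit: {structural_deficit:.1f} Maf/year")    # 3.2

# Water actually left for upstream claims once the obligation goes out:
left_for_upstream = MEASURED_AVERAGE_FLOW - DOWNSTREAM_OBLIGATION
print(f"Left above Lees Ferry: {left_for_upstream:.2f} Maf/year")  # 6.57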

      The accelerating drought has become so threatening that in 2018, seven “basin states” drafted contingency plans. Each state outlined how much of its allocated compact water it would leave in reservoirs if Lake Powell’s level hit 3,525 feet above sea level—just high enough to comfortably maintain power production at the dam. (In November 2018 Powell was at 3,588 feet.) Although interim guidelines came out in 2007, this official step marked the first time since the compact was signed nearly 100 years ago that the basin states made a legally enforceable plan for a drier future. Finally, policy is starting to reflect science….

Urine Trouble

[These excerpts are from an article by Andy Extance in the February 2019 issue of Scientific American.]

      In a disturbing trend, scam artists are using commercially sold fake urine to fool doctors into prescribing pain medications such as hydrocodone—which can then be consumed or illegally sold. The synthetic pee lets patients pass tests intended to ensure they are not already taking opioid medications or drugs of abuse….

      Hoping to address the situation, Kyle and his pathologist colleague Jaswinder Kaur have now shown how legal indulgences—including chocolate, coffee and cigarettes—can help distinguish real pee from fake….

      The new method, described at the annual Society of Forensic Toxicologists (SOFT) meeting last October in Minneapolis, looks for four substances common in urine: caffeine and theobromine, both found in chocolate, tea and coffee; cotinine, produced as nicotine breaks down; and urobilin—degraded hemoglobin that gives urine its yellow color. The technique employs liquid chromatography to separate urine, just as water spilled on paper separates ink into different colors. The compounds then flow into mass spectrometers that identify them by their molecular weights.

      …the test will not detect people passing off others’ pee as their own. “If someone is carrying synthetic urine or somebody else’s clean urine, you have to do observed collection,” she says. Peace also warns that fake urine makers could easily add substances such as caffeine or theobromine to their products.

      Some already do, Kyle says. He emphasizes that testing must therefore look for compounds naturally produced in our bodies (urobilin, in this case). Combining that with commonly consumed substances makes the test even more powerful—and is potentially more practical than watching people pee.
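
[For readers who want the logic of the test spelled out, here is a toy Python sketch of the decision rule the excerpts describe: require the endogenous marker, then treat the consumable markers as supporting evidence. The function, names, and rule are hypothetical illustrations, not the published SOFT assay.]

# Toy sketch of the screening logic described above; not the published assay.
# Assume an LC-MS run has already reported which marker compounds it detected.

ENDOGENOUS = {"urobilin"}                              # made by the body itself
CONSUMABLES = {"caffeine", "theobromine", "cotinine"}  # diet and tobacco markers

def looks_authentic(markers_detected):
    """Return True if a sample passes this hypothetical authenticity screen."""
    # The endogenous marker is the hard requirement: fake urine spiked only
    # with caffeine or theobromine should still fail here.
    if not ENDOGENOUS <= set(markers_detected):
        return False
    # At least one commonly consumed marker adds supporting evidence.
    return bool(CONSUMABLES & set(markers_detected))

print(looks_authentic({"caffeine", "theobromine"}))  # False: spiked fake
print(looks_authentic({"urobilin", "cotinine"}))     # True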

Call the Midwife … If You Can

[These excerpts are from an editorial by the editors in the February 2019 issue of Scientific American.]

      Despite the astronomical sums that the U.S. spends on maternity care, mortality rates for women and infants are significantly higher in America than in other wealthy countries. And because of a shortage of hospitals and ob-gyns, especially in rural areas, many women struggle to access proper care during pregnancy. Moreover, the rate of cesarean sections is exceedingly high at 32 percent—the World Health Organization considers the ideal rate to be around 10 percent—and 13 percent of women report feeling pressured by their providers to have the procedure.

      Widespread adoption of midwife-directed care could alleviate all these problems. In many other developed countries, such as the U.K., France and Australia, midwifery is at least as common as care by obstetricians. In the U.S., certified midwives and nurse-midwives must hold a graduate degree from an institution accredited by the American College of Nurse-Midwives, and certified professional midwives must undergo at least two years of intensive training. This is designed to make midwives experts in normal physiological pregnancy and birth. Thus, for women with low-risk pregnancies who wish to deliver vaginally, it often makes sense to employ a midwife rather than a more costly surgeon. Yet only about 8 percent of U.S. births are attended by midwives.

      The roots of America's aversion to midwifery go back to the late 1800s, when the advent of germ theory and anesthesia reduced much of the danger and discomfort associated with childbirth. The benefits of these technologies brought doctors to the forefront of maternity care and pushed midwives aside. Obstetricians helped to bar midwives from practicing in hospitals, which were now considered the safest birth settings. By the early 1960s midwifery was virtually obsolete.

      It has made a comeback since then, with practitioners just as well trained as doctors to supervise uncomplicated deliveries. Studies show that midwife-attended births are as safe as physician-attended ones, and they are associated with lower rates of C-sections and other interventions that can be costly, risky and disruptive to the labor process. But midwifery still remains on the margins of maternity care in the U.S.

      …Half of planned nonhospital births are currently paid for by patients themselves, compared with just 3.4 percent of hospital births. Thus, a less expensive birth at home may paradoxically be out of reach for women who cannot afford to pay out of pocket. U.S. hospitals charge more than $13,000, on average, for an uncomplicated vaginal birth, whereas a similar midwife-attended birth outside of the hospital reduces that figure by at least half. Insurers would save money by embracing midwife-attended, nonhospital birth as a safe and inexpensive alternative.

      A national shortage of birth centers further limits women’s choices. These homelike settings are designed to support naturally laboring women with amenities such as warm baths and spacious beds and are consistently rated highly in surveys of patient satisfaction. Yet there are only around 350 existing freestanding birth centers in the entire nation, and nine states lack regulations for licensing such facilities. More government support for birth centers would help midwives meet a growing demand, which has already fueled an increase of 82 percent in centers since 2010.

      Policy makers, providers and insurers all have good reasons to encourage a shift toward midwifery. The result will be more choices and better outcomes for mothers and babies.

The Law and Vaccine Resistance

[These excerpts are from an editorial by Dorit Rubinstein Reiss in the 15 February 2019 issue of Science.]

      Last week, the Centers for Disease Control and Prevention announced that more than 100 cases of measles, spanning 10 states, had been reported in the United States since the beginning of the year. This news came on the heels of the World Health Organization’s estimate of over 200,000 cases of measles in 2018. These numbers signal the reemergence of a preventable, deadly disease, attributed in significant part to vaccine hesitancy. Communities and nations must seriously consider leveraging the law to protect against the spread of this highly contagious disease.

      In the United States, measles was deemed “eliminated” in 2000 because of vaccination success. Since then, its reemergence has been associated with a resistance to vaccination. This also reflects the fact that unvaccinated U.S. residents visit countries that have seen large measles outbreaks (such as Ukraine, the Philippines, and Israel), become infected, and bring the disease back home.

      Outbreaks in the United States are still fewer than in, say, Europe because of unique U.S. policies and laws that maintain high vaccination coverage. All 50 states and the District of Columbia have laws requiring vaccinations for school and daycare attendance. School mandates have proven very effective: The stronger they are, the higher the vaccination rate, and the lower the risk of outbreaks.

      …States have extensive leeway to protect public health, and courts have consistently upheld strong school immunization mandates. Thus, states could tighten nonmedical exemptions (for example, by requiring consultation with a doctor) or remove these exemptions completely from school mandates. Valid medical exemptions are important, but it is less clear whether nonmedical exemptions are appropriate. Some scholars are concerned that eliminating nonmedical exemptions may generate resentment among parents and interfere with parental autonomy. Others—including professional medical associations—disagree, because mandates protect children, and a parent’s freedom to send an unvaccinated child to school places classmates at risk of dangerous diseases. There is a strong argument for removing nonmedical exemptions, and at the least, they should be hard to get, to further incentivize parents to vaccinate. In many states, however, getting an exemption is as easy as checking a box. States and localities could also require schools to provide their immunization rates to parents at the start of the school year.

      Beyond school mandates, states can consider other legal tools that have not yet been used. States could implement workplace mandates for those working with vulnerable populations, such as health care workers, teachers in schools, and providers of daycare. States could impose tort liability (civil law damages for harm) when unexcused refusal to vaccinate leads to individuals becoming infected unnecessarily or worse, to a large outbreak. States could permit teenagers to consent to vaccinations without parental approval. And states could mandate vaccinations to enroll in institutions of higher education.

      Vaccine hesitancy is a problem with many components. In handling it, societies should improve public understanding of vaccinations but also not hesitate to use the law to prevent deadly diseases from spreading.

How Rabbits Escaped a Deadly Virus—At Least for Now

[These excerpts are from an article by Elizabeth Pennisi in the 15 February 2019 issue of Science.]

      In Australia, a few dozen European rabbits introduced in the mid-1800s for hunters did what the animals famously do. They multiplied until hundreds of millions were chowing down on crops. So, in 1950, after a smallpoxlike virus found in South American rabbits turned out to kill the European relative, Australian authorities released the virus into the wild, cutting the rabbit population by 99%. A few years later, the virus, called myxoma, was released in France and eventually spread to the United Kingdom.

      …Within a decade, rabbit numbers were on the rise again as some evolved resistance to this deadly infection and the virus itself became less deadly.

      One allele shift affected the rabbits’ interferon, a protein released by immune cells that sounds the alarm about a viral attack and helps trigger an immune response. Compared with preinfection rabbits, modern rabbits make an interferon that is better at responding to the biocontrol virus, the researchers found when they added different versions of the protein to rabbit cell lines.

      The virus has not stood still. In 2017, Holmes and his colleagues reported that in the 1970s the virus developed a greater ability to suppress the rabbit's immune responses. That change, as well as the natural emergence of another rabbit-killing virus, has caused populations to decline again. But in contrast to the parallel evolution in rabbits, myxoma viruses in the various locations took different genetic paths to regaining potency….

Controversial Flu Studies Can Resume, Panel Says

[These excerpts are from an article by Jocelyn Kaiser in the 15 February 2019 issue of Science.]

      Controversial lab studies that modify bird flu viruses in ways that could make them more risky to humans will soon resume after being on hold for more than 4 years….last year, a U.S. government review panel quietly approved two projects previously considered so dangerous that federal officials had imposed an unusual moratorium on the research….

      The outcome marks the latest twist in a nearly decadelong debate over the risks and benefits of studies that aim to make pathogens more potent or more likely to spread in order to better understand them and prepare defenses. And it represents the first test of a new federal review process intended to ensure that government funding flows to such studies only when they are scientifically and ethically justifiable and done under strict safety rules….

      Some other researchers are sharply critical of the approvals, saying the review lacked transparency. After a long public discussion to develop new standards that “consumed countless weeks and months of time for many scientists, we are now being asked to trust a completely opaque process,” says Harvard University epidemiologist Marc Lipsitch, who helped lead a push to require the reviews….

A Harvest of Change in the Heartland

[These excerpts are from an article by Peter Klebnikov in the Winter 2019 issue of Solutions, the newsletter of the Environmental Defense Fund.]

      In American agriculture, corn is king. More than 89 million acres were planted in 2018, enough to fill a freight train that would more than encircle the earth.

      But growing corn has a steep environmental cost. Excess fertilizer runs off fields into rivers, lakes and groundwater, polluting drinking water around the Midwest and creating algae-filled dead zones around the country. It also forms nitrous oxide, a potent greenhouse gas.

      Historically, farmers often didn’t know how much fertilizer to use, so they applied extra to be on the safe side. This hurts downstream communities. Today, farmers increasingly want to use fertilizer more efficiently, which also saves money, and adopt other conservation practices….

      That means partnering with farmers and trade groups to advance practices such as applying fertilizer more precisely, using no-till techniques that leave more carbon in the soil, creating buffers and wetlands along rivers and streams to reduce erosion and improve water quality, and planting cover crops to protect the soil.

      Today’s tech-savvy, data-hungry farmers are using these practices to reinvent their approach to the land….

Getting America Back on Track

[These excerpts are from an article by Charlie Miller in the Winter 2019 issue of Solutions, the newsletter of the Environmental Defense Fund.]

      The state motto is “Land of Enchantment.” New Mexico's high plains, mountains and stunning deserts cast a spell over many visitors. But beyond the scenery lies a dirty secret—a 2,500-square-mile invisible cloud of methane from oil and gas operations hovers over New Mexico’s San Juan Basin, a cloud so enormous that scientists can spot it in infrared images from space.

      New Mexico’s last governor, Susana Martinez, had no problem with this. She backed a congressional effort to allow unfettered emissions of methane, a greenhouse gas 84 times more powerful than carbon dioxide. She also vetoed three solar bills that would have nudged the state toward clean energy.

      The state’s new governor, Michelle Lujan Grisham, couldn’t be more different. She is one of many new faces of 2019 who are turning the page to an era of renewed environmental progress….

      In some races, climate was an explicit issue. Sean Casten is a writer, scientist and clean energy entrepreneur. He defeated an 11-year incumbent representing suburban Chicago who called climate change “junk science.” Casten put climate change at the heart of his campaign.

      In a Florida district spanning the Everglades and the Florida Keys—both areas highly vulnerable to climate change—the candidates, Democrat and Republican, even released dueling television ads over who was tougher on climate….

      The implications of a more environmentally friendly Congress are profound. Bills that would harm the environment, such as proposed legislation to eviscerate the Endangered Species Act and slash environmental protection budgets, are now nonstarters. Environmental supporters in Congress can now fend off moves to defund critical programs and agencies. And although the Trump administration will still try to roll back key safeguards, it no longer has a completely free hand.

      A disturbing hallmark of the Trump administration has been its unrelenting attacks on science, with climate denial at the center. One proposal now under consideration at Trump’s EPA would invalidate many studies that underpin key public health protections. Supporters of sound science can now push back.

      The House Science Committee is now led by Rep. Eddie Bernice Johnson, elected in 1992 and the first registered nurse to serve in Congress. As chair, she replaced retiring Rep. Lamar Smith, who often brought climate deniers to testify before the committee….

      A comprehensive carbon bill isn’t likely to pass in this Congress, but these are building years to get a strong, bipartisan law passed after 2020. Already, three House committees have announced climate change hearings. We expect House leaders to revive the House Select Committee on Climate Change, which through hearings can draw attention to the issue. With so many new members in Congress, this committee can also educate House newcomers on the dangers of climate change through the testimony of credible experts and scientists, instead of industry shills….

      A critical role for Congress is to provide oversight of the executive branch, a role largely abdicated in the past two years. Acting EPA Administrator Andrew Wheeler will be summoned to appear before Congress to testify under oath about why, for example, he is promoting a sham Clean Power Plan that will do almost nothing to halt climate change.

      Wheeler will face relentless scrutiny of his failure to enforce clean air and clean water rules, as well as the exodus of talented but dispirited staff from the EPA. Many have been handcuffed by political appointees as they try to protect the public from pollution and climate change….

      Victories for the environment extended far beyond Washington, D.C., on Election Day. In seven states, gubernatorial candidates promising strong climate action won their races. Six more state legislatures now have a majority supporting climate action, and more than 300 state House and Senate seats flipped….

Hope Arrives on Fragile Wings

[These excerpts are from an article by Tasha Kosviner in the Winter 2019 issue of Solutions, the newsletter of the Environmental Defense Fund.]

      It weighs less than a dollar bill and is capable of flying 2,500 miles. The beguiling strength and fragility of the monarch butterfly has captured American hearts. But in the past 30 years, loss of habitat has seen the butterfly’s population plummet 95 percent. It faces an Endangered Species Act listing decision this year.

      Last summer, on a vibrant prairie near Greenville, Missouri, thousands of monarchs seemed to defy this bleak outlook. They flitted between native flowers, fueling up for an extraordinary migration that would carry them across the Great Plains to Mexico….

      It’s hard to imagine that just a few years ago, this was degraded pasture, a monoculture of nonnative grass with no monarchs and little ecological value….

      Saving the beloved butterfly and supporting renewable energy are not the only benefits. The presence of monarchs and other pollinators such as bees and birds is a key indicator of the health of the land. Restored prairie, rich in native flora, is an ecological powerhouse. It sequesters carbon, prevents soil erosion, retains water and absorbs excess fertilizer, keeping waterways clean….

A Plan to Rescue the Amazon

[These excerpts are from an article by Tasha Kosviner in the Winter 2019 issue of Solutions, the newsletter of the Environmental Defense Fund.]

      …Tropical forest loss is responsible for 16-19 percent of global climate emissions—more than all the world’s cars and trucks combined. In the Amazon, where 95 percent of deforestation is caused by farming, scientists estimate that once 20-25 percent of the forest is destroyed, it will reach a tipping point. After this, damage to the water cycle and resulting reductions in rainfall mean the forest will begin degrading on its own. Currently we are at 19 percent.

      …But we have a steep climb ahead. In 2018, Brazil reported the worst annual deforestation rates in a decade. And in October, Jair Bolsonaro, the country’s new populist president, sailed to victory. Among his campaign pledges was a promise to open up forests on indigenous lands to mining and agriculture.

      The news is bleak. But there are some reasons for hope. Caio Penido is one of those reasons. He has ended deforestation on his ranch, leaving more than 5,000 acres of forest intact. By planting more nutritious grasses and practicing better land management he has been able to increase the number of cows per hectare, growing his business without clearing more land. Elsewhere in Mato Grosso similar projects are unfolding, increasing profitability while reducing emissions and saving forests. The Novo Campo project in the Alta Floresta region has assisted a group of beef ranchers to reduce climate emissions associated with beef production by 60 percent. The project will expand to cover 250,000 acres in the next four years.

      …the state—which produces 30 percent of Brazil’s soy and is home to more than 30 million cattle—has committed to ending illegal deforestation by 2020. To achieve this, it has partnered with local producers, communities and environmental organizations to help coordinate and fund initiatives that increase productivity while reducing deforestation….

      The proof is in the numbers. If it were a country, Mato Grosso would have been the world’s fifth-largest greenhouse gas emitter in 2004. By 2014 it would have been 50th. If it stays on track, it could cumulatively reduce emissions by more than six gigatons—that’s nearly the annual emissions of the U.S.—by 2030….

      Success ultimately will depend on local buy-in. In the lush Araguaia Valley, Caio Penido has enlisted 70 other ranchers to comply with Brazil’s forest laws, conserving 160,000 acres of forest while still intensifying production. The ranchers’ efforts will cut carbon emissions by half a million tons by next year….

Gut Bacteria Linked to Mental Well-being and Depression

[These excerpts are from an article by Elizabeth Pennisi in the 8 February 2019 issue of Science.]

      Of all the many ways the teeming ecosystem of microbes in a person’s gut and other tissues might affect health, its potential influences on the brain may be the most provocative. Now, a study of two large groups of Europeans has identified several species of gut bacteria that are largely missing in people with depression. The researchers can’t say whether the absence is a cause or an effect of the illness, but they showed that many gut bacteria could make or break down substances that affect nerve cell function—and maybe mood.

      …took a closer look at 1054 Belgians they had recruited to assess a “normal” microbiome. Some in the group—173 in total—had been diagnosed with depression or had done poorly on a quality of life survey, and the team compared their microbiomes with those of other participants.

      Two kinds of bacteria, Coprococcus and Dialister, were missing from the microbiomes of the depressed subjects, but not from those with a high quality of life. The finding held up when the researchers allowed for factors such as age, sex, or antidepressant use, all of which influence the microbiome, the team reports this week in Nature Microbiology. And when the team looked at another group—1064 Dutch people whose microbiomes had also been sampled—they found the same two species were missing in depressed people, and they were also missing in seven subjects suffering from severe clinical depression. The data don’t prove causality, Raes says, but they are “an independent observation backed by three groups of people.”

      Looking for something that could link microbes to mood, Raes and his colleagues compiled a list of 56 substances important for proper nervous system function that gut microbes either produce or break down. They found that Coprococcus seems to make a metabolite of dopamine, a brain signal involved in depression, although it’s not clear whether the bacteria break down the neurotransmitter or whether the metabolite has its own function. The same microbe makes an anti-inflammatory substance called butyrate; increased inflammation may play a role in depression. (Depressed subjects also had an increase in bacteria implicated in Crohn disease, an inflammatory disorder.)

      Linking the bacteria to depression “makes sense physiologically….” Still, no one has shown that microbial compounds in the gut influence the brain. One possible channel is the vagus nerve, which links that organ and the gut.

      Resolving the microbiome-brain connection “might lead to novel therapies,” Raes suggests. Some physicians are already exploring probiotics—oral bacterial supplements—for depression, although most don't include the missing gut microbes identified in the new study….

Learning to Think

[These excerpts are from an editorial by Steve Metz in the February 2019 issue of The Science Teacher.]

      We are bombarded daily by a barrage of claims and counter-claims. Cable news commentary, social media, and partisan political pronouncements routinely ask us to accept opinion masquerading as fact, presented alongside data that is often misleading, out of context, or even patently false. Has there ever been a greater need for the rigorous, evidence-based critique of ideas?

      In an age where facts must compete with “alternative facts,” it is more important than ever for our students to learn and practice the skills of scientific argumentation. Taken from the Latin arguere—to make bright or enlighten—argument is central to scientific progress….

      When students defend and critique scientific explanations, experimental designs, or engineering solutions, they learn to create and evaluate arguments using evidence and logical reasoning. Through critical discourse, they are challenged to distinguish opinion from evidence. They learn that argumentation is how scientists collaboratively construct and revise scientific knowledge.

      Argumentation requires students to engage in a social process as they consider competing ideas from multiple voices and generate knowledge through peer-to-peer interaction. They develop the important skill of respectfully considering more than one competing idea. Creating multiple opportunities for students to engage in argumentation can thus promote equity in a classroom where all ideas are valued and the whole class works together in the evaluation and revision of diverse ideas and sources of evidence.

      Perhaps most importantly, argumentation and critique help students learn life skills that extend far beyond the science classroom. Students develop the understanding that claims must be supported by evidence and sound reasoning, not by opinion, belief, emotion, or appeals to authority. Evidence-based argumentation and critique are our only defense against prejudice, pseudoscience, and demagoguery.

Algae Suggest Eukaryotes Get Many Gifts of Bacterial DNA

[These excerpts are from an article by Elizabeth Pennisi in the 1 February 2019 issue of Science.]

      Algae found in thermal springs and other extreme environments have heated up a long-standing debate: Do eukaryotes—organisms with a cell nucleus—sometimes get an evolutionary boost in the form of genes transferred from bacteria? The genomes of some red algae, single-celled eukaryotes, suggest the answer is yes. About 1% of their genes have foreign origins, and the borrowed genes may help the algae adapt to their hostile environment….

      Many genome studies have shown that prokaryotes—bacteria and archaea—liberally swap genes among species, which influences their evolution. The initial sequencing of the human genome suggested our species, too, has picked up microbial genes. But further work demonstrated that such genes found in vertebrate genomes were often contaminants introduced during sequencing.

      …any such transfers only occurred episodically—early in the evolution of eukaryotes, as they internalized the bacteria that eventually became organelles such as mitochondria or chloroplasts….

In Hot Water

[These excerpts are from an article by Warren Cornwall in the 1 February 2019 issue of Science.]

      …The data, collected by research trawlers, indicated cod numbers had plunged by 70% in 2 years, essentially erasing a fishery worth $100 million annually. There was no evidence that the fish had simply moved elsewhere. And as the vast scale of the disappearance became clear, a prime suspect emerged: “The Blob.”

      In late 2013, a huge patch of unusually warm ocean water, roughly one-third the size of the contiguous United States, formed in the Gulf of Alaska and began to spread. A few months later, Nick Bond, a climate scientist at the University of Washington in Seattle, dubbed it The Blob. The name, with its echo of a 1958 horror film about an alien life form that keeps growing as it consumes everything in its path, quickly caught on. By the summer of 2015, The Blob had more than doubled in size, stretching across more than 4 million square kilometers of ocean, from Mexico’s Baja California Peninsula to Alaska’s Aleutian Islands. Water temperatures reached 2.5°C above normal in many places.

      By late 2016, the marine heat wave had crashed across ecosystems all along North America’s western coast, reshuffling food chains and wreaking havoc. Unusual blooms of toxic algae appeared, as did sea creatures typically found closer to the tropics….Small fish and crustaceans hunted by larger animals vanished. The carcasses of tens of thousands of seabirds littered beaches. Whales failed to arrive in their usual summer waters. Then the cod disappeared.

      The fish “basically ran out of food,” Barbeaux now believes. Once, he didn’t think a food shortage would have much effect on adult cod, which, like camels, can store energy and go months without eating. But now, it is “something we look at and go: ‘Huh, that can happen.’”

      Today, 5 years after The Blob appeared, the waters it once gripped have cooled, although fish, bird, and whale numbers have yet to recover. Climate scientists and marine biologists, meanwhile, are still putting together the story of what triggered the event, and how it reverberated through ecosystems. Their interest is not just historical.

      Around the world, shifting climate and ocean circulation patterns are causing huge patches of unusually warm water to become more common, researchers have found. Already, ominous new warm patches are emerging in the North Pacific Ocean and elsewhere, and researchers are applying what they've learned from The Blob to help guide predictions of how future marine heat waves might unfold. If global warming isn’t curbed, scientists warn that the heat waves will become more frequent, larger, more intense, and longer-lasting. By the end of the century, Bond says, “The ocean is going to be a much different place.”

      …The Blob was spawned, experts say, by a long-lasting atmospheric ridge of high pressure that formed over the Gulf of Alaska in the fall of 2013. The ridge helped squelch fierce winter storms that typically sweep the gulf. That dampened the churning winds that usually bring colder, deeper water to the surface, as well as transfer heat from the ocean to the atmosphere—much like a bowl of hot soup cooling as a diner blows across it. As a result, the gulf remained unusually warm through the following year.

      But it took a convergence of other forces to transform The Blob into a monster. In the winter of 2014-15, winds from the south brought warmer air into the gulf, keeping sea temperatures high. Those winds also pushed warm water closer to the coasts of Oregon and Washington. Then, later in 2015 and in 2016, the periodic warming of the central Pacific known as El Niño added more warmth, fueling The Blob’s growth. The heat wave finally broke when La Niña—El Niño’s cool opposite number—arrived at the end of 2016, bringing storms that stirred and cooled the ocean….

      Krill—tiny shrimp that, like copepods, are a key food for many fish—felt the heat, too. In 2015 and 2016, as The Blob engulfed the coasts of Washington and Oregon, the heat-sensitive creatures vanished from biologists’ nets.

      As the base of the food chain crumbled, the effects propagated upward. One link higher, swarms of small fish that dine on copepods and krill—and in turn become food for larger animals—also became scarce as warm waters spread. On a remote island in the northern gulf, where scientists have tracked seabird diets for decades, they noticed that capelin and sand lance, staples for many bird species, nearly vanished from the birds' meals. In 2015, by one estimate, the populations of most key forage fish in the gulf fell to less than 50% of the average over the previous 9 years….

Democracy’s Plight

[These excerpts are from an editorial by Rush Holt in the 1 February 2019 issue of Science.]

      Scientists work with a deep sense that their quest for reliable knowledge leads somewhere—that following the evidence and excluding bias help to make sense of the world. It may be a slow process, and interactions in the scientific community are not without friction and false steps, yet scientists are devoted to the quest because they observe that it works. One can make sense of the world. Einstein famously said, “the eternal mystery of the world is its comprehensibility,” and scientists understand that evidence-based scientific thinking leads to this comprehension. Scientists could do a better job of sharing this powerful insight.

      …Observers speak of “truth decay,” dismissal of expertise, and neglect of evidence. Collectively, these are problems of enormous importance because they threaten democracy itself. Democracy is at risk when it becomes simply a contest of fervently held opinions or values not grounded in evidence. When one opinion is as good as another—each asserted as strongly, and even as deceptively, as possible—democracy cannot survive. Society is drowning in a sea of unmoored opinions and values. Democracy requires a citizenry that is informed, as well as engaged. We must find an opening to reinforce among citizens a renewed appreciation for evidence. Approaching an understanding of the actual state of things is what science does well.

      In the United States, the public's approval and trust in science are relatively strong compared with other institutions, a finding that has been observed in public surveys for decades. Here, then, may be an opening. Can scientists share the admired successes of science in a manner that leads citizens to embrace for themselves the essence of science? This essence of science is to demand evidence at every turn and to discard ideas when they are shown not to comport with the evidence. This thinking is not reserved solely for scientists, and one need not be an expert to demand evidence. Given the public's respect for science and science’s reverence for evidence, can the public be moved to demand and embrace evidence for themselves? This connection seems logical—we need to make it feasible. Can scientists achieve what formal civic education has not achieved? There is no time to waste in finding this out.

From Competition to Collaboration

[These excerpts are from an op-ed article by Giovanni Camanni in the 25 January 2019 issue of Science.]

      It was just like any other morning. I was at the bus stop, on my way to the lab where I was a post-doctoral fellow. But as I watched the people around me—headphones dangling from their ears, eyes cast down, unsmiling faces—something began to stir inside me. They looked unhappy. And, I realized, I was one of them. Suddenly, I could no longer continue with my work life. I turned around, went back to my flat, and booked a one-way ticket to fly home the next morning. I didn’t know how long I would be away or what would come next. All I knew was that, even though I loved science and research, what I had been doing wasn’t working.

      Over the years, as I dealt with the pressures of finishing my Ph.D. and securing and starting my post-doc, I had grown more competitive. To prove that I was a valuable researcher, I pushed myself to be the first to generate sensational results and to publish in high-impact journals. Those who could have been collaborators became rivals I resented.

      But the effect of this competitive streak was exactly the opposite of what I had hoped for. The pressure became overwhelming. When I encountered scientific problems, I thought I had to solve them myself instead of asking for help. I began to feel alone and lost. I became less and less productive. But the culture of academia—prizing competition and individual successes above all else—seemed to reinforce my approach. I was sure that this was not the right time to show any insecurities, so I persevered.

      That day at the bus stop, I hit my breaking point. The race had to end….

      Three months after I left so suddenly, I was prepared to go back to work. I was excited to get back to the science that I loved, and I now had a foundation to be more open with my colleagues. I understood that we all struggle sometimes, and that vulnerability and collaboration can be more powerful than competition. It doesn’t have to be a zero-sum game.

      The first days were difficult. I had naively thought that, right away, everything would be different. But as soon as I was back in that workplace, I felt the stirrings of that old competitiveness. I focused on maintaining my new perspective and being patient as I readjusted. With a bit of time, I understood that, although the place and position were the same, I had changed. I hadn't just accepted my vulnerability; I had embraced it and opened up about it to my colleagues.

      As a result, collaboration has replaced competition. Working with others and seeking help doesn’t diminish my value or contributions; it means we can all win. Now, when I encounter problems in my work, I frequently discuss them with colleagues, knowing that considering multiple points of view often leads to solutions. I have become more productive. Working relationships are now genuine human ones. I no longer feel like one of the lonely, unhappy people at the bus stop.

Light Skin May Be Legacy of Native American Ancestors

[These excerpts are from an article by Lizzie Wade in the 25 January 2019 issue of Science.]

      Walk down a busy street in most Latin American cities today and you’ll see a palette of skin colors from dark brown to sepia to cream. For 500 years, people have assumed this variation comes from the meeting and mixing of Native Americans, Europeans, and Africans during colonial times and later. People with lighter skin are thought to have more European ancestry, whereas those with darker skin are taken to have more Native American or African ancestry—and are often targeted for discrimination.

      Now, a new study of the genes of more than 6000 people from five Latin American countries undercuts the simplistic racial assumptions often made from skin color. An international team discovered a new genetic variant associated with lighter skin found only in Native American and East Asian populations. That means that in Latin America, lighter skin can reflect Native American as well as European ancestry….

      Latin America is fertile ground for such studies. People there often have Native American, European, and African ancestors, and because Native American populations are closely related to those from East Asia, researchers can also spot East Asian variants in Latin American genomes….

      …People at high latitudes in Europe and East Asia seem to have independently evolved lighter skin to produce vitamin D more efficiently with less sunlight….

      The larger lesson…is the pitfalls of a Eurocentric view. “Our study shows that going beyond Europeans one can find additional genes, even for well-studied traits. Clearly the bias towards Europeans has led to a restricted view of human diversity.”

Where to Find Fantastic Beasts at Sea

[These excerpts are from an article by Nicholas D. Pyenson in the 25 January 2019 issue of Science.]

      The biggest predators in the oceans captivate us for good reasons: Sharks, billfishes, whales, and penguins have big appetites, range over large distances, and have achieved similar body forms from vastly different starting points on the tree of life. Evolutionary convergences among large marine predators are also more than skin deep; those with ancestries on land, such as marine mammals and seabirds, have independently evolved an array of molecular and tissue specializations for maximizing oxygen and warmth. Beyond these fantastic traits, marine predators also possess large body sizes and trophic linkages that make them ecologically important consumers in marine food webs….

      Macroecologists who study marine predators have long known that convergences in form do not yield similar ecological distributions over geographic space. Most marine mammal species occur at higher latitudes, whereas sharks and fishes are found closer to the equator. Endothermic marine mammals are also most plentiful in polar and temperate seas. This latitudinal distribution contrasts with that of nearly every other marine animal group, which shows peaks in equatorial to temperate seas. Why the difference?...a possible answer: A fundamental asymmetry in metabolism gives endotherms an advantage when hunting in colder, more prey-rich waters.

      …The basic distribution maps show startling gaps in top predator occupancy across the globe. For example, there is a lack of marine mammals in the Indo-Australian Archipelago, despite this region being an epicenter for marine biodiversity….

      Explaining this geographic pattern requires revisiting the models that describe the cost-benefit trade-offs of top predators feeding on their prey. On first principles alone, individual endothermic predators maintain consistent metabolism across latitudes, whereas the metabolism of ectotherms plummets in colder waters, which would affect relative foraging performance. Scaling up to the ecosystem level, where prey production is relatively uniform, each endothermic and ectothermic predator species should have strongly differential consumption rates across latitudes—all driven by water temperature as a primary structuring factor….

Oldest Images of Dogs Show Hunting, Leashes

[These excerpts are from an article by David Grimm in the 17 November 2018 issue of Science.]

      Carved into a sandstone cliff on the edge of a bygone river in the Arabian Desert, a hunter draws his bow for the kill. He is accompanied by 13 dogs, each with its own coat markings; two animals have lines running from their necks to the man's waist.

      The engravings likely date back more than 8000 years, making them the earliest depictions of dogs, a new study reveals. And those lines are probably leashes, suggesting that humans mastered the art of training and controlling dogs thousands of years earlier than previously thought….

      The hunting scene comes from Shuwaymis, a hilly region of northwestern Saudi Arabia where seasonal rains once formed rivers and supported pockets of dense vegetation….

      Starting about 10,000 years ago, hunter-gatherers entered—or perhaps returned to—the region. What appear to be the oldest images are thought to date to this time and depict curvy women. Then about 7000 to 8000 years ago, people here became herders, based on livestock bones found at Jubbah; that's likely when pictures of cattle, sheep, and goats began to dominate the images. In between—carved on top of the women and under the livestock—are the early hunting dogs: 156 at Shuwaymis and 193 at Jubbah. All are medium-sized, with pricked up ears, short snouts, and curled tails—hallmarks of domestic canines. In some scenes, the dogs face off against wild donkeys. In others, they bite the necks and bellies of ibexes and gazelles. And in many, they are tethered to a human armed with a bow and arrow.

      …the engravings may not be as old as they seem. To confirm the chronology, scientists will need to link the images to a well-dated archaeological site—a challenge, she says, because “the archaeological record in this region is really spotty.”

Beyond Plastic Waste

[These excerpts are from an editorial by Dame Ellen MacArthur in the 17 November 2018 issue of Science.]

      With more than 8 million tons of plastic entering the ocean each year, humanity must urgently rethink the way we make and use plastics, so that they do not become waste in the first place.

      Cheap, light, and versatile, plastics are the dominant materials of our modern economy. Their production is expected to double over the next two decades. Yet, only 14% of all plastic packaging is collected for recycling after use, and vast quantities escape into the environment. This not only results in a loss of $80 billion to $120 billion per year to the global economy, but if the current trend continues, there could be more plastic than fish by weight in the oceans by 2050.

      Some companies have started changing their habits. Unilever, for example, has promised that by 2025, all its plastic packaging will be fully reusable, recyclable, or compostable in a commercially viable manner. Given that up to a third of all plastic packaging items are too small (such as straws and sachets) or too complex (such as multimaterial films and take-away coffee cups) to be economically recycled, achieving these commitments will require a great degree of redesign and innovation.

      Such company commitments and innovations are a step in the right direction. But creating a plastics system that works will require collaboration among all participants in the plastics sector….

      …Bans on or charges for single-use shopping bags have, for example, led to rapid reductions in their use in France, Rwanda, and the United Kingdom. A few uncommon types of plastic used in packaging are too expensive to recycle and should be phased out. A science-based approach is needed to replace chemicals such as endocrine disruptors that are found in some plastics and pose a risk to human health.

      Such restrictions need to be complemented by mechanisms that foster innovation. Policy-makers can connect the design of plastic packaging with its collection, sorting, and subsequent reuse, recycling, or composting by supporting deposit-refund schemes for drinks bottles, as in Germany and Denmark, or by requiring producers to consider what happens to their packaging products after use. A useful policy approach is extended producer responsibility (EPR), which makes producers responsible for the entire product life cycle. EPR policies have been introduced in European Union legislation and at the national level for packaging, batteries, vehicles, and electronics. Such policies can support good design and improve the economics of after-use options for packaging materials.

      However, the most potent tool for policy-makers remains the setting of a clear common vision and credible high-level ambitions that drive investment decisions. In the case of plastics, a crucial pillar of such a policy ambition must be stimulating scientific breakthroughs in the development of materials that can be economically reused, recycled, or composted….

Massive Fish Die-off Sparks Outcry in Australia

[These excerpts are from an article by Dennis Normile in the 25 January 2019 issue of Science.]

      Australians knew another long drought was hammering the country’s southeast. But it took a viral Facebook video posted on 8 January to drive home the ecological catastrophe that was unfolding in the Murray-Darling river system. In the footage, Rob McBride and Dick Arnold, identified as local residents, stand knee-deep among floating fish carcasses in the Darling River, near the town of Menindee. They scoff at authorities’ claims that the fish die-off is a result of the drought. Holding up an enormous, dead Murray cod, a freshwater predator he says is 100 years old, McBride says: “This has nothing to do with drought, this is a manmade disaster.” Arnold, sputtering with rage, adds: “You have to be bloody disgusted with yourselves, you politicians and cotton growers.”

      Scientists say McBride probably overestimated the age of the fish. But they agree that the massive die-off was not the result of drought….

      Excessive water use has left river flows too low to flush nutrients from farm runoff through the system, leading to large algal blooms, researchers say. A cold snap then killed the blooms, and bacteria feeding on the dead algae sucked oxygen out of the water, suffocating between 100,000 and 1 million fish. The death of so many individuals that had survived previous droughts is “unprecedented,” says ANU ecologist Matthew Colloff. And with fish of breeding age decimated, recovery will be slow. “But only a bloody fool would put a time frame on that,” Colloff says.

      This wasn’t supposed to happen. In 2012, the national government adopted the Murray-Darling Basin Plan, touted as a “historic” deal to ensure that enough water remained in the rivers to keep the ecosystem healthy even after farmers and households took their share….

      The 1-million-square-kilometer Murray-Darling Basin accounts for 40% of Australia’s agricultural output, thanks in part to heavy irrigation. By the early 2000s, water flows in the lower reaches of the basin were just a third of historical levels, according to a 2008 study. During the millennium drought, which started in the late 1990s and lasted for a decade, downstream communities faced water shortages.

      In 2008, the federal government created the Murray-Darling Basin Authority to wrestle with the problem. In 2010, a study commissioned by the authority concluded that farmers and consumers would have to cut their use of river water by at least 3000 but preferably by 7600 gigaliters annually to ensure the health of the ecosystem. Farmers, who saw their livelihoods threatened, tossed the report into bonfires.

      The final plan, adopted as national law in 2012, called for returning just 2750 gigaliters to the rivers, in part by buying water rights back from users….

      Implementation has exacerbated the problems. Since 2012, the federal government has spent AU$6 billion on the plan, but two-thirds has gone to improving irrigation infrastructure, on the premise that efficiency would ease demands on the rivers. Critics say the money would have been better spent on purchasing water rights.

      Grafton says there are also suspicions of widespread water theft; up to 75% of the water taken by irrigators in the northern part of the system is not metered. Farmers are also now recapturing the runoff from irrigated fields that used to flow back into streams, and are increasing their use of ground water, leaving even less water in the system….

      In February 2018, such issues prompted a group of 12 academics, including scientists and policy experts, to issue the Murray-Darling Declaration. It called for independent economic and scientific audits of completed and planned water recovery schemes to determine their effects on stream flows. The group, which included Williams and Grafton, also urged the creation of an independent, expert body to provide advice on basin water management. Young, who wasn't on the declaration, wants to go further and give that body the power to manage the basin’s water, the way central banks manage a country’s money supply, using stream levels to determine weekly irrigation allocations and to set minimum flow levels for every river.

      Before the fish kill, such proposals had garnered little attention. But Young hopes the public outrage will influence federal elections that have to take place by mid-May….

Bearing Witness

[These excerpts are from an article by Douglas Starr in the 25 January 2019 issue of Science.]

      In a St. Louis, Missouri, courtroom last summer, David Egilman testified in a lawsuit filed by 22 women who claimed to have contracted ovarian cancer from exposure to Johnson & Johnson (J&J) baby powder. Like millions of women before them, they had dusted their babies with the powder and used it on themselves thousands of times. They alleged the talc was tainted with asbestos and that exposure to the carcinogenic fiber likely played a role in their cancers.

      Egilman, a professor of family medicine at Brown University who served as a paid expert witness, brought science and medical gravitas to his testimony. He interviewed the women about their frequency and duration of talc use, and he factored in the levels of asbestos that outside scientists had found in samples of talc, which is sometimes mined from formations that also yield asbestos. From those data, he said, he could calculate the women’s doses of the carcinogen, and he argued that their exposure had doubled their risk of ovarian cancer.

      J&J had its own experts, who said the talc was asbestos-free and there was no proof that the product caused cancer. But Egilman’s decisive contribution may have come from his scrutiny of company documents. After examining thousands of pages of internal J&J documents unearthed during the litigation, he and his student researchers concluded that J&J found no asbestos in the talc because its tests were not sensitive enough. He compared the company's methods to trying to weigh a needle on a bathroom scale.

      …(Since the trial, other internal J&J documents have become public, which show the company knew about the asbestos contamination for decades but suppressed the information.)

      Egilman believes corporations have minimized their costs at the expense of their employees’ health and that of the public and the environment—and that recent weakening of regulations has made that problem worse. He sees speaking out in court as a mission. “As a doctor, I can treat one cancer patient at a time,” he explained during a trial last year. “But by being here, I have the potential to save millions.” More broadly, he says corporate money and power have intimidated scientists and corrupted science itself—a concern that has led him into battle not only with corporations, but also with journals that publish what he describes as tainted results.

      In 35 years as an expert witness, he has given depositions and testimony in more than 600 cases of occupational or environmental disease. He has helped win billions of dollars for injured or sick workers or consumers, or for the families of those who have died. By his reckoning, he has earned more than $5 million for such legal work; he says he has donated some of his fees to charities, including a nonprofit he founded to improve health in developing countries.

      Specializing in occupational lung disease, Egilman diagnoses patients and marshals data. But he also digs into corporate records uncovered during litigation, invariably finding memos and studies showing that companies knew about industrial hazards long before warning employees or the public….

      Egilman’s sense of mission propels a prodigious work ethic—by his own estimate, he works 100 to 120 hours per week—and sometimes emboldens him to act rashly. In 2007, patients were suing Eli Lilly and Company, claiming that its antipsychotic medicine olanzapine (Zyprexa) had caused dramatic weight gain and diabetes. Reviewing company documents as an expert witness for the plaintiffs, Egilman found emails implying Eli Lilly knew of the danger but long tried to play it down. The judge ordered that the documents be kept confidential to protect the company’s proprietary sales and marketing strategies, but Egilman leaked them to a New York Times reporter. “A physician’s oath never says to keep your mouth shut,” Egilman says.

      The court was not amused by that “brazen breach” of protocol. “They could have put me in jail,” Egilman says. To avoid criminal charges, he paid a $100,000 settlement. But after the newspaper story ran, 30 states subpoenaed documents on Eli Lilly's sales and marketing activities, showing the same incriminating behavior. In 2009, the company agreed to pay $1.4 billion to settle criminal charges and civil suits….

Nuclear Power and Promise

[These excerpts are from a book review by Jacob Darwin Hamblin in the 18 January 2019 issue of Science.]

      Less than a year after the Fukushima nuclear accident in Japan, physicist Gregory B. Jaczko tried to break the “first commandment” of nuclear regulation: Thou shalt not deny a license to operate a reactor. As chairman of the U.S. Nuclear Regulatory Commission (NRC), he knew that the tradition was to encourage doomed applications to be withdrawn. But when one company refused, Jaczko dug in his heels and opposed the license. It turned out to be a futile gesture that the other commissioners opposed. But it was one of many examples, he contends, of the weaknesses in the nation’s top nuclear regulatory body and an exemplar of its obeisance to the nuclear power industry.

      Confessions of a Rogue Nuclear Regulator is one part engrossing memoir and another part seething diatribe, depicting a government agency that routinely caves to industry pressure….

      …The ensuing political fracas convinced Jaczko that the nuclear industry used the NRC as a tool for promoting rather than regulating nuclear power. He believes that a national repository for radioactive waste puts too much responsibility on the taxpayer….

      The answer is to stop producing nuclear waste, argues Jaczko, and indeed stop producing nuclear power at all. He wishes that as chairman, he’d “had the courage to say this, but my courage had its limits.”

      Most of Jaczko’s short book hammers on the theme that industry lobbyists hold sway over the would-be regulators. He highlights the longstanding concept of “enforcement discretion” and skewers it as one of nuclear regulation’s “greatest oxymorons.”

      Rather than demand safety compliance, the NRC historically has allowed nuclear plants to develop alternative approaches and has granted exceptions and exemptions. Recounting an episode in which he tried to abolish enforcement discretion in fire safety, Jaczko writes: “What happened over the next several weeks was more brutal than Roman imperial succession.”

      The political infighting was particularly intense after the 2011 Fukushima disaster. Jaczko visited Japan and grew impatient with the "litany of guarantees" from industry about American nuclear facilities. He tried to insist on new requirements to mitigate accidents triggered by natural disasters such as floods, earthquakes, and tsunamis. One internal NRC report drafted after Fukushima criticized the practice of relying on voluntary industry initiative to address safety concerns. Jaczko’s descriptions of other commissioners’ attempts to quash or edit the report provide a disturbing glimpse of the dynamic of trust and betrayal within the agency….

Flotilla Launches Large Survey of Antarctic Krill

[These excerpts are from an article by Erik Stokstad in the 18 January 2019 issue of Science.]

      Krill, crustaceans smaller than a cigarette, play an outsize role in the ecology of the ocean around Antarctica: Penguins, whales, and other predators feast on vast swarms of the shrimplike animals. Now, researchers have launched a broad international survey of krill’s main habitat in and around the Scotia Sea—the first in nearly 20 years—to learn whether a growing fishing industry is leaving enough for krill’s natural predators….

      Soviet vessels were the first to ply the Southern Ocean for krill, which was ground into fish meal. By the 1980s, scientists began to worry about the effect on krill-feeding predators. The Convention for the Conservation of Antarctic Marine Living Resources (CCAMLR), a treaty organization established in 1982, set tight limits on fishing, now at 620,000 tons per year. Most fishing stopped after the 1991 collapse of the Soviet Union, but it has been slowly growing again. Norway takes about half the current catch, extracting omega-3 fatty acids for nutritional supplements….

      During the survey, vessels will retrace the previous transects, measuring krill swarms with echosounders, a kind of sonar, and confirming the identification with sampling trawls. Some ships will measure oceanographic variables as well, such as temperature, currents, and plankton, to see whether they can be used to predict krill abundance….

      The survey itself won't be able to reveal how the overall krill population in the Scotia Sea might have changed since the 2000 survey, given the variability of krill populations over space and time. Finding out what drives population changes will require more research on the seasonal movement of krill, for example, and the impact of climate change. Loss of sea ice, which protects young krill from predators, is expected to reduce their abundance, and rising water temperatures and acidification could also pose serious threats—ones that even the best management plan might not avert.

Did Neurons Arise from an Early Secretory Cell?

[These excerpts are from an article by Elizabeth Pennisi in the 18 January 2019 issue of Science.]

      Swimming through the oceans, voraciously consuming plankton and other small creatures—and occasionally startling a swimmer—the beautiful gelatinous masses known as comb jellies won’t be joining Mensa anytime soon. But these fragile creatures have nerve cells—and they offer insights about the evolutionary origins of all nervous systems, including our own. Inspired by studies of a glue-secreting cell unique to these plankton predators, researchers have now proposed that neurons emerged in the last common ancestor of today’s animals—and that their progenitors were secretory cells, whose primary function was to release chemicals into the environment….

      Today, nerve cells are among the most specialized cell types in the body, able to transmit electrical signals, for example. Some versions talk to each other, others relay information from the environment to the brain, and still more send directives to muscles and other parts of the body. They are an almost universal feature of animals; only sponges and placozoans, an obscure group of tiny creatures with the simplest of animal structures, lack them….

      When and how the animal nervous system arose has remained murky, however….

Why We Need Fetal Tissue Research

[These excerpts are from an editorial by Sally Temple and Lawrence S.B. Goldstein in the 18 January 2019 issue of Science.]

      A vocal minority in the United States is intent on stopping federal funding for research using human fetal tissue, citing stem cell-based or other alternatives as adequate. This view is scientifically inaccurate. It ignores the current limitations of stem cell research and disregards the value of fetal tissue research in finding therapies for incurable diseases. If there is to be continued rapid progress in treating cancer, birth defects, heart disease, and infectious diseases, then we need fetal tissue research.

      Life-saving advances, including the development of vaccines against rubella, rabies, and hepatitis A viruses, and antiviral drugs that prevent HIV/AIDS, required fetal tissue research. Today, fetal tissue is being used to develop new medicines including vaccines for HIV/AIDS, preventives for Zika virus, and immunotherapies to battle untreatable cancers.

      Although research into alternatives is worthwhile, there are several aspects of fetal tissue research for which alternatives do not and will not exist. For example, to discover which fetal cells go awry and cause childhood cancers such as retinoblastoma, a cancer of the eye, or rhabdomyosarcoma, a muscle cancer, we must understand which cells are the culprits. For that, we need access to relevant fetal tissues. Zika virus can cross the placenta and attack specific fetal brain cells. To determine the mechanism of viral entry, which cell types are vulnerable, and how to prevent infection and damage, we need access to fetal brain tissue. Beyond diseases affecting children, some forms of hereditary Alzheimer’s disease cause neural impairments in utero that persist over decades and manifest later in life. Without access to fetal cells, we cannot understand and effectively combat diseases that begin in utero.

      Fetal tissue alternatives for some applications may be developed as science advances, but this will take time. The transition to any new model of research should be data-driven and based on scientific evidence. Opponents of fetal tissue research state that human stem cell-derived organoids are adequate models, supplanting fetal tissue research, but that is incorrect….

      Although alternatives may be established in some cases, fetal tissue remains an essential resource for many applications. It is important to remember that the fetal tissue used in research would otherwise be discarded and thus unavailable in the fight against disease. U.S. researchers also follow rigorous, well-established medical and ethical standard practices for this research. Fetal tissue research has been supported for decades by both Republican and Democratic administrations and congresses. Rigorous U.S. government-sponsored review processes have also concluded that this research is ethical and valuable.

The Bones of Bears Ears

[These excerpts are from an article by April Reese in the 18 January 2019 issue of Science.]

      Thousands of such rare fossils pepper Bears Ears, a sweep of buttes and badlands whose candy-striped sedimentary rocks catalog hundreds of millions of years of Earth’s history. The region’s rich paleontological and archaeological record—and the lobbying of southwestern tribes whose ancestors lived here—persuaded former President Barack Obama to designate the area a national monument just over 2 years ago, in the waning days of his administration.

      Now, those fossils, and the influx of special research funding that came with the designation, are under threat. In December 2017, urged on by Utah officials, President Donald Trump slashed the size of the 547,000-hectare monument by 85%, leaving just 82,000 hectares split into two separate units. Since Trump’s order took effect in February 2018, the excised lands, which hold thousands of Native American artifacts and sites—and possibly the world’s densest cache of fossils from the Triassic period, roughly 250 million to 200 million years ago—are open again to mining, expanded grazing, and cross-country trekking by off-road vehicles.

      That prospect spurred the typically apolitical Society of Vertebrate Paleontology (SVP), based in Bethesda, Maryland, to sue the Trump administration in federal court, joining archaeologists, environmentalists, outdoor companies, and five Native American tribes. Their argument: The 1906 Antiquities Act used to create Bears Ears only allows presidents to establish monuments—not to drastically reduce them. The cutbacks represent an “extreme overreach of authority,” SVP said in announcing the lawsuit just days after Trump’s move. If SVP wins, the ruling could set a precedent that would help safeguard the boundaries of the 158 national monuments created under presidential authority; if it loses, future presidents could gain new powers to downsize them.

      …A similarly rich fossil trove, from the era when dinosaurs ruled, helped make the case for that monument, which was established by then-President Bill Clinton in 1996 and cut in half by Trump in another December 2017 proclamation. An influx of federal funding followed, which Polly credits with allowing researchers to uncover some of the world’s best records of the Late Cretaceous.

      Within 10 years, researchers had discovered fossils from 25 taxa new to science and documented the rise of flowering plants, insects, and the ancestors of mammals between 145 million and 66 million years ago….

      Bears Ears’s record begins earlier, more than 340 million years ago, when the supercontinent Pangaea spanned much of the planet. A tropical sea that covered the area began to fill with sediment shed by the uplifting Rocky Mountains, leaving thousands of prehistoric sea creatures, mammallike reptiles, and dinosaurs entombed in hardened mudflats. Some of those fossils help tell the story of the “great dying” 252 million years ago, which killed 96% of marine species and 70% of terrestrial ones, clearing the way for dinosaurs. Others chronicle the End Triassic extinction some 50 million years later, which wiped out 76% of terrestrial and marine life….

      The loss of monument status means those treasures could be exposed to many dangers. Off-road vehicles are now allowed to crisscross the monument’s former grounds, which are once again open to mining (although new projects must go through BLM’s usual review process). The land will also lose out on resources aimed at beefing up research, such as personnel—Grand Staircase got its own paleontologist, for example—and special funding to develop scientific and cultural resources.

      That money—part of federal funding for BLM lands protected for their scientific resources—not only funds ongoing projects and spurs new discoveries; it also helps ensure that scientists find those resources before looters do. Looting has long been a problem in San Juan County, where the monument is located….

Cheap Oil vs. Climate Change

[This excerpt is from a book report by Miriam R. Aczel in the 11 January 2019 issue of Science.]

      Since the discovery of oil in the 1930s, the six Gulf monarchies—Saudi Arabia, Qatar, Kuwait, Oman, Bahrain, and the United Arab Emirates—have become known for their seemingly endless supply of cheap energy and a growing domestic appetite for it. However, in the face of a changing climate, the very policies that led to their development and prosperity are today posing a threat as the region faces record-breaking temperatures. Jim Krane’s compelling Energy Kingdoms takes readers inside this critical energy conundrum.

      Krane evaluates the forces behind the Gulf region’s “feverish demand for its chief export commodity” and argues that the monarchies’ economic and political systems have created contradictory phenomena: Subsidization of energy, a key political institution, undermines oil and gas exporting, their main economic institution. “[L]ong term, these two crucial components of governance cannot remain in conflict. Either the political structures will bend or the economy will yield.”

How Fast Are the Oceans Warming?

[These excerpts are from an article by Lijing Cheng, John Abraham, Zeke Hausfather and Kevin E. Trenberth in the 11 January 2019 issue of Science.]

      Climate change from human activities mainly results from the energy imbalance in Earth’s climate system caused by rising concentrations of heat-trapping gases. About 93% of the energy imbalance accumulates in the ocean as increased ocean heat content (OHC). The ocean record of this imbalance is much less affected by internal variability and is thus better suited for detecting and attributing human influences than more commonly used surface temperature records. Recent observation-based estimates show rapid warming of Earth’s oceans over the past few decades….This warming has contributed to increases in rainfall intensity, rising sea levels, the destruction of coral reefs, declining ocean oxygen levels, and declines in ice sheets, glaciers, and ice caps in the polar regions. Recent estimates of observed warming resemble those seen in models, indicating that models reliably project changes in OHC….

      The fairly steady rise in OHC shows that the planet is clearly warming. The prospects for much higher OHC, sea level, and sea-surface temperatures should be of concern given the abundant evidence of effects on storms, hurricanes, and the hydrological cycle, including extreme precipitation events. There is a clear need to continue to improve the ocean observation and analysis system to provide better estimates of OHC, because it will enable more refined regional projections of the future. In addition, the need to slow or stop the rate of climate change and prepare for the expected impacts is increasingly evident.

A Fresh Look at Nuclear Energy

[This excerpt is from an editorial by John Parsons, Jacopo Buongiorno, Michael Corradini and David Petti in the 11 January 2019 issue of Science.]

      We are running out of time, as the Intergovernmental Panel on Climate Change (IPCC) warned last October in a special report, Global Warming of 1.5°C. National commitments under the 2015 Paris Agreement are only the first step toward decarbonization, but most countries are already lagging behind. It is time to take a fresh look at the role that nuclear energy can play in decarbonizing the world's energy system.

      Nuclear is already the largest source of low-carbon energy in the United States and Europe and the second-largest source worldwide (after hydropower). In the September report of the MIT Energy Initiative, The Future of Nuclear Energy in a Carbon-Constrained World, we show that extending the life of the existing fleet of nuclear reactors worldwide is the least costly approach to avoiding an increase of carbon emissions in the power sector. Yet, some countries have prioritized closing nuclear plants, and other countries have policies that undermine the financial viability of their plants. Fortunately, there are signs that this situation is changing. In the United States, Illinois, New Jersey, and New York have taken steps to preserve their nuclear plants as part of a larger decarbonization strategy. In Taiwan, voters rejected a plan to end the use of nuclear energy. In France, decisions on nuclear plant closures must account for the impact on decarbonization commitments. In the United Kingdom, the government’s decarbonization policy entails replacing old nuclear plants with new ones. Strong actions are needed also in Belgium, Japan, South Korea, Spain, and Switzerland, where the existing nuclear fleet is seriously at risk of being phased out.

      What about the existing electricity sector in developed countries—can it become fully decarbonized? In the United States, China, and Europe, the most effective and least costly path is a combination of variable renewable energy technologies—those that fluctuate with time of day or season (such as solar or wind energy), and low-carbon dispatchable sources (whose power output to the grid can be controlled on demand). Some options, such as hydropower and geothermal energy, are geographically limited. Other options, such as battery storage, are not affordable at the scale needed to balance variable energy demand through long periods of low wind and sun or through seasonal fluctuations, although that could change in the coming decades. Nuclear energy is one low-carbon dispatchable option that is virtually unlimited and available now. Excluding nuclear power could double or triple the average cost of electricity for deep decarbonization scenarios because of the enormous overcapacity of solar energy, wind energy, and batteries that would be required to meet demand in the absence of a dispatchable low-carbon energy source.

      One obstacle is that the cost of new nuclear plants has escalated, especially in the first-of-a-kind units currently being deployed in the United States and Western Europe. This may limit the role of nuclear power in a low-carbon portfolio and raise the cost of deep decarbonization. The good news is that the cost of new nuclear plants can be reduced, not only in the direct cost of the equipment, but also in the associated civil structures and in the processes of engineering, licensing, and assembling the plant….

Seeing the Dawn

[These excerpts are from an article by Robert F. Service in the 11 January 2019 issue of Science.]

      A cataclysm may have jump-started life on Earth. A new scenario suggests that some 4.47 billion years ago—a mere 60 million years after Earth took shape and 40 million years after the moon formed—a moon-size object sideswiped Earth and exploded into an orbiting cloud of molten iron and other debris.

      The metallic hailstorm that ensued likely lasted years, if not centuries, ripping oxygen atoms from water molecules and leaving hydrogen behind. The oxygens were then free to link with iron, creating vast rust-colored deposits of iron oxide across our planet’s surface. The hydrogen formed a dense atmosphere that likely lasted 200 million years as it ever so slowly dissipated into space.

      After things cooled down, simple organic molecules began to form under the blanket of hydrogen. Those molecules, some scientists think, eventually linked up to form RNA, a molecular player long credited as essential for life’s dawn. In short, the stage for life's emergence was set almost as soon as our planet was born.

      …No rocks or other direct evidence remain from the supposed cataclysm….

      The metal-laden rain accounts for the distribution of metals across our planet’s surface today. The hydrogen atmosphere would have favored the emergence of the simple organic molecules that later formed more complex molecules such as RNA. And the planetary crash pushes back the likely birthdate for RNA, and possibly life’s emergence, by hundreds of millions of years, which better aligns with recent geological evidence suggesting an early emergence of life.

      The impact scenario joins new findings from laboratory experiments suggesting how the chemicals spawned on early Earth might have taken key steps along the road to life—steps that had long baffled researchers. Many in the field see a consistent narrative describing how and when life was born starting to take shape….

      The case isn’t settled, Luptak and others say. Researchers still disagree, for example, over which chemical path most likely gave rise to RNA and how that RNA combined with proteins and fats to form the earliest cells….

      Since the 1960s, a leading school of thought has held that RNA arose first, with DNA and proteins evolving later. That’s because RNA can both serve as a genetic code and catalyze chemical reactions. In modern cells, RNA strands still work alongside proteins at the heart of many crucial cellular machines.

      In recent years, chemists have sketched out reactions that could have produced essential building blocks for RNA and other compounds….

The Last Resort

[These excerpts are from an article by Richard Conniff in the January 2019 issue of Scientific American.]

      …In a special report in October 2018, the Intergovernmental Panel on Climate Change warned that we have just 12 years to act if we hope to avoid slipping past 1.5 degrees C, the level regarded by most scientists as the furthest we can go if we hope to preserve life more or less as we know it.

      Staying under that threshold mandates a specific “carbon budget,” an overall amount of carbon dioxide we can add to the atmosphere without pushing warming beyond that temperature. At today’s emissions—about 40 billion to 50 billion tons a year—“there may be only five years’ worth of CO2 emissions left” in the 1.5-degree scenario….

      On a hardened lava field of boulders and moss in the foothills just outside Reykjavík, Iceland, a machine the size of a one-car garage pulls air through a chemical filter that extracts carbon dioxide. It is powered by waste heat from the geothermal power plant next door, and it pumps the captured carbon dioxide more than 700 meters underground, where the gas reacts with basalt rock and becomes solid mineral. Climeworks, a Swiss start-up, calls the operation the first direct air capture and storage plant in the world. It sequesters a modest 50 tons of carbon dioxide a year.

      Direct air capture and storage may be the most straightforward path to negative emissions: banks of fans would harvest CO2 from the sky and bury it. Scientific scenarios project that this technology could remove 10 billion to 15 billion tons of carbon dioxide a year by the end of the century; a few experts think 35 billion or 40 billion tons may be possible. This is such a tantalizing prospect that many climate scientists worry it could pose a moral hazard: people might think they can delay fossil-fuel reductions now in the hope of technological salvation later….

      Direct air capture also consumes enormous amounts of energy. Removing a million tons of carbon dioxide a year would require a 300- to 500-megawatt power plant….If that were a coal-fired plant, it would create more emissions than it would remove. If power came from solar or wind farms, it would cover a lot of land that might already be in demand for farming or nature. And of course, a million tons would barely make a dent in the target of 20 billion tons a year….
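
      [To put the capture and energy figures above on a common scale, here is a small Python sketch. The plant count is purely illustrative, scaling the 50-ton pilot plant to the low end of the 10-to-15-billion-ton scenario quoted above; it is not a deployment estimate.]

          # Scale the quoted direct-air-capture figures.
          pilot_tons_per_year = 50        # Climeworks pilot capacity quoted above
          target_tons_per_year = 10e9     # low end of the 10-15 billion ton/yr scenario
          plants_needed = target_tons_per_year / pilot_tons_per_year
          print(f"Pilot-scale plants needed for 10 Gt/yr: {plants_needed:,.0f}")

          # Energy demand: the article quotes 300-500 MW per million tons removed per year.
          for mw_per_megaton in (300, 500):
              total_gw = mw_per_megaton * (target_tons_per_year / 1e6) / 1000
              print(f"At {mw_per_megaton} MW per Mt/yr: about {total_gw:,.0f} GW of dedicated power")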

On the Safe Side

[These excerpts are from an article by Eric Lindberg in the Winter 2018 issue of the USC Trojan Family.]

      Childhood is a time for wonder and learning, for exploration and discovery, for making new friends and building confidence. But sometimes that period of innocence is shattered by violence, discrimination or harassment. And that turmoil tends to occur where kids spend most of their day: at school.

      Teachers and administrators deal with bad behavior every day in classrooms across the country.

      Nearly eight in 10 public K-12 schools reported at least one incident of violence, theft or other crime on campus in 2016, according to the National Center for Education Statistics. The latest statewide data show one third of students in California schools are bullied. Rates of suicide and depression among teens have increased substantially in recent decades. And school shootings, although still extremely rare, continue to garner widespread attention.

      So how can we make our schools safer and ensure that students can thrive…?

      There is some cause for optimism, he says, noting that overall crime and violence have decreased both in U.S. society in general and in schools since the mid-1990s, for reasons that remain unclear. Researchers have found that shooting incidents involving students are on the decline, as are virtually all indicators of violence and crime in schools. Nevertheless, many remain fearful.

      A recent survey found that 57 percent of teens say they are worried about a shooting happening at their school. Parents of teens are even more likely to be concerned; 63 percent report being at least somewhat worried about school shootings. Many researchers trace this perception of exaggerated danger in schools to breathless media coverage of high-profile incidents. Mass shootings like the recent tragedy in Parkland, Florida, can have a dramatic effect on feelings of safety….

Toxic Baby Food? Really?

[These excerpts are from an editorial by the editors in the January 2019 issue of Scientific American.]

      Many babies’ first solid food is rice cereal. It is a childhood staple, commonly recommended by pediatricians. And it is often poisoned—at least a little bit. Studies have found that many brands contain measurable amounts of inorganic arsenic, the most toxic kind. It’s not just rice: an August 2018 study by Consumer Reports tested 50 foods made for babies and toddlers, including organic and nonorganic brands such as Gerber, Earth’s Best, Beech-Nut and other popular labels, and found evidence of at least one dangerous heavy metal in every product. Fifteen of the 50 contained enough contaminants to pose potential health risks to a child eating one serving or less a day.

      Heavy metals can impair cognitive development in children, who are especially at risk because of their smaller size and tendency to absorb more of these substances than adults do. Inorganic arsenic in drinking water has been found to lower the IQ scores of children by five to six points. And as heavy metals accumulate in the body over time, they can raise the risk of cancer, reproductive problems, type 2 diabetes, cardiovascular disease and cognitive issues. Of course, finding out your favorite brand is contaminated is not a reason to panic. Low levels of exposure for short periods are unlikely to cause devastating effects, and parents should focus on reducing the overall levels of these toxic substances in their children's total diet to limit harm.

      Heavy metals occur naturally on Earth and are present in soil and water. But pesticides, mining and pollution boost their concentrations, and farming and food manufacturing processes can contribute even more. Some crops inevitably absorb more heavy metals. Rice, for example, readily takes in arsenic both because of its particular physiology and because it is often grown in fields flooded with water, which is a primary source of the metal.

      Cereal makers are clearly capable of keeping baby food poison-free: roughly a third of the products Consumer Reports tested did not contain worrisome metal levels. Companies just do not take enough safety steps….

      Some companies are already trying to investigate the sources of contamination in their products and reduce them. More should follow and be transparent about these efforts. But the best chance of real change from food companies most likely will come with regulation.

      Currently there are no U.S. rules on acceptable levels of heavy metals in baby foods. In 2012, 2015 and 2017 Congress tried and failed to pass legislation imposing limits on arsenic and lead in fruit juice and rice products….

Fast and Furious

[These excerpts are from an article in the Winter 2019 issue of the American Museum of Natural History Rotunda.]

      There’s still a lot we don’t know about the Cretaceous Period’s most famous predator, Tyrannosaurus rex. One thing is for sure: T. rex was a giant. Its size is one of the extinct dinosaur’s most impressive features—along with its bone-crushing bite and disproportionally tiny arms, of course.

      Getting a fuller understanding of T. rex the giant requires scientists to try to learn more about T. rex as a tyke—before it was a mega-predator, when it was still something else’s prey—as well as its lesser-known, lesser-sized relatives….

      Like many living species, T. rex hatchlings started out a fraction of the size of an adult dinosaur—which could weigh upward of 15,000 pounds, or about the same as five compact cars. But what the small theropods lacked in size, they made up for in speed: scientists think that T. rex gained up to 4.6 pounds a day, or an astonishing 1,690 pounds per year, until its early 20s.
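
      [The daily and annual rates quoted above are mutually consistent, as the quick Python sketch below shows. The 20-year extrapolation is an illustrative assumption using the roughly 10-pound hatchling figure quoted later in this excerpt; it deliberately overshoots the roughly 15,000-pound adult, since the peak rate applied only during a juvenile growth spurt.]

          # Cross-check the quoted T. rex growth figures.
          pounds_per_day = 4.6
          print(f"Annualized rate: {pounds_per_day * 365:.0f} lb/yr")  # ~1,679, close to the quoted 1,690

          # Illustrative extrapolation: a ~10 lb hatchling growing at the peak
          # rate for 20 straight years would far exceed the ~15,000 lb adult
          # mass, so the peak rate cannot have been sustained; growth slowed
          # as the animal approached adulthood.
          hatchling_lb = 10
          print(f"20 years at peak rate: {hatchling_lb + pounds_per_day * 365 * 20:,.0f} lb")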

      A ferociously rapid growth rate was one of the things that set T. rex apart from its Mesozoic peers. It matured at an exceptionally quick clip for a dinosaur, leaping way ahead in size of other tyrannosaur species like Albertosaurus and Gorgosaurus around age 12. That gave this predator a distinct advantage: by growing out of its young and vulnerable phase quickly, T. rex could spend about 30 percent of its lifetime as one of the largest predators ever to walk the Earth. Compare that to modern crocodilians—close cousins that grow very slowly and, while reaching relatively massive sizes, attain only a fraction of the size and weight of an adult T. rex.

      Paleontologists have derived T. rex’s stunning growth rate by examining a cross-section of fossilized bones for growth lines—markings that are similar to tree rings, and present in nearly all vertebrates. For dinosaurs like T. rex, researchers often sample the thigh bone (femur)….also found success sampling the pelvis, calf bone (fibula), ribs, gastralia, and skull bones. And as with trees, bone growth rings offer a glimpse into an organism’s life history: in T. rex, wide gaps between lines record growth spurts at early ages, and lines that form closer together show a slowdown in growth as the animal approached adulthood….

      Growth rates also allow scientists to peek back at an animal’s early years. Scientists have yet to find a T. rex hatchling fossil. But recent studies based on the growth curve of T. rex suggest that these animals would have been around 2 feet long straight out of the egg, and as juveniles may have weighed as little as 10 pounds….

      Along with growth curves, paleontologists have also looked to other tyrannosaurs—a group that’s bigger, and better studied, today than ever before— for additional clues about how T. rex may have developed and behaved….

      Life expectancy for T. rex improved significantly as it grew, but juvenile fossil specimens are very rare. When a specimen is discovered, paleontologists run up against another problem: how to confirm that a fossil is of a young animal, not of an adult of a smaller, totally new species?

Tropical Uplift May Set Earth’s Thermostat

[These excerpts are from an article by Paul Voosen in the 4 January 2019 issue of Science.]

      Hate the cold? Blame Indonesia. It may sound odd, given the contributions to global warming from the country’s 270 million people, rampant deforestation, and frequent carbon dioxide (CO2)-belching volcanic eruptions. But over much longer times, Indonesia is sucking CO2 out of the atmosphere.

      Many mountains in Indonesia and neighboring Papua New Guinea consist of ancient volcanic rocks from the ocean floor that were caught in a colossal tectonic collision between a chain of island volcanoes and a continent, and thrust high. Lashed by tropical rains, these rocks hungrily react with CO2 and sequester it in minerals. That is why, with only 2% of the world’s land area, Indonesia accounts for 10% of its long-term CO2 absorption. Its mountains could explain why ice sheets have persisted, waxing and waning, for several million years (although they are now threatened by global warming).

      Now, researchers have extended that theory, finding that such tropical mountain-building collisions coincide with nearly all of the half-dozen or so significant glacial periods in the past 500 million years….

      Most geologists agree that long-term changes in the planet’s temperature are governed by shifts in CO2, and that plate tectonics somehow drives those shifts as it remakes the planet's surface. But for several decades, researchers have debated exactly what turns the CO2 knob. Many have focused on the volcanoes that rise where plates dive beneath one another. By spewing carbon from Earth’s interior, they could turn up the thermostat. Others have emphasized rock weathering, which depends on mountain building driven by plate tectonics. When the mountains contain seafloor rocks rich in calcium and magnesium, they react with CO2 dissolved in rainwater to form limestone, which is eventually buried on the ocean floor. Both processes matter….

      Having the right rocks to drive the CO2-chewing reaction is not sufficient. Climate matters, too. For example, the Siberian Traps, a region that saw devastating volcanic eruptions 252 million years ago, are rich in such rocks but absorb little….

Do Plants Favor Their Kin?

[These excerpts are from an article by Elizabeth Pennisi in the 4 January 2019 issue of Science.]

      For people, and many other animals, family matters. Consider how many jobs go to relatives. Or how an ant will ruthlessly attack intruder ants but rescue injured, closely related nestmates. There are good evolutionary reasons to aid relatives, after all. Now, it seems, family feelings may stir in plants as well.

      A Canadian biologist planted the seed of the idea more than a decade ago, but many plant biologists regarded it as heretical—plants lack the nervous systems that enable animals to recognize kin, so how can they know their relatives? But with a series of recent findings, the notion that plants really do care for their most genetically close peers—in a quiet, plant-y way—is taking root. Some species constrain how far their roots spread, others change how many flowers they produce, and a few tilt or shift their leaves to minimize shading of neighboring plants, favoring related individuals….

      Beyond broadening views of plant behavior, the new work may have a practical side. In September 2018, a team in China reported that rice planted with kin grows better, a finding that suggested family ties can be exploited to improve crop yields….

      …She grew American searocket (Cakile edentula), a succulent found on North American beaches, in pots with relatives or with unrelated plants from the same population. With strangers, the searocket greatly expanded its underground root system, but with relatives, it held these competitive urges in check, presumably leaving more room for kin roots to get nutrients and water. The claim, published in 2007, shocked colleagues. A few sharply criticized the work, citing flawed statistics and bad study design.

      Since then, however, other researchers have confirmed her findings….After growing 770 seedlings in pots either alone or with three or six neighbors of varying relatedness, the team found the plants grown with kin put out more flowers, making them more alluring to pollinators. The floral displays were especially big in plants in the most crowded pots of relatives….

      Doubts linger. Is a plant identifying genetic kin, or simply recognizing that its neighbor is more or less similar to itself…?

      Sagebrush bushes (Artemisia tridentata) have provided some strong clues, however. When injured by herbivores, these plants release volatile chemicals that stimulate neighboring sagebrush to make chemicals toxic to their shared enemies….

      Since then, he has shown that when sunflower kin are planted close together, they, too, arrange themselves to stay out of one another’s way. The sunflowers incline their shoots alternately toward one side of the row or the other….Taking advantage of the effect, they planted 10 to 14 related plants per square meter—an unheard-of density for commercial growers—and got up to 47% more oil from plants that were allowed to lean away from each other than from plants forced to grow straight up.

Small Steps, Big Changes

[These excerpts are from an article by Joshua P. Starr in the December 2018/January 2019 issue of Phi Delta Kappan.]

      If we want to make significant improvements in public education, then our goal shouldn't be to find a few more superstars. We know that in every profession people tend to be distributed along a bell curve, with the largest group performing somewhere in the middle. That certainly applies to the 3.7 million teachers working in our public schools. So the way to make the biggest difference is to help the largest group of teachers, those in the middle of the pack, to make regular, ongoing progress, so that the whole bell curve shifts. If we were to bump up the average level of teaching performance across the country over a number of years, that would translate to historic gains in student learning.

      But if our goal is to promote steady progress on a large scale, what does that mean for educational leadership? How do we organize and lead school systems to help our average-performing teachers and staff to get better at their work? I believe, and decades of research findings suggest, that it starts with defining a clear vision for what students need to know and be able to do, and every part of the school system must then be aligned with that goal.

      Again, I know this isn’t the most exciting sales pitch, but it’s much more effective for school and district leaders to implement a number of deliberate and modest, well-aligned changes thoughtfully than to throw themselves (and their professional ambitions) into one or two big, splashy, high-profile initiatives. Creating this sort of systemwide alignment requires us to take a careful look at every part of the system — policy, curriculum, teacher development, assessment, accountability, resource allocation, staffing, community engagement, and school culture — and try to find realistic ways for each to make a stronger contribution to our shared goals. Everything matters, and small steps add up….

      …Whether they focus on curriculum, grading, student assignment, equity, or another topic, policy discussions give teachers and administrators the opportunity to clarify what they want their students to learn and what kinds of support those students will require….

      …Teachers often and rightly complain that one-size-fits-all, district-mandated training is meaningless. Yet educators shouldn’t be left to their own devices either, as professional learning must be aligned with the priorities of the school and system, as well as the developmental needs of both teachers and students.

The Happy (and not so Happy) Accidents of Bush-Obama School Reform

[These excerpts are from an article by Frederick M. Hess and Michael Q. McShane in the December 2018/January 2019 issue of Phi Delta Kappan.]

      The American educational system is sprawling, diverse, and complex. It sits within a political system that itself is sprawling, diverse, and complex. In turn, that system sits within an American culture that is also sprawling, diverse, and complex. These are not design flaws. They represent —for good and bad — the true face of American democracy after more than two centuries of evolution.

      But if our educational system resembles a riddle wrapped in an enigma inside of a mystery, then it must be extraordinarily difficult to predict how reforms will unspool. Thus, reformers should be open to serendipity and value its gifts….

      No matter how pure our motives or brilliant our theory of action, we do well to recognize that school reforms rarely work as intended, and they sometimes only serve to make matters worse. That’s especially true when it comes to policy implementation. More often than not, students, teachers, and administrators react and behave in unexpected ways, or political forces and real-world dynamics interfere with our carefully designed plans.

      In the case of test-based teacher evaluation and the Common Core, advocates dug in their heels and kept insisting that their ambitious plans made sense — even as those reforms began to face practical challenges and backlash. Further, because policy makers attached timelines to these initiatives, state leaders felt compelled to charge forward, whatever the obstacles, and they were loath to make course corrections for fear of missing benchmarks or breaking promises.

      Thus, instead of taking parents’ and educators’ concerns seriously and admitting things weren’t playing out as they’d hoped, proponents tended to dismiss their critics as ideological, unreasoning, and “anti-child,” which only made it tougher to find common ground or defuse concerns. Over time, they became more and more preoccupied with tweaking their messages and securing short-term political wins, rather than addressing significant problems or trying to understand the growing opposition to their policies.

      When advocates build a head of steam behind a particular school reform, they may be reluctant to slow down, acknowledge concerns, and address obvious problems. But if they can bring themselves to hit the brakes when necessary, rather than trying to plow through every obstacle, then they’ll be much better equipped for the long haul….

      For all the uncertainties of education policy, some truisms do hold, no matter how inconvenient we may find them: Incentivizing certain behaviors will tend to cause more of that behavior; people will go to great lengths to keep their jobs; if people distrust those who make the rules, they will push back on those rules; and if reformers neglect to anticipate and account for these things, they will likely encounter more unhappy accidents than happy ones.

How Cognitive Psychology Informs Classroom Practice

[These excerpts are from an article by Pooja K. Agarwal and Henry L. Roediger III in the December 2018/January 2019 issue of Phi Delta Kappan.]

      1. Retrieval practice boosts learning by pulling information out of students’ heads (by responding to a brief writing prompt, for example), rather than cramming information into their heads (by lecturing at students, for example). In the classroom, retrieval practice can take many forms, including a quick no-stakes quiz. When students are asked to retrieve new information, they don't just show what they know, they solidify and expand it.

      2. Feedback boosts learning by revealing to students what they know and what they don’t know. At the same time, this increases students' metacognition — their understanding about their own learning progress.

      3. Spaced practice boosts learning by spreading lessons and retrieval opportunities out over time so that new knowledge and skills are not crammed in all at once. By returning to content every so often, students’ knowledge has time to be consolidated and then refreshed.

      4. Interleaving — or practicing a mix of skills (such as doing addition, subtraction, multiplication, and division problems all in one sitting) — boosts learning by encouraging connections between and discrimination among closely related topics. Interleaving sometimes slows students’ initial learning of a concept, but it leads to greater retention and learning over time….

      Many teachers already implement these strategies in one form or another. But they may be able to get much more powerful results with a few small tweaks. For example, we often observe teachers using think-pair-share activities in their classrooms — typically, they will give students a few minutes on their own to think about a topic or prompt, then a few more minutes to discuss it with a partner, and then a chance to share their ideas as part of a larger class discussion. But what, exactly, are students doing during the think stage? They could easily be daydreaming, or wondering what to eat for lunch, rather than actively considering the prompt. But if the teacher simply asks them to write down a quick response, rather than just think, it becomes an opportunity for retrieval practice, ensuring that students are drawing an idea out of their heads and onto the paper.

      Similarly, rather than assigning students to consider a new topic, the teacher might ask them to do a think-pair-share about content they learned the day or week before — and now it becomes an opportunity for spaced practice; students get to return to material and solidify their understanding of it.

      Here’s another example: We often observe teachers begin their classes by saying something to the effect of, “Here’s what we did yesterday ...” and then reviewing the content. Instead, they can pose it as a question, “What did we do yesterday?” and give students a minute to write down what they remember. It's a subtle shift (from a lecture by the teacher to an opportunity for retrieval practice), but it can significantly improve student learning, without requiring additional preparation or classroom time. Even a single question, writing prompt, or quick no-stakes quiz can make a difference, encouraging students to pull information out of their heads rather than cramming it into them via lecturing or telling.

      Why do these strategies improve learning? Consider this quick question: Who was the fourth president of the United States? A plausible answer may have jumped instantly to mind, but you probably had to struggle mentally to come up with a response. It’s precisely this productive struggle or “desirable difficulty” during retrieval practice and the three additional strategies that improves learning. (By the way, the fourth president was James Madison, but you’ll likely remember that much better if you managed to retrieve it from your memory rather than waiting for us to remind you of the name!)

      Teachers can use these four strategies (retrieval practice, feedback-driven metacognition, spaced practice, and interleaving) with confidence because they are strongly backed by research both in laboratories and classrooms. The rigor of science gives us confidence that these strategies aren’t fads, and successful classroom implementation gives us confidence that they work in the real world, not just in the laboratory.

Spin Control

[These excerpts are from an article by Jennifer Chu in the January-February 2019 issue of MIT News.]

      The Beaufort Gyre is an enormous, 600-mile-wide pool of swirling cold, fresh water in the Arctic Ocean, just north of Alaska and Canada. In the winter, this current is covered by a thick cap of ice. Each summer, as the ice melts away, the exposed gyre gathers up sea ice and river runoff, and draws it down to create a huge reservoir of frigid fresh water equal to the volume of all the Great Lakes combined.

      Scientists at MIT have now identified a key mechanism, which they call the “ice-ocean governor,” that controls how fast the Beaufort Gyre spins and how much fresh water it stores. In a recent paper in Geophysical Research Letters, the researchers report that the Arctic's ice cover essentially sets a speed limit on the gyre’s spin.

      In the past two decades, as surface air temperatures have risen, the Arctic’s summer ice has progressively shrunk. The team has observed that with less ice available to control the Beaufort Gyre’s spin, the current has sped up in recent years, gathering up more sea ice and expanding in both volume and depth.

      If Arctic temperatures continue to climb, the researchers predict, the mechanism governing the gyre's spin will diminish. With no governor to limit its speed, the gyre is likely to transition into “a new regime” and eventually spill over like an overflowing bathtub, releasing huge volumes of cold, fresh water into the North Atlantic. That could affect the global climate and ocean circulation….

      As Arctic temperatures have risen in the last two decades, summertime ice has shrunk with each year, the speed of the Beaufort Gyre has increased, and its currents have become more variable and unpredictable; they are only slightly slowed by the return of ice in the winter.

      An increasingly unstable Beaufort Gyre could also disrupt the Arctic's halocline—the underlying layer of ocean water that insulates ice at the surface from much deeper, warmer, and saltier water. If the halocline is weakened by a more unstable gyre, this could encourage warmer waters to rise up, further melting the Arctic ice….

The Science of Slime

[These excerpts are from an article by Katharina Ribbeck in the January-February 2019 issue of MIT News.]

      Snot. Boogers. Phlegm. The goo that drips from your nose when you have a cold. No matter what you call it, mucus has a bad reputation as an unpleasant waste product, a sign of disease.

      But despite its high ick factor, the slimy substance performs a remarkable range of vital functions. After all, it lines more than 200 square meters of our bodies—from our mouths to the digestive system, urinary tract, lungs, eyes, and cervix. It lubricates and hydrates, lets us swallow, determines what we taste and smell, and selectively filters nutrients, toxins, and living cells such as bacteria, sperm cells, and fungi….

      …mucus is very successful at “taming” normally pathogenic microbes. Until recently, scientists thought this was because it acted as a mechanical barrier, trapping bacteria and other pathogens….

      The primary building blocks of mucus are mucins—long, bottlebrush-like proteins with many sugar molecules called glycans attached. And mucins…actually disrupt many key functions of infectious bacteria….

      With those powers cut off, bacteria can no longer colonize on a surface to form persistent slimy layers called biofilms, which tend to be more harmful than the cells are individually: they can cause a wide range of health problems, including dental cavities and ulcers, and can prove fatal for people with cystic fibrosis….

      More than 10% of babies born worldwide arrive before full term, defined as 37 weeks of gestation, but there had been no reliable way to predict preterm labor….the mucus from women at high risk of early labor is mechanically weaker, more elastic, more permeable, and less adhesive. Preterm birth may occur because the cervical mucus is more susceptible to invasion by microbes that can cause infection.

      Other conditions that alter mucus include digestive diseases such as Crohn’s and ulcerative colitis, as well as respiratory diseases….

Artificial Intelligence and Ethics

[This excerpt is from an article by Jonathan Shaw in the January-February 2019 issue of Harvard Magazine.]

      “Artificial intelligence” refers to systems that can be designed to take cues from their environment and, based on those inputs, proceed to solve problems, assess risks, make predictions, and take actions. In the era predating powerful computers and big data, such systems were programmed by humans and followed rules of human invention, but advances in technology have led to the development of new approaches. One of these is machine learning, now the most active area of AI, in which statistical methods allow a system to “learn” from data, and make decisions, without being explicitly programmed. Such systems pair an algorithm, or series of steps for solving a problem, with a knowledge base or stream—the information that the algorithm uses to construct a model of the world.
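
      [A minimal sketch of that idea, assuming nothing beyond standard numerical Python: rather than a human writing the rule, a least-squares routine estimates it from example data. The scenario and numbers are invented for illustration.]

# Machine learning in miniature: the "rule" relating floor area to price is
# not programmed by hand; it is estimated from (noisy, synthetic) examples.
import numpy as np

rng = np.random.default_rng(0)
area = rng.uniform(30, 150, size=200)                  # the data stream
price = 3000 * area + 50000 + rng.normal(0, 20000, 200)

# The algorithm: ordinary least squares on [area, 1].
X = np.column_stack([area, np.ones_like(area)])
slope, intercept = np.linalg.lstsq(X, price, rcond=None)[0]

print(f"learned rule: price ~ {slope:.0f} * area + {intercept:.0f}")
# No human wrote the rule; it was constructed from the data.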

      Ethical concerns about these advances focus at one extreme on the use of AI in deadly military drones, or on the risk that AI could take down global financial systems. Closer to home, AI has spurred anxiety about unemployment, as autonomous systems threaten to replace millions of truck drivers, and make Lyft and Uber obsolete. And beyond these larger social and economic considerations, data scientists have real concerns about bias, about ethical implementations of the technology, and about the nature of interactions between AI systems and humans if these systems are to be deployed properly and fairly in even the most mundane applications.

      Consider a prosaic-seeming social change: machines are already being given the power to make life-altering, everyday decisions about people. Artificial intelligence can aggregate and assess vast quantities of data that are sometimes beyond human capacity to analyze unaided, thereby enabling AI to make hiring recommendations, determine in seconds the creditworthiness of loan applicants, and predict the chances that criminals will re-offend.

      But such applications raise troubling ethical issues because AI systems can reinforce what they have learned from real-world data, even amplifying familiar risks, such as racial or gender bias. Systems can also make errors of judgment when confronted with unfamiliar scenarios. And because many such systems are “black boxes,” the reasons for their decisions are not easily accessed or understood by humans—and therefore difficult to question, or probe.

      Examples abound. In 2014, Amazon developed a recruiting tool for identifying software engineers it might want to hire; the system swiftly began discriminating against women, and the company abandoned it in 2017. In 2016, ProPublica analyzed a commercially developed system that predicts the likelihood that criminals will re-offend, created to help judges make better sentencing decisions, and found that it was biased against blacks. During the past two years, self-driving cars that rely on rules and training data to operate have caused fatal accidents when confronted with unfamiliar sensory feedback or inputs their guidance systems couldn’t interpret. The fact that private commercial developers generally refuse to make their code available for scrutiny, because the software is considered proprietary intellectual property, is another form of non-transparency—legal, rather than technical.

      Meanwhile, nothing about advances in the technology, per se, will solve the underlying, fundamental problem at the heart of AI, which is that even a thoughtfully designed algorithm must make decisions based on inputs from a flawed, imperfect, unpredictable, idiosyncratic real world.

Needle in the Haystack

[These excerpts are from an article by Thomas Higham and Katerina Douka in the December 2018 issue of Scientific American.]

      Denisova Cave is at the center of a revolution in scientists’ understanding of how our ancestors in the Paleolithic, or Old Stone Age, behaved and interacted. Our species, Homo sapiens, originated in Africa hundreds of thousands of years ago. When it eventually began spreading into Europe and Asia, it encountered other human species, such as the Neandertals, and shared the planet with them for millennia before those archaic species disappeared. Scientists know these groups encountered one another because people today carry DNA from our extinct relatives—the result of interbreeding between early H. sapiens and members of those other groups. What we do not yet know and are eager to ascertain is when and where they crossed paths, how often they interbred and how they might have influenced one another culturally. We actually have quite a few important archaeological sites from this transitional period that contain stone tools and other artifacts. But many of these sites, including Denisova, lack human fossils that are complete enough to attribute to a particular species on the basis of their physical traits. That absence has hindered our ability to establish which species made what—and when.

      Now a technique for identifying ancient bone fragments, known as zooarchaeology by mass spectrometry (ZooMS), is finally allowing researchers to start answering these long-standing questions. By analyzing collagen protein preserved in these seemingly uninformative fossil scraps, we can identify the ones that come from the human/great ape family and then attempt to recover DNA from those specimens. Doing so can reveal the species they belong to—be it Neandertal, H. sapiens or something else. What is more, we can carry out tests to determine the ages of the fragments.

      Directly dating fossils is a destructive process—one has to sacrifice some of the bone for analysis. Museum curators are thus usually loath to subject more complete bones to these tests. But they have no such reservations with the scraps.

      The ability to directly date fossils found in association with artifacts is especially exciting with regard to Denisova and other sites we know sheltered multiple human species in the past. A number of researchers have argued that symbolic and decorative artifacts, which are proxies for modern cognitive abilities, are unique to H. sapiens. Others maintain that Neandertals and other species made such items, too, and may have even passed some of their traditions along to the H. sapiens they met. The ability to identify and date these fossil fragments means researchers can begin to reconstruct the chronology of these sites in far greater detail and elucidate a critical chapter of human prehistory….

      ZooMS, also called collagen peptide mass fingerprinting, allows investigators to assign fragments of bone to the proper taxonomic group by analyzing the proteins in bones. Bone collagen protein is made up of hundreds of small compounds called peptides that vary slightly among different types of animals. By comparing the peptide signatures of mystery bones against a library of such signatures from known animals, it is possible to assign the unidentified bones to the correct family, genus and sometimes species….
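
      [To make the matching step concrete, here is a toy version in Python. The marker masses, tolerance, and taxa are invented placeholders; real ZooMS compares MALDI mass spectra against curated reference peptides.]

# Toy collagen-fingerprint matcher: score a mystery bone's observed peptide
# masses against a reference library and report the best-matching taxon.
# All masses below are made-up placeholders, not real ZooMS markers.
REFERENCE_MARKERS = {        # taxon -> characteristic peptide masses (daltons)
    "Hominidae": {1105.6, 1477.7, 2115.1, 2869.5},
    "Cervidae":  {1105.6, 1550.8, 2131.1, 2883.3},
    "Ursidae":   {1093.5, 1477.7, 2145.1, 2901.4},
}
TOLERANCE = 0.5  # daltons; instrument precision (illustrative value)

def match_score(observed, markers):
    """Count how many reference markers have an observed peak within tolerance."""
    return sum(any(abs(m - o) <= TOLERANCE for o in observed) for m in markers)

observed_peaks = [1105.7, 1477.6, 2115.0, 2869.4]    # from a mystery fragment

for taxon, markers in REFERENCE_MARKERS.items():
    print(taxon, match_score(observed_peaks, markers), "of", len(markers))
best = max(REFERENCE_MARKERS, key=lambda t: match_score(observed_peaks, REFERENCE_MARKERS[t]))
print("best match:", best)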

Meat Grown from Stem Cells

[These excerpts are from an article by G. Owen Schaefer in the December 2018 issue of Scientific American.]

      Imagine biting into a juicy beef burger that was produced without killing animals. Meat grown in a laboratory from cultured cells is turning that vision into a reality. Several start-ups are developing lab-grown beef, pork, poultry and seafood….

      If widely adopted, lab-grown meat, also called clean meat, could eliminate much of the cruel, unethical treatment of animals that are raised for food. It could also reduce the considerable environmental costs of meat production; resources would be needed only to generate and sustain cultured cells, not an entire organism from birth.

      The meat is made by first taking a muscle sample from an animal. Technicians collect stem cells from the tissue, multiply them dramatically and allow them to differentiate into primitive fibers that then bulk up to form muscle tissue. Mosa Meat says that one tissue sample from a cow can yield enough muscle tissue to make 80,000 quarter-pounders.

      A number of the start-ups say they expect to have products for sale within the next few years. But clean meat will have to overcome a number of barriers if it is to be commercially viable.

      Two are cost and taste. In 2013, when a burger made from lab-grown meat was presented to journalists, the patty cost more than $300,000 to produce and was overly dry (from too little fat). Expenses have since fallen. Memphis Meats reported this year that a quarter-pound of its ground beef costs about $600. Given this trend, clean meat could become competitive with traditional meat within several years. Careful attention to texture and judicious supplementing with other ingredients could address taste concerns.

      To receive market approval, clean meat will have to be proved safe to eat. Although there is no reason to think that lab-produced meat would pose a health hazard, the FDA is only now beginning to consider how it should be regulated. Meanwhile traditional meat producers are pushing back, arguing that the lab-generated products are not meat at all and should not be labeled as such, and surveys show that the public has only tepid interest in eating meat from labs. Despite these challenges, the clean meat companies are forging ahead. If they can succeed in creating authentic-tasting products that are also affordable, clean meat could make our daily eating habits more ethical and environmentally sustainable.

Mimicking Spider Silk

[This excerpt is from an article by Prachi Patel in the December 2018 issue of Scientific American.]

      Spiders spin the stuff of engineers’ dreams. Their silk is as strong as steel, stretchy, nontoxic and biodegradable. But spiders are not easy to farm. Each produces only a minuscule amount of silk, and some are cannibalistic. For decades scientists have tried to mimic the silvery strands to use for sutures, athletic gear and bulletproof vests, but their synthetic fibers have fallen short. Now a team has coaxed bacteria to produce silk as tough and elastic as the natural version.

      Researchers have previously transplanted silk-making DNA from spiders into bacteria, silkworms, plants and even goats in an effort to mass-produce the substance. Until now, however, the best engineered fibers have been only half as strong as the real thing. The secret to spider silk’s strength lies in large protein molecules composed of hundreds of strings of repeated amino acids encoded by similarly lengthy, repetitive DNA sequences….

Fossils Push Back Origin of Key Plant Groups Millions of Years

[These excerpts are from an article by Elizabeth Pennisi in the 21 December 2018 issue of Science.]

      Paleobotanists exploring a site near the Dead Sea have unearthed a startling connection between today’s conifer forests in the Southern Hemisphere and an unimaginably distant time torn apart by a global cataclysm. Exquisitely preserved plant fossils show the podocarps, a group of ancient evergreens that includes the massive yellowwood of South Africa and the red pine of New Zealand, thrived in the Permian period, more than 250 million years ago. That's tens of millions of years earlier than thought, and it shows that early podocarps survived the “great dying” at the end of the Permian, the worst mass extinction the planet has ever known.

      …the fossils push back the origins not just of podocarps, but also of groups of seed ferns and cycadlike plants. Beyond altering notions of plant evolution, the discoveries lend support to a 45-year-old idea that the tropics serve as a “cradle” of evolution….

      During the Permian, from 299 million to 251 million years ago, Earth’s landmasses had merged to form a supercontinent, bringing a cooler, drier climate. Synapsids, thought to be ancient predecessors of mammals, and sauropsids, ancestors to reptiles and birds, roamed the landscape. Simple seed-bearing plants had already appeared on the scene. Family trees reconstructed from the genomes of living plants suggest more sophisticated plant groups might also have evolved during the Permian, but finding well-preserved plant fossils from that time has been difficult.

      …Many of the fossils preserve the ancient plants' cuticle, a waxy surface layer that captures fine features, such as the leaf pores called stomata. That made it possible for the team to positively identify many of the plants….

      Such finds could help resolve an ongoing debate about why the tropics have more species than colder latitudes do. Some have suggested that species originate at many latitudes but are more likely to diversify in the tropics, with its longer growing seasons, higher rainfall and temperatures, and other features. But another theory proposes that most plant—and animal—species actually got their start near the equator, making the low latitudes an evolutionary “cradle” from which some species migrate north and south….

      It’s not clear how the newfound Permian plants made it through the great dying, a 100,000-year period when, for reasons that are still unclear, 90% of marine life and 70% of life on land disappeared. But their presence in the Permian raises the possibility that other plant groups thought to have later origins actually emerged then in the tropics….

Antarctic Ice Melt 125,000 Years Ago Offers Warning

[These excerpts are from an article by Paul Voosen in the 21 December 2018 issue of Science.]

      Some 125,000 years ago, during the last brief warm period between ice ages, Earth was awash. Temperatures during this time, called the Eemian, were barely higher than in today’s greenhouse-warmed world. Yet proxy records show sea levels were 6 to 9 meters higher than they are today, drowning huge swaths of what is now dry land.

      Scientists have now identified the source of all that water: a collapse of the West Antarctic Ice Sheet. Glaciologists worry about the present-day stability of this formidable ice mass. Its base lies below sea level, at risk of being undermined by warming ocean waters, and glaciers fringing it are retreating fast. The discovery, teased out of a sediment core and reported last week at a meeting of the American Geophysical Union in Washington, D.C., validates those concerns, providing evidence that the ice sheet disappeared in the recent geological past under climate conditions similar to today’s….

      …Once the ancient ice sheet collapse got going, some records suggest, ocean waters rose as fast as some 2.5 meters per century.

      …Global temperatures were some 2°C above preindustrial levels (compared with 1°C today). But the cause of the warming was not greenhouse gases but slight changes in Earth’s orbit and spin axis, and Antarctica was probably cooler than today. What drove the sea level rise, recorded by fossil corals now marooned well above high tide, has been a mystery.

Choices in the Climate Commons

[These excerpts are from an editorial by Scott Barrett in the 14 December 2018 issue of Science.]

      Climate change is a tragedy of the commons of existential importance….American ecologist Garrett Hardin’s classic article, “The Tragedy of the Commons,” published in Science 50 years ago this week, vividly describes the dilemma that causes this behavior….

      …Hardin’s proposed corrective is “mutual coercion.” Writing in 1651, British philosopher Thomas Hobbes similarly concluded that a sovereign is needed to tie people “by fear of punishment to the performance of their covenants.”

      However, a critical difference between climate change and Hardin’s parable is that the players in the climate game are nation states. Although individuals can be subjected to coercion by a higher authority, human organization has not evolved to give any institution sovereignty over the nation state. Solutions to global collective action problems must involve covenants (treaties) among states that are self-enforcing.

      To stabilize the climate, a treaty must get all states to (i) participate in and (ii) comply with an agreement that (iii) drives emissions to zero. The Paris Agreement, adopted at the 2015 summit, secures the first requirement, and possibly the second, but only because it is a voluntary agreement and will fall short of meeting the third requirement. The Montreal Protocol, negotiated in 1987 to protect the stratospheric ozone layer, meets all three requirements, thanks partially to a ban on trade in chlorofluorocarbons between parties to the protocol and nonparties. Because of the ban, once the vast majority of countries joined the agreement, all others wanted to join. William Nordhaus, a recipient of this year’s Nobel Memorial Prize in Economic Sciences, has recently analyzed a similar cure for climate change in which members of a “climate club” who agree to curb emissions impose a tariff on imports from nonmembers to encourage their participation. Unfortunately, his analysis shows that as the carbon tax rises to the level needed to stabilize the climate, participation in the club collapses.

      Breaking up the problem may provide more leverage for enforcement. The Kigali Amendment to the Montreal Protocol, adopted in December 2016, phases down hydrofluorocarbons, a group of greenhouse gases, and this will be effective in addressing this particular cause of climate change for the same reasons that the Montreal Protocol has been effective in protecting the ozone layer. Other climate agreements, adopted in parallel with the Paris Agreement, should be negotiated for individual sectors, such as aluminum and steel and international aviation and shipping, all linked to trade.

      However, the time has come to contemplate other, more radical solutions. The October 2018 Intergovernmental Panel on Climate Change special report concluded that limiting temperature change to 1.5°C cannot be achieved by simply curbing emissions, but requires removing CO2 from the atmosphere. The only true “backstop” for limiting climate change is removal of CO2 by industrial processes, which converts the problem from one of changing behavior into one of joint financing of a large-scale project. Another option, solar geoengineering, acts directly on global mean temperature, but is considered risky. Of course, not using it could also be risky. In the end, regardless of pathways forward, we will have to choose between risks to address the scale of this problem and achieve, rather than merely aspire to, global collective action on climate change.

Ireland Slashes Peat Power to Lower Emissions

[These excerpts are from an article by Emily Toner in the 14 December 2018 issue of Science.]

      In Ireland, peat has been used for centuries to warm homes and fire whiskey distilleries. For a country with little coal, oil, and gas, peat—deep layers of partially decayed moss and other plant matter—is also a ready fuel for power plants. Peat power peaked in the 1960s, providing 40% of Ireland’s electricity. But peat is particularly polluting. Burning it for electricity emits more carbon dioxide than coal, and nearly twice as much as natural gas. In 2016, peat generated nearly 8% of Ireland’s electricity, but was responsible for 20% of that sector’s carbon emissions….
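
      [Those two percentages imply a simple intensity check. If peat supplied 8% of the electricity but produced 20% of the sector’s emissions, then relative to the sector average, and relative to the non-peat mix,]

\[
\frac{0.20}{0.08} = 2.5,
\qquad
\frac{0.20/0.08}{0.80/0.92} \approx 2.9,
\]

      [so a peat-generated kilowatt-hour carried roughly two and a half times the sector-average emissions, and nearly three times those of the rest of the mix.]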

      …Bord na Mona, which supplies peat to the three remaining power stations burning it for electricity, announced in October that it would cut its peat supply for electricity by a third by 2020 and end it completely by 2027. Ireland will need to find alternative, lower-carbon sources of electricity. And approximately 60 bogs no longer needed for fuel will be converted back to wetlands or put to commercial uses such as land for wind farms.

      Behind the phaseout is Ireland’s promise to the European Union to reduce greenhouse gas emissions by 20% in 2020, compared with 2005 levels….Despite rapid growth in wind power and increasingly energy-efficient homes and vehicles, it will struggle to reduce emissions by even 1%….

      And replacing peat with biomass, as the power companies plan to do, is not a panacea. A decade ago, Bord na Mona began to cofuel a peat-burning station with mixtures of biomass including a grass called miscanthus, olive pits, almond shells, palm kernel shells, and beet pulp, much of it imported from all over the world. Because biomass takes up carbon from the atmosphere as it grows, the European Union counts it as a carbon-neutral, renewable resource—even though transportation, processing, and land-use costs make it less so….

      Rehabilitating the harvested peatlands, however, is a clear plus for climate. When bogs are drained to harvest peat, or for any other use, such as agriculture, grazing, or forestry, exposure to oxygen jump-starts the decomposition of the stored organic matter, releasing carbon into the atmosphere….

      …As a result, say ecologists, conserving peatlands has a triple benefit: reducing emissions from both power plants and exposed fields and, with restored plant life, sequestering more carbon in future peat deposits….

      Moreover, healthy peatlands improve water quality and provide needed habitat for threatened species such as curlews and marsh fritillary butterflies….

Why Modern Humans Have Round Heads

[These excerpts are from an article by Ann Gibbons in the 14 December 2018 issue of Science.]

      Ever since researchers first got a good look at a Neanderthal skull in the 1860s, they were struck by its strange shape: stretched from front to back like a football rather than round like a basketball, as in living people. But why our heads and those of our ice age cousins looked different remained a mystery.

      Now, researchers have found an ingenious way to identify genes that help explain the contrast. By analyzing traces of Neanderthal DNA that linger in Europeans from their ancestors’ trysts, researchers have identified two Neanderthal gene variants linked to slightly less globular head shape in living people….

      Cradle a newborn and you’ll see that infants start life with elongated skulls, somewhat like Neanderthals. It’s only when the modern human brain nearly doubles in size in the first year of life that the skull becomes globular….

Wake-up Call from Hong Kong

[These excerpts are from an editorial by Victor J. Dzau, Marcia McNutt and Chunli Bai in the 14 December 2018 issue of Science.]

      The Second International Summit on Human Genome Editing, held in Hong Kong last month, was rocked by the revelation from a researcher from Shenzhen that twins were born whose healthy embryonic genomes had been edited to confer resistance to HIV. Despite widespread condemnation by the summit organizing committee, world scientific academies, and prominent scientific leaders that such research was “deeply disturbing” and “irresponsible,” and the launch of an investigation in China into the researcher’s actions, it is apparent that the ability to use CRISPR-Cas9 to edit the human genome has outpaced nascent efforts by the scientific and medical communities to confront the complex ethical and governance issues that it raises. The current guidelines and principles on human germline genome editing are based on sound scientific and ethical principles. However, this case highlights the urgent need to accelerate efforts to reach international agreement upon more specific criteria and standards that have to be met before human germline editing would be deemed permissible….

      To maintain the public's trust that someday genome editing will be able to treat or prevent disease, the research community needs to take steps now to demonstrate that this new tool can be applied with competence, integrity, and benevolence. Unfortunately, it appears that the case presented in Hong Kong might have failed on all counts, risking human lives as well as rash or hasty political reaction.

The Last of the Ocean Wilderness

[These excerpts are from an article by Kendall Jones and James Watson in the December 2018 issue of Scientific American.]

      The ocean covers more than 70 percent of our planet, an area of over 160 million square miles. It is so immense that explorers once thought there was no way to cross it. When our ships were advanced enough to do so, naturalists then thought it impossible for humans to ever exhaust fisheries or drive marine species to extinction.

      They were wrong.

      Commercial fishing now covers an area four times that of agriculture, and much of that expanse has been rendered completely unsustainable. We have depleted 90 percent of formerly important coastal species. Large fish have been harvested so heavily that they are virtually wiped out in many places. Indeed, studying once vital fish habitats such as coral reefs has been compared to trying to understand the Serengeti by studying termites and locusts while ignoring the wildebeest and lions.

      Some may hope that there are immense areas still untouched, given that humans do not live on the ocean and we need specialized ships to go far beyond the coast. But that is incorrect. In a study published this summer in Current Biology, we used fine-scale global data on 15 human stressors to the ocean—including commercial shipping, sediment runoff and several types of fishing—to show that Earth’s “marine wilderness” is dwindling. Just 13 percent of the ocean remains as wilderness, and in coastal regions, where human activities are most intense, there is almost no wilderness left at all. Of the roughly 21 million square miles of marine wilderness remaining, almost all is found in the Arctic and Antarctic or around remote Pacific island nations with low populations.

      These remnants of wilderness are home to unparalleled marine life, sustaining large predators and high levels of genetic diversity. The lack of human impact can also make them highly resilient to rising sea temperatures and coral bleaching—stressors that cannot be halted without globally coordinated efforts to reduce emissions.

      In an era of widespread marine biodiversity loss, wilderness areas also act like a window into the past, revealing what the ocean looked like before overfishing and pollution took their toll. This is crucial information for marine conservation….

      What concerns us now is that most wilderness remains unprotected. This means it could be lost at any time, as advances in technology allow us to fish deeper and ship farther than ever before. Thanks to a warming climate, even places that were once safeguarded because of year-round ice cover are now open to fishing and shipping.

      This lack of protection stems in large part from international environmental policies failing to recognize the unique values of wilderness, instead focusing on saving at-risk ecosystems and avoiding extinctions. This is akin to a government using its entire health budget on emergency cardiac surgery, without preemptive policies encouraging exercise to decrease the risk of heart attacks occurring in the first place.

      If Earth’s marine biodiversity is to be preserved in perpetuity, it is time for conservation to focus not only on the E.R. but also on preventive health measures….

Rethinking the “Anthropocene”

[These excerpts are from an editorial in the December 2018 issue of Scientific American.]

      Alarming news, arriving almost daily, about rising temperatures, melting glaciers, species diebacks, radioactive waste, oceans contaminated with plastic, and other calamities has seared in our minds the staggering impact that humanity now has on our planet. In 2000 atmospheric scientist Paul Crutzen encapsulated these concerns into a single word, the “Anthropocene,” which he proposed as the geologic name of an era dominated by the human race. Geologists cannot agree on when the Anthropocene began, and stratigraphers are still debating whether it is in fact a geologic era at all. Even so, the term has riveted public attention and sparked impassioned arguments about the relationship between humans and the natural world.

      Several scholars in the humanities criticize the name itself, however, arguing that it perpetuates long-standing misconceptions about this relationship. Replacing “Anthropocene” with a name that focuses instead on its underlying causes might be more conducive to helping us tackle them….

      A second indictment is that “Anthropocene” implicitly blames the entire human race for a crisis caused by a relative few. Surely the “Man of the Hole,” the last survivor of an uncontacted hunter-gatherer tribe in the Brazilian Amazon, bears less responsibility for our present predicament than, say, former Secretary of State Rex Tillerson, who was CEO of ExxonMobil. The tribe’s carbon emissions are essentially zero, whereas ExxonMobil is the fifth largest carbon emitter in the world, according to the Carbon Majors Report.

      Yet another critique is that the “Anthropocene discourse” tends to hold not only all humans but human nature responsible for this predicament. That makes little sense to anthropologists, who point out that people have repeatedly figured out how to live within their ecological means and even thrive….

      …Working collaboratively, the natural sciences and the humanities can help us break through thought barriers and generate fresh ideas. To quote Albert Einstein: “We cannot solve our problems with the same thinking we used when we created them.”

“Half the Truth is often a Great Lie.” – Benjamin Franklin

[This editorial by John Seager is in the December 2018 issue of Population Connection.]

      By failing even to mention population growth in the thirty-three-page “Summary for Policymakers” of its new publication, Global Warming of 1.5°C, the Intergovernmental Panel on Climate Change (IPCC) offered less than the whole truth. This is despite the fact that a 2010 paper published in the Proceedings of the National Academy of Sciences concluded that by the end of the century, slower population growth could reduce total fossil fuel emissions by 37-41 percent.

      The IPCC “Summary for Policymakers” finds room to call for ecosystem-based adaptation, ecosystem restoration, biodiversity management, sustainable aquaculture, efficient irrigation, social safety nets, disaster risk management, green infrastructure, sustainable land use, and water management. But not one single word about population stabilization.

      Ignoring the impacts of soaring population growth on climate change is like failing to mention the Himalayas while describing Nepal.

      Overpopulation remains the elephant in the room. Why the silence? The hard truth is that many experts worry they might offend someone, somewhere. This is a shameful betrayal of reason.

      What will they tell hundreds of millions fated to become stateless environmental refugees, pushed to relocate due to climate change? What will they say to subsistence farmers forced to watch their families starve as meager plots become arid wastelands? And what about the countless, voiceless species doomed to extinction because so many authorities tremble at the very possibility of a negative reaction?

      Benjamin Franklin, the redoubtable author of Poor Richard’s Almanack, warns, “You may delay, but time will not.”

      There is no excuse for timid temporizing in the face of a global crisis. When climate experts fail to address population growth — arguably the single biggest driver of climate change — they place far more than their own reputations at risk.

      What is so hard about letting the world know that if every woman and every couple had unfettered access to all reproductive health services, we'd see population challenges evaporate? Then maybe, just maybe, we’d have a fighting chance to save Earth as we know it. End this silence, now!

The Color Black

[These excerpts are from an article by Paul G. Hewitt in the November/December 2018 issue of The Science Teacher.]

      As science people we like to say that black is not a color—black is the absence of light. But try telling that to a paint specialist when you buy a can of black paint! Similarly, science people say that white is not a color unto itself but the combination of the colors that make up the rainbow. However, if the paint specialist doesn't have white paint, please don’t request that the colors red through violet be mixed! The mixing rules for pigments and lights are very different. In the painter’s world, black and white are legitimate colors.

      Roger King at City College of San Francisco contrasts black and white with his “light box”…. Students see the hole in the wooden box as black, which suggests a black interior. But is it black inside? Roger then opens the top…to show that the inside of the box is white. So why does the hole look black, when clearly the interior of the box is a bright white?

      …Light from the environment enters the hole in the box and is scattered inside by multiple reflections in all directions. An important feature of light is that a portion of it is always absorbed by the reflecting surface whenever reflection occurs. Reflection of light from a surface is never 100 percent; light intensity diminishes with each encounter with the box’s inner surface. The light that does exit the hole is much dimmer than the light that entered.
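
      [The arithmetic of that dimming compounds geometrically. If a fraction r of the light survives each reflection, then after n bounces the surviving intensity is r raised to the nth power; with an illustrative reflectance of 80 percent,]

\[
I_n = I_0\, r^{\,n}, \qquad r = 0.8,\; n = 10 \;\Rightarrow\; I_{10} \approx 0.11\, I_0,
\]

      [so ten bounces absorb almost 90 percent of the entering light, which is why the hole looks black even though the walls are white.]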

      Other examples of holes that appear black are open doorways or windows of distant houses in the daytime. Openings appear black because the light that enters them is reflected to and fro on the inside walls many times, and is partly absorbed with each reflection. As a result, very little of the light exits the openings and travels to our eyes….Open ends of pipes in a stack and the openings to caves all appear black as well.

      You may have noticed that some surfaces look darker when they are wet than when they are dry. Light rays incident on a wet surface undergo repeated reflections and absorptions inside the transparent wet region, enough to darken the net reflected light that reaches your eye.

Climate Change and Marine Mass Extinction

[These excerpts are from an article by Lee Kump in the 7 December 2018 issue of Science.]

      Voluminous emissions of carbon dioxide to the atmosphere, rapid global warming, and a decline in biodiversity—the storyline is modern, but the setting is ancient: the end of the Permian Period, some 252 million years ago. For the end-Permian, the result was catastrophic: the greatest loss of plant and animal life in Earth history. Understanding the details of how this mass extinction played out is thus crucial to its use as an analog for our future….

      A number of kill mechanisms for end-Permian extinction have been proposed, most triggered by the tremendous volcanic activity associated with the emplacement of the vast lava flows of the Siberian Traps, the eruption of which was coincident with the mass extinction. The Siberian Traps are estimated to have released tens of thousands of petagrams of carbon as carbon dioxide and methane, explaining the 10° to 15°C tropical warming revealed by oxygen isotope compositions of marine fossils. On land, unbearably hot temperatures and hypoxia likely were the main cause of mass extinction of plants and animals, although ultraviolet radiation exposure from a collapsed ozone shield contributed as well. Rapid warming also likely led to the loss of oxygen from the ocean’s interior, extending up onto the continental shelves—a conclusion supported both by the widespread distribution of indicators for marine anoxia in sedimentary rocks and by numerical modeling of the Permian ocean-atmosphere system.

      Once considered nonselective, mass extinctions are increasingly revealing patterns of differential impact across species, lifestyles, and geographic locations through their fossil records. A geographic pattern to Permian extinction, however, has remained elusive….

Taking Aim

[These excerpts are from an article by Meredith Wadman in the 7 December 2018 issue of Science.]

      …Guns are the second-leading cause of death of children and teens in the United States, after motor vehicle crashes….In 2016, the most recent year for which data are available, they killed nearly 3150 people aged 1 to 19, according to data from the Centers for Disease Control and Prevention (CDC) in Atlanta. Cancer killed about 1850. But this year, the National Institutes of Health (NIH) in Bethesda, Maryland, spent $486 million researching pediatric cancer and $4.4 million studying children and guns….

      That's because gun violence research has been operating under a chill for more than 2 decades. In 1996, Congress crafted an amendment, named for its author, then-Arkansas Representative Jay Dickey (R), preventing CDC—the government’s lead injury prevention agency—from spending money “to advocate or promote gun control.”

      That law was widely interpreted as banning any CDC studies that probe firearm violence or how to prevent it. The agency’s gun injury research funding was quickly zeroed out, and other health agencies grew wary. The few dozen firearm researchers who persisted were forced to rely on modest amounts from other agencies or private funders…to tackle a massive problem.

      Now, there may be early signs of a thaw. In March, in the wake of the mass shooting at a Parkland, Florida, high school, Congress wrote that CDC is free to probe the causes of gun violence, despite the Dickey amendment. (The agency has not done so, citing a lack of money.) And annual firearm-related funding from NIH…roughly tripled after a 2013 presidential directive that was issued in the wake of the mass shooting at Sandy Hook Elementary School in Newtown, Connecticut. Just as importantly, the agency began to flag firearm violence in some of its calls for research.

      …And last month, the National Rifle Association (NRA) in Fairfax, Virginia, provoked a firestorm when it tweeted that “self-important anti-gun doctors” should “stay in their lane.” Hundreds of emergency department doctors tweeted back, many including photographs of their scrubs, hands, and shoes bloodied from treating gunshot victims. More than 40,000 health care professionals…signed an open letter to NRA complaining that the group has hobbled gun violence research, declaring, “This is our lane!”

      All the same, there’s still little public money for gun research….

Chess, a Drosophila of Reasoning

[These excerpts are from an editorial by Garry Kasparov in the 7 December 2018 issue of Science.]

      Much as the Drosophila melanogaster fruit fly became a model organism for geneticists, chess became a Drosophila of reasoning. In the late 19th century Alfred Binet hoped that understanding why certain people excelled at chess would unlock secrets of human thought. Sixty years later, Alan Turing wondered if a chess-playing machine might illuminate, in the words of Norbert Wiener, “whether this sort of ability represents an essential difference between the potentialities of the machine and the mind.”

      Much as airplanes don’t flap their wings like birds, machines don’t generate chess moves like humans do. Early programs that attempted it were weak. Success came with the “minimax” algorithm and Moore’s law, not with the ineffable human combination of pattern recognition and visualization. This prosaic formula dismayed the artificial intelligence (AI) crowd, who realized that profound computational insights were not required to produce a machine capable of defeating the world champion.
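
      The “minimax” idea is simple enough to sketch. The following Python toy is illustrative only, not any engine’s actual code: leaves of a small game tree hold static evaluation scores, the maximizing player picks the best branch, and the minimizing player picks the worst reply (real engines add alpha-beta pruning, move ordering, and a handcrafted evaluation function):

def minimax(node, maximizing=True):
    # Leaves are integers: static evaluation scores from the maximizer's view.
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Toy tree: the maximizer chooses a branch, then the minimizer chooses a reply.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # -> 3, the best outcome the maximizer can guarantee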

      But now the chess fruit fly is back under the microscope….AlphaZero starts out knowing only the rules of chess, with no embedded human strategies. In just a few hours, it plays more games against itself than have been recorded in human chess history. It teaches itself the best way to play, reevaluating such fundamental concepts as the relative values of the pieces. It quickly becomes strong enough to defeat the best chess-playing entities in the world, winning 28, drawing 72, and losing none in a victory over Stockfish.

      …Programs usually reflect priorities and prejudices of programmers, but because AlphaZero programs itself, I would say that its style reflects the truth. This superior understanding allowed it to outclass the world's top traditional program despite calculating far fewer positions per second. It’s the embodiment of the cliché, “work smarter, not harder.”

      AlphaZero shows us that machines can be the experts, not merely expert tools….

      Machine learning systems aren't perfect, even at a closed system like chess. There will be cases where an AI will fail to detect exceptions to its rules. Therefore, we must work together to combine our strengths. I know better than most people what it’s like to compete against a machine. Instead of raging against them, it’s better if we're all on the same side.

Leon Max Lederman (1922-2018)

[These excerpts are from an article by Rocky Kolb in the 30 November 2018 issue of Science.]

      …Leon’s group used the Nevis accelerator to discover nonconservation of parity in muon decay. Remarkably, they conceived of the idea during the half-hour drive from the main campus to Nevis one Friday, constructed the experiment that evening, and collected the data by the end of the weekend, demonstrating clear evidence for the fundamental result.

      In 1962, Leon and his colleagues led a team that established the existence of a second-generation neutrino, the muon neutrino. He would share the Nobel Prize in recognition of this work. In addition to the importance of the two-neutrino result in the development of the standard model of particle physics, the experiment pioneered the use of neutrino beams to study weak interactions.

      In the 1970s, Leon’s interests turned to particles produced at high transverse momentum in high-energy proton-proton collisions. A sequence of ever-higher-energy experiments at Brookhaven, the Intersecting Storage Rings at CERN, and finally at Fermilab culminated in the 1977 discovery of the bottom quark, one of six known quarks. Shortly afterward, Leon became the second director of Fermilab. During his three decades as an experimental particle physicist, he supervised 50 Columbia University Ph.D. candidates and was a mentor to many other young students and postdocs….

      …In addition to scientific leadership, with characteristic charm and humor, Leon transformed Fermilab from a frontier outpost in the cornfields of Illinois into a cosmopolitan center for science and science education.

      As a professor at Columbia University, Leon reached many generations of undergraduates in his “physics for poets” class. He transposed his love of teaching to Fermilab, where he instigated Saturday Morning Physics, a weekend class for area high-school students featuring lectures by Leon and other Fermilab scientists. He also started a public education and outreach effort at Fermilab directed toward precollege students. Beyond Fermilab, Leon was a founder of a residential state-sponsored high school for science and mathematics and a Chicago-based teacher academy for math and science. His leadership in education and outreach inspired physicists to communicate with the public. The author of several books for the general public, including The God Particle, Leon wrote with the same humor, wit, and charm that drew people to his public lectures. He had no equal in communicating the joys of physics.

Define the Human Right to Science

[These excerpts are from an article by Jessica M. Wyndham and Margaret Weigers Vitullo in the 30 November 2018 issue of Science.]

      10 December marks the 70th anniversary of the adoption of the Universal Declaration of Human Rights (UDHR) by the United Nations (UN) General Assembly. One right enshrined in the UDHR is the right of everyone to “share in scientific advancement and its benefits.” In 1966, this right was incorporated into the International Covenant on Economic, Social and Cultural Rights, a treaty to which 169 countries have voluntarily agreed to be bound. Unlike most other human rights, however, the right to science has never been legally defined and is often ignored in practice by the governments bound to implement it. An essential first step toward giving life to the right to science is for the UN to legally define it….

      The scientific community has contributed three key insights to the ongoing UN process. One is that the right to science is not only a right to benefit from material products of science and technology. It is also a right to benefit from the scientific method and scientific knowledge, whether to empower personal decision-making or to inform evidence-based policy. In addition, access to science needs to be understood as nuanced and multifaceted. People must be able to access scientific information, translated and actionable by a nonspecialist audience. Scientists must have access to the materials necessary to conduct their research, and access to the global scientific community. Essential tools for ensuring access include science education for all, adequate funding, and an information technology infrastructure that serves as a tool of science and a conduit for the diffusion of scientific knowledge. Also, scientific freedom is not absolute but is linked to and must be exercised in a manner consistent with scientific responsibility.

      …Three of the most important questions were: What should be the relationship between the right to benefit from science and intellectual property rights? How should government obligations under the right differ based on the available national resources? What is scientific knowledge and how should it be differentiated, if at all, from traditional knowledge?

      …The effort to define the right must not become mired in demands to resolve questions a priori that can only be answered over time. Insights from the scientific and engineering communities provide responses to many of the questions. Civil society must continue to illustrate how the right to science complements existing human rights protections. The scientific community, particularly in the 169 countries bound to implement the right, must demonstrate how the right can be instantiated within their own national contexts….

      The power and potential of the right to science for empowering individuals, strengthening communities, and improving the quality of life can hardly be overstated. It is time for the UN process to reach a responsible and productive end and for the right to science to be put into practice as was intended when it was first recognized by the United Nations in 1948.

Climate Impacts Worsen, Agencies Say

[This brief article by Jeffrey Brainard is in the 30 November 2018 issue of Science.]

      In a stark display of the U.S. political tensions surrounding global warming, President Donald Trump’s administration is dismissing a major report, written by the government’s own experts, which warns that climate change poses a serious and growing threat to the nation’s economic and environmental health. More than 300 specialists contributed to the 1600-page report, formally known as Volume II of the Fourth National Climate Assessment, released 23 November by federal agencies. It warns that worsening heat waves, coastal flooding, wildfires, and other climate-related impacts are already afflicting the United States and could reduce its economic output by 10% in coming decades. But White House officials downplayed such findings, claiming they are based on outdated climate models and “the most extreme” warming scenario. Despite the report, the administration says it has no plans to alter its efforts to weaken policies aimed at curbing climate change. Meanwhile, the World Meteorological Organization reported on 20 November that atmospheric concentrations of carbon dioxide, a primary warming gas, reached a global average of 405.5 parts per million (ppm) in 2017, up from 403.3 ppm in 2016. Many scientists believe concentrations will need to remain below 450 ppm to avoid catastrophic warming.

A Rigged Economy

[These excerpts are from an article by Joseph E. Stiglitz in the November 2018 issue of Scientific American.]

      The notion of the American Dream—that, unlike old Europe, we are a land of opportunity—is part of our essence. Yet the numbers say otherwise. The life prospects of a young American depend more on the income and education of his or her parents than in almost any other advanced country. When poor-boy-makes-good anecdotes get passed around in the media, that is precisely because such stories are so rare.

      Things appear to be getting worse, partly as a result of forces, such as technology and globalization, that seem beyond our control, but most disturbingly because of those within our command. It is not the laws of nature that have led to this dire situation: it is the laws of humankind. Markets do not exist in a vacuum: they are shaped by rules and regulations, which can be designed to favor one group over another. President Donald Trump was right in saying that the system is rigged—by those in the inherited plutocracy of which he himself is a member. And he is making it much, much worse.

      America has long outdone others in its level of inequality, but in the past 40 years it has reached new heights. Whereas the income share of the top 0.1 percent has more than quadrupled and that of the top 1 percent has almost doubled, that of the bottom 90 percent has declined. Wages at the bottom, adjusted for inflation, are about the same as they were some 60 years ago! In fact, for those with a high school education or less, incomes have fallen over recent decades. Males have been particularly hard hit, as the U.S. has moved away from manufacturing industries into an economy based on services.

      Wealth is even less equally distributed, with just three Americans having as much as the bottom 50 percent—testimony to how much money there is at the top and how little there is at the bottom. Families in the bottom 50 percent hardly have the cash reserves to meet an emergency. Newspapers are replete with stories of those for whom the breakdown of a car or an illness starts a downward spiral from which they never recover….

      Defenders of America’s inequality have a pat explanation. They refer to the workings of a competitive market, where the laws of supply and demand determine wages, prices and even interest rates—a mechanical system, much like that describing the physical universe. Those with scarce assets or skills are amply rewarded, they argue, because of the larger contributions they make to the economy. What they get merely represents what they have contributed. Often they take out less than they contributed, so what is left over for the rest is that much more.

      This fictional narrative may at one time have assuaged the guilt of those at the top and persuaded everyone else to accept this sorry state of affairs. Perhaps the defining moment exposing the lie was the 2008 financial crisis, when the bankers who brought the global economy to the brink of ruin with predatory lending, market manipulation and various other antisocial practices walked away with millions of dollars in bonuses just as millions of Americans lost their jobs and homes and tens of millions more worldwide suffered on their account. Virtually none of these bankers were ever held to account for their misdeeds.

      ….At the time of the Civil War, the market value of the slaves in the South was approximately half of the region’s total wealth, including the value of the land and the physical capital—the factories and equipment. The wealth of at least this part of this nation was not based on industry, innovation and commerce but rather on exploitation. Today we have replaced this open exploitation with more insidious forms, which have intensified since the Reagan-Thatcher revolution of the 1980s. This exploitation, I will argue, is largely to blame for the escalating inequality in the U.S.

      After the New Deal of the 1930s, American inequality went into decline. By the 1950s inequality had receded to such an extent that another Nobel laureate in economics, Simon Kuznets, formulated what came to be called Kuznets's law. In the early stages of development, as some parts of a country seize new opportunities, inequalities grow, he postulated; in the later stages, they shrink. The theory long fit the data—but then, around the early 1980s, the trend abruptly reversed.

      …Overall, wages are likely to be far more widely dispersed in a service economy than in one based on manufacturing, so the transition contributes to greater inequality. This fact does not explain, however, why the average wage has not improved for decades. Moreover, the shift to the service sector is happening in most other advanced countries: Why are matters so much worse in the U.S.?

      Again, because services are often provided locally, firms have more market power: the ability to raise prices above what would prevail in a competitive market. A small town in rural America may have only one authorized Toyota repair shop, which virtually every Toyota owner is forced to patronize. The providers of these local services can raise prices over costs, increasing their profits and the share of income going to owners and managers. This, too, increases inequality. But again, why is U.S. inequality practically unique?

      …In the U.S., the market power of large corporations, which was greater than in most other advanced countries to begin with, has increased even more than elsewhere. On the other hand, the market power of workers, which started out less than in most other advanced countries, has fallen further than elsewhere. This is not only because of the shift to a service-sector economy—it is because of the rigged rules of the game, rules set in a political system that is itself rigged through gerrymandering, voter suppression and the influence of money. A vicious spiral has formed: economic inequality translates into political inequality, which leads to rules that favor the wealthy, which in turn reinforces economic inequality.

      …We are already paying a high price for inequality, but it is just a down payment on what we will have to pay if we do not do something—and quickly. It is not just our economy that is at stake; we are risking our democracy.

      As more of our citizens come to understand why the fruits of economic progress have been so unequally shared, there is a real danger that they will become open to a demagogue blaming the country’s problems on others and making false promises of rectifying “a rigged system.” We are already experiencing a foretaste of what might happen. It could get much worse.

Dereliction of Duty

[This editorial is in the November 2018 issue of Scientific American.]

      There are several hundred people in Washington, D.C., paid with taxpayer dollars, who are not doing their jobs. This November we have the chance to do something about that because these people are members of the U.S. Congress, and in upcoming elections, they can be replaced with representatives who will live up to their responsibilities.

      Those responsibilities, set out by the Constitution, include oversight of the executive branch, in this case the Trump administration. That administration's agencies are supposed to craft policies based, in part, on good evidence and good science. For the past 21 months, many of them have not. Yet Congress has refused to hold them accountable.

      Exhibit A is the Environmental Protection Agency. Its mission, the agency says, is “to protect human health and the environment ... based on the best available scientific information.” Instead the EPA has ignored scientific evidence to justify lowering power plant emissions and greenhouse gas targets; made it more difficult for people to learn about potentially dangerous chemicals in their communities; replaced independent scientists on advisory boards with people connected to businesses the agency is supposed to regulate; and tried to make it harder to use science as a basis for regulations to protect human health.

      During all of this, Congress has done next to nothing.

      Consider what happened this past spring, when EPA director Scott Pruitt, who has since resigned amid a dozen ethics investigations, proposed that no research could be used to form environmental policy unless all data connected to it were publicly available. He said this proposed rule would ensure transparency. It was really a transparent effort to ignore science.

      Specifically, it would ignore research that links industrial pollution to human health. These studies include confidential patient data, such as names, addresses, birthdays and health problems—data that were only provided by patients under a guarantee of privacy. The Six Cities study, begun in the 1970s, was the first research to show that particulate matter in the air hurts and kills people. It has been replicated several times. But because its publications do not include all private patient data, the study would be ignored by the EPA when it considers permissible pollution levels. The World Health Organization estimates that this kind of pollution, largely from minute particulates, kills three million people worldwide every year. For these reasons, the rule has been condemned by every major health and science group.

      There were two congressional hearings involving the EPA after this rule was proposed. The House Committee on Energy and Commerce’s environmental subcommittee interviewed Pruitt, starting off with the chair, Republican Representative John Shimkus of Illinois, stating he was “generally pleased” with what the agency was doing. The senior minority member, Democratic Representative Paul Tonko of New York, did voice concerns about science, but the focus of the hearing remained elsewhere. In the Senate, an appropriations subcommittee gave Pruitt a much tougher time on his personal ethics but also spent almost no effort on science.

      Pruitt has departed, but there is no reason to think that his antiscience approach has gone with him. The health studies rule is still under active consideration. Further, the EPA announced looser power plant standards this August despite admitting, in its own document, that the extra pollution would lead to 1,400 additional deaths in the U.S. each year.

      Similar evidence-free approaches have taken hold at the Department of the Interior, which is scuttling a wildfire-fighting science program whose discoveries help firefighters save lives by forecasting the direction of infernos. The Department of Energy has stopped a set of new efficiency standards for gas furnaces and other appliances. Congress has been quiet.

      Congressional committees work by majority rule, so if the Republicans in the current majority do not want to hold hearings or use their control over agency budgets to compel changes, there are none. But the American people can make a change. The entire House of Representatives and one third of the Senate are up for reelection right now (except for those who are retiring). We can, with our votes, make them do their jobs.

Artificial Wood

[These excerpts are from an article by Sid Perkins in the November 2018 issue of Scientific American.]

      A new lightweight substance is as strong as wood yet lacks its standard vulnerabilities to fire and water.

      To create the synthetic wood, scientists took a solution of polymer resin and added a pinch of chitosan, a sugar polymer derived from the shells of shrimp and crabs. They freeze-dried the solution, yielding a structure filled with tiny pores and channels supported by the chitosan. Then they heated the resin to temperatures as high as 200 degrees Celsius to cure it, forging strong chemical bonds.

      The resulting material…is as crush-resistant as wood….

      Unlike natural wood, the new material does not require years to grow. Moreover, it readily repels water—samples soaked in water and in a strong acid bath for 30 days scarcely weakened, whereas samples of balsa wood tested under similar conditions lost two thirds of their strength and 40 percent of their crush resistance. The new material was also difficult to ignite and stopped burning when it was removed from the flame.

      …Its porosity lends an air-trapping capacity that could make it suitable as an insulation for buildings….

Kids These Days

[These excerpts are from an article by Michael Shermer in the December 2018 issue of Scientific American.]

      …suicide rates increased 46 percent between 2007 and 2015 among 15- to 19-year-olds. Why are iGeners [members of the Internet Generation] different from Millennials, Gen Xers and Baby Boomers?

      Twenge attributes the malaise primarily to the widespread use of social media and electronic devices, noting a positive correlation between the use of digital media and mental health problems. Revealingly, she also reports a negative correlation between depression and time spent on sports and exercise, in-person social interactions, doing homework, attending religious services, and consuming print media, such as books and magazines. Two hours a day on electronic devices seems to be the cutoff, after which mental health declines, particularly for girls who spend more time on social media, where FOMO (“fear of missing out”) and FOBLO (“fear of being left out”) take their toll….This, after noting that the percentage of girls who reported feeling left out increased from 27 to 40 between 2010 and 2015, compared with an increase from 21 to 27 for boys.

      …iGeners have been influenced by their overprotective “helicoptering” parents and by a broader culture that prioritizes emotional safety above all else. The authors identify three “great untruths”:

      1. The Untruth of Fragility: “What doesn’t kill you makes you weaker.”

      2. The Untruth of Emotional Reasoning: “Always trust your feelings.”

      3. The Untruth of Us versus Them: “Life is a battle between good people and evil people.”

      Believing that conflicts will make you weaker, that emotions rather than reason are a reliable guide for responding to environmental stressors, and that when things go wrong it is the fault of evil people, not you, iGeners are now taking those insalubrious attitudes into the workplace and political sphere….

      Solutions? “Prepare the child for the road, not the road for the child” is the first folk aphorism Lukianoff and Haidt recommend parents and educators adopt. “Your worst enemy cannot harm you as much as your own thoughts, unguarded” is a second because, as Buddha counseled, “once mastered, no one can help you as much.” Finally, echoing Aleksandr Solzhenitsyn, “the line dividing good and evil cuts through the heart of every human being,” so be charitable to others.

      Such prescriptions may sound simplistic, but their effects are measurable in everything from personal well-being to societal harmony. If this and future generations adopt these virtues, the kids are going to be alright.

Income Inequality and Homicide

[These excerpts are from an article by Maia Szalavitz in the November 2018 issue of Scientific American.]

      Income inequality can cause all kinds of problems across the economic spectrum—but perhaps the most frightening is homicide. Inequality—the gap between a society’s richest and poorest—predicts murder rates better than any other variable....It is more tightly tied to murder than straightforward poverty, for example, or drug abuse. And research conducted for the World Bank finds that both between and within countries, about half the variance in murder rates can be accounted for by looking at the most common measure of inequality….
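
      One way to read “half the variance”: the share of variance accounted for is the square of the correlation coefficient, so explaining half of it corresponds to a correlation of roughly 0.7. A minimal Python sketch with made-up numbers (not the World Bank data the article cites):

import numpy as np

# Hypothetical, illustrative data -- NOT the figures the article refers to.
gini = np.array([0.25, 0.30, 0.35, 0.40, 0.45])   # inequality (Gini coefficient)
homicide = np.array([1.2, 1.9, 2.3, 4.1, 4.8])    # homicides per 100,000 people

r = np.corrcoef(gini, homicide)[0, 1]
print(f"r = {r:.2f}; variance explained = r**2 = {r**2:.2f}")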

      Another possible explanation is that as richer people retreat into ever more exclusive communities, their virtual disappearance masks rises in local inequality that are felt by former neighbors. A society in which millions struggle to pay their student loans and make a decent living while watching U.S. secretary of education Betsy DeVos—a woman of enormous wealth—cut the education budget and protect predatory for-profit schools is unlikely to be a safe and stable one. When men have little hope of a better future for either themselves or their kids, fights over what little status they have left take on outsize power. To break the cycle, everyone must recognize that it is in no one’s interest to escalate such pain.

The Decline of Africa’s Largest Mammals

[These excerpts are from an article by Rene Bobe and Susana Carvalho in the 23 November 2018 issue of Science.]

      The human species is causing profound climatic, environmental, and biotic disruptions on a global scale. In the present time (now called the Anthropocene), most species of large terrestrial herbivores are threatened with extinction as their populations decline and their geographic ranges collapse under the pressure of human hunting, poaching, and encroachment. Although the scale of ongoing anthropogenic ecological disruptions is unprecedented, human-driven extinctions are not new: There is strong evidence that humans played a major role in the wave of megafaunal losses at the end of the Pleistocene, between about 10,000 and 50,000 years ago. But when did humans, or our ancestors, begin to have such a profound effect on large herbivores to the point of causing extinctions?...

      Hominins—species on our side of the evolutionary divergence that separated us from the chimpanzees—first appeared in Africa in the late Miocene, about 7 million years ago. The late Miocene was a time of global climatic and environmental change, with an expansion of grasslands…in tropical latitudes and an increasing frequency of fires. At that time, large mammals were abundant in eastern Africa. At Lothagam, for example, in the Turkana Basin of Kenya, there were 10 species of megaherbivores, including the earliest records of the elephant family, Elephantidae. Near Lothagam is the early Pliocene site of Kanapoi, with a rich fossil record dated to about 4.2 million years ago. Kanapoi has the earliest record of the hominin genus Australopithecus, which coexisted with at least 11 species of megaherbivores: five proboscideans, two rhinocerotids, two giraffids, and two hippopotamids, along with a diverse fauna of large carnivores that included giant otters, hyenas, two species of saber-tooth felids, and three species of crocodiles. Kanapoi, at about 4.2 million years ago, with an area of 32 km2, had twice the number of megaherbivore species as the entire continent of Africa has today. Among the five species of proboscideans, there were the ancestors of the modern African and Asian elephants, Loxodonta and Elephas, respectively. From their first appearance in eastern Africa, both elephants fed predominantly on increasingly abundant grasses. It is probable that these elephants and other megaherbivores played a beneficial role for early hominins by opening up wooded environments, thereby resulting in the mix of woodlands and grasslands where hominins seemed to thrive….

Cracking the Cambrian

[These excerpts are from an article by Joshua Sokol in the 23 November 2018 issue of Science.]

      …During the Cambrian, which began about 540 million years ago, nearly all modern animal groups—as diverse as mollusks and chordates—leapt into the fossil record. Those early marine animals exhibited a dazzling array of body plans, as though evolution needed to indulge a creative streak before buckling down. For more than a century, scientists have struggled to make heads or tails—sometimes literally—of those specimens, figure out how they relate to life today, and understand what fueled the evolutionary explosion….

      Other sites around the world are also opening new vistas of the Cambrian. Scientists can now explore the animal explosion with a highlight reel of specimens, along with results from new imaging technologies and genetic and developmental studies of living organisms….Researchers may be closer than ever to fitting these strange creatures into their proper places in the tree of life—and understanding the “explosion” that birthed them.

      Each new find brings the simple joy of unearthing and imagining a seemingly alien creature….

      How Cambrian species are related to today’s animals has been debated since the fossils first came to light. Walcott classified his oddities within known groups, noting that some Burgess Shale fossils, such as the brachiopods, persisted after the Cambrian or even into the present. So, for example, he concluded almost all the creatures resembling today’s arthropods were crustaceans.

      But later paleontologists had other ideas. Harvard University's Stephen Jay Gould perhaps best captured the charisma of Cambrian life in his 1989 book Wonderful Life: The Burgess Shale and the Nature of History, in which he lavished attention on the “weird wonders” excavated from Walcott’s city block-size quarry. Gould argued that oddballs such as the aptly named Hallucigenia, a worm with legs and hard spines, seem unrelated to later animals. He slotted the unusual forms into their own phyla and argued that they were evolution's forgotten experiments, later cast aside by contingencies of fate.

      Contemporary paleontologists have settled on yet another way to understand them. Consider the arthropods, arguably Earth's most successful animals. In a family tree, the spray of recent branches that end in living arthropods—spiders, insects, crustaceans—constitutes a “crown” group. But some animals in the Burgess Shale probably come from earlier “stems” that branched off before the crown arthropods. These branches of the tree don’t have surviving descendants, like a childless great-uncle grinning out from a family photo. In that view, many of Gould's weird wonders are stem group organisms, related to the ancestors of current creatures although not ancestors themselves. Newer fossils from the Canadian Rockies help support that view. Caron argued in 2015, for example, that his specimens of Hallucigenia have features suggesting the animal belongs on a stem group of the velvet worms, creatures that still crawl around in tropical forests spitting slime….

      Bold claims that use anatomy to revise family trees engender similar controversy throughout the field. One argument that Hallucigenia fits with the velvet worms, for example, depends on the exact shape of its claws. But other teams counter that the claws aren’t diagnostic of ancestry.

      The uncertainties leave paleontologists ever hungry for newer, better specimens….

      Although show-stopping animals keep falling out of the strata, the full significance of the Cambrian explosion remains a mystery. Arthropods, the most diverse and common creatures known from the time, littered Cambrian ecosystems….the Cambrian witnessed both the birth and step-by-step diversification of many modern groups. Another approach yields a different answer, however. Geneticists use a tool called molecular clocks to trace back down the tree of life. By starting with genetic differences between living animals, which have accrued as a result of random mutations over the eons, molecular clocks can rewind time to the point where branches diverged.
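
      In sketch form, the clock arithmetic is simple: after two lineages split, each accumulates substitutions independently, so the time since divergence is roughly the observed genetic distance divided by twice the substitution rate. A minimal Python illustration with assumed numbers (not values from the article):

# Assumed, illustrative inputs.
distance = 0.02   # substitutions per site separating two living taxa
rate = 1e-9       # substitutions per site per year along each lineage

# Both lineages mutate after the split, hence the factor of 2.
divergence_years = distance / (2 * rate)
print(f"estimated split: ~{divergence_years:,.0f} years ago")  # ~10,000,000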

      According to recent studies using that method, modern animals began to march off into their separate phyla some 100 million years before the Cambrian. The finding implies that those groups then hung out, inconspicuous or unnoticed in the fossil record, before suddenly stepping on stage.

      Paleontologists have a cryptic set of clues about life before the explosion. Long before the odd beasts of the Cambrian evolved, an even more alien set of ocean organisms left impressions on sedimentary rocks now seen in Namibia and Australia. The Ediacarans, as those fossils are called, taunt paleontologists with the same kind of interpretive challenge as the Cambrian’s weird wonders. But they’re even weirder. Their imprints suggest some grew in fractal patterns; others had three-part symmetry. Unhelpfully, they don’t have obvious mouths, guts, or appendages.

      …Caron and others keep hunting for fossil features that could reveal the relationships among Ediacaran, Cambrian, and present-day groups. Other researchers struggle to explain what caused the explosion of animal forms. Atmospheric oxygen may have spiked, enabling animals to grow bigger, stronger, and more active. Or erosion could have dumped toxic calcium into the oceans, prompting organisms to shunt it into building hard skeletons.

      Or biology itself could have led the way. Inventions such as predation, free swimming, and burrowing into the sea floor—all first seen in or shortly before the Cambrian—could have transformed a placid global ecology into a high-stakes contest, spurring waves of call-and-response innovation between groups. The explosion might also mark the moment when, after millions of years of quiet progress, animals had finally accrued the developmental recipes to build body parts and improvise on basic themes….Or, of course, multiple causes could have piled up together.

Fire and Rain

[These excerpts are from an editorial by Steve Metz in the November/December 2018 issue of The Science Teacher.]

      In the summer of 2018 a prolonged heat wave turned northern Europe brown. Sweden, Greece, and dozens of other European countries were ablaze with uncharacteristic wildfires, while states in the western U.S. experienced some of the largest fires in their history. Torrential rains flooded areas from California to Connecticut. Hurricane Florence dropped a meter of rain on the Carolinas, causing the Cape Fear River to crest at almost 19 meters (62 feet)….

      …Levels of atmospheric carbon dioxide reached 407 parts per million in August this year, up over 20% in the past 40 years, and now higher than any time in at least the past 800,000 years. The last time atmospheric CO2 concentrations were this high was more than three million years ago, when the temperature was 2°-3°C higher than during the pre-industrial era, and sea level was 15-25 meters higher than today….
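
      The percentage is easy to check, assuming a late-1970s concentration of roughly 335 ppm (the approximate Mauna Loa value for that era):

# 407 ppm now versus roughly 335 ppm about 40 years earlier (assumed value).
now, then = 407.0, 335.0
print(f"increase over 40 years: {100 * (now - then) / then:.0f}%")  # -> about 21%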

      The effect of global warming on individual weather events is difficult to determine. Still, an emerging area of science—sometimes called “event attribution”—is rapidly advancing our understanding of the links between specific extreme events and human-caused climate change. For example, researchers estimated that “human interference in the climate system” increased Hurricane Florence’s rainfall forecast by over 50% and the storm’s projected diameter by about 80 kilometers….

      The events of summer 2018 warn us that climate change is not a far-off event. Global warming is real, it is caused mainly by human activity, and it is here now. Teachers must stand up for evidence-based climate science. We need to provide students with the accurate knowledge that can prepare and inspire them to take action on a personal, community, and global level.

A Physicist’s Final Reflections

[This excerpt is from an article by Andrew Robinson in the 16 November 2018 issue of Science.]

      …begins with Einstein. “Where did his ingenious ideas come from?” asks Hawking. He answers, “A blend of qualities, perhaps: intuition, originality, brilliance. Einstein had the ability to look beyond the surface to reveal the underlying structure. He was undaunted by common sense, the idea that things must be the way they seemed. He had the courage to pursue ideas that seemed absurd to others. And this set him free to be ingenious, a genius of his time and every other.”

      Was Hawking a genius, too? He never won a Nobel Prize, and the book gives no indication that Hawking regarded himself as a genius. On the other hand, he was one of the very few scientists since Einstein to become a household name….

Whose Science? A New Era in Regulatory “Science Wars”

[This excerpt is from an article by Wendy Wagner, Elizabeth Fisher and Pasky Pascual in the 9 November 2018 issue of Science.]

      …the reforms generally take the form of legislation or regulation, they do not simply suggest best practices for conducting scientific analyses but establish legal lines that cannot be crossed. Moreover, even though they create legal ground rules for scientific deliberations, the reforms have not been developed by the scientific community, but by members of Congress and political officials. In providing a bird’s-eye view of the legal developments in regulatory science over the past 50 years, we identify just how idiosyncratic these current reforms are and why the scientific community needs to be aware of their implications.

      Although the agency’s underlying scientific analysis is often subject to scrutiny by stakeholders and political officials and review by the courts, these new proposals cut deeper and dictate in part how the formative scientific assessments themselves must be done. For example, these proposals require the exclusion of potentially relevant research during agencies’ initial review of the literature, dictate the types of computational models that must be considered in analyzing that information, and exclude respected scientists from peer reviewing the analysis. If the agency does not respect these legal lines, the agency’s review of the scientific literature is legally invalid and technically illegal. This contrasts with present practice where norms governing scientific analyses are rebuttable and subject to modification in light of specific contexts and scientific progress. The proposals thus reach down to control and limit the scientific record.

      The scientific community has been vocal in pointing out how the rules diverge from normal scientific practices, even while the legal requirements—some of which are still proposed and others which are final—purport to advance common goals, like data transparency and reproducibility….

Early Mongolians Ate Dairy but Lacked the Gene to Digest It

[These excerpts are from an article by Andrew Curry in the 9 November 2018 issue of Science.]

      More than 3000 years ago, herds of horses, sheep, and cows or yaks dotted the steppes of Mongolia. Their human caretakers ate the livestock and honored them by burying the animal bones with their own. Now, analysis of deposits on ancient teeth shows that early Mongolians milked their animals as well. That may not seem surprising. But the DNA of the same ancient individuals shows that as adults they lacked the ability to digest lactose, a key sugar in milk.

      The findings deepen an emerging puzzle, challenging an oft-told tale of how people evolved lactase persistence, the ability to produce a milk-digesting enzyme as adults….

      Most people in the world lose the ability to digest lactose after childhood. But in pastoralist populations, the story went, culture and DNA changed hand in hand. Mutations that allowed people to digest milk as adults would have given their carriers an advantage, enabling them to access a rich, year-round source of fat and protein. Dairying spread along with the adaptation, explaining why it is common in herding populations in Europe, Africa, and the Middle East.

      But a closer look at cultural practices around the world has challenged that picture. In modern Mongolia, for example, traditional herders get more than a third of their calories from dairy products. They milk seven kinds of mammals, yielding diverse cheeses, yogurts, and other fermented milk products, including alcohol made from mare’s milk….

      Geneticists are going back to the drawing board to understand why lactase persistence is common—and apparently selected for—in some dairying populations but absent in others….

      How dairying reached Mongolia is also a puzzle. The Yamnaya’s widespread genetic signature shows they replaced many European and Asian Bronze Age populations. But they seem to have stopped at the Altai Mountains west of Mongolia….

Facing Hatred

[These excerpts are from an editorial by Jose-Alain Sahel in the 9 November 2018 issue of Science.]

      …Two weeks ago, the mass killing at a Pittsburgh synagogue proved what I knew but wanted to forget—that no place on Earth is “safe” from hatred. But regardless of where any one of us lives and works, we are faced with the same, immense challenge: The quest for facts, enlightenment, and care versus ignorance and hatred matters more than ever.

      In the early 1930s, Albert Einstein asked Sigmund Freud to contribute to an initiative launched by the League of Nations that sought prominent individuals to promote peace and its values. Their exchange, written under the title Why War?, fell short of finding ways to counteract violence. It was published after Hitler had already been appointed chancellor. The rest is history. Should we simply believe that today there is a new cycle of history taking place? Moreover, in contrast to the League of Nations, nobody is asking the scientific community for help today. Science, with its fundamental quest for truth, or at least facts, and its open discourse, seems to have lost its iconic status, despite its contributions to societal well-being. The scientific community should not passively watch the disastrous rise of hatred worldwide.

      We are tasked with building a society of knowledge and care, where truth, integrity, and respect for all prevail. The heartening responses of health care providers to incidents in the United States and France epitomize this ideal. In 2015, after the mass killing at the Bataclan theater in Paris, nurses and physicians spontaneously converged on hospitals to help the victims, limiting the massive toll of the attacks. Likewise, in Pittsburgh, nurses and physicians treated the wounded, including the presumed killer, with efficiency and humanity. Helplines offering information and support were set up for a whole city in mourning. Religious and political leaders of all faiths and backgrounds, and community members from across the cultural spectrum, were united against hatred. In both cities, the responses were deeply rooted in society’s best, most inspiring traits….

      Caring means that each life matters, and that we all can and should be supported to grow and give back to society. The French Jewish philosopher Emmanuel Levinas asserted that looking into the face of one's fellow man invokes the imperative: “Thou shalt not kill.” This sounds naive and far too simplistic in the face of guns and strongly held prejudices. Yet, is there anything else in the world more meaningful than looking into human faces and listening?

      If we are truly an enlightened and caring society, then our response to violence must be to reject resignation and to include actions by those who seek truth and fact. This is now, as ever, our inheritance.

Money, Power, and Choices

[These excerpts are from an article by Maria Ferguson in the November 2018 issue of Phi Delta Kappan.]

      Most years, K-12 education ranks far below topics such as the economy, health care, and immigration among the issues of greatest concern to American voters. In the run-up to the 2018 midterm elections, though, education has taken on a sizable, if not quite leading, role. Across the country, in races big and small, pollsters report high levels of interest in where the candidates stand on teacher pay, student safety, and other school-related topics.

      On the surface, that might suggest a broad resurgence in support for public schooling. But the truth is that public education means very different things to different people. Some voters think of free and open education as a great equalizer, an essential democratic institution that undergirds our very sense of ourselves as a nation. Others think of it solely as a means of promoting their own children’s interests — and they vote accordingly. Not everyone buys into the notion that public schools are meant to be yours, mine, and ours together.

      Indeed, while public school systems, with their tax-based funding and local governing boards, may have been designed to secure the common good, they have changed dramatically over the last half century, becoming less and less equitable and more and more vulnerable to a host of political pressures and moneyed interests. Further, some of the most consequential debates and decisions about public schooling have occurred behind the curtain, outside the view of ordinary Americans. When they vote in local, state, and federal elections, most people — even voters who try to stay well-informed — know little about how campaign donations have influenced the positions their candidates take.

      …wealthy outsiders collaborate to “hijack the democratic process” and unduly influence voting in states and communities where they have no roots, no children, and no ownership. How, they ask, does that kind of outsider influence square with a system that is supposed to be locally controlled?

      It is a fair question. Even if many of these wealthy outsiders have the best of intentions, can we really say that a public system is locally controlled if one point of view is supported by so much external firepower? Unfortunately, the Supreme Court’s 2010 decision in the Citizens United case made this fair question a moot point. Money from all kinds of sources can and does influence elections….

Changing the Game on Climate

[These excerpts are from an interview with Gary Yohe in the 2018 Year in Review issue of The Nature Conservancy.]

      …Climate change poses many risks to nature and human health, and the cost of reducing those risks by reducing carbon emissions will only increase the longer we wait. We’ll get farther in both if, instead of relying on benefit-cost analysis, we accept the recent wisdom of the IPCC that climate change is a risk-management problem. There are real consequences—human lives, extreme weather, ecosystem collapse—that clarify the tradeoffs without undue reliance on monetary analyses.

      …Technology has been changing in favor of renewable energy and will continue to move in that direction. For individuals, speaking to ways that their communities and states can drive clean energy policy and implementation is key. Happily, it’s becoming clear to many corporations that curbing emissions is good for their bottom lines. Many major companies are committing themselves, but everyone needs to follow their lead.

      …Adaptation is essential. We’re not going to “fix” climate change. We’re already seeing its risks, damages and health effects, so supporting adaptation doesn’t mean giving up on mitigation. Both are an essential part of the most efficient portfolio of responses. To help, get educated and figure out what you and your communities can do.

Composites from Renewable and Sustainable Resources: Challenges and Innovations

[This excerpt is from an article by Amar K. Mohanty, Singaravelu Vivekanandhan, Jean-Mathieu Pin and Manjusri Misra in the 2 November 2018 issue of Science.]

      The era of natural fiber composites currently known as biocomposites dates to 1908 with the introduction of cellulose fiber-reinforced phenolic composites. This innovation was followed by synthetic glass fiber-reinforced polyester composites, which obtained commodity status in the 1940s. The use of biobased green polymers to manufacture auto parts began in 1941, when Henry Ford made fenders and deck lids from soy protein-based bioplastic. The use of composite materials, made with renewable and sustainable resources, has become one of the vital components of the next generation of industrial practice. Their expanding use is driven by a variety of factors, including the need for sustainable growth, energy security, lower carbon footprint, and effective resource management, while functional properties of the materials are simultaneously being improved. Innovative sustainable resources such as biosourced materials, as well as wastes, coproducts, and recycled materials, can be used as both the matrix and reinforcement in composites to minimize the use of nonrenewable resources and to make better use of waste streams.

      Composite materials find a wide range of potential applications in construction and auto-parts structures, electronic components, civil structures, and biomedical implants. Traditionally, industrial sectors that require materials with superior mechanical properties use composites made from glass, aramid, and carbon fibers to reinforce thermoplastics such as polyamide (PA), polypropylene (PP), and poly(vinyl chloride) (PVC), as well as thermoset resins such as unsaturated polyester (UPE) and epoxy resin. In addition to fiber, mineral fillers such as talc, clay, and calcium carbonate are being used in composite manufacturing. Such hybrids of fiber and mineral fillers play a major role in industrial automotive, housing, and even packaging applications. Carbon black plays a vital role as a reinforcement, especially in rubber-based composites. The key environmental concern with regard to composite materials is the difficulty of removing individual components from their structures to enable recycling at the end of a material’s service life. At this point, most composite materials are either sent to a landfill or incinerated. Wood and other natural fibers (e.g., flax, jute, sisal, and cotton), collectively called “biofibers,” can be used to reinforce fossil fuel-based plastic, thus resulting in biocomposite materials. Synthetic glass fiber-reinforced biobased plastics such as polylactides (PLAs) are a type of biocomposite. Biofiber-PP and biofiber-UPE composites have reached commodity status in many auto parts, as well as decking, furniture, and housing applications. Hybrid biocomposites of natural and synthetic fibers as well as mixed matrix systems also represent a key strategy in engineering new classes of biobased composites. As part of feedstock selection, a wide range of renewable products that includes agricultural and forestry residues, wheat straw, rice straw, and waste wood, as well as undervalued industrial coproducts including biofuel coproducts such as lignin, bagasse, and clean municipal solid wastes, is currently being explored to derive chemicals and materials. Recent advancements in biorefinery concepts create new opportunities with side-stream product feedstock that can be valorized in the fabrication of a diverse array of biocomposites.

      Materials scientists can help in advancing sustainable alternatives by quantifying the environmental burden of a material through its product life-cycle analysis. The exponential growth of population and modernization of our society will lead to a threefold increase in the demand for global resources if the current resource-intensive path is continued. According to the United Nations, a truckload of plastic waste is poured into the sea every minute. By 2050, at current rates, the amount of plastic in the ocean will exceed the number of fish. The benefit of diverting plastic packaging material is estimated at around $80 billion to $120 billion, which is currently lost to the economy. If diverted for composite use, the recycled and waste plastic currently destined for landfills and incineration would be used for sustainable development, thereby reducing dependence on nonrenewable resources such as petroleum. Postindustrial food processing wastes are being explored as biofiller in biodegradable plastics for the development of compostable biocomposites. Low-value biomass and waste resources can be pyrolyzed to provide biocarbon (biochar) as sustainable filler for biocomposite uses. The increased sustainability in composite industries requires basic and transformative research toward the design of entirely green composites. Renewable resource-based sustainable polymers and bioplastics, as well as advanced green fibers such as lignin-based carbon fiber and nanocellulose, have great potential for sustainable composites. Biobased nonbiodegradable composites show promising applications in auto parts and other manufacturing applications that require durability. Biodegradable composites also show promise in sustainable packaging. This comprehensive Review on composites from sustainable and renewable resources aims to summarize their current status, constraints on wider adoption, and future opportunities. In keeping with the broad focus of this article, we analyze the current development of such composites and discuss various fibers and fillers for reinforcements, current trends in polymer matrix systems, and integration of recycled and waste coproducts into composite systems to outline future research trends.

Shifting Summer Rains

[These excerpts are from an article by David McGee in the 2 November 2018 issue of Science.]

      Most of China's water supply depends on rainfall from the East Asian summer monsoon (EASM), a seasonal progression of rains that begins along the southern coast in spring, then sweeps north, reaching northeastern China in midsummer….Projections of the EASM’s response to future climate change are complicated by its complex interaction with the mid-latitude jet stream, which appears to govern the monsoon’s northward march each spring and summer. To investigate the monsoon's sensitivity and dynamics, many scientists have turned to examining its past changes recorded in natural archives. Although past climates are not a direct analog of the 21st-century climate, they offer vital tests of the ability to describe monsoon behavior through theories and numerical models….Through examination of trace elements in Chinese stalagmites (a proxy for local precipitation amount) and climate modeling experiments, they show that cooling episodes in the North Atlantic shifted the summer jet stream south, delaying the onset of monsoon rains in northeastern China and increasing rainfall in central China. The finding demonstrates that local rainfall in the EASM regions can vary in opposition to monsoon strength, and it highlights the importance of future high-latitude warming in determining precipitation patterns in China.

      Central to investigations of the EASM’s history are records of the oxygen isotope composition of rainfall recorded in stalagmites from Chinese caves…

      …The study presents measurements of trace impurities in the calcium carbonate lattice—substitutions of magnesium and strontium for calcium—in two stalagmites from central China that span the transition from the peak of the last ice age to the beginning of the current interglacial period, between 21,000 and 10,000 years ago. These trace-element variations are interpreted to reflect changes in the rate with which infiltrating waters passed through the rock above the cave, which should track local precipitation. Oxygen isotopes in these samples record the same pronounced variations as other Chinese stalagmites during the last deglaciation….

Restoring Lost Grazers Could Help Blunt Climate Change

[These excerpts are from an article by Elizabeth Pennisi in the 26 October 2018 issue of Science.]

      Restoring reindeer, rhinoceroses, and other large mammals could help protect grasslands, forests, and tundra from catastrophic wildfires and other threats associated with global warming, new studies suggest. The findings give advocates of so-called trophic rewilding—reintroducing lost species to reestablish healthy food webs—a new rationale for bringing back the big grazers….

      Rewilding is often associated with an ambitious proposal to restore large mammals, including even ice age mammoths, to a huge park in Russia….But mammoth resurrection is still just a dream, and most rewilders are focused on restoring animals including giant tortoises, dam-building beavers, or herds of grazers.

      Now, it seems rewilding could offer a climate bonus. As the planet has warmed, fire seasons have become 25% longer than they were 30 years ago, and more areas are experiencing severe blazes….

      …In the case of white rhinos, fires averaged just 10 hectares when the animals were present—because they kept plants closely cropped, and their paths created fire breaks—but increased to an average of 500 hectares after the rhinos vanished….

      Others are skeptical. Like any ecosystem re-engineering effort, the long-term effects of rewilding are hard to anticipate….Some modeling, for example, suggests increased Arctic grazing will lead to greater carbon release, not less. And creating Arctic herds big enough to make a difference could be difficult….

Like this World of Ours

[This excerpt is from an article by Marcia Bartusiak in the Fall 2018 issue of MIT Spectrum.]

      …But speculation that planetary systems circle other stars started long, long ago—in ancient times. In the fourth century BCE, the Greek philosopher Epicurus, in a letter to his student Herodotus, surmised that there are “infinite worlds both like and unlike this world of ours.” As he believed in an infinite number of atoms careening through the cosmos, it only seemed logical that they’d ultimately construct limitless other worlds.

      The noted 18th-century astronomer William Herschel, too, conjectured that every star might be accompanied by its own band of planets but figured they could “never be perceived by us on account of the faintness of light.” He knew that a planet, visible only by reflected light, would be lost in the glare of its sun when viewed from afar.

      But astronomers eventually realized that a planet might be detected by its gravitational pull on a star, causing the star to systematically wobble like an unbalanced tire as it moves through the galaxy. Starting in 1938, Peter van de Kamp at Swarthmore College spent decades regularly photographing Barnard’s star, a faint red dwarf star located six light-years away that shifts its position in the celestial sky by the width of the Moon every 180 years, faster than any other star. By the 1960s, van de Kamp got worldwide attention when he announced that he did detect a wobble, which seemed to indicate that at least one planet was tagging along in the star’s journey. But by 1973, once Allegheny Observatory astronomer George Gatewood and Heinrich Eichhorn of the University of Florida failed to confirm the Barnard-star finding with their own, more sensitive photographic survey, van de Kamp’s celebrated claim of detecting the first extrasolar planet disappeared from the history books.

      The wobble technique lived on, however, in another fashion. Astronomers began focusing on how a stellar wobble would affect the star’s light. When a star is tugged radially toward the Earth by a planetary companion, the stellar light waves get compressed—that is, made shorter—and thus shifted toward the blue end of the electromagnetic spectrum. When pulled away by a gravitational tug, the waves are extended and shifted the other way, toward the red end of the spectrum. Over time, these periodic changes in the star’s light can become discernible, revealing how fast the star is moving back and forth due to planetary tugs.
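
[The following sketch is not from the article; it illustrates the arithmetic behind the technique, the classical Doppler relation Δλ/λ ≈ v/c, using the hydrogen-alpha line as an assumed example and Python for the calculation.]

      # Illustrative only: the Doppler wavelength shift produced by a stellar wobble,
      # from the classical relation delta_lambda / lambda = v / c.
      C = 299_792_458.0      # speed of light, m/s
      H_ALPHA_NM = 656.3     # rest wavelength of hydrogen-alpha, nm (assumed example line)

      def doppler_shift_nm(rest_wavelength_nm, radial_velocity_m_s):
          """Wavelength shift for a given spectral line and radial velocity."""
          return rest_wavelength_nm * radial_velocity_m_s / C

      # The 12 m/s sensitivity mentioned in the next paragraph shifts hydrogen-alpha
      # by only about 2.6e-5 nm, which is why planet hunting demanded such precision.
      print(doppler_shift_nm(H_ALPHA_NM, 12.0))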

      In 1979, University of British Columbia astronomers Bruce Campbell and Gordon Walker pioneered a way to detect velocity changes as small as a dozen meters a second, sensitive enough for extrasolar planet hunting to begin in earnest. Constantly improving their equipment, planet hunters were even more encouraged in 1983 and 1984 by two momentous events: the Infrared Astronomical Satellite (IRAS) began seeing circumstellar material surrounding several stars in our galaxy; and optical astronomers, taking a special image of the dwarf star Beta Pictoris, revealed an edge-on disk that extends from the star for some 37 billion miles (60 billion kilometers). It was the first striking evidence of planetary systems in the making, suggesting that such systems might be common after all.

      The first indication of an actual planet orbiting another star arrived unexpectedly and within an unusual environment. In 1991, radio astronomers Alex Wolszczan and Dale Frail, while searching for millisecond pulsars at the Arecibo Observatory in Puerto Rico, saw systematic variations in the beeping of pulsar B1257+12, which suggested that three bodies were orbiting the neutron star. Rotating extremely fast, millisecond pulsars are spun up by accreting matter from a stellar companion. So, this system, reported Wolszczan and Frail, “probably consists of 'second generation' planets created at or after the end of the pulsar’s binary history.”

      The principal goal for extrasolar planet hunters, though, was finding evidence for “first generation” planets around stars like our Sun—planets that formed from the stellar nebula itself as a newborn star is created. That long-anticipated event at last occurred in 1994 when Geneva Observatory astronomers Michel Mayor and Didier Queloz, working from the Haute-Provence Observatory in southern France, discerned the presence of an object similar to Jupiter orbiting 51 Pegasi, a sunlike star 45 light-years distant in the constellation Pegasus….

Youth Climate Trial Showcases Science

[These excerpts are from an article by Julia Rosen in the 26 October 2018 issue of Science.]

      Next week, barring a last-minute intervention by the Supreme Court, climate change will go to trial for just the second time in U.S. history. In a federal courtroom in Eugene, Oregon, 21 young people are scheduled to face off against the U.S. government, which they accuse of endangering their future by promoting policies that have increased emissions of carbon dioxide (CO2) and other planet-warming gases. The plaintiffs aren’t asking for monetary damages. Instead, they want District Judge Ann Aiken to take the unprecedented step of ordering federal agencies to dramatically reduce the amount of CO2 in the atmosphere.

      Government attorneys are not expected to challenge the scientific consensus that human activities, including the burning of fossil fuels, cause global warming. But the outcome could hinge, in part, on how Aiken weighs other technical issues….

      The civil trial will be a milestone in a hard-fought legal battle that began in 2015, when environmental groups joined with youth activists and retired NASA scientist James Hansen to push for climate action. The lawsuit rests on the novel argument that the government has knowingly violated the plaintiffs’ rights to a “safe” climate by taking actions—such as subsidizing fossil fuels—that cause warming….

      If the trial proceeds, the youths’ lawyers will have to persuade the judge that the government's actions have helped cause climate change; that the warming exacerbated storms, droughts, and wildfires; and that individual plaintiffs have suffered injuries as a result….Hotter, drier weather increases the risk of fires….And warmer air can hold more moisture, boosting rainfall by up to 20%—a factor he says worsened floods that affected plaintiffs living in Louisiana, Florida, and Colorado….

      The two sides disagree over whether the United States can reduce emissions as fast as the plaintiffs would like. But the government may also argue that any cuts would have a limited impact, because of the global nature of climate change. Other countries now produce roughly 88% of the world’s greenhouse gas emissions….Therefore, the solution requires international cooperation….

      Courts have rejected that argument as a rationale for inaction in previous cases….In the only other climate lawsuit to get to the trial stage—a landmark 2007 case that challenged EPA’s refusal to regulate CO2 from vehicles—the agency claimed doing so wouldn’t matter because U.S. cars contributed just 6% of global emissions. But the Supreme Court disagreed….

Imagine a World without Facts

[These excerpts are from an editorial by Jeremy Berg in the 26 October 2018 issue of Science.]

      …We are now living in a world where the reality of facts and the importance of scientific inquiry and responsible journalism are questioned with distressing frequency. This trend needs to be called out and arrested; the consequences of allowing it to continue are potentially quite damaging.

      Facts are statements that have a very high probability of being verified whenever appropriate additional observations are made. Thus, facts can be reliably used as key components in interpreting other observations, in making predictions, and in building more complicated arguments….

      Consider the present “post-fact” world in this context. The lack of acceptance and cynical or ignorant questioning of well-documented evidence erode the perception that many propositions are well-supported facts, weakening the foundation on which many discussions and policies rest. Under these circumstances, numerous alternatives appear to be equally plausible because the evidence supporting some of these alternatives has been discounted. This creates a world of ignorance where many possibilities seem equally likely, causing subsequent discussions to proceed without much foundation and with outcomes determined by considerations other than facts.

      Overly discounting information from appropriately trained researchers based on well-conducted studies, or from well-qualified journalists who pursue information with good practices that include interactions with multiple independent sources, can falsely restrict the available evidence. If evidence is judged without regard for its methodological basis, as well as a thorough assessment of sources of bias, unreliable conclusions may be drawn. If shoddy evidence is accepted, false interpretations may appear to be plausible even though they lack substantial evidentiary support. If robust evidence is undervalued or ignored, excess uncertainty will remain even when some propositions should be considered well established.

      To avoid sliding further into a world without facts, we must articulate and defend the processes of evidence generation, evaluation, and integration. This includes not only clear statements of conclusions, but also clear understanding of the underlying evidence with recognition that some propositions have been well established, whereas others are associated with substantial remaining uncertainty. We should acknowledge and accept responsibility for, but not exaggerate, challenges within the scientific enterprise. At the same time, we should continue to call out statements put forward that are factually incorrect with reference to the most pertinent evidence. Without taking these steps forcefully, we risk living in a world where many things do not work as well as we need them to.

Your Genome, on Demand

[These excerpts are from an article by Ali Torkamani and Eric Topol in the November-December 2018 issue of MIT Technology Review.]

      In early 2018, it was estimated that over 12 million people had had their DNA analyzed by a direct-to-consumer genetic test. A few months later, that number had grown to 17 million. Meanwhile, geneticists and data scientists have been improving our ability to convert genetic data into useful insights—forecasting which people are at triple the average risk for heart attack, or identifying women who are at high risk for breast cancer even if they don't have a family history or a BRCA gene mutation. Parallel advances have dramatically changed the way we search for and make sense of volumes of data, while smartphones continue their unrelenting march toward becoming the de facto portal through which we access data and make informed decisions.

      Taken together, these things will transform the way we acquire and use personal genetic information. Instead of getting tests reactively, on a doctor's orders, people will use the data proactively to help them make decisions about their own health.

      With a few exceptions, the genetic tests used today detect only uncommon forms of disease. These tests identify rare variants in a single gene that cause the disease.

      But most diseases aren’t caused by variants in a single gene. Often a hundred or more changes in genetic letters collectively indicate the risk of common diseases like heart attack, diabetes, or prostate cancer. Tests for these types of changes have recently become possible, and they produce what is known as your “polygenic” risk score. Polygenic risk scores are derived from the combination of these variants, inherited from your mother and father, and can point to a risk not manifest in either parent’s family history. We’ve learned from studies of many polygenic risk scores for different diseases that they provide insights we can’t get from traditional, known risk factors such as smoking or high cholesterol (in the case of heart attack). Your polygenic score doesn’t represent an unavoidable fate—many people who live into their 80s and 90s may harbor the risk for a disease without ever actually getting it. Still, these scores could change how we view certain diseases and help us understand our risk of contracting them.

      Genetic tests for rare forms of disease caused by a single gene typically give a simple yes or no result. Polygenic risk scores, in contrast, are on a spectrum of probability from very low risk to very high risk. Since they’re derived from combinations of genome letter changes that are common in the general population, they’re relevant to everybody. The question is whether we'll find a way to make proper use of the information we get from them….
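
[The sketch below is ours, not the authors'; it shows, with hypothetical SNP names and effect weights, how a polygenic score reduces to a weighted sum of the risk-allele copies (0, 1, or 2) a person inherits. In practice the raw score is then ranked against a reference population, which is how statements like "the top 20% of polygenic risk" are made.]

      # Minimal sketch of a polygenic risk score as a weighted sum of allele counts.
      # SNP IDs and effect weights are hypothetical, for illustration only.
      effect_weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

      def polygenic_score(genotype):
          """genotype maps SNP ID -> copies of the risk allele carried (0, 1, or 2)."""
          return sum(w * genotype.get(snp, 0) for snp, w in effect_weights.items())

      person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
      print(polygenic_score(person))   # 0.12*2 - 0.05*1 + 0.30*0 = 0.19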

      Statin drugs are a good case study for this. They’re widely used, even though 95% of the people taking them who haven't had heart disease or stroke get no benefit aside from a nice cholesterol lab test. We can use a polygenic risk score to reduce unnecessary statin use, which not only is expensive but also carries health risks such as diabetes. We know that if you are in the top 20% of polygenic risk for heart attack, you're more than twice as likely to benefit from statins as people in the bottom 20%; these people can also benefit greatly from improving their lifestyle (stop smoking, exercise more, eat more vegetables). So knowing your polygenic risk might cause you to take statins but also make some lifestyle changes….

      And it’s not just about heart disease. A polygenic risk score might tell you that you’re at high risk for breast cancer and spur you to get more intensive screening and avoid certain lifestyle risks. It might tell you that you’re at high risk for colon cancer, and therefore you should avoid eating red meat. It might tell you that you're at high risk for type 2 diabetes, and therefore you should watch your weight….

      Another challenge will be to convince people to forgo or delay medical interventions if they have a low risk of a certain condition. This will require them to agree that they're better off accepting a very low risk of a catastrophic outcome rather than needlessly exposing themselves to a medical treatment that has its own risks….

      You can't change your genetic risk. But you can use lifestyle and medical interventions to offset that risk….

Opening a Door to Eugenics

[These excerpts are from an article by Nathaniel Comfort in the November-December 2018 issue of MIT Technology Review.]

      If this is “the science,” the science is weird. We’re used to thinking of science as incrementally seeking causal explanations for natural phenomena by testing a series of hypotheses. Just as important, good science tries as hard as it can to disprove the working hypotheses.

      Sociogenomics has no experiments, no null hypotheses to accept or reject, no deductions from the data to general principles. Nor is it a historical science, like geology or evolutionary biology, that draws on a long-running record for evidence.

      Sociogenomics is inductive rather than deductive. Data is collected first, without a prior hypothesis, from longitudinal studies like the Framingham Heart Study, twin studies, and other sources of information—such as direct-to-consumer DNA companies like 23andMe that collect biographical and biometric, as well as genetic, data on all their clients.

      Algorithms then chew up the data and spit out correlations between the trait of interest and tiny variations in the DNA, called SNPs (for single-nucleotide polymorphisms). Finally, sociogenomicists do the thing most scientists do at the outset: they draw inferences and make predictions, primarily about an individual’s future behavior.

      Sociogenomics is not concerned with causation in the sense that most of us think of it, but with correlation. The DNA data often comes in the form of genome-wide association studies (GWASs), a means of comparing genomes and linking variations of SNPs. Sociogenomics algorithms ask: are there patterns of SNPs that correlate with a trait, be it high intelligence or homosexuality or a love of gambling?

      Yes—almost always. The number of possible combinations of SNPs is so large that finding associations with any given trait is practically inevitable.
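
[The demonstration below is ours, not Comfort's; it tests a purely random "trait" against many random SNPs, and a naive 0.05 threshold flags roughly 5% of them, about 50 of 1,000, as "associated" by chance alone.]

      # Multiple-comparisons demonstration: random SNPs versus a random trait.
      import random
      import statistics

      random.seed(0)
      N_PEOPLE, N_SNPS = 200, 1000
      trait = [random.gauss(0, 1) for _ in range(N_PEOPLE)]

      def pearson_r(xs, ys):
          """Sample Pearson correlation coefficient."""
          mx, my = statistics.fmean(xs), statistics.fmean(ys)
          sx, sy = statistics.stdev(xs), statistics.stdev(ys)
          return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((len(xs) - 1) * sx * sy)

      cutoff = 2 / N_PEOPLE ** 0.5   # |r| > 2/sqrt(n) roughly corresponds to p < 0.05
      hits = sum(
          1 for _ in range(N_SNPS)
          if abs(pearson_r([random.choice((0, 1, 2)) for _ in range(N_PEOPLE)], trait)) > cutoff
      )
      print(hits, "of", N_SNPS, "random SNPs look 'associated'")   # expect roughly 50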

      …statistical significance does not equal biological significance. The number of people buying ice cream at the beach is correlated with the number of people who drown or get eaten by sharks at the beach. Sales figures from beachside ice cream stands could indeed be highly predictive of shark attacks. But only a fool would bat that waffle cone from your hand and claim that he had saved you from a Great White….

      Sociogenomics is the latest chapter in a tradition of hereditarian social science dating back more than 150 years. Each iteration has used new advances in science and unique cultural moments to press for a specific social agenda. It has rarely gone well.

      The originator of the statistical approach that sociogenomicists use was Francis Galton, a cousin of Charles Darwin. Galton developed the concept and method of linear regression—fitting the best line through a scatter of data points—in a study of human height. Like all the traits he studied, height varies continuously, following a bell-curve distribution. Galton soon turned his attention to personality traits, such as “genius,” “talent,” and “character.” As he did so, he became increasingly hereditarian. It was Galton who gave us the idea of nature versus nurture. In his mind, despite the “sterling value of nurture,” nature was “by far the more important.”
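
[A minimal sketch, ours rather than Galton's, of the least-squares line fitting he pioneered, run here on invented parent/child heights; the fitted slope below 1 reflects the "regression toward the mean" he observed in real height data.]

      # Galton-style linear regression (ordinary least squares) on invented data.
      import statistics

      parent = [64, 66, 68, 70, 72, 74]       # parental heights, inches (invented)
      child = [66.5, 67, 68, 69, 70, 70.5]    # adult child heights, inches (invented)

      mx, my = statistics.fmean(parent), statistics.fmean(child)
      slope = (sum((x - mx) * (y - my) for x, y in zip(parent, child))
               / sum((x - mx) ** 2 for x in parent))
      intercept = my - slope * mx

      # Prints roughly: child ~ 38.9 + 0.43 * parent; a slope below 1 means tall
      # parents tend to have children closer to the average height.
      print(f"child ~ {intercept:.1f} + {slope:.2f} * parent")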

      Galton and his acolytes went on to invent modern biostatistics—all with human improvement in mind….

      …After prominent eugenicists canvassed, lobbied, and testified on their behalf, laws were passed in dozens of states banning “miscegenation” or other “dysgenic” marriage, calling for sexual sterilization of the unfit, and throttling the stream of immigrants from what certain politicians today might refer to as “shithole countries.”

      …Genetics has an abysmal record for solving social problems. In 1905, the French psychologist Alfred Binet invented a quantitative measure of intelligence—the IQ test—to identify children who needed extra help in certain areas. Within 20 years, people were being sterilized for scoring too low—a use that would have horrified Binet—out of a misguided fear that people of subnormal intelligence were sowing feeblemindedness genes like so much seed corn.

The Cell’s Power Plant

[These excerpts are from an article by Jonathan Shaw in the November-December 2018 issue of Harvard Magazine.]

      Mitochondria produce metabolic energy by oxidizing carbohydrates, protein, and fatty acids. In a five-part respiratory chain, the organelle captures oxygen and combines it with glucose and fatty acids to create the complex organic chemical ATP (adenosine triphosphate), the fuel on which life runs. Cells can also produce a quick and easy form of sugar-based energy without the help of mitochondria, through an anaerobic process called glycolysis, but a mitochondrion oxidizing the same sugar yields 15 times as much energy for the cell to use. This energy advantage is generally accepted as the reason that, between one billion and one and a half billion years ago, a single free-living bacterium and a single-celled organism with a nucleus entered into a mutually beneficial relationship in which the bacterium took up residence inside the cell. No longer free-living, that bacterium evolved to become what is now the mitochondrion, an intracellular organelle.
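
[The rough bookkeeping behind the 15-fold figure, ours rather than the article's: glycolysis nets about 2 ATP per glucose, while full mitochondrial oxidation yields roughly 30 (textbook estimates vary by a few ATP).]

      # Back-of-the-envelope ATP yields per glucose (approximate textbook values).
      ATP_GLYCOLYSIS = 2    # net ATP from anaerobic glycolysis
      ATP_OXIDATIVE = 30    # approximate ATP with full mitochondrial oxidation
      print(ATP_OXIDATIVE / ATP_GLYCOLYSIS)   # ~15x, matching the figure above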

      Recent discoveries in the Mootha lab have suggested an alternative explanation for this unusual partnership that focuses on the organelle’s ability to detoxify oxygen by consuming it. But whatever the underlying reason, this ancient, extraordinary connection formed just once, Mootha says—and the evolutionary success it conferred was so great that the single cell multiplied and became the ancestor of all plants, animals, and fungi.

      Nobody knows what that first cell looked like, but the bacterium that hitched a ride inside it was probably a relative of the “bugs” that cause Lyme disease, typhus, and chlamydia. In fact, mitochondria are similar enough to these bacteria that when physicians target such intracellular infections with specialized antibiotics, mitochondria are impaired, too….

      The answer lies in evolutionary history. After mitochondria took up residence in cells more than a billion years ago—in what was probably a long, drawn-out process—many genes were transferred from the mitochondria into the genome of the host—in other words, into the nucleus of the cell, where most DNA resides. The result of this gene transfer is that the mitochondrial genome, with just 16,000 base pairs (the building blocks of DNA), has been stripped down to bare essentials. Compared to its ancestral form and also to living relatives, such as the Rickettsia bacterium that causes typhus, which has more than a million base pairs, its genome is now tiny.

      …An outpouring of research began to hint at the versatile, indispensable role that mitochondria play in the regulation of cell death, the immune system, and cell signaling. The traditional focus on energy production, in other words, may have misled researchers—all the way back to their interpretation of what happened more than a billion years ago, when that single cell and a lone bacterium entered into a long-term relationship.

      Oxygen levels on early Earth were low at that time, but rising, Mootha says. “We think of oxygen as a life-giving molecule, and it is, but it can also be very corrosive”—think of how it rusts a car. In biology, oxygen and its byproducts are known to cause cellular damage, and are implicated in aging.

      Mitochondria, on the other hand, are consumers of oxygen. Maybe, according to a hypothesis favored by the Mootha lab, the selective advantage that accrued to the first cell to host a mitochondrion was not only more energy, but better control of the toxic effects of oxygen. Normal gene expression supports this idea….

An Alternative Urban Green Carpet

[These excerpts are from an article by Maria Ignatieva and Marcus Hedblom in the 12 October 2018 issue of Science.]

      Lawns are a global phenomenon. They green the urban environment and provide amenable public and private open spaces. In Sweden, 52% of urban green areas are lawns. In the United States, lawns cover 1.9% of the country’s terrestrial area, and lawn grass is the largest irrigated nonfood crop. Assuming lawns cover 23% of city area globally [on the basis of data from the United States and Sweden], they would occupy 0.15 million to 0.80 million km2 (depending on urban definitions)—that is, an area bigger than England and Spain combined, or about 1.4% of the global grassland area. Yet lawns exact environmental and economic costs, and given the impacts of climate change, it is time to consider beneficial and sustainable alternative “lawnscapes” in urban planning.
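
[Our arithmetic, not the authors': dividing the quoted lawn areas by the assumed 23% share back-calculates the range of global urban-area estimates that produces the spread.]

      # Implied global urban area behind the 0.15-0.80 million km2 lawn estimate.
      LAWN_SHARE = 0.23                    # assumed lawn fraction of urban land
      for lawn_km2 in (0.15e6, 0.80e6):    # the article's low and high lawn areas
          print(f"implied urban area: {lawn_km2 / LAWN_SHARE / 1e6:.2f} million km2")
      # Roughly 0.65 to 3.48 million km2, reflecting differing definitions of "urban".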

      Although lawns are widespread, their properties have received less attention from the scientific community compared to urban trees or other types of green areas. Designers, urban planners, and politicians tend to highlight the positive ecosystem services provided by lawns. For example, lawns produce oxygen, sequester carbon, remove air pollution (although this has not been supported by good quantitative studies), reduce water runoff, increase water infiltration, mitigate soil erosion, and increase groundwater recharging. But perhaps the most important positive ecosystem services are the aesthetic and recreational benefits they provide. Aesthetics are a primary factor in modern urban planning and landscaping practice. For example, in developing countries located in arid zones, designers argue that lawns and irrigated turfs considerably enhance the quality of urban life.

      Recent heat waves and an increasing prevalence of droughts have raised economic and environmental concerns about the effects of urban lawns on climate change. These circumstances have encouraged researchers and the public to reconsider the green-carpet concept and assess the controversial aspects of lawns. It has been argued that lawns moderate urban temperatures, but only when compared to the absence of any vegetation. In arid regions of the United States, lawn irrigation accounts for 75% of the total annual household water consumption. In Perth, Australia, the annual volume of groundwater used for irrigating public greenspaces is 73 gigaliters (GL), and an additional 72 GL is drawn from unlicensed backyard bores for private lawn irrigation. Another concern is the contamination of groundwater or runoff water due to overuse of fertilizers, herbicides, and pesticides. In 2012, the U.S. home and garden sector used 27 million kg of pesticides. The positive effect of soil carbon sequestration on the climate footprint of intensively managed lawns was found to be negated by greenhouse gas emissions from management operations such as mowing, irrigation, and fertilization. Gasoline-powered lawn mowers emit high amounts of carcinogenic exhaust pollutants. Moreover, substituting degraded open green areas with plastic lawns eliminates real nature from cities and arguably reduces overall sustainability, given that plastic lawns reduce habitats, decrease soil organisms, pollute runoff water, and may well have yet unknown negative consequences for human health through plastic particles.

      However, the most noticeable constraint to new alternative lawn thinking may be the contribution of lawns to urban aesthetic uniformity and urban ecological homogenization, where lawn plant communities become similar across different biophysical settings. From the vast variety of the grass genera, only a limited number of species are selected for lawns….

      The reason behind lawn uniformity may lie in its origin. Grass plots in ornamental gardens most likely appeared in medieval times and were probably obtained from the closest pastures and meadows. They were small and quite biodiverse (containing a large number of meadow herbaceous plants). In the 17th century, the role of lawns increased in the decorative grounds of geometrical gardens. For example, in iconic French Versailles, short-cut green grass was perfectly blended with the ideology of the power of man over nature. The English landscape, or the “natural” style of the 18th century, introduced an idealized version of urban nature: grazed grasslands with sparsely planted shade trees and the “pleasure ground” next to the mansion with a short, smoothly cut lawn. With the introduction of mowers and lawn-seed nurseries, the English pastoral vision flourished further in the public parks during the second half of the 19th century. In the 20th century, the modernistic prefabricated landscape was based on the same English picturesque model as the “natural” landscape, which was often mistaken for ecological quality. Consequently, monocultural and intensively managed lawnscapes dislodged the majority of native zonal plant communities in urban environments. At the beginning of the 21st century, perfect green lawns became part of the uptake by non-Western countries of the ideal Western lifestyle and culture….

      What is the state of alternative lawnscapes today? The most dramatic implementation of new lawn thinking is in Berlin, Germany, where spontaneous vegetation is accepted as a fundamental landscape design tool. In Gleisdreieck Park and Sudgelande Nature Park (both established on abandoned railways), some areas were left to “go wild” and thus have been colonized by spontaneous vegetation. This has successfully challenged societal norms of conventional green lawns and has been accepted by local people. Recent research in the United Kingdom and Sweden showed that people desire to change the monotonous lawn to a more diverse environment. Grass-free lawn is one of the latest movements in both nations and is based on the use of low-growing native (in Sweden) or native and exotic (in the United Kingdom) herbaceous plants without the use of grasses. The goal is to create a dense biodiverse and low-maintenance mat, which can be used for recreation.

      What about future research on urban lawns? Of course, new plant species that can survive heavy trampling or long drought, for example, are desirable. Beyond plant research, studies on planting design must go beyond the theoretical. The specific geographical, cultural, and social conditions of each country must be factored into such alternative lawn planning research. In Australia, South Africa, and Arizona, alternative lawns might be based on xeriscapes of local plants. In Chinese cities, historically proven groundcover species could provide sustainability and a sense of place. Lawns could become a universal model for experimental sustainable design and urban environment monitoring.

      New alternative lawns represent a new human-made “wild” nature, which is opposed to the “obedient” nature of conventional lawns. Creating a new, ecological norm requires demonstration displays, public education, and the introduction of “compromised” design solutions. For example, a strip of conventional lawn can frame prairie or meadow and other “wild” vegetation and show a presence of culture in nature. The use of colors such as gray, silver, yellow, and even brown in lawnlike covers can add a feeling of real nature. Thus, one crucial challenge is how to accelerate people’s understanding of sustainable alternatives and acceptance of a new vegetation aesthetic in urban planning and design.

The Persistence of Polio

[These excerpts are from a book review by Pamela J. Hines in the 12 October 2018 issue of Science.]

      In the mid-20th century, several successful vaccines against polio were developed, eventually leading to an international initiative in 1988 to eradicate the poliovirus from the planet. After the success of smallpox eradication in 1980, the optimism of the initiative’s advocates and supporters seemed well placed. But some viruses are more unruly than others.

      Is poliovirus eradication an achievable and worthwhile goal or a misapplication of public health efforts…?

      Discovery of polio’s key vaccines was first stalled by scientific missteps and later driven by scientists in competition. Many people dedicated their lives to the vaccination programs in hope of eradicating this disease, and others lost theirs to violent resistance against those same campaigns. Evolution and natural selection drove the resurgence of new poliovirus strains in regions thought to be cleared. Poverty and remote geography enabled pockets of viral persistence. And the mismatch between where the motivation came from and where the action needed to happen weakened the endgame….

      The poliovirus spreads in environments where poor sanitation contaminates water supplies. Children who encounter the virus early in life tend to be less affected, whereas children who are protected from early virus exposure by good water and sanitation systems may be left susceptible to more severe disease when they encounter the virus later on. Thus, better water and sanitation systems may have paradoxically increased the risk of severe disease, offering a potential explanation for why the U.S. polio epidemic seemed to hit children in well-off suburban neighborhoods particularly hard. But regardless of what drove the disease’s spread, the result was a hue and cry to stop the polio outbreaks from countries that had the scientific expertise and money to go after the problem.

      Because of the efforts that ensued, large portions of the world are now free from the constant threat of polio. Not since 1979 has a case of polio originated in the United States, which is stunning progress given that some 58,000 cases of polio originated in the United States in 1952.

      The 1988 initiative to completely eradicate polio worldwide, however, has failed to replicate this success….

      With polio cleared from wealthier nations, most of the action now occurs in developing areas, where public health systems must deal with a variety of pressing challenges, of which polio is only one. The virus persists in impoverished regions, where inadequate water and sanitation systems facilitate its spread. Weak health care systems struggle to sustain even minimal programs against scourges of childhood, ranging from measles to malnutrition.

      The vaccines do work. But controlling the poliovirus has proven to be more difficult than expected.

A New Leaf

[These excerpts are from an article by Erik Stokstad in the 12 October 2018 issue of Science.]

      Mikael Fremont was up to his shoulders in rapeseed…to point out something nearly hidden: a mat of tattered, dead leaves covering the soil.

      Months earlier, Fremont had planted this vetch and clover along with the rapeseed. The two legumes had grown rapidly, preventing weeds from crowding out the emerging rapeseed and guarding it from hungry beetles and weevils. As a result, Fremont had cut by half the herbicide and insecticide he sprayed. The technique of mixing plant species in a single field had worked “perfectly,” he said.

      This innovative approach is just one of many practices, now spreading across France, that could help farmers achieve an elusive national goal. In 2008, the French government announced a dramatic shift in agricultural policy, calling for pesticide use to be slashed in half. And it wanted to hit that target in just a decade. No other country with as large and diverse an agricultural system had tried anything so ambitious….

      Since then, the French government has spent nearly half a billion euros on implementing the plan, called Ecophyto. It created a network of thousands of farms that test methods of reducing chemical use, improved national surveillance of pests and plant diseases, and funded research on technologies and techniques that reduce pesticide use. It has imposed taxes on farm chemicals in a bid to decrease sales, and even banned numerous pesticides, infuriating many farmers.

      The effort has helped quench demand on some farms. Overall, however, Ecophyto has failed miserably. Instead of declining, national pesticide use has increased by 12%, largely mirroring a rise in farm production….

      There is also optimism. Despite Ecophyto’s failure, it showed farmers have powerful options, such as mixing crops, planting new varieties, and tapping data analysis systems that help identify the best times to spray. With the right incentives and support, those tools might make a bigger difference this time around. And the fact that France isn't backing away from its ambitious goal inspires many observers….

      It’s not only farmers who will have to adjust if France is to meet its ambitious goals. Reducing the cost of food production to the environment and public health will likely increase the cost to consumers and taxpayers….

Teaching Tolerance

[These excerpts are from an article by Josh Grossberg in the Autumn 2018 issue of USC Trojan Family.]

      The 2018 Winter Olympics were in full swing, and the students in Ivy Schamis’ high school classroom were upbeat. The teacher was guiding the students in her Holocaust history class through a discussion about the 1936 Summer Olympics. She described how a German man had given famed black track athlete Jesse Owens a pair of specially designed running shoes despite the risk of repercussions from the Nazi government. “Does anybody know who that man was?” Schamis asked. A young man raised his hand. For the first time Schamis could remember, a student knew the answer. The shoemaker was Adi Dassler, the founder of Adidas, 17-year-old Nicholas Dworet said proudly.

      That’s when the shooting started.

      Gunfire rang out, one bullet after another. Someone was firing into the classroom. “We just flew out of our seats and tried to take cover,” Schamis says, “but there was nowhere to hide.”

      Two minutes later, Dworet and another student, Helena Ramsay, also 17, were killed by gunfire. They were among the 17 people slain on Feb. 14, 2018, in the tragic shooting at Marjory Stoneman Douglas High School in Parkland, Florida….

      It may seem ironic that a classroom dedicated to embracing cross-cultural understanding would be visited by such senseless violence. But it was that kind of violence that prompted Schamis to delve into the topic with her students in the first place.

      “The lessons of the Holocaust came right into Room 1214 that day,” Schamis says….

      Can education truly instill decency and respect, and do the actions of individuals matter in fighting hate? Rigorous evaluations and studies conducted by the institute provide reason for hope. USC Shoah Foundation research has shown that 78 percent of students recognized that one person can make a difference if they see an example of stereotyping against a group of people. Another 78 percent believe it’s important to speak up against stereotyping when they see it around them.

      “If we have racist and anti-Semitic ideologies coming at us, then we can develop in students the skills, knowledge, abilities and capacities to counter those ideologies, make better decisions and be more active in opposing those ideologies as citizens,” Wiedeman says.

      Wiedeman and USC Shoah Foundation aren’t alone in believing that empathy, openness and tolerance can be taught….

      Florida high school teacher Ivy Schamis says that despite the shooting in her classroom, she remains undaunted in her quest to spread a belief in shared human values. She’s more eager than ever to teach students the lessons of the Holocaust.

      “Hate is never OK. Tell everybody,” she says. “We have to be there for each other.”

The Good Fight

[These excerpts are from a letter to the editor by Michael P. Jansen in the October 2018 issue of Chem 13 News.]

      …when people discover I’m a teacher (a chemistry teacher! AHHH!!!), it’s pretty much game over….

      I cannot take people's illusion that chemistry is memorization. Or that school is about getting good marks — so a student can get accepted to some university…in order to — you guessed it — get more good marks — so he can be a doctor (like his aunt in England).

      I hate that parents have their kids take a summer course “to get it out of the way”. I hate that students and parents and (I am sorry to say) many teachers and some “experts” believe that computers are the Holy Grail in this business.

      Let’s get real. School is — and always was and always will be — about learning. I don’t understand why this straightforward message is so difficult. We need to promote learning, while de-emphasizing marks. This is a war. We need to fight — student by student, class by class, year over year, parent interview by parent interview. It won’t be easy; there will be casualties, like my sanity — or your sanity.

      Let’s stand together and fight the good fight. For chemistry education. For chemistry learning.

Japan Needs Gender Equality

[These excerpts are from an editorial by Yumiko Murakami and Francesca Borgonovi in the 12 October 2018 issue of Science.]

      Last month, Tokyo Medical University (TMU) announced Yukiko Hayashi as its first female president. This comes on the heels of the discovery that the institution had manipulated entrance exam scores for many years to curb female enrollment. Hayashi’s appointment may be an attempt by TMU to restore its reputation, but the scandal should be a wake-up call for Japanese society to ensure that men and women have equal opportunities to succeed….

      Yet, Japan boasts one of the most sophisticated educational systems in the world. Japanese 15-year-old students are among the highest performers in mathematics and science according to the OECD’s Programme for International Student Assessment (PISA). However, a gender gap is noticeable, especially among top-tier students. The latest PISA assessment in 2015 indicates that Japanese boys outperform girls in mathematics by ~15% of a standard deviation on the achievement scale. Among the top 10% of students in Japan, there is an even larger gender gap in both mathematics and science. But by international standards, Japanese girls perform at very high levels. For example, the highest-achieving girls in Japan performed significantly above the highest-achieving boys from most of the other 70 education systems assessed by PISA in both mathematics and science….

      Fostering confidence in female students, trainees, and professionals would be a highly effective way to close the gender gap in Japan. Gender mainstreaming in various segments of Japanese society is crucial to address unconscious biases. A better gender balance in professional occupations, especially in STEM fields, would boost self-efficacy in female students, and TMU’s decision to appoint a female president is a major step toward promoting more female role models in medicine. Medical schools should work with hospitals to improve working conditions so that female physicians are not hampered in their careers by life events such as pregnancies and child-rearing.

      Halving Japan’s gender gap in the labor force by 2025 could add almost 4 percentage points to projected growth in gross domestic product over the period from 2013 to 2025. Japan clearly needs to embrace women to bolster its economy. And having more highly educated women in medical professions is particularly beneficial for a country with a rapidly aging population. Gender equality will be the key for better lives for all Japanese.

Father of ‘the God Particle’ Dies

[This brief news article is in the 12 October 2018 issue of Science.]

      Leon Lederman, a Nobel Prize-winning physicist and passionate advocate for science education, died last week at age 96. He and two colleagues shared the physics Nobel in 1988 for their discovery 26 years earlier that elusive particles called neutrinos come in more than one type. Lederman became identified with the neologism in the title of his 1993 book, The God Particle: If the Universe Is the Answer, What Is the Question? He was referring to the Higgs boson, the last missing piece of physicists’ standard model of fundamental particles and forces, which was finally discovered in 2012. Some physicists scorned the title. Lederman puckishly claimed his publisher balked at Goddamn Particle, which would have conveyed how hard physicists were struggling to detect the Higgs.

Mythical Androids and Ancient Automatons

[These excerpts are from a book review by Sarah Olson in the 5 October 2018 issue of Science.]

      Long before the advent of modern robots and artificial intelligence (AI), automated technology existed in the storytelling and imaginations of ancient societies. Hephaestus, blacksmith of the mythical Greek gods, fashioned mechanical serving maids from gold and endowed them with learning, reason, and skill in Homer’s Iliad. Designed to anticipate their master’s requests and act on them without instruction, the Golden Maidens share similarities with modern machine learning, which allows AI to learn from experience without being explicitly programmed.

      The possibility of AI servants continues to tantalize the modern imagination, resulting in household automatons such as the Roomba robot vacuum and Amazon’s virtual assistant, Alexa. One could imagine a future iteration of Alexa as a fully automatic robot, perhaps washing our dishes or reminding us to pick up the kids after school. In her new book, Gods and Robots, Adrienne Mayor draws comparisons between mythical androids and ancient robots and the AI of today….

      Through detailed storytelling and careful analysis of popular myths, Mayor urges readers to consider lessons learned from these stories as we set about creating a new world with AI. Like Pandora, an automaton created by Hephaestus who opened a box and released evils into the world, AI could bring about unforeseen problems. Here, Mayor cites Microsoft’s 2016 experiment with the Twitter chatbot “Tay,” a neural network programmed to converse with Twitter users without requiring supervision. Only hours after going live, Tay succumbed to a group of followers who conspired to turn the chatbot into an internet troll. Microsoft’s iteration the following year suffered a similar fate.

      Despite her extensive knowledge of ancient mythology, Mayor does little to demonstrate an understanding of modern AI, neural networks, and machine learning; the chatbots are among only a handful of examples of modern technology she explores.

      Instead, Mayor focuses on her own area of expertise: analyzing classical mythology. She recounts, for example, the story of the bronze robot Talos, an animated statue tasked with defending the Greek island of Crete from pirates….

      When Pandora opens her box of evils, she releases hope by accident—inadvertently helping humanity learn to live in a world now corrupted by evil. Like this story, Gods and Robots is cautionary but optimistic. While warning of the risks that accompany the irresponsible deployment of technology, Mayor reassures readers that AI could indeed bring about many of the wonderful things that our ancestors imagined.

Junk Food, Junk Science?

[This excerpt is from a book review by Cyan James in the 5 October 2018 issue of Science.]

      …In her latest book, Unsavory Truth, [Marion] Nestle levels a withering fusillade of criticism against food and beverage companies that use questionable science and marketing to push their own agendas about what should end up on our dinner tables.

      In her comprehensive review of companies’ nutrition research practices, Nestle (whose name is pronounced “Nes-sul” and who is not affiliated with the Swiss food company) reveals a passion for all things food, combined with pronounced disdain for the systematic way food companies influence the scientists and research behind the contents of our pantries. Her book is an autopsy of modern food science and advertising, pulling the sheet back on scores of suspect practices, as well as a chronicle of Nestle’s own brushes with food company courtship.

      Is it shocking that many food companies do whatever they can in the name of fatter profits? Maybe, but it's old hat for Nestle, who has spent five decades honing her expertise and is a leading scholar in the field of nutrition science. In this book, she details nearly every questionable food company tactic in the playbook, from companies that fund their own food science research centers and funnel media attention to nondietary explanations for obesity, to those that cherry-pick data or fund professional conferences as a plea for tacit approval.

      Even companies that hawk “benign” foods such as blueberries, pomegranate juice, and nuts come under the author’s strict scrutiny because, as she reminds readers, “Foods are not drugs. To ask whether one single food has special health benefits defies common sense.”

      Instead, Nestle urges eaters to look behind the claims to discover who funds food-science studies, influences governmental regulations, advises policy-makers, and potentially compromises researchers. Food fads, we learn, can spring from a few findings lifted out of context or interpreted with willful optimism. Companies, after all, can hold different standards for conducting and interpreting research than those of independent academic institutions or scientific organizations and can be driven by much different motives.

      Nestle wields a particularly sharp pen against junk-food purveyors, soda companies, and others who work hard to portray excess sugar as innocuous or who downplay the adverse effects their products can have. These companies adopt tactics such as founding research centers favorable to their bottom line, diverting consumer attention away from diet to focus instead on the role of exercise in health, or trying to win over individual researchers to favorably represent their products. Nestle calls foul on these attempts to portray junk foods as relatively harmless, even knocking the “health halo” shine from dark chocolates and other candies masquerading as responsible health foods.

      Not content to point to the many thorny problems lurking behind food company labels and glitzy sponsored meetings, however, Nestle offers a constructive set of suggestions for how nutrition scientists can navigate potential conflicts of interest.

      Consumers, too, bear a responsibility for promoting eating habits that steer clear of dubious advertising. Nestle advocates adhering to a few simple guidelines: “[E]at your veggies, choose relatively unprocessed foods, keep junk foods to a minimum, and watch out for excessive calories!” We have the power of our votes and our forks, she reminds us, and can use both to insist that food companies help us eat well. Nestle’s determination to go to bat for public health shines through, illuminating even her occasional sections of workaday prose or dense details.

      Nestle marshals a convincing number of observations on modern food research practices while energetically delineating how food companies’ clout can threaten the integrity of the research performed on their products. There is indeed something rotten in the state of dietary science, but books like this show us that we consumers also hold a great deal of power.
