Interesting Excerpts
The following excerpts are from articles or books that I have recently read. They caught my interest and I hope that you will find them worth reading. If one does spark an action on your part and you want to learn more or you choose to cite it, I urge you to actually read the article or source so that you better understand the perspective of the author(s).
Keeping Curiosity Alive

[These excerpts are from an editorial by Ann Haley MacKenzie in the August 2019 issue of The Science Teacher.]

      …Children are curious creatures when they enter kindergarten, but by the time they enter high school, science teachers find them questioning less, not following their curiosity, and often graduating indifferent toward a life of wonder.

      But isn’t the mark of an exemplary scientist (and person) to be naturally curious throughout life? The goal for ALL science teachers is to re-instill this innate curiosity and sense of wonder in their students….

      Science teachers see more and more students coming to their classrooms with trauma-based issues. Drug-addicted parents, incarcerated family members, and turmoil at home make it difficult for students to focus on key science concepts, like natural selection, black holes, equilibrium, and plate tectonics….

Mite Fight

[These excerpts are from an article by Erik Stokstad in the 26 July 2019 issue of Science.]

      The death of his favorite queens in 2013 was the final straw for BartJan Fernhout, an amateur beekeeper in Boxmeer, the Netherlands. Fernhout’s queens, which he had purchased from a specialty breeder, produced workers with prized traits: They were calm and made plenty of honey. Then, Fernhout’s hives became infested with a parasitic mite named Varroa destructor, which has become a major contributor to the demise of bee colonies worldwide.

      Chemicals and other methods can control the parasites. But the mites are developing resistance, and the treatments sometimes don’t work—or even backfire. The chemical Fernhout used to fight his mites, for example, stopped his queens from laying eggs. That caused the workers to kill the barren queens and begin to raise new royalty, a ruthless reaction the bees evolved long ago to ensure the future of their hives….

      It's been slow, laborious work. Since the mite jumped from Asian honey bees (Apis cerana) to the common domesticated European honey bee (A. mellifera) more than a half-century ago, researchers have discovered some bees can keep the mite in check through behaviors such as fastidious grooming and removing mite-infested larvae. But identifying bees able to mount these responses is tedious. A standard way to evaluate grooming, for example, is to count how many mite legs have been chewed off by vigilant bees. And the complexities of bee reproduction make it cumbersome to combine mite-resistance traits with others valued by apiarists. Although researchers and breeders have created bees that require fewer pesticides, even these colonies can be overrun by mites—and very few lines can yet survive without any treatment….

      New molecular tools promise to accelerate those efforts. A new protein-based test, for example, would allow beekeepers to simply send a laboratory a few dozen antennae, plucked from their bees, to learn whether the insects have mite-detecting powers. Other scientists are sequencing the genomes of huge numbers of bees, hoping to create a relatively cheap and easy way to identify bees that carry genes for the protective behaviors. Such a test “is almost the Holy Grail” of anti-Varroa research, Fernhout says.

      A success would help secure the future of the multibillion-dollar honey bee industry, which supplies honey and enables the large-scale pollination of high-value crops, such as almonds….

      Like other ruinous pests, Varroa started to cause trouble after it moved to a new host. One species, V. jacobsoni, is a long-standing parasite of Asian honey bees throughout their home range of southern Asia. It reproduces in the bees’ brood cells, where it feeds on the larvae, but it typically doesn’t destroy colonies. One reason is that the mite lays its eggs only on larvae that will become drones—the males that mate with queens—and hives produce only a few drones. If the mite does target the more numerous larvae of worker bees, they commit suicide (a process called social apoptosis), preventing the mite from reproducing. The natural process of starting a new colony, called swarming, also gives colonies a fresh start; when a queen and a swarm of workers abandon their old hive, they leave behind the reproducing mites as well.

      In the mid-20th century, after apiarists brought European colonies to Asia, the mite found its new host. The European bee, which beekeepers prefer for its large colonies and docile workers, generally lacked the Asian variety’s defenses. Breeders had selected against swarming behavior, for example, because keepers don’t like queens to abandon their hives. The mite quickly adapted to its new host, and it routinely infests the larvae of European worker bees. The result was a new strain of Varroa—defined as the new destructor species in 2000—which ran amok. It now afflicts European bees everywhere except Australia and a few islands.

      The role of pesticides such as neonicotinoids in honey bee die-offs is debated. But there is no question that the mites have been a major factor. V. destructor weakens both adult and larval bees by consuming their fat stores. The mite also spreads viruses, including a lethal one that deforms wings, preventing bees from flying. Parasitized colonies lose workers, make less honey, and often fail within a year if not treated.

      Modern beekeeping, which involves keeping hives in close proximity, appears to have accelerated the mite's spread. When one colony is collapsing, bees from others come to rob honey and also pick up mites and viruses….

      In theory, there is a simple solution: Don’t do anything and let natural selection eliminate bees that can’t resist the mite. It’s brutal, but it works. Populations of feral honey bees crashed after Varroa arrived, for example. Then, in a few places they began to recover, suggesting some colonies had defenses. Beekeepers who let their hives fend for themselves also saw results….

      Breeding mite-resistant bees is an increasingly appealing alternative….

      The worker taps its antennae on several cells, checking for chemical cues. Then it nibbles a hole in the mite-infested cell. Other bees, responsible for cleaning, will remove the larva from the hive, preventing the mite from reproducing. This behavior, called Varroa sensitive hygiene (VSH), is heritable and a key target for breeders….

      …they have experimentally infested more than 1500 colonies with mites, then selected queens from the colonies that were good at VSH. In the best hives, the bees were able to detect and remove every reproducing mite….

      And getting a resistance trait into a queen is just half the battle. Keeping it in her offspring is also a challenge. That’s because queens mate with multiple drones while flying up to 10 kilometers from their hive. The behavior provides beneficial genetic diversity to the colony but can undo a breeder’s efforts if a resistant queen mates with drones that lack mite-fighting genetics.

      Breeders can overcome this problem by artificially inseminating queens. It's a tricky technique that requires collecting semen from drones, anaesthetizing the queen with carbon dioxide, and then inseminating her under a microscope….

      Another approach is to send queens to isolated mating stations, where the only drones available for mating are ones brought there by beekeepers….

      Economics may also slow the adoption of resistant strains. The cost of treating and living with mites is low enough that many commercial beekeepers don’t see an advantage to buying improved, resistant queens. And many breeders—who can already sell every queen they produce—don’t have an incentive to invest in selecting for Varroa resistance. Researchers predict that will change if the mite continues to develop resistance to amitraz, now the pesticide of choice in many countries. “If amitraz fails,” Danka says, “the landscape changes overnight.”

      Fernhout and other breeders want to be ready for the eventuality. They are close, they believe, to creating a world in which mite-resistance genes are widespread in honey bee populations, and beekeepers can set aside their failing pesticides….

NYC to Gridlock: Drop Dead

[These excerpts are from an article by Shanti Menon in the Summer 2019 issue of EDF’s Solutions.]

      They called it the summer of hell. New York City’s underfunded transit system hit a crisis point in 2017, with commuter trains in disarray and 6 million subway riders facing 70,000 delays a month. One sultry June evening, a southbound F train stopped in a tunnel for 45 minutes. When it crawled into the station, windows fogged by body heat, desperate passengers clawed their way out like zombies. Above ground, vehicles in midtown averaged just 5 miles per hour — the slowest rate of any U.S. city, and barely faster than walking.

      The moment was ripe for a solution….We helped New York City become the first in the United States to adopt congestion pricing, a plan that imposes a fee for driving into the clogged city center. The proceeds — expected to reach at least $1 billion per year — will be invested in public transit. It’s a bold solution that addresses the city’s traffic, pollution and climate issues all at once, and it will help millions of New Yorkers breathe cleaner air. It also signals a way for other cities, such as Seattle and Los Angeles, to solve their transportation woes and meet climate goals.

      By charging vehicles to enter the most crowded parts of a city— in Manhattan, the proposed zone is south of 60th Street, with round-trip peak fees likely in the range of $12-$14 for cars — congestion pricing reduces traffic and the problems it creates, including lost time and money, climate pollution and damage to people’s lungs and quality of life.

      In other cities, the method has proved its worth. Traffic delays in Stockholm dropped 30% to 50% and the number of children hospitalized with asthma fell nearly 50%. In London, congestion pricing gives 8 million residents an extra 1,888 years of life expectancy from cleaner air, according to research from King’s College London….

      Details of New York’s plan have yet to be finalized, such as which transit improvements will get funded first….Electrifying the nation’s largest transit bus fleet could also help bring down prices for batteries nationwide.

      Seattle, Los Angeles and San Francisco are studying congestion pricing. Officials in Portland, Oregon and Philadelphia, too, are watching New York’s progress….

The New Era of Megafires

[These excerpts are from an article by Rod Griffin in the Summer 2019 issue of EDF’s Solutions.]

      Six months after the Camp Fire roared through Paradise in the Sierra foothills 90 miles north of Sacramento, the town still looks like a scene in a post-apocalyptic movie. There are signs of life — even patches of green — but the activity is mostly construction crews removing toxic debris….

      The inferno, the deadliest in California history, killed 85 people, destroyed 14,000 homes and caused $16 billion in damage. It all started with a spark from a high-voltage power line operated by PG&E, the state's largest utility. Facing multiple lawsuits, the utility declared bankruptcy in January. The Wall Street Journal called it “the first major corporate casualty of climate change….”

      Nowhere is fire risk more evident than the arid West, where tens of millions of acres are succumbing to drought and pine bark beetle infestation, providing fuel for fires. Of the 10 worst fires in California history, four have occurred since 2017 and 15 of the top 20 have been in the past two decades.

      Climate change is a clear culprit — or at least an accomplice. A 2016 study in Proceedings of the National Academy of Sciences found that climate change has doubled the number of acres burned in the West since the 1980s and that the fire season is 78 days longer than it was.

      Compounding the problem, there’s been rampant development in the wildland-urban interface, where buildings abut natural areas, posing greater fire risk to humans and making firefighting more difficult.

      In addition to the human toll, high-intensity wildfires destroy wildlife habitat, degrade water quality and pollute the air. During the Camp Fire, smoke traveled for hundreds of miles, with Northern California cities recording the worst air quality in the world….

      There are many causes for the uptick in destructive wildfires, but U.S. forest policy is one of them. For more than a century, the Forest Service has been on a mission to stamp out wildfires. The agency's chief spokesperson for the last 75 years has been Smokey Bear, whose stern message is: “Only YOU can prevent wildfires.”

      The Smokey Bear campaign, inspired by a scared bear cub found by firefighters in the Capitan Mountains in New Mexico during a wildfire, helped campers become more responsible, but it also reinforced the perception that all fire is bad. Many U.S. forests are not only adapted to burn periodically but depend on fire for rejuvenation and health. So does wildlife.

      Today, less than 1% of wildfires are allowed to burn. As a result, forests and grasslands are overgrown. That means that when fire does come, it’s more destructive. It kills more trees, torches more homes and sends far more carbon into the atmosphere, contributing to climate change.

      In addition, unsustainable logging practices removed the largest, most fire-resistant trees from many forests, leaving smaller trees and brush that serve as ladders to move fires from forest floor to canopy. Once a fire climbs to the treetops, known as a crown fire, it can move very rapidly. Some fires burn so hot they can sterilize huge swaths of land, depleting the soil of organic matter and increasing the risk of erosion, mudslides and toxic runoff.

      Following the Camp Fire, President Trump suggested that raking the forest floor may have helped prevent the conflagration. The solution is far more complex. Forest experts say the best hope for sustaining forests will be to thin areas with dead and declining trees and use controlled burns more frequently, restoring a more resilient forest….

      The problem, of course, is bigger than California. The U.S. Forest Service estimates that between 65 and 82 million acres, from Alaska to Florida, are in need of restoration on lands within its 193-million-acre forest and grassland system. Properly managed, forests absorb carbon dioxide from the atmosphere and are essential for watershed quality. Forest areas produce more than half of the nation’s drinking water.

      The challenge is how to pay for all this restoration. The U.S. Congress took an important step last year when it passed the Fire Funding Fix, which will allow the Forest Service to stop diverting funds away from forest management to pay for firefighting, a practice that consumes more than half of the agency’s budget (compared to 16% in 1995)….

Banana Fungus Puts Latin America on Alert

[These excerpts are from an article by Erik Stokstad in the 19 July 2019 issue of Science.]

      In a long-feared development, an extremely damaging banana disease has apparently reached Latin America. Late last week, the Colombian Agricultural Institute (ICA) in Bogota confirmed that four plantations in northern Colombia have been quarantined because of suspected infection with Fusarium wilt tropical race 4 (TR4), a fungus that kills plants by clogging their vascular system. Already widespread in Asia, the disease can wipe out entire plantations.

      The finding has yet to be confirmed, but countries in the region are on high alert. Neighboring Ecuador is the largest banana exporter in the world; Colombia, Costa Rica, and Guatemala are big producers as well. A major outbreak of TR4 could ruin many farmers and drive up banana prices globally….

      TR4 is a variant of Panama disease, which wiped out banana plantations across Latin America in the mid-20th century. The industry recovered after it replaced the most widely cultivated banana variety at the time, Gros Michel—also known as the Big Mike—with a new one, the Cavendish, that is resistant to Panama disease and now dominates the export industry.

      TR4, which easily overcomes the defenses of the Cavendish and many other banana varieties, emerged in Indonesia in the 1960s and has spread to many other countries since then. It surfaced in Jordan in 2013, in Mozambique 2 years later, and also in India, the world’s largest banana producer. Scientists dreaded its jump to the Americas, suspecting it was only a matter of time….

      In June, staff at a large Colombian banana plantation spotted suspicious symptoms on trees and alerted ICA. After an initial polymerase chain reaction test for the fungus DNA came back positive, ICA launched its contingency plan, closing four farms and destroying all plants within 10 meters of samples that tested positive. ICA officials also established checkpoints to disinfect vehicles and boots and expanded disease surveillance in another 1100 hectares. So far, samples from the wider area have come back negative….

Making Peace with Palm Oil

[These excerpts are from an article by Dyna Rochmyaningsih in the 12 July 2019 issue of Science.]

      …Oil palm (Elaeis guineensis) is one of the most controversial crops today, because the plantations often replace tropical rainforests rich in biodiversity, depriving iconic species such as the orangutan of their habitats. Vast swaths of Indonesia and Malaysia are given over to the crop. But Agung and a growing number of other scientists say it's time to work with oil palm companies—some of them in the crosshairs of environmental activists—to make the best of a bad situation….

      Some critics call the approach naive. By accepting industry funding—and using its giant plantations as laboratories—scientists risk losing their independence, they say, and they legitimize the companies’ business by giving it a veneer of sustainability….

      But scientists working with oil palm companies say they don't feel constrained scientifically, and they welcome the money….

      Palm oil is used in a staggering number of consumer products, from fast food, chocolate spread, and cereals to toothpaste and dog chow. It is also a source of biodiesel. Some 90% of the global supply comes from Indonesia and Malaysia, where plantations cover 17 million hectares, almost half the area of Germany. Growing demand is pushing the industry into Africa and South America….

      Policies to stem the tide have not worked very well….

      Yet banning palm oil would not end biodiversity loss, according to a 2018 report by the International Union for Conservation of Nature (IUCN); it would only displace it to other parts of the globe and possibly worsen it. One hectare of tropical land can produce 4 tons of oil annually, at least four times the yield of 1 hectare of rapeseed, sunflowers, or soybeans planted in temperate regions. Unlike those crops, the oil palm is a tree that can live up to 25 years—enough for a diverse ecosystem to thrive in a plantation, if growers allow it….

Gut Microbes May Help Malnourished Children

[These excerpts are from an article by Elizabeth Pennisi in the 12 July 2019 issue of Science.]

      Even after starving children get enough to eat again, they often fail to grow. Their brains don’t develop properly, and they remain susceptible to diseases, even many years later. Two studies…now suggest fostering the right gut microbes may help these children recover. The work also pinpoints combinations of foods that nurture the beneficial microbes.

      Most of the experiments were in animals, but a small group of malnourished children given those foods also showed signs of improvement….

      Together, their teams reported in 2014 that the gut microbiome normally “matures” as an infant grows into a toddler. They also noticed that it remains immature in severely malnourished children, dominated by bacteria found in younger healthy children. Two years later, the researchers put mature and immature microbiomes from children into mice raised without microbes. Animals given the immature microbiomes put on less muscle, had weaker bones, and had impaired metabolisms, suggesting a mature microbiome might be needed for proper development….

Solar plus Batteries Is Now Cheaper than Fossil Power

[These excerpts are from an article by Robert F. Service in the 12 July 2019 issue of Science.]

      This month, officials in Los Angeles, California, are expected to approve a deal that would make solar power cheaper than ever while also addressing its chief flaw: It works only when the sun shines. The deal calls for a huge solar farm backed up by one of the world's largest batteries. It would provide 7% of the city's electricity beginning in 2023 at a cost of 1.997 cents per kilowatt hour (kWh) for the solar power and 1.3 cents per kWh for the battery. That's cheaper than any power generated with fossil fuel.

      …As if on cue, last week a major U.S. coal company—West Virginia-based Revelation Energy LLC—filed for bankruptcy, the second in as many weeks….

      Precipitous price declines have already driven a shift toward renewables backed by battery storage. In March, an analysis of more than 7000 global storage projects…reported that the cost of utility-scale lithium-ion batteries had fallen by 76% since 2012, and by 35% in just the past 18 months, to $187 per MWh....Another market watch firm…predicts a further halving by 2030….

      Large-scale battery storage generally relies on lithium-ion batteries—scaled-up versions of the devices that power laptops and most electric vehicles. But…batteries are only part of the energy storage answer, because they typically provide power for only a few hours….

      Local commitments to switch to 100% renewables are also propelling the rush toward grid-scale batteries….54 countries and eight U.S. states have required a transition to 100% renewable electricity. In 2010, California passed a mandate that the state's utilities install electricity storage equivalent to 2% of their peak electricity demand by 2024….

Balancing Science and Security

[These excerpts are from an editorial by Mary Sue Coleman in the 12 July 2019 issue of Science.]

      Federal elected officials and members of the United States intelligence community have expressed concern about the security of the nation’s scientific and technological information, articulating worries about China, Russia, and Iran. The fears include potential academic espionage, theft of intellectual property, and threats to academic integrity. As federal policy-makers respond, it is critical that they work with the scientific community to balance securing strategically important information with maintaining the free flow of fundamental scientific knowledge and international talent necessary for scientific progress. History is a guide to striking this balance.

      During the Cold War, the U.S. security community raised similar concerns about the Soviet Union….

      Building on those discussions, in 1982 the U.S. National Academy of Sciences released Scientific Communication and National Security. Citing this report, President Reagan issued National Security Decision Directive 189 (NSDD 189) in 1985. NSDD 189 states that to the maximum extent possible, the products of basic and applied research funded by the federal government should be published and widely disseminated, and that classification should be used in those limited circumstances when controlling scientific information is necessary to protect national security. NSDD 189 was reaffirmed in 2001 by Secretary of State Condoleezza Rice during the George W. Bush administration….

      The core principles underlying NSDD 189 are now threatened. Legislative proposals, such as that introduced recently in Congress by Sen. J. Hawley (R-MO), would impose new limitations on who can work on, and what information can be shared about, unclassified research projects deemed by government bureaucrats to be “sensitive”—a category that actually does not exist under current rules. If enacted, this proposal would negatively affect universities’ ability to engage in scientific research on behalf of the U.S. government.

      A more effective approach to address the current security concerns is contained in the Securing American Science and Technology Act, introduced in May by Rep. M. Sherrill (D-NJ) and Rep. A. Gonzalez (R-OH). This legislation, now part of a larger bill, establishes an interagency working group under the existing authority granted to the White House Office of Science and Technology Policy’s National Science and Technology Council….

      For American science to advance, basic and applied research must be openly and widely shared. At the same time, the United States must continue to benefit—as it has for decades—from the world's best and brightest scholars coming to the country to study and work. Indiscriminate restrictions on either could do irreparable harm to the U.S. scientific enterprise.

The Freedom to Teach

[These excerpts are from an article by Randi Weingarten in the Summer 2019 issue of American Educator.]

      Consider what teachers have recently said about why they teach:

      “I teach because I want to change the world, one child at a time, and to show them to have passion and wonder in their learning.”

      “I teach so the next generation will question—everything. The classroom should be a place where we set children's minds free.”

      “I teach because our democracy cannot survive without citizens capable of critical analysis."

      Why I felt called to teach is best summed up by this poster I have moved from office to office since I taught in the 1990s: “Teachers inspire, encourage, empower, nurture, activate, motivate, and change the world.”

      Teaching is unlike any other profession in terms of mission, importance, complexity, impact, and fulfillment. Teachers get the importance of their work. So do parents and the public. But teachers know that some people don’t get it—whether it's the empty platitudes, or the just plain dissing. And this has taken a huge toll.

      Teachers and others who work in public schools are leaving the profession at the highest rate on record. There were 110,000 fewer teachers than were needed in the last school year, almost doubling the shortage of 2015. All 50 states started the last school year with teacher shortages.

      This is a crisis, yet policymakers have largely ignored it.

      And it’s getting worse. Enrollment in teacher preparation programs is plummeting—dropping 38 percent nationally between 2008 and 2015….

      The teacher uprisings of the last two years have laid bare the frustration over insufficient resources, deplorable facilities, and inadequate pay and benefits for educators. In what President Trump calls the “greatest economy ever,” 25 states still spend less on public education than they did a decade ago….

      In 38 states, teacher salaries are lower than before the Great Recession. Research from the Economic Policy Institute shows that teachers are paid 24 percent less than other college graduates….

      Inadequate funding for education is sometimes the result of weak economies. But more often, it is a deliberate choice—to cut funds for the public schools 90 percent of our students attend—in order to finance tax cuts for corporations and the superrich or to siphon off funds for privatization…

      Ask teachers why they leave the profession. It’s not just underfunding. Teachers are frustrated, demoralized, and really stressed. The lack of classroom autonomy and discretion supercharge that dissatisfaction….

What, Us Worry?

[These excerpts are from an article by Steve Mirsky in the July 2019 issue of Scientific American.]

      …We and the world are facing big problems, and Diamond points out that we’re never going to solve those problems without acknowledging their existence. In fact, he sets up his arguments by examining how individuals in personal crises do or do not deal with those situations successfully and then drawing analogies, when possible, to countries.

      In such a framework, a decision by a smiling Senator James Inhofe of Oklahoma in 2015 to display a snowball on the Senate floor to somehow refute the reality of climate change could be considered a symptom of a national delusional disorder.

      Of course, that disorder has really bloomed in the years since. “Not enough American citizens and politicians take our current major problems seriously,” Diamond writes, regarding the deterioration of political compromise, the increase in incivility, tainted elections (including by voter suppression) and economic inequality. (Climate change is in the section on global threats.)

      …The notion of exceptionalism dates to Alexis de Tocqueville in the 19th century and originally covered the country’s democracy and personal freedoms. But in more recent times it often seems (especially if you tune for a moment to Fox News) like exceptionalism has come to signify a belief that the U.S. is simply special—and shame on you if you question that specialness.

      Nevertheless, Diamond notes, “although per-capita income is somewhat higher in the U.S. than in most European countries, life expectancy and measures of personal satisfaction are consistently higher in Western Europe. That suggests that Western European models may have much to teach us.”

      …”Skepticism about science is increasingly widespread in the U.S., and that's a very bad portent, because science is basically just the accurate description and understanding of the real world.” But as the muckraking writer Upton Sinclair put it in 1934, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.” Especially if that man is a U.S. senator.

Broken Promises

[This excerpt is from an article by Rowan Moore Gerety in the July 2019 issue of Scientific American.]

      Madagascar broke free of the land that makes up Africa and India nearly 100 million years ago. Across the eons, evolution in isolation has given the island unparalleled ecological richness: Four out of five plants and animals there are found nowhere else, the sweeping cast of characters in a wide array of highly specialized symbiotic niches. The country’s 83 species of screw pine alone serve as breeding grounds for dozens of different reptiles and amphibians. But the ballet between this particular tree and frog is now confined to a tiny collection of forest fragments….

      Rio Tinto came to Madagascar in the 1980s, looking for ilmenite, a mineral used to make titanium dioxide, which provides the white pigment found in products ranging from paint and plastics to toothpaste. Test pits hit pay dirt near Tolagnaro (Fort Dauphin), at the southeastern tip of the island. The ilmenite deposits that interest the company lie underneath the remnants of dense evergreen forests that once grew on sand dunes along most of Madagascar’s eastern coast, forming a continuous band covering perhaps 465,000 hectares. Since human colonization of the island some 2,000 years ago, these littoral forests, as they are known, have dwindled to at most 10 percent of their original expanse. As such, Rio Tinto’s concession weaves through one of the most threatened ecosystems on the planet.

      …When ilmenite prices slumped during the Great Recession, Rio Tinto’s priorities shifted, and by 2016, the company reneged on its grand conservation promise. Instead it adopted the vague goal of avoiding making things too much worse. Today mining near Mandena is poised to extinguish this biodiversity hotspot. For the people who live there and dozens of endemic species such as the ring-wearing tree frog, destiny now turns on the outcome of this long-running experiment, a test case for industry’s role in conservation and the role conservationists can play in the mining industry….

      Rio Tinto discovered ilmenite near Tolagnaro in 1986. At the time, the forests in the region were already heavily fragmented and degraded by human activity. But the company’s prospecting soon brought new roads to the area and an influx of people looking for work, hastening the deforestation underway for charcoal production and new farmland to supply the growing city.

      Rio Tinto determined that the region around Tolagnaro contained some 70 million metric tons of ilmenite—enough to supply about 10 percent of the global market for a decade or more—and began to make a plan for extracting it. The company set its sights on three mineral-rich areas along the coast encompassing a total of approximately 6,000 hectares. Mining would start at the 2,000-hectare site in Mandena and eventually expand north to Sainte Luce and to Petriky farther south. The extraction would continue for the life of the mine—about 60 years from the date of first production, according to the company’s projections. Rio Tinto estimated that in the end the project would result in the loss of 1,665 hectares, or 3.5 percent, of Madagascar’s remaining littoral forest….

      Despite Rio Tinto’s support for ecological research in Madagascar, by the early 2000s the company’s global track record had earned it a reputation as an unscrupulous actor in a heavily polluting industry. In Papua New Guinea, where Rio Tinto had developed a giant copper mine in the 1980s, protests brought on by the company’s disparate treatment of white foreigners and local workers forced the mine’s closure and helped to spark a civil war. Thirty years later Rio Tinto is gone, but pollution from the shuttered Panguna mine will still cost an estimated $1 billion to clean up….

“Emotional AI” Sounds Appealing

[This excerpt is from an article by Zeynep Tufekci in the July 2019 issue of Scientific American.]

      …And right out of the gate, advertisers and marketers have jumped on this technology. For example, Coca-Cola has hired a company called Affectiva, which markets emotion-recognition software, to fine-tune ads. As usual, money is driving this not so noble quest: research shows that ads that trigger strong emotional reactions are better at getting us to spend than ads using rational or informational approaches. Emotional recognition can also be used in principle for pricing and marketing in ways that just couldn't be done before. As you stand before that vending machine, how thirsty do you look? Prices may change accordingly. Hungry? Hot dogs may get more expensive.

      This technology will almost certainly be used along with facial-recognition algorithms. As you step into a store, cameras could capture your countenance, identify you and pull up your data. The salesperson might get discreet tips on how to get you to purchase that sweater—Appeal to your ego? Capitalize on your insecurities? Offer accessories and matching pieces?—while coupons customized to lure you start flashing on your phone. Do the databases know you have a job interview tomorrow? Okay, here’s a coupon for that blazer or tie. Are you flagged as someone who shops but doesn't buy or has limited finances? You may be ignored or even tailed suspiciously.

      One potential, and almost inevitable, use of emotion-recognition software will be to identify people who have “undesirable” behaviors. As usual, the first applications will likely be about security. At a recent Taylor Swift concert, for example, facial recognition was reportedly used to try to spot potential troublemakers. The software is already being deployed in U.S. airports, and it’s a matter of time before it may start doing more than identifying known security risks or stalkers. Who’s too nervous? Who's acting guilty?

      In more authoritarian countries, this software may turn to identifying malcontents. In China, an app pushed by the Communist party has more than 100 million registered users—the most downloaded app in Apple’s digital store in the nation. In a country already known for digital surveillance and a “social credit system” that rewards and punishes based on behavior the party favors or frowns on, it’s not surprising that so many people have downloaded an app that the New York Times describes as “devoted to promoting President Xi Jinping.” Soon people in China may not even be able to roll their eyes while they use the app: the phone's camera could gauge their vivacity and happiness as they read Xi’s latest quotes, then deduct points for those who appear less than fully enthusiastic.

      It’s not just China: the European Union is piloting a sort of “virtual agent” at its borders that will use what some have called an “AI lie detector.” Similar systems are being deployed by the U.S. government. How long before companies start measuring whether customer service agents are smiling enough? It may seem like a giant leap from selling soda to enforcing emotional compliance, and there can certainly be some positive uses for these technologies. But the people pushing them tend to accentuate the positive and downplay the potential downside. Remember Facebook’s feel-good early days…?

How Matter Becomes Mind

[These excerpts are from an article by Max Bertolero and Danielle S. Bassett in the July 2019 issue of Scientific American.]

      Networks pervade our lives. Every day we use intricate networks of roads, railways, maritime routes and skyways traversed by commercial flights. They exist even beyond our immediate experience. Think of the World Wide Web, the power grid and the universe, of which the Milky Way is an infinitesimal node in a seemingly boundless network of galaxies. Few such systems of interacting connections, however, match the complexity of the one underneath our skull.

      Neuroscience has gained a higher profile in recent years, as many people have grown familiar with splashily colored images that show brain regions “lighting up” during a mental task. There is, for instance, the temporal lobe, the area by your ear, which is involved with memory, and the occipital lobe at the back of your head, which dedicates itself to vision.

      What has been missing from this account of human brain function is how all these distinct regions interact to give rise to who we are….In the most fundamental sense, what the brain is—and thus who we are as conscious beings—is, in fact, defined by a sprawling network of 100 billion neurons with at least 100 trillion connecting points, or synapses.

      Network neuroscience seeks to capture this complexity….

      To understand how networks underlie our cognitive capabilities, first consider the analogy of an orchestra playing a symphony. Until recently, neuroscientists have largely studied the functioning of individual brain regions in isolation, the neural equivalent of separate brass, percussion, strings and woodwind sections. In the brain, this stratification represents an approach that dates back to Plato—quite simply, it entails carving nature at the joints and then studying the individual components that remain.

      Just as it is useful to understand how the amygdala helps to process emotions, it is similarly vital to grasp how a violin produces high-pitched sounds. Still, even a complete list of brain regions and their functions—vision, motor, emotion, and so on—does not tell us how the brain really works. Nor does an inventory of instruments provide a recipe for Beethoven’s Eroica symphony.

      …Put another way, just as a collection of instruments is not music, an assemblage of wires does not represent brain function….

      At any moment, though, some areas within the three-pound organ are more active than others. We have all heard the saying that people use a small fraction of their brain capacity. In fact, the entire brain is active at any point in time, but a given task modulates the activity of only a portion of the brain from its baseline level of activity.

      That arrangement does not mean that you fulfill only half of your cognitive potential. In fact, if your entire brain were strongly active at the same time, it would be as if all the orchestra members were playing as loudly as possible—and that scenario would create chaos, not enable communication. The deafening sound would not convey the emotional overtones present in a great musical piece. It is the pitch, rhythms, tempo and strategic pauses that communicate information, both during a symphony and inside your head.

      …Imagine a scenario in which every musician in an orchestra had to change the notes played every time another musician changed his or her notes. The orchestra would spiral out of control and would certainly not produce aesthetically pleasing sounds. Processing in the brain is similar—each module must be able to function mostly independently. Philosophers as early as Plato and as recent as Jerry Fodor have noted this necessity, and our research confirms it.

      Even though brain modules are largely independent, a symphony requires that families of instruments be played in unison….Watching a movie with only a brain module for vision—without access to the one for emotions—would detract greatly from the experience.

      For that reason, to complete many cognitive tasks, modules must often work together….

      Although our brains have certain basic network components—modules interconnected by hubs—each of us shows slight variations in the way our neural circuits are wired. Researchers have recently devoted intense scrutiny to this diversity….

      …brain-connectivity patterns establish a “fingerprint” that distinguishes each individual. People with strong functional connections among certain regions have an extensive vocabulary and exhibit higher fluid intelligence—helpful for solving novel problems—and are able to delay gratification. They tend to have more education and life satisfaction and better memory and attention. Others with weaker functional connections among those same brain areas have lower fluid intelligence, histories of substance abuse, poor sleep and a decreased capacity for concentration.

      …If your brain network has strong hubs with many connections across modules, it tends to have modules that are clearly segregated from one another, and you will perform better on a range of tasks, from short-term memory to mathematics, language or social cognition. Put simply, your thoughts, feelings, quirks, flaws and mental strengths are all encoded by the specific organization of the brain as a unified, integrated network. In sum, it is the music your brain plays that makes you you….

      Transformations in both structural and functional connectivity are important during adolescent brain development, when the finishing touches of the brain’s wiring diagram are being refined. This period is of critical importance because the first signs of mental disorders often appear in adolescence or early adulthood.

      One area our research relates to is understanding how brain networks develop through childhood and adolescence and into adulthood. These processes are driven by underlying physiological changes, but they are also influenced by learning, exposure to new ideas and skills, an individual's socioeconomic status and other experiences.

      …We have also found that the extent to which modules segregate from one another is more rapid in children who have a higher socioeconomic status, highlighting the key impact of their environment.

      …The areas identified as hubs are also the locations in the human brain that have expanded the most during evolution, making them up to 30 times the size they are in macaques. Larger brain hubs most likely permit greater integration of processing across modules and so support more complex computations….

      This simulation demonstrates that one potential solution to evolving a brain capable of exchanging information among modules requires hubs with strong connections. Notably, real networks—brains, airports, power grids—also have durable, tightly interconnected hubs, exactly as predicted by evolutionary experiments. That observation does not mean evolution necessarily occurred in the same way as the simulation, but it shows a possible means by which one of nature’s tricks might operate….

The Grand Story of Carbon

[This excerpt is from a book report by Nicola Pohl in the 5 July 2019 issue of Science.]

      Although organic chemistry is often described as the science of carbon, Robert Hazen's latest book, Symphony in C, makes clear that this vital element cannot be contained by such a disciplinary boundary.

      Despite its abundance and importance, the location and cycling of carbon on Earth are not yet well understood. Ever-increasing atmospheric concentrations of its dioxide form lend urgency to a more accurate accounting of this element. However, it is Hazen’s enthusiasm, the string of shareable facts presented, and the introduction of so many interesting scientists that make this book such a fascinating read.

      …In the first section, “Earth”—the most well-developed of the book—Hazen discusses carbon-based minerals, offering a wonderful account of how mineralogy has gradually turned from a purely observational science to one that can predict missing carbon-containing minerals….

      The extreme pressures and temperatures hundreds of miles below Earth’s surface form denser crystalline minerals, but almost none of the ~400 currently known carbon minerals are in high-pressure phases. Hazen has been fascinated by experimental efforts to mimic these extreme pressures since he was a graduate student, and he details the methods being used to reach 80,000-fold or greater atmospheres of pressure. Such high pressures do not lead only to simple structures, we learn. Additional surprises await when Hazen explores deeper toward Earth’s core.

      The second section, “Air,” starts with a very brief history of Earth before discussing carbon in the air—primarily carbon dioxide—and its cycling through the atmosphere. Hazen finishes this section with a wonderful discussion of the biases that humans, including scientists, have and the challenges they bring to forming a more integrated picture of the world….

      “Fire” is devoted to the many ways we manipulate carbon, using it to create plastic materials, for example, as well as other products of modern organic chemistry….

      The final pages of the book end with a very personal look at what Hazen calls the “human carbon cycle”—exploring how we consume and expel carbon—and examines our responsibility to our carbon-based home….

Quest for Fire

[These excerpts are from an article by Robert F. Service in the 5 July 2019 issue of Science.]

      …His elixir is gasoline. Nearly everyone in the developed world is hopelessly addicted to it. Collectively, we use nearly 3 trillion liters every year….

      But before the demo, the machine sprang a leak. Although it wasn’t operating at the pitch fest, McGinnis’s optimism was. He promised audience members that the repaired device would extract carbon dioxide (CO2) from the air, add it to water, and use a catalyst to rearrange the chemical bonds to make hydrocarbons. The result: fossil fuel without the fossils. “It can sound like magic, but it’s really just chemistry,” McGinnis told the audience.

      Synthesizing gasoline, instead of refining it from oil, isn’t a new idea. German chemists in the 1920s discovered they could turn coal into carbon monoxide (CO) and hydrogen—a combination known as synthesis gas. Catalysts, along with heat and pressure, could then transform synthesis gas into gasoline and other liquid hydrocarbons.

      But McGinnis’s setup requires no heat, pressure, or coal. It uses only air, water, and electricity, which can come from the sun or wind. And with those renewable resources becoming ever cheaper, he's betting he can deliver gasoline more economically—and far more cleanly—than companies that must find oil, drill for it, ship it, and refine it.

      Several other startups and academic labs are pursuing the same dream….

      Yet many of those efforts have stumbled over the expensive, energy-intensive steps needed to separate the hydrocarbons from the water they are produced in. Prometheus relies instead on a proprietary carbon nanotube membrane sieve that it says readily parts the hydrocarbons from water….

      Once the machine is working efficiently, electricity will make up about one-third of its operating costs. Renewable electricity prices around the globe are falling, however, and they already sink near zero at certain times of the day in places where the sun blazes or the wind howls. Prometheus, McGinnis says, could easily ramp its electricity demands up and down to take advantage of the lowest rates, and the machines could be sited wherever renewable power is cheapest….

      Even if all goes according to plan, McGinnis will face a long road to compete with the likes of ExxonMobil. He’ll have to prove he can build a fuelmaker cheaply enough to make its gasoline affordable. That could be tough if turning it on makes sense only when renewable electricity prices bottom out. The fuelmaker also works only with a source of clean water. And before he can market his invention, he’ll need to prove that his fuels can directly substitute for fossil-derived versions….

FDA Enforcement Actions Plummet under Trump

[These excerpts are from an article by Charles Piller in the 5 July 2019 issue of Science.]

      From monitoring clinical trials and approving medicines and vaccines, to ensuring the safety of blood transfusions, medical devices, groceries, and more, the U.S. Food and Drug Administration (FDA) is one of the nation's most vital watchdogs. By several measures, however, FDA’s compliance and enforcement actions have plummeted since President Donald Trump took office….

      The agency’s “warning letters”—a key tool for keeping dangerous or ineffective drugs and devices and tainted foods off the market—have fallen by one-third, for example. Such letters typically demand swift corrections to protect public health and safety….

      Warnings from the FDA Center for Devices and Radiological Health, which helps ensure the safety and quality of medical devices, and from some of the agency's district offices—including Philadelphia, Florida, and New York—have dropped even more steeply, by more than two-thirds. Two district offices have not issued a warning in more than 2 years. The numbers don’t just reflect a new administration's slow start. FDA sent significantly fewer warning letters in the second year of Trump’s presidency than in his first.

      FDA watchers say they can’t pinpoint what’s driving the decline, but they are alarmed….

      Several other FDA actions under Trump show similar declines when measured against the end of the Obama administration. FDA inspection reports labeled “official action indicated”—typically a trigger for warning letters or similar actions—have fallen by about half under Trump and are continuing to trend downward….

Contrails Threaten Climate

[This news brief is in the 5 July 2019 issue of Science.]

      Carbon emissions from aviation are a well-known contributor to climate warming. But another byproduct of planes—the white contrails they paint across the sky—has an even bigger effect. Contrails form when water vapor in a plane's exhaust freezes, and unlike low-level clouds that reflect sunlight to cool the atmosphere, they trap heat radiated from Earth's surface, warming it. A new model described last week in Atmospheric Chemistry and Physics is the first to classify contrail clouds separately from natural ones. Researchers predicted a threefold increase in contrail warming effects by 2050. They also estimated that a 50% reduction in airplane soot, which promotes the formation of contrails, would lead to a 15% decrease in those effects. But such reductions are unlikely; global air traffic is surging, and most pollution-reduction plans in the industry fail to consider climate impact beyond carbon dioxide emissions.

Chinese Scientists and Security

[These excerpts are from an editorial by Elias Zerhouni in the 5 July 2019 issue of Science.]

      …We should remember that for years, scientific exchanges and collaborations with China were encouraged by U.S. policy-makers, including implicit support of China’s Thousand Talents Program. Chinese-born as well as American-born federally funded scientists were publicly offered various positions in China over the years without opposition by relevant institutions. The “rules,” now presented and enforced as severe violations of U.S. ethics and intellectual property regulations, were not rigorously implemented by officials at many U.S. institutions. The consternation, sense of targeted discrimination, and fear in the Chinese-American scientific community are thus understandable.

      U.S. science and technology are a cornerstone of competitiveness and preeminence in the world, and the security and protection of this vital enterprise are of paramount importance. But unlike most other countries, the United States relies heavily on attracting the best and brightest in the world to its ecosystem of innovation because of an insufficient number of graduates in science, technology, engineering, and mathematics fields relative to the size and technological intensity of its economy. Would it be in the national interest to risk losing all or some of the extraordinarily productive Chinese-American scientific diaspora trained and supported for years in the United States?

      …most Chinese-born scientists prefer to stay in the United States for personal and family preferences. One can imagine the glee of Chinese officials at the prospect of thousands of Chinese-American scientists, whom they were unable to recruit until now, relocating back to China. So, what should be done?

      …No members of any community should be targeted because of their origins, rather than for what they may have intentionally and demonstrably done to harm U.S. science and technology. The United States should not risk losing critical intellectual assets such as productive foreign-born scientists and engineers to global competitors to serve short-term security concerns at the expense of long-term national interests.

Genomic Surveillance for Malaria

[These excerpts are from an article by Ify Aniebo in the July 2019 issue of Scientific American.]

      In 2018 the World Health Organization proposed a “10+1” initiative for malaria control and elimination that targets 10 African countries plus India, which together host 70 percent of global cases. Although this approach is promising, it is missing an important component: genomic surveillance. Drug resistance threatens all of the progress made so far against malaria, but genomic surveillance can detect resistance years before the first warning signs appear in clinics. It can answer critical questions about how resistance emerges and spreads and can help control the balance of interventions, preserve the useful life of already existing drugs and ensure effective treatment.

      …This genomic information can help malaria-control programs use quality data sets for regular monitoring of drug resistance, provide evidence-based decision-making around malaria policy and assist in managing the spread of resistance.

      The countries most affected by malaria all had a first-line drug that ended up becoming ineffective. In African countries, toward the end of the 20th century, chloroquine was the drug of choice, but malaria parasites grew resistant to it. That drug was then replaced with a combination of pyrimethamine and sulfadoxine in the early 2000s, and resistance again occurred. Now the parasites are becoming resistant to the current first-line artemisinin-based combination therapies (ACTs)….

      …The more drugs we use to treat malaria parasites, the more resistant they become as a result of selective pressure, which creates the preconditions for resistance. Because we know this biological response from the parasites is inevitable, we should put in place measures to track down these changes when they arise: doing so would help us prevent the spread of the disease, investigate emergence of resistance and subsequently preserve the efficacy of the current first-line antimalarial treatment.

      With advances in genomic technology, scientists have been able to analyze malaria parasites from the patients carrying them and the mosquitoes transmitting them. Such analysis has become a source of relevant information for both drug and insecticide resistance. Research shows that genomic surveillance has helped us understand how different mosquito species arise and transmit malaria to humans, which in turn has led to a better targeting of interventions as vectorial capacity becomes better understood.

      Such surveillance has enabled greater knowledge of changing transmission intensity and parasite gene flow, including drug-resistant genes, and has aided in quantifying the risks of importing malaria from a country that is burdened with the disease. But work using genomic surveillance as a tool has mostly transpired within the realm of research, with only a few examples of its application in the field where malaria burden remains high.

      Genomic surveillance has been used in countries that have eliminated malaria to prevent its resurgence and in countries that are in a malaria-elimination phase. It should not be any different for the African countries that have the highest malaria burden. Lessons learned from poliomyelitis show that genomic surveillance played a huge role in controlling the infection.

Lucky Charms

[These excerpts are from an article by Clara Moskowitz in the July 2019 issue of Scientific American.]

      We could have been living in an antimatter universe, but we are not. Antimatter is matter’s upside-down twin—every matter particle has a matching antimatter version with the opposite charge. Physicists think the cosmos started out with just as much antimatter as matter, but most of the former got wiped out. Now they may be one step closer to knowing why.

      Researchers…have discovered antimatter and matter versions of “charm” quarks—one of six types, or flavors, of a class of elementary matter particles—acting differently from one another….

      Matter and antimatter annihilate each other on contact, and researchers believe such collisions destroyed almost all of the antimatter (and a large chunk of the matter) that initially existed in the cosmos. But they do not understand why a relatively small excess of matter survived to become the stars and planets and the rest of the cosmos. Consequently, physicists have been looking for a kind of matter that behaves so differently from its antimatter version that it would have had time to generate this excess in the early universe.

      The newly discovered mismatch in decay rates between charm quarks and antiquarks turns out to be too small to account for the universe’s excess of matter….

      Physicists previously found similar variations in two other quark flavors, but those were also too tiny to account for our matter-dominated universe. Scientists are holding out hope of finding much larger matter-antimatter differences elsewhere, such as in ghostly particles called neutrinos or reactions involving the Higgs boson—the particle that gives others mass….

Science Research Pays Off

[These excerpts are from an article by Peter Dizikes in the July/August 2019 issue of MIT News.]

      The Human Genome Project helped scientists make the first complete map of human genes. But the $3 billion, 15-year project was also a remarkably successful investment: in 2012, human genome sequencing accounted for an estimated 280,000 jobs, $19 billion in personal income, $3.9 billion in federal taxes, and $2.1 billion in state and local taxes, all for just $2 per year per US resident….

      It’s hardly the only science-based payoff out there. Every $10 million in public science funding granted to the National Institutes of Health produces 2.7 patents and an additional $30 million in value for the private-sector firms that own them; meanwhile, each dollar in publicly funded military R&D leads to another $2.50 to $5.90 in private-sector investment.

      Once upon a time, the US steadily increased public investment in science. In 1938, the country allocated under one-tenth of 1% of its national income to science research. By 1944, the war had boosted that to 0.5%, and by 1964, 2% of GDP was invested in science research. But today, that figure has dropped to 0.7% of GDP, and overall growth rates have lagged compared with the first postwar decades.

Science in a Nutshell

[These excerpts are from an article by David L. Chandler in the July/August 2019 issue of MIT News.]

      A team of scientists at MIT and elsewhere has developed a neural network that can read scientific papers and render a brief plain-English summary. Such a system could help editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they’re about. And the approach could also be used in machine translation and speech recognition….

      Neural networks mimic one way humans learn: the computer examines many different examples and identifies the key underlying patterns. While widely used for pattern recognition, such systems often have difficulty correlating information from a long string of data, such as a research paper. Other techniques used to improve this capability—including one called long short-term memory (LSTM)—can't handle natural-language processing tasks that require really long-term memory.

Slow-Motion Extinction

[These excerpts are from an article by Rachel Nuwer in the July 2019 issue of Scientific American.]

      Nearly four decades ago zoologist Michael Thompson, then at the University of Adelaide in Australia, made an alarming discovery: invasive red foxes were gobbling up more than 90 percent of all the turtle eggs laid along the banks of Australia’s Murray River. Thompson’s surveys also revealed a disproportionate number of older turtles, suggesting that fox predation had already reduced the amount of juveniles in the river. If no one took action, he warned, the formerly abundant turtles would eventually disappear.

      Very little was done, and Thompson’s prediction now appears to be on its way to coming true. A recent study confirms that several turtle species have either drastically declined or disappeared from various sections of the Murray River….

      …The researchers inferred the species' population sizes from the number of individuals they trapped in a given amount of time. They found the turtles have been extirpated in places where they were previously abundant, and most of the specimens they managed to capture elsewhere were large—and likely old—adults. Spencer and his colleagues blame the losses on ongoing nest predation by foxes, compounded by other problems, including environmental degradation and severe drought in the 2000s….

      The turtles could recover quickly if action is taken to protect nests from foxes and restore habitat….But governments tend to respond only when losses reach crisis levels, and the Murray River species currently lack federal protection, he says.

Millions without Water as Reservoirs in India Run Dry

[This brief is from the 28 June 2019 issue of Science.]

      The four reservoirs that supply Chennai, India’s sixth largest city, have run dry, plunging its estimated 10 million residents into an acute water shortage. Clashes broke out in the streets last week as tens of thousands of people stood for hours in scorching heat to collect water trucked in by the government; many businesses and restaurants were forced to close. Weakening monsoons and rising temperatures have emptied the reservoirs, experts say, while groundwater supplies have been depleted by overuse. Chennai’s plight—one of many water emergencies hitting cities from Cape Town, South Africa, to Sao Paulo, Brazil—is part of the worst water crisis in India’s history, with an estimated 600 million people facing acute scarcity. What water exists often isn’t safe to drink—some 70% of groundwater is contaminated with iron, fluorides, arsenic, or nitrates, killing nearly 200,000 people each year, according to government reports. And things may soon get worse: Forecasts suggest 40% of India's population will have no access to drinking water by the year 2030.

Can Cannabis Fix the Opioid Crisis?

[These excerpts are from an article by Jonathan N. Stea in the July 2019 issue of Scientific American.]

      Cannabis has been hailed as a potential magic bullet in the fight against all sorts of ills, including chronic pain and depression. But it has also been called the “devil’s lettuce,” with claims that using it will lead to laziness, insanity and even murder. These polarized views can, in part, be explained by the drug’s complexity: cannabis is not a single substance but rather a mixture of more than 500 individual chemicals whose proportions vary from one plant strain to another.

      Because cannabis is such a complicated chemical soup, until recently most often prepared for the black market, it has been difficult to draw clear research conclusions about whether the substance harms or helps. This assessment is particularly true in the area of addiction and mental health, where advocates believe that the drug could be the white knight of the opioid epidemic.

      …And the critics are correct: there have been no randomized controlled trials—the gold standard for testing drug effects—that have evaluated cannabis specifically for treating opioid addiction….

      …substituting cannabis for opioid addiction therapies could be harmful because it would displace already established treatments, such as methadone and buprenorphine—which could be life-threatening. At this time, offering cannabis as a treatment for opioid addiction is not consistent with the practice of evidence-based medicine.

      But such evidence is beginning to emerge….Cannabis is less dangerous than illicit opioids to both the individual and society at large. While there is a small chance that substituting a less harmful drug for a more harmful one could simply lead to a new addiction, this approach might well be a risk worth taking.

      One issue complicates the equation: it's unclear if cannabis can help people who experience opioid addiction and chronic pain….The effectiveness of cannabis for pain management is by no means proved: research on this question so far is relatively weak—but that could be said for most work on a drug scientists have been discouraged from studying by the government. The case is by no means closed….

      Despite the hype, it is absurd to think cannabis can be a remedy for all aspects of the human condition. There is, however, good reason to believe that future research will support a helpful role for it in the treatment of opioid addiction. But we are not there yet….

MAGA on the Moon

[These excerpts are from an editorial by the editors in the July 2019 issue of Scientific American.]

      Just in time for the half-century anniversary of the Apollo 11 lunar landing [see our special report, starting on page 50], the White House has declared the U.S. is going back to the moon within the next five years….

      …Such missions could support China's plans for a research station near the lunar south pole to study resources such as water ice, which can be used to manufacture rocket fuel, potable water and breathable air. The fear in the White House, it seems, is that China will lay claim to the lunar pole and prevent the U.S. and others from operating there. (This action is essentially prohibited under the United Nations Outer Space Treaty of 1967, to which both China and the U.S. are signatories.)

      There are good reasons to treat China as an adversary in space, but these moon plans are not among them. China’s use of antisatellite missiles and spacecraft does pose significant threats to strategic U.S. assets (while mirroring decades of similar efforts by the U.S. and Russia). Such concerns do not require framing NASA’s planned lunar return as part of a warlike conflict with China. As the crown jewel of the U.S. civil space program, the agency is ostensibly devoted to science and exploration instead of national defense. Although it emerged from the cold war-fueled space race of the late 1950s, NASA has more recently been defined by collaboration, not competition—most notably, in its partnerships with Russia and other nations on the International Space Station, which has served for decades to defuse geopolitical tensions….

      Sending NASA to the moon to beat China would not be the first time the administration has sought to extend President Donald Trump’s signature “Make America Great Again” mantra into outer space. Trump has previously vowed to aggressively develop space-based missile defense systems and to create a “Space Force” as a sixth branch of the U.S. military. Both proposals have been framed as part of an unfolding clash of civilizations in which the U.S. and its allies must act decisively in space to overcome China and other adversaries, such as Russia and North Korea.

      In the long term, however, this stance will most likely be self-defeating because it reinforces the impression, eagerly promulgated by China and Russia, that the biggest threat to the peaceful use of outer space is really the U.S. To ensure that our nation’s values are enshrined in space governance, the White House and Congress must together reduce needless barriers to engagement with China and other competitors, ideally through reinvigorated U.S. diplomacy within the framework of existing U.N. treaties and committees. Collaboration, not conflict, is the sustainable path forward to the moon.

Under a Watchful Eye

[These excerpts are from an article by Christopher Beam in the July/August 2019 issue of MIT Technology Review.]

      …Commercial satellite imagery is currently in a sweet spot: powerful enough to see a car, but not enough to tell the make and model; collected frequently enough for a farmer to keep tabs on crops’ health, but not so often that people could track the comings and goings of a neighbor. This anonymity is deliberate. US federal regulations limit images taken by commercial satellites to a resolution of 25 centimeters, or about the length of a man’s shoe. (Military spy satellites can capture far more granular images, although just how much more is classified.)

      Ever since 2014, when the National Oceanic and Atmospheric Administration (NOAA) relaxed the limit from 50 to 25 cm, that resolution has been fine enough to satisfy most customers. Investors can predict oil supply from the shadows cast inside oil storage tanks. Farmers can monitor flooding to protect their crops. Human rights organizations have tracked the flows of refugees from Myanmar and Syria.

      But satellite imagery is improving in a way that investors and businesses will inevitably want to exploit. The imaging company Planet Labs currently maintains 140 satellites, enough to pass over every place on Earth once a day. Maxar, formerly DigitalGlobe, which launched the first commercial Earth observation satellite in 1997, is building a constellation that will be able to revisit spots 15 times a day. BlackSky Global promises to revisit most major cities up to 70 times a day. That might not be enough to track an individual's every move, but it would show what times of day someone’s car is typically in the driveway, for instance.

      Some companies are even offering live video from space. As early as 2014, a Silicon Valley startup called SkyBox (later renamed Terra Bella and purchased by Google and then Planet) began touting HD video clips up to 90 seconds long. And a company called EarthNow says it will offer “continuous real-time” monitoring “with a delay as short as about one second,” though some think it is overstating its abilities….

      Some of the most radical developments in Earth observation involve not traditional photography but rather radar sensing and hyperspectral images, which capture electromagnetic wavelengths outside the visible spectrum. Clouds can hide the ground in visible light, but satellites can penetrate them using synthetic aperture radar, which emits a signal that bounces off the sensed object and back to the satellite. It can determine the height of an object down to a millimeter. NASA has used synthetic aperture radar since the 1970s, but the fact that the US approved it for commercial use only last year is testament to its power—and political sensitivity. (In 1978, military officials supposedly blocked the release of radar satellite images that revealed the location of American nuclear submarines.)

      Meanwhile, farmers can use hyperspectral sensing to tell where a crop is in its growth cycle, and geologists can use it to detect the texture of rock that might be favorable to excavation. But it could also be used, whether by military agencies or terrorists, to identify underground bunkers or nuclear materials.

      The resolution of commercially available imagery, too, is likely to improve further. NOAA’s 25-centimeter cap will come under pressure as competition from international satellite companies increases. And even if it doesn’t, there’s nothing to stop, say, a Chinese company from capturing and selling 10 cm images to American customers….

      …But burglars could also scan a city to determine which families are out of town most often and for how long.

      Satellite and analytics companies say they’re careful to anonymize their data, scrubbing it of identifying characteristics. But even if satellites aren’t recognizing faces, those images combined with other data streams—GPS, security cameras, social-media posts—could pose a threat to privacy….

Lost at Sea

[These excerpts are from an article by Elizabeth Preston in the 21 June 2019 issue of Science.]

      …When many people think of threats to the world’s fish, overfishing or vanishing reefs might leap to mind. Increasingly, however, scientists also worry about a subtler danger: how human activities might interfere with the senses fish use to perceive the world. Noise from ships and construction, murkier waters caused by pollution, and rising ocean acidification from the buildup of atmospheric carbon dioxide (CO2) are all possible culprits. In laboratories and in the wild, scientists study exactly how those factors might affect a fish’s ability to communicate, navigate, and survive.

      The studies face both logistical and conceptual challenges. Observing the behavior of fish in the vast sea is nearly impossible, but a laboratory aquarium is a far cry from their natural environment. And we can’t know exactly what it is like to see, smell, or hear as a fish. But by drawing on tools as elaborate as simulated underwater environments and as simple as bits of thread tethering baby fish to stream bottoms, researchers are gaining a better understanding of how fish use their senses—as well as the consequences of disrupting them.

      Driving the work is the concern that humans are creating a pervasive threat to the fish stocks that provide food and livelihoods for millions of people: a kind of sensory smog. Combating causes such as pollution and acidification is a staggering challenge, the scientists admit….

      Acidification also seems to cause other behavioral changes. In boldness tests—in which researchers approach fish with a blunt object to make them retreat—fish treated with acidic water come back out of hiding sooner….

      The problem isn’t that acidified water damages fish noses, Dixson says. Instead, the issue is apparently in the brain. Multiple mechanisms may be at work. One strong possibility is that acidic conditions interfere with brain cell receptors….

      What cod or other migrating fish are saying as they travel is not clear. They may be calling to stragglers, synchronizing migration or spawning, or establishing dominance. But clearly, the fish increasingly must compete with human noise sources such as recreational boats, commercial ships, pile driving, sonar, and deep-sea mining….

      Researchers don't know whether such interference has affected overall populations of cod and haddock. But scientists worry about subtle harms that could ultimately take a toll….

Teaching Ingenuity

[These excerpts are from an article by Sally G. Hoskins in the 14 June 2019 issue of Science.]

      After a fulfilling career as a college biology professor, I’m retiring. “What will you miss most?” a colleague asked. My answer was something that, 30 years ago, I would never have expected myself to say: “I will miss the creativity of teaching.” When I was a new faculty member, I considered teaching a necessary evil that took me away from the lab bench. I wanted to focus on research, guiding graduate students in what I hoped would be groundbreaking studies on nerve growth. I believed imagination lived not in the classroom, but in the laboratory—to be used for inventing techniques, designing experiments, and interpreting data. But when my life took an unexpected turn, I realized how wrong I had been….

      It was hard to drop a research program that—up to that point—had defined my career and fueled my passions. To stay close to the research world, I began to assign journal articles in my upper-level undergraduate course, anticipating lively discussions about the latest discoveries. This failed miserably. My students would skim the papers, but they’d rarely dive into them fully. Many wouldn’t even look at the figures, which I had expected them to focus on.

      A clue to the problem came when I took a look at the introductory biology textbooks they had studied in earlier classes. There were abundant illustrations of scientific facts—the array of bones in a bird’s wing, the structure of a bacterial flagellum—but hardly any of the figures looked like the data presented in scientific papers. Equally problematic, the books had vanishingly few illustrations of how key findings had been made, or of who did the work. Now it made sense: My students were comfortable memorizing facts, but they lacked insight into how those facts were generated and how the conclusions were drawn. The ingenuity of research—what I loved most about being a scientist—was lost on them.

      This epiphany changed the way I used the primary literature in my teaching; I started to go for depth over breadth. I spent multiple class sessions deconstructing a single paper with my students, analyzing each figure and table. I then asked, “If you had coauthored the paper we just studied, what would you do next?”

      Some balked. “I’m not creative,” they'd say. But I asked them to give it a try—adding a sense of urgency by announcing that, in a later class, we’d form “grant panels” that would rank their proposed studies and decide where to invest an imaginary pool of research funds.

      After taking part in the panels, the students changed their tunes. They were amazed by the variety of follow-up studies their classmates had thought up. They argued passionately about which ideas were superior, expressing surprise when other panels made different choices….

      Could I have conveyed more information per minute by talking at my students? Sure. But that's not how I wanted to teach. My students already knew how to learn facts. I wanted them to think deeply about the research process and to develop their own inventiveness. I wanted them to tap into their imaginations….

Much Ado about Method

[These excerpts are from an article by Christopher J. Phillips in the 14 June 2019 issue of Science.]

      Compared with reading, writing, and arithmetic, science is a relative newcomer to the primary and secondary school curriculum, emerging only in the late 19th century. Nevertheless, proponents of the subject have established it as central to what an educated person needs to know, not least because of the promise of good jobs in scientific fields.

      Even if nearly every school district in the United States now treats it as a required subject, there has been almost no consensus on what science classes should entail. Some have claimed that the subject should be taught as a single methodology, presenting the scientific method as a fixed number of discrete steps. Others emphasize it as a disparate collection of techniques—some inductive, others deductive, and divided up into specific disciplinary approaches. Teachers have disagreed on whether it is best taught through textbooks or laboratory experiments, as a set of conclusions and facts, or as a mode of inquiry. Most contemporary scientists would agree that there is no single method for doing science, but beyond that, there has not been much to agree on….

      There is also continuity in another sense. Rudolph comes down harshly on the fate of science education reform, arguing that every reform effort has essentially failed. In doing so, he echoes the National Science Foundation's inquiry in the 1970s, which concluded that teachers were expensive to retrain and consistently resistant to new pedagogical methods; that pupils continued to use textbooks slavishly, despite reformers’ efforts; that “methods” were seen as fixed steps rather than as guides to arriving at reliable conclusions, however they were presented; and that disciplines were repeatedly reduced to sets of facts and truths rather than modes of systematic inquiry.

      Rudolph’s book centers on a key shift that took place in the years after World War II, when the teaching of science was transformed from a practice that was thought of as being good for students to one that was thought of as being good for science itself. Science class was no longer a space for transforming hearts and minds but one for mass-producing researchers, laboratory workers, and theorists….

Innovating a Green Real Deal

[These excerpts are from an editorial by Ernest J. Moniz in the 14 June 2019 issue of Science.]

      A Green Real Deal framework should be structured around a set of key characteristics. It must be science-based and analytically sound. It must be pragmatic in providing maximum optionality and flexibility, enabling a broad coalition to form. It must address all sectors of the economy, particularly those difficult to decarbonize such as transportation, industry, and agriculture. It must have a regional focus, because low-carbon solutions will necessarily be location dependent. And it must advance social equity and workforce development, avoiding stranded assets and stranded workers wherever possible and offering remedies for those most affected by the energy transition. By any name, these characteristics provide criteria for measuring Green Real Deal proposals.

      What is needed to support this framework? A core component is innovation in technology, business models, and policy, and technology innovation is in many ways the key enabler among them. Without the development and deployment, at gigatonne scale, of breakthrough technologies that are affordable, the United States and the world will not reach net carbon neutrality. Breakthrough technology candidates could enable carbon direct removal, commodity-scale CO2 utilization, biological and geological CO2 storage at gigatonne scale, economic electricity storage at time scales from days to seasons, advanced nuclear fission and fusion energy, a hydrogen economy, full integration of information technology with the energy system, and advanced low-carbon fuels—and more.

      Although we cannot expect these breakthrough technologies to have been developed and deployed at scale in the near term, innovation is nevertheless also crucial for decadal goals that put us on the pathway to mid-century deep decarbonization. The impacts of enormous cost reductions in solar energy, wind power, batteries, and light-emitting diodes are evident, but electricity accounts for only 28% of total U.S. emissions. Major cost reductions, starting with an emphasis on energy efficiency, must be extended across all sectors of the economy. This depends on innovation that comes not only from research and development but also from the manufacturing and scaling experience gained from increasing deployment.

      The innovation agenda is also central to the social equity objective. Mitigating climate change through technology and policy is critical for underserved communities, those hit first and worst by extreme weather. Also, most clean energy technologies remain expensive relative to incumbent technologies, and lowering energy prices through innovation is progressive in its impact across the income distribution. Further, platform technologies that support innovation, such as broadband, must be universally available.

      …Advancing an innovation initiative through Congress now will be an important step toward a Green Real Deal and perhaps awaken the kind of climate change accountability needed more broadly in 2020.

Science is Awesome

[This excerpt is from chapter 18 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      Of course, there are people who say that they do not wish to know how their minds work. They believe that to understand Nature is to diminish her, since it removes the mystery and thus the natural awe that we feel when we are confronted with things that impress us but about which we know very little. They prefer the myths of the past even when they are in clear contradiction to the science of the present. I do not share this view. It seems to me that the modern picture of the universe—far, far older and bigger than our ancestors imagined, and full of marvelous and unexpected objects, such as rapidly rotating neutron stars—makes our earlier picture of an earth-centered world seem far too cozy and provincial. This new knowledge has not diminished our sense of awe but increased it immeasurably. The same is true of our detailed biological knowledge of the structure of plants and animals, and of our own bodies in particular. The psalmist said, “I am fearfully and wonderfully made,” but he had only a very indirect glimpse of the delicate and sophisticated nature of our molecular construction. The processes of evolution have produced wonders of which our civilized ancestors had no knowledge at all. The mechanism of DNA replication, while basically unbelievably simple and elegant, has been elaborated by evolution to one of great complexity and precision. One must be dull of soul indeed to read about it and not feel how marvelous it is. To say that our behavior is based on a vast, interacting assembly of neurons should not diminish our view of ourselves but enlarge it tremendously.

      It has been reported that a religious leader, shown a large drawing of a single neuron, exclaimed, “So that is what the brain is like!” But it is not the single neuron, wonderful though it is as an elaborate and well-organized piece of molecular machinery, that is built in our image. The true description of us is the complex, ever-changing pattern of interactions of billions of them, connected together in ways that, in their details, are unique to each one of us. The abbreviated and approximate shorthand that we employ every day to describe human behavior is a smudged caricature of our true selves. “What a piece of work is a man!” said Shakespeare. Had he been living today he might have given us the poetry we so sorely need to celebrate all these remarkable discoveries.

      It is unlikely that the Astonishing Hypothesis, if it turns out to be true, will be universally accepted unless it can be presented in such a way that it appeals to people’s imagination and satisfies their need for a coherent view of the world and themselves in terms they can easily understand. It is ironic that while science aims at exactly such a unified view, many people find much of our present scientific knowledge too inhuman and too difficult to understand.

      This is not surprising since most of it is in the fields of physics, chemistry, and their related disciplines, such as astronomy, all of which are somewhat removed from the daily lives of most people. In the future this may change. We can hope to understand more precisely the mechanisms of such mental activities as intuition, creativity, and aesthetic pleasure, and in so doing grasp them more clearly and, it is to be hoped, enjoy them more. Free Will (see the Postscript) may no longer be a mystery. That is why the words nothing but in our hypothesis can be misleading if understood in too naive a way. Our wonder and appreciation will come from our insights into the marvelous complexities of our brains, complexities we can only glimpse today.

      While we may not be able to deduce human values solely from scientific facts, it is idle to pretend that scientific knowledge (or unscientific knowledge, for that matter) has no influence on how we form our values. To construct a New System of the World we need both inspiration and imagination, but imagination building on flawed foundations will, in the long run, fail to satisfy. Dream as we may, reality knocks relentlessly at the door. Even if perceived reality is largely a construct of our brains, it has to chime with the real world or eventually we grow dissatisfied with it.

The Claustrum

[This excerpt is from chapter 17 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      There is one other brain region that must be mentioned briefly. This is the claustrum, a thin sheet of neurons lying next to the lower cortical layers near a part of the cortex called the “insula.” It appears to be a satellite of the cortex since its input comes mainly from the cortex and most of its output goes back to the cortex. It receives an input from many cortical areas and may send connections back to all of them. Some but not all the visual areas of the cortex project to one part of it and (in the cat) form a single retinotopic map there. There may be some overlap of this visual input with other claustral inputs. Very little work seems to have been done on the monkey claustrum in these last few years, so some of the above statements may be somewhat inaccurate. (For example, it is possible there are two visual maps there.)

      Its function is unknown. Why should all this information be brought together in one thin sheet? One would suspect that the claustrum has some kind of global function, but what that might be nobody knows. Even though it is rather a small region of the brain, it should not be totally overlooked.

      It may well be that there is a hierarchy of processing units, in the sense that some of them may exert some sort of global control over the others. Sets of neurons that project very widely over the cortex, such as the claustrum and the intralaminar nuclei of the thalamus, could well play such a role.

How the Brain Works

[This excerpt is from chapter 17 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      Let us now survey the ground we have covered so far. The main theme of the book is the Astonishing Hypothesis—that each of us is the behavior of a vast, interacting set of neurons. Christof Koch and I suggested that the best way to approach the problem of consciousness was to study visual awareness, both in man and his near relations. However, the way people see things is not straightforward. Seeing is both a constructive and a complicated process. Psychological tests suggest that it is highly parallel but with a serial, “attentional” mechanism on top of the parallel one. Psychologists have produced several theories that try to explain the general nature of the visual processes, but none is much concerned with how the neurons in our brains behave.

      The brain itself is made of neurons (plus various supporting cells). Each neuron, considered in molecular terms, is a complex object, often with a rather bizarre, irregular shape. Neurons are electrical signallers. They respond quickly to incoming electrical and chemical signals and dispatch fast electrochemical pulses down their axon, often over distances very many times greater than the diameter of their cell bodies. There are enormous numbers of them, of many distinct types, and they interact with each other in complicated ways.

      The brain is not a general-purpose machine, like most modern computers. Each part, when fully developed, does a somewhat different and specific job, but, in almost any response, many parts interact together. This general picture is supported by studies of humans whose brains have been damaged and by modern methods of scanning the human brain from outside the head.

      The visual system has far more distinct cortical areas than one might have expected. These areas are connected in a semihierarchical manner. A neuron in one of the lower cortical areas (i.e., those connected most closely to the eyes) is mainly interested in relatively simple aspects of only a small fragment of the visual scene, although even these neurons are influenced by the visual context of the fragment. The ones at the higher cortical levels respond best to more complex visual objects (such as faces or hands) and are not too fussy about exactly where these objects are in the visual scene. There appears to be no single cortical area whose activity corresponds to the global content of our visual awareness.

      To understand how the brain works, we have to develop theoretical models that describe how sets of neurons interact with each other. At the moment, the neurons in these models are oversimplified. Modern computers, while many times faster than the ones available a generation ago, can simulate only a relatively small number of these neurons and their interconnections. Nevertheless, these primitive models, of which there are several distinct types, often show surprising behavior, not unlike some of the behavior of the brain. They help provide us with new ways of thinking about how the brain might work.
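The kind of oversimplified model neuron Crick describes can be made concrete with a leaky integrate-and-fire unit, one of the standard simplified neuron models. This short Python sketch is illustrative only (it is not from the book, and all parameter values are arbitrary):

```python
def simulate_lif(n_steps, input_current, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    The membrane potential v leaks back toward v_rest with time
    constant tau while integrating the input current; whenever v
    crosses v_threshold the neuron "spikes" and v is reset.
    """
    v = v_rest
    spikes = []
    for t in range(n_steps):
        v += dt * ((v_rest - v) / tau + input_current)  # leaky integration
        if v >= v_threshold:
            spikes.append(t)  # record the spike time
            v = v_reset
    return spikes

# A subthreshold input never reaches threshold (v settles at
# input_current * tau = 0.6), so the neuron stays silent; a stronger
# input drives it to fire at regular intervals.
weak = simulate_lif(200, 0.03)
strong = simulate_lif(200, 0.10)
print(len(weak), len(strong))  # prints: 0 14
```

Wiring thousands of such units together, with each spike feeding current into other units, is exactly the sort of primitive network model the passage refers to, and even toy versions can show surprising collective behavior.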

Blindness Denial

[This excerpt is from chapter 14 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      There is one kind of visual defect that is so surprising that people have been known to doubt that it could possibly exist. This is known as Anton’s syndrome or “blindness denial.” The patient is clearly unable to see but is unaware of that fact. Asked to describe the doctor’s tie the patient may say that it is a blue tie with red spots when in fact the doctor is not wearing a tie at all. When pressed the patient may volunteer the information that the light in the room seems a little dim.

      At first, such a condition seems impossible. Alternatively the medical diagnosis might be hysteria, which is not very helpful. But consider the following possibility. I have often found that when talking to someone over the telephone whom I have not met I spontaneously form a crude visual image of his or her appearance. I held several long telephone conversations with one man whom I must have pictured in his fifties, rather thin, with rimless glasses. When, eventually, he came to see me I found he was in his thirties and was decidedly fat. It was only my surprise at his appearance that made me realize that I had previously imagined him otherwise.

      I suspect that a person suffering from blindness denial produces such images, probably because the brain damage is such that these images do not have to compete with the normal visual input from the eyes. In addition, due to damage elsewhere, they may have lost the critical faculty that would normally alert them that something was wrong. Whether this explanation is correct remains to be seen, but at least it makes the condition appear not totally incomprehensible.

Brain Damage

[This excerpt is from chapter 12 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      Over the years, neurologists have examined people whose brains have been damaged in various ways—by strokes, blows to the head, gunshot wounds, infections. Quite a number of these injuries alter some aspects of visual awareness while leaving other abilities, like speech or motor activities, more or less intact. The evidence suggests that there is a remarkable degree of functional specialization in the cortex, often in rather surprising ways.

      In many cases, the damage to the brain is not very “clean” or specific. A speeding bullet is no respecter of cortical areas. (The living cortex has the texture of a rather soft jelly. Bits of it can easily be removed by sucking on a pipette.) Typically, the damage is likely to involve several cortical areas. The most dramatic effects are produced by damage to corresponding places on the two sides of the head, although such cases are rather rare.

      Many neurologists have time only for a short examination of an injured patient—just long enough to be able to make an educated guess as to where the damage is likely to be. Lately, even this form of detective work has been largely superseded by brain scans. In the recent past it was the usual practice to report on a dozen or so similar cases together, since it was felt that to describe a single, isolated injury was unscientific. This unfortunately tended to lump together what were really somewhat distinct types of damage.

      Recent trends have corrected these practices to some extent. Particular attention is now often given to the small number of cases in which one particular aspect of perception or behavior is altered while most other aspects are spared. These patients are likely to have suffered more limited and therefore more specific damage. An effort is also made to localize the damage via brain scans. The living patient, if he is cooperative, can be given a whole battery of psychological and other tests in order to discover just what he can or cannot see or do. In some cases such tests have extended over a number of years. As ideas about visual processing have become more sophisticated, experiments to test these ideas have become more subtle and extensive. They can now be combined with brain scans that record the activity in the brain during these different tasks. These results on several patients can be used to compare and contrast patients with either similar damage, or similar symptoms, or both.

Fields of Vision

[This excerpt is from chapter 10 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      In most vertebrates the ganglion cells of the right eye project almost entirely to the optical tectum (roughly the equivalent of the superior colliculus in mammals) on the left side of the brain and vice versa. In primates, matters are more complicated. Each eye projects to both sides of the brain, but it does so in such a way that the left side of the brain receives input relating only to the right half of the visual field.

      Thus everything you see to the right of your center of gaze goes to the left LGN [Lateral Geniculate Nucleus], on its way to the left visual cortex…and also to the left superior colliculus. Of course, the two halves of the brain are normally connected to each other by several tracts of nerve fibers of which the largest is the corpus callosum. If this is cut (for medical reasons)…the left half of the brain of that person only sees the right side of the visual field and the right half only the left side. This can produce somewhat surprising results, almost as if there were now two persons in one head.

Research Dilemma

[This excerpt is from chapter 9 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      Most people do not object to an experimenter fixing electrodes to their scalp in order to study their brain waves. They do, however, object to having a portion of their skull removed, even temporarily, so that electrodes can be stuck directly into the living brain tissues. Even if a person volunteered to have his head cut open—because he wished to further scientific discovery—no doctor would consent to perform the operation, saying either that it was against his Hippocratic oath or, more realistically, that somebody would be sure to sue him for doing it. In our society, you can volunteer for the armed forces and run the risk of being wounded or killed, but you may not volunteer to undergo dangerous experiments merely to obtain scientific knowledge.

Neural Messages

[This excerpt is from chapter 8 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      One characteristic of a neuron is already fairly clear. A single neuron can fire at different rates and, to some extent, in different styles. Even so, in any period of time it can only send out limited information. Yet during that time the potential information coming into it, through its many synapses, is very large. In this process—going from its input to its output—there must be a loss of information, at least if we look at one neuron in isolation. This loss is compensated by the fact that each neuron responds to particular combinations of its inputs and sends out this new form of information, not to just one place, but to many places: The pattern of spikes that a neuron transmits down its axon is distributed in much the same form to many different synapses because a single axon has many branches. What one neuron receives at one synapse is the same signal that many other neurons have also received. All this shows, if nothing else, that we cannot just consider one neuron at a time. It is the combined effect of many neurons that we have to consider.

      It is important to realize that what one neuron tells another neuron is simply how much it is excited. These signals will not normally give the receiving neurons other information—for example, where the first neuron is located. The information in the signal will usually be related to certain activities in the outside world, such as that received by the photoreceptors of the eye.

      In perception, what the brain learns is usually about the outside world or about other parts of the body. This is why what we see appears to be located outside us, although the neurons that do the seeing are inside the head. To many people this is a very strange idea. The “world” is outside their body yet, in another sense (what they know of it), it is entirely within their head. This is also true of your body. What you know of it is not attached to your head. It is inside your head.

      Of course, if we open the skull and pick up the signals sent out by a particular neuron, we can often tell where that neuron is located, but the brain we are studying does not have this information. This explains why we do not normally know exactly where our perceptions and thoughts are taking place in our heads. There are no neurons whose firing symbolizes such information.

      Recall that Aristotle believed that such processes occurred in the heart because he could both locate the heart and observe its changes in behavior as a result of mental processes, such as falling in love. We cannot do this for the neurons in the human brain without the aid of special instruments….

Spikes in Axons and Dendrites

[This excerpt is from chapter 8 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      In the nineteenth century, it was assumed, quite erroneously, that the spike travelled at speeds too fast to be measured, possibly at the speed of light. When the speed was finally measured by Helmholtz in the middle of the last century, it was found to seldom be as fast as 300 feet per second. (This is about a third of the speed of sound in air.) Many people, including Helmholtz's father, were very surprised by this result. A more typical speed, for an unmyelinated axon, might be 5 feet per second. This may seem rather slow (slower, in fact, than a bicycle), but it is equivalent to 1.5 millimeters in about a millisecond.

      The far ends of the axon have to be supplied with molecules by the cell body because almost all the genes and most of the biochemical machinery needed for the synthesis of proteins are in the cell body, not in the axon. There is a systematic flow of molecules along the axon in both directions. It is quite remarkable to watch a (speeded-up) movie of this, taken with the aid of a special high-powered optical microscope, showing minute particles jogging past each other, some going down a large axon, some going up it. Some travel faster than others, but all these flows are far slower than the speed of the axonal spike. Naturally, special molecular apparatus is needed to direct and to power this transport.

      The classical view of a neuron held that the dendrites (the input cables) were “passive.” This implies that the change of potential decayed as it spread from one position on the dendrite to another, as some of the ions involved leaked through the membrane, just as Morse signals used to decay as they travelled great distances along transatlantic cables. For this reason, dendrites are typically shorter than axons, often being only a few hundred microns long. It is now suspected that some neurons have active processes in their dendrites (as axons have), but they are probably not exactly the same as those found in axons.

      The spike, then, travels down the axon till it reaches a synapse, the special junction between one neuron and another. Each neuron has many synapses on its dendrites and soma. A small neuron may have as few as five hundred; a large pyramidal cell may have as many as twenty thousand. An average number for neurons in the neocortex might be six thousand. It might be thought that since the spike is electrical, and the effect on the next neuron is mainly electrical, that the synapse is some form of electrical contact. This is sometimes the case, but more usually the transmission from one neuron to the next is more complicated than that.

Visual Deception

[This excerpt is from chapter 3 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      You are easily deceived by your visual system. For example, many people believe they see everything before them with equal clarity. As I look out my study window onto the garden, I have the impression that I see the rose bushes ahead of me just as clearly as the trees that are farther to my right. If I keep my eyes still for a moment I can easily observe that this is false. I can see fine details only when they are close to my center of gaze. Off to one side of it my vision becomes increasingly blurred. At the extreme periphery of vision I can identify objects only with difficulty. These limitations are not immediately apparent in everyday life because we move our eyes easily and frequently so that we have the illusion of seeing equally clearly everywhere.

      Hold a colored object, such as a blue pen or a red playing card, to one side of your head and sufficiently far back that you cannot see it at all. Gradually bring it forward so that it just enters the periphery of your field of view. BE SURE NOT TO MOVE YOUR EYES. If you jiggle the object, you will recognize that something is moving there before you can see what it is. You can tell if the pen (or the playing card) is horizontal or vertical before you can be sure what color it is. Even when you can see both the shape and the color you still will not see the fine details of the object until you bring it quite close to the center of gaze. My pen has a little label on it that says “extra fine point.” The print is small but with my glasses on I can easily read it if I hold it about a foot from my eyes. But if I put my finger next to it, and gaze not at the pen but at the center of my fingertip, I cannot read what is written on the pen, even though the writing is only a short distance from my center of gaze. My visual acuity, as it is called, falls off very rapidly away from the center of gaze.

Mammalian Development

[This excerpt is from chapter 3 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      The neurobiologist John Allman, who works at Caltech, has argued that mammals, in contrast to reptiles, have a special need to preserve heat because of their continual activity and their relatively high and constant body temperature. This is especially true for small mammals, since their surface area is so big compared to their volume. Hence fur, a uniquely mammalian attribute, and also, he suggests, the large development of the mammalian neocortex. He believes that this part of the brain made the early mammals smart enough to locate sufficient food to keep them warm.

      Although mammals are smart, they do not, as a class, have especially good vision, probably because they evolved from small nocturnal creatures for whom vision was less important than smell and hearing. The exceptions are the primates (monkeys, apes, and man), most of whom have evolved excellent vision, although, as in man, their sense of smell may be poor.

      When the dinosaurs were all killed off, these early mammals evolved rapidly as they took over the ecological niches left empty by the dinosaurs. The smarter brains of the mammals helped them to do this very effectively, and eventually led to the emergence of man, the smartest mammal of them all.

Human Soul

[This excerpt is from chapter 1 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      Many educated people, especially in the Western world, also share the belief that the soul is a metaphor and that there is no personal life either before conception or after death. They may call themselves atheists, agnostics, humanists, or just lapsed believers, but they all deny the major claims of the traditional religions. Yet this does not mean that they normally think of themselves in a radically different way. The old habits of thought die hard. A man may, in religious terms, be an unbeliever but psychologically he may continue to think of himself in much the same way as a believer does, at least for everyday matters.

Our Nerve Cells

[This excerpt is from chapter 1 of The Astonishing Hypothesis, written by Francis Crick in 1994.]

      The Astonishing Hypothesis is that “You,” your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice might have phrased it: “You’re nothing but a pack of neurons.” This hypothesis is so alien to the ideas of most people alive today that it can truly be called astonishing.

How Fast Will the Antarctic Ice Sheet Retreat?

[These excerpts are from an article by Eric J. Steig in the 7 June 2019 issue of Science.]

      There is no longer any serious scientific doubt that the retreat of glaciers in Antarctica will eventually cause several meters of sea level rise, unless the emission of anthropogenic greenhouse gas is reduced substantially. By one estimate, each kilogram of carbon emitted as CO2 will ultimately result in the melting of more than two metric tons of Antarctic ice. But how quickly this will happen remains highly uncertain, with the implications for society ranging from relatively moderate to possibly catastrophic….They show that the interaction of the ice sheet with the solid Earth—an important aspect of the problem not adequately captured by previous work—may slow down retreat.

      Antarctica is losing ice mainly through the thinning, calving, and retreat of ice streams, the fast-flowing glaciers that drain the larger, slower-moving ice sheet into the ocean. A handful of ice streams in the Amundsen Sea region of West Antarctica…contribute nearly 10% of the current global sea level rise of 3.4 mm/year. The ice discharge from Thwaites Glacier…, the largest glacier in this region, has more than doubled in recent years. The ongoing retreat of Amundsen Sea glaciers was initiated by the flow of warm, deep water onto the continental shelf, probably driven by anomalous wind conditions in the mid-20th century. Warm ocean water melts the glaciers from below, causing acceleration and grounding line retreat (the grounding line is the point where the glacier goes afloat to become an ice shelf).

      …The farther the glacier retreats, the deeper the water in which it terminates, and the more readily it melts and calves. Once retreat is initiated, atmosphere and ocean conditions become less important and the nonlinear dynamics of the glacier dominates its behavior. This makes the future behavior of such glaciers inherently difficult to predict….

Women of Color Face Double Dose of Bias

[These excerpts are from an article by Katie Langin in the 7 June 2019 issue of Science.]

      Bradley Miller is more likely to be hired than Jose Rodriguez. Mang Wei (David) is more competent than Jamal Banks. And both Miller and Wei are more competent and hirable than Maria Rodriguez or Shanice Banks.

      These postdoc job candidates are fictional. But the differences in how they’re viewed based on name alone—despite identical CVs—by a sample of professors are real. That’s according to a recent study that unearths evidence of racial bias in biology and a combination of gender and racial bias in physics, highlighting both the pervasive nature of various biases in science as well as important disciplinary differences.

      …In 2012, Moss-Racusin published a similar study, which found that biology, chemistry, and physics faculty members who reviewed applications for a lab manager position favored applicants named John over otherwise identical applicants named Jennifer. The new study…is an important advance because it manipulates race as well as gender….

      …she and her colleagues decided to focus on the postdoc period because it is “a critical part of the pipeline,” and also “a part of the pipeline where there are almost no checks and balances” to prevent bias. Principal investigators typically review postdoc applications in isolation….

      Faculty members in biology viewed male and female applicants to be similarly competent and likely to be hired….But in physics, it was a different story: Faculty members preferred male applicants, rating them one point higher on competence—on a nine-point scale—and two points higher on hirability.

      Faculty members in both disciplines exhibited racial bias. In physics, Asian and white applicants were given higher competence and hirability ratings than black and Latino applicants. In biology Asian and white applicants were viewed as more competent than black applicants. Asian applicants were also viewed as more hirable than black and Latino applicants. (Ratings for Latino competence and white hirability didn’t statistically differ from other groups.)

      In physics, black and Latina women were doubly disadvantaged, rated three points lower in hirability than white and Asian men….

      …In physics, the number of women has “stayed stubbornly low” despite a surge of female scientists in other disciplines, and a lack of awareness of bias may be one factor….

A New Narrative for the Ocean

[These excerpts are from an editorial by Jane Lubchenco and Steven D. Gaines in the 7 June 2019 issue of Science.]

      …For most of human history, people considered the ocean so immense, bountiful, and resilient that it was impossible to deplete or disrupt it. The overarching narrative was, “The ocean is so vast, it is simply too big to fail.” This mindset persists today, bringing even more intense, unsustainable uses of the ocean that reflect ignorance; the allure of new economic opportunity; or the need for food, resources, and development. However, the folly of this too-big-to-fail narrative has become glaringly obvious through overpowering scientific evidence of depletion, disruption, and pollution. Climate change, ocean acidification, habitat destruction, overfishing, and nutrient, plastic, and toxic pollution are insidious. These changes threaten the most vulnerable people; the economic prosperity, quality of life, and opportunities for everyone; and the well-being of the ocean’s amazing life forms. Problems appear too complex, vested interests too powerful, and system inertia too great, especially as demands on the ocean escalate. A new narrative has arisen: “The ocean is massively and fatally depleted and disrupted. The ocean is simply too big to fix.” The result? Depression and lack of engagement and motivation.

      Yet despite the undeniable challenges, hints of a new ocean mindset are emerging. Many powerful solutions already exist and could be scaled up. Opportunities abound to develop new solutions that are based on efficiency, incentives, technology, biotechnology, and regenerative and holistic approaches. Moreover, because the ocean is central to the functioning of the planet and human well-being, many ocean solutions could bring substantial co-benefits to address poverty, hunger, economic development, inequity, peace, security, coastal resilience, and climate mitigation and adaptation. For example, reforming fisheries to “fish smarter, not harder” can help restore ocean ecosystems; reduce impacts of climate change; and enhance food security, job creation, and poverty alleviation. Combining remote sensing, artificial intelligence, big data, machine learning, transparency, and new policies can minimize illegal fishing. Enabling sustainable aquaculture—especially of low trophic species—could contribute substantially to food security, with a much smaller environmental footprint than that of terrestrial animal production. Creating fully and highly protected, well-designed marine protected areas will safeguard biodiversity, replenish the ocean, and help mitigate and adapt to climate change and ocean acidification. Incorporating ocean actions into the climate agenda is essential to reducing greenhouse gas emissions and adapting to climate disruption. Expanding the range of effective solutions and scaling them globally requires scientists to engage actively with communities, fishers, businesses, nongovernmental organizations, managers, and policy-makers so that solutions are complementary, integrated, effective, and rapid.

      A new narrative does not automatically change the status quo but, if widely adopted, can reset expectations and liberate ingenuity. Yes, the challenges are fierce, and the future is unpredictable. But here is an opportunity to replicate, accelerate, and escalate existing successes while driving innovative and transformative changes….Now is the moment for more scientists to pivot from simply documenting the tragedy underway to also creating scalable solutions.

      It is time for a new ocean narrative that says, “The ocean is so central to our future. It’s too important to neglect.” In creating a new solution space for the ocean, we can also address broader global problems. In healing the ocean, we can heal ourselves….The ocean is not too big to fail, nor is it too big to fix. It is too big to ignore.

The Rise of Animals

[These excerpts are from an article by Rachel A. Wood in the June 2019 issue of Scientific American.]

      …For decades scientists thought that complex animals—multicellular organisms with differentiated tissue types— originated in the Cambrian explosion. To be sure, a riot of novel forms burst into existence during this time, including the ancestors of many of today’s major animal groups. But recent discoveries from Siberia, Namibia and elsewhere show that complex animals actually got their start millions of years before the Cambrian explosion, during the last chapter of the Precambrian, known as the Ediacaran. Among these finds are the oldest known creatures with external and internal skeletons composed of mineralized tissue, a pivotal evolutionary innovation seen in many modern-day animals.

      The presence of these armored creatures so far back in time—550 million years ago—indicates that the ecological and environmental pressures thought to have driven the Cambrian explosion were in fact at work long before then. Figuring out how these factors shaped the evolution of the earliest complex animals in the Ediacaran is key to understanding the astonishing burst of diversification that followed in the Cambrian. The Cambrian fossil record has been the subject of intense study for more than 150 years. Thus, the broad global patterns of what Cambrian fossils appeared when—and where—are relatively well established: similar fossils turned up on many continents at around the same time, and they followed the same succession of evolutionary changes more or less synchronously. But only now, with the discoveries of the older Ediacaran fossils, are we starting to see the roots of the Cambrian explosion.

      Gratifyingly, we are also beginning to puzzle out why it happened when it did, thanks in part to the development of new geochemical techniques that have revolutionized our understanding of the changing chemistry of the oceans in the Ediacaran-Cambrian world. Insights from the emerging fossil and geochemical records have just recently been integrated to show how the planet’s biosphere, geosphere, hydrosphere and atmosphere—together known as the Earth system—may have operated during this interval. But already we can paint a striking picture of how the seafloor became successively populated by ever more complex creatures tens of millions of years before the Cambrian explosion, setting the stage for the rise of animal life as we know it….

      The oldest candidate animal fossils, which hail from a sequence of rocks in southern China called the Lantian Formation and are possibly as old as 635 million years, are similarly contested….

      The oldest animal remains that almost everyone can agree on are fossils from Newfoundland that date to about 571 million years ago, shortly after the last regional “Snowball Earth” glaciation that encased much of the planet in thick ice. These earliest known representatives of the Ediacaran biota were dominated by soft-bodied creatures up to a meter in height or width. Some took the form of large, featherlike fronds with vertical stalks that rooted them to the seafloor; others sprawled across the ocean bottom, their flat bodies exhibiting a fractal architecture, with branching units that showed the same patterns at all scales. All these body plans maximize surface area, suggesting that these animals absorbed nutrients directly from the surrounding water.

      This modest variety of fauna prevailed for more than 10 million years. But then the pace of animal evolution began to accelerate. The fossil record indicates that after around 560 million years ago, the Ediacaran biota diversified to include mobile forms that inhabited shallow seas. Some of the fossils preserve scratch marks that suggest the animals were eating algal mats by grazing. Others may have dragged themselves across the algae, absorbing nutrients from the underside of their bodies. The first simple burrows also appear at around this time, evidence that animals had started to move and disturb the sediment of the seafloor.

      Fast-forward to around 550 million years ago, and the oldest fossils preserving external and internal skeletons suddenly appear in limestone rocks (which consist mainly of calcium carbonate). These fossils are already diverse in size and form, and they show up in such far-flung locales as Siberia, Brazil and Namibia. The presence of skeletons in so many unrelated animal groups around the world at this point in time is testament to a major driving evolutionary force operating on a global scale. We do not know for sure what this force was. But we have an idea. Making a skeleton is energetically expensive, so for an animal to undertake such an endeavor the benefit must outweigh the cost. Animals may produce a skeleton for many reasons, but by far the most common is the need for protection from predators. Although there is no fossil evidence of predators from this time period, it stands to reason that the appearance of skeletons might reflect the first widespread occurrence of animals that ate other animals.

      …has shown that Cloudina had a variety of growth styles. It could attach to mats made of microbes that bound the soft sediment of the seafloor, or it could anchor itself to layered mounds of cyanobacteria. Most important of all, Cloudina individuals could actually cement themselves to one another to form a reef. This finding has established Cloudina as one of the oldest reef-building animals, pushing back the record of this way of life by some 20 million years.

      Whether Cloudina was related to modern reef builders such as corals remains uncertain. But we do know that like reef-building corals, it lived in proximity to a number of other animals. Hints of this intimate association have come from other skeletal fossils found in rocks of the same age as those that contain Cloudina fossils….

      These observations are significant because reef building represents an important ecological innovation. By growing closely together and even cementing to one another, individuals can become mechanically stronger, rise above the seafloor away from competitors, enhance feeding efficiency and gain protection from predators. Like the earliest skeletons, then, the appearance of reefs in the Ediacaran fossil record may signal rising, complex ecological pressures. The Cambrian explosion, and indeed an arms race between predator and prey, had already begun.

      …Simple, immobile creatures, such as sponges, may need less oxygen than mobile animals, and they certainly require far less of the stuff than active, fast-swimming predators do. We have borne this variation in mind in the course of our investigations.

      Fortunately for us, many new geochemical methods for estimating how much oxygen existed in these ancient seas have been developed in recent years. One especially powerful technique—Fe speciation—harnesses the characteristics of the various compounds of iron, which behave differently depending on whether oxygen is present or not. This method allows us to see at a local scale where—and when—there was enough oxygen to support complex life. Studies carried out using this approach have led to a broad consensus: dissolved oxygen in the oceans probably reached a threshold or series of thresholds during the Ediacaran that allowed animals to diversify by meeting their increasing metabolic demands as they became more mobile and active….

      Periods of increased anoxia on the seafloor coincide with some well-known mass extinctions, such as the one that punctuated the Permian period 252 million years ago, killing off more than 90 percent of all marine species. But several major diversifications—including those in the Ediacaran-Cambrian, the Ordovician 100 million years later and the mid-late Triassic about 247 million years ago—began during long intervals of dynamic shallow marine anoxia….

      It is far easier for animals to form a skeleton of limestone—the material that makes up the skeletons and shells of many modern marine creatures—when seawater oxygen levels exceed 10 micromoles per liter. Perhaps soft-bodied animals were only able to evolve these calcium carbonate skeletons once oxygen levels reached such a threshold, allowing formerly isolated oases to expand, connect and achieve stability on a global scale….

Air Inequality

[These excerpts are from an article by Andrea Thompson in the June 2019 issue of Scientific American.]

      Harlem and the South Bronx have some of the highest asthma rates in New York City. And these predominantly black and Hispanic neighborhoods—studded with smokestacks and crisscrossed by gridlocked highways—are emblematic of a large body of research showing clear racial disparities in exposure to air pollution.

      …even though black and Hispanic people in the U.S. are exposed to more air pollution than white people, these groups consume less from the industries generating much of that pollution….

      The study also found that these disparities persist despite substantial overall reductions in air pollution in recent decades….

      The researchers focused on fine particulate matter with a diameter of 2.5 microns or less (PM2.5), generated by construction, fires and the combustion of fossil fuels. These particles can contain hundreds of different chemicals and can penetrate deep into the lungs, contributing to heart and lung disease. As part of its study, the team estimated that 102,000 people die prematurely every year from PM2.5 emissions from human-made sources (as opposed to wildfires or other natural sources). That number is nearly double the number of people who die annually from car crashes and murders combined….

      …The results from this study, he adds, emphasize the need to undo the legacy of previous policies and decisions that placed polluting infrastructure disproportionately in low-income and minority communities….

      Scientists, legislators and communities will need to jointly work out which policies and regulations can tackle overall pollution while reducing these inequities….


[These excerpts are from an article by Steve Mirsky in the June 2019 issue of Scientific American.]

      …For anyone who vowed that their calculus textbook would be the last thing they’d ever read on the subject, reconsider: “I’ve written Infinite Powers in an attempt to make the greatest ideas and stories of calculus accessible to everyone,” Strogatz notes in the introduction. Then, throughout the book, he gently explains the basics—and gives a historical context that makes for a fascinating read even if you skip the math parts completely. Like you may have done with your textbook.

      The history includes the fact that the word “calculus” comes from the Latin root calx, meaning a “small stone.” “A reminder of a time long ago,” Strogatz writes, “when people used pebbles for counting and thus for calculations.... Doctors use the same word for gallstones, kidney stones, and bladder stones.” In my younger days, I studied derivatives and integrals, but I don’t recall learning until I read Infinite Powers that both of the two 17th-century geniuses usually credited with the invention of calculus, Isaac Newton and Gottfried Wilhelm Leibniz, “in a cruel irony ... died in excruciating pain while suffering from calculi—a bladder stone for Newton, a kidney stone for Leibniz.” At least it was just hyperbole if you ever complained in school that calculus was killing you….

      What evolved over the millennia became the math that gave us modernity. “Without calculus, we wouldn’t have cell phones, computers, or microwave ovens,” Strogatz writes. “We wouldn't have radio. Or television. Or ultrasound for expectant mothers, or GPS for lost travelers. We wouldn’t have split the atom, unraveled the human genome, or put astronauts on the moon….”

Modern Math Meets Fairy-Tale Physics

[These excerpts are from an article by Andrew Robinson in the 31 May 2019 issue of Science.]

      When Charles Darwin published his theory of evolution by natural selection, he used only words—no mathematics. The same was true of Alfred Wegener's first description of his theory of continental drift. Even Dmitri Mendeleev’s first publication of the periodic table used no mathematics, merely a simple numbering system for the chemical elements.

      In physics, by contrast, sophisticated mathematics played an integral part in the publication of Isaac Newton’s laws of motion; James Clerk Maxwell’s analysis of electromagnetism; Albert Einstein’s theory of relativity; and the quantum mechanics of Paul Dirac, Werner Heisenberg, and Erwin Schrodinger….

      Einstein changed his mind about the ranking of mathematics in physics over three or four decades, as Farmelo reveals in an excellent historical section covering Newton to the 1970s. While developing his 1905 special theory of relativity and his quantum theory, Einstein regarded mathematics as a tool, not as a source of ideas. But later, while struggling with his general theory of relativity, mathematical concepts such as Bernhard Riemann’s geometry of curved space provided Einstein with crucial new ideas. Even so, his loyalty remained with physics….

      However, from the late 1920s until his death, while Einstein was searching for a unified field theory of gravity and electromagnetism, mathematics alone struck him as capable of explaining the structure of the universe. In 1933, he provocatively told an Oxford audience in a lecture, “On the method of theoretical physics”—note his use of “the”—“It is my conviction that pure mathematical construction enables us to discover the concepts and the laws connecting them which give us the key to the understanding of the phenomena of nature.”

      During this period, Einstein was generally criticized for his lack of interest in new experimental data from nuclear physics, his lack of sympathy for quantum mechanics, and his obsession with mathematics. Farmelo sympathizes with Einstein’s controversial 1933 claim, while adding that pure mathematical construction must be consistent with relativity and quantum mechanics….

Scientific Leaders Explore Pathways to Climate Solutions

[These excerpts are from an article by Anne Q. Hoy in the 31 May 2019 issue of Science.]

      Climate change is altering ocean ecosystems and impacting Earth’s land surfaces. Yet strategies to address such challenges largely focus on land activities when broader responses offer more powerful solutions….

      Drawing from an array of scientific disciplines, Lubchenco and Moniz each pointed to the effectiveness of multidisciplinary scientific approaches and collaborative responses to climate change, including those that reach into business and policy arenas….

      In building a verbal bridge between transformations facing the world’s land surfaces and those confronting the oceans, Lubchenco said that success dealing with climate change can only be accomplished by taking on the risks to land and the oceans….

      Already, the oceans are warming, growing more acidic, holding less oxygen, rising due to higher water temperatures, generating more storms, producing fewer fish stocks, and becoming more polluted from plastics—factors depleting and disrupting the oceans….

      At risk are the beneficial powers of the oceans: the production of more than half the planet's oxygen, absorption of more than 90% of excess heat entrapped by greenhouse gas emissions, and absorption of almost half the carbon dioxide emitted by human activities. Such repercussions…endanger coastal habitats and ocean creatures from wetlands to whales.

      Restoring rapidly disappearing mangrove forests, tidal marshes, and seagrass meadows and protecting coastal areas—which touch 78% of the world's countries—can play an active role in eliminating greenhouse gas emissions….Over a decade and a half beginning in 2000, for instance, the deforestation of mangrove ecosystems, among the best carbon sequestration systems, resulted in the release of 122 million tons of carbon into the atmosphere.

      The effectiveness of ocean “biological pumps” that move carbon around and store it on the ocean floor, due to such things as the submersion of whale carcasses, is weakening. Restoring whale populations could rebuild the ability of oceans to absorb some 160,000 tons of carbon each year….

      Revamping global fisheries responsible for releasing 170 million tons of carbon dioxide into the atmosphere in 2011 and overhauling the shipping industry responsible for 2.6% of the world's carbon emissions in 2016 would deliver significant benefits. Adapting fisheries would expand seafood yields by 40%, increase ocean fish stocks by 5%, and boost industry economic gains by 30%, benefits that would contribute to climate change responses….

      With a focus on energy innovations, Moniz called for adoption of low-carbon electricity generation systems, including advanced nuclear technologies, implementation of energy efficiency practices across economic sectors, and transformative carbon capture and storage techniques. Such approaches…need to be part of a viable climate change response plan.

      Moniz proposes solutions that recognize the diversity of regional energy systems and implement energy efficiency across sectors of transportation, industry, electricity, buildings, and agriculture….

A Call to Climate Action

[These excerpts are from an editorial by Jonathan T. Overpeck and Cecilia Conde in the 31 May 2019 issue of Science.]

      The science is clear, students are striking, and publics around the globe are demanding a new level of leadership to tackle the climate crisis before it is too late. Climate extremes are inflicting serious economic losses on nations, and climate-driven issues such as sea-level rise, regional aridification, food shortages, disease spread, and massive biodiversity loss only promise ever-worsening costs. Progress has been too slow since 195 countries signed the 2015 Paris Agreement committing to hold the increase in the global average temperature to well below 2°C above preindustrial levels. To prevent planetary climate disaster, we must all work to speed up bold initiatives that ensure a rapid exit from the era of fossil fuels, and drive carbon dioxide and other greenhouse gas emissions to the atmosphere down to zero in a manner that benefits everyone on the planet, not just a few.

      Much attention has been focused on promoting the transition to low-carbon energy because of the role that fossil fuel combustion has played in fouling the air and disrupting climate. Moreover, the technology needed to electrify the planet using renewable energy is already, or close to being, cheaper than fossil fuel or nuclear energy, so the existing fossil fuel power plants and fossil-fueled transportation can be phased out completely within two decades. Investment in new fossil fuel infrastructure must cease as existing natural gas and nuclear plants serve only as temporary bridges to a cleaner-energy world. At the same time, every economic sector must innovate to become more energy efficient and decarbonize. With continued innovation, the effective management of marine, coastal, agricultural, forest, and other carbon-sequestering systems can help offset those greenhouse emissions that cannot be eliminated quickly.

      The circle of solutions to address the climate crisis must quickly widen and include effective ways to soften the blows of climate change that are already inevitable. Solutions must strive to enable communities, businesses, societies, and natural systems to become resilient and adapt to the changing climate. The rapid expansion of adaptation strategies around the globe requires greater integration of academic research knowledge with the insight gained from real-world practice, and this means placing greater priority on partnership between academics and nonacademics….The top priority must remain the elimination of the greenhouse gas emissions that are driving climate change, and greater emphasis must be placed on positive synergies between mitigation and adaptation actions, especially those that maximize the protection of biodiversity and soils.

      The climate crisis requires societal transformation of a scale and rapidity that has rarely been achieved. Indeed, the last time such a change took place was sparked by global economic depression and World War II. What enabled action then was a perceived existential threat and broad support in society. Today, we are faced with such a threat, but widening wealth disparities and special interests impede the needed change. The solution to the climate crisis thus requires a strong commitment to equity and justice, to indigenous peoples and future generations, and to a global transformation that vastly increases the number of those who benefit while dramatically reducing the number of those who do not. This is true at national scales, and also at the planetary scale. Only by working together across divides, only by working to empower others, and only by making it a top priority now can the community of nations avoid catastrophic climate change while creating a more sustainable and just 21st century….

What to Do about Plastic Pollution

[These excerpts are from an editorial by the editors in the June 2019 issue of Scientific American.]

      From the bags that find their way to the ocean and into the stomachs of whales to the straws that hurt turtles to the microscopic shards and synthetic fibers that have been found in the remote Arctic, plastic permeates the planet.

      The problem of plastic pollution has gotten dramatically worse as production has ramped up from two million metric tons a year in 1950 to more than 300 million metric tons a year today without much thought to what happens once it is discarded. The thousands of polymers that fall under the catchall label “plastics” never disappear. They merely degrade into smaller pieces called microplastic. A 2017 study in Science Advances estimated that of all the plastic ever produced, 90 percent is still around, mainly in landfills or out in the environment (the rest has been incinerated). Bans on single-use plastic such as bags and straws have become a popular policy around the world to rein in plastic use. But although some of these rules have reduced waste in places, including Ireland and California, they do not directly address production and can send users to alternatives that are not much friendlier to the environment.

      Researchers have learned enough about the flow of plastic waste to know it poses a widespread environmental problem. Plastic causes physical harm to animals and could combine with other threats to endanger vulnerable species. There is also concern about humans inhaling and ingesting microplastic. We must do a better job of stanching the flood. Doing that means tackling two broad goals: considerably reducing the amount of plastic we produce and improving the recycling and reuse of what we make.

      The U.S. must be a bigger part of these solutions. Blame is too often laid solely at the feet of rapidly developing Asian countries that lack robust waste-management systems, and we forget the role that the U.S. plays not only in producing plastic but by exporting millions of tons of the waste to Asia. China’s decision to stop accepting imports of much recyclable waste has forced a reckoning in the U.S., with the local authorities responsible for an overwhelmed recycling system turning to landfills and incinerators. Those options can have other environmental impacts and perpetuate the creation of virgin plastic from fossil fuels, instead of reusing and recycling existing plastic. Only 9 percent of plastic in the U.S. is now recycled, according to the Environmental Protection Agency.

      Federal and state governments should step up to help streamline and shore up the nation’s disjointed recycling system. This could be done, for example, by standardizing what can be recycled and putting limits on additives such as coloring, which is expensive to remove and can make plastic less valuable to a recycler. Governments could also fund recycling and composting infrastructure in communities that otherwise might not be able to afford it. Such investments could spur American innovation in the area, for example, setting the stage for wider use of compostable plastic, which can currently only be properly broken down in industrial facilities.

      Many researchers also say plastic product manufacturers need to be pushed beyond their present voluntary commitments to reduce plastic waste with incentives that will make them bear more of the cost of that waste….

      We need comprehensive solutions, not just Band-Aids that cover up the symptoms but ignore the roots of the plastic problem.


[These excerpts are from “Half-Earth: Our Planet’s Fight for Life” which is written by Edward O. Wilson.]

      …Our population is too large for safety and comfort. Fresh water is growing short, the atmosphere and the seas are increasingly polluted as a result of what has transpired on the land. The climate is changing in ways unfavorable to life, except for microbes, jellyfish, and fungi. For many species it is already fatal….

      We were not inserted as ready-made invasives into an edenic world. Nor were we intended by providence to rule that world. The biosphere does not belong to us; we belong to it. The organisms that surround us in such beautiful profusion are the product of 3.8 billion years of evolution by natural selection. We are one of its present-day products, having arrived as a fortunate species of Old World primates. And it happened only a geological eyeblink ago. Our physiology and our minds are adapted for life in the biosphere, which we have only begun to understand. We are now able to protect the rest of life, but instead we remain recklessly prone to destroy and replace a large part of it.

      …As a consequence of human activity, it is believed that the current rate of extinction overall is between one hundred and one thousand times higher than it was originally. In 2015 an international team of researchers finished a careful analysis of the prehuman rates and came up with a diversification rate ten times lower in genera (groups of closely related species). The data, when translated to species extinctions, suggests species extinction rates at the present time are closer to one thousand times higher than that before the spread of humanity. The estimate is further consistent with an independent study that detected a similar downward shift in the rate of species formation in prehumans, as well as in their closest relatives among the great apes.…

      Reproduction is obviously necessary, but it is a bad idea, as Pope Francis I has pointed out, to continue multiplying like rabbits. Demographic projections suggest that the human population will rise to about eleven billion or slightly more before the end of the century, thereafter peak, and begin to subside. Unfortunately for the sustainability of the biosphere, per-capita consumption is also destined to rise, and perhaps even more steeply than human numbers. Unless the right technology is brought to bear that greatly improves efficiency and productivity per unit area, there will be a continued increase in humanity’s ecological footprint, defined as the area of Earth’s surface each person on average needs. The footprint is not just local area, but space scattered across land and sea, in pieces for habitation, food, transportation, governance, and all other services down to and including recreation.

      …Clearing a forest for agriculture reduces habitat, diminishes carbon capture, and introduces pollutants that are carried downstream to degrade otherwise pure aquatic habitats en route. With the disappearance of any native predator or herbivore species, the remainder of the ecosystem is altered, sometimes catastrophically. The same is true of the addition of an invasive species….

      Humanity has delivered a blow to the planet not even remotely approached by that of any other single species. The full scale of the assault, in common parlance of the Anthropocene called “growth and development,” began at the start of the Industrial Revolution. It was foreordained by the extermination of most of the mammal species in the world more than ten kilograms in weight, collectively called the megafauna, a process begun by Paleolithic hunter-gatherers and thereafter increased in stages enabled by technological innovation. The decline of biodiversity has been more like the gradual dimming of light than the flick of a switch. As the human population multiplied and spread around the world, it almost always strained local resources to their local limits. Doubling in numbers, then doubling again, and yet again, people fell upon the planet like a hostile race of aliens. The process was pure Darwinian, obedient to the gods of unlimited growth and reproduction. While the creative arts yielded new forms of beauty by human standards, the overall process has not been pretty by anybody else's standards—except bacteria, fungi, and vultures….

      The clear lesson of biodiversity research is that the diversity of species, arrayed in countless natural ecosystems on the land and in the sea, is under threat. Those who have studied the database most carefully agree that human activity, which has raised the species extinction rate a thousand times over its prehuman level, threatens to extinguish or bring to the brink of extinction half of the species still surviving into this century….

      The global conservation movement has temporarily mitigated but hardly stopped the ongoing extinction of species. The rate of loss is instead accelerating. If biodiversity is to be returned to the baseline level of extinction that existed before the spread of humanity, and thus saved for future generations, the conservation effort must be raised to a new level. The only solution to the “Sixth Extinction” is to increase the area of inviolable natural reserves to half the surface of the Earth or greater. This expansion is favored by unplanned consequences of ongoing human population growth and movement and evolution of the economy now driven by the digital revolution. But it also requires a fundamental shift in moral reasoning concerning our relation to the living environment….

      Finally, during the Anthropocene, Earth’s shield of biodiversity is being shattered and the pieces are being thrown away. In its place is being inserted only the promise that all can be solved by human ingenuity. Some hope we can take over the controls, monitor the sensors, and push the right buttons to run Earth the way we choose. In response, all the rest of us should be asking: Can the planet be run as a true spaceship by one intelligent species? Surely we would be foolish to take such a large and dangerous gamble. There is nothing our scientists and political leaders can do to replace the still-unimaginable complex of niches and the interactions of the millions of species that fill them. If we try, as we seem determined to do, and then even if we succeed to some extent, remember we won't be able to go back. The result will be irreversible. We have only one planet and we are allowed only one such experiment. Why make a world-threatening and unnecessary gamble if a safe option is open?

      …In every country where women have gained some degree of social and financial independence, their average fertility has dropped by a corresponding amount through individual personal choice. In Europe and among native-born Americans, it has already reached and continued to hold below the zero-growth threshold of 2.1 children per woman surviving to maturity. Given even a modest amount of personal freedom and an expectation of future security, women choose the option of what ecologists call K-selection, that favoring a small number of healthy well-prepared offspring, as opposed to r-selection, hedging the bet with a larger number of poorly prepared offspring. There won’t be an immediate drop in the total world population. An overshoot still exists due to the longevity of the more numerous per-mother offspring of earlier, more fertile generations.

      There also remain high-fertility countries, with an average of more than three surviving children born to each woman, thus higher than the 2.1 children per woman that yields zero population growth. They include Palestine, the Middle East, Pakistan, and Afghanistan, plus all of sub-Saharan Africa exclusive of South Africa. The shift to lower fertility can happen during one or two generations. The United Nations biennial report on population in 2014 projected an 80 percent probability that by 2100 the world population, even as it decelerates toward zero growth, will reach between 9.6 billion and 12.3 billion, up from the 7.2 billion existing in 2014. That is a heavy burden for an already overpopulated planet to bear, but unless women worldwide switch back from the negative population trend of fewer than 2.1 children per woman, a turn downward in the early twenty-second century is inevitable….

More Women = Better Energy

[These excerpts are from an article by Katie Mehnert in the June 2019 issue of Scientific American.]

      Climate change is one of the most monumental challenges of our time. But even as it draws increasing calls for action, one of the most important steps we can take still gets far too little attention: we need more women in the energy sector. Only 15 percent of employees in the oil and gas industry are women, and that number is even smaller when you look at higher-paying technical jobs.

      Despite popular belief to the contrary, most leaders in oil and gas do recognize the reality of climate change. And many say they want to do something about it….

      …The Yale Program on Climate Change Communication recently reported that “on average, women are slightly more likely than men to be concerned about the environment and have stronger pro-climate opinions and beliefs.” And for years some women in energy fields have been prominent voices calling for greater action….

      This is true for all forms of diversity: The more different perspectives and life experiences that people bring to boardrooms and work teams, the more innovative ideas they can come up with together.

      …There are still far too many obstacles preventing women from entering the energy field and from reaching their full potential within it. The sector is paying a deep price for its long-term failure to recruit and retain a diverse workforce. When other industries beefed up operations to establish talent pipelines into diverse communities, far too many energy companies did not.

      We also need stronger STEM programs for young women and ample support for those programs from the oil and gas companies….

      To move forward, oil and gas companies also need to erase the negative perceptions many people have of the industry….

      For big ideas to flourish and big actions to follow, people of all backgrounds must be at the table tackling these challenges together. It is time all Americans see themselves represented among the decision makers at the companies that fuel our world.

A Call for Systemic Change for Personalized Learning

[These excerpts are from an article by Joseph J. Cirasuolo in the May 2019 issue of Phi Delta Kappan.]

      …If our schools haven't provided much personalized learning until now it isn’t because educators have lacked the necessary compassion or skills. Rather, it is because they’ve had to operate in a system that constrains such work….

      Or consider the fact that the school system compels all children to follow the same curriculum, instead of learning in a way that best suits the individual child. A significant percentage of children come to the system as active learners who have acquired an enormous number of skills in the first four or five years of their lives only to become quickly classified as slow learners or learning disabled because they do not learn in the way that the system teaches. Again, if the system neglects to provide multiple pathways for children to learn, that is not the fault of educators.

      …If we can change the relationship between time and learning — so that time becomes the variable and learning the constant — then educators will be much better able to help students take greater responsibility for and ownership of their own learning.

      Achieving this deep systemic change is a daunting task. It requires us to replace almost all the underlying assumptions upon which the present system is based….So if we’re serious about making deep and systemic educational changes, then we have to recognize that this also implies profound cultural changes.

      Of course, cultural change requires persistent and patient efforts….we need to be committed to doing the hard, patient work of systemic and cultural change, taking for granted that our colleagues are also motivated to help children learn. Rather than claiming to be the only real champions of personalized learning, we need to work together to free everybody from a system that prevents us from doing what we became educators to do.

Money, Power, and Education

[These excerpts are from an article by Maria Ferguson in the May 2019 issue of Phi Delta Kappan.]

      …It is not hyperbole to say that the Varsity Blues scandal, in which 50 people were charged with using bribes, false test scores, and other shady methods to influence college admission decisions, is one of the biggest education stories in years. Although the details of the scandal follow a familiar pattern — the powerful and wealthy using money and influence to get what they want — the high-profile cast of wrongdoers (actresses, coaches, CEOs) makes it far more salacious, if not nauseating.

      The fact that wealthy families have the upper hand when it comes to college admissions is not news to anyone, especially poor people. But the audacity of this scandal is a distasteful reminder of just how unfair postsecondary education has become in the United States. For less affluent families or first-generation college-goers, the admissions process alone can seem like an overwhelming gauntlet of decision making and paperwork. Add to that the competition with wealthier parents who can provide their kids with test prep, counselors, and resume-enhancing life experiences, and the process can become soul crushing. And the worries don’t stop when the acceptance letter arrives — because then it is time to start worrying about tuition, living expenses, and how to make the most of the experience.

      The parents involved in this scandal bought into a mythology that many elite colleges have worked hard to create. The idea that one school is so special, so rarefied, that the mere act of attending can set its students up for a happy and successful life is not an education-based strategy; it’s a marketing plan. Yes, elite networks can open doors and lower some of the barriers to entry that exist in the professional world, but this scandal is based on the false idea that the institutions themselves make a student happy and successful. I would argue instead that the true value of a postsecondary education ultimately depends on the student. It has been my experience that a hard-working, scrappy student who seeks out opportunities and mentors and actively builds a network can go pretty far in this world. One of the saddest aspects of this sordid story is that the adults involved didn’t have more faith in the students. Did it ever occur to them that these kids could get into a great school that was the right place for them on their own merit? Clearly not. These parents had their eyes on a completely different prize. They saw their children’s post-secondary education as a vehicle for enhancing their own social standing….

      The release of Trump’s fiscal year 2020 education budget was small peanuts after the news of the scandal broke. By proposing to cut overall spending by $7.1 billion while maintaining level funding for the grant programs with the strongest advocacy base (Title I, special education, and English language acquisition), the administration has done the political equivalent of stepping on all the flowers in the garden except the perennial favorites….

      For example, the $2.1-billion grant program that supports teacher training and development would be eliminated and replaced with a “test” program that would support the use of professional development vouchers. Putting aside the fact that the new program is only funded at $200 million (a $1.9-billion cut), do we really want to use teacher training as the proving ground for a test program that has no research base whatsoever? There is, no doubt, room for improvement when it comes to professional development, but when organizations like Learning Forward (whose sole focus is to support the ongoing development of teachers) express deep concerns about this plan, you have to wonder what (if any) logic is driving the administration’s stated goal of “elevating the teaching profession through innovation.”

      Higher education also took a hit in this budget. The administration once again seeks to eliminate the Public Service Loan Forgiveness program, which allows individuals who work in government or the nonprofit sector to be relieved of their student loan debt. The budget also cuts the Federal Work-Study program by a little more than half and takes away $2 billion in reserve funds for Pell Grants. These suggested cuts, which hit at the heart of federal student aid programs, come just as Congress begins the reauthorization process for the Higher Education Act. I have to believe the lingering stench of the Varsity Blues scandal will ultimately help the Democratic leaders in Congress who are pushing for a budget that actually helps students afford college….

      …Using the power of the bully pulpit, Secretary of Education Betsy DeVos had an opportunity to bring together some of the factions that fought so fiercely during the Obama years, but from the beginning it was clear she was more ideologue than diplomat….

Toward Equality of Educational Opportunity: What’s Most Promising?

[These excerpts are from an article by Arthur E. Wise in the May 2019 issue of Phi Delta Kappan.]

      …In the mid-1970s, I became concerned that advocates pushing for higher standards and tougher accountability were hijacking the movement to promote equity in school funding. The use of achievement testing was on the rise, and some saw these tests as a way to push schools to ensure that students of color and poor students achieved minimum levels of academic competency. To my mind, though, increased testing seemed likely to have a negative effect on teaching and learning in general, without doing much to provide the neediest students with high-quality instruction. Meanwhile, the flurry of activity surrounding standards, accountability, and testing would divert attention from funding inequity….

      Within a few years, many of us became convinced not only that this strategy would not work but that it was pushing education toward much greater centralization and bureaucratization….The trend began at the state level, as policy makers seized upon low-cost strategies based on practices from the business world, such as management-by-objectives, operations analysis, and other kinds of “scientific management.” Soon these ideas morphed into their educational equivalents: mastery learning, behavioral objectives, minimum competency testing, and more….

      Of course, the enactment of the No Child Left Behind Act (2001) dramatically ratcheted up this trend to a new level, demanding even more standardized testing along with measures of Adequate Yearly Progress, remedies for low-performing schools, and a top-down compliance-driven approach to management. In schools serving low-performing students, the strategy has bordered on obsession, with preparation for reading and math tests crowding out other subject areas. Even at schools that serve high-performing students, the pressure to raise test scores has led administrators to narrow the curriculum and treat teachers as instruments of the bureaucracy.

      Failure should have been anticipated. Absent from the strategy was any new approach to teaching and learning, except for the unproven assumption that “if you test it, they will learn”….Perhaps the Every Student Succeeds Act of 2015, which rolls back much of the federal government’s role in regulating schools, signals the beginning of the end of this movement. Still, though, it is important to acknowledge that many advocates thought that standards, testing, and accountability would lead to more equitable student outcomes. Though it has been a largely ineffective reform strategy and, in many ways, a destructive one, it is one of the three main strategies in recent decades intended to produce greater equity….

Equity in Anxious Times

[These excerpts are from an editorial by Rafael Heller in the May 2019 issue of Phi Delta Kappan.]

      As Maria Ferguson notes in this month's Washington View column, Operation Varsity Blues — the wide-ranging federal investigation into cheating and bribery in elite college admissions — already ranks as “one of the biggest education stories in years.” Since March, when the news broke, pundits have issued a steady stream of angry op-eds denouncing not just the 50 celebrities and CEOs indicted for sleazing their kids into schools like Yale and Georgetown, but also the many other (perfectly legal) ways in which well-to-do parents secure coveted spots at selective colleges for their children: making big donations, taking advantage of “legacy” admissions, hiring private college counselors, paying for test-prep services, and so on.

      …Few topics have received more attention, in these pages, than the inequitable distribution of educational resources and opportunities. Decade after decade, researchers have found that the more affluent the students, the more likely they are to study with the most experienced teachers, go to the schools with the nicest facilities, have access to the newest equipment, and enjoy many other advantages.

      Nor, for that matter, should any of this come as a surprise to the general public. For instance, every point that has been argued in response to the Varsity Blues scandal has already been detailed in best sellers….Anybody who thought rich and poor kids were competing on an even playing field simply hasn’t been paying attention.

      They sure are paying attention now, though. In part, as Ferguson points out, that's due to the celebrities involved in the Varsity Blues scandal and the sheer audacity of the fraud they appear to have committed. No doubt, though, much of it also has to do with the heightened sense of economic anxiety and resentment that has bubbled up….For many people, stories about inequality hit an awfully raw nerve right now.

Uniformity of the Universe

[This excerpt is from the fifth lecture in The Theory of Everything by Stephen W. Hawking.]

      …Nevertheless, it leaves a number of important questions unanswered. First, why was the early universe so hot? Second, why is the universe so uniform on a large scale—why does it look the same at all points of space and in all directions?

      Third, why did the universe start out with so nearly the critical rate of expansion to just avoid recollapse? If the rate of expansion one second after the big bang had been smaller by even one part in a hundred thousand million million, the universe would have recollapsed before it ever reached its present size. On the other hand, if the expansion rate at one second had been larger by the same amount, the universe would have expanded so much that it would be effectively empty now.

      Fourth, despite the fact that the universe is so uniform and homogeneous on a large scale, it contains local lumps such as stars and galaxies. These are thought to have developed from small differences in the density of the early universe from one region to another. What was the origin of these density fluctuations?

      The general theory of relativity, on its own, cannot explain these features or answer these questions. This is because it predicts that the universe started off with infinite density at the big bang singularity. At the singularity, general relativity and all other physical laws would break down. One cannot predict what would come out of the singularity….

Heavenly Bodies

[This excerpt is from the first lecture in The Theory of Everything by Stephen W. Hawking.]

      Ptolemy’s model provided a reasonably accurate system for predicting the positions of heavenly bodies in the sky. But in order to predict these positions correctly, Ptolemy had to make an assumption that the moon followed a path that sometimes brought it twice as close to the Earth as at other times. And that meant that the moon had sometimes to appear twice as big as it usually does. Ptolemy was aware of this flaw but nevertheless his model was generally, although not universally, accepted. It was adopted by the Christian church as the picture of the universe that was in accordance with Scripture. It had the great advantage that it left lots of room outside the sphere of fixed stars for heaven and hell.

      A much simpler model, however, was proposed in 1514 by a Polish priest, Nicholas Copernicus. At first, for fear of being accused of heresy, Copernicus published his model anonymously. His idea was that the sun was stationary at the center and that the Earth and the planets moved in circular orbits around the sun. Sadly for Copernicus, nearly a century passed before this idea was to be taken seriously. Then two astronomers—the German, Johannes Kepler, and the Italian, Galileo Galilei—started publicly to support the Copernican theory, despite the fact that the orbits it predicted did not quite match the ones observed. The death of the Aristotelian-Ptolemaic theory came in 1609. In that year Galileo started observing the night sky with a telescope, which had just been invented.

      When he looked at the planet Jupiter, Galileo found that it was accompanied by several small satellites, or moons, which orbited around it. This implied that everything did not have to orbit directly around the Earth as Aristotle and Ptolemy had thought. It was, of course, still possible to believe that the Earth was stationary at the center of the universe, but that the moons of Jupiter moved on extremely complicated paths around the Earth, giving the appearance that they orbited Jupiter. However, Copernicus’s theory was much simpler.

      At the same time, Kepler had modified Copernicus’s theory, suggesting that the planets moved not in circles, but in ellipses. The predictions now finally matched the observations. As far as Kepler was concerned, elliptical orbits were merely an ad hoc hypothesis—and a rather repugnant one at that because ellipses were clearly less perfect than circles. Having discovered, almost by accident, that elliptical orbits fitted the observations well, he could not reconcile them with his idea that the planets were made to orbit the sun by magnetic forces.

      An explanation was provided only much later, in 1687, when Newton published his Philosophiae Naturalis Principia Mathematica. This was probably the most important single work ever published in the physical sciences. In it, Newton not only put forward a theory of how bodies moved in space and time, but he also developed the mathematics needed to analyze those motions. In addition, Newton postulated a law of universal gravitation. This said that each body in the universe was attracted toward every other body by a force which was stronger the more massive the bodies and the closer they were to each other. It was the same force which caused objects to fall to the ground. The story that Newton was hit on the head by an apple is almost certainly apocryphal. All Newton himself ever said was that the idea of gravity came to him as he sat in a contemplative mood, and was occasioned by the fall of an apple.

They Persisted

[These excerpts are from an article by Sara Talpos in the 17 May 2019 issue of Science.]

      …The concerned citizens uncovered evidence that the tannery had contaminated large swaths of land and water with chemicals known as per- and polyfluoroalkyl substances (PFASs), which researchers have linked to an array of human health problems. More than 4000 such compounds exist, and they are widely used in products such as fire-fighting foams, nonstick coatings, carpeting, food packaging, and even dental floss. The tannery used two PFASs by the ton to waterproof shoe leather….

      …Around the country, evidence of PFAS contamination has anxious residents demanding to know how exposure could affect their health. Regulators are struggling to balance cost and risk as they set safety limits. And companies, fire departments, water utilities, and the U.S. military are facing cleanup and liability costs that could total tens of billions of dollars or more….

      At the heart of the PFAS controversy is the carbon-fluorine bond, among the strongest of all chemical bonds. Enzymes can’t break it. Sunlight can’t break it. Water can’t break it. That durability explains the commercial appeal of PFASs, but it makes them problematic pollutants. They’ve been dubbed “forever chemicals” because they don’t degrade naturally. And because the molecules have a water-soluble head, water and airborne droplets can carry them for long distances.

      The U.S. chemists who discovered how to synthesize PFASs in the 1930s, however, were beguiled by their advantages. Use of the chemicals in the United States began to expand rapidly during the 1950s, when the Minnesota Mining and Manufacturing Company, a Saint Paul-based firm now called 3M, began to sell two compounds: perfluorooctanoic acid (PFOA) and perfluorooctanesulfonic acid (PFOS). PFOA became the basis for Teflon, the ubiquitous nonstick cookware coating manufactured by DuPont. PFOS became a key ingredient in firefighting foams used at airports and military bases and in the popular Scotchgard protectant, which enabled fabrics and other materials to resist water and oils….

      Even as sales of PFOA and PFOS boomed, however, 3M and DuPont researchers were amassing evidence that the chemicals accumulated in people and other animals and could have toxic effects. Much of that evidence became public only because of a lawsuit. In 1980, DuPont purchased farmland in West Virginia and began to dump waste laced with PFOA there. Cattle that grazed nearby began to die, and in 1999 a local family sued the company. The proceeding forced DuPont to hand over internal files….In 2001, DuPont paid an undisclosed sum to settle the case, and EPA fined the company in 2005 for violating rules for toxic waste. Under pressure from EPA, U.S. manufacturers agreed in 2006 to phase out production of PFOA by 2015. (They ended PFOS production in 2002.) Often, the two chemicals were replaced by related PFASs that manufacturers have asserted are safer and break down faster….

      …In 2011 and 2012, three independent epidemiologists who analyzed the data issued reports indicating a probable link between PFAS exposure and six conditions: high cholesterol, ulcerative colitis, thyroid disease, testicular cancer, kidney cancer, and pregnancy-induced high blood pressure….

      Meanwhile, other researchers were finding that almost all people living in the United States carry detectable levels of PFASs in their blood (although levels of PFOA and PFOS have declined since they were phased out). And the more researchers looked for PFAS contamination around industrial sites, airports, and military bases, the more they found….

      After the CDC report became public, however, the Trump administration—under growing pressure from Congress and state officials—promised to take action. And in February, EPA released a plan that calls for formally setting regulatory limits for PFOA and PFOS and for launching a nationwide program to monitor PFASs in water systems. The agency said it will beef up research into detection and cleanup methods, consider requiring companies to report PFAS releases, and even consider banning certain compounds.

      EPA also is planning to intensively examine about 125 of the thousands of newer, less studied PFASs, in collaboration with the National Toxicology Program. One goal is to test the assumption that the newer compounds are safer because they have shorter lives….

Uncovering the Hidden Curriculum

[These excerpts are from an article by Janani Hariharan in the 17 May 2019 issue of Science.]

      …I did not know what was expected of me or how the academic system operated. I was confused and desperate for help.

      “You’re too nice!” the professor said. “You’re very quick to agree with everyone, and you never stand up for yourself.” I was taken aback. Was I allowed to disagree with a professor? All my previous training had taught me otherwise. Clearly there was a whole set of rules I didn’t know. I wondered whether I ever would….

      …I struggled with assignments that tested my thinking and presentation skills instead of asking me to memorize and regurgitate information. The approach to grading was peculiar and confusing….

      …For me, things started to take a turn for the better when I realized I could reach out for help….

      …As I answered their questions I suddenly realized that, even though I was still learning, I had knowledge to share that could help others navigate the hidden curriculum….

Two Threats to U.S. Science

[These excerpts are from an editorial by Bruce Alberts and Venkatesh Narayanamurti in the 17 May 2019 issue of Science.]

      …The cycle of success that catapulted the United States to a global leadership position in science and technology has long been fueled by its many research universities. These institutions create the new fundamental knowledge in science and engineering on which all else depends, and they also train the large numbers of outstanding young people required to produce the next generation of professors, technologists, and entrepreneurs. U.S. universities have attracted great young scientists and engineers from all over the world, many of whom choose to remain in the country, strengthening our institutions and enterprises. Two critical features of this system are now threatened: the support of young people and their unique potential to take risks and explore promising new ideas; and a merit-based selection of scientists and engineers to populate academia and industry, viewing everyone as equal, regardless of the nation in which they were born.

      The current grant opportunities for starting a new independent research career in academia have not only become increasingly unavailable to young scientists and engineers, but are also disastrously risk-averse. At the NIH, the proportion of all grant funds awarded to scientists under the age of 36 fell from 5.6% in 1980 to 1.5% in 2017. One might ask the rhetorical question: How successful would Silicon Valley be if nearly 99% of all investments were awarded to scientists and engineers age 36 years or older, along with a strong bias toward funding only safe, nonrisky projects? Similarly, at the U.S. Department of Energy and its National Laboratories, high-risk, high-reward research and development has been severely limited by extreme volatility in research funding and by very limited discretionary funding at the laboratory level.

      Another major concern stems from a new distressing and dangerous public dialogue, encouraged by some political leaders, that unjustly disparages the many people in the United States who were born elsewhere. This strikingly un-American attitude, along with the new visa policies that it has generated, is discouraging the migration to the United States of young talent in science and engineering from other nations that has been instrumental to the nation’s success….Nearly half of current doctoral students in science, technology, engineering, and mathematics (STEM) fields are from abroad, and the United States needs to make it easier, not harder, for them to stay and contribute to the cycle of success.

      U.S. leadership must focus on stimulating innovation by awarding an equal number of grants to those new investigators proposing risky new research ideas and those proposing to extend the research that they did during their training period, while also funding them at a younger age. At the same time, it is imperative that the United States reconsider its visa and immigration policies, making it much easier for foreign students who receive a graduate degree in a STEM discipline from a U.S. university to receive a green card, while stipulating that each employment-based visa automatically cover a worker’s spouse and children….

Turning off the Emotion Pump

[These excerpts are from an article by Wade Roush in the May 2019 issue of Scientific American.]

      …Facebook watches to learn what pleases you and what angers you, and it uses that information to auction ads to companies that want to reach consumers with your specific profile. It also watches what everyone else likes, then shows you more of whatever is most engaging that day—the better to keep you scrolling, so that you’ll encounter more ads….Think of it as an emotion pump. You finish reading a post. Before you can close the app or click to another browser tab, you scroll some more, almost by reflex. In that moment, Facebook injects another post optimized to make you laugh or get you angry, and the cycle continues. Polarizing content keeps the pump constantly primed by riling users up.

      The side effects of this strategy have become plain in nations such as Myanmar, the Philippines and the U.S., where misinformation shared on Facebook has fueled division and social unrest….Russian-sponsored Facebook ads and posts swayed the outcome of the 2016 presidential election. Nobody at Facebook anticipated these effects. But they can’t be swept under the rug—and they can’t be solved through minor algorithmic adjustments, because this is Facebook. The emotion pump is at the core of its business model.

Climate-Friendly Capitalism

[These excerpts are from an editorial by the editors in the May 2019 issue of Scientific American.]

      …Recently more and more of these resolutions have pushed companies to act on climate change and reduce greenhouse gas emissions. Two years ago, for instance, investor proposals forced Shell to sell off carbon-rich oil sands assets. Investors also made the company tie 10 percent of executive bonus pay to success in cutting emissions. And earlier this year BP, bowing to investor pressure, agreed to align future capital spending with the targets of the 2015 Paris climate accord, reducing emissions enough to keep global temperature rise below two degrees Celsius. That could mean cuts as high as 50 percent, depending on the country.

      This year and going forward, investors should exert more of this leverage on these and other companies. That is because politicians, especially in the U.S., have abjectly failed to address the threats that climate change poses to health, national security and the environment. President Donald Trump has repeatedly said he does not see climate change as a problem, despite strong and steadily growing scientific evidence from the world’s researchers—and his own government agencies. This year the White House took steps to create a panel, chaired by someone who believes mounting carbon dioxide is good for the planet, to attack this overwhelming scientific consensus. On a local level, the state of Washington recently voted down a tax on carbon emissions.

      The businesses that generate large amounts of greenhouse gases, in contrast, have proved willing to change their ways when investors insist on it. Of the more than 600 largest publicly traded companies in the U.S., 64 percent have now made commitments to reduce emissions….

      Companies’ desire to avoid embarrassing proxy-season showdowns has given rise to another investor force—a shareholder network called Climate Action 100+, whose members have $32 trillion in assets under management and try to push corporate changes outside of these yearly meetings. One success earlier this year: international mining giant Glencore said it will not grow its coal-mining business any larger and will develop targets for emissions reductions. Climate Action 100+ is also pressing nonenergy businesses that generate a lot of emissions, such as steel manufacturers, to line up behind science-based reduction goals.

      The motive of these investment funds is not unfettered altruism. While they hold oil company stock, they also invest in real estate along coastlines threatened by rising seas, in health care firms whose costs will increase, and in dozens of other sectors that stand to take a substantial hit if climate change is not brought under control. So they have to take a long-term and global view….

The Trillion-Gallon Problem

[These excerpts are from an article by Shanti Menon in the Spring 2019 issue of Environmental Defense Fund’s Solutions.]

      …Scientists linked Oklahoma’s sudden jump in earthquake activity to pressure from wastewater injected into underground disposal wells by the oil and gas industry. The state limited those injections, but the volume of wastewater has been rising rapidly nationwide. The industry now produces a trillion gallons of it every year — enough to fill more than a million Olympic swimming pools….

      This year, the EPA and water-stressed states such as New Mexico, Oklahoma and Texas could open the door to all kinds of wastewater disposal and reuse, including expanding the discharge of treated wastewater into rivers and streams and reusing treated wastewater on lawns, golf courses, ranches and farms. Some have even considered using it to replenish drinking water supplies.

      There’s one mega-problem: Nobody really knows what’s in this water.

      Without an understanding of what’s in wastewater or how to clean it, risky proposals to discharge or reuse it could threaten precious groundwater, crops, livestock and people's health across the parched American West….

      In hard-rock formations such as the Permian Basin, which straddles Texas and New Mexico, oil and gas companies extract petroleum by injecting billions of gallons of water mixed with chemicals into the rock at high pressure.

      The wastewater that comes out the other side can contain more than 1,000 chemicals, including cancer-causing arsenic and benzene. Some of these occur naturally underground, others are purely industrial….

      In the 1920s, wastewater released directly onto Texas soil created the Texon Scar, a patch of blighted earth visible from space. Cleanup of the scar is still ongoing today, nearly 100 years later.

      The composition of wastewater varies from well to well. Samples are hard to get and difficult to analyze. Some of the chemicals used are industrial secrets….

      …Of the 1,200 chemicals listed to date, most are not well studied, and some have not been evaluated at all. This makes it difficult to determine when wastewater is clean enough to discharge and what impacts it might have on a farmer's fields, a rancher's cattle, fish in a river or drinking water.

      Even as the science begins to raise red flags, research is being outpaced by the rapid rise of water-intensive drilling in places like the Permian Basin. Some in the industry want to rush into wastewater reuse and discharge now, seeing a window of opportunity in today’s industry-friendly EPA led by Andrew Wheeler. The safeguards that regulate wastewater discharge were created decades ago, when the industry discharged relatively little. This summer, the EPA will decide if it’s time to reconsider the rules….

      For the moment, there's enough room to handle wastewater with disposal wells that are properly located, designed and monitored to avoid groundwater pollution and earthquakes. In the future, more water could be recycled on-site, and eventually, with a lot more science and strong state and federal safety standards, it could be treated and reused in ways that minimize environmental risks.

      Not everyone is willing to wait. An entrepreneurial ex-rodeo clown in Wyoming claims he can treat wastewater for crop irrigation, and is gearing up for a state-authorized test on a wheat field. In the Permian Basin, organizations eager to explore wastewater treatment and reuse have sprung up like Texas wildflowers. A recent joint paper from the EPA and the state of New Mexico renames oil field wastewater as “renewable water.”

      In this Wild West of water pushers, drought and environmental rollbacks, all options appear to be in play. But with so many unknowns, there's no way to be certain that treated wastewater is clean enough for these new purposes….

Can a Dire Ecological Warning Lead to Action?

[These excerpts are from an article by Erik Stokstad in the 10 May 2019 issue of Science.]

      After issuing a landmark report warning of a deepening planetary threat from the loss of biodiversity, a global scientific advisory group faces a new and daunting challenge: helping policymakers act to stop the decline.

      At least 1 million plant and animal species of the estimated 8 million known are now at risk of extinction, according to this week’s report from the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES). Already, environmental degradation imperils the natural systems that provide us with food, water, and livelihoods….

      Blunting the threat will require transformational economic and social change, the report concludes. And now, 7-year-old IPBES faces the question of how to best help catalyze that transformation. Some experts want the group to put a greater emphasis on working closely with policymakers….But others caution that IPBES must remain a politically neutral provider of science-based information and not strain its limited resources.

      ….human activities, especially agriculture and exploitation such as hunting, have “severely altered” 75% of Earth’s land area, helped kill about half of the world’s coral reefs, and decimated many wildlife populations. It forecasts that, at expected rates of human population growth and consumption, the trends will worsen without major changes, including the adoption of new farming practices. More fundamentally, the report says business and political leaders need to begin “steering away from the current limited paradigm of economic growth.”

      ….meaningful change often requires extensive practical follow-up work with those who make and implement government policies….

      …Politicians from coastal nations, for example, might want to know how to conserve ecosystem benefits, such as productive fisheries and mangroves that provide storm protection, while allowing for development and economic activities. Other nations might be more interested in how to manage forests so they can produce and purify water while continuing to provide fuel and timber.

      Observers say IPBES can also help national decision-makers by identifying and recommending products that might be especially relevant, such as computer models for analyzing different approaches to balancing competing needs and minimizing social conflicts. IPBES is planning to create a new task force dedicated to identifying such tools. And it already has an online catalog for sharing such resources….

      IPBES needs to be careful, however, not to prescribe specific policies….Too much detail…can make it difficult for the group’s diverse member nations to agree on and approve the reports.

      That balancing act is likely to be on display in coming years, as IPBES finishes three ongoing assessments, including a report on the various ways that people value nature, due in 2022, and another on the sustainable use of wild species such as fish and medicinal plants….

Ancient Jaw Gives Elusive Denisovans a Face

[These excerpts are from an article by Ann Gibbons in the 3 May 2019 issue of Science.]

      Thirty-nine years ago, a Buddhist monk meditating in a cave on the edge of the Tibetan Plateau found something strange: a human jawbone with giant molars. The fossil eventually found its way to scientists. Now, almost 4 decades later, a groundbreaking new way to identify human fossils based on ancient proteins shows the jaw belonged to a Denisovan, a mysterious extinct cousin of Neanderthals.

      The jawbone is the first known fossil of a Denisovan outside of Siberia’s Denisova Cave in Russia, and gives paleoanthropologists their first real look at the face of this lost member of the human family….

      Together, the jaw’s anatomy and the new method of analyzing ancient proteins could help researchers learn whether other mysterious fossils in Asia are Denisovan….

      The international team of researchers also reports that the jawbone is at least 160,000 years old. Its discovery pushes back the earliest known presence of humans at high altitude by about 120,000 years.

      A massive search for Denisovans has been underway ever since paleogeneticists extracted DNA from the pinkie of a girl who lived more than 50,000 years ago in Denisova Cave and found she was a new kind of human….

Pipe Dreams

[These excerpts are from an article by James Temple in the May/June 2019 issue of MIT Technology Review.]

      Severe droughts have drained rivers, reservoirs, and aquifers across vast parts of India in recent years, pushing the nation’s leaky, polluted water systems to the brink.

      More than 600 million Indians face “acute water shortages,” according to a report last summer by NITI Aayog, a prominent government think tank. Seventy percent of the nation's water supply is contaminated, causing an estimated 200,000 deaths a year. Some 21 cities could run out of groundwater as early as next year, including Bangalore and New Delhi, the report found. Forty percent of the population, or more than 500 million people, will have “no access to drinking water” by 2030.

      India gets more water than it needs in a given year. But the vast majority of rain falls during the summer monsoon season, generally a four-month window. The country’s other major source is melting snow and glaciers from the Himalayan plateau, which feeds rivers in the north.

      Capturing and delivering the water to the right places at the right times across thousands of miles, without wasting or contaminating tremendous amounts along the way, is an enormous engineering challenge. India captures and uses only a fraction of its rainfall, allowing most of it to run off into the ocean.

      Meanwhile, farmers without efficient irrigation systems employ heavily subsidized electricity to suck up as much groundwater as possible. Agriculture is the single largest drain on India’s water supplies, using more than 80% of the water despite accounting for only around 15% of the country’s GDP….

      Climate change will surely make the problem worse. It’s uncertain what role higher temperatures have played in recent droughts, as the climate models have mainly predicted increasingly intense Indian monsoons. But the longer-term forecast is that the extremes will become more extreme, threatening more frequent flooding and longer droughts.

      Most climate studies predict that India will get more rain on average in the decades to come, though regional and seasonal patterns will vary sharply. A paper published last year in Geophysical Research Letters found that flash flooding will significantly increase in 78 of the 89 urban areas evaluated if global temperatures rise to 2 °C above preindustrial levels. The resulting catastrophes will disproportionately harm India's poor, who frequently settle along the low-lying floodplains of major cities.

      Sea-level rise threatens to deluge villages and megacities, and poison the water tables, along the subcontinent’s 7,500 kilometers (4,660 miles) of coastline between the Arabian Sea and the Bay of Bengal.

      Finally, climbing temperatures and shrinking snowfall will accelerate the melting of the Himalayan glaciers, the wellspring of major Asian waterways including the Ganges, Indus, Yangtze and Yellow rivers. In some regions, under high emissions scenarios, glaciers could shrink by as much as half by midcentury and 95% by 2100.

      Initially the increased runoff will swell rivers, raising the risks of downstream flooding but sending Indians more water. That trend is likely to shift into reverse in the second half of the century, however, shrinking the flow to around 1.9 billion people who live along those rivers. The Ganges basin alone supports 600 million people, provides 12% of the country’s surface water, and accounts for 33% of GDP….

      Whether shoddy infrastructure or climate change is to blame for India’s water sources running dry or turning toxic won’t, in the end, much matter in the minds of the victims. And either way, India will need to grapple with present-day disasters and fortify infrastructure for the dangers to come—all with fewer resources than rich nations and without derailing its economic growth….

How to Cool an Ocean

[These excerpts are from an article by Holly Jean Buck in the May/June 2019 issue of MIT Technology Review.]

      Coral reefs smell of rotting flesh as they bleach. The riot of colors—yellow, violet, cerulean—fades to ghostly white as the corals’ flesh goes translucent and falls off, leaving their skeletons underneath fuzzy with cobweb-like algae.

      Corals live in symbiosis with a type of algae. During the day, the algae photosynthesize and pass food to the coral host. During the night, the coral polyps extend their tentacles and catch passing food. Just 1°C of ocean warming can break down this coral-algae relationship. The stressed corals expel the algae, and after repeated or prolonged episodes of such bleaching, they can die from heat stress, starve without the algae feeding them, or become more susceptible to disease.

      Australia’s Great Barrier Reef—actually a 2,300-kilometer (1,400-mile) system made up of nearly 3,000 separate reefs—has suffered severe bleaching in the past few years….

      Coral reefs are not just about colorful fish and exotic species. Reefs protect coasts from storms; without them, waves reaching some Pacific islands would be twice as tall. Over 500 million people depend on reef ecosystems for food and livelihoods. Even if the temperature increase eventually stabilizes at 1.5 °C a century or two from now, it’s not known how well coral reef ecosystems will survive a temporary overshoot to higher temperatures….

      Arctic ecosystems, mountain glaciers, and the redwood forests in California are also at high risk from even small changes in global mean temperature. So are species that can’t move quickly and find another suitable niche….

      Besides upsetting ecosystems, ocean warming will, of course, raise sea levels. They are already 13 to 20 centimeters (5 to 8 inches) higher than in 1900. In the 20th century, most of this rise came from ocean waters expanding as they got warmer, but now the effects of melting glaciers and ice sheets have far overtaken thermal expansion. The rise produced by melting glaciers is projected to be staggering—on the order of meters per century.

      But what if we could engineer specific glaciers to keep them from melting? John Moore, a glaciologist and leader of China’s geoengineering research program, has recently been looking into this, and he wrote a comment with colleagues in Nature that outlines a few ways to do it.

      One example involves two Antarctic glaciers scientists have a nervous eye on: Pine Island and Thwaites. Warm ocean water comes in underneath them. Conventional wisdom says this is unstoppable and irreversible, because of the bedrock slope and geometry. But Moore suggests that building artificial islands in front of the glaciers could buttress them, pinning down the ice and holding it back the way natural rocks and islands do.

      Another technique would be to extract water from below the glaciers to keep them from sliding off into the ocean. Glaciers sit on subglacial streams, or thin layers of water, and drying these streams could slow their slide into the sea….

The Threat to the World’s Breadbasket

[This excerpt is from an article by Adam Piore in the May/June 2019 issue of MIT Technology Review.]

      By 2050, the world’s population is expected to grow to 9.7 billion. As living standards and diets also improve around the world, food production will have to increase by 50% at a time when climate change will help make both sub-Saharan Africa and East Asia unable to meet their own needs without imports. Already US corn and soybeans account for 17% of the world’s caloric output. The UN Food and Agriculture Organization projects that American exports of corn must almost triple by 2050 to meet the shortfall, while US soy exports would have to rise by more than 50%. All this extra food has to be grown without using significantly more land. That means it's going to be all about yield—the productivity of the crop.

Already Hot and Getting Hotter

[These excerpts are from an article by Lauren Zanolli in the May/June 2019 issue of MIT Technology Review.]

      …Since the spring of 2018, Mexico’s Caribbean coast and the shorelines of 19 other countries in the region have been inundated with unprecedented amounts of Sargassum seaweed. Tourists expecting pristine white beaches have instead been confronted with endless piles of slimy, decaying vegetation. While it’s normal for the brown macroalgae to appear on Caribbean shorelines in smaller amounts, outlier blooms have been increasing in size and regularity over the past 10 years. The one that occurred last year is believed to be the worst ever in the region. Now efforts are mounting not just to contain the ecological crisis but to capitalize on it.

      Usually Sargassum arrives in the Caribbean from its namesake sea in the eastern Atlantic. But researchers believe the 2018 influx came from a new source: the equatorial waters between Brazil and West Africa, where pesticide and fertilizer runoff from the Amazon and Congo Rivers fed the algae bloom. This bloom was amplified by climate change....rising ocean temperatures help the seaweed proliferate faster. Deforestation in the Amazon rainforest also feeds the bloom—not only does it increase the pesticide and fertilizer runoff, but it is itself a massive contributor to climate change….

      In ordinary conditions, Sargassum is a normal, even healthy, part of the ocean. But in vast quantities, the seaweed brings a litany of harms to coastal ecosystems. Mats of it block much-needed sunlight from coral reefs, causing disease or death. As the seaweed dies and decays, bacteria suck up oxygen in the water while nitrogen, phosphorus, and other nutrients are released in massive quantities. If the seaweed is left to rot on land, as it is on Mexico’s beaches, the same nutrients threaten to leach into groundwater. In the Yucatan Peninsula, where the only source of fresh water is a unique regional network of underground rivers, water contamination is a serious worry. And while seaweed can draw carbon dioxide from the atmosphere, just as trees do, it’s beneficial for the environment only if the seaweed is harvested and processed, and the resulting CO2 is stored away permanently….

India’s Dilemma

[These excerpts are from an article by James Temple in the May/June 2019 issue of MIT Technology Review.]

      …India has recently completed or approved dozens of giant solar and wind projects, nearly doubling its renewables capacity since 2015. For the last two years, it was the fastest builder of solar projects on the planet after China. All told, the country has around 75 gigawatts of solar, wind, and other renewable sources installed—and more than 45 gigawatts in the pipeline.

      In 2015, government officials announced plans to more than quadruple renewables capacity, setting a target of 175 gigawatts by 2022. Later that year, under the Paris climate accords, India committed to produce 40% of its electricity from clean sources and cut emissions intensity (the level of carbon dioxide produced per unit of GDP) to at least 33% below 2005 levels by 2030.

      India is now a shining case study in how rapidly generation of renewable power can expand with government investment and support, even in a deeply poor country. But it also underscores the fact that adding clean energy and cutting climate emissions aren’t the same thing.

      For India to achieve the latter, clean energy would need to replace—not simply augment—coal, which currently generates nearly 55% of the nation’s electricity. And that's not happening anytime soon in one of the world’s fastest growing and fastest urbanizing economies.

      India’s GDP could more than quintuple by 2040, more than doubling its energy demand, according to the International Energy Agency. That would represent roughly a quarter of the total global increase over that period. Air-conditioning units alone could increase 15-fold, as citizens become better off and cities grow hotter….

      Estimates vary widely, but the IEA expects that carbon emissions from India’s power sector will rise 80% through 2040 even with the renewable generating plants currently planned. By then India could overtake the US as the world’s second largest emitter, undermining efforts to curb global warming. If India can’t pull off the necessary reductions, even with its substantial policies and investments, it means that wealthy nations will need to step up their efforts even more….

      India has shown that a developing nation can rapidly add clean energy, at costs below those associated with coal plants, while still expanding its economy and creating jobs….

      …It’s still likely to take decades before the nation’s renewables start to replace coal and bring down emissions, given how fast energy needs are growing and how hard it is to integrate intermittent sources like wind and solar….

      Last spring, following a three-year, $2.5 billion government-funded effort to link transmission lines to the most remote parts of the country, Modi trumpeted that “every single village of India now has access to electricity.”

      But the government set a low bar, ticking the box so long as 10% of households in a village, and institutions like schools and hospitals, were connected. That means as many as 90% of rural households in many villages still aren’t wired, and even those that are may get power only a few hours a day. At least tens of millions of Indians still lack electricity.

      Fixing all this is likely to require far more funding and a broad regulatory overhaul, including penalties for utilities that fail to provide power and reforms that move prices closer to market rates. But the latter is an extremely unpopular notion in India, where the belief that the state should deliver cheap power is deeply held, dating back to the earliest promises of independence….

      …The UN’s climate-change body has concluded that the world needs to cut carbon dioxide emissions 45% from 2010 levels by 2030, and eliminate them entirely by midcentury, to have a decent chance of preventing 1.5 °C of warming. India is the world's fourth largest emitter, contributing 7% of emissions, behind China (27%), the US (15%), and the European Union (10%), according to the Global Carbon Project.

      But it’s fundamentally unfair to ask the country to cap its climate pollution and stunt its growth now, given that richer countries have pumped out far more carbon dioxide to get to where they are today. They’ve enjoyed decades of accumulating economic growth thanks largely to cheap fossil fuels.

      India’s per capita energy consumption is around one-tenth of America's—and even if it doubles by 2030, it will be only half what China’s was in 2015….

      …Arguably, wealthier countries should also help poorer ones reduce emissions, whether that’s by providing low-interest capital or subsidized technology, or by developing cheaper clean-energy solutions. If they don't want to do it because it’s the right thing to do, then they should do it for the self-interested reason: climate change doesn’t recognize borders.

What Would You Pay to Save the World?

[These excerpts are from an article by David Rotman in the May/June 2019 issue of MIT Technology Review.]

      In contrast to the existential angst currently in fashion around climate change, there’s a cold-eyed calculation that its advocates, mostly economists, like to call the most important number you’ve never heard of.

      It’s the social cost of carbon. It reflects the global damage of emitting one ton of carbon dioxide into the sky, accounting for its impact in the form of warming temperatures and rising sea levels. Economists, who have squabbled over the right number for a decade, see it as a powerful policy tool that could bring rationality to climate decisions. It’s what we should be willing to pay to avoid emitting that one more ton of carbon.

      For most of us, it's a way to grasp how much our carbon emissions will affect the world's health, agriculture, and economy for the next several hundred years….

      Common estimates of the social cost of that ton are $40 to $50. The cost of the fuel for the journey in an average car is currently around $225. In other words, you’d pay roughly 20% more to take the social cost of the trip into account.
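The article’s 20% figure can be checked with its own numbers; this is a back-of-the-envelope sketch, and the assumption that the trip emits roughly one ton of CO2 is mine, inferred from the excerpt rather than stated in it:

```python
# Quick check of the ~20% surcharge using the figures quoted above.
FUEL_COST = 225.0                # dollars of fuel for the trip (from the article)
SCC_LOW, SCC_HIGH = 40.0, 50.0   # social cost of carbon, dollars per ton of CO2

scc_mid = (SCC_LOW + SCC_HIGH) / 2   # midpoint estimate: $45 per ton
surcharge = scc_mid / FUEL_COST      # social cost as a share of the fuel cost
print(f"surcharge: {surcharge:.0%}")  # → surcharge: 20%
```

At the Trump administration’s $1 to $7 estimate the same trip would carry a surcharge of under 3%, while at the $400 academic high end it would nearly triple the cost of the fuel, which is why the choice of number matters so much for policy.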

      The number is contentious, however. A US federal working group in 2016, convened by President Barack Obama, calculated it at around $40, while the Trump administration has recently put it at $1 to $7. Some academic researchers cite numbers as high as $400 or more.

      Why so wide a range? It depends on how you value future damages. And there are uncertainties over how the climate will respond to emissions. But another reason is that we actually have very little insight into just how climate change will affect us over time. Yes, we know there’ll be fiercer storms and deadly wildfires, heat waves, droughts, and floods. We know the glaciers are melting rapidly and fragile ocean ecosystems are being destroyed. But what does that mean for the livelihood or life expectancy of someone in Ames, Iowa, or Bangalore, India, or Chelyabinsk, Russia?

      For the first time, vast amounts of data on the economic and social effects of climate change are becoming available, and so is the computational power to make sense of it. Taking this opportunity to compute a precise social cost of carbon could help us decide how much to invest and which problems to tackle first….

      Over the last several years, economists, data scientists, and climate scientists have worked together to create far more detailed and localized maps of impacts by examining how temperatures, sea levels, and precipitation patterns have historically affected things like mortality, crop yields, violence, and labor productivity. This data can then be plugged into increasingly sophisticated climate models to see what happens as the planet continues to warm.

      The wealth of high-resolution data makes a far more precise number possible—at least in theory….

      So far, the researchers have found that climate change will kill far more people than once thought….It found that the social cost of carbon due to increased mortality alone is $30, nearly as high as the Obama administration's estimate for the social cost of all climate impacts. An additional 9.1 million people will die every year by 2100, the group estimates, if climate change is left unchecked (assuming a global population of 12.7 billion people)….

      If climate change is left to run unchecked through the end of the century, the southern and southwestern US will be devastated by rising rates of mortality and crop failure. Labor productivity will slow, and energy costs (especially due to air-conditioning) will rise. In contrast, the northwestern and parts of the northeastern US will benefit….

      India is the big loser. Not only does it have a fast-growing economy that will be slowed, but it’s already a hot country that will suffer greatly from getting even hotter….

      Estimating the global social cost of carbon also raises a vexing question: How do you put a value on future damages? We should invest now to help our children and grandchildren avoid suffering, but how much? This is hotly and often angrily debated among economists….

      There’s an ethical dimension to these calculations. Wealthy countries whose prosperity has been built on fossil fuels have an obligation to help poorer countries. The climate winners can’t abandon the losers. Likewise, we owe future generations more than just financial considerations. What’s the value of a world free from the threat of catastrophic climate events—one with healthy and thriving natural ecosystems?

      …Enter the Green New Deal (GND). It's the sweeping proposal issued earlier this year by Representative Alexandria Ocasio-Cortez and other US progressives to address everything from climate change to inequality. It cites the dangers of temperature increases beyond the UN goal of 1.5 °C and makes a long list of recommendations. Energy experts immediately began to bicker over its details: Is achieving 100% renewables in the next 12 years really feasible? (Probably not.) Should it include nuclear power, which many climate activists now argue is essential for reducing emissions?

      In reality, the GND has little to say about actual policies and there’s barely a hint of how it will attack its grand challenges, from providing a secure retirement for all to fostering family farms to ensuring access to nature. But that’s not the point. The GND is a cry of outrage against what it calls “the twin crises of climate change and worsening income inequality.” It’s a political attempt to make climate change part of the wider discussion about social justice. And, at least from the perspective of climate policy, it’s right in arguing that we can’t tackle global warming without considering broader social and economic issues….

      Nevertheless, the investments will take decades to pay for themselves. Renewables and new clean technologies may lead to a boom in manufacturing and a robust economy, but the Green New Deal is wrong to paper over the financial sacrifices we’ll need to make in the near term.

      That is why climate remedies are such a hard sell. We need a global policy—but, as we’re always reminded, all politics is local. Adding 20% to the cost of that San Francisco-Chicago trip might not seem like much, but try to convince a truck driver in a poor county in Florida that raising the price of fuel is wise economic policy. A much smaller increase sparked the gilets jaunes riots in France last winter. That is the dilemma, both political and ethical, that we all face with climate change.

The “Mind-boggling” Problem of Keeping New York Dry

[These excerpts are from an article by Courtney Humphries in the May/June 2019 issue of MIT Technology Review.]

      …temperatures in the city could be hotter on average by 4 to 6 °F (about 2 to 3 °C), with several heat waves per summer. Sea levels could rise 11 to 21 inches (28 to 53 centimeters) by the 2050s and up to six feet (1.8 meters) by 2100—doubling the size and population of the 100-year-flood zone, the area that has a 1% annual chance of flooding. The borough with the most land affected by all this will be Queens….
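The “100-year-flood zone” defined above, an area with a 1% annual chance of flooding, is easy to misread as a once-a-century event. A short sketch makes the cumulative risk concrete; the independence of flood years is a simplifying assumption of mine:

```python
# Cumulative chance of at least one "100-year" flood over a multi-decade
# horizon, assuming each year is an independent 1% draw.
p_annual = 0.01   # annual flood probability in the 100-year-flood zone
years = 30        # e.g., the length of a typical mortgage

p_at_least_one = 1 - (1 - p_annual) ** years
print(f"chance of at least one flood in {years} years: {p_at_least_one:.1%}")
# → chance of at least one flood in 30 years: 26.0%
```

So a property owner in the zone faces roughly a one-in-four chance of flooding over 30 years, even before the projected doubling of the zone's size.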

      Climate resilience is expensive and onerous. Seven projects in the region got federal funding in a post-Sandy design competition called Rebuild by Design, but several years later, not one has broken ground. Last fall, the city abruptly changed plans for the first phase of the “Big U,” a project that would create and connect 10 miles of parks, barriers, and flood walls around the low-lying area from East 57th Street down to the Battery and up West 42nd Street. The city eschewed an innovative approach that would allow a newly redesigned East River Park to partially flood during storms, deciding to spend more money to raise the park 8 to 10 feet, adding fill that may cover natural habitats….

Democracy Endangered

[This excerpt is from an article by Susan Silbey in the May/June 2019 issue of MIT News.]

      Today, more people than ever before in human history enjoy the rights and privileges of citizenship—the ability to participate in self-governance.

      Yet the internet, search platforms, and social media—the very same technologies that join people across the globe in more widespread connections—also represent the most significant and disruptive threats to our democracies since the founding of the modern democratic state.

      We live in a world where time and space, people and things, are organized in a radically new way. Vast temporal, spatial, and cultural distances are bridged; social organizations based on similarity and proximity have been transformed into connections among very different and very distant people. The quantity and pace of social interactions have increased geometrically. The everyday lives of most people in most social classes all over the globe are filled with more encounters, of shorter duration, and over greater distances than ever before. This is life on the internet.

      Amid this escalating, energetic circulation of information and comment, threats to democratic citizenship take root.

      We do not simply use the internet—to shop, find information, or keep track of appointments and friends. With every click, search, and view, we provide the information for algorithms to analyze and platform companies to sell. We supply the inputs—the raw materials of what is now a world of constant digitized surveillance. We, the users, are ourselves used.

      New and largely uncontested forms of power have been concentrated in a handful of firms that extract personal information in unexpected and often hidden ways. Techniques for monitoring, personalization, and customization are contractually permitted and regularly improved. Layered on top of that are continuous experimentation, commodification, and control.

      The internet now freely distributes more information than has ever circulated in human history. At the same time, as the circulation of knowledge advances, the threats to expertise and truth escalate, challenging democratic norms and the centuries-long evolution of democratic citizenship.

      How did this happen?

      First, the inventors of the internet overlooked the essential role of context in shaping the uses and consequences of technology. A tool made for physicists to exchange data on a platform built for the military was given to the world. Designed for a cohesive and highly disciplined community, it was distributed to a world ungoverned by shared norms of participation.

      A second mistake was generalizing from personal experience: “This is such a good technology for us, it will be good for everyone.” By seeing themselves as everyman, the inventors ignored human variation. They forgot to user-test with diverse populations.

      Finally, the inventors demonstrated canonical groupthink: talking and listening to a narrow set of like-minded people, excluding unfamiliar, perhaps even critical, perspectives.

      So what do we do now…?

Rock Lobster

[This excerpt is from an article by Jennifer Chu in the May/June 2019 issue of MIT News.]

      Flip a lobster on its back, and you'll see that the underside of its tail is lined with a translucent membrane that appears more vulnerable than the armor-like carapace shielding the rest of the crustacean.

      But MIT engineers have found that this soft membrane is surprisingly tough, with a plywood-like microscopic structure that prevents scrapes and cuts as the animal scuttles across the rocky seafloor.

      The membrane is a natural hydrogel, composed of 90% water, so it's flexible enough to allow the lobster to whip its tail back and forth. (Chitin, a fibrous material found in many shells and exoskeletons, makes up most of the rest.) But the membrane stiffens dramatically when stretched, making it difficult for a predator to chew through the tail or pull it apart.

      The team discovered that the lobster membrane is the toughest of all natural hydrogels, including collagen, animal skins, and rubber. It's also about as strong as industrial rubber composites, such as those used to make car tires, garden hoses, and conveyor belts.

      The lobster’s tough yet stretchy membrane could inspire more flexible body armor, particularly for areas such as elbows and knees. Materials designed to mimic lobster membranes could also be useful in soft robotics and tissue engineering. And the results shed new light on the survival of one of nature's most resilient creatures….

Bacteria Help Lung Tumors Grow

[These excerpts are from an article by Anne Trafton in the May/June 2019 issue of MIT News.]

      MIT biologists have discovered a new mechanism that lung tumors exploit to promote their own survival: they alter the lung’s bacterial populations, provoking the immune system to create an inflammatory environment that helps tumor cells thrive.

      In mice genetically programmed to develop lung cancer, those raised in a bacteria-free environment developed much smaller tumors than mice raised under normal conditions. And treating the latter mice with antibiotics resulted in tumors that were about 50% smaller. Giving them drugs that blocked the immune response also significantly inhibited tumor development.

      “This research directly links bacterial burden in the lung to lung cancer development and opens up multiple potential avenues toward lung cancer interception and treatment….”

      Lung cancer, the leading cause of cancer-related deaths, kills more than 1 million people worldwide per year. Up to 70% of patients also suffer complications from bacterial infections of the lung.

      Mice (and humans) typically have many harmless bacteria in their lungs. The mice engineered to develop lung tumors harbored fewer bacterial species, but their lungs’ overall bacterial population grew significantly. That caused immune cells called gamma delta T cells to proliferate and begin secreting cytokines, inflammatory molecules that promote tumor growth.

      The researchers’ analysis of human lung tumors revealed unusually high numbers of gamma delta T cells and altered bacterial signals similar to those seen in the mice, so they believe drug treatments like those that inhibited mouse tumor development are worth testing in humans.

Scientists Track Florida’s Vanishing Barrier Reef

[These excerpts are from an article by Paul Voosen in the 26 April 2019 issue of Science.]

      …Around the world, warming oceans are killing coral. In Florida…heat-induced bleaching is just the latest in a millennia-long series of insults, which have brought the reef’s growth to a standstill and left it vulnerable to erosion and rising seas. As a result, the barrier reef—the third longest in the world—is not simply dying. It appears to be vanishing.

      At stake is a 320-kilometer-long bulwark that protects the Keys from waves while providing habitat for fish and a lure for tourists. Recent measurements…have confirmed that the coral is eroding, in some places by several millimeters per year….

      The recent toll of warming, disease, and pollution on Florida’s reef has been even heavier than on some other iconic reefs, such as Australia’s Great Barrier Reef….Some species, such as the fast-growing elkhorn coral with its distinctive wide branches, have nearly vanished. A host of others have died as a disease called stony coral tissue loss has marched down the Keys….

      …Florida’s corals haven’t been healthy for millennia. Core samples from the reef record that it stopped growing 3000 years ago. Florida’s reef lies near the northern temperature limit for corals, and…a cooling trend around that time likely made the waters more prone to cold snaps that would periodically kill off corals, leaving the reefs at a delicate tipping point.

      Now, although cold snaps still occur, global warming is bringing hotter summers, causing bleaching and mass die-offs. It is also raising sea level. Healthy corals readily cope with sea level rise, growing with the rising ocean. But Florida’s ailing reefs probably can’t keep pace. Anecdotes abound of patches of reef eroded flat to sand…

      A patchwork of restoration efforts, largely from nonprofit groups, continues on the reef, often aiming to replace dead coral with heat-resistant transplants. Backers should temper their expectations….

      …biologists need to introduce not just corals that can resist heat or disease, but also species that can build structure, like staghorn or elkhorn. It’s also time…to think about saving the reef structure and the services it provides, even if its coral dies….

Health for All

[This excerpt is from an editorial by Seth Berkley and Henrietta Fore in the 26 April 2019 issue of Science.]

      Imagine a world where affordable, quality health care is available to every person, and where infectious disease and infant and maternal mortality are as rare in the poorest parts of the world as they are in wealthier countries. The world has already come a long way toward meeting this goal. But to finish the job, we need to change our thinking.

      To be sure, the incidence of child mortality and cases of deadly infectious diseases have dropped dramatically around the world. For example, polio, which once paralyzed a thousand children every day, has been eliminated from all but three countries, with just 33 cases last year. Measles cases, despite a recent, alarming global surge, are now a fraction of what they were four decades ago. All this was made possible because global health organizations and the governments of lower-income countries have worked together to provide the most vulnerable communities access to essential health care interventions, such as clean water, sanitation, and vaccinations.

      And yet, 1 in 10 children are still missing out. Most are the hardest to reach, whether they live in remote rural villages or conflict zones, among the swelling numbers of displaced people, or in rapidly growing urban slums where they might be undetected by formal health systems. Meeting their needs will require focusing more on health interventions that both have the greatest reach and serve as conduits to other health services for vulnerable communities. Childhood vaccination does precisely this. Vaccination reaches more children—more than 85%, who are inoculated against a range of infectious diseases—than any other health intervention globally.

      When a child gets access to vaccines, it benefits that child’s community. With vaccines come supply chains, logistics, cold storage, trained health care staff, data monitoring, disease surveillance, and health care records. Parents and siblings often come along with the child who is being vaccinated, giving them potential access to a host of other health interventions—from neonatal and maternal health care to malaria prevention measures, and sexual and reproductive health and education….

Lessons from Benjamin Franklin

[These excerpts are from an article by Maria Ferguson in the April 2019 issue of Phi Delta Kappan.]

      …Franklin’s many successes were the direct result of a childhood that included three key factors: 1) an ambitious and loving family, 2) access to educational opportunities, and 3) free time spent building his own curiosity, resilience, and independence. In policy terms, Franklin's childhood centered around three familiar issues: early childhood education, access to higher education, and social and emotional learning, three of the hottest topics in education today. It fascinated me to think that even in colonial America, these modern ideas were completely relevant. And now, 250 years later, those same factors can still make or break a childhood.

      Policy makers and education leaders know well that providing all children with the kinds of opportunities and support Franklin had is a lofty, if not unreachable, goal. But federal, state, and local leaders can still make wise investments in the areas that affect children most….

      …Families are the first, and in many ways the most consequential, influencers on a child's learning and development.

      …Despite the other dramas happening in Congress, there is some common ground between Republicans and Democrats on higher education — so the current mood is best described as “cautiously optimistic.” Never mind that the Trump administration has its own ideas about how to manage the federal role in higher education (a role they tend to define as quite limited). It is fair to say that Secretary Betsy DeVos’ actions regarding student debt forgiveness and protections against predatory providers would not pass the Ben Franklin smell test. Same for her relentless push for school choice at any cost.

      …Although keeping students physically safe is far and away the top priority for school leaders, there is also a growing consensus that schools need to consider the social and emotional needs of students as well. The last two decades saw an intense and single-minded obsession with academics and test scores that would no doubt have alienated a student as creative and independent as young Ben Franklin. Today, education and child development experts are focused on a broader set of outcomes for children….

      It is amazing that it takes an expert-packed commission two years to tell us what we should all know already (and what Ben Franklin clearly knew): A child's education depends on many factors. In addition to high-quality learning opportunities, children also need time, love, support, and some semblance of personal freedom if they are to grow and develop into capable, productive citizens. The idea that rigorous academics alone can somehow compensate for the many things a child needs to develop has never made sense to me. That argument was at the heart of the effort to evaluate teacher performance using only test scores (a policy that has now been rejected by most state and local leaders). How can we possibly hold one person accountable for the myriad influences and circumstances in a child’s universe?...

To Improve the Curriculum, Engage the Whole System

[These excerpts are from an article by Phyllis L. Fagell in the April 2019 issue of Phi Delta Kappan.]

      …The policy obsessions of the last two decades haven’t had the desired effect. Perhaps that’s because they haven't accounted for the everyday life of teaching and learning.

      Lo and behold, maybe those of us who were so critical of No Child Left Behind and Race to the Top were right all along! If you want to improve K-12 education, you have to think about the topics kids study, the books they read, the discussions they have, the projects they complete, the experiments they conduct ... you know, the stuff that actually goes on in the classroom. If the policy wonks have finally realized that curriculum matters, then that’s a good thing. It took them long enough, but at least they’re beginning to think about the instructional core.

      Unfortunately, though, many of the people now turning their attention to the curriculum are the very same people who championed Adequate Yearly Progress, value-added teacher ratings, and other misguided accountability schemes, and it’s not clear that they’ve learned the most important lesson of the last 20 years: People who don’t understand the complexities of life in schools have no business trying to impose simplistic, one-dimensional reforms on school practitioners.

      Look, folks, a curriculum isn’t something you can just drop into a school from above, like adding fluoride to the water supply. It’s an integral part of the system, bound up with everything from teacher recruitment, hiring, and mentoring to governance, budgeting, and parent engagement. Ideally, designing and implementing a new curriculum is an opportunity to foster cohesion, trust, and professional learning, as well as to assess and respond to local interests and needs. So when I hear people argue that a simple way to improve our schools is for administrators to purchase high-quality textbooks or an “evidence-based” off-the-shelf curriculum, I’m tempted to go full-on Charlton Heston from Planet of the Apes, shouting “Keep your filthy paws off our classroom materials!”

      …Simply put, it’s always helpful to begin with an assessment of the curriculum you have, before deciding what kind of curriculum you need….

      Over several decades, our public schools were designed, explicitly, to rank and sort students into differing curricular tracks. Most schools continue to do so in one form or another, whether by placing kids into regular and honors classes, limiting enrollments in Advanced Placement courses, counseling students into (or out of) career and technical education programs, and so on, often under the guise of meeting kids’ needs for acceleration or academic support. Any effort to redesign the curriculum will have to wrestle with these long-standing practices, particularly the tracking of students of color and less affluent students into low-tier courses of study. And any changes will likely set off those parents whose kids have benefited from tracking until now….

      Most parents don’t think much about the curriculum, but they do tend to see their child's homework and grades. If the homework looks different from what they remember from their own school days, or if their kids’ grades seem to be based on new criteria, they may be tempted to complain about the school’s performance, vent their frustrations at public hearings, and lobby their board members to make the schools go back to the way things were. In short, when redesigning the curriculum, system leaders cannot afford to leave parents and other community members out of the loop….

      Given that they will be responsible for putting any new curriculum into practice, teachers must have clarity about how they will be involved in shaping that curriculum and how their professional roles and responsibilities are going to change. Teacher leadership is critical throughout the process of curriculum planning, but even if district leaders choose to reserve key decisions for themselves, they should not try to impose a new curriculum on teachers without seeking their input. Ultimately, teachers are the ones who will have to design lesson plans, select materials, deliver the curriculum, assess student progress, and so on — and so they need to be engaged in the planning, they need to trust the process, and they need to be personally invested in the new approach….

      The last 20 years of school reform have given us good reason to be suspicious of policies and initiatives being pushed on practitioners by those who have little to no experience in schools. Yet, it’s hard to disagree with the idea that curriculum reform is sorely needed in every part of K-12 education. Here’s hoping that it is shaped and led by people who actually know how school systems work….

How Childhood Has Changed for Tweens

[These excerpts are from an article by Phyllis L. Fagell in the April 2019 issue of Phi Delta Kappan.]

      …Every student I counsel or teach today was born after 9/11, and they’ve grown up amidst school shootings, the 24-7 news cycle, and rapid improvements in technology. Code red drills are both their normal and a source of stress. Their parents are more anxious, too, which may be contributing to a more protective parenting style. And as kids pick up on their parents’ unease, there's an added element of emotional contagion….

      And yet, whatever the changes happening around them, there are important ways in which tweens haven’t changed at all. The developmental phase is much the same for 21st-century middle schoolers as it was for those of us growing up in the 20th century. Just as I had to do in the 1980s, today’s young adolescents must figure out their identity and place in the world. As their prefrontal cortex develops, they’re malleable, impulsive, and impressionable. They’re capable of reasoning intellectually, interpreting emotions, and taking a moral stand, but they lack perspective or life experience. Sorting out social drama can consume large chunks of their time, and they tend to experience emotions in polarities. Any mishap can register as a catastrophe, and they have little understanding that negative feelings are temporary. They’re trying to figure out what coping skills work for them and where their strengths and interests align. They’re hyperaware of an invisible audience judging their every move and picking up on their shortcomings and limits. They can organize a rally for an important cause but forget to take a two-week-old banana out of their backpack. The same child who will jump from a cliff into a lake might be too self-conscious to raise his hand in class. It's a time of insecurity, hormonal changes, and contradictions. The only other time a child experiences so much development is between birth and age two.

      So how do these young people handle such transitions in a world that is changing at a similarly dizzying pace? We can’t predict the future for today’s tweens, but we do know the unique characteristics of their present-day life….

      …According to the Pew Research Center report…, 45% of teens say they’re online “almost constantly.” That’s double the number in the previous Pew report.

      On the social side, kids who are developmentally wired for adventure somehow have to preserve their reputation, stay safe, be kind, and make solid judgment calls online without the benefit of face-to-face social cues. Plus, there's the potential for permanence in everything they do. When a few boys in my 7th-grade class smeared pats of butter on all the cars in the school parking lot in 1986, no one beamed it out to the world. We were just as prone to mischief and mistakes a generation ago, but our transgressions could fade into the ether.

      The butter-smearing incident seems rather quaint today. Now, principals have to call police if a 12-year-old forwards a classmate’s nude snapshot across state lines and unknowingly violates child pornography laws. Technology has brought new avenues for harassment, too. One study found that more than 25% of middle schoolers reported being a victim of electronic dating aggression, and more than 10% said they were a perpetrator….Kids must consider the long-term impact of more mundane behaviors, too….

      …Researchers at the University of Kentucky recently found that middle school girls and boys not only tend to feel dissatisfied with their bodies, but their negative feelings increase as they spend more time on social media….

      …Simply having a phone on hand in class can affect students’ grades. A study in Educational Psychology found a causal link between students’ use of cell phones in class and lower exam scores….Kids are losing sleep because of technology, too. For some, it feels rude to drop out of a group chat, even if it’s 1 a.m. and they’re exhausted. Others have difficulty disengaging from gaming or other online interests.

      Technology is even changing the age-old phenomenon of cheating. A 2009 Common Sense Media survey found that 35% of teens with cell phones admitted to cheating with them at least once. Half of those polled admitted to cheating using the internet, while 38% had plagiarized work from websites. And according to a survey by McAfee, a computer security firm, 29% of students admit to using tech devices to cheat in school….

      The internet also is exposing kids to darker and more mature information earlier, whether it’s pornography or graphic images of a school shooting. According to Break the Cycle, the average age of first exposure to pornography is now around age 11….

      On the plus side, there’s less incentive to memorize facts when most information can be Googled, which means kids are more likely to focus on making connections across ideas and thinking outside the box. With half of all U.S. jobs at risk of being automated in the coming years, kids will need to be able to think critically, innovate, and solve problems to perform the new jobs that emerge….

      The good news is that parents have become more forthcoming about their children’s mental health struggles, whether they’re disclosing a diagnosis or a therapist’s treatment plan. They’re also more honest about struggles at home, such as a divorce or job loss. At the same time, however, kids have become less resilient and feel less in control of their fate, at least in part because parents tend to afford them less freedom and autonomy than in the past.

      …growing numbers of educators are coming to understand that when we deprive kids of play, we put them at risk for depression and anxiety.

      Overall, I’ve observed more parents and schools expressing interest in building kids’ character, prioritizing authentic inclusion, and encouraging moral action. In the age of the selfie, more adults are recognizing that we need to de-emphasize “likes” on social media and spend more time underscoring the importance of empathy. Middle schoolers have always been tuned into justice and fairness, but today’s tweens are perhaps even more likely to take on an activist role, whether they lobby for gun control, the environment, or immigration rights….

      In some ways, kids are growing up faster today. Parents can no longer shield their children from bad news, particularly if they’re carrying a computer in their pocket. In other ways, they’re growing up slower. As they spend more time communicating online, they’re spending less time together experimenting, taking risks, and dating. That said, some aspects of being this age haven’t changed at all. A principal once told me that he loves this age group because the same child can present as 13 going on three, or 13 going on 30. No matter how much their landscape changes, that’s as true today as it was a generation ago.

Childhood, Then and Now

[These excerpts are from an editorial by Rafael Heller in the April 2019 issue of Phi Delta Kappan.]

      April 1919 — exactly a century ago, on the nose — marked the end of President Woodrow Wilson’s “Children’s Year,” a national campaign to improve infant and maternal health, provide more social services to needy families, increase school enrollments, and further reduce Americans’ reliance on child labor (which had been declining steadily for two decades).

      At the time, the typical American childhood included a lot more sickness (and 1 in 10 babies died in infancy), a lot less schooling, and a lot more manual labor than it does today. In 1919, roughly a million 10- to 15-year-olds, or roughly 8% of the age group, spent their days working on farms, in factories, or on the street. Close to 80% of children ages 5 to 17 were enrolled in school, but their numbers dwindled in the later grades — only about a quarter of 14- to 17-year-olds attended high school, and only about 1 in 10 graduated….

      As federal initiatives go, the Children’s Year looks to have been fairly productive. A national commission met in Washington, D.C., followed by a series of regional meetings meant to rally public and political support for efforts to help the most vulnerable children. In turn, most states proceeded to create child welfare agencies, while, across the country, millions of women mobilized to collect data on children's health and advocate for better education and recreational activities for kids.

      No doubt, the lives of American children have changed dramatically since then. However, it’s tricky to compare the experience of childhood today with that of previous generations. For example, historians argue that it wasn’t until the early 20th century, as a large middle class began to coalesce, that Americans began to conceive of childhood as the sort of special, happy stage of life that we now take to be the norm. Nor did Americans think of the teen years as a distinct phase of childhood at all until high school enrollments took off in the 1920s and ‘30s….

      But whatever our point of comparison, one thing is certain about childhood today: American children of all backgrounds now spend much more time in school than at any time in the past — not just dramatically more time than in 1919 but also, given rising trends in high school attendance and graduation, significantly more time than just a few years ago. As the sociologist Annette Lareau points out in this issue, educators have always played an exceptionally important role in children’s lives. Today more than ever.

New Climate Models Forecast a Warming Surge

[These excerpts are from an article by Paul Voosen in the 19 April 2019 issue of Science.]

      For nearly 40 years, the massive computer models used to simulate global climate have delivered a fairly consistent picture of how fast human carbon emissions might warm the world. But a host of global climate models developed for the United Nations’s next major assessment of global warming, due in 2021, are now showing a puzzling but undeniable trend. They are running hotter than they have in the past. Soon the world could be, too.

      In earlier models, doubling atmospheric carbon dioxide (CO2) over preindustrial levels led models to predict somewhere between 2°C and 4.5°C of warming once the planet came into balance. But in at least eight of the next-generation models, produced by leading centers in the United States, the United Kingdom, Canada, and France, that “equilibrium climate sensitivity” has come in at 5°C or warmer. Modelers are struggling to identify which of their refinements explain this heightened sensitivity before the next assessment from the United Nations’s Intergovernmental Panel on Climate Change (IPCC). But the trend “is definitely real. There’s no question,” says Reto Knutti, a climate scientist at ETH Zurich in Switzerland. “Is that realistic or not? At this point, we don’t know.”

      That’s an urgent question: If the results are to be believed, the world has even less time than was thought to limit warming to 1.5°C or 2°C above preindustrial levels—a threshold many see as too dangerous to cross. With atmospheric CO2 already at 408 parts per million (ppm) and rising, up from pre-industrial levels of 280 ppm, even previous scenarios suggested the world could warm 2°C within the next few decades. The new simulations are only now being discussed at meetings, and not all the numbers are in….

      Many scientists are skeptical, pointing out that past climate changes recorded in ice cores and elsewhere don’t support the high climate sensitivity—nor does the pace of modern warming….

Cold Comfort

[This excerpt is from an article by Steve Mirsky in the April 2019 issue of Scientific American.]

      …When the forecast warned us a couple of days earlier that Arctic air was looming, the president issued a sincere and helpful tweet, which ended with: “What the hell is going on with Global Warning [sic]? Please come back fast, we need you!” And being the most powerful man on Earth, he was successful in his polite imploration. On February 4 the Chicago temperature reached 51 degrees. And the next day the Big Apple basked in a sunny 65.

      The Arctic is warming at twice the global average rate. This heat can help disrupt the polar vortex, a steady wind pattern that usually stays focused on circling the North Pole. A wobbly jet stream then runs into a brick wall of that Arctic air, which is still pretty frosty by human standards, and both wind up hundreds of miles farther south than they usually belong. And for a few days we in the Deep South—by which I mean Chicago or New York compared with the Arctic—freeze our butts off. But less than a week after this most recent vortex disruption, thanks to some warm air coming up from the real South, I was walking outside without a coat. On a date when the average high temperature is about 40.

      Like so much else we are currently living through, this kind of thermometer ride is not normal. Or it didn’t used to be, anyway.

      Of course, scientists have been warming—sorry, warning—that warming can have these very effects. Climate change deniers may sneer, “So when it’s warmer than usual, that’s because of global warming. And when it’s colder, that's also because of global warming?” Well, yes. And anybody who just can’t accept these kinds of seemingly paradoxical situations needs to reflect on the expression “freezer burn.”…

Failing Successfully

[These excerpts are from an article by Rachel Nuwer in the April 2019 issue of Scientific American.]

      People often say that “failure is the mother of success.” This cliche might have some truth to it, but it does not tell us how to actually turn a loss into a win….”we know we shouldn’t give up when we fail—but in reality, we do.”

      …One study reported, for example, that the sooner and more often students fail at a task, such as building a robot, the sooner they can move forward and improve. Another confirmed that feedback on failures is most constructive when the giver comes across as caring, and the receiver is prepared to weather negative emotions.

      …study focused on overcoming one fundamental, everyday form of failure: not completing a task. They asked 131 undergraduates to write an essay about their school experiences. Half of the students received instructions for structuring their writing, and half were left to their own devices; all, however, were stopped prior to finishing. Afterward the researchers found that those in the structured group were more motivated to complete their essays, compared with those who lacked guidance—even if the latter were closer to being done. Knowing how to finish, in other words, was more important than being close to finishing.

      The researchers dubbed this finding “the Hemingway effect,” for the author’s self-reported tendency to stop writing only when he knew what would happen next in the story—so as to avoid writer’s block when he returned to the page….learning how to fail temporarily can help people avoid becoming permanent failures at many tasks, such as completing a dissertation, learning a language or inventing a new technology.

      Demystifying failure and teaching students not to fear it make goals more attainable, says Stephanie Couch, executive director of the Lemelson-MIT Program….

Low-Tech Climate Fix

[These excerpts are from an editorial by Hans de Groot in the April 2019 issue of Scientific American.]

      Climate change disproportionately affects the world’s most vulnerable people, particularly poor rural communities that depend on the land for their livelihoods and coastal populations throughout the tropics. We have already seen the stark asymmetry of suffering that results from extreme weather events, such as hurricanes, floods, droughts, wildfires, and more.

      For remedies, advocates and politicians have tended to look toward cuts in fossil-fuel use or technologies to capture carbon before it enters the atmosphere—both of which are crucial. But this focus has overshadowed the most powerful and cost-efficient carbon capture technology in the world. Recent research confirms that forests are absolutely essential in mitigating climate change, thanks to their ability to absorb and sequester carbon. In fact, natural climate solutions such as conservation and restoration of forests, along with improvements in land management, can help us achieve 37 percent of our climate target of limiting warming to a maximum of two degrees Celsius above preindustrial levels, even though they currently receive only 2.5 percent of public climate financing.

      Forests’ power to store carbon dioxide is staggering: one tree can store an average of about 48 pounds in one year. Intact forests could take in the CO2 emissions of some entire countries.

      For this reason, policy makers and business leaders must create and enforce policies to prevent deforestation; foster reforestation of degraded land; and promote the sustainable management of standing forests in the fight against climate change. Protecting the world’s forests ensures they can keep performing essential functions such as producing oxygen, filtering water and supporting biodiversity. Not only does all the world's population depend on forests to provide clean air, clean water, oxygen and medicines, but 1.6 billion people rely on them directly for their livelihoods.

      Unfortunately, a huge amount of forest continues to be converted into agricultural lands to produce a handful of resource-intensive commodities—despite zero-deforestation commitments from companies and governments. So now is the time to increase forest protection and restoration. This action will also address a number of other pressing global issues. For example, increasing tree cover can help tackle the problem of food security in many areas: trees can enhance farm productivity and give farmers another source of revenue through the sale of fruits, nuts or timber—all the while storing carbon dioxide—in a practice known as agroforestry. It is estimated that increased investment in this area could help sequester up to 9.28 gigatons of carbon dioxide while saving a net $709.8 billion by 2050. In productive landscapes where it would be difficult to increase tree cover dramatically, agroforestry serves as an attractive compromise….

      Landscape restoration promises an unparalleled return on investment, in terms of ecosystem services and carbon sequestered and stored. It could potentially sequester up to 1.7 gigatons of carbon dioxide every year….

The WHO Takes a Reckless Step

[This excerpt is from an editorial by the editors in the April 2019 issue of Scientific American.]

      …To include TCM in the ICD is an egregious lapse in evidence-based thinking and practice. Data supporting the effectiveness of most traditional remedies are scant, at best. An extensive assessment was done in 2009 by researchers at the University of Maryland: they looked at 70 review papers evaluating TCM, including acupuncture. None of the studies proved conclusive because the data were either too paltry or did not meet testing standards.

      To be sure, many widely used and experimentally validated pharmaceuticals, including aspirin, decongestants and some anti-cancer chemotherapies, were originally derived from plants or other natural sources. Those drugs have all gone through extensive clinical testing of safety and efficacy, however. Giving credence to treatments that have not met those standards will advance their use but will also diminish the WHO’s credibility.

      China has been pushing for wider global acceptance of traditional medicines, which brings in some $50 billion in annual revenue for the nation’s economy. And in 2016 Margaret Chan, then the WHO director, praised China’s plans to do so. But while it’s a good idea to catalogue TCM and make health workers aware of treatments used by millions, their inclusion in the ICD recklessly equates them with medicines that have undergone clinical trials.

      In China, traditional medicines are unregulated, and they frequently make people sick rather than curing them. One particularly troublesome ingredient, aristolochic acid, is commonly used in traditional remedies and has been linked to fatal kidney damage and cancers of the urinary tract.

      A 2018 study in the British Journal of Clinical Pharmacology tested 487 Chinese products taken by sick patients and discovered 1,234 hidden ingredients, including approved and banned Western drugs, drug analogues and animal thyroid tissue. And in 2012 a team led by Megan Coghlan, then at Murdoch University in Australia, identified the DNA sequences in 15 samples of traditional medicines in the form of powders, tablets, capsules, bile flakes and herbal teas. The samples also contained plants that produce toxic chemicals and animal DNA from vulnerable or endangered species (the Asiatic black bear and saiga antelope, for example) and other creatures protected by international laws.

      Thus, the proliferation of traditional medicines would have significant environmental impacts on top of the negative health effects. It would contribute to the destruction of ecosystems and increase the illegal trade of wildlife. China announced last October that it was legalizing the controlled trade of rhinoceros horn and tiger bone. (The move was postponed in November, following a global outcry.) Both are believed by practitioners to have the power to cure a range of ailments, from fever to impotence—although no study has found any beneficial outcome of ingesting either. Allowing even the controlled harvest of otherwise endangered creatures will boost illegal poaching, critics say.

      Until traditional medicines undergo rigorous testing for purity, efficacy, dosage and safety, the WHO should remove them from its list. These remedies should be given the same scrutiny as other treatments before being included in standard care practices.

Beliefs in Aliens, Atlantis Are on the Rise

[These excerpts are from an article by Lizzie Wade in the 12 April 2019 issue of Science.]

      …Common beliefs include that aliens helped build the Egyptian and Mayan pyramids, that refugees escaping Atlantis brought technology to cultures around the world, and that European immigrants were the original inhabitants of North America.

      …41% of Americans believed that aliens visited Earth in the ancient past, and 57% believed that Atlantis or other advanced ancient civilizations existed. Those numbers are up from 2016, when the survey found that 27% of Americans believed in ancient aliens and 40% believed in Atlantis.

      …He can’t say exactly what is driving the rise in such ideas, but cable TV shows like Ancient Aliens (which has run for 13 seasons) propagate them, as does the internet.

      …Almost all such claims assume that ancient non-European societies weren’t capable of inventing sophisticated architecture, calendars, math, and sciences like astronomy on their own. “It’s racist at its core,” says Kenneth Feder, an archaeologist at Central Connecticut State University in New Britain….

New Species of Ancient Human Unearthed

[These excerpts are from an article by Lizzie Wade in the 12 April 2019 issue of Science.]

      A strange new species may have joined the human family. Human fossils found in a cave on Luzon, the largest island in the Philippines, include tiny molars suggesting their owners were small; curved finger and toe bones hint that they climbed trees. Homo luzonensis, as the species has been christened, lived some 50,000 to 80,000 years ago, when the world hosted multiple archaic humans, including Neanderthals and Denisovans, and when H. sapiens may have been making its first forays into Southeast Asia….

      The discovery echoes that of another unusual ancient hominin—the diminutive H. floresiensis, or “hobbit,” found on the island of Flores in Indonesia. “One is interesting. Two is a pattern,” says Jeremy DeSilva, an expert on Homo foot bones at Dartmouth College. He and others suspect the islands of Southeast Asia may have been a cradle of diversity for ancient humans, and that H. luzonensis, like H. floresiensis, may have evolved small body size in isolation on an island….

      The teeth show a unique mosaic of traits found separately in other Homo species. The premolars are about the size of ours, but instead of a single root they have two or three—a primitive feature. The molars are much more modern, with single roots, but “incredibly small” at only 10 millimeters across and 8 millimeters long, says Florent Detroit, a paleoanthropologist at the Museum of Man in Paris who worked with Mijares. That’s even smaller than those of H. floresiensis. Tooth size tends to correlate with body size, so it’s possible that H. luzonensis itself was tiny, Detroit says. But only a complete arm or leg bone will say for sure.

      The long, curved fingers and toes resemble those of australopithecines like Lucy, an early human ancestor thought to have both walked upright and swung through the trees….

      Not everyone is ready to embrace these teeth and skeletal fragments as a separate species, rather than a locally adapted population of, say, H. erectus, an older hominin that lived in Asia for millennia….

      Regardless of whether H. luzonensis was its own species, it may have evolved in isolation for hundreds of thousands of years. Butchered rhino bones on Luzon date to 700,000 years ago, though researchers don’t yet know which human species was responsible….

Reverse Global Vaccine Dissent

[This editorial by Heidi J. Larson and William S. Schulz is in the 12 April 2019 issue of Science.]

      This year, the World Health Organization named vaccine hesitancy as one of the top 10 global health threats, alongside threats as grave as climate change, antimicrobial resistance, Ebola virus, and the next influenza pandemic. What happened? How did vaccine reluctance and refusal become such a major risk?

      The concerns driving antivaccine sentiment today are diverse. For example, from 2003 to 2004, a vaccine boycott in Nigeria's Kano State sparked the retransmission of polio across multiple countries as far as Indonesia. Rumors of vaccine contamination with antifertility agents contributed to distrust and reinforced the boycott, costing the Global Polio Eradication Initiative over U.S. $500 million to regain the progress that was lost. In Japan, vaccination against human papilloma virus plummeted to almost zero after young women complained of movement disorders and chronic pain, causing the government to suspend proactive recommendation of the vaccine nearly 6 years ago. Similar episodes occurred in Denmark, Ireland, and Colombia as YouTube videos of the girls' symptoms spread anxiety, despite evidence of the vaccine's safety.

      The global surge in measles outbreaks has been exacerbated by vaccine refusers. In 2015, the measles strain that sparked the Disneyland outbreak came from visitors from the Philippines, infecting people who had refused vaccination. And in Indonesia, Muslim leaders issued a fatwa against a measles vaccine containing “haram” porcine compounds, while naturopathic “cupping” methods were promoted on Facebook as an alternative to vaccination. In 2018, a mix of political, religious, and alternative health antivaccine messages circulated on WhatsApp and Facebook in Southern India, disrupting a local measles-rubella vaccination campaign.

      The phenomenon of vaccine dissent is not new. The pages of 18th-century London antivaccination pamphlets bristle with many of today’s memes, but these ideas now spread over unprecedented distances with remarkable speed, clustering in online neighborhoods of shared beliefs. This clustering can tear the protective fabric—the “herd (community) immunity”—that the majority of vaccine acceptors have woven. As the portion of the community that is vaccinated decreases, there is less protection for others who may be too young, unable, or choose not to be vaccinated. For some diseases, it only takes a small minority to disrupt the protective cover.

      It is just over 20 years since British physician Andrew Wakefield sowed seeds of doubt about the safety of the MMR (measles, mumps, rubella) vaccine, suggesting a link between the vaccine and autism. Suspicions around the vaccine traveled globally, instilling anxiety among the most and least educated alike. The discredited Wakefield alone, though, cannot be blamed for today’s waves of vaccine discontent. He seeded a message on the eve of a technological revolution that disrupted business, politics, societies, and global health. The same year that Wakefield published his research, Google opened its doors. The launches of Facebook, YouTube, Twitter, and Instagram soon followed. These social media platforms have magnified individual sentiments that might have stayed local. Emotions are particularly contagious on social media, where personal narrative, images, and videos are shared easily.

      Today’s tech companies are now being called to account for their role in spreading vaccine dissent. Last month, the American Medical Association urged the chief executives of key technology companies to “ensure that users have access to scientifically valid information on vaccinations.” But this is not merely an issue of correcting misinformation. There are social networks in which vaccine views and information are circulating in online communities, where vaccine choices become part of one’s overall identity.

      To mitigate the globalization of vaccine dissent, while respecting legitimate sharing of concerns and genuine questions, a mix of relevant expertise is needed. Technology experts, social scientists, vaccine and public health experts, and ethicists must convene and take a hard look at the different roles each group has in addressing this challenge. It needs everyone’s attention.

Does Fossil Site Record Dino-killing Impact?

[These excerpts are from an article by Colin Barras in the 5 April 2019 issue of Science.]

      A fossil site in North Dakota records a stunningly detailed picture of the devastation minutes after an asteroid slammed into Earth about 66 million years ago, a group of researchers argues in a paper published online this week. Geologists have theorized that the impact, near what is now the town of Chicxulub on Mexico’s Yucatan Peninsula, played a role in the mass extinction at the end of the Cretaceous period, when all the dinosaurs (except birds) and much other life on Earth vanished.

      The team, led by Robert DePalma…, says it has uncovered a record of apocalyptic destruction 3000 kilometers from Chicxulub. At the site, called Tanis, the researchers say they have discovered the chaotic debris left when tsunamilike waves surged up a river valley. Trapped in the debris is a jumbled mess of fossils, including freshwater sturgeon that apparently choked to death on glassy particles raining down from the fireball lofted by the impact.

      …The deposit may also provide some of the strongest evidence yet that nonbird dinosaurs were still thriving on impact day….

      But not everyone has fully embraced the find, perhaps in part because it was first announced to the world last week in an article in The New Yorker. The paper, published in the Proceedings of the National Academy of Sciences (PNAS), does not include all the scientific claims mentioned in The New Yorker, including that numerous dinosaurs as well as fish were buried at the site….

      In the early 1980s, the discovery of a clay layer rich in iridium, an element found in meteorites, at the very end of the rock record of the Cretaceous at sites around the world led researchers to link an asteroid to the end-Cretaceous mass extinction. A wealth of other evidence has persuaded most researchers that the impact played some role in the extinctions. But no one has found direct evidence of its lethal effects.

      DePalma and his colleagues say the killing is captured in forensic detail in the 1.3-meter-thick Tanis deposit, which they say formed in just a few hours, beginning perhaps 13 minutes after impact. Although fish fossils are normally deposited horizontally, at Tanis, fish carcasses and tree trunks are preserved haphazardly, some in near vertical orientations, suggesting they were caught up in a large volume of mud and sand that was dumped nearly instantaneously. The mud and sand are dotted with glassy spherules—many caught in the gills of the fish—isotopically dated to 65.8 million years ago. They presumably formed from droplets of molten rock launched into the atmosphere at the impact site, which cooled and solidified as they plummeted back to Earth. A 2-centimeter-thick layer rich in telltale iridium caps the deposit.

      Tanis at the time was located on a river that may have drained into the shallow sea covering much of what is now the eastern and southern United States. DePalma’s team argues that as seismic waves from the distant impact reached Tanis minutes later, the shaking generated 10-meter waves that surged from the sea up the river valley, dumping sediment and both marine and freshwater organisms there. Such waves are called seiches: The 2011 Tohoku earthquake near Japan triggered 1.5-meter-tall seiches in Norwegian fjords 8000 kilometers away….

      Until a few years ago, some researchers had suspected the last dinosaurs vanished thousands of years before the catastrophe. If Tanis is all it is claimed to be, that debate—and many others about this momentous day in Earth’s history—may be over.

A Recharge Revolution

[These excerpts are from an article by Jennifer Marcus in the Spring 2019 issue of the USC Trojan Family Magazine.]

      …One hour of sunlight provides more than all of the energy consumed on the planet in a year. Solar panels are one way for us to tap into some of this universal, free power source—but what happens on a rainy day?

      Solar panels can only generate power when the sun shines on them, and wind turbines can only generate power when the wind blows. The ups and downs in supply from these renewable sources make it difficult for power companies to rely on them to meet customer demand in real time….

      If batteries could store surplus energy to keep a consistent supply on hand, that sporadic unreliability could cease to be a problem. That’s why Prakash and Narayan have developed a water-based organic battery that is long-lasting and built from inexpensive, eco-friendly components. This new design uses no metals or toxic materials and is intended for use in solar and wind power plants, where its large-scale storage capacity could make the energy grid more resilient and efficient.

      Their design differs from the conventional batteries familiar to consumers. It’s called a redox flow battery and consists of two tanks of fluid, which store the energy. The fluids are pumped through electrodes that are separated by a membrane. The fluid contains electrolytes, and ions and electrons flow from one fluid into the other through the membrane and then the electrode, creating an electric current….

      Resembling a small building, the redox flow battery Prakash envisions would act as a battery farm of sorts, storing surplus energy generated from nearby solar panels or wind turbines….

      The new water-based organic flow batteries last for about 5,000 recharge cycles—five times longer than traditional lithium-ion batteries—giving them about a 15-year life span. At one-tenth the cost of lithium-ion batteries, they’re also much cheaper to manufacture thanks to their use of abundant, sustainable materials.

      Narayan and Prakash have tested a 1-kilowatt flow battery capable of powering the basic electricity needs of a small house….

      With annual global energy consumption projected to continue increasing by about 50 percent in the next 30 years, relying on renewable resources is one of the most important motivators driving sustainable technology research forward. The world can’t continue to rely on fossil fuels to meet energy demands without devastating environmental consequences….

      Marinescu focuses on gathering energy harvested from sunlight and storing it as chemical energy—much like plants do through photosynthesis. She and her team are working on a way to convert that stored energy into electricity by using what are called metal-organic frameworks. These flexible, ultra-thin and highly porous crystalline structures have unique properties that have been used by scientists primarily to absorb and separate different types of gas. Their use for energy applications seemed like a lost cause because researchers believed they couldn't conduct electricity. But Marinescu’s work has changed that.

      In the lab, her team experimented with the materials. Typically, electrons were localized in bonds (which prevents them from conducting any electricity). But the team created new materials with the electrons spread over multiple bonds, developing solids that could now carry electric current the same way that metals do….The frameworks developed by her research group contain inexpensive elements and can transform acidic water into hydrogen. This represents a huge advance, as these materials could one day be used in technologies like those for hydrogen-powered vehicles. They can also be spread thin across a huge area: It only takes 10 grams of the material to coat a surface the size of a football field.

      The technology opens the door for storing renewable energy at a huge, almost unthinkable scale….

Science during Crisis

[These excerpts are from an editorial by Rita R. Colwell and Gary E. Machlis in the 5 April 2019 issue of Science.]

      In April 1902, on the Caribbean island of Martinique, La Commission sur le Vulcan convened to make a fateful decision. Mt. Pelee was sending smoke aloft and spreading ash across the capital city of Saint-Pierre. Comprising physicians, pharmacists, and science teachers, the commission debated the danger of an eruption and the burden of evacuation, and judged the safety of the city’s population to be “absolutely assured.” Weeks later, Mt. Pelee erupted and approximately 30,000 residents died within minutes, leaving only two survivors. Environmental crises require pivotal decisions, and such decisions need timely, credible scientific information and science-based advice. This requirement is the focus of a report released last month by the American Academy of Arts and Sciences, calling attention to improvements in the operation and delivery of science during crises.

      Science has provided essential data and insight during disaster responses in the United States, including the World Trade Center attack (2001), Deepwater Horizon oil spill (2010), Hurricane Sandy (2012), and the Zika virus epidemic (2016). The context of scientific work done during such major disasters differs from that of routine science in several ways. Conditions change rapidly—wildfires spread swiftly, hurricanes intensify within hours, and aftershocks render buildings unsafe. In such scenarios, scientists must respond within tightly constrained time frames to collect data, do analyses, and provide findings that normally would involve months or years of work. Decision-makers need actionable information (such as risk assessments or mitigation techniques), yet scientific information is only one of many inputs to disaster response. Because communication networks may be severely disrupted, as occurred in Puerto Rico during Hurricane Maria (2017), delivery of science becomes even more difficult.

      Thus, science during crisis involves specialized actions such as heightened attention to coupled human-natural systems and cascading consequences. Important responses include rapid establishment of interdisciplinary scientific teams, local knowledge quickly integrated into scientific work, clear and compelling visualization of results, and concise communication to decision-makers, disaster-response specialists, and the public….

      In 2018, the United States experienced 14 weather and climate disasters with losses exceeding $1 billion each and a total of 247 lives lost. The summer wildfire season in the American West will soon again begin, followed by the start of the 2019 hurricane season in the Atlantic Ocean. There will be new disasters and science will play a critical role, informing and guiding decisions governing disaster response and recovery. Science during a crisis must be as effective as possible….

A Deadly Amphibian Disease Goes Global

[These excerpts are from an article by Dan A. Greenberg and Wendy J. Palen in the 29 March 2019 issue of Science.]

      …Three decades ago, biologists began to report the decline or extinction of amphibian populations around the world. This concerted global phenomenon spawned a proliferation of hypotheses, especially to explain “enigmatic” declines in remote places, where neither habitat loss nor direct exploitation were apparent. The suspected culprit was a species new to science: Batrachochytrium dendrobatidis (Bd), a fungal pathogen found in amphibian skin that belongs to the chytrids, a group of otherwise benign soil and water fungi.

      Bd is one of two species responsible for chytridiomycosis, the disease that appeared to be causing mass die-offs of amphibians. Bd is present in much of the world, but in the past century, a group of pathogenic strains originated in and spread from Asia; this spread coincided with the expansion of the global trade in live amphibians….Scientists have only been able to guess at the scale of damage caused by Bd to amphibian populations across the world, mostly because the baseline population data needed to decipher where and when species were lost through this disease have not been available.

      Scheele et al. overcame these data limitations to reconstruct the biodiversity impact of the global spread of pathogenic Bd. They compiled a detailed dataset of chytridiomycosis-associated declines from both published records and interviews with regional experts around the world. They estimate that chytridiomycosis has contributed to the decline or extinction of at least 501 amphibian species, earning Bd the inauspicious title of the most destructive pathogen for biodiversity ever recorded. The analysis suggests that of the 501 species, 90 are presumably extinct, with another 124 suffering severe declines. Many of these species belong to a few particularly susceptible frog lineages.

      Because ecology and life history shape the susceptibility of species to Bd, the authors used their dataset to test for commonalities in species loss across six continents. The results suggest that large-bodied, range-restricted, and aquatic-associated species are most at risk of severe declines from chytridiomycosis. This information is vital for identifying regions that have the right environmental conditions for Bd and many potential hosts and where pathogen introduction could thus trigger extinctions. This information is particularly relevant given concern about the spread of pathogenic Bd strains to amphibian evolutionary hotspots where the pathogen is thought to be absent or rare, mainly oceanic islands like Papua New Guinea and Madagascar.

      The authors’ data also suggest that chytridiomycosis-associated declines peaked in the 1980s in many regions, just as scientists were beginning to note these enigmatic losses (3). They conclude that the severity of declines may be dwindling over time. It remains unclear whether this is a sign of hosts and pathogen achieving a stable coexistence or merely a lull after the first of many waves of outbreaks….

      Scientific understanding of Bd is still unfolding decades after its discovery, raising the ominous possibility that our ability to react to complex cases of biodiversity decline may always lag behind the emergence of threats. As studies such as that by Scheele et al. reconstruct what is already lost, there is a critical need to leverage these data into proactive management that considers multiple threats.

      Bd is but one more nail in the coffin for the state of amphibians globally. Habitat loss, exploitation, and climate change remain the main threats for thousands of species. These stressors often act in concert, but clear management actions exist to address at least some of them: protect habitat, limit collection of wild populations, and restrict trade. By contrast, there appear to be few viable management actions available once pathogenic Bd strains have established; if trade restrictions fail, then the only hope will be that evolutionary rescue can save at least some species. Moving forward, conservationists must carefully consider what management solutions are going to be most effective for each region, with habitat loss, climate change, and pathogen introduction all simultaneously threatening amphibian diversity….

Natural History Museums Face their Own Past

[These excerpts are from an article by Gretchen Vogel in the 29 March 2019 issue of Science.]

      Step into the main hall of the Natural History Museum here and you’ll be greeted by a towering dinosaur skeleton, the tallest ever mounted. Nearly four stories high and twice as long as a school bus, the sauropod Giraffatitan brancai was the largest dinosaur known for more than a half-century. It has been a crowd magnet since it was first displayed in 1937.

      But the tidal flats Giraffatitan bestrode 150 million years ago weren’t in Europe. It lived in eastern Africa, today’s Tanzania, much of which was a German colony when the fossil was unearthed in the early 1900s. Now some Tanzanian politicians argue the fossils should return to Africa.

      Berlin’s Natural History Museum isn’t the only one facing calls for the return of fossils, which echo repatriation demands for human remains and cultural artifacts. Many specimens were collected under conditions considered unethical today, such as brutal colonial rule that ignored the ownership rights and knowledge of indigenous people….

      Although German paleontologists have traditionally gotten credit for discovering Giraffatitan, it was in fact local residents, who knew the bones and used them in religious rites, who guided the foreigners to the find….

      The Natural History Museum in London is now facing at least three repatriation requests for prominent specimens: Gibraltar has asked for two Neanderthal skulls; Chile has requested exquisitely preserved skin, fur, and bones from a 12,000-year-old giant ground sloth (Mylodon darwinii); and Zambia has asked for the Broken Hill skull, a famous early hominin about 300,000 years old that’s usually classified as Homo heidelbergensis.

      Many of these objects were sent to Europe without much thought given to who might own them. The first Neanderthal discovered, for example, was unearthed in 1848 by a British lieutenant stationed in what was then the U.K. military base of Gibraltar. Some 70 years later, British archaeologist Dorothy Garrod found a Neanderthal child’s skull in a Gibraltar cave and also sent it to England for study….

      Chile is making a similar claim for the Mylodon remains. European explorers found them in the 1890s and shipped them home without permission from local authorities. They have ended up distributed among half a dozen museums in Europe. Chile retains a few smaller Mylodon bones and dung, but the spectacularly preserved skin specimens are a key part of its natural heritage….

      Giraffatitan, for its part, will likely stay put. Rather than press for its return, the Tanzanian government has said it would prefer support for excavating new fossils, training local paleontologists, and strengthening its museums….

Integrating Tactics on Opioids

[These excerpts are from an editorial by Alan I. Leshner in the 29 March 2019 issue of Science.]

      Many parts of the world are in the middle of an opioid addiction crisis. It is an equal opportunity destroyer, affecting rich and poor, urban and rural people alike. The current epidemic differs from the long-standing heroin addiction problem in its broader demographic and in that it has resulted from inappropriate marketing and overprescription of pain medicines and the intrusion of powerful and lethal synthetic opioids. The magnitude of the crisis is also unprecedented: In the United States alone, more than 2 million people are estimated to have “opioid use disorder,” and 47,000 people died of an opioid overdose in 2017. Traditional strategies for dealing with addiction have had limited success. They have primarily used parallel tactics of “supply control” (limiting availability) and “demand control” (trying to prevent or reduce use), which might be considered as criminal justice and public health approaches. But this side-by-side approach may be counterproductive. Last week, the U.S. National Academies of Sciences, Engineering, and Medicine released a report on the state of medication-based treatments for opioid addiction. What is clear is that in addition to the need for more research, the nature of the epidemic requires new approaches that integrate public health, regulatory, and criminal justice strategies.

      Although additional studies would be useful in refining strategies, there already exists a body of evidence that should be used to improve current tactics. Take the case, for example, of addicted individuals who are accused of or have committed crimes. If these individuals are not treated while they are under criminal justice control, the rates of recidivism to both crime and drug use upon release are extremely high. If, however, they are treated while incarcerated and after release, recidivism rates fall substantially, as do post-release mortality rates. In addition, only 25 to 30% of prison and jail inmates are estimated to receive any drug abuse treatment, whereas over 50% suffer from substance use disorders. Even more dramatic, only 5% of justice-referred individuals receive any of the medications approved by the U.S. Food and Drug Administration (FDA) for their addiction, despite substantial evidence of their effectiveness. The conclusion from that effectiveness research is obvious: It is foolish and borders on being unethical to withhold medical treatment from people with opioid use disorder who are under criminal justice control. There are extensive data that could provide better guidance for refining prevention, prescription, and regulatory policies as well.

      The lack of successful strategies to address opioid addiction results in part from other barriers to progress on this epidemic. These include misunderstandings about the nature of addiction and the lack of medications used to treat it, as well as the insidious ideology and stigma that have long surrounded the issue of drug use and addiction. A large body of scientific evidence has established that addiction is a chronic disease of the brain that requires medical intervention. It is not a moral weakness or a failure of will….

      To make real progress in tackling the opioid epidemic, people on all sides of the issue will have to give up many of their long-held biases and beliefs. Progress will require more researchers working across fields and more informed public health, regulatory, and criminal justice officials, as well as members of the public, agreeing on the actual nature of the opioid crisis and science-based, integrated strategies to deal with it.

Big Floods Highlight Prediction Needs

[This excerpt is from a news article in the 29 March 2019 issue of Science.]

      Two major floods in March wrought destruction in the United States and Mozambique, highlighting that scientists are falling short in accurately predicting high water so communities around the world can prepare. Above-average winter rainfall helped swell the Missouri River to record levels, flooding thousands of homes and destroying stored crops. U.S. government forecasters predict more than 200 million Americans and 25 states may be affected by “unprecedented” flooding later this spring. The inundation has shocked many residents, which likely reflects shortcomings in U.S. floodplain maps that predict where large floods will strike….The maps don’t include many small streams, and many are dated, resulting in big underestimates of the population now at risk….41 million Americans live in the path of a once-in-a-century flood, rather than the 13 million indicated by existing maps. Meanwhile in Mozambique, hundreds have died since a cyclone’s torrential rain flooded more than 2000 square kilometers. Scientists recently reported progress in using high-resolution satellite data and enhanced computing power to create global flood models; these could improve emergency flood warnings and long-term planning in Mozambique and other developing countries that lack these tools.

Rapid Apple Decline Has Researchers Stumped

[These excerpts are from an article by Erik Stokstad in the 22 March 2019 issue of Science.]

      Six years ago, an unpleasant surprise greeted plant pathologist Kari Peter as she inspected a research orchard in Pennsylvania. Young apple trees were dying—and rapidly. At first, she suspected a common pathogen, but chemical treatments didn’t help. The next year, she began to hear reports of sudden deaths from across the United States and Canada. In North Carolina, up to 80% of orchards have shown suspicious symptoms….

      Now, as their trees prepare to blossom, North America’s apple producers are bracing for new losses, and scientists are probing possible causes. Apples are one of the continent’s most valuable fruit crops, worth some $4 billion last year in the United States alone. Growers are eager to understand whether rapid or sudden apple decline, as it is known, poses a serious new threat to the industry.

      Weather-related stress—drought and severe cold—could be an underlying cause….Early freezes are becoming more common across the eastern United States, for example. But that doesn’t appear to be the whole story, and scientists are examining an array of other factors, including pests, pathogens, and the growing use of high-density orchards….

      One common symptom in trees struck by rapid decline is dead tissue at the graft union, the part of the trunk where the fruit-bearing budwood of an apple variety is joined to hardy rootstock to create new trees. The union is vulnerable to late-season freezes because the tissue is the last to go dormant.

      A team led by plant pathologist Awais Khan of Cornell found dead tissue just below the graft union in trees from an affected orchard in New York. They suspect the cause was the extremely cold winter of 2014-15, which was followed by a drought. The dying tissue could have weakened the trees, allowing pests or pathogens to invade. But Khan and colleagues could not locate any known culprits in the affected trees or nearby soil….

      Observations from other apple-growing regions suggest extreme weather isn’t entirely to blame. In Canada, rapid decline “exploded” in British Columbia in the summer of 2018, after a string of unusually mild winters….These orchards are irrigated, suggesting drought was not a factor.

      Some scientists wonder whether certain rootstocks or exposure to herbicides might make trees more susceptible. Decline seems to be more common in trees with a popular rootstock, called M9, which can be slower to go dormant in fall….decline appears to be more common in orchards with fewer weeds, leading him to suspect herbicides play a role.

      Meanwhile, the search for new pathogens is accelerating….But getting an answer could take up to 5 years….

      In hard-hit North Carolina, researchers have found ambrosia beetles infesting the graft union of dying trees. These stubby insects burrow into weakened trees and cultivate fungus for their larvae to eat. Those fungi or stowaway fungi might harm the trees….

      Modern apple farming methods could also be a factor. Rapid decline is most common in dense orchards, which are increasingly planted because they are efficient to manage. Instead of about 250 trees per hectare, high-density orchards can have 1200 or more. Tightly packed trees must compete for nutrition and moisture. They also have shallow roots, which make them easier to trellis but more vulnerable to drought….

The Need to Catalyze Changes in High School Mathematics

[These excerpts are from an article by Robert Q. Berry III and Mathew R. Larson in the March 2019 issue of Phi Delta Kappan.]

      In the last few decades, policy makers and reformers have described the goal of boosting students’ college and career readiness (and, by extension, preparing those students to contribute to national defense and economic prosperity) as more or less the only purpose for learning high school mathematics, while other purposes, such as teaching students to think critically and participate actively in civic life, have been given short shrift….But in fact, mathematics can serve multiple purposes, and should be taught in ways that prepare students to “flourish as human beings….”

      Mathematics underlies much of the fabric of society, from polling and data mining in politics, to algorithms used in targeting advertisements, to complex mathematical models of financial instruments and policies that affect the lives of millions of people. Students should leave high school with the quantitative literacy and critical-thinking processes necessary to determine the validity of claims made in scientific, economic, social, and political arenas….Students should have an appreciation for the beauty and usefulness of mathematics and statistics. And students should see themselves as capable lifelong learners and confident doers of mathematics and statistics….

      Mathematics education at the high school level is part of a complex system of policies, traditions, and societal expectations. This system and its structures (school district policies, practices, and conditions) must be critically examined, changed, and improved. All stakeholders — school, district, and state administrators; instructional leaders and coaches; classroom teachers; counselors; curriculum and assessment developers; higher education administration and faculty; and policy makers at all levels — will need to be part of the process of reexamining long-standing beliefs, practices, and policies. This work is critical for all of us to undertake. It is also long overdue.

      Francis Su, past president of the Mathematical Association of America, has argued that answering the “why we teach mathematics” question is critical because the answer will have a strong influence on who we think should learn mathematics and how we think mathematics should be taught….The work of making this happen will not be easy because the challenges are real and long-standing. But we owe this effort not only to our students but also to ourselves as we work together to create and nurture the society we wish to inhabit.

Standards, Instructional Objectives, and Curriculum Design: A Complex Relationship

[These excerpts are from an article by David A. Gamson, Sarah Anne Eckert, and Jeremy Anderson in the March 2019 issue of Phi Delta Kappan.]

      Since the formation of the American republic and the nation’s first halting steps toward building state public school systems, educators and policy makers have debated which kinds of knowledge and skills schoolchildren should acquire and the levels of proficiency they should reach. Today, we often call these expectations standards, and the most recent effort at defining these types of academic objectives is the Common Core State Standards, an outgrowth of the standards-based reform movement that has been with us now for a quarter century, with no signs of dissipating. For instance, the most recent reauthorization of the Elementary and Secondary Education Act — the 2015 Every Student Succeeds Act — reinforces the movement by requiring “that all students in America be taught to high academic standards that will prepare them to succeed in college and careers.”

      The vocabulary used to describe what students should know and be able to do when they complete a lesson, a unit, or a course has shifted over the past 100 years, but the underlying belief in the necessity of standards has not….

      The origins of the current standards-based reform movement are usually pegged to events of the late 1980s and early 1990s, when state legislatures and governors began to assert a new level of authority over the details of student learning. But while these developments marked an important shift in reform strategy, the obsession with standards in our national educational experience actually goes back much further. Over the past century, policy makers have made many efforts to specify the learning objectives toward which the K-12 curriculum should lead.

      Beginning in the 1890s, and with greater regularity than most scholars have recognized, prominent educational leaders dedicated themselves to identifying the core curricular material necessary for American students to thrive in and out of school. The first major national undertaking, sponsored by the National Education Association (NEA), was the Committee of Ten’s (1893) attempt to determine what knowledge high school pupils require.

      The Committee of Ten was successful in sketching out core curricular requirements, such as five weekly periods of Latin each year in grades 9-12, five weekly periods of chemistry in grade 11, and one year of natural history (botany, zoology, etc.) at some point during high school. At the same time, though, its recommendations grated against the zeitgeist of the times. Many progressives thought the notion of a common curriculum for all students was hopelessly anachronistic in an era when pupils could (or should) be scientifically sorted using IQ tests and divided into separate tracks of coursework. Moreover, young educators and reformers wanted to escape the rigidity that characterized their experience with the 19th-century curriculum….

      In virtually every period of American educational history, but especially in times of national crisis, critics have argued that American students were floundering academically due to intellectually feeble and flabby academic objectives….

      That assertion was repeated in 1958 after the Soviets launched Sputnik (and again in the 1980s and 1990s after states had been shocked into action by A Nation at Risk). Critics of public schools in the 1950s argued that schools had become “educational wastelands” and so often forsook fundamentals that “Johnny” couldn’t read. A Life magazine series (1958) argued that standards had become “shockingly low” and documented examples of failure within public schools, in part by contrasting the rigor of Soviet schools with the weak academics and leisure-oriented offerings of their American counterparts. Other developments of the time, including the work of educational scholars, reinforced the quest for strong and clear objectives….

      One vocal critic of an overreliance on objectives or other predetermined “ends” of education was Elliott Eisner (1967), who explained that “the outcomes of education are far too numerous and complex for educational objectives to encompass”….He believed that curricula driven by predetermined outcomes prohibited the development of “curiosity, inventiveness, and insight”….In other words, heavy reliance on objectives in curriculum design not only eliminated the potential for unanticipated learnings, but also had the potential to limit the development of creativity and critical thinking in students — which, to Eisner, were the true goals of schooling.

      …Behavioral objectives, argued Ebel, rarely match the real intent of instruction. Furthermore, simply stating an objective in behavioral terms does not inherently make it worth striving for. For example, it is quite difficult to specify an observable outcome when the intent of instruction is for students “to respond adaptively and effectively to unique future problem situations”…, and clearly stating that a child should be able to name all of the rivers in South America doesn’t necessarily make this a worthwhile skill….

      …In part because clearly articulating and measuring higher-level thinking skills was more difficult, only a handful of states adopted such standards for all students….

      For well over a century, educational leaders and policy makers have turned to standards as a kind of safety zone, a common ground whereupon (they hoped) all educators could meet and agree on clear, consistent, and rationally developed objectives. Americans find comfort in knowing precisely what students should know and be able to do, especially at times of national uncertainty or economic transition — whether represented by World War I, the Great Depression, the Space Race, or anxiety about global competitiveness. Even skeptics of standards acknowledge that some skills are amenable to concise articulation and demonstrable achievement and that some proficiencies can be measured; their main concern is with universal adoption of standards as the primary tool for curriculum development.

      Nevertheless, American educators have trod this terrain before, as it turns out, and we would do well to listen to the echoes of previous experiences. Several enduring dilemmas are posed by the persistent desire for measurable objectives, and policy makers and practitioners can benefit from the lessons gleaned by a close reading of the educational past. Aside from the controversies that will attend the development of any new learning goals, policy makers must acknowledge the constraints that standards are likely to place on classroom instruction.

      More specifically, state and local leaders will need to remain watchful of the riptide of confusion and consternation that will follow on the publication of large, potentially overwhelming, lists of objectives. Whether standards can be unpacked and translated into realistic and vibrant classroom activities will depend on the support and resources made available to teachers. Otherwise, the aims of academic disciplines and the knowledge embedded within core subject areas will all too easily be shattered into the thin, disconnected shards of minor objectives, which in turn will ultimately narrow any broader vision of education….

      The history of reform movements demonstrates that one reform regime is often pushed aside by a wave of novel educational innovations. Whether the standards-based reform movement can regularly upgrade itself enough to escape such a fate remains to be seen. Finally, then, there persists the danger that Bode once voiced about Bobbitt’s objectives: Standards that stagnate, drifting on unrevised or unevolved, will do more to perpetuate the status quo than to prepare students for the future. In an increasingly unequal society, that hazard is well worth avoiding.

Zombie Spiders

[These excerpts are from an article by Joshua Rapp Learn in the March 2019 issue of Scientific American.]

      Talk about a raw deal: deadly parasitic wasps ruin the lives of adolescent spiders by taking over their minds, forcing them to become hermits and then eating them alive.

      A remarkable species of social spider lives in parts of Latin America, in colonies of thousands. Anelosimus eximius spiders dwell in basket-shaped webs up to 25 feet wide attached to vegetation near the jungle floor, where they protect their eggs and raise broods cooperatively. A colony works together to take down much larger prey, such as grasshoppers, which sometimes fall into a web after blundering into silk lines that stick out of it vertically….

      But Fernandez-Fournier recently observed a wasp species—not previously named or described in the scientific literature—that can bend these social spiders to its will in an even more nightmarish way. This parasitic puppet master camps out beside the web, apparently waiting for a young spider to stray from its colony. The wasp may prefer juveniles because of their softer shells and “less feisty” nature….

      Scientists do not know how a wasp larva ends up on the spider—but once there it starts feeding on the arachnid’s abdomen. As the larva grows, it starts to control the spider’s brain, inducing it to leave the safety of its colony. Then the young spider weaves a ball of silk that seals it off from the outside world. The larva completes its life cycle by eating the rest of the spider, using the conveniently surrounding web to build its own cocoon and pupate into an adult wasp.

      Fernandez-Fournier believes the wasp larvae most likely release a chemical that activates specific genes in their hosts, triggering antisocial behavior. Other related spiders are less social, leaving their colonies when they are young….the mind-controlling wasp larvae may be tapping into this latent genetic pathway. The spiders may have evolved toward social living for protection from predators, but the parasites could be pulling the genetic strings in their favor….

Feverish Planet

[These excerpts are from an article by Tanya Lewis in the March 2019 issue of Scientific American.]

      A devastating heat wave swept across Europe in 2003, killing tens of thousands of people, scientists estimate. Many were elderly, with limited mobility, and some already suffered from chronic diseases. But climate change is making such extreme weather more common—and the effects will not be limited to the old and sick. Warming temperatures do not only threaten lives directly. They also cause billions of hours of lost labor, enhance conditions for the spread of infectious diseases and reduce crop yields, according to a recent report….

      The report found that millions of people worldwide are vulnerable to heat-related disease and death and that populations in Europe and the eastern Mediterranean are especially susceptible—most likely because they have more elderly people living in urban areas. Adults older than 65 are particularly at risk, as are those with chronic illnesses such as heart disease or diabetes. Places where humans tend to live are exposed to an average temperature change that is more than twice the global average: 0.8 versus 0.3 degree Celsius….

      Sweltering temperatures also affect productivity. A staggering 153 billion hours of labor—80 percent of them in agriculture—were lost to excessive heat in 2017, the new report found, with the most vulnerable areas being in India, Southeast Asia, sub-Saharan Africa and South America. The first stage of heat’s impact is discomfort….But there comes a point at which it is simply too hot for the body to function. For example, sweating heavily without replenishing water can result in chronic kidney disease….News reports have documented farm workers in Central America dying from kidney problems after years of working in the hot fields. Richer countries such as the U.S. may avoid the worst effects because of better access to drinking water and, in the case of indoor work, air-conditioning. But these solutions can be expensive….

      Climate change also threatens food security. Our planet still produces more than enough food for the world, but 30 countries have seen crop yields decline as a result of extreme weather….

      Among the biggest steps countries can take to mitigate these health effects are phasing out coal-fired power and shifting to greener forms of transportation….Electric vehicles are making inroads in places…and “active” transport, such as walking or cycling, is also important….

The Weather Amplifier

[These excerpts are from an article by Michael E. Mann in the March 2019 issue of Scientific American.]

      Consider the following summer extremes: In 2003 Europe’s worst heat wave in history killed more than 30,000 citizens. In 2010 wildfires in Russia and floods in Pakistan caused unprecedented damage and death. The 2011 U.S. heat wave and drought caused ranchers in Oklahoma to lose a quarter of their cattle. The 2016 Alberta wildfires constituted the costliest disaster in Canadian history. And the summer of 2018 in the U.S. was notorious: temperatures flared above 100 degrees Fahrenheit for days on end across the desert Southwest, heavy rains and floods inundated the mid-Atlantic states, and California had a shocking wildfire season. Extreme heat waves, floods and wildfires raged across Europe and Asia, too.

      …All these events had a striking feature in common: a very unusual pattern in the jet stream. The jet stream is a narrow band of strong wind that blows west to east around the Northern Hemisphere, generally along the U.S.-Canada border, continuing across the Atlantic Ocean, Europe and Asia. The band is sometimes fairly straight, but it can take on big bends—shaped like an S lying on its side. It typically curls northward from the Pacific Ocean into western Canada, then turns southward across the U.S. Midwest, then back up toward Nova Scotia. This shape usually proceeds west to east across the U.S. in a few days, bringing warm air north or cool air south and creating areas of rain or snow, especially near the bends. The jet stream controls our daily weather.

      During the extreme events I noted, the jet stream acted strangely. The bends went exceptionally far north and south, and they stalled—they did not progress eastward. The larger these bends, the more punishing the weather gets near the northern peak and southern trough. And when they stall—as they did over the U.S. in the summer of 2018—those regions can receive heavy rain day after day or get baked by the sun day after day. Record floods, droughts, heat waves and wildfires occur….

Don’t Let Bots Pull the Trigger

[These excerpts are from an editorial by the editors of the March 2019 issue of Scientific American.]

      The killer machines are coming. Robotic weapons that target and destroy without human supervision are poised to start a revolution in warfare comparable to the invention of gunpowder or the atomic bomb. The prospect poses a dire threat to civilians—and could lead to some of the bleakest scenarios in which artificial intelligence runs amok. A prohibition on killer robots, akin to bans on chemical and biological weapons, is badly needed. But some major military powers oppose it.

      The robots are no technophobic fantasy. In July 2017, for example, Russia’s Kalashnikov Group announced that it had begun development of a camera-equipped 7.62-millimeter machine gun that uses a neural network to make “shoot/no-shoot” decisions. An entire generation of self-controlled armaments, including drones, ships and tanks, is edging toward varying levels of autonomous operation. The U.S. appears to hold a lead in R&D on autonomous systems—with $18 billion slated for investment from 2016 to 2020. But other countries with substantial arms industries are also making their own investments.

      …The inability to read behavioral subtleties to distinguish civilian from combatant or friend versus foe should call into question whether AIs should replace GIs in a foreseeable future mission. A killer robot of any kind would be a trained assassin, not unlike Arnold Schwarzenegger in The Terminator. After the battle is done, moreover, who would be held responsible when a machine does the killing? The robot? Its owner? Its maker?

      With all these drawbacks, a fully autonomous robot fashioned using near-term technology could create a novel threat wielded by smaller nations or terrorists with scant expertise or financial resources. Swarms of tiny, weaponized drones, perhaps even made using 3-D printers, could wreak havoc in densely populated areas. Prototypes are already being tested: the U.S. Department of Defense demonstrated a nonweaponized swarm of more than 100 micro drones in 2016….

      …Because of opposition from the U.S., Russia and a few others, the discussions have not advanced to the stage of drafting formal language for a ban. The U.S., for one, has argued that its policy already stipulates that military personnel retain control over autonomous weapons and that premature regulation could put a damper on vital AI research.

      A ban need not be overly restrictive. The Campaign to Stop Killer Robots, a coalition of 89 nongovernmental organizations from 50 countries that has pressed for such a prohibition, emphasizes that it would be limited to offensive weaponry and not extend to antimissile and other defensive systems that automatically fire in response to an incoming warhead….

      Since it was first presented at the International Joint Conference on Artificial Intelligence in Stockholm in July, 244 organizations and 3,187 individuals have signed a pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” The rationale for making such a pledge was that laws had yet to be passed to bar killer robots. Without such a legal framework, the day may soon come when an algorithm makes the fateful decision to take a human life.

Nowhere to Hide

[These excerpts are from an article by Amy Yee in the 15 March 2019 issue of Science.]

      The small pangolin tucked its head toward its belly and curled its tail around its body. Clad in large scales, it resembled a pine cone. After a moment, the creature—a mammal, despite appearances—uncoiled and raised its slender head. Currantlike eyes blinked and a pointy nose trembled inquisitively. Its feet had tender pink soles tipped with long, curved claws, but it did not scratch or fight.

      This animal, a white-bellied pangolin (Phataginus tricuspis), was lucky. It had most likely been illegally caught in a nearby forest not long ago; a tip had led the Uganda Wildlife Authority (UWA) to rescue it. One of its brown scales had been ripped off, perhaps for use in a local witchcraft remedy. But after a long, jarring car ride on bumpy dirt roads, the pangolin was being released back into the wild in a national park….Weighing just 2.5 kilograms, the pangolin heaved as if panting.

      The rescue and release was part of a growing global effort to save pangolins, which face a bleak future as the world’s most poached and trafficked animal. They are in demand for both their meat and their scales, believed in some Asian countries to have medicinal properties. The past 2 months have seen record-setting seizures of pangolin body parts both in Asia and Africa….

      In 2014, the International Union for Conservation of Nature (IUCN) classified all eight pangolin species—four of which live in Asia and four in Africa—as threatened with extinction. And in 2017, the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) banned international trade in pangolins. Several groups and government agencies, including UWA, are now intensifying conservation efforts both in Asia and Africa.

      Yet it’s an uphill battle. Large endangered wildlife, such as elephants and rhinos, attract tourist dollars, giving policymakers an incentive to save them. But pangolins are small, shy, and believed to be mostly nocturnal. Patchy understanding of their population size, breeding behavior, migratory patterns, and physiology also hampers conservation efforts….

      Pangolins are unique among mammals because of their scales, which are made of keratin, the stuff of hair and fingernails. They live on a diet of ants and termites—hence their nickname “scaly anteater.” The scales are their major defense mechanism; when faced with danger, they curl up in an immobile, hard ball. Sadly, that also makes them easy for hunters and poachers to catch.

      Pangolins have long been on the menu for people in Asia and Africa. According to a 2018 study, 400,000 to 2.7 million pangolins are hunted annually in the forests of six Central African countries for bushmeat. (The range is so wide because researchers used three different ways of estimating the harvest, says the paper's first author, Daniel Ingram of University College London, but the lower figure is more likely.)

      Demand for pangolin scales seems to be surging, particularly in China and Vietnam, where they are believed to cure ailments ranging from poor circulation and skin diseases to asthma. With Asian species in sharp decline, poachers are increasingly turning to Africa for scales, adding to the bushmeat toll….

      Pangolins are slow-breeding and give birth to one offspring a year at most, which means depleted populations are slow to bounce back. They are easily stressed and tend to die in captivity, which makes studying their physiology and behavior difficult. So far, efforts to breed them have largely failed. Biologists also know little about their movements and population sizes, which could guide efforts to protect them….

The Meat without the Cow

[These excerpts are from an article by Niall Firth in the March/April 2019 issue of Technology Review.]

      …A little over five years later, startups around the world are racing to produce lab-grown meat that tastes as good as the traditional kind and costs about as much.

      They’re already playing catch-up: “plant-based” meat, made of a mix of non-animal products that mimic the taste and texture of real meat, is already on the market. The biggest name in this area: Impossible Foods, whose faux meat sells in more than 5,000 restaurants and fast food chains in the US and Asia and should be in supermarkets later this year. Impossible’s research team of more than 100 scientists and engineers uses techniques such as gas chromatography and mass spectrometry to identify the volatile molecules released when meat is cooked.

      The key to their particular formula is the oxygen-carrying molecule heme, which contains iron that gives meat its color and metallic tang. Instead of using meat, Impossible uses genetically modified yeast to make a version of heme that is found in the roots of certain plants.

      Impossible has a few competitors, particularly Beyond Meat, which uses pea protein (among other ingredients) to replicate ground beef. Its product is sold in supermarket chains like Tesco in the UK and Whole Foods in the US, alongside real meat and chicken. Both Impossible and Beyond released new, improved versions of their burgers in mid-January.

      In contrast, none of the lab-grown-meat start-ups has yet announced a launch date for its first commercial product. But when that happens—some claim as early as the end of this year—the lab-grown approach could turn the traditional meat industry on its head.

      …The answer is that our meat consumption habits are, in a very literal sense, not sustainable. Livestock raised for food already contribute about 15% of the world’s greenhouse-gas emissions. (You may have heard that if cows were a country, it would be the world’s third-biggest emitter.) A quarter of the planet’s ice-free land is used to graze them, and a third of all cropland is used to grow food for them. A growing population will make things worse. It’s estimated that with the population expected to rise to 10 billion, humans will eat 70% more meat by 2050. Greenhouse gases from food production will rise by as much as 92%.

      In January a commission of 37 scientists reported in The Lancet that meat’s damaging effects not only on the environment but also on our health make it “a global risk to people and the planet.” In October 2018 a study in Nature found that we will need to change our diets significantly if we’re not to irreparably wreck our planet’s natural resources….

      The good news is that a growing number of people now seem to be rethinking what they eat. A recent report from Nielsen found that sales of plant-based foods intended to replace animal products were up 20% in 2018 compared with a year earlier. Veganism, which eschews not just meat but products that come from greenhouse-gas-emitting dairy livestock too, is now considered relatively mainstream.

      That doesn’t necessarily equate to more vegans. A recent Gallup poll found that the number of people in the US who say they are vegan has barely changed since 2012 and stands at around just 3%. Regardless, Americans are eating less meat, even if they're not cutting it out altogether….

      The traditional meat industry doesn’t see it that way. The National Cattlemen's Beef Association in the US dismissively dubs these new approaches “fake meat.” In August 2018, Missouri enacted a law that bans labeling any such alternative products as meat. Only food that has been “derived from harvested production of livestock or poultry” can have the word “meat” on the label in any form. Breaking that law could lead to a fine or even a year’s jail time.

      The alternative-meat industry is fighting back….to get the law overturned….

      …But the Missouri battle is just the start of a struggle that could last years. In February 2018, the US Cattlemen's Association launched a petition that calls on the US Department of Agriculture (USDA) to enact a similar federal law.

      Traditional meat-industry groups have also been very vocal on how cultured meat and plant-based meats are to be regulated….

      But there are other issues, says Datar, of New Harvest. She says we still don’t understand the fundamental processes well enough. While we have quite a deep understanding of animals used in medical research, such as lab mice, our knowledge of agricultural animals at a cellular level is rather thin….

      Lab-grown meat has another—more tangible—problem. Growing muscle cells from scratch creates pure meat tissue, but the result lacks a vital component of any burger or steak: fat. Fat is what gives meat its flavor and moisture, and its texture is hard to replicate. Plant-based meats are already getting around the problem—to some extent—by using shear cell technology that forces the plant protein mixture into layers to produce a fibrous meat-like texture. But if you want to create a meat-free “steak” from scratch, some more work needs to be done. Cultured meat will need a way to grow fat cells and somehow mesh them with the muscle cells for the end result to be palatable. That has proved tricky so far, which is the main reason that first burger was so mouth-puckeringly dry….

      As it stands, lab-grown meat is not quite as virtuous as you might think. While its greenhouse emissions are below those associated with the biggest villain, beef, it is more polluting than chicken or the plant-based alternatives, because of the energy currently required to produce it. A World Economic Forum white paper on the impact of alternative meats found that lab-grown meat as it is made now would produce only about 7% less in greenhouse-gas emissions than beef. Other replacements, such as tofu or plants, produced reductions of up to 25%....

      Expecting the whole world to go vegan is unrealistic. But a report in Nature in October 2018 suggested that if everyone moved to the flexitarian lifestyle (eating mostly vegetarian but with a little poultry and fish and no more than one portion of red meat a week), we could halve the greenhouse-gas emissions from food production and also reduce other harmful effects of the meat industry, such as the overuse of fertilizers and the waste of fresh water and land. (It could also reduce premature mortality by about 20%, according to a study in The Lancet in October, thanks to fewer deaths from ailments such as coronary heart disease, stroke, and cancer.)…

Is Carbon Removal Crazy or Critical?

[These excerpts are from an article by Spencer Lowell in the March/April 2019 issue of Technology Review.]

      …Lackner…has now been working on the problem for two decades. In 1999, as a particle physicist at Los Alamos National Laboratory, he wrote the first scientific paper exploring the feasibility of combating climate change by pulling carbon dioxide out of the air. His was a lonely voice for years. But a growing crowd has come around to his thinking as the world struggles to slash climate emissions fast enough to prevent catastrophic warming. Lackner’s work has helped inspire a handful of direct-air-capture startups, including one of his own, and a growing body of scientific literature….

      No one, including Lackner, really knows whether the scheme will work. The chemistry is easy enough. But can we really construct anywhere near enough carbon removal machines to make a dent in climate change? Who will pay for them? And what are we going to do with all the carbon dioxide they collect?

      Lackner readily acknowledges the unknowns but believes that the cheaper the process gets, the more feasible it becomes….

      The concentration of carbon dioxide in the atmosphere is approaching 410 parts per million. That has already driven global temperatures nearly 1 °C above pre-industrial levels and intensified droughts, wildfires, and other natural disasters. Those dangers will only compound as emissions continue to rise.

      The latest assessment from the UN’s Intergovernmental Panel on Climate Change found that there’s no way to limit or return global warming to 1.5 °C without removing somewhere between 100 billion and a trillion metric tons of carbon dioxide by the end of the century. On the high end, that means reversing nearly three decades of global emissions at the current rate.
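
As a quick sanity check of that “nearly three decades” figure (my own arithmetic, not from the article, and assuming current global emissions of roughly 37 billion metric tons of CO2 per year):

```python
# Sanity check of the IPCC high-end removal target against current emissions.
annual_emissions_t = 37e9    # metric tons CO2 per year (assumed, not in the article)
high_end_removal_t = 1e12    # one trillion metric tons (from the article)

years_of_emissions = high_end_removal_t / annual_emissions_t
print(round(years_of_emissions))  # 27 -- i.e., "nearly three decades"
```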

      There are a handful of ways to draw carbon dioxide out of the atmosphere. They include planting lots of trees, restoring grasslands and other areas that naturally hold carbon in soils, and using carbon dioxide-sucking plants and other forms of biomass as a fuel source but capturing any emissions when they’re used (a process known as bio-energy with carbon capture and storage).

      But a report from the US National Academies in October found that these approaches alone probably won’t be enough to prevent 2 °C of warming—at least, not if we want to eat. That's because the amount of land required to capture that much carbon dioxide would come at the cost of a huge amount of agricultural food production.

      The appeal of direct-air-capture devices like the ones Lackner and others are developing is that they can suck out the same amount of carbon dioxide on far less land. The big problem is that right now it’s much cheaper to plant a tree. At the current cost of around $600 per ton, capturing a trillion tons would run some $600 trillion, more than seven times the world's annual GDP….
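
The $600 trillion figure follows directly from the numbers quoted above; a quick check (the $85 trillion world-GDP figure is my assumption, since the article says only “more than seven times” annual GDP):

```python
# The article's figures: $600 per metric ton captured, applied to the
# trillion-ton high-end removal target.
cost_per_ton = 600           # US dollars (from the article)
tons_to_capture = 1e12       # one trillion metric tons (from the article)

total_cost = cost_per_ton * tons_to_capture
print(f"${total_cost / 1e12:,.0f} trillion")  # $600 trillion

world_gdp = 85e12            # roughly $85 trillion (assumed)
print(round(total_cost / world_gdp, 1))       # about 7.1 times world GDP
```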

      However, selling carbon dioxide isn’t an easy proposition.

      Global demand is relatively small: on the order of a few hundred million tons per year, a fraction of the tens of billions that eventually need to be removed annually, according to the National Academies study. Moreover, most of that demand is for enhanced oil recovery, a technique that forces compressed carbon dioxide into wells to free up the last drips of oil, which only makes the climate problem worse….

How We’ll Invent the Future

[These excerpts are from an article by Bill Gates in the March/April 2019 issue of Technology Review.]

      …My mind went to—of all things—the plow. Plows are an excellent embodiment of the history of innovation. Humans have been using them since 4000 BCE, when Mesopotamian farmers aerated soil with sharpened sticks. We’ve been slowly tinkering with and improving them ever since, and today’s plows are technological marvels.

      But what exactly is the purpose of a plow? It’s a tool that creates more: more seeds planted, more crops harvested, more food to go around. In places where nutrition is hard to come by, it’s no exaggeration to say that a plow gives people more years of life. The plow—like many technologies, both ancient and modern—is about creating more of something and doing it more efficiently, so that more people can benefit.

      Contrast that with lab-grown meat, one of the innovations I picked for this year’s 10 Breakthrough Technologies list. Growing animal protein in a lab isn’t about feeding more people. There’s enough livestock to feed the world already, even as demand for meat goes up. Next-generation protein isn’t about creating more — it’s about making meat better. It lets us provide for a growing and wealthier world without contributing to deforestation or emitting methane. It also allows us to enjoy hamburgers without killing any animals.

      Put another way, the plow improves our quantity of life, and lab-grown meat improves our quality of life. For most of human history, we’ve put most of our innovative capacity into the former. And our efforts have paid off: worldwide life expectancy rose from 34 years in 1913 to 60 in 1973 and has reached 71 today….

      To be clear, I don’t think humanity will stop trying to extend life spans anytime soon. We’re still far from a world where everyone everywhere lives to old age in perfect health, and it’s going to take a lot of innovation to get us there. Plus, “quantity of life” and “quality of life” are not mutually exclusive. A malaria vaccine would both save lives and make life better for children who might otherwise have been left with developmental delays from the disease.

      We’ve reached a point where we’re tackling both ideas at once, and that’s what makes this moment in history so interesting….

      The 30 minutes you used to spend reading e-mail could be spent doing other things. I know some people would use that time to get more work done—but I hope most would use it for pursuits like connecting with a friend over coffee, helping your child with homework, or even volunteering in your community.

      That, I think, is a future worth working toward.

Sanitation without Sewers

[These excerpts are from an article by Erin Winick in the March/April 2019 issue of Technology Review.]

      About 2.3 billion people don’t have good sanitation. The lack of proper toilets encourages people to dump fecal matter into nearby ponds and streams, spreading bacteria, viruses, and parasites that can cause diarrhea and cholera. Diarrhea causes one in nine child deaths worldwide.

      Now researchers are working to build a new kind of toilet that’s cheap enough for the developing world and can not only dispose of waste but treat it as well….

      Most of the prototypes are self-contained and don’t need sewers, but they look like traditional toilets housed in small buildings or storage containers. The NEWgenerator toilet, designed at the University of South Florida, filters out pollutants with an anaerobic membrane, which has pores smaller than bacteria and viruses. Another project, from Connecticut-based Biomass Controls, is a refinery the size of a shipping container; it heats the waste to produce a carbon-rich material that can, among other things, fertilize soil….

      So the challenge now is to make these toilets cheaper and more adaptable to communities of different sizes….

The Cow-free Burger

[These excerpts are from an article by Markkus Rovito in the March/April 2019 issue of Technology Review.]

      The UN expects the world to have 9.8 billion people by 2050. And those people are getting richer. Neither trend bodes well for climate change—especially because as people escape poverty, they tend to eat more meat.

      By that date, according to the predictions, humans will consume 70% more meat than they did in 2005. And it turns out that raising animals for human consumption is among the worst things we do to the environment.

      Depending on the animal, producing a pound of meat protein with Western methods requires 4 to 25 times more water, 6 to 17 times more land, and 6 to 20 times more fossil fuels than producing a pound of plant protein.

      The problem is that people aren’t likely to stop eating meat anytime soon. Which means lab-grown and plant-based alternatives might be the best way to limit the destruction.

      Making lab-grown meat involves extracting muscle tissue from animals and growing it in bioreactors. The end product looks much like what you’d get from an animal, although researchers are still working on the taste….One drawback of lab-grown meat is that the environmental benefits are still sketchy at best—a recent World Economic Forum report says the emissions from lab-grown meat would be only around 7% less than emissions from beef production.

      The better environmental case can be made for plant-based meats from companies like Beyond Meat and Impossible Foods (Bill Gates is an investor in both companies), which use pea proteins, soy, wheat, potatoes, and plant oils to mimic the texture and taste of animal meat….a Beyond Meat patty would probably generate 90% less greenhouse-gas emissions than a conventional burger made from a cow.

Sun in a Box

[These excerpts are from an article by Jennifer Chu in the March/April 2019 issue of MIT News.]

      MIT engineers have come up with a conceptual design for a system that could store renewable energy and deliver it back into an electric grid on demand. Such a system could power a small city not just when the sun is up or the wind is high, but around the clock.

      The new design stores heat generated by excess electricity from solar or wind power in large tanks of molten silicon, and then converts the light from the glowing metal back into electricity when it’s needed. The researchers expect it would be vastly more affordable than lithium-ion storage systems.

      The system consists of two large, heavily insulated, 10-meter-wide tanks made from graphite. One is filled with liquid silicon, kept at a “cold” temperature of almost 3,500 °F (1,927 °C). A bank of tubes, exposed to heating elements, then connects this cold tank to the second, “hot” tank. When electricity from the town’s solar cells comes into the system, this energy is converted to heat in the heating elements. Meanwhile, liquid silicon is pumped out of the cold tank, collects heat from the heating elements as it passes through the tubes, and enters the hot tank, where it is now stored at a much higher temperature of about 4,300 °F (2,371 °C).
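
The paired Fahrenheit/Celsius figures above check out; a quick verification (my own arithmetic, not from the article):

```python
# Check the article's Fahrenheit-to-Celsius conversions for the two tanks.
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

print(round(f_to_c(3500)))  # 1927 -- the "cold" tank
print(round(f_to_c(4300)))  # 2371 -- the "hot" tank
```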

      When electricity is needed (say, after the sun has set), the hot liquid silicon—so hot that it’s glowing white—is pumped through an array of tubes that emit that light. Specialized solar cells, known as multi-junction photovoltaics, then turn that light into electricity, which can be supplied to the town’s grid. The now-cooled silicon can be pumped back into the cold tank until the next round of storage—so the system effectively acts as a large rechargeable battery….

      Henry says the system could be sited anywhere, regardless of a location's landscape. This is in contrast to pumped hydroelectric systems, currently the cheapest form of energy storage, which require locations that can accommodate large waterfalls and dams to store energy from falling water….

The Climate Optimist

[These excerpts are from an article by Amanda Schaffer in the March/April 2019 issue of MIT News.]

      …the data she gathered that night and over two months in Antarctica would change our understanding of how chlorofluorocarbons, released into the atmosphere from refrigerants and a range of other consumer products, damage the ozone layer, which helps protect Earth from ultraviolet radiation. In response to this and other scientific work, an international agreement limited and then banned the use of CFCs. Thirty years later, [Susan] Solomon was also the first to clearly demonstrate that, thanks to this change, the Antarctic ozone hole has slowly begun to heal….

      Solomon is quick to acknowledge that climate change poses tougher political challenges than ozone depletion, because fossil-fuel consumption is so integral to the world economy. Still, she argues that by studying past environmental successes, her students will come to understand “what is actually going to have to happen to make progress….”

      The spectral data that Solomon and her team gathered provided support for the theory that chlorine originally locked up in chlorofluorocarbons is released by way of a surface reaction between hydrochloric acid and chlorine nitrate on polar stratospheric clouds. That chlorine in turn reacts with ozone to produce chlorine monoxide and goes on to deplete the ozone. Further buttressing her theory, Solomon’s expedition found that stratospheric concentrations of hydrochloric acid were low and those of chlorine monoxide and chlorine dioxide were high. (They also used balloons to measure concentrations of ozone directly and found that it, too, was severely depleted in the stratosphere, as expected.)….

      In her role on the IPCC report, Solomon worked with some 150 of the best climate scientists in the world, helping to synthesize current research related to climate change. Finding areas of consensus required a mix of science and diplomacy….

      In 2007, the group produced a textbook-size tome, which Solomon keeps above her desk, and which garnered attention for stating, for the first time, that “warming is unequivocal.” (Solomon herself came up with that wording, based on the researchers' thorough examination of the research.) It also said that most of the warming of the past 50 years was “very likely due to human activity.” (The report defined “very likely” as meaning that the conclusion was 90% certain.) The year the report was published, the IPCC and former vice president Al Gore shared the Nobel Peace Prize for their efforts….

      In the years that followed, Solomon continued to work on climate change. In 2009, she published a paper showing that some of the effects of carbon dioxide, which takes a long time to dissipate from the atmosphere, are probably irreversible. Other researchers had conducted a set of experiments modeling where the carbon currently in the atmosphere would end up if emissions came to a halt. After examining this work, Solomon noticed that in all the models, Earth’s surface temperature did not significantly decrease for a thousand years, even in the absence of new carbon dioxide emissions. “On a human time scale, a thousand years is pretty close to irreversible,” she says. “I realized that that story had not been told clearly enough in a simple paper. I also wanted to understand why it was true.”

      She ultimately concluded that the key factor was the ocean, which is slow to warm but stores heat—and therefore warms the atmosphere—for long periods of time. In addition, her team found that even without further emissions of carbon dioxide, sea levels would also continue to rise for hundreds of years. And her lab at MIT would later reach a similar conclusion for even short-lived greenhouse gases like methane. Her research raises questions about whether vulnerable island nations and coastal populations can still be saved from the consequences of climate change….

      In other climate-related work, Solomon and her team explored how volcanic eruptions affect ozone depletion. In 2011, they found that even eruptions that appear to cause damage only at the local level can still fling enough sulfur into the stratosphere to put it into circulation there, where it can have global consequences….

      Recent news related to global climate change has been bleak—from the rapid rise in sea levels to catastrophic wildfires in California. The US government’s fourth National Climate Assessment, released by the Trump administration on the Friday of Thanksgiving weekend in 2018, warned that if significant steps are not taken to reduce emissions, the world will experience ever more dangerous heat waves, fires, floods, and hurricanes. These events are likely to cause crop failures, more frequent wildfires, and severe disruptions in supply chains and trade. Together, the changes could reduce the GDP of the United States alone by 10% by 2100.

      But even in the face of dispiriting data, Solomon continues to focus on figuring out how to fix things. She tells her students that making headway on climate change depends on getting people to understand how the problem will affect them personally—and to believe there are practical solutions. And she’s hopeful about the progress she’s seeing.

      “Climate change won’t be solved until alternative energies become more widely adopted, but they are already becoming adopted at an incredibly astonishing pace,” she says. “Admittedly, it may not be as fast as it would need to be to hold temperatures to one and a half degrees—that would really be Herculean. But we can bend the warming curve.”

      Solomon notes that even though solar and wind power are not yet competitive in many parts of the United States, in many other places, clean-energy alternatives are becoming cheaper than fossil fuels. Certainly, there's a good deal more work to be done. Innovation must continue to bring down the costs of clean energy and improve the battery technology needed to store it. And the challenge of converting existing infrastructure to cleaner power sources will be massive….

More Screen Time, Lower Well-Being

[This brief newsworthy item appeared in the February 2019 issue of Phi Delta Kappan.]

      A study published in Preventive Medicine Reports finds that as screen time goes up among children ages 2 to 17, psychological well-being goes down.

      Jean Twenge and W. Keith Campbell surveyed caregivers about the amount of time children in their care spent watching TV and videos, playing video games, and using computers and mobile devices for activities other than schoolwork. The survey also asked caregivers to respond to questions about children's emotional stability, relationships with caregivers, self-control, diagnoses of mood disorders, and mental health treatments.

      Screen time averaged 3.2 hours per day, with the amount of time going up as children got older. In general, young people who spent up to an hour a day with screens scored similarly on measures of psychological well-being to those who spent no time with screens. Once screen time exceeded an hour, however, well-being declined as screen time increased: young people were rated lower on their ability to stay calm, finish tasks, learn new things, make friends, and avoid conflict with caregivers. In addition, children with higher levels of screen time were more likely to be diagnosed with anxiety and depression. The associations were largest among adolescents.

      The researchers note, however, that it is not clear from their study whether screen time leads to lower levels of well-being or whether lower levels of well-being lead to more screen time.

The Myth of de facto Segregation

[These excerpts are from an article by Richard Rothstein in the February 2019 issue of Phi Delta Kappan.]

      For nearly 30 years, the nation’s education policy makers have proceeded from the assumption that disadvantaged children would have much greater success in school if not for educators’ low expectations of them. In theory, more regular achievement testing and tougher accountability practices would force teachers to pursue higher academic standards for all children, resulting in improved instruction and greater student proficiency.

      However, there never was any evidence to support this theory, and even its most eager proponents have come to realize that it was flawed all along. In fact, there are a host of reasons why disadvantaged children often struggle to succeed academically. Undeniably, one is that some schools in low-income neighborhoods fall short in their traditional instructional roles. Another is that many schools have failed to embrace effective out-of-classroom programs — such as health clinics or early childhood centers — that might enable students to be more successful in the classroom. Perhaps most important, however, is the influence of children’s out-of-school social and economic conditions, which predict academic outcomes to a far greater extent than what goes on in the classroom. Researchers have long known that only about one-third of the Black-White academic achievement gap results from variations in school quality. The rest stems from social and economic factors that render some children unable to take full advantage of what even the highest-quality schools can offer.

      Racial segregation exacerbates achievement gaps between Black and White children because it concentrates students with the most serious social and economic challenges in the same classrooms and schools. Consider childhood asthma, for example: Largely because of poorly maintained housing and environmental pollution, urban African-American children have asthma at as much as four times the rate of White middle-class children. Asthmatic children often come to school drowsy and inattentive from sleeplessness, or they don’t come to school at all. Indeed, asthma is the single most important cause of chronic absenteeism. No matter how good the teacher, or their instruction, children who are frequently absent will see less benefit than children who come to school well rested and regularly. Certainly, some asthmatic children will excel — there is a distribution of outcomes for every human condition — but on average, children in poorer health will fall short.

      Children from disadvantaged families suffer disproportionately from a number of other such problems, including lead poisoning that diminishes cognitive and behavioral capacity; toxic stress, from experiencing or witnessing violence; irregular sleep or meal times, related to their parents' working multiple jobs with contingent work schedules; housing instability or homelessness; parental incarceration; and many others. A teacher can give special attention to a few who come to school with challenges that impede learning, but if an entire class has such problems, average achievement inevitably declines.

      We cannot expect to address our most serious educational issues if the most disadvantaged of the nation’s children are concentrated in separate neighborhoods and schools. Today, though, racial segregation characterizes every metropolitan area in the United States and bears responsibility for our most serious social and economic problems: Not only does it produce achievement gaps but it predicts lower life expectancies and higher disease rates for African Americans who reside in less healthy neighborhoods, and it corrupts our criminal justice system when police engage in violent altercations with young men who are concentrated in neighborhoods with inferior access to good jobs in the formal economy and without transportation to access those jobs (and for the same reason, segregation exacerbates economic inequality, too).

      Racial segregation also undermines our ability to succeed, economically and politically, as a diverse society. Some might argue that “a Black child does not have to sit next to a White child to learn.” They are wrong: Not only should Black children sit next to White children, but White children should sit next to Black children. A diverse adult society is inevitable; failing to prepare children for it invites disastrous conflict. This has become readily apparent, as our growing political polarization — which maps closely onto racial lines — threatens our very existence as a democratic society….

      …But if segregation has been created by government's explicit racial policies — that is, if residential segregation itself is a civil rights violation — then not only are we permitted to remedy it, we are required to do so.

      And we are so required. Not only did local police forces organize and support mob violence to drive Black families out of homes on the White side of racial boundaries, the federal government purposefully placed public housing in high-poverty, racially isolated neighborhoods to concentrate the Black population. It created a Whites-only mortgage insurance program to shift the White population from urban neighborhoods to exclusively White suburbs. The Internal Revenue Service granted tax exemptions to nonprofit institutions that openly sought neighborhood racial homogeneity. State government licensing agencies enforced a real estate brokers’ “code of ethics” that prohibited the sale of homes to African Americans in White neighborhoods. Federal and state regulators allowed the banking, thrift, and insurance industries to deny loans to homeowners in other-race communities.

      When the federal government first constructed civilian public housing during the Great Depression, it built separate projects for White and Black families, often segregating previously integrated communities. For instance, the great African-American poet, Langston Hughes, described in his autobiography how, in early-20th-century Cleveland, he went to an integrated neighborhood high school where his best friend was Polish and he dated a Jewish girl. However, the Public Works Administration — a federal agency created under the New Deal — demolished housing in that integrated neighborhood to build racially segregated public housing, creating residential patterns that persisted long into the future. This was the case even in places that today consider themselves racially progressive. In Cambridge, Mass., for example, the Central Square neighborhood between Harvard and the Massachusetts Institute of Technology was integrated in the 1930s, about half Black and half White. But the federal government razed integrated housing to create segregated projects that, with other projects elsewhere in the region, established a pattern of segregation throughout the Boston metropolitan area.

      During World War II, hundreds of thousands of White and African-American migrants flocked to war plants in search of jobs, and federal agencies systematically segregated the war workers’ housing. In many cases, officials did so in places where few African Americans lived before the war and little previous pattern of segregation existed. Richmond, Calif., a suburb of Berkeley, was one such case. It was the largest shipbuilding center on the West Coast, employing 100,000 workers by war's end. In Richmond, African-American workers were housed in separate buildings along the railroad tracks in an industrial area, while White workers were housed adjacent to a shopping area and White neighborhoods.

      Residents of even the most segregated communities couldn’t count on staying put, however. At the end of the war, local housing agencies in most parts of the country assumed responsibility for such projects and maintained their racial boundaries. However, Berkeley and the University of California (which owned some of the land on which war workers had been housed) refused to permit the public housing to remain, arguing not only that it would change the “character” of the community but also that the site wasn’t suitable for housing. The war projects were demolished and African-American residents were placed in public housing in Oakland. Then, the university reconsidered the site's suitability for housing and used the property for graduate student apartments.

      To be sure, some public officials fought against such policies and practices. In 1949, for instance, the U.S. Congress considered a proposal to prohibit racial discrimination in public housing. It was voted down, however, and federal agencies went on to cite this vote as justification for segregating all federal housing programs for at least another decade.

      Thus, during the years after World War II, the Federal Housing Administration (FHA) and Veterans Administration (VA) subsidized the development of entire subdivisions to house returning veterans and other working-class families on a Whites-only basis. Communities like Levittown (east of New York City), Lakewood (south of Los Angeles), and hundreds of others in between could be built only because the FHA and VA guaranteed the builders’ bank loans for purchase of land and construction of houses. The FHA's Underwriting Manual for appraisers who investigated applications for such suburbs required that projects could be approved only for “the same racial and social classes” and prohibited developments close enough to Black neighborhoods that they might risk “infiltration of inharmonious racial” groups….

      By 1962, when the federal government renounced its policy of subsidizing segregation, and by 1968, when the Fair Housing Act banned private discrimination, the residential patterns of major metropolitan areas had already been set in concrete. White suburbs that had previously been affordable to the Black working class were no longer so, both because of the increase in suburban housing prices and because other federal policies had depressed Black incomes while supporting those of Whites.

      …Further, when researchers have looked closely at the handful of experimental programs that have assisted low-income families with young children to move to integrated housing, they have observed positive effects on those children’s performance in school.

      …That’s why it's so critical, for example, to challenge those who would misinform young people about the country's recent past. Even today, the most widely used middle and high school history textbooks neglect to mention the role of public housing in creating segregation, and they portray the FHA as an agency that made home ownership possible for working-class Americans, with no mention of those who were excluded. Likewise, they describe state-sponsored segregation as a strictly Southern phenomenon, and they portray discrimination in the North as the result of private prejudice alone, saying nothing about the active participation of local, state, and federal governments….

Confronting our Beliefs about Poverty and Discipline

[These excerpts are from an article by Edward Fergus in the February 2019 issue of Phi Delta Kappan.]

      Dating back to Brown v. Board of Education, Mendez v. Westminster, and other landmark court decisions of the mid-20th century, civil rights advocates have prioritized efforts to desegregate school systems and ensure the equitable distribution of educational resources.

      However, the civil rights struggle has always focused not just on passing laws and securing resources but also on challenging the beliefs that underlie segregation and worsen its effects. And the more researchers have learned about the psychology of racial discrimination, the more obvious the need to tackle certain biases that continue to be prevalent among educators, resulting in deficit-based thinking…, low academic expectations for particular students…, and misguided claims of “colorblindness”….

      An additional form of bias — poverty-disciplining belief — has received somewhat less attention from equity advocates, but it appears to be quite common in schools. Poverty-disciplining belief is the assumption that poverty itself is a kind of “culture,” characterized by dysfunctional behaviors that prevent success in school….In effect, it pathologizes children who live (or whose parents lived) in low-income communities. And while it doesn't focus on race per se, it is often used as a proxy for race and to justify racial disparities in disciplinary referrals, achievement, and enrollment in gifted, AP, and honors courses, as well as to justify harsh punishments for “disobedience” or “disorderly conduct” or “disrespect….”

      The belief that poor people are in need of discipline rests, in turn, on a highly debatable premise, the idea that the economic status of a community determines the value of its cultural practices: The poorer the community, the more impoverished and dysfunctional its culture; the richer the community, the more culturally refined it must be.

      That’s hardly a new idea; elites in every society tend to assert the superiority of their chosen customs, norms, and behaviors. But in recent decades it has been given an academic sheen, beginning with the cultural deprivation theory (also known as the “culture of poverty” argument) of the 1960s, which maintained that the low academic performance of racial and ethnic minorities stemmed from their deficient cultural practices….Supposedly, parents in certain cultures tend to suppress the development of linguistic, cognitive, and affective skills their children need to succeed in school.

      …Among the nearly 1,600 practitioners surveyed, nearly a third agreed (ranging from somewhat to strongly agree) that the values students learn growing up in disadvantaged neighborhoods conflict with school values; more than a quarter agreed that such students do not value education, and roughly one in six believe poor kids lack the abilities necessary to succeed in school. In short, a significant percentage of school practitioners appear to believe that the values and behaviors learned in low-income communities conflict with those taught in school.

      Increasingly, I’ve found also that educators are framing their assumptions about poor and minority children in terms borrowed from the biological and cognitive sciences, especially research into the effects of long-term exposure to lead paint, food insecurity, violence, and other environmental dangers, and a lack of exposure to certain positive influences, such as frequent reading time at home….

      In short, not only do significant numbers of school practitioners believe that when students from low-income backgrounds struggle it must be the fault of their culture, but some practitioners are tempted to dress up that belief in “scientific” evidence about what it means to grow up in poverty. Supposedly, it is the nature of low-income families to expose their children to trauma and to deny them appropriate support for language development.

      But in fact, mountains of research findings suggest that while poverty may put children at somewhat elevated risk for trauma and other negative influences on development, poverty is far from a deterministic condition….If an individual student has trouble learning to read, behaving appropriately in class, or meeting other expectations, it is for complex reasons having to do with that individual and the specific people and institutions in their lives. It is not simply because they are poor….

      The challenge for educators is to get over the habit of pathologizing entire populations of young people, as though the struggles of individual students could ever be explained merely by pointing out that they're poor. We need to get it into our heads that poverty is not a deterministic condition; it doesn't tell us anything about the ways in which any particular kid — from any particular race or ethnicity — will develop, the kinds of instruction they’ll need, or the level of “discipline” they require.

      Further, given the inevitability of our own blind spots, we have a responsibility to seek out regular feedback on the racial and economic ecology of our schools. As Eduardo Bonilla-Silva…has argued, even if there are no out-and-out racists among us, racism is still often woven tightly into the social fabric of our schools, subtly influencing our assumptions about ability, intelligence, behavior, and more….

Why We Need a Diverse Teacher Workforce

[These excerpts are from an article by Dan Goldhaber, Roddy Theobald and Christopher Tien in the February 2019 issue of Phi Delta Kappan.]

      …A significant body of literature argues that a match between the race and ethnicity of teachers and students leads to better student outcomes, particularly in high-poverty environments with significant at-risk student populations….At least three commonly cited theoretical rationales suggest why racially matched teacher role models have positive educational benefits for students of color in particular. The first is that students of color, particularly those living and attending schools in disadvantaged settings, benefit from seeing role models of their race in a position of authority….In particular, some scholars have suggested that having an adult role model who exemplifies academic success could alleviate the stigma of “acting White” among some students of color….

      Second, some researchers argue that teachers of color are more likely to have high expectations for students of color….This is important because students of color, especially Black students, appear to be more sensitive to teacher expectations than middle-class White students….And when teachers allow negative stereotypes to lower expectations, a “self-fulfilling prophecy” takes hold to perpetuate poor performance of students of color….

      Finally, some argue that teachers of different backgrounds are able to draw on their own cultural contexts when determining instructional strategies and interpreting students’ behavior. A vast literature finds that Black students are more likely to be disciplined and suspended from school than other students, even after accounting for the nature of students’ misconduct….These disparities in disciplinary actions could be based in part on teacher interpretation of student behavior, which may be informed by negative stereotypes….

      These theoretical arguments suggest several ways that increasing the diversity of the teacher workforce might improve outcomes for students of color. When empirical researchers have considered the effects of teacher diversity, they have generally found that, all else being equal (and, importantly, all else is often not equal), students of color do appear to benefit when they are taught by a teacher of the same race or ethnicity. Much of this empirical evidence focuses on student test performance, but we also discuss empirical evidence related to other important outcomes such as subjective evaluations and discipline….

      The theoretical arguments and empirical evidence generally support the notion that improving the diversity of the teacher workforce would help close racial achievement gaps in public schools. However, teacher workforce diversity is just one of many ways to improve the education system, and diversifying the teacher workforce may present substantial challenges and potential unintended consequences. One challenge is that we know very little about what contributes to the lack of diversity in the teaching workforce. We must understand the answer to that question before we can design effective strategies to recruit more teachers of color. Another challenge is that, while the empirical evidence is consistent with the three theoretical arguments about the importance of teacher workforce diversity discussed above, we don’t have conclusive evidence for why students of color appear to benefit from assignment to a teacher of the same race….

Voluntary Integration in Uncertain Times

[These excerpts are from an article by Jeremy Anderson and Erica Frankenberg in the February 2019 issue of Phi Delta Kappan.]

      The U.S. Supreme Court’s 1954 decision in Brown v. Board of Education — declaring state laws segregating students by race to be “inherently unequal” — stands as one of the most important decisions in the nation’s history. Over the following 15 years, and with help from other branches of the federal government, courageous Black lawyers and plaintiffs, and Black educators working behind the scenes…, the South’s schools were transformed. Despite initial protests and fierce resistance in many local communities, the region’s public school systems became the most integrated in the country and remained so for several decades.

      Since the 1990s, however, many judges have chosen to release school districts from court-ordered desegregation plans, which has prompted a wave of resegregation, especially in the South….Further, the decline in court supervision means that any new integration efforts must be carried out voluntarily by school districts. However, even voluntary school integration has been dealt a setback, thanks to the U.S. Supreme Court’s 2007 decision in Parents Involved in Community Schools v. Seattle School District #1, which limited the ways in which districts can choose to promote diversity and reduce racial isolation….

      This seeming reversal of Brown comes despite growing evidence that students from all backgrounds, White students included, tend to benefit both academically and socially from racial integration….Additionally, students who have attended desegregated schools tend to be more comfortable with those of other racial and ethnic backgrounds, and better prepared to participate in our increasingly diverse democracy…than those who were educated in more racially isolated schools….

      Convinced of these benefits, many school district officials continue to pursue school integration even though they are not legally required to do so, and even though it can be very challenging politically for them to change existing student assignment policies….However, since the Parents Involved decision, those officials have received mixed and confusing signals about which methods of voluntary integration they are and are not permitted to use.

      In 2009, recognizing that many district leaders were confused about their options, Congress authorized a small grant program to provide some districts with assistance. Further, in 2011, the Obama administration published additional legal and policy guidance including examples of ways in which schools could legally pursue voluntary integration and methods that incorporate race as a factor in school assignment decisions alongside those that use methods not involving race….And in 2016, the U.S. Department of Education proposed a larger program to assist districts in designing or implementing voluntary integration strategies. However, that program was cancelled by newly appointed Secretary of Education Betsy DeVos in March 2017….

      This back-and-forth over what is and isn’t permissible has had a chilling effect on school districts’ voluntary integration plans. While some districts have forged ahead, others have given up on their plans, fearing that whatever approach they choose would run into legal challenges….

School Segregation: A Realist’s View

[These excerpts are from an article by Jerry Rosiek in the February 2019 issue of Phi Delta Kappan.]

      As a nation, we often think of racial segregation in schools as an unjust form of social organization that we put behind us long ago, like aristocratic monarchies or the denial of women's right to vote. The inequity of these arrangements is so obvious, it feels indisputable that we should never return to them. The truth, however, is that racial segregation has incrementally returned to U.S. schools over the last 30 years. Like a disease that was never fully cured, school segregation has come out of remission and returned in a form that is more pervasive and harder to treat.

      This relapse should not be surprising. The U.S. Justice Department actively desegregated public schools for only five years. Although the Supreme Court made its landmark Brown v. Board of Education decision in 1954, it was not until 1968 that the Green v. County School Board decision enabled federal enforcement of desegregation orders. That same year Richard Nixon was elected president and soon ordered the Justice Department to reduce enforcement of desegregation rulings….As the cases already in the system ran their course, the momentum of the judicial desegregation movement dissipated. By 1980, desegregation of the schools had peaked.

      This reduction in enforcement was followed by an organized effort by conservative activists to roll back desegregation gains in the courts. In 1992, the Supreme Court Freeman v. Pitts decision made it easier to get desegregation orders lifted, and since that time, more than half of the desegregation orders issued by federal courts have indeed been rescinded. In almost every case where such orders have been lifted, school districts have moved back in the direction of greater segregation….As a result, 50 years after the Green decision, our schools are more racially segregated by some measures now than they were in 1968….

      The theory of change underlying court-mandated desegregation was that a generation of citizens educated in racially desegregated schools would normalize racial integration. The hope was that communities would achieve what the courts called “unitary status,” a condition in which racism would dissipate enough that communities could be trusted not to racially segregate their schools once court orders were no longer in place. Given the rapidity and consistency with which school districts have resegregated, we are forced to conclude that this was a false hope.

      …The 1974 U.S. Supreme Court ruling Milliken v. Bradley struck down efforts to desegregate schools across district lines. In demographically diverse districts, segregation often involves creating schools with few or no White students, so as to maintain relatively high percentages of White students at the remaining schools….In other cases, we see secessionist movements where wealthy predominantly White neighborhoods break away from more diverse school systems and form their own school districts….

      The new segregation also combines race and class segregation. Because of persistent patterns of race and class segregation in housing, as well as racial disparities in wealth accumulation, students of color and low-income students of all races are concentrated in the same schools. Wealthier households (in which White families are overrepresented) can afford to relocate to residential zones with more political clout, and once there, these families invest their political capital in securing advanced curriculum and other educational resources for their schools, but not for others. Voucher systems, open enrollment, and charter schools have all been offered as means of disrupting the influence of residential segregation on school enrollment; however, the evidence indicates such school choice plans either increase school segregation or leave it unaltered….

      Efforts to rezone schools to address racial segregation increasingly are met with objections that such efforts violate the principle of color-blind jurisprudence, objections delivered without the slightest sense of irony….

      Racial segregation in schools today is unequivocally a national phenomenon. From 1970 to the early 1980s, federal desegregation orders focused mainly on the Southeastern states, resulting in the lowest levels of racial segregation in the country. However, while segregation is now on the rise again in the South, it is no longer concentrated in that region, having increased dramatically in major urban centers in the North, Midwest, and West. New York City, for example, currently has the most racially segregated schools in the United States….

      Further, even where federal desegregation orders have remained in force, racial segregation has quietly reappeared at the classroom level….

      The tendency of White citizens to hoard educational resources for themselves has proven more resilient than civil rights era desegregationists anticipated. Denied the instruments of explicit law and policy, the desire for White majority educational spaces has found other means of enactment. It has used residential housing patterns and school zoning policy. It has bent the courts to its defense. It has camouflaged itself with new rhetoric and new rationales. It has moved into the capillaries of our education system, segregating students at the classroom level through tracked curriculum. What it has not done is go away….

      Although the form of racial segregation has evolved, its odious effects have remained consistent. Black children in racially isolated schools perform less well on standardized tests, their graduation rates are lower, and college attendance is lower. Income levels and wealth accumulation across generations are lower for Black people who attend racially segregated schools, and lower health outcomes across a person's life span are correlated with racially isolated schooling….Careful research suggests that these effects are not the consequence of characteristics found within students or their families. Instead, they are strongly correlated with the greater per-pupil spending that comes with enrollment in racially integrated or predominantly White schools….In other words, they are a consequence of the way resources follow White students….

      Anti-racist professional development and activism are already practiced in many places, to be sure. Unfortunately, these practices remain largely on the margins of our education system. The fact that we are sending our children to racially resegregating schools means that anti-racist policy and practice must become mainstream. Anything less constitutes a failure to face the persistent reality of racism in our schools.

United States Alone in Opposing UN Resolution

[This news clip by Marian Starkey is in the March 2019 issue of Population Connection.]

      At the 55th plenary meeting of the UN General Assembly in December, the United States was alone in voting against a nonbinding resolution on the “Intensification of efforts to prevent and eliminate all forms of violence against women and girls: sexual harassment.” The rest of the attending countries either voted yes (130) or abstained from voting (31). Countries that voted yes included Afghanistan, Congo, Myanmar, North Korea, Saudi Arabia, South Sudan, and Yemen.

      In another vote that same day, on “Child, early, and forced marriage,” the United States was one of two countries to vote no. The other country was Nauru, a small Pacific island nation that serves as a detention camp for refugees and migrants to Australia. In that vote, 134 countries voted yes and 32 abstained from voting.

      References to sexual and reproductive health in both resolutions raised concerns among American delegates that voting yes could be interpreted as supporting or promoting abortion.

Climate Change Impacts on Fisheries

[These excerpts are from an article by Eva Plaganyi in the 1 March 2019 issue of Science.]

      Food security, climate change, and their complex and uncertain interactions are a major challenge for societies and ecologies. Global assessments of predicted changes in crop yield under climate change, combined with international trade dynamics, suggest that disparities between nations in production and food availability will escalate. But climate change has already affected productivity. For example, weather-related factors caused declines in global maize and wheat production of 3.8% and 5.5%, respectively, between 1980 and 2008….that indicates a 4.1% decline between 1930 and 2010 in the global productivity of marine fisheries, with some of the largest fish-producing ecoregions experiencing losses of up to 35%. Their spatial mapping can help to inform future planning and adaptation strategies….

      Losses in seafood production are of concern because seafood has unique nutritional value and provides almost 20% of the average per-person intake of animal protein for 3.2 billion people. However, wild capture fisheries yield has likely already reached its natural limits. Coupled with projected human population increases, this means that per-person seafood availability is certain to decline. This shortfall will be exacerbated by future losses in fisheries production as a result of warming….Aquaculture, which parallels the intensification of crop production on land, has been proposed as one potential solution. However, this sector’s initial high growth rates have declined, and its future scope for growth is uncertain and susceptible to environmental extremes. Given the widening gap between current and projected per-person fisheries yield…, other protein sources and solutions will be needed to avoid food shortages.

      Future fisheries production may be at even greater risk considering that, owing to anthropogenic climate change, the oceans are continuing to warm even faster than originally predicted. Moreover, extreme temperature events are on the increase, with profound negative consequences for fisheries and aquaculture. Regional declines in some species will thus increasingly not be counterbalanced by increases, as populations exceed their thermal optima or become subject to other environmental stressors such as reduced oxygen concentration and ocean acidification. These additional effects could not be accounted for….

      Regional fishery managers and stakeholders can influence future sustainable fisheries production and food security through the development, adoption, and enforcement of sustainable management strategies and practices. These strategies should be pretested for robustness to temperature-driven changes in productivity. However, global efforts are needed to contain the rise in global mean temperature to no more than 2°C, beyond which the integrity of marine and terrestrial ecological systems, and hence our food supplies, becomes compromised.

Mobilize for Peace

[This editorial by Jonathan A. King and Aron Bernstein is in the 1 March 2019 issue of Science.]

      Fifty years ago, on 4 March 1969, research and teaching at the Massachusetts Institute of Technology (MIT) came to a halt as students, faculty, and staff held a “research strike” for peace. The strike protested United States involvement in the Vietnam War and MIT's complicity in this engagement through its Instrumentation Laboratory (now the Draper Laboratory), a major contracting lab for the U.S. Department of Defense. The anniversary of this activism by scientists…is a reminder that the scientific community must continue to recognize its social responsibilities and promote science as a benefit for all people and for a peaceful world.

      By the end of 1968, nearly 37,000 American soldiers had died in the Vietnam conflict and a draft lottery pulled about 300,000 men into the military in that year alone. In the years following the MIT strike, opposition to the U.S. role in Vietnam grew across the nation, particularly on college campuses. Academia condemned the nation's hand in the war, including the American Association for the Advancement of Science…, which, in 1972, passed a resolution denouncing U.S. involvement in Vietnam, stating that scientists and engineers did not endorse such “wanton destruction of man and environment.” These activities arguably helped to end the Vietnam War. In addition, the March 4th strike elevated the voices of scientists urging the use of research and technology for peaceful purposes. Indeed, some years later, as nuclear weapons proliferated and tensions escalated between the United States and the Soviet Union, scientists protested again. They helped initiate the national Nuclear Weapons Freeze campaign, which brought more than 1,000,000 people to New York’s Central Park in protest in 1982. The event galvanized public support to curb the nuclear arms race, and limitations were eventually negotiated between the two nations.

      Over the past 50 years, science has seen rapid growth in biotechnology, computer science, and telecommunications, among other fields. These advances have opened up rich arenas for applications of research and technology, and consequently, have raised ethical concerns. For example, advances in epidemiology and biochemistry identified environmental and occupational toxins and carcinogens, but engendered strong resistance from manufacturers. And in documenting accelerating climate change and its potentially devastating social and economic consequences, the scientific community has informed national and global climate policies, but has been met by resistance from sectors of the energy industry.

      Such commitment and engagement must now be extended once again to the renewed danger of nuclear war. Last month, President Trump announced that the United States would pull out of the Intermediate-Range Nuclear Forces Treaty, threatening a new nuclear arms race with Russia. In return, last week, President Putin issued a warning about bolstering Russia’s nuclear arms. The 1987 pact bans the development of certain ground-based missiles and has been a model treaty for arms control agreements between major powers. Withdrawing from this treaty is a threat to U.S. national security and to Europe. Congress has also supported spending $1.7 trillion over the next 25 years to upgrade land-launched nuclear missiles, nuclear missile-armed submarines, and aircraft armed with nuclear missiles and bombs.

      The scientific community needs to remind the Trump administration, Congress, and the public of the destructive power of atomic bombs and to communicate how federal investment in housing, health care, education, environmental protection, sustainable energy development, and basic and biomedical research will be sacrificed to build up a nuclear weapons arsenal. Young scientists, in particular, need to fight for a peaceful future and speak out against profound misuses of resources for war. It is time to mobilize for peace in ways that effectively promote social responsibility in science.

Is Antarctica Collapsing?

[These excerpts are from an article by Richard B. Alley in the February 2019 issue of Scientific American.]

      Glaciers are melting, seas are rising. We already know ocean water will move inland along the Eastern Seaboard, the Gulf of Mexico and coastlines around the world. What scientists are urgently trying to figure out is whether the inundation will be much worse than anticipated—many feet instead of a few. The big question is: Are we entering an era of even faster ice melt? If so, how much and how fast? The answer depends greatly on how the gigantic Thwaites Glacier in West Antarctica responds to human decisions. It will determine whether the stingrays cruising seaside streets are sports cars or stealthy creatures with long, ominous tails.

      Global warming is melting glaciers up in mountainous areas and expanding ocean water, while shrinking ice at both poles. Averaged over the planet's oceans for the past 25 years, sea level has risen just over a tenth of an inch per year, or about a foot per century. Melting the rest of the globe's mountain glaciers would raise the sea a little more than another foot. But the enormous ice sheets on land in the Arctic and Antarctic hold more than 200 feet of sea-level rise; a small change to them can create big changes to our coasts. Ice cliffs many miles long and thousands of feet high could steadily break off and disappear, raising seas significantly.

      Well-reasoned projections for additional sea-level rise this century have remained modest—maybe two feet for moderate warming and less than four feet even with strong warming. Scientists have solid evidence that long-term, sustained heating will add a lot to that over ensuing centuries. But the world might be entering an era of even more rapid ice melt if the front edges of the ice sheets retreat.

      To learn whether this could happen, we look for clues from changes underway today, aided by insights gained about Earth's past and from the physics of ice. Many of the clues have come from dramatic changes that started about two decades ago on Jakobshavn Glacier, an important piece of the Greenland Ice Sheet. Glaciers spread under their own weight toward the sea, where the front edges melt or fall off, to be replaced by ice flowing from behind. When the loss is faster than the flow from behind, the leading edge retreats backward, shrinking the ice sheet on land and raising sea level.

      During the 1980s Jakobshavn was among the fastest-moving glaciers known, racing toward Baffin Bay, even though it was being held back by an ice shelf—an extension of the ice floating on top of the sea. In the 1990s ocean warming of about 1.8 degrees Fahrenheit (one degree Celsius) dismantled the ice shelf, and the glacier on land behind it responded by more than doubling its speed toward the shore. Today Jakobshavn is retreating and thinning extensively and is one of the largest single contributors to global sea-level rise. Geologic records in rocks there show that comparable events have occurred in the past. Our current observations reveal similar actions transforming other Greenland glaciers.

      If Thwaites, far larger, unzips the way Jakobshavn did, it and adjacent ice could crumble, perhaps in as little as a few decades, raising sea level 11 feet. So are we risking catastrophic sea-level rise in the near future? Or is the risk overhyped? How will we know how Thwaites will behave? Data are coming in right now….

      Warming air can create lakes on top of the ice shelves. When the lakes break through crevasses, a shelf can fall apart. For example, the Larsen B Ice Shelf in the Antarctic Peninsula, north of Thwaites, disintegrated almost completely in a mere five weeks in 2002, with icebergs breaking off and toppling like dominoes. That did not immediately raise sea level—the shelf was floating already—but the loss of the shelf allowed the ice sheet on land behind it to flow faster into the ocean—like pulling a spatula away, allowing the batter to run. The ice flowed as much as six to eight times quicker than it had been moving earlier. Fortunately, there was not a lot of ice behind the Larsen B Ice Shelf in the narrow Antarctic Peninsula, so it has raised sea level only a little. But the event put society on notice that ice shelves can disintegrate quickly, releasing the glaciers they had been holding back. Ice shelves can also be melted from below by warming seawater, as happened to Jakobshavn.

      When shelves are lost, icebergs calve directly from ice-sheet cliffs that face the sea. Although this delights passengers on cruise ships in Alaska and elsewhere, it speeds up the ice sheet's demise. At Jakobshavn today, the icebergs calve from a cliff that towers more than 300 feet above the ocean's edge—a 30-story building—and extends about nine times that much below the water. As these icebergs roll over, they make splashes 50 stories high and earthquakes that can be monitored from the U.S.

      So far ice-shelf loss and ice-cliff calving are contributing moderately to sea-level rise. But at Thwaites, this process could make the rise much more dramatic because a geologic accident has placed the glacier near a “tipping point” into the great Bentley Subglacial Trench….

Ghosts of Wildlife Past

[These excerpts are from an article by Rachel Nuwer in the February 2019 issue of Scientific American.]

      Kenya’s national parks serve as oases in an increasingly human-crowded world, but they are not a conservation panacea. As in much of East Africa, a striking two thirds of the country's wildlife resides outside of national parks—and these animals are not welcome visitors for many landowners, who see them as competition for livestock. But in a rare win-win situation for humans and nature, researchers have now shown that livestock and wildlife can benefit from each other’s presence. A study published last October in Nature Sustainability found that wildlife can boost bottom lines by providing opportunities for tourism, and livestock improve the quality of grass for all grazing species.

      Recent history explains this symbiosis. Animals and savanna grasses evolved together for millennia—but Kenya’s wildlife population dropped by about 70 percent between 1977 and 2016, according to a 2016 PLOS ONE study. With fewer animals around to encourage new growth by removing old and dead grass stems, it seems livestock have stepped in to fill that ecological role.

      …To the team members’ surprise, they found benefits only in combining moderate numbers of cattle and wildlife. At mixed properties, livestock treated for ticks reduced the overall number of those pathogen-carrying parasites by 75 percent—and grass quality was higher than in livestock- or wildlife-only areas, which tended to be overgrazed or undergrazed, respectively….

Reengineering the Colorado River

[These excerpts are from an article by Heather Hansman in the February 2019 issue of Scientific American.]

      …wanted to see if holding the river at a consistent level would aid the struggling native bug population, 85 percent of which lay their eggs in the intertidal zone. Those eggs can get wet, but they cannot get dry; eggs laid at high tides desiccate within an hour of the water dropping.

      Bugs might seem like a lowly thing to focus on. But they form the basis of a complex food web. When their numbers drop, that reduction affects species, such as bats and endangered humpback chub, that feed on them. In a national park held up as an iconic wild place, Kennedy and his group are trying to figure out why, according to their research published in 2016 in BioScience, the Grand Canyon section of the Colorado has one of the lowest insect diversities in the country….

      Last summer the researchers were testing whether adjusting dam releases so that the Colorado runs closer to its natural course might help insect populations recover. In those tests, they artificially created the kind of flow patterns that allowed life to flourish before the dam went in—without removing the dam itself.

      Nearly 40 million people depend on the Colorado for the necessities of daily life, including electricity, tap water and the irrigation of 10 percent of land used for U.S. food production. Ever since Glen Canyon Dam opened in 1963, the river has been engineered to accommodate these demands. Doing so changed the ecosystem balance, which was dependent on ingredients such as sediment, snowmelt and seasonal flows. For more than 30 years researchers have been trying to figure out how to help the ecosystem coexist with human needs, and they are finally beginning to test some solutions. By working out an experimental flow schedule that minimally impacted power generation, the 2018 bug tests marked one of the first times that dam operations were adjusted for species health in the Grand Canyon.

      Meanwhile, though, the river is dwindling. The Colorado River Basin has been in a drought for almost two decades; 2018 was the third-driest year ever recorded. Since 2000 ambient temperatures in the basin have been 1.6 degrees Fahrenheit warmer than 20th-century averages, and researchers predict they will reach up to 9.5 degrees F hotter still by 2100. The effects of climate change could decrease river flow by as much as half by the end of the century. With earlier snowmelt and more evaporation, the Bureau of Reclamation has predicted that it may have to cut the amount of water it sends downstream for the first time—as soon as 2020. That will stress every part of the system, from hydropower and city water supplies to native fish populations. It will also mean less room for experimental flows, a tool the scientists think is critical for understanding how to protect the canyon.

      The insect research is a meaningful step toward sustaining the river for habitat as well as for humans. It also runs straight into a core conflict between science and Colorado River policy: scientists want the flexibility to experiment, whereas power and water managers want stability. As the Colorado dries up, this conflict will intensify. And yet if Kennedy and others can show that changing the flow can bring back insect populations, it could make ecosystem health a bigger priority for those who manage the most used river in the West….

      Giving scientists a voice in water management has led to new insights about how the Colorado River reservoirs are suffering from climate change. Much of the news is alarming. A 2017 study…found that average Colorado River flows in the 21st century were 19 percent lower than in the 20th. They predicted flows could drop by up to 55 percent by 2100 as a result of the effects of global warming….

      There is also problematic math at Lake Powell. Because the Colorado's water is allocated down to virtually the last drop, the lake level is crucial to a 1922 legal compact that guarantees 8.23 million acre-feet of water will flow past Lees Ferry every year. The Bureau of Reclamation built reservoirs—including Powell—starting in the 1950s. But these storage systems only work if they are replenished. Lake Powell is considered “full” at 3,700 feet above sea level; the last time that happened was 1986. In 2002, the driest year on record, only a few million acre-feet flowed into Powell from upstream on the Colorado. Because Lake Powell’s entire purpose is to keep the downstream water supply consistent even when it does not rain or snow, the legally obligated 8.23 million acre-feet still went out.

      This logic, however, is fundamentally flawed. The compact water was allocated based on calculations done by the Bureau of Reclamation in the early 1900s, the wettest period in measured history, which concluded that 18 million acre-feet of water flowed through the river basin every year. Data collected from USGS river gauges installed at Lees Ferry in 1922 have shown that average yearly flows are actually 14.8 million acre-feet. Because the compact is federal law, the 8.23 million acre-feet of downstream obligations still stand. Water managers call this a structural deficit, and it means that if every state claims its entire share—a near-future scenario thanks to projects like the Lake Powell Pipeline—there will not be enough to go around.

      The accelerating drought has become so threatening that in 2018, seven “basin states” drafted contingency plans. Each state outlined how much of its allocated compact water it would leave in reservoirs if Lake Powell’s level hit 3,525 feet above sea level—just high enough to comfortably maintain power production at the dam. (In November 2018 Powell was at 3,588 feet.) Although interim guidelines came out in 2007, this official step marked the first time since the compact was signed nearly 100 years ago that the basin states made a legally enforceable plan for a drier future. Finally, policy is starting to reflect science….

Urine Trouble

[These excerpts are from an article by Andy Extance in the February 2019 issue of Scientific American.]

      In a disturbing trend, scam artists are using commercially sold fake urine to fool doctors into prescribing pain medications such as hydrocodone—which can then be consumed or illegally sold. The synthetic pee lets patients pass tests intended to ensure they are not already taking opioid medications or drugs of abuse….

      Hoping to address the situation, Kyle and his pathologist colleague Jaswinder Kaur have now shown how legal indulgences—including chocolate, coffee and cigarettes—can help distinguish real pee from fake….

      The new method, described at the annual Society of Forensic Toxicologists (SOFT) meeting last October in Minneapolis, looks for four substances common in urine: caffeine and theobromine, both found in chocolate, tea and coffee; cotinine, produced as nicotine breaks down; and urobilin—degraded hemoglobin that gives urine its yellow color. The technique employs liquid chromatography to separate urine, just as water spilled on paper separates ink into different colors. The compounds then flow into mass spectrometers that identify them by their molecular weights.

      …the test will not detect people passing off others’ pee as their own. “If someone is carrying synthetic urine or somebody else’s clean urine, you have to do observed collection,” she says. Peace also warns that fake urine makers could easily add substances such as caffeine or theobromine to their products.

      Some already do, Kyle says. He emphasizes that testing must therefore look for compounds naturally produced in our bodies (urobilin, in this case). Combining that with commonly consumed substances makes the test even more powerful—and is potentially more practical than watching people pee.

Call the Midwife … If You Can

[These excerpts are from an editorial by the editors in the February 2019 issue of Scientific American.]

      Despite the astronomical sums that the U.S. spends on maternity care, mortality rates for women and infants are significantly higher in America than in other wealthy countries. And because of a shortage of hospitals and ob-gyns, especially in rural areas, many women struggle to access proper care during pregnancy. Moreover, the rate of cesarean sections is exceedingly high at 32 percent—the World Health Organization considers the ideal rate to be around 10 percent—and 13 percent of women report feeling pressured by their providers to have the procedure.

      Widespread adoption of midwife-directed care could alleviate all these problems. In many other developed countries, such as the U.K., France and Australia, midwifery is at least as common as care by obstetricians. In the U.S., certified midwives and nurse-midwives must hold a graduate degree from an institution accredited by the American College of Nurse-Midwives, and certified professional midwives must undergo at least two years of intensive training. This is designed to make midwives experts in normal physiological pregnancy and birth. Thus, for women with low-risk pregnancies who wish to deliver vaginally, it often makes sense to employ a midwife rather than a more costly surgeon. Yet only about 8 percent of U.S. births are attended by midwives.

      The roots of America's aversion to midwifery go back to the late 1800s, when the advent of germ theory and anesthesia reduced much of the danger and discomfort associated with childbirth. The benefits of these technologies brought doctors to the forefront of maternity care and pushed midwives aside. Obstetricians helped to bar midwives from practicing in hospitals, which were now considered the safest birth settings. By the early 1960s midwifery was virtually obsolete.

      It has made a comeback since then, with practitioners just as well trained as doctors to supervise uncomplicated deliveries. Studies show that midwife-attended births are as safe as physician-attended ones, and they are associated with lower rates of C-sections and other interventions that can be costly, risky and disruptive to the labor process. But midwifery still remains on the margins of maternity care in the U.S.

      …Half of planned nonhospital births are currently paid for by patients themselves, compared with just 3.4 percent of hospital births. Thus, a less expensive birth at home may paradoxically be out of reach for women who cannot afford to pay out of pocket. U.S. hospitals charge more than $13,000, on average, for an uncomplicated vaginal birth, whereas a similar midwife-attended birth outside of the hospital reduces that figure by at least half. Insurers would save money by embracing midwife-attended, nonhospital birth as a safe and inexpensive alternative.

      A national shortage of birth centers further limits women’s choices. These homelike settings are designed to support naturally laboring women with amenities such as warm baths and spacious beds and are consistently rated highly in surveys of patient satisfaction. Yet there are only around 350 existing freestanding birth centers in the entire nation, and nine states lack regulations for licensing such facilities. More government support for birth centers would help midwives meet a growing demand, which has already fueled an increase of 82 percent in centers since 2010.

      Policy makers, providers and insurers all have good reasons to encourage a shift toward midwifery. The result will be more choices and better outcomes for mothers and babies.

The Law and Vaccine Resistance

[These excerpts are from an editorial by Dorit Rubenstein Reiss in the 15 February 2019 issue of Science.]

      Last week, the Centers for Disease Control and Prevention announced that more than 100 cases of measles, spanning 10 states, had been reported in the United States since the beginning of the year. This news came on the heels of the World Health Organization’s estimate of over 200,000 cases of measles in 2018. These numbers signal the reemergence of a preventable, deadly disease, attributed in significant part to vaccine hesitancy. Communities and nations must seriously consider leveraging the law to protect against the spread of this highly contagious disease.

      In the United States, measles was deemed “eliminated” in 2000 because of vaccination success. Since then, its reemergence has been associated with a resistance to vaccination. This also reflects the fact that unvaccinated U.S. residents visit countries that have seen large measles outbreaks (such as Ukraine, the Philippines, and Israel), become infected, and bring the disease back home.

      Outbreaks in the United States are still fewer than in, say, Europe because of unique U.S. policies and laws that maintain high vaccination coverage. All 50 states and the District of Columbia have laws requiring vaccinations for school and daycare attendance. School mandates have proven very effective: The stronger they are, the higher the vaccination rate, and the lower the risk of outbreaks.

      …States have extensive leeway to protect public health, and courts have consistently upheld strong school immunization mandates. Thus, states could tighten nonmedical exemptions (for example, by requiring consultation with a doctor) or remove these exemptions completely from school mandates. Valid medical exemptions are important, but it is less clear whether nonmedical exemptions are appropriate. Some scholars are concerned that eliminating nonmedical exemptions may generate resentment among parents and interfere with parental autonomy. Others—including professional medical associations—disagree, because mandates protect children, and a parent’s freedom to send an unvaccinated child to school places classmates at risk of dangerous diseases. There is a strong argument for removing nonmedical exemptions, and at the least, they should be hard to get, to further incentivize parents to vaccinate. In many states, however, getting an exemption is as easy as checking a box. States and localities could also require schools to provide their immunization rates to parents at the start of the school year.

      Beyond school mandates, states can consider other legal tools that have not yet been used. States could implement workplace mandates for those working with vulnerable populations, such as health care workers, teachers in schools, and providers of daycare. States could impose tort liability (civil law damages for harm) when unexcused refusal to vaccinate leads to individuals becoming infected unnecessarily or worse, to a large outbreak. States could permit teenagers to consent to vaccinations without parental approval. And states could mandate vaccinations to enroll in institutions of higher education.

      Vaccine hesitancy is a problem with many components. In handling it, societies should improve public understanding of vaccinations but also not hesitate to use the law to prevent deadly diseases from spreading.

How Rabbits Escaped a Deadly Virus—At Least for Now

[These excerpts are from an article by Elizabeth Pennisi in the 15 February 2019 issue of Science.]

      In Australia, a few dozen European rabbits introduced in the mid-1800s for hunters did what the animals famously do. They multiplied until hundreds of millions were chowing down on crops. So, in 1950, after a smallpoxlike virus found in South American rabbits turned out to kill the European relative, Australian authorities released the virus into the wild, cutting the rabbit population by 99%. A few years later, the virus, called myxoma, was released in France and eventually spread to the United Kingdom.

      …Within a decade, rabbit numbers were on the rise again as some evolved resistance to this deadly infection and the virus itself became less deadly.

      One allele shift affected the rabbits’ interferon, a protein released by immune cells that sounds the alarm about a viral attack and helps trigger an immune response. Compared with preinfection rabbits, modern rabbits make an interferon that is better at responding to the biocontrol virus, the researchers found when they added different versions of the protein to rabbit cell lines.

      The virus has not stood still. In 2017, Holmes and his colleagues reported that in the 1970s the virus developed a greater ability to suppress the rabbit's immune responses. That change, as well as the natural emergence of another rabbit-killing virus, has caused populations to decline again. But in contrast to the parallel evolution in rabbits, myxoma viruses in the various locations took different genetic paths to regaining potency….

Controversial Flu Studies Can Resume, Panel Says

[These excerpts are from an article by Jocelyn Kaiser in the 15 February 2019 issue of Science.]

      Controversial lab studies that modify bird flu viruses in ways that could make them more risky to humans will soon resume after being on hold for more than 4 years….last year, a U.S. government review panel quietly approved two projects previously considered so dangerous that federal officials had imposed an unusual moratorium on the research….

      The outcome marks the latest twist in a nearly decadelong debate over the risks and benefits of studies that aim to make pathogens more potent or more likely to spread in order to better understand them and prepare defenses. And it represents the first test of a new federal review process intended to ensure that government funding flows to such studies only when they are scientifically and ethically justifiable and done under strict safety rules….

      Some other researchers are sharply critical of the approvals, saying the review lacked transparency. After a long public discussion to develop new standards that “consumed countless weeks and months of time for many scientists, we are now being asked to trust a completely opaque process,” says Harvard University epidemiologist Marc Lipsitch, who helped lead a push to require the reviews….

A Harvest of Change in the Heartland

[These excerpts are from an article by Peter Klebnikov in the Winter 2019 issue of Solutions, the newsletter of the Environmental Defense Fund.]

      In American agriculture, corn is king. More than 89 million acres were planted in 2018, enough to fill a freight train that would more than encircle the earth.

      But growing corn has a steep environmental cost. Excess fertilizer runs off fields into rivers, lakes and groundwater, polluting drinking water around the Midwest and creating algae-filled dead zones around the country. It also forms nitrous oxide, a potent greenhouse gas.

      Historically, farmers often didn’t know how much fertilizer to use, so they applied extra to be on the safe side. This hurts downstream communities. Today, farmers increasingly want to use fertilizer more efficiently, which also saves money, and adopt other conservation practices….

      That means partnering with farmers and trade groups to advance practices such as applying fertilizer more precisely, using no-till techniques that leave more carbon in the soil, creating buffers and wetlands along rivers and streams to reduce erosion and improve water quality, and planting cover crops to protect the soil.

      Today’s tech-savvy, data-hungry farmers are using these practices to reinvent their approach to the land….

Getting America Back on Track

[These excerpts are from an article by Charlie Miller in the Winter 2019 issue of Solutions, the newsletter of the Environmental Defense Fund.]

      The state motto is “Land of Enchantment.” New Mexico's high plains, mountains and stunning deserts cast a spell over many visitors. But beyond the scenery lies a dirty secret—a 2,500-square-mile invisible cloud of methane from oil and gas operations hovers over New Mexico’s San Juan Basin, a cloud so enormous that scientists can spot it in infrared images from space.

      New Mexico’s last governor, Susanna Martinez, had no problem with this. She backed a congressional effort to allow unfettered emissions of methane, a greenhouse gas 84 times more powerful than carbon dioxide. She also vetoed three solar bills that would have nudged the state toward clean energy.

      The state’s new governor, Michelle Lujan Grisham, couldn’t be more different. She is one of many new faces of 2019 who are turning the page to an era of renewed environmental progress….

      In some races, climate was an explicit issue. Sean Casten is a writer, scientist and clean energy entrepreneur. He defeated an 11-year incumbent representing suburban Chicago who called climate change “junk science.” Casten put climate change at the heart of his campaign.

      In a Florida district spanning the Everglades and the Florida Keys—both areas highly vulnerable to climate change—the candidates, Democrat and Republican, even released dueling television ads over who was tougher on climate….

      The implications of a more environmentally friendly Congress are profound. Bills that would harm the environment, such as proposed legislation to eviscerate the Endangered Species Act and slash environmental protection budgets, are now nonstarters. Environmental supporters in Congress can now fend off moves to defund critical programs and agencies. And although the Trump administration will still try to roll back key safeguards, it no longer has a completely free hand.

      A disturbing hallmark of the Trump administration has been its unrelenting attacks on science, with climate denial at the center. One proposal now under consideration at Trump’s EPA would invalidate many studies that underpin key public health protections. Supporters of sound science can now push back.

      The House Science Committee is now led by Rep. Eddie Bernice Johnson, elected in 1992 and the first registered nurse to serve in Congress. As chair, she replaced retiring Rep. Lamar Smith, who often brought climate deniers to testify before the committee….

      A comprehensive carbon bill isn’t likely to pass in this Congress, but these are building years to get a strong, bipartisan law passed after 2020. Already, three House committees have announced climate change hearings. We expect House leaders to revive the House Select Committee on Climate Change, which through hearings can draw attention to the issue. With so many new members in Congress, this committee can also educate House newcomers on the dangers of climate change through the testimony of credible experts and scientists, instead of industry shills….

      A critical role for Congress is to provide oversight of the executive branch, a role largely abdicated in the past two years. Acting EPA Administrator Andrew Wheeler will be summoned to appear before Congress to testify under oath about why, for example, he is promoting a sham Clean Power Plan that will do almost nothing to halt climate change.

      Wheeler will face relentless scrutiny of his failure to enforce clean air and clean water rules, as well as the exodus of talented but dispirited staff from the EPA. Many have been handcuffed by political appointees as they try to protect the public from pollution and climate change….

      Victories for the environment extended far beyond Washington, D.C., on Election Day. In seven states, gubernatorial candidates promising strong climate action won their races. Six more state legislatures now have a majority supporting climate action, and more than 300 state House and Senate seats flipped….

Hope Arrives on Fragile Wings

[These excerpts are from an article by Tasha Kosviner in the Winter 2019 issue of Solutions, the newsletter of the Environmental Defense Fund.]

      It weighs less than a dollar bill and is capable of flying 2,500 miles. The beguiling strength and fragility of the monarch butterfly has captured American hearts. But in the past 30 years, loss of habitat has seen the butterfly’s population plummet 95 percent. It faces an Endangered Species Act listing decision this year.

      Last summer, on a vibrant prairie near Greenville, Missouri, thousands of monarchs seemed to defy this bleak outlook. They flitted between native flowers, fueling up for an extraordinary migration that would carry them across the Great Plains to Mexico….

      It’s hard to imagine that just a few years ago, this was degraded pasture, a monoculture of nonnative grass with no monarchs and little ecological value….

      Saving the beloved butterfly and supporting renewable energy are not the only benefits. The presence of monarchs and other pollinators such as bees and birds is a key indicator of the health of the land. Restored prairie, rich in native flora, is an ecological powerhouse. It sequesters carbon, prevents soil erosion, retains water and absorbs excess fertilizer, keeping waterways clean….

A Plan to Rescue the Amazon

[These excerpts are from an article by Tasha Kosviner in the Winter 2019 issue of Solutions, the newsletter of the Environmental Defense Fund.]

      …Tropical forest loss is responsible for 16-19 percent of global climate emissions—more than all the world’s cars and trucks combined. In the Amazon, where 95 percent of deforestation is caused by farming, scientists estimate that once 20-25 percent of the forest is destroyed, it will reach a tipping point. After this, damage to the water cycle and resulting reductions in rainfall mean the forest will begin degrading on its own. Currently we are at 19 percent.

      …But we have a steep climb ahead. In 2018, Brazil reported the worst annual deforestation rates in a decade. And in October, Jair Bolsonaro, the country’s new populist president, sailed to victory. Among his campaign pledges was a promise to open up forests on indigenous lands to mining and agriculture.

      The news is bleak. But there are some reasons for hope. Caio Penido is one of those reasons. He has ended deforestation on his ranch, leaving more than 5,000 acres of forest intact. By planting more nutritious grasses and practicing better land management he has been able to increase the number of cows per hectare, growing his business without clearing more land. Elsewhere in Mato Grosso similar projects are unfolding, increasing profitability while reducing emissions and saving forests. The Novo Campo project in the Alta Floresta region has assisted a group of beef ranchers to reduce climate emissions associated with beef production by 60 percent. The project will expand to cover 250,000 acres in the next four years.

      …the state—which produces 30 percent of Brazil’s soy and is home to more than 30 million cattle—has committed to ending illegal deforestation by 2020. To achieve this, it has partnered with local producers, communities and environmental organizations to help coordinate and fund initiatives that increase productivity while reducing deforestation….

      The proof is in the numbers. If it were a country, Mato Grosso would have been the world’s fifth-largest greenhouse gas emitter in 2004. By 2014 it would have been 50th. If it stays on track, it could cumulatively reduce emissions by more than six gigatons—that’s nearly the annual emissions of the U.S.—by 2030….

      Success ultimately will depend on local buy-in. In the lush Araguaia Valley, Caio Penido has enlisted 70 other ranchers to comply with Brazil’s forest laws, conserving 160,000 acres of forest while still intensifying production. The ranchers’ efforts will cut carbon emissions by half a million tons by next year….

Gut Bacteria Linked to Mental Well-being and Depression

[These excerpts are from an article by Elizabeth Pennisi in the 8 February 2019 issue of Science.]

      Of all the many ways the teeming ecosystem of microbes in a person’s gut and other tissues might affect health, its potential influences on the brain may be the most provocative. Now, a study of two large groups of Europeans has identified several species of gut bacteria that are largely missing in people with depression. The researchers can’t say whether the absence is a cause or an effect of the illness, but they showed that many gut bacteria could make or break down substances that affect nerve cell function—and maybe mood.

      …took a closer look at 1054 Belgians they had recruited to assess a “normal” microbiome. Some in the group—173 in total—had been diagnosed with depression or had done poorly on a quality of life survey, and the team compared their microbiomes with those of other participants.

      Two kinds of bacteria, Coprococcus and Dialister, were missing from the microbiomes of the depressed subjects, but not from those with a high quality of life. The finding held up when the researchers allowed for factors such as age, sex, or antidepressant use, all of which influence the microbiome, the team reports this week in Nature Microbiology. And when the team looked at another group—1064 Dutch people whose microbiomes had also been sampled—they found the same two species were missing in depressed people, and they were also missing in seven subjects suffering from severe clinical depression. The data don’t prove causality, Raes says, but they are “an independent observation backed by three groups of people.”

      Looking for something that could link microbes to mood, Raes and his colleagues compiled a list of 56 substances important for proper nervous system function that gut microbes either produce or break down. They found that Coprococcus seems to make a metabolite of dopamine, a brain signal involved in depression, although it’s not clear whether the bacteria break down the neurotransmitter or whether the metabolite has its own function. The same microbe makes an anti-inflammatory substance called butyrate; increased inflammation may play a role in depression. (Depressed subjects also had an increase in bacteria implicated in Crohn disease, an inflammatory disorder.)

      Linking the bacteria to depression “makes sense physiologically….” Still, no one has shown that microbial compounds in the gut influence the brain. One possible channel is the vagus nerve, which links that organ and the gut.

      Resolving the microbiome-brain connection “might lead to novel therapies,” Raes suggests. Some physicians are already exploring probiotics—oral bacterial supplements—for depression, although most don't include the missing gut microbes identified in the new study….

Learning to Think

[These excerpts are from an editorial by Steve Metz in the February 2019 issue of The Science Teacher.]

      We are bombarded daily by a barrage of claims and counter-claims. Cable news commentary, social media, and partisan political pronouncements routinely ask us to accept opinion masquerading as fact, presented alongside data that is often misleading, out of context, or even patently false. Has there ever been a greater need for the rigorous, evidence-based critique of ideas?

      In an age where facts must compete with “alternative facts,” it is more important than ever for our students to learn and practice the skills of scientific argumentation. Taken from the Latin arguere—to make bright or enlighten—argument is central to scientific progress….

      When students defend and critique scientific explanations, experimental designs, or engineering solutions, they learn to create and evaluate arguments using evidence and logical reasoning. Through critical discourse, they are challenged to distinguish opinion from evidence. They learn that argumentation is how scientists collaboratively construct and revise scientific knowledge.

      Argumentation requires students to engage in a social process as they consider competing ideas from multiple voices and generate knowledge through peer-to-peer interaction. They develop the important skill of respectfully considering more than one competing idea. Creating multiple opportunities for students to engage in argumentation can thus promote equity in a classroom where all ideas are valued and the whole class works together in the evaluation and revision of diverse ideas and sources of evidence.

      Perhaps most importantly, argumentation and critique help students learn life skills that extend far beyond the science classroom. Students develop the understanding that claims must be supported by evidence and sound reasoning, not by opinion, belief, emotion, or appeals to authority. Evidence-based argumentation and critique are our only defense against prejudice, pseudoscience, and demagoguery.

Algae Suggest Eukaryotes Get Many Gifts of Bacterial DNA

[These excerpts are from an article by Elizabeth Pennisi in the 1 February 2019 issue of Science.]

      Algae found in thermal springs and other extreme environments have heated up a long-standing debate: Do eukaryotes—organisms with a cell nucleus—sometimes get an evolutionary boost in the form of genes transferred from bacteria? The genomes of some red algae, single-celled eukaryotes, suggest the answer is yes. About 1% of their genes have foreign origins, and the borrowed genes may help the algae adapt to their hostile environment….

      Many genome studies have shown that prokaryotes—bacteria and archaea—liberally swap genes among species, which influences their evolution. The initial sequencing of the human genome suggested our species, too, has picked up microbial genes. But further work demonstrated that such genes found in vertebrate genomes were often contaminants introduced during sequencing.

      …any such transfers only occurred episodically—early in the evolution of eukaryotes, as they internalized the bacteria that eventually became organelles such [as] mitochondria or chloroplasts….

In Hot Water

[These excerpts are from an article by Warren Cornwall in the 1 February 2019 issue of Science.]

      …The data, collected by research trawlers, indicated cod numbers had plunged by 70% in 2 years, essentially erasing a fishery worth $100 million annually. There was no evidence that the fish had simply moved elsewhere. And as the vast scale of the disappearance became clear, a prime suspect emerged: “The Blob.”

      In late 2013, a huge patch of unusually warm ocean water, roughly one-third the size of the contiguous United States, formed in the Gulf of Alaska and began to spread. A few months later, Nick Bond, a climate scientist at the University of Washington in Seattle, dubbed it The Blob. The name, with its echo of a 1958 horror film about an alien life form that keeps growing as it consumes everything in its path, quickly caught on. By the summer of 2015, The Blob had more than doubled in size, stretching across more than 4 million square kilometers of ocean, from Mexico’s Baja California Peninsula to Alaska’s Aleutian Islands. Water temperatures reached 2.5°C above normal in many places.

      By late 2016, the marine heat wave had crashed across ecosystems all along North America’s western coast, reshuffling food chains and wreaking havoc. Unusual blooms of toxic algae appeared, as did sea creatures typically found closer to the tropics….Small fish and crustaceans hunted by larger animals vanished. The carcasses of tens of thousands of seabirds littered beaches. Whales failed to arrive in their usual summer waters. Then the cod disappeared.

      The fish “basically ran out of food,” Barbeaux now believes. Once, he didn’t think a food shortage would have much effect on adult cod, which, like camels, can store energy and go months without eating. But now, it is “something we look at and go: ‘Huh, that can happen.’”

      Today, 5 years after The Blob appeared, the waters it once gripped have cooled, although fish, bird, and whale numbers have yet to recover. Climate scientists and marine biologists, meanwhile, are still putting together the story of what triggered the event, and how it reverberated through ecosystems. Their interest is not just historical.

      Around the world, shifting climate and ocean circulation patterns are causing huge patches of unusually warm water to become more common, researchers have found. Already, ominous new warm patches are emerging in the North Pacific Ocean and elsewhere, and researchers are applying what they’ve learned from The Blob to help guide predictions of how future marine heat waves might unfold. If global warming isn’t curbed, scientists warn that the heat waves will become more frequent, larger, more intense, and longer-lasting. By the end of the century, Bond says, “The ocean is going to be a much different place.”

      …The Blob was spawned, experts say, by a long-lasting atmospheric ridge of high pressure that formed over the Gulf of Alaska in the fall of 2013. The ridge helped squelch fierce winter storms that typically sweep the gulf. That dampened the churning winds that usually bring colder, deeper water to the surface, as well as transfer heat from the ocean to the atmosphere—much like a bowl of hot soup cooling as a diner blows across it. As a result, the gulf remained unusually warm through the following year.

      But it took a convergence of other forces to transform The Blob into a monster. In the winter of 2014-15, winds from the south brought warmer air into the gulf, keeping sea temperatures high. Those winds also pushed warm water closer to the coasts of Oregon and Washington. Then, later in 2015 and in 2016, the periodic warming of the central Pacific known as El Niño added more warmth, fueling The Blob’s growth. The heat wave finally broke when La Niña—El Niño’s cool opposite number—arrived at the end of 2016, bringing storms that stirred and cooled the ocean….

      Krill—tiny shrimp that, like copepods, are a key food for many fish—felt the heat, too. In 2015 and 2016, as The Blob engulfed the coasts of Washington and Oregon, the heat-sensitive creatures vanished from biologists’ nets.

      As the base of the food chain crumbled, the effects propagated upward. One link higher, swarms of small fish that dine on copepods and krill—and in turn become food for larger animals—also became scarce as warm waters spread. On a remote island in the northern gulf, where scientists have tracked seabird diets for decades, they noticed that capelin and sand lance, staples for many bird species, nearly vanished from the birds' meals. In 2015, by one estimate, the populations of most key forage fish in the gulf fell to less than 50% of the average over the previous 9 years….

  Website by Avi Ornstein, "The Blue Dragon" – 2016 All Rights Reserved