Interesting Excerpts
The following excerpts are from articles or books that I have recently read. They caught my interest, and I hope that you will find them worth reading. If one sparks an action on your part and you want to learn more, or you choose to cite it, I urge you to read the full article or source so that you better understand the perspective of the author(s).


A Looming Tragedy of the Sand Commons

[These excerpts are from an article by Aurora Torres, Jodi Brandt, Kristen Lear and Jianguo Liu in the September 8, 2017, issue of Science.]

      Between 1900 and 2010, the global volume of natural resources used in buildings and transport infrastructure increased 23-fold. Sand and gravel are the largest portion of these primary material inputs (79% or 28.6 gigatons per year in 2010) and are the most extracted group of materials worldwide, exceeding fossil fuels and biomass. In most regions, sand is a common-pool resource, i.e., a resource that is open to all because access can be limited only at high cost. Because of the difficulty in regulating their consumption, common-pool resources are prone to tragedies of the commons as people may selfishly extract them without considering long-term consequences, eventually leading to overexploitation or degradation. Even when sand mining is regulated, it is often subject to rampant illegal extraction and trade. As a result, sand scarcity is an emerging issue with major sociopolitical, economic, and environmental implications.

      Rapid urban expansion is the main driver of increasing sand appropriation, because sand is a key ingredient of concrete, asphalt, glass, and electronics. Urban development is thus putting more and more strain on limited sand deposits, causing conflicts around the world. Further strains on sand deposits arise from escalating transformations in the land-sea interface as a result of burgeoning coastal populations, land scarcity, and rising threats from climate change and coastal erosion. Even hydraulic fracturing is among the plethora of activities that demand the use of increasing amounts of sand. In the following, we identify linkages between sand extraction and other global sustainability challenges.

      Sand extraction from rivers, beaches, and seafloors affects ecosystem integrity through erosion, physical disturbance of benthic habitats, and suspended sediments. Thus, extensive mining is likely to place enormous burdens on habitats, migratory pathways, ecological communities, and food webs….

      Such environmental impacts have cascading effects on the provisioning of ecosystem services and human well-being. For example, sand mining is a frequent cause of shoreline and river erosion and destabilization, which undermine human resilience to natural hazards such as storm surges and tsunami events, especially as sea level continues to rise…

Reasoning versus Post-truth

[These excerpts are from a commentary article by Wayne Melville in the September 2017 issue of The Science Teacher.]

      Empirical evidence and reasoning have not always been at the heart of the scientific enterprise. Evidence-based reasoning evolved in response to beliefs that were increasingly untenable to early natural philosophers. In the early 1600s, the first scientific academies were established in part to uphold the primacy of experiment in questions about the natural world. Such a stance was counter to scholasticism, the dominant medieval method of learning “rooted in Aristotle and endorsed by the Church ….”

      Synthesizing Christianity and Aristotelian thought, scholasticism viewed the universe as simultaneously religious and physical. The scholastic reaction to the heliocentrism put forth in the 1543 publication of De revolutionibus orbium coelestium is entirely understandable: Copernicus challenged not just a “scientific” model of the universe but also a view of man’s place in creation.

      The difficulty that philosopher and scientist Francis Bacon had with deductive scholasticism was that it was static, not permitting new knowledge to develop. By introducing and promoting induction as a method for studying nature, Bacon profoundly influenced the course of scientific inquiry….

      …After Galileo died in 1642, both Grand Duke Ferdinando II and his brother Prince (later Cardinal) Leopoldo recognized the political value of continuing to support Galileo’s experimental practices.

      This led to Leopoldo’s creation of the scientific Accademia del Cimento in 1657. In 1664, the Accademician Francesco Redi recorded that Leopoldo was interested in science “not for vain or idle diversion, but rather to find in things the naked, pure, genuine truth…” Leopoldo’s commitment to experimentation was captured in the Accademia’s motto: Provando e riprovando (Test and Test Again)….

      The Accademicians regarded experimentation as central to the practice of science, directly in contrast to both Aristotle and the Church….

      …In doing so, the Accademicians worked to remove any reference to philosophy or mythological cosmology from experimental science and so establish the authority of experimentation in questions about the natural world. This was an overt challenge to the prevailing scholastic view of the natural world.

      To reason from evidence is not simple, as it opens the evidence to speculation and argumentation. The Accademicians often struggled to reconcile their interpretations of the experimental data….A particular point of contention within the Accademia was the range of views about the relationship between experimentation and the still powerful approach of Aristotle….

      The work of the Accademia set out the need for replicable tests, the control of variables, and the standardization of measurement and instrumentation. It also demonstrated that modern science is more than just knowledge; science is a human endeavor based on curiosity about the natural world, observation, argument, creativity, and reason….As science teachers, we must model, teach, and practice these qualities if we are to engage our students with the need for evidence and reasoned argument.

      …As educators, our challenge is to use our authority in the classroom to engage, alongside our students and as learners ourselves, with all of the practices of science, and thus build trust in those practices.

      Post-truth relies on the distrust of both the sources and value of information. This loss of trust in institutions and academic disciplines—including science—along with the wide availability of misinformation that conforms to what people want to hear, diminishes expertise and learning. Drawing from history, we can give students the tools and attitudes needed to challenge those who would devalue reason so that reasoned decision-making can triumph. Just as the Accademicians challenged scholasticism and eventually prevailed, so must we challenge the very idea of post-truth.

With Ice Cream, Fattier May Not Be Tastier

[These excerpts are from a “Current News” article in the September 2017 issue of The Science Teacher.]

      Researchers have found that people generally cannot tell the difference between fat levels in ice creams.

      In a series of taste tests, participants could not distinguish a 2% difference in fat levels in two vanilla ice cream samples as long as the samples were in the 6-12% fat-level range. While the subjects could detect a 4% difference between ice cream with 6% and 10% fat levels, they could not detect a 4% fat difference in samples between 8% and 12% fat….

      The researchers also found that fat levels did not significantly sway consumers' preferences in taste. The consumers’ overall liking of the ice cream did not change when fat content dropped from 14% to 6%, for example….

      The study may challenge some ice-cream marketing that suggests ice creams with high fat levels are higher-quality and better-tasting products, according to researchers….

How Civilization Started

[This excerpt is from an article by John Lanchester in the September 18, 2017, issue of The New Yorker.]

      Science and technology: we tend to think of them as siblings, perhaps even as twins, as parts of STEM (for “science, technology, engineering, and mathematics”). When it comes to the shiniest wonders of the modern world—as the supercomputers in our pockets communicate with satellites—science and technology are indeed hand in glove. For much of human history, though, technology had nothing to do with science. Many of our most significant inventions are pure tools, with no scientific method behind them. Wheels and wells, cranks and mills and gears and ships’ masts, clocks and rudders and crop rotation: all have been crucial to human and economic development, and none historically had any connection with what we think of today as science. Some of the most important things we use every day were invented long before the adoption of the scientific method. I love my laptop and my iPhone and my Echo and my G.P.S., but the piece of technology I would be most reluctant to give up, the one that changed my life from the first day I used it, and that I’m still reliant on every waking hour—am reliant on right now, as I sit typing—dates from the thirteenth century: my glasses. Soap prevented more deaths than penicillin. That’s technology, not science.

      In “Against the Grain: A Deep History of the Earliest States,” James C. Scott, a professor of political science at Yale, presents a plausible contender for the most important piece of technology in the history of man. It is a technology so old that it predates Homo sapiens and instead should be credited to our ancestor Homo erectus. That technology is fire. We have used it in two crucial, defining ways. The first and the most obvious of these is cooking. As Richard Wrangham has argued in his book “Catching Fire,” our ability to cook allows us to extract more energy from the food we eat, and also to eat a far wider range of foods. Our closest animal relative, the chimpanzee, has a colon three times as large as ours, because its diet of raw food is so much harder to digest. The extra caloric value we get from cooked food allowed us to develop our big brains, which absorb roughly a fifth of the energy we consume, as opposed to less than a tenth for most mammals’ brains. That difference is what has made us the dominant species on the planet.

      The other reason fire was central to our history is less obvious to contemporary eyes: we used it to adapt the landscape around us to our purposes. Hunter-gatherers would set fires as they moved, to clear terrain and make it ready for fast-growing, prey-attracting new plants. They would also drive animals with fire. They used this technology so much that, Scott thinks, we should date the human-dominated phase of earth, the so-called Anthropocene, from the time our forebears mastered this new tool.

Our Connected Struggle

[These excerpts are from an article by Michelle Chan in the Summer 2017 issue of Friends of the Earth Newsmagazine.]

      Environmentalists understand one thing above all others: in any ecosystem, everything is interconnected. Bears rely on salmon, which rely on rivers, which rely on trees - all the way down to the microorganisms that sustain our soil. Every part plays a critical role, even if we don’t understand it. All of life - and our fates - are intertwined….

      Nature inspires hope and awe: The mind-boggling 3,000-mile migration of the monarch butterfly, alight on paper-thin wings; the damp quiet of ancient redwoods, which have stood in witness to history for thousands of years….

      Excludes nothing and no one. Our fraternity with one another, and with nature, cannot exclude Muslims, immigrants, people of color, or those who disagree with us politically. This communion does not build walls or prohibition lists; it doesn’t divide the world between “worthy” and “unworthy” people, or into places to be protected and those to be sacrificed. As our nation becomes increasingly divided and laden with hate, fear and militarism, it flies against nature’s most powerful and enduring lesson: everything and everyone is intrinsically valuable and vital to the whole….

      Our environmental ethic requires us to consider not only our planet's crisis, but our role in creating and solving it - which includes both ecological and social dimensions. We begin by making changes in our own lives to get in “right relationship” with the planet and with others. Then we take that to the systemic level, to transform the institutions that perpetuate environmental and social injustice - understanding that these are interconnected….

      Nature teaches us that all things are interconnected and intrinsically valuable. As environmentalists, we have the opportunity, and responsibility, to let our interconnectedness inspire and animate us to overcome these turbulent and dangerous times. Let us exclude nothing and no one.

Measuring and Managing Bias

[These excerpts are from an editorial by Jeremy Berg in the September 1, 2017, issue of Science.]

      As someone who grew up with a mother who was a medical researcher, who has been married to a woman very active in scientific research for more than 30 years, and who has had many female colleagues and students, I was surprised when I first took a test to measure implicit gender bias and found that I have a strong automatic association between being male and being involved in science. We all carry a range of biases that are products of our culture and experiences or, in some cases, of outcomes that we desire. Fortunately, many such biases can be measured and, in some cases, effectively managed. A key is to first acknowledge their presence and then to take steps to minimize their influence on important decisions and results.

      Implicit biases—those that we are not consciously aware of—might seem difficult to demonstrate or quantify. However, implicit association tests…can be a useful tool for achieving this. IATs are based on measuring the times needed to classify attributes in a simple computer exercise that takes about 5 minutes to complete….I have found the same strong automatic association between male and science and between female and liberal arts. The good news is that direct awareness of one’s own implicit biases can reduce their impact on outcomes, at least in some circumstances. I sometimes catch myself assuming that a scientist is male, and then remind myself of my implicit bias test results, and try to think deliberately to avoid making such assumptions.
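
      [A note from me, not from Berg’s editorial: the timing comparison at the heart of an IAT can be sketched in a few lines of Python. The response times and the simple scoring below are hypothetical and greatly simplified; the published IAT scoring procedure is more involved.]

            # Simplified, hypothetical illustration of IAT-style scoring:
            # compare how quickly items are sorted under two different pairings.
            from statistics import mean, stdev

            # Hypothetical response times (milliseconds) for one respondent
            male_science_block   = [520, 545, 560, 530, 550]   # male paired with science
            female_science_block = [690, 720, 705, 680, 710]   # female paired with science

            gap = mean(female_science_block) - mean(male_science_block)
            pooled_sd = stdev(male_science_block + female_science_block)
            d_score = gap / pooled_sd  # larger values suggest a stronger automatic association

            print(f"Mean latency gap: {gap:.0f} ms")
            print(f"Approximate D score: {d_score:.2f}")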

      Implicit biases are intrinsic human characteristics that should be acknowledged and managed, rather than denied or ignored. Everyone should consider taking one or more IATs to understand this approach and measure his or her own implicit biases….

Postmodernism vs. Science

[This article by Michael Shermer is in the September 2017 issue of Scientific American.]

      In a 1946 essay in the London Tribune entitled “In Front of Your Nose,” George Orwell noted that “we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.”

      The intellectual battlefields today are on college campuses, where students’ deep convictions about race, ethnicity, gender and sexual orientation and their social justice antipathy toward capitalism, imperialism, racism, white privilege, misogyny and “cissexist heteropatriarchy” have bumped up against the reality of contradictory facts and opposing views, leading to campus chaos and even violence. Students at the University of California, Berkeley, and outside agitators, for example, rioted at the mere mention that conservative firebrands Milo Yiannopoulos and Ann Coulter had been invited to speak (in the end, they never did). Middlebury College students physically attacked libertarian author Charles Murray and his liberal host, professor Allison Stanger, pulling her hair, twisting her neck and sending her to the ER.

      One underlying cause of this troubling situation may be found in what happened at Evergreen State College in Olympia, Wash., in May, when biologist and self-identified “deeply progressive” professor Bret Weinstein refused to participate in a “Day of Absence” in which “white students, staff and faculty will be invited to leave the campus for the day’s activities.” Weinstein objected, writing in an e-mail: “on a college campus, one’s right to speak—or to be—must never be based on skin color.” In response, an angry mob of 50 students disrupted his biology class, surrounded him, called him a racist and insisted that he resign. He claims that campus police informed him that the college president told them to stand down, but he has been forced to stay off campus for his safety’s sake.

      How has it come to this? One of many trends was identified by Weinstein in a Wall Street Journal essay: “The button-down empirical and deductive fields, including all the hard sciences, have lived side by side with ‘critical theory,’ postmodernism and its perception-based relatives. Since the creation in the 1960s and ‘70s of novel, justice-oriented fields, these incompatible world-views have repelled one another.”

      In an article for Quillette.com on “Methods Behind the Campus Madness,” graduate researcher Sumantra Maitra of the University of Nottingham in England reported that 12 of the 13 academics at U.C. Berkeley who signed a letter to the chancellor protesting Yiannopoulos were from “Critical theory, Gender studies and Post-Colonial/Postmodernist/Marxist background.” This is a shift in Marxist theory from class conflict to identity politics conflict; instead of judging people by the content of their character, they are now to be judged by the color of their skin (or their ethnicity, gender, sexual orientation, et cetera). “Postmodernists have tried to hijack biology, have taken over large parts of political science, almost all of anthropology, history and English,” Maitra concludes, “and have proliferated self-referential journals, citation circles, non-replicable research, and the curtailing of nuanced debate through activism and marches, instigating a bunch of gullible students to intimidate any opposing ideas.”

      Students are being taught by these postmodern professors that there is no truth, that science and empirical facts are tools of oppression by the white patriarchy, and that nearly everyone in America is racist and bigoted, including their own professors, most of whom are liberals or progressives devoted to fighting these social ills. Of the 58 Evergreen faculty members who signed a statement “in solidarity with students” calling for disciplinary action against Weinstein for “endangering” the community by granting interviews in the national media, I tallied only seven from the sciences. Most specialize in English, literature, the arts, humanities, cultural studies, women’s studies, media studies, and “quotidian imperialisms, intermetropolitan geography [and] detournement.” A course called “Fantastic Resistances” was described as a “training dojo for aspiring ‘social justice warriors’” that focuses on “power asymmetries.”

      If you teach students to be warriors against all power asymmetries, don’t be surprised when they turn on their professors and administrators. This is what happens when you separate facts from values, empiricism from morality, science from the humanities.

Life before Roe

[These excerpts are from an article by Rachel Benson Gold and Megan K. Donovan in the September 2017 issue of Scientific American.]

      When she went before the U.S. Supreme Court for the first time in 1971, the 26-year-old Sarah Weddington became the youngest attorney to successfully argue a case before the nine justices—a distinction she still holds today.

      Weddington was the attorney for Norma McCorvey, the pseudonymous “Jane Roe” of the 1973 Roe v. Wade decision that recognized the constitutional right to abortion—one of the most notable decisions ever handed down by the justices….

      The pre-Roe era is more than just a passing entry in the history books. More than 40 years after Roe v. Wade, antiabortion politicians at the state level have succeeded in re-creating a national landscape in which access to abortion depends on where a woman lives and the resources available to her. From 2011 to 2016 state governments enacted a stunning 338 abortion restrictions, and the onslaught continues with more than 50 new restrictions so far this year. At the federal level, the Trump administration and congressional leaders are openly hostile to abortion rights and access to reproductive health care more generally. This antagonism is currently reflected in an agenda that seeks to eliminate insurance coverage of abortion and roll back public funding for family-planning services nationwide.

      Restrictions that make it more difficult for women to get an abortion infringe on their health and legal rights. But they do nothing to reduce unintended pregnancy, the main reason a woman seeks an abortion. As the pre-Roe era demonstrates, women will still seek the necessary means to end a pregnancy. Cutting off access to abortion care has a far greater impact on the options available and the type of care a woman receives than it does on whether or not she ends a pregnancy.

      The history of abortion underscores the reality that the procedure has always been with us, whether or not it was against the law. At the nation's founding, abortion was generally permitted by states under common law. It only started becoming criminalized in the mid-1800s, although by 1900 almost every state had enacted a law declaring most abortions to be criminal offenses.

      Yet despite what was on the books, abortion remained common because there were few effective ways to prevent unwanted pregnancies. Well into the 1960s, laws restricted or prohibited outright the sale and advertising of contraceptives, making it impossible for many women to obtain—or even know about—effective birth control. In the 1950s and 1960s between 200,000 and 1.2 million women underwent illegal abortions each year in the U.S., many in unsafe conditions. According to one estimate, extrapolating data from North Carolina to the nation as a whole, 699,000 illegal abortions occurred in the U.S. during 1955, and 829,000 illegal procedures were performed in 1967.

      A stark indication of the risk in seeking abortion in the pre-Roe era was the death toll. As late as 1965, illegal abortion accounted for an estimated 17 percent of all officially reported pregnancy-related deaths—a total of about 200 in just that year. The actual number may have been much higher, but many deaths were officially attributed to other causes, perhaps to protect women and their families. (In contrast, four deaths resulted from complications of legally induced abortion in 2012 of a total of about one million procedures.)

      The burden of injuries and deaths from unsafe abortion did not fall equally on everyone in the pre-Roe era. Because abortion was legal under certain circumstances in some states, women of means were often able to navigate the system and obtain a legal abortion with help from their private physician. Between 1951 and 1962, 88 percent of legal abortions performed in New York City were for patients of private physicians rather than for women accessing public health services.

      In contrast, many poor women and women of color had to go outside the system, often under dangerous and deadly circumstances. Low-income women in New York in the 1960s were more likely than affluent ones to be admitted to hospitals for complications following an illegal procedure. In a study of low-income women in New York from the same period, one in 10 said they had tried to terminate a pregnancy illegally.

      State and federal laws were slow to catch up to this reality. It was only in 1967 that Colorado became the first state to reform its abortion law, permitting the procedure on grounds that included danger to the pregnant woman’s life or health. By 1972, 13 states had similar statutes, and an additional four, including New York, had repealed their antiabortion laws completely. Then came Roe v. Wade in 1973—and the accompanying Doe v. Bolton decision—both of which affirmed abortion as a constitutional right.

      The 2016 Supreme Court decision in Whole Woman’s Health v. Hellerstedt reaffirmed a woman’s constitutional right to abortion. But the future of Roe is under threat as a result of President Donald Trump’s commitment to appointing justices to the Supreme Court who he says will eventually overturn Roe. Should that happen, 19 states already have laws on the books that could be used to restrict the legal status of abortion, and experts at the Center for Reproductive Rights estimate that the right to abortion could be at risk in as many as 33 states and the District of Columbia….

      Instead of repeating the mistakes of the past, we need to protect and build on gains already made. Serious injury and death from abortion are rare today, but glaring injustices still exist. Stark racial, ethnic and income disparities persist in sexual and reproductive health outcomes. As of 2011, the unintended pregnancy rate among poor women was five times that of women with higher incomes, and the rate for black women was more than double that for whites. Abortion restrictions—including the discriminatory Hyde Amendment, which prohibits the use of federal dollars to cover abortion care for women insured through Medicaid—fall disproportionately on poor women and women of color.

      These realities are indefensible from a moral and a public health standpoint. The time has come for sexual and reproductive health care to be a right for all, not a privilege for those who can afford it.

Is There a “Female” Brain?

[These excerpts are from an article by Lydia Denworth in the September 2017 issue of Scientific American.]

      As she continued reading, Joel came across a paper contradicting that idea. The study, published in 2001 by Tracey Shors and her colleagues at Rutgers University, concerned a detail of the rat brain: tiny protrusions on brain cells, called dendritic spines, that regulate transmission of electrical signals. The researchers showed that when estrogen levels were elevated, female rats had more dendritic spines than males did. Shors also found that when male and female rats were subjected to the acutely stressful event of having their tail shocked, their brain responded in opposite ways: males grew more spines; females ended up with fewer.

      From this unexpected finding, [Daphna] Joel developed a hypothesis about sex differences in the brain that has stirred up new controversy in a field already steeped in it. Instead of contemplating brain areas that differ between females and males, she suggested that we should consider our brain as a “mosaic” (repurposing a term that had been used by others), arranged from an assortment of variable, sometimes changeable, masculine and feminine features. That variability itself and the behavioral overlap between the sexes—aggressive females and empathetic males and even men and women who display both traits—suggest that brains cannot be lumped into one of two distinct, or dimorphic, categories. That three-pound mass lodged underneath the skull is neither male nor female, Joel says….Joel tested her idea by analyzing MRI brain scans of more than 1,400 brains and demonstrated that most of them did indeed contain both masculine and feminine characteristics. “We all belong to a single, highly heterogeneous population,” she says….

      In the late 1800s, long before MRI was a gleam in any scientist’s eye, the primary measurable difference in male and female brains was their weight (assessed postmortem, naturally). Because women’s brains were, on average, five ounces lighter than men’s, scientists declared that women must be less intelligent….

      For much of the next century concrete sex differences in the brain were the province not of neuroscientists but endocrinologists, who studied sex hormones and mating behavior. Sex determination is a complex process that begins when a combination of genes on the X and Y chromosomes act in utero, flipping the switch on feminization or masculinization. But beyond reproduction and distinguishing boy versus girl, reports persisted of psychological and cognitive sex differences. Between the 1960s and early 1980s Stanford University psychologist Eleanor Maccoby found fewer differences than assumed: girls had stronger verbal abilities than boys, whereas boys did better on spatial and mathematical tests….

      Making the leap from brain to behavior provokes the most strident disagreements. The most recent high-profile study accused of playing to stereotypes (and labeled "neurosexist") was a 2014 paper….It found that males had stronger connections within the left and right hemispheres of the brain and that females had more robust links between hemispheres. The researchers concluded that “the results suggest that male brains are structured to facilitate connectivity between perception and coordinated action, whereas female brains are designed to facilitate communication between analytical and intuitive processing modes.” (Counterclaim: the study did not correct for brain size.)

Promiscuous Men, Chaste Women and Other Gender Myths

[These excerpts are from an article by Cordelia Fine and Mark A. Elgar in the September 2017 issue of Scientific American.]

      The stereotype of the daring, promiscuous male—and his counterpart, the cautious, chaste female—is deeply entrenched. Received wisdom holds that behavioral differences between men and women are hardwired, honed by natural selection over millennia to maximize their differing reproductive potentials. In this view, men, by virtue of their innate tendencies toward risk-taking and competitiveness, are destined to dominate at the highest level of every realm of human endeavor, whether it is art, politics or science.

      But a closer look at the biology and behavior of humans and other creatures shows that many of the starting assumptions that have gone into this account of sex differences are wrong. For example, in many species, females benefit from being competitive or playing the field. And women and men often have similar preferences where their sex lives are concerned. It is also becoming increasingly clear that inherited environmental factors play a role in the development of adaptive behaviors; in humans, these factors include our gendered culture. All of which means that equality between the sexes might be more attainable than previously supposed.

      The origin of the evolutionary explanation of past and present gender inequality is Charles Darwin’s theory of sexual selection. His observations as a naturalist led him to conclude that, with some exceptions, in the arena of courtship and mating, the challenge to be chosen usually falls most strongly on males. Hence, males, rather than females, have evolved characteristics such as a large size or big antlers to help beat off the competition for territory, social status and mates. Likewise, it is usually the male of the species that has evolved purely aesthetic traits that appeal to females, such as stunning plumage, an elaborate courtship song or an exquisite odor.

      It was, however, British biologist Angus Bateman who, in the middle of the 20th century, developed a compelling explanation of why being male tends to lead to sexual competition. The goal of Bateman’s research was to test an important assumption from Darwin’s theory. Like natural selection, sexual selection results in some individuals being more successful than others. Therefore, if sexual selection acts more strongly on males than females, then males should have a greater range of reproductive success, from dismal failures to big winners. Females, in contrast, should be much more similar in their reproductive success. This is why being the animal equivalent of a brilliant artist, as opposed to a mediocre one, is far more beneficial for males than for females….

      Scholars mostly ignored Bateman’s study at first. But some two decades later evolutionary biologist Robert Trivers, now at Rutgers University, catapulted it into scientific fame. He expressed Bateman’s idea in terms of greater female investment in reproduction—the big, fat egg versus the small, skinny sperm—and pointed out that this initial asymmetry can go well beyond the gametes to encompass gestation, feeding (including via lactation, in the case of mammals) and protecting. Thus, just as a consumer takes far more care in the selection of a car than of a disposable, cheap trinket, Trivers suggests that the higher-investing sex—usually the female—will hold out for the best possible partner with whom to mate. And here is the kicker: the lower-investing sex—typically the male—will behave in ways that, ideally, distribute cheap, abundant seed as widely as possible.

      The logic is so elegant and compelling it is hardly surprising that contemporary research has identified many species to which the so-called Bateman-Trivers principles seem to apply, including species in which, unusually, it is males that are the higher-investing sex….

      In our own species, the traditional story is additionally complicated by the inefficiency of human sexual activity. Unlike many other species, in which coitus is hormonally coordinated to a greater or lesser degree to ensure that sex results in conception, humans engage in a vast amount of nonreproductive sex. This pattern has important implications. First, it means that any one act of coitus has a low probability of giving rise to a baby, a fact that should temper overoptimistic assumptions about the likely reproductive return on seed spreading. Second, it suggests that sex serves purposes beyond reproduction—strengthening relationships, for example….

      Meanwhile the feminist movement increased women’s opportunities to enter, and excel in, traditionally masculine domains. In 1920 there were just 84 women studying at the top 12 law schools that admitted women, and those female lawyers found it nearly impossible to find employment. In the 21st century women and men are graduating from law school in roughly equal numbers, and women made up about 18 percent of equity partners in 2015.

      …It is hard to see how a young female lawyer, looking first at the many young women at her level and then at the very few female partners and judges, can be as optimistic about the likely payoff of leaning in and making sacrifices for her career as a young male lawyer. And this is before one considers the big-picture evidence of sexism, sexual harassment and sex discrimination in traditionally masculine professions such as law and medicine….

      Although sex certainly influences the brain, this argument overlooks the growing recognition in evolutionary biology that offspring do not just inherit genes. They also inherit a particular social and ecological environment that can play a critical role in the expression of adaptive traits. For example, adult male moths that hailed, as larvae, from a dense population develop particularly large testes. These enhanced organs stand the moths in good stead for engaging in intense copulatory competition against the many other males in the population. One would be forgiven for assuming that these generously sized gonads are a genetically determined adaptive trait. Yet adult male moths of the same species raised as larvae in a lower-density population instead develop larger wings and antennae, which are ideal for searching for widely dispersed females.

      …men now place more importance on a female partner’s financial prospects, education and intelligence—and care less about her culinary and housekeeping skills—than they did several decades ago. Meanwhile the cliché of the pitiable bluestocking spinster is a historical relic: although wealthier and better-educated women were once less likely to marry, now they are more likely to do so.

A Moth’s Eye

[These excerpts are from an article by Morgen Peck in the September 2017 issue of Scientific American.]

      It is a summer night, and the moths are all aflutter. Despite being drenched in moonlight, their eyes do not reflect it—and soon the same principle could help you see your cell-phone screen in bright sunlight.

      Developing low-reflectivity surfaces for electronic displays has been an area of intensive research. So-called transflective liquid-crystal displays reduce glare by accounting for both backlighting and ambient illumination. Another approach, called adaptive brightening control, uses sensors to boost the screen’s light. But both technologies guzzle batteries, and neither is completely effective. The anatomy of the moth eye presents a far more elegant solution….

      When light moves from one medium to another, it bends and changes speed as the result of differences in a material property called refractive index. If the difference is sharp—as when light moving through air suddenly hits a pane of glass—much of the light is reflected. But a moth’s eye is coated with tiny, uniform bumps that gradually bend (or refract) incoming light. The light waves interfere with one another and cancel one another out, rendering the eyes dark.
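
      [A note from me, not from the article: the effect of a sharp index difference can be quantified with the standard normal-incidence reflectance formula from optics, R = ((n1 - n2)/(n1 + n2))^2, where n1 and n2 are the refractive indices on either side of the boundary. For light passing from air (n1 of about 1.0) straight into glass (n2 of about 1.5), R = (0.5/2.5)^2 = 0.04, so roughly 4 percent of the light bounces back at an abrupt boundary; the numbers here are illustrative, not taken from the article. By grading the index change over a distance smaller than a wavelength, the moth-eye bumps push that reflection toward zero.]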

The Oldest Homo sapiens?

[These excerpts are from an article by Kate Wong in the September 2017 issue of Scientific American.]

      The year was 1961. A barite mining operation at the Jebel Irhoud massif in Morocco, some 100 kilometers west of Marrakech, turned up a fossil human skull. Subsequent excavation uncovered more bones from other individuals, along with animal remains and stone tools. Originally thought to be 40,000-year-old Neandertals, the fossils were later reclassified as Homo sapiens—and eventually redated to roughly 160,000 years ago. Still, the Jebel Irhoud fossils remained something of a mystery because in some respects they looked more primitive than older H. sapiens fossils.

      Now new evidence is rewriting the Jebel Irhoud story again. A team led by Jean-Jacques Hublin of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, has recovered more human fossils and stone tools, along with compelling evidence that the site is far older than the revised estimate….If the fossils do in fact represent H. sapiens, as the team argues, the finds push back the origin of our species by more than 100,000 years and challenge leading ideas about where and how our lineage evolved. But other scientists disagree over exactly what the new findings mean….

      Experts have long agreed that H. sapiens got its start in Africa. Up to this point, the oldest commonly accepted traces of our species were 195,000-year-old remains from the site of Omo Kibish and 160,000-year-old fossils from Herto, both in Ethiopia. Yet DNA evidence and some enigmatic fossils hinted that our species might have deeper roots.

      In their recent work, Hublin and his colleagues unearthed fossils of several other individuals from a part of the Jebel Irhoud site that the miners left undisturbed. The team's finds include skull and lower jaw bones, as well as stone tools and the remains of animals the humans hunted. Multiple techniques date the rock layer containing the fossils and artifacts to between 350,000 and 280,000 years ago….

      Hublin noted that the findings do not imply that Morocco was the cradle of modern humankind. Instead, taken together with other fossil discoveries, they suggest that the emergence of H. sapiens was a pan-African affair. By 300,000 years ago early H. sapiens had spread across the continent. This dispersal was helped by the fact that Africa was quite different back then—the Sahara was green, not the forbidding desert barrier that it is today…

      The new fossils “raise major questions about what features define our species,” observes paleoanthropologist Marta Mirazon Lahr of the University of Cambridge. “[Is] it the globular skull, with its implications [for] brain reorganization, that makes a fossil H. sapiens? If so, the Irhoud population [represents] our close cousins” rather than members of our species. But if, on the other hand, a small face and the shape of the lower jaw are the key traits, then the Jebel Irhoud remains could be from our actual ancestors—and thus shift the focus of scientists who study modern human origins from sub-Saharan Africa to the Mediterranean—Mirazon Lahr says.

      Either way, the discoveries could fan debate over who invented the artifacts of Africa’s Middle Stone Age cultural period, which spanned the time between roughly 300,000 and 40,000 years ago. If H. sapiens was around 300,000 years ago, it could be a contender. But other human species were on the scene back then, too, including Homo heidelbergensis and Homo naledi.

Africa’s CDC Can End Malaria

[These excerpts are from an article by Carl Manlan in the September 2017 issue of Scientific American.]

      More than 65 years ago Americans found a way to ensure that no one would have to die from malaria ever again. The disease was eliminated in the U.S. in 1951, thanks to strategies created through the Office of Malaria Control in War Areas, formed in 1942, and the Communicable Disease Center (now the U.S. Centers for Disease Control and Prevention), founded in 1946.

      The idea for Africa’s own Centers for Disease Control and Prevention (Africa CDC) was devised in 2013 and formalized after the worst Ebola outbreak in history the following year. The Africa CDC, which was officially launched in January of this year, is a growing partnership that aims to build countries’ capacity to help create a world that is safe and secure from infectious disease threats….

      Malaria and other preventable diseases continue to challenge our ability to transform our economies at the pace required to support our population growth. Ultimately, for Africa to achieve malaria eradication, it is necessary to translate the Africa CDC’s mandate from the African Union into a funded mechanism to inform health investment.

      Ending malaria was the impetus that led to a strong and reliable CDC in the U.S., and now Africa has an opportunity to repeat that success—ideally by 2030, when the world gathers to assess progress toward achieving the U.N.’s Sustainable Development Goals. We have the opportunity to save many, many lives through the Africa CDC. Let's make it happen.

End the Assault on Women’s Health

[This excerpt by the editors is from the September 2017 issue of Scientific American.]

      Current events are just the latest insult in a long history of male-centric medicine, often driven not by politicians but by scientists and physicians. Before the National Institutes of Health Revitalization Act of 1993, which required the inclusion of women and minorities in final-stage medication and therapy trials, women were actively excluded from such tests because scientists worried that female hormonal cycles would interfere with the results. The omission meant women did not know how drugs would affect them. They respond differently to illness and medication than men do, and even today those differences are inadequately understood. Women report more intense pain than men in almost every category of disease, and we do not know why. Heart disease is the number-one killer of women in the U.S., yet only a third of clinical trial subjects in cardiovascular research are female—and fewer than a third of trials that include women report results by sex.

      The Republican assault on health care will just make things worse. The proposed legislation includes provisions that would let states eliminate services known as “essential health benefits,” which include maternity care. Before the ACA made coverage mandatory, eight out of 10 insurance plans for individuals and small businesses did not cover such care. The proposed cuts would have little effect on reducing insurance premiums, and the cost would be shifted to women and their families—who would have to take out private insurance or go on Medicaid (which the proposed bill greatly limits)—or to hospitals, which are required by law to provide maternity care to uninsured mothers.

      The bill, in its current form, would also effectively block funding for Planned Parenthood, which provides reproductive health services to 2.4 million women and men. The clinics are already banned from using federal funding for abortions except in cases of rape or incest or when the mother's life is in danger, in accordance with the federal Hyde Amendment. So the Planned Parenthood cuts would primarily affect routine health services such as gynecological exams, cancer screenings, STD testing and contraception—and these clinics are sometimes the only source for such care. Regardless of which side you are on in the pro-life/pro-choice debate, these attempts to remove access to such basic services should alarm us all.

      The Trump administration also has been chipping away at the ACA’s birth-control mandate. A proposed regulation leaked in May suggested the White House was working to create an exemption to allow almost any employer to opt out of covering contraception on religious or moral grounds. Nationwide, women are increasingly turning to highly effective long-acting reversible contraceptives (LARCs) such as intrauterine devices (IUDs). The percentage of women aged 15 to 44 using LARCs increased nearly fivefold from 2002 to 2013. Decreased coverage for contraceptives translates to less widespread use and will likely mean more unintended pregnancies and abortions.

      And abortions will become harder to obtain. After Roe v. Wade, many states tried to put in place laws to hamstring abortion clinics. These efforts have only ramped up in recent years, as many states have enacted so-called TRAP laws (short for targeted regulation of abortion providers), unnecessarily burdensome regulations that make it very difficult for these clinics to operate. Recognizing this fact, the Supreme Court struck down some of these laws in Texas in 2016, but many are still in place in other states. Rather than making women safer, as proponents claim, these restrictions interfere with their Supreme Court-affirmed right to safely terminate a pregnancy.

      Whether or not the repeal-and-replace legislation passes this year, these attacks are part of a larger war on women’s health that is not likely to abate anytime soon. We must resist this assault. Never mind “America First”—it’s time to put women first.

Beyond DNA

[These excerpts are from an article by Gemma Tarlach in the September 2017 issue of Discover.]

      The study of ancient proteins, paleoproteomics, is an emerging interdisciplinary field that draws from chemistry and molecular biology as much as paleontology, paleoanthropology and archaeology. Its applications for understanding human evolution are broad:

      One 2016 study used ancient collagen, a common protein, to determine otherwise unidentifiable bone fragments as Neanderthal; another identified which animals were butchered at a desert oasis 250,000 years ago based on protein residues embedded in stone tools.

      Paleoproteomic research can also build evolutionary family trees based on shared or similar proteins, and reveal aspects of an individual’s physiology beyond what aDNA might tell us….

      Thanks to ancient proteins surviving far longer than aDNA — in January, one team claimed to have found evidence of collagen in a dinosaur fossil that’s 195 million years old — researchers are able to read those cheap molecular newspapers from deep time.

      The roots of paleoproteomics actually predate its sister field, paleogenomics. In the 1930s, archaeologists attempted (with little success) to determine the blood types of mummies by identifying proteins with immunoassays, which test for antibody-antigen reactions.

      A couple of decades later, geochemists found that amino acids, the building blocks of proteins, could survive in fossils for millions of years. But it wasn’t until this century that paleoproteomics established itself as a robust area of research.

      In 2000, researchers identified proteins in fossils using a type of mass spectrometer that, unlike earlier methods, left amino acid sequences more intact and readable. Much of today’s research uses a refined version of that method: zooarchaeology by mass spectrometry (ZooMS)….

      And in 2016, Welker, Collins and colleagues used ZooMS to determine that otherwise unidentifiable bone fragments in the French cave Grotte du Renne belonged to Neanderthals, settling a debate over which member of Homo occupied the site about 40,000 years ago. Given how closely related Neanderthals are to our own species, the researchers' ability to identify a single protein sequence specific to our evolutionary cousins is stunning.

      ZooMS is not a perfect methodology. Analyzing proteins within a fossil requires destroying a piece of the specimen, something unthinkable for precious ancient hominin remains.

      That’s why the most significant applications for ZooMS may be to identify fragmentary fossils and to learn more about ancient hominins’ environments — especially the ones they created….

A Matter of Choice

[These excerpts are from an article by Peg Tyre in the August 2017 issue of Scientific American.]

      …President Donald Trump’s secretary of education, Betsy DeVos, is preparing to give the scheme its first national rollout in the U.S. She has made voucher programs the centerpiece of her efforts to enhance educational outcomes for students, saying they offer parents freedom to select institutions outside their designated school zone. “The secretary believes that when we put the focus on students, and not buildings or artificially constructed boundaries, we will be on the right path to ensuring every child has access to the education that fits their unique needs,” says U.S. Department of Education spokesperson Elizabeth Hill.

      Because the Trump administration has championed vouchers as an innovative way to improve education in the U.S., SCIENTIFIC AMERICAN examined the scientific research on voucher programs to find out what the evidence says about Friedman’s idea. To be sure, educational outcomes are a devilishly difficult thing to measure with rigor. But by and large, studies have found that vouchers have mixed to negative academic outcomes and, when adopted widely, can exacerbate income inequity. On the positive side, there is some evidence that students who use vouchers are more likely to graduate high school and to perceive their schools as safe.

      DeVos’s proposal marks a profound change of direction for American education policy. In 2002, under President George W. Bush’s No Child Left Behind Act, the federal education mantra was “what gets tested, gets taught,” and the nation’s public schools became focused on shaping their curriculum around state standards in reading and math. Schools where students struggled to perform at grade level in those subjects were publicly dubbed “failing schools.” Some were sanctioned. Others were closed. During those years, networks of privately operated, publicly funded charter schools, many of them with a curriculum that was rigorously shaped around state standards, opened, and about 20 percent of them flourished, giving parents in some low-income communities options about where to enroll their child. While charter schools got much of the media attention, small voucher programs were being piloted in Washington, D.C., and bigger programs were launched in Indiana, Wisconsin, Louisiana and Ohio….

      Until now only a handful of American cities and states have experimented with voucher programs. Around 500,000 of the country’s 56 million schoolchildren use voucher-type programs to attend private or parochial schools. The results have been spotty. In the 1990s studies of small voucher programs in New York City, Washington, D.C., and Dayton, Ohio, found no demonstrable academic improvement among children using vouchers and high rates of churn—many students who used vouchers dropped out or transferred schools, making evaluation impossible. One study of 2,642 students in New York City who attended Catholic schools in the 1990s under a voucher plan saw an uptick in African-American students who graduated and enrolled in college but no such increases among Hispanic students.

      In 2004 researchers began studying students in a larger, more sustained voucher plan that had just been launched in Washington, D.C. This is the country’s first and so far only federally sponsored voucher program. There 2,300 students were offered scholarships, and 1,700 students used those scholarships mostly to attend area Catholic schools. The analysts compared academic data on those who did and did not opt for parochial school and found that voucher users showed no significant reading or math gains over those who remained in public school. But graduation rates for voucher students were higher: 82 percent compared with 70 percent for the control group, as reported by parents. A new one-year study of the Washington, D.C., program published in April showed that voucher students actually did worse in math and reading than students who applied for vouchers through a citywide lottery but did not receive them. Math scores among students who used vouchers were around 7 percentage points lower than among students who did not use vouchers. Reading scores for voucher students were 4.9 percentage points lower. The study authors hypothesized that the negative outcomes may be partly related to the fact that public schools offer more hours of instruction in reading and math than private schools, many of which cover a wider diversity of subjects such as art and foreign languages….

      Voucher proponents say parents, even those using tax dollars to pay tuition, should be able to use whatever criteria for school choice they see fit. A provocative idea, but if past evidence can predict future outcomes, expanding voucher programs seems unlikely to help U.S. schoolchildren keep pace with a technologically advancing world.

Life Springs

[These excerpts are from an article by Martin J. Van Kranendonk, David W. Deamer and Tara Djokic in the August 2017 issue of Scientific American.]

      …Some of the rocks are wrinkled orange and white layers, called geyserite, which were created by a volcanic geyser on Earth's surface. They revealed bubbles formed when gas was trapped in a sticky film, most likely produced by a thin layer of bacterialike microorganisms. The surface rocks and indications of biofilms support a new idea about one of the oldest mysteries on the planet: how and where life got started. The evidence pointed to volcanic hot springs and pools, on land, about 3.5 billion years ago.

      This is a far different picture of life's origins from the one scientists have been sketching since 1977. That was the year the research submarine Alvin discovered hydrothermal vents at the bottom of the Pacific Ocean pumping out minerals containing iron and sulfur and gases such as methane and hydrogen sulfide, surrounded by primitive bacteria and large worms. It was a thriving ecosystem. Biologists have since theorized that such vents, protected from the cataclysms wracking Earth's surface about four billion years ago, could have provided the energy, nutrients and a safe haven for life to begin. But the theory has problems. The big one is that the ocean has a lot of water, and in it the needed molecules might spread out too quickly to interact and form cell membranes and primitive metabolisms.

      Now we and others believe land pools that repeatedly dry out and then get wet again could be much better places. The pools have heat to catalyze reactions, dry spells in which complex molecules called polymers can be formed from simpler units, wet spells that float these polymers around, and further drying periods that maroon them in tiny cavities where they can interact and even become concentrated in compartments of fatty acids—the prototypes of cell membranes.

      …Charles Darwin had suggested, back in 1871, that microbial life originated in “some warm little pond.” A number of scientists from different fields now think that the author of On the Origin of Species had intuitively hit on something important….

      …simple molecular building blocks might join into longer information-carrying polymers like nucleic acids—needed for primitive life to grow and replicate—when exposed to the wet-dry cycles characteristic of land-based hot springs. Other key polymers, peptides, might form from amino acids under the same conditions. Crucially, still other building blocks called lipids might assemble into microscopic compartments to house and protect the information-carrying polymers. Life would need all the compounds to get started….

      Both on land and in the sea, chemical and physical laws have provided a very useful frame around this particular puzzle, and the geologic and chemical discoveries described here fill in different areas. But before we can see a clear picture of the origin of life, many more pieces need to be put in place. What is exciting, however, is that now we can see a path forward to the solution.

Technology as Magic

[This excerpt is from an article by David Pogue in the August 2017 issue of Scientific American.]

      We the people have always been helplessly drawn to the concept of magic: the notion that you can will something to happen by wiggling your nose, speaking special words or waving your hands a certain way. We've spent billions of dollars for the opportunity to see what real magic might look like, in the form of Harry Potter movies, superhero films and TV shows, from Bewitched on down.

      It should follow, then, that any time you can offer real magical powers for sale, the public will buy it. That’s exactly what’s been going on in consumer technology. Remember Arthur C. Clarke’s most famous line? “Any sufficiently advanced technology is indistinguishable from magic.” Well, I’ve got a corollary: “Any sufficiently magical product will be a ginormous hit.”

      Anything invisible and wireless, anything that we control with our hands or our voices, anything we can operate over impossible distances—those are the hits because they most resemble magic. You can now change your thermostat from thousands of miles away, ride in a car that drives itself, call up a show on your TV screen by speaking its name or type on your phone by speaking to it. Magic.

      For decades the conventional wisdom in product design has been to "make it simpler to operate" and “make it easier for the consumer.” And those are admirable goals, for sure. Some of the biggest technical advancements in the past 30 years—miniaturization, wireless, touch screens, artificial intelligence, robotics—have been dedicated to “simpler” and “easier.”

      But that’s not enough to feel magical. Real tech magic is simplicity plus awe. The most compelling tech conveniences—GPS apps telling you when to turn, your Amazon Echo answering questions for you, your phone letting you pay for something by waving it at that product—feel kind of amazing every single time.

      The awe component is important. It's the difference between magic and mere convenience. You could say to your butler, “Jeeves, lock all the doors”—and yes, that’d be convenient. But saying, “Alexa, lock all the doors,” and then hearing the dead-bolts all over the house click by themselves? Same convenience, but this time it’s magical.

Plastic-Eating Worms

[These excerpts are from an article by Matthew Sedacca in the August 2017 issue of Scientific American.]

      Humans produce more than 300 million metric tons of plastic every year. Almost half of that winds up in landfills, and up to 12 million metric tons pollute the oceans. So far there is no sustainable way to get rid of it, but a new study suggests an answer may lie in the stomachs of some hungry worms.

      Researchers in Spain and England recently found that the larvae of the greater wax moth can efficiently degrade polyethylene, which accounts for 40 percent of plastics. The team left 100 wax worms on a commercial polyethylene shopping bag for 12 hours, and the worms consumed and degraded about 92 milligrams, or roughly 3 percent, of it. To confirm that the larvae’s chewing alone was not responsible for the polyethylene breakdown, the researchers ground some grubs into a paste and applied it to plastic films. Fourteen hours later the films had lost 13 percent of their mass—presumably broken down by enzymes from the worms’ stomachs.
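[A quick check of the numbers reported above, as I read them: if 92 milligrams is roughly 3 percent of the bag, the bag weighed on the order of 92 mg ÷ 0.03 ≈ 3 grams, and 100 worms over 12 hours works out to a bit under 0.1 milligram of polyethylene degraded per worm per hour.]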

      When inspecting the degraded plastic films, the team also found traces of ethylene glycol, a product of polyethylene breakdown, signaling true biodegradation….

      …the larvae’s ability to break down their dietary staple—beeswax—also allows them to degrade plastic. “Wax is a complex mixture of molecules, but the basic bond in polyethylene, the carbon-carbon bond, is there as well….The wax worm evolved a mechanism to break this bond.”

      Jennifer DeBruyn, a microbiologist at the University of Tennessee, who was not involved in the study, says it is not surprising that an organism evolved the capacity to degrade polyethylene. But compared with previous studies, she finds the speed of biodegradation in this one exciting. The next step, DeBruyn says, will be to pinpoint the cause of the breakdown. Is it an enzyme produced by the worm itself or by its gut microbes? Bertocchini agrees and hopes her team's findings might one day help harness the enzyme to break down plastics in landfills, as well as those scattered throughout the ocean. But she envisions using the chemical in some kind of industrial process—not simply “millions of worms thrown on top of the plastic.”

Awash in Plastic

[These excerpts are from an article by Jesse Greenspan in the August 2017 issue of Scientific American.]

      Henderson Island, a tiny, unpopulated coral atoll in the South Pacific, could scarcely be more remote. The nearest city of any size lies some 5,000 kilometers away. Yet when Jennifer Lavers, a marine biologist at the Institute for Marine and Antarctic Studies in Tasmania, ventured there two years ago to study invasive rodent-eradication efforts, she found the once pristine UNESCO World Heritage Site inundated with trash: 17.6 metric tons of it, she conservatively estimates—pretty much all of it plastic. (The rubbish originates elsewhere but hitches a ride to Henderson on wind or ocean currents.) One particularly spoiled stretch of beach yielded 672 visible pieces of debris per square meter, plus an additional 4,497 items per square meter buried in the sand….

      By comparing the data with a study of the nearby Ducie and Oeno atolls conducted in 1991, the team extrapolated that there is between 200 and 2,000 times more trash on Henderson now than there was on those neighboring islands back then. Unidentifiable plastic fragments, resin pellets and fishing gear make up the bulk of the total, but the researchers also came across toothbrushes, baby pacifiers, hard hats, bicycle pedals and a sex toy. Thousands of new items wash up daily and make any cleanup attempt impractical, according to Lavers, who specializes in studying plastic pollution. Meanwhile many of the world's other coastlines could face a similar threat…

Find My Elephant

[These excerpts are from an article by Rachel Nuwer in the August 2017 issue of Scientific American.]

      How does one protect elephants from poachers in an African reserve the size of a small country? This daunting task typically falls to park rangers who may spend weeks patrolling the bush on foot, sometimes lacking basic gear such as radios, tents or even socks. They are largely losing to ivory poachers, as attested by the latest available data on Africa’s two species of elephant, both threatened: savanna elephant populations fell 30 percent between 2007 and 2014, and those of forest elephants plummeted by 62 percent between 2002 and 2011.

      To stem the losses, conservationists are increasingly turning to technology. The latest tool in the arsenal: real-time tracking collars, developed by the Kenya-based nonprofit Save the Elephants and currently being used on more than 325 animals in 10 countries. The organization’s researchers wrote algorithms that use signals from the collars to automatically detect when an animal stops moving (indicating it may be dead), slows down (suggesting it may be injured) or heads toward a danger zone, such as an area known for rampant poaching. Experimental accelerometers embedded in the collars detect aberrant behaviors such as “streaking”—sudden, panicked flight that might signal an attack. Unlike traditional tracking collars, many of which send geographical coordinates infrequently or store them onboard for later retrieval, these devices’ real-time feeds enable rangers to react quickly. In several cases, they have led to arrests….

      An early version of the program is being tested at four sites in Africa, with a 10-site expansion planned for September. At Lewa Wildlife Conservancy in Kenya, DAS is already seen as a game changer after its launch less than a year ago, says Batian Craig, director of 51 Degrees, a private company that oversees security operations at Lewa: “Being able to visualize all this information in one place and in real time makes a massive difference to protected-area managers.”

Aussie Invaders

[These excerpts are from an article by Erin Biba in the August 2017 issue of Scientific American.]

      The Australian government unleashed a strain of a hemorrhagic disease virus into the wild earlier this year, hoping to curb the growth of the continent’s rabbit population. This move might sound barbaric, but the government estimates that the animals—brought by British colonizers in the late 18th century—gnaw through about $115 million in crops every year. And the rabbits are not the only problem. For more than a century Australians have battled waves of invasive species with many desperate measures—including introducing nonnative predators—to little avail.

      Australia is not the only country with invasive creatures. But because it is an isolated continent, most of its wildlife is endemic—and its top predators are long extinct. This gives alien species a greater opportunity to thrive….But the Tasmanian tiger, the marsupial lion and Megalania (a 1,300-pound lizard) are gone. The only top predator left, the Australian wild dog, or dingo…is under threat from humans because of its predilection for eating sheep.

      Along with rabbits, Australia is trying to fend off red foxes (imported for hunting), feral cats (once kept as pets), carp (brought in for fish farms) and even camels (used for traversing the desert). Wildlife officials have attempted to fight these invaders by releasing viruses, spreading poisons, building thousands of miles of fences, and sometimes hunting from helicopters. In one famous case, the attempted solution became its own problem: the cane toad was introduced in 1935 to prey on beetles that devour sugarcane. But the toads could not climb cane plants to reach the insects and are now a thriving pest species themselves.

      Despite scientists’ protestations, the government plans to introduce another virus later this year to try reducing the out-of-control carp population….

Scientists Can’t Be Silent

[This excerpt is from an editorial by Christopher Coons in the August 4, 2017, issue of Science.]

      In an era of rapid technological change and an increasingly global economy, investments in research and development are crucial for spurring economic growth and sustaining competitiveness. Yet, across the U.S. federal government, scientists are playing a decreasing role in the policy-making that supports this investment, often being pushed out by a political agenda that is stridently anti-science. Meanwhile, Americans are becoming more distrustful of democratic institutions, the scientific method, and basic facts—three core beliefs on which the research enterprise depends. The United States remains the unquestioned global leader in science and innovation, but given a White House that disregards the value of science and an American public that questions the very concept of scientific consensus, sustaining the U.S. commitment to science won't happen without a fight.

      Many Americans take for granted the ways in which the United States supports its scientists, but that hasn’t always been the case. Before 1940, the United States had only 13 Nobel laureates in science. Since World War II, however, the country has won over 180 scientific Nobel Prizes, far more than any other nation. That's not a direct proxy for achievement, but it reflects a fundamental change in the way Americans understand the value of research. This transformation didn’t happen by accident. Immigration laws have allowed aspiring scientists from around the world to study and innovate in the United States. Long-term, sustained investments in research and development have been supported by a network of universities, national laboratories, and federal research institutions such as the National Institutes of Health. Strong intellectual property laws have evolved to protect groundbreaking ideas. These efforts haven’t just won Nobel Prizes. Federal investments in diverse scientific and human capital have unleashed economic growth, created tens of millions of jobs, and returned taxpayer money invested many times over.

      Sadly, media headlines (and recent election results) reflect a growing distrust of science and scientists. Look no further than efforts to undermine the nearly unanimous scientific consensus on the impacts of climate change, the use of genetically modified organisms, or the importance of vaccinations. These trends predate the current administration, but President Trump has already taken steps that threaten scientific progress. In its 2018 budget proposal, the White House is seeking to cut overall federal research funding by nearly 17%. Dozens of key scientific positions throughout federal agencies remain unfilled. The administration has sought to shutter innovative programs such as the Department of Energy's Advanced Research Projects Agency-Energy.

      How should the scientific enterprise respond? In August 1939, Albert Einstein wrote to President Franklin D. Roosevelt, urging him to monitor the development of atomic weapons and consider new investments and partnerships with universities and industry. Einstein's letter galvanized federal involvement in creating a world-class scientific ecosystem. Scientists today should follow Einstein's lead. They should make the case for science with the public through online communities and in local meetings and media. Scientists should fight for scientific literacy by advocating for science, technology, engineering, and mathematics (STEM) education, as well as for women and minorities in STEM fields.

Inevitable or Improbable?

[This book review of “Improbable Destinies” by Jonathan Losos, written by Adrian Woolfson, is in the July 28, 2017, issue of Science.]

      In their seminal book Evolution and Healing, Randolph Nesse and George C. Williams describe the design of human bodies as “simultaneously extraordinarily precise and unbelievably slipshod!” Indeed, they conclude that our inconsistencies are so incongruous that one could be forgiven for thinking that we had been “shaped by a prankster.”

      By what agency did this unfortunate state of affairs come into being, and how might we amend it? Gene editing and synthetic biology offer the possibility of, respectively, “correcting” or “rewriting” human nature, allowing us to expunge unfavorable aspects of ourselves—such as our susceptibility to diseases and aging—while enabling the introduction of more appealing features. The legitimacy of such enterprises, however, to some extent depends on whether the evolution of humans on Earth was inevitable.

      If our origin and nature were deterministically programmed into life’s history, it would be hard to argue that we should be any other way. If, on the other hand, we are the improbable products of a historically contingent evolutionary process, then human exceptionalism is compromised, and the artificial modification of our genomes may be perceived by some as being less of an affront to the natural order. In his compelling book Improbable Destinies, Jonathan Losos addresses this issue, recasting previous dialogues in the light of an experimental evolutionary agenda and, in so doing, arrives at a novel conclusion.

      Until recently, the evolutionary determinism debate focused on two contrary interpretations of an outcrop of rock located in a small quarry in the Canadian Rocky Mountains known as the Burgess Shale. Contained within the Burgess Shale, and uniquely preserved by as-yet-unknown processes, are the fossilized remains of a bestiary of animals, both skeletal and soft-bodied. These fossils are remarkable in that they appear to have originated in a geological instant 570 to 530 million years ago during the Cambrian. They comprise a bizarre zoo of outlandish body plans, some of which appeared to be unrepresented in living species.

      In his 1989 book Wonderful Life, the Harvard biologist Stephen Jay Gould argued that the apparently arbitrary deletion of distinct body plans in the Cambrian suggests that life's history was deeply contingent, underwritten by multiple chance events. As such, if the tape of life could be rewound back to the beginning and replayed again, it would be vanishingly unlikely that anything like humans would emerge again. The Cambridge paleontologist Simon Conway Morris, on the other hand, would have none of this.

      Citing a long list of examples to illustrate the ubiquity of convergence—the phenomenon whereby unrelated species evolve a similar structure—Conway Morris claimed that the evolution of humanlike organisms would be a near inevitability of any replay. In his scheme, articulated in 2003 in Life’s Solution, nature’s deep self-organizing forces narrowly constrain potential evolutionary outcomes, resulting in a relatively sparse sampling of genetic space.

      Losos closes the loop on this contentious debate, marshaling data from the burgeoning research area of experimental evolution. Unlike Darwin, who perceived the process of evolution to be imperceptibly slow and therefore inaccessible to direct experimentation, contemporary evolutionary biologists have realized that evolution can occur in rapid bursts and may consequently be captured on the wing.

      Given that microbes have an intergenerational time of 20 minutes or less, in 1988, the evolutionary biologist Richard Lenski reasoned that the bacterium Escherichia coli would comprise the perfect model experimental system to study condensed evolutionary time scales. Bacteria could additionally be frozen, allowing multiple parallel replays to be run again and again from any time point in their history. Twenty-eight years and 64,000 bacterial generations later, he concluded that the history of life owes its complexity both to repeatability and contingency.

      Losos and other investigators have demonstrated a similar degree of repeatability in the natural evolution of the Anolis lizard, three-spined sticklebacks, guppies, and deer mice. Importantly, however, when experimental populations evolve in divergent environments, novel outcomes are more commonly observed than convergence.

      These experiments were not a replay of the tape of life in time so much as a replay in space, but the findings were surprising in that they emerged within a relatively short time frame—a far cry from what one might have expected would be necessary to falsify the predictability hypothesis.

      Losos concludes that “both sets of forces—the random and the predictable ... together give rise to what we call history!” With this, humans are humbled once again, cast firmly into the sea of ordered indeterminism. Although he does not attempt to use this as a justification for human genomic modification, Losos argues that the genetic principles underlying life's multifarious convergent solutions might, among other things, be coopted to rescue imperiled species.

Nitrogen Stewardship in the Anthropocene

[These excerpts are from an article by Sybil P. Seitzinger and Leigh Phillips in the July 28, 2017, issue of Science.]

      Nitrogen compounds, mainly from agriculture and sewage, are causing widespread eutrophication of estuaries and coastal waters. Rapid growth of algal blooms can deprive ecosystems of oxygen when the algae decay, with sometimes extensive ecological and economic effects. Nitrogen oxides from fossil fuel combustion also contribute to eutrophication, and nitrous oxide, N2O, is an extremely powerful greenhouse gas (GHG)….climate change is worsening nitrogen pollution, notably coastal eutrophication. The results highlight the urgent need to control nitrogen pollution. Solutions may be found by drawing on decarbonization efforts in the energy sector.

      Increased precipitation and greater frequency and intensity of extreme rainfall events will see increased leaching and run-off—or nitrogen loading—in many agricultural areas….in the Mississippi-Atchafalaya River Basin, a business-as-usual emission scenario leads to an 18% increase in nitrogen loading by the end of the 21st century, driven by projected increases in both total and extreme precipitation. To counter this, a 30% reduction in anthropogenic nitrogen inputs in the region would be required. Farmers here already are trying to achieve a 20% loading reduction target imposed by the U.S. Environmental Protection Agency (EPA), requiring a 32% reduction in nitrogen inputs to the land. To offset the climate-induced boost to nitrogen pollution in addition to meeting this target would thus require a 62% reduction in nitrogen inputs, a colossal challenge for any farmer.
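[The arithmetic here, as I read it: offsetting the projected climate-driven increase in loading calls for about a 30% cut in nitrogen inputs, and meeting the existing EPA loading target calls for about a 32% cut, so the combined requirement is roughly 30% + 32% ≈ a 62% reduction in inputs.]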

      …regions with historically substantial rainfall, high nitrogen inputs, and projected robust increases in rainfall are most likely to experience increased coastal nitrogen loading. This includes great swaths of east, south, and southeast Asia, India, and China—regions where coastal eutrophication already occurs.

      Solutions to coastal eutrophication and climate change are closely intertwined. To stay below a 2°C increase in global average temperature will require attaining net zero GHG emissions soon after mid-century. To achieve this goal, fossil fuel combustion must plummet, thus also reducing production, deposition, and consequent eutrophication due to nitrogen oxides associated with fossil fuel combustion. However, nitrogen and carbon GHG emissions from agriculture (such as N2O and CH4) are some of the most difficult to reduce. In the energy sector, humanity needs to perform one action: Stop burning fossil fuels. And, at least in principle, there are clean-energy alternatives. In agriculture, many different actions are required. There is no alternative to eating.

      Nevertheless, we can begin to structure our thinking about solutions for the sector in a way that mirrors the fuel-switching solutions for other GHG sources: a rule of thumb that everyone from policy-makers to farmers, communities, and even children can comprehend and help put into practice….

      Although the challenges are great, successful commercialization of technologies currently in development and increased nitrogen efficiency in agriculture could help to reduce the pressures on coastal ecosystems and to reduce GHG emissions while permitting increased production to feed a growing population. There is potentially an exciting optimistic story to be told about global nitrogen stewardship in the Anthropocene.

We Still Need to Beat HIV

[These excerpts are from an editorial by Francois Dabis and Linda-Gail Bekker in the July 28, 2017, issue of Science.]

      Despite remarkable advances in HIV treatment and prevention, the limited political will and leadership in many countries—particularly in West and Central Africa and Eastern Europe—have fallen short of translating these gains into action. As a result, nearly 2 million infections occurred in 2016, creating a situation that is challenging to counter. This week in Paris, the International AIDS Society (IAS) convened researchers, health experts, and policy-makers to discuss the global state of this epidemic. It has been more than three decades since AIDS was clinically observed and associated with HIV infection. Since then, HIV has accounted for 35 million deaths worldwide. Today, about 37 million people are infected. IAS and the French Research Agency on HIV and Viral Hepatitis (ANRS) have now released the Paris Statement…to remind world leaders why HIV science matters, how it should be strengthened, and why it should be funded globally and durably so that new evidence can be translated into policy.

      The good news is that scientific breakthroughs have led to new biomedical interventions and practices and, consequently, substantive reductions in morbidity, mortality, and new infections. Full-scale implementation of these new measures could eliminate HIV/AIDS as a health crisis. The bad news is that this effort is fragile because funding is under threat, health care systems of many countries are not equipped to deliver, and political will and leadership are lacking. Even if current efforts reduce new infections by 90% over the next decade, there would still be 200,000 new infections each year, with a worldwide lifelong treatment target of 40 million individuals living with HIV.

      Antiretroviral therapy (ART) has been the catalyst for change in how HIV infection is treated and prevented. A single, once-a-day, multidrug tablet, with few side effects, has converted HIV from a death sentence to a chronic, manageable disease for millions. It reduces viral levels so that the risk of transmission plummets. ART also has made mother-to-child transmission a rare event in many places. Pre-exposure prophylaxis and voluntary male circumcision have joined other behavioral measures (condom promotion, universal testing, and lifestyle changes) as cornerstones of prevention. Yet, ART is a lifelong and costly regime, and many people in the developed and the developing world cannot access it. Many nations do not have the health care infrastructure or the community engagement to support the robust new ways to prevent transmission or diagnose infection. Although we have come far, the benefits of these new approaches are not universal….

      Containment of this infectious disease is attainable. We encourage all to join in the global advocacy for HIV/AIDS research and development.

Charlottesville, VA

[This letter to the MIT community was sent out on the morning of August 15, 2017, by L. Rafael Reif, the President of MIT.]

      History teaches us that human beings are capable of evil. When we see it, we must call it what it is, repudiate it and reject it.

      This weekend in Charlottesville, Virginia, we witnessed a strand of hatred. White supremacy and anti-Semitism, whether embodied by neo-Nazis, the Ku Klux Klan or others, are bankrupt ideologies with a wicked aim as plain as our need to repel it.

      The United States must remain a country of freedom, tolerance and liberty for all. To keep that promise, it must always remain a place where those who hold radically opposing views can voice them. However, when an ideology contends that some people are less human than others, and when that ideology commands violence in the name of racial purity, we must reject that ideology as evil.

      I write to you this morning because I believe that the events of this weekend embody a threat of direct concern to our community.

      A great glory of the United States is the enduring institutions and ideals of our civil society. The independent judiciary. The free press. The universities. Free speech. The rule of law. The belief that we are all created equal. Each one reinforces and draws strength from the others. When those pillars come under attack, society is endangered. I believe we all have a responsibility to protect them—with a sense of profound gratitude for the freedoms they guarantee.

      At MIT, let us with one voice reject hatred—whatever its form. Let us unite in mourning those who lost their lives to this struggle in Charlottesville. And let us work for goodwill among us all.

Territory

[These excerpts are from chapter 6 of The Parable of the Beast by John N. Bleibtreu.]

      Domestic animals have, over the course of their domestication, lost much of their reliance on territory for subsistence and mating opportunities. Any wild animal removed from its territory and placed in another, even though the new area may be superficially identical, undergoes a great emotional crisis. Zoo animals transported from a cage in a zoo in one city to a cage in another may often, in their fright and panic, do themselves physical injury, or go through a long period of weight loss and apathy in their new quarters. Fully adult animals, caught in the wild and brought into captivity, often do not survive this combined shock of losing a familiar ecological territory and all the rituals of behavior that are enacted within it. For a successful life in captivity, wild animals must be captured young before their patterns of life become enmeshed with that of their familiar territory. Domestic animals can be transported around from one racetrack or show ring to another with a minimum of psychic injury. This is also true of wild circus animals, which have never been allowed to become familiar with any special territory and must adapt to being without any territory whatever from an early age. But once an animal has become accustomed to a territory, even the reduced territory of a zoo cage, it is hard indeed for it to adjust to a new territory.

      Most mammals affiliate themselves with their environment by means of chemical sense-impressions—exceptions being the higher primates including man, and such marine mammals as whales, porpoises, seals, etc., and the occasional special case like the giraffe whose scent receptor is removed by height from likely contact with scent traces in its territory.

      They identify their territories by scent markers, which they deposit themselves. The systems of scent markings vary with their relative usefulness to the animal concerned. For those so constructed as to be able to travel fairly rapidly while at the same time keeping their noses to the ground (as for instance foxes, wolves, jackals, cats, hippopotamuses and rhinoceroses), scent traces are deposited on the ground. The most convenient scent traces are the feces and urine of the animal concerned. One “housebreaks” a domestic dog or cat by defining its territory in relation to that of its master. The interior of the house is the master’s territory; the animal's territorial orientation accommodates to this fact, and it restricts its fecal and urinary marking to its own territory elsewhere. Primates (including human infants) are notoriously difficult to housebreak. Territorial affiliation is not directly connected with olfactory marking, though a good case can be made for the persistence of marking habits, which among normal humans consist of optical markings (initials, etc.). In pathological mental states, however, they may well revert to fecal or urinary marking rituals.

      The treatment of feces and urine varies as usual with the species. Hippopotamuses and rhinoceroses distribute their feces; hippopotamuses with the special musculature of their tails, which whirl almost like miniature propellers. The rhinoceros roots about in his feces pile with his horn, distributing it and giving rise to an African legend that the rhinoceros has lost his horn in his feces and attempts to find it again—a legend that should be of some curious interest to depth psychologists, especially in view of the symbolic connection, which appears repeatedly in many cultures, associating the horn of the rhinoceros and human priapean energy. For many years, because of the fame of the rhinoceros’ horn as an aphrodisiac, it was hunted hard, almost to the point of extinction, vast prices being paid, in China particularly, for powdered rhinoceros horn….

      We still possess, in the structure of our shoulder bones and muscles, evidence that our ancestors were arboreal. The baboon, a terrestrial monkey, cannot hang from his arms as we can. Anthropologists surmise that our human ancestors, possibly several species of apes rather than merely one apish ancestor, descended from the trees at some later time than did the ancestors of the modern baboons. Fecal marking is not very useful to arboreal creatures; some of the lemurs mark with urine, others with special scent glands usually located under the tail. But in the higher reaches of the primate order, territories are established by visual recognition, a form almost unique among mammals.

      That traces of territorial marking instincts still persist among humans can be established by a wealth of detail. For example the “Kilroy Was Here” drawings of World War II are undoubtedly territorial markings. The keepers of public monuments fight a losing battle against the scrawled, carved, scratched legends that visitors leave. But perhaps the most directly territorial marking by humans is the urinal graffiti. As with animals, it is the male of the human species who is the most ardent marker; and the compulsion to mark insulting legends on the walls of urinals seemingly transects all economic classes and educational levels. We have all seen graffiti on walls where we should never expect to see them, in universities and exclusive clubs. Mating area markings are a commonplace among animals. Among tigers the female leaves a special urine trace on the male’s marking spot as a sexual invitation, and on their next encounter he is prepared for her presentation. Among humans the park benches and tree trunks adjoining a popular assignation area are overlaid with a mosaic of carved initials often joined with Cupid’s arrow and enclosed within the outline of a heart.

      Among humans the association between the related ideas of geographic and personal possession is demonstrated by the word property, which includes both real estate and personal belongings. For young American males of mating age the first territorial possession is not a geographic area but an automobile, the sexual significance of which is well understood by manufacturers and their advertising agents. In this respect the behavior of the American male is not very different from that of many birds and mammals. The acquisition of a territory is an indispensable commencement for any courtship activity. The American male must often get a car before he can get a mate.

Earthworms

[This excerpt is from chapter 5 of The Parable of the Beast by John N. Bleibtreu.]

      Space is far more subjectively appreciated by organisms than is time, which passes for us all, great beasts and tiny animalcules alike, in roughly the same way, since the circuit of the earth, the alternation of light and dark, affects all living things on this planet, directly or indirectly. The drive of an animal to acquire time by flight is readily seen by the most casual observer. The drive to acquire space is not quite so easily or readily seen. The discovery that there is no form of life that does not manifest some behavior that could be considered territorial has only lately come to the attention of biologists. Laboratory psychologists have been concerned with the functional basis of comparative systems of space perception, but even though von Uexküll pondered the problem at the beginning of this century, the study of the various behaviors by which the organism expressed this perception in its natural environment was slow in coming.

      In an attempt to discover whether the basic element of space-perception, form, was perceived as such by the eyeless earthworm, von Uexküll performed an experiment based on an observation of Darwin’s. “Darwin early pointed out,” wrote von Uexküll, “that the earthworm drags leaves and pine needles into its narrow cave. They serve it both for protection and for food. Most leaves spread out if one tries to pull them into a narrow tube petiole [stem] foremost. On the other hand, they roll up easily and offer no resistance if seized at the tip. Pine needles on the other hand, which always fall in pairs, must be grasped at their base, not their tip, if they are to be dragged into narrow holes with ease.

      “It was inferred from the earthworm's spontaneously correct handling of leaves and needles that the form of these objects which plays a decisive part in its effector world must exist as a receptor in its perceptual world.

      “This assumption has been proven false. It was possible to show that identical small sticks dipped in gelatine were pulled into the earthworms’ holes indiscriminately by either end. But as soon as one end was covered with powder from the tip of a dried cherry leaf, the other with powder from its base, the earthworms differentiated between the two ends of the stick exactly as they do between the tip and base of the leaf itself. Although earthworms handle leaves in keeping with their form, they are not guided by the shape, but by the taste of the leaves. This arrangement has evidently been adopted for the reason that the receptor organs of earthworms are built too simply to fashion sensory cues of shape. This example shows how nature is able to overcome difficulties which seem to us utterly insurmountable.”


You can see a video on earthworm activity at night by IMPACT Product Development at
ACTIVE EARTHWORMS
Insect Chemicals

[This excerpt is from chapter 5 of The Parable of the Beast by John N. Bleibtreu.]

      …Since what was known about chemical communication systems suggested that they transmitted information via a binary code, Butler hypothesized the following sequence: If the absence of the queen-chemical produced certain behaviors, then her presence, and the presence of this chemical, inhibited these behaviors. The logic may seem odd when related to human behavior, but one could say about humans that if the absence of oxygen produces suffocation, then the presence of oxygen inhibits suffocation.

      Suspecting, from the delay in the communal reaction to the presence or absence of the queen, that chemical information about her presence or absence was transmitted along with food particles from bee to bee in the ritual act of food sharing, Butler's first step was to exclude other factors which could conceivably be operative. In one adroit experiment he proved that neither the sight of the queen in the flesh, nor sounds made by her, nor a generalized scent exuded by her which might have pervaded the entire hive, informed the colony of her presence. He enclosed the queen in her own colony within a double-walled cage of wire mesh, through which no other bee could make physical contact with her. Within a matter of hours, though the members of the colony could see her, hear her, and smell her, the colony reacted with the classic symptoms of queen loss.

      Butler’s next step was to begin a detailed study of the various kinds of physical contact exchanged between the queen and the members of her colony. The queen is normally surrounded by a suite or an entourage. These words are the technical terms used to describe this social arrangement—a refreshing change from the usual run of bastardized Latin generally employed in scientific usage. This entourage generally contains ten or twelve nurses. Butler wished first to determine whether these nurses composed a special caste within the colony, or whether any worker bee could perform nurse duties. To this end he removed the queen with a forceps, placing her in various random locations within the hive. Each time this was done, the suite disbanded as the queen was removed, the nurses going about other worker tasks immediately, while at the new location a new suite formed itself around the queen immediately. In order to gather harder statistics (the kind that impress other scientists), he repeated the basic experiment, except that instead of using crude forceps he devised a complicated, electrically-powered transport cage, which moved the queen about the hive at a predetermined pace. Using this procedure, he demonstrated that information concerning the presence of the queen was transmitted via the nurses who comprised her entourage to other bees via ritual food sharing, and from these others to still others, until, like a chain letter, the number of bees receiving a chemical trace of the queen’s presence multiplied geometrically.

      The suite of the queen bee surrounds her completely. The bees directly in front of the queen feed her continuously. Those bees to her rear and sides stroke and caress her with their antennae or lick at her with their proboscises. Butler discovered that those bees who caressed the queen with their antennae then touched antennae with other members of the colony when they exchanged food. Those who fed the queen took in this queen substance via their proboscises and distributed it in the same way to others during food sharing. In 1962, ten years after Ribbands had stimulated his curiosity, Butler finally isolated the queen bee substance. He found that it was produced by the mandibular glands of the queen, and distributed all over her body through the act of grooming. The molecule, as he finally identified it, had a remarkable chemical resemblance to the ovary-inhibiting hormone of prawns. Butler, suspecting that its ovary-atrophying effects might well work on other phyla, experimented with various arthropods and vertebrates in hopes that this substance might be a hormone of universal application, but sorrowfully, he found no support for this supposition.

      It would be very interesting to discover whether or not the literal communication entity transmits an identical meaning across phylum barriers; whether, in effect, certain words in the chemical language mean the same thing to different classes of animals. In some instances, we know that they do. Insect alarm signals, the chemical exudations which comprise warning signals, are grossly similar in classes of insects. This seems to be generally true in animal communication. Courtship signals, for example, whether auditory, visual (behavioral), chemical, or a combination, seem to be highly specific. No other bird will respond affirmatively to a blue jay's courtship call, except another blue jay. But almost all the birds and mammals within range of its alarm call will respond alertly as the blue jay shrieks its warning through the woods. The same is true of insect chemical communication, and because of this it was possible for humans to devise insect repellents (which are essentially distillations of insect chemical alarm signals) which are effective across a wide spectrum of genera. The average personal insect repellent acts upon such a disparate collection of creatures as chiggers, ticks, and mosquitoes, perhaps others as well.

Gypsy Moth

[This excerpt is from chapter 5 of The Parable of the Beast by John N. Bleibtreu.]

      During the summer of 1869 a French painter and amateur naturalist named Leopold Trouvelot took a house for the season in Medford, Massachusetts—at 27 Myrtle Street, to be exact. The address has become notorious in the annals of entomology. M. Trouvelot wanted to do some illustrations of silkworms in various stages of their development, and to that end had imported several eggs from Europe. He left them on a table in his work-room and waited for the larvae to emerge. It must have been a warm summer, and the table on which the eggs were located must have been near a window. Perhaps the rustling of a curtain blowing in the wind brushed them from M. Trouvelot’s worktable and out the window into the yard. M. Trouvelot missed the eggs on the afternoon of their disappearance and made efforts to find them, for among the eggs were several belonging to a European moth, Porthetria dispar, known in England as the gypsy moth, perhaps because the color of the wings of the male imago resembles the color of tanned, windburned skin. Knowing full well this moth’s European reputation as a destroyer of leafy trees, M. Trouvelot “gave public notice of [the eggs’] disappearance.” It was a pity that the public took no notice at the time.

      Within two years of this event, a vigorous colony of the moths had established itself on Myrtle Street and the citizens of the town of Medford found themselves living in a nightmare.

      There was a public hearing sometime later at which citizens spoke of their impressions of that summer of 1871. One Mrs. Belcher reported as follows: “My sister cried out one day, ‘They [the caterpillars] are marching up the street!’ I went to the front door and sure enough the street was black with them coming across from my neighbor, Mrs. Clifford, and heading straight for our yard.”

      A Mrs. Spinner stated: “I lived in Cross Street . . . and in June of that year [1871] I was out of town for three days. When I went away the trees in our yard were in splendid condition and there was not a sign of insect devastation upon them. When I returned there was scarcely a leaf upon the trees. The gypsy moth caterpillars were over everything.” A neighbor of Mrs. Spinner, Mr. D. M. Richardson, said: “The gypsy moth appeared in Spinner's place in Cross Street and after stripping the trees there, started across the street. It was about five o'clock in the evening that they started across in a great flock and they left a plain path across the road. They struck into the first apple tree in our yard and the next morning I took four quarts of caterpillars off one limb.” Mrs. Hamlin continues: “When they got their growth, these caterpillars were bigger than your little finger and would crawl very fast. It seemed as if they could go from here to Park Street in less than half an hour.”

      Mr. Sylvester Lacy reported that: “I lived in Spring Street . . . and the place simply teemed with them and I used to fairly dread going down the street to the station. It was like running a gauntlet. I used to turn up my coat collar and run down the middle of the street. One morning in particular, I remember that I was completely covered with caterpillars inside my coat as well as out. The street trees were completely stripped down to the bark. . . . The worst place on Spring Street was at the houses of Messrs Plunket and Harman. The fronts of these houses were black with caterpillars and the sidewalks were a sickening sight, covered as they were with the crushed bodies of the pest.”

      The destruction in Medford was horrible. The landscape looked like a burned-out battlefield. In their desperation the insects had devoured even the grasses down to the bare dirt. The Massachusetts National Guard was called out to drench the countryside with Paris green, an arsenic compound. Artillery caissons were converted into spray wagons, but the moths continued to flourish.

      The plague spread very slowly. Female moths, though they did not appear markedly different from the males, are flightless. When they emerge from the pupa case, they can crawl only a few feet away from the spot where they are sought out by flying males and fertilized, so the new generation of eggs is laid not far from the previous spot.

      Dr. Charles Fernald, the resident entomologist at the Massachusetts State Agricultural Station at Hatch, was the first professional zoologist to become involved with the moth. He was not at home when the first delegation of citizens from Medford appeared on his doorstep with specimens of the caterpillar and the moth. His wife searched vainly through his collection of American moths in an attempt to identify the animal, and it was finally his young son who discovered Linnaeus’ original description of the moth in a European book. Fernald found the havoc caused by the moth utterly incredible until he performed some tests of his own with lettuce leaves and found that a fully mature caterpillar would consume well over ten square inches of foliage in one twenty-four hour feeding period.

      He wondered how the male moths located their flightless females. Was it random accident? The vernacular name for the moth in France was “Zigzag,” which described accurately their erratic, wavering flight. Fernald suspected there might be some truth to the folklore account that it was a scent exuded by the female which attracted the males, and that their lumbering flight pattern had evolved in response to this need. In traveling from one point to another the moth lumbers wildly off course, to and fro and up and down: it would thus more likely encounter air-borne scent traces than if it flew sharp and straight like a swallow.

      Fernald devised a trap, a wire-mesh box with a funnel entrance, which would admit insects but not permit them exit, and using live females for bait, he attracted numbers of males into it. He began using these traps as a gauge of infestation in any given area. By placing the traps in an open location and leaving them for several days, he could get an impression of the population density from the number of males captured. By this time the plague of caterpillars had spread from Medford to other Massachusetts towns, probably carried by travelers such as Mr. Lacy who dreaded going to the railroad station covered with caterpillars, but went nonetheless.

      In all of this work by Fernald and in the work to follow, there was very little of the philosophical speculation which so marked the studies stimulated by von Uexküll or even Marston Bates working with his anopheles. It was all eminently practical. By 1915 the moths had spread as far north as Maine, and in August of that year entomologists had become curious about the range of this sex attractant. Traps containing live females were placed on several uninhabited islands off the coast of Maine, in Casco Bay. In one trap placed on Outer Green Island, better than two miles from the nearest moth, and left there for three weeks, two males were found when it was retrieved. In another trap placed on Billingsgate Island off the coast of Cape Cod, four males were found, though this trap was two and a quarter miles from the nearest point of infestation.

      To assure themselves that these males had actually flown the distance and not been blown by winds into the general vicinity of the traps, follow-up tests were conducted in a closed, airstill room twelve by fourteen feet, and the moth’s flight speed was timed at approximately 150 feet per minute. Virgin males rarely flew uninterruptedly for much more than a mile before alighting, but a small number of sexually experienced males flew constantly for several days, covering well over two miles before eventually collapsing in death.

      This is curious in that prior sexual experience would seem to be a powerful motivating force for males. Fernald timed a number of gypsy moth copulations and reported that they ranged in time from twenty-five minutes to three hours and eighteen minutes, with the average time being about one and a quarter hours. He writes that “after mating, the male is quite stupid but in about one-half hour regains his normal activity” and is quite eager to mate again.

      The next episode in the annals of the study of the sex stuff of the gypsy moth is almost comic in its foolishness. For the amount of sophisticated effort that was lavished upon obtaining the information, very little was finally learned. But the gigantic power of the United States Government was now bent upon the project, and the elephant labored to produce the mouse. By 1927 the gypsy moth was no longer a regional problem; it had become a national concern, and the United States Department of Agriculture was asked to aid in its control.

      It was imperative now for the Government to obtain a census of the moth. Fernald’s system of using live females as bait in traps was considered too dangerous: if, through some inadvertence, a fertile female should escape, the horrors of Medford might be re-enacted somewhere else. Two government entomologists, Charles W. Collins and Samuel F. Potts, were assigned the task of devising an extract of female scent to be used for bait. After dissecting thousands of moths and baiting traps with sections of tissue taken from various parts of the female anatomy, they finally discovered (not to anyone's great surprise) that the scent was produced in the tissues immediately surrounding the female genitalia, particularly the area immediately surrounding the opening of the copulatory pouch. But though the Government contracted the services of several eminent university chemistry professors, no one was able to isolate and identify the chemical responsible. However, the Government is obsessed with the establishment of standards and specifications, and the notion of utilizing female genitalia (even if only that of a moth) as official United States Government equipment must have been repellent to the authorities in charge of the program, for thirty years were spent in the attempt to isolate the chemical in its pure form. Finally in 1960 three chemists, Martin Jacobson, Morton Beroza, and William A. Jones, working at the Beltsville, Maryland, United States Government Agricultural Research Station, succeeded. They had dissected the genitalia of well over half a million female moths in order to determine the official specifications for the scent bait to be used in United States Government gypsy moth census traps. For the record, the name of the chemical is dextrorotatory 10-acetoxy-1-hydroxy-cis-7-hexadecene.


You can see a video on gypsy moths by YouTube at
GYPSY MOTHS
Marine Worms

[This excerpt is from chapter 5 of The Parable of the Beast by John N. Bleibtreu.]

      Since the most primitive existing organisms inhabit the sea, one of the earliest and most curious studies of chemical communication was conducted by two marine biologists, Frank Lillie and Ernest Everett Just, in 1913. One summer night during the dark of the moon they set out from the Woods Hole Biological Laboratory in a rowboat with a carbide lamp swinging from the bow. They wanted to know more about a most peculiar phenomenon, the swarms of minute marine worms which covered the surface of the sea at certain times like a red carpet, rising and falling in the gentle ocean swell. Lillie wrote the report: “The swarming usually begins with the appearance of a few males, readily distinguished by their red anterior segments and their white sexual segments darting rapidly through the water in round paths in and out of the circle of light cast by the lantern. The much larger females then begin to appear, usually in small numbers, swimming laboriously through the water. Both sexes rapidly increase in numbers during the next fifteen minutes and in an hour or an hour and a half all have disappeared into the night.” Lillie and Just imagined that this assembly was connected with reproduction. Males and females were obviously swollen with sperm and eggs; but they were unable to observe the mating act under these field conditions, so they captured several specimens and brought them back to the laboratory, putting them in separate containers to see what would happen. They believed it likely that a lunar rhythm controlled both the assembly and the actual spawning activity. They wanted to see whether the females dropped their eggs first, to be fertilized immediately thereafter by the males shedding sperm, or whether perhaps the sequence would be reversed. But they were disappointed. Nothing happened until, as is so often the case in science, there was a procedural accident. “One day,” Lillie writes, “a male was dropped accidentally into a bowl of seawater which had previously contained a female. He immediately began to shed sperm and swam round and round the bowl very rapidly casting sperm until the entire 200 cubic centimeters was opalescent.” The female had obviously left some chemical trace of her presence in the water, which stimulated the male. But the female had not previously shed any eggs into this water. Lillie then placed a female in the bowl in which the male had shed his sperm, and the female immediately cast her eggs. Lillie and Just were unable to extract or identify this substance. They did determine, however, that it was highly species-specific. The worm they worked with was Nereis limbata, and when they attempted to repeat the experiment with closely related species of worms, there was no response. This particular chemical language was understood only by N. limbata. No other worm could comprehend it. In 1951 a team of German biochemists attempted to analyze this (as they called it) “sex stuff” without too much success. They did discover that it was a protein substance which oozed out from the entire body of the egg-bearing female. This stimulates the male into shedding sperm. Chemical elements in the semen stimulate the female into casting her eggs into this floating mass of sperm. The system is very effective. It does not require any auditory or visual communication system.

Malaria and Mosquitoes

[These excerpts are from chapter 4 of The Parable of the Beast by John N. Bleibtreu.]

      No less an authority than Sir William Osler, the famous British physician and medical historian, has called malaria “the single greatest destroyer of mankind,” and considering the vast array of ills that human flesh is heir to, including the malice of other humans, the odd little parasite which produces the disease has accomplished a remarkable record. It may yet prove to have been the single greatest destroyer of vertebrates generally, and therefore one of the most potent pruning hooks of the natural selection process, for contrary to popular notion, it is neither restricted to the tropic zones nor to warm-blooded mammals. Fish, reptiles (particularly lizards), and birds have all been discovered suffering from the disease. It has spread so far from the tropics, which may well have been its original birthplace, that penguins living well below the Antarctic circle have been discovered suffering from the disease. Though in this latter instance, instead of a mosquito vector being the intermediate host, a louse or mite is suspected of carrying the parasite.

      Several authorities believe it has had an incalculable effect on human history, particularly the development of modern civilizations in the temperate zones where there is a shorter summer activity season for the adult female mosquito. The disease is also considered partially responsible for the decline of several great Mediterranean cultures, particularly that of Athens, between the fifth and third centuries B.C., where the literature abounds with excellent clinical descriptions of the classic symptoms. It may have worked analogously in other animal phyla, sapping its victims of energy, creating brain damage with resulting sluggish neurological reactions, so that many populations of animals which were superbly fitted to cope with other elements of their environment may have perished because of their inability to withstand the ravages of this disease. Paul F. Russell, Chairman of the World Health Organization's Committee on Malaria, estimates that the total number of contemporary cases of human malaria, as recently as 1952, ran about 350 million, or roughly 6.3 per cent of the world’s population….

      In its vertebrate host, the animal produces the major portion of its population inside the red blood corpuscles. There is a little-understood transitional phase when the animal enters the host and resides in one of the organs (the spleen and liver are favored locations in primates), and there may be some reproduction—enough to give the population a start—somewhere other than in the red blood cell. But this latter location is where the vast bulk of the population resides. The red corpuscles are living cells shaped like a concave disk, composed mainly of hemoglobin. They are produced by specialized cells within the marrow of the long bones of the body. The corpuscles live for about fifty to seventy days, and upon dying, their corpses are destroyed by processes occurring in the liver, the spleen, and the lymph nodes. These latter organs are the first to suffer from the effects of the disease. They become enlarged and overworked in an attempt to cope with the increased mortality of corpuscles, and if the parasite population is not stabilized by one means or another, they ultimately fail in their functions. But before this happens, other unpleasant effects are felt. Hemoglobin, the stuff on which the parasite lives, is an oxygen transport material transferring the oxygen taken in by the lungs to the various body tissues requiring it. If a person dies of malaria, his body literally smothers to death. The capillaries become distended, clogged with dead and dying cells, blood flow is impeded, and the brain—one of the most voracious users of oxygen—will be damaged by oxygen deprivation. If the damage is not too extensive, the brain can transfer functions from destroyed tissues to intact ones, but in many cases of malaria fatality, the mortal blow to the individual was dealt in the brain itself.

      The parasite enters into its vertebrate host as a tiny hairlike creature, called a sporozoite, which matured within the stomach of a female mosquito. The sporozoite is injected into the vertebrate as the mosquito spits its irritating saliva into the wound created by its proboscis mouth. In the late nineteenth century Alphonse Laveran, a French medical officer stationed in a military hospital in Algeria, first discovered the creature. Considering the tremendous technological advances which have occurred since then, surprisingly little more is known about the parasite….

      The crescent bodies which Laveran described were the asexually reproducing forms of the protozoa now known as schizonts. This schizont form is the one that the animal assumes to best exploit its vertebrate host. As it grows within the red blood cell, it produces forms called pseudopodia—literally false feet—which grow out in seemingly haphazard fashion from any part of the original schizont until the cell is completely filled with this Medusa-head mesh of living matter. These false feet now break away from one another; and as this happens they are given another name, merozoites. They become active as a swarm of snakes, and the blood cell ruptures under the pressure of their writhing. This cycle may take anywhere from one to three or four days—the period required is one of the species criteria. As the corpuscle ruptures, the merozoites escape into the surrounding plasma and each one actively squirms its way into another cell. Sometimes, if the population is dense, two or three may enter a single cell, but in that case future growth is stunted. The number of these merozoites which derive from any given schizont is also a species criterion. In the most virulent human form of the disease, the production averages about sixteen merozoites per schizont.

      Almost the entire population of parasites reproduces simultaneously; this causes the classic symptoms of malaria—a sudden chill which leaves the victim blue-lipped and shaking with an ague which can last a few minutes or an hour, and which is followed immediately by a furious rise in temperature, up to 107°F, often accompanied by delirium, frightful aches in the bones, vomiting, etc. This paroxysm ends after about four hours, and is followed by a relatively tranquil sweating stage during which the patient may fall into an exhausted sleep. The victim may then be relatively without symptoms until the next period of schizont reproduction, which occurs two or three or four days hence, depending on the species of parasite. The majority of these paroxysms occur, at least in the tropic forms of malaria, after midnight and before dawn. The reproductive cycle of the parasites is tuned to the favored time for the mosquitoes to seek out food. In temperate zones, or locations where the mosquitoes feed in the daytime or early evening, the symptoms of the disease occur then.

      For some reason that we still do not understand, a few members of this parasite population do not assume the asexual schizont form. These are the “oval forms” described by Laveran, and it is in their interest that the cycle of the schizonts occurs when it does, during the mosquito's favored feeding time, for if these oval forms remain in the blood of a vertebrate they will perish. They are sexual animals, and in order to complete their mating activities, they require a special setting, the unique and particular environment that obtains within the gut of a female mosquito. Not any mosquito—but a particular species of mosquito—and it was this host preference of the malaria parasite that led obtuse human entomologists to separate into species categories several populations of anopheles mosquitoes that had appeared identical.

      The particular set of conditions which prevail within the mosquito gut have never been reproduced in the laboratory, so no one has any idea what these requirements may be. But the full cycle of parasite mating activities can only occur in this setting. Some of the preliminary transformations of these oval forms into free-living sexual animals can occur outside the mosquito’s gut—on a microscope slide—though it is not believed that they can occur in the bloodstream of the vertebrate host. The transformation happens fairly quickly, within a span of ten minutes or so. Some of the oval bodies contain male spermatozoa; they burst, rupturing the corpuscle as they do, and releasing a swarm of swallow-tailed sperm into the surrounding plasma. These active creatures, which swim about lashing their double tails, are known as microgametes. The other oval bodies are females. Under the microscope, if one is careful about the staining procedures, one can differentiate between the two types of oval bodies. One can sometimes see within the oval envelope which surrounds the microgametes the compressed community of males, swarming and writhing, at least at a certain stage of maturity shortly before the males are released.

      The female oval bodies, known as macrogametocytes, appear dense and more coherent; they also grow larger and finally explode out from the cell which surrounds them through the sheer increase in bulk. Once released from the blood cell they are mobile, capable of some movement; given a surface to cling to, they screw themselves about with a hideous scrunching worm-like movement. In the mosquito gut, and occasionally on a microscope slide, they will develop still further, growing an odd humping protuberance, which seems to exert an attraction for the males. If any microgametocyte finds himself in the vicinity of this hump, he lashes himself toward it and enters it like a battering ram. Then genetic materials are exchanged and the female is fertilized. As this happens, she becomes endowed by scientists with still another name, oocyte, and from this point on any further development must occur within the gut of an appropriate species of mosquito.

      As the mosquito sucks up its meal of blood, it takes into its gut, along with the sexual forms of plasmodium, the asexual forms, schizonts and any free-swimming merozoites which happen to be about. These creatures are digested along with the ingested vertebrate’s blood. Only the sexual forms resist digestion.

      After being fertilized, the female screws herself deeply into the mosquito’s gut wall where, if conditions are proper, she will be enclosed within a cyst. She now grows to a huge size, large enough to be visible to the naked eye. She mushrooms through the mosquito’s gut wall, and continues growing from the other, outer, side. Dissecting a mosquito to remove the gut is not as difficult a procedure as may be imagined. It can be done by anyone who knows the trick with two ordinary pins on any smooth surface. The gut comes right out like a tiny piece of brown spaghetti, and the mature oocytes can clearly be seen protruding like mushrooms on small stalks from its outer wall. Eventually these oocytes burst, releasing a horde of small hairlike animals similar in appearance to microgametes. But their lashing tails, instead of extending from the rear of the body, extend fore and aft; the creature looks like a transparent snake with its opaque head carried amidships. The sporozoites now swim actively around within the fluids inside the mosquito’s body cavities until eventually some of them enter the salivary glands. Once there they seem content to remain even though, after a period of time, the population may become quite dense. There they collect and wait for the mosquito to spit them out along with its irritating saliva into the body of a vertebrate animal.

      It is believed by some malariologists that sporozoites have the ability to penetrate tissue, finding their way into the nearest capillary by burrowing right through the flesh. It is conceivably possible for a person to contract malaria even if he is not bitten by a mosquito—even if he squashes the insect on his skin before it has a chance to bite, the sporozoites may still be able to enter into his bloodstream and infect him. Once in the bloodstream the parasites migrate rapidly to the liver or the spleen (in the case of primates), where they begin reproducing and eventually the population spills out into the red blood cells where the parasites travel throughout the body as a whole, each enclosed in its hospitable capsule, a red blood cell.

      Interesting as this recital of the malaria parasite sequence may be, both in itself and as an example of parasitic adaptation, it may still seem far removed from the species problem. Yet it was largely through the concerted efforts of several malariologists that the modern “sexual” concept of the species, the so-called New Systematics, developed. Mounting field trips into remote areas with special equipment to study animal sexual relationships is an expensive business. While involved in their mating activities, many animals are peculiarly vulnerable to predators; they make strenuous efforts to accomplish the mating act under conditions of great privacy—in darkness, in inaccessible locations, etc.—and at this time more than any other, they resist observation.

      But since detailed and intimate knowledge of the entire generative cycle of both the mosquito and its infectious parasitic passenger was of such crucial importance to the health of mankind, no expense was spared in sending men and equipment anywhere in the world where the mosquito was suspected to exist so that all the information possible about the mosquito’s habits generally, its round of daily activities—its feeding habits, resting habits, those habits connected with copulation and oviposition—could be obtained for the purpose of destroying the mosquito more effectively and economically; and with their destruction, hopefully, control of the disease they carry.

      As with most blood-sucking insects, it is the female who requires the blood for the maturation of her eggs. The male anopheles drinks the juices of plants and fruits. In the laboratory it feeds happily on apples and raisins; the skins of these fruits represent the limit of the penetrating powers of its proboscis, which is blunter and more flaccid than the female’s.

      The female proboscis is a complicated instrument composed of comparatively rigid members—the jaws, which have grown together into a tube during the course of evolution. The lips remain flexible members; the insect uses them as a retractable guide-sleeve while introducing the proboscis into tissue--for the act of biting is an introduction, an insinuation; it is not a stab. The tip of the proboscis, for the final third of its length, is more flexible than the rest; something like the tip of a fishing rod, and once the main entrance into the tissue has been made, this tip searches for a capillary at an angle of about 45°, first probing in this direction, then being withdrawn and redirected into another direction, and so on, until it strikes a capillary. Then the saliva which contains some decongestant properties is injected to “thin out” the blood and the insect begins sucking it up, often taking blood in the amount of its own body weight. After this it flies away to a resting place where it shall remain for at least two days (depending on species) until the blood is digested and the eggs matured. The next act is that of oviposition, and for this the insect must fly to a body of water (a rain-filled hoofprint is enough) to lay its eggs, for the larva which shall emerge from the egg is an aquatic creature.

      It has been reckoned that the lifespan of the average mature female anopheles is somewhere around a week or so. One laboratory specimen has been maintained in the lab for eighty-six days, during which time it had six blood meals and laid six clutches of eggs; but this is considered exceptional by malariologists. The general feeling is that only the rare mosquito has more than two blood meals during the course of her lifetime. Here is another example of how absurdly unfavorable the environment of the plasmodium parasite would seem to be: the time required for the development of the oocysts in the gut of the mosquito varies with environmental temperature, shortening as the temperature rises, but on the average it seems to take about a week. The average parasite has then only one opportunity to re-establish its population within a new vertebrate host. And yet the prevalence of the disease testifies to the operational effectiveness of this seemingly unforgiving system. One chance for success is all that most animals get, and this one chance is sufficient.

      The species problem only entered into malaria studies as the carrier of Europe, Anopheles maculipennis, began to be intensively studied during the 1920’s and 1930’s. A. maculipennis is unmistakable, a small, rather darkly speckled insect sitting high on its spindly legs and pitched at a distinctly downward angle. Its rearmost legs are normally held up in the air off the surface, even when the insect is resting, not preparing to bite, and the body is tilted downward as though it intended to stand on its head. Most resting mosquitoes carry their body horizontal to the surface. As malariologists all over Europe concentrated their attention on A. maculipennis, certain inconsistencies became apparent. The first disturbing note appeared in 1920 when the French malariologist Emile Roubaud published a paper noting the abundance of A. maculipennis in many parts of France where malaria had never been reported. Quite independently, in the next year, Carl Wesenberg-Lund reported the same situation in Denmark, as did Battista Grassi for Italy. Wesenberg-Lund believed that his Danish mosquitoes had changed their food preferences. Roubaud and Grassi were closer to the truth in their speculations; they believed they had discovered a new “race” or subspecies of maculipennis. They believed they were dealing with a “race” rather than a new species, because there was nothing in the appearance of the adult to differentiate these benign insects from the disease carriers.

      But then a Dutch entomologist, Nicholas H. Swellengrebel, given financial support for his studies as the result of an outbreak of malaria in the Netherlands, found that there was a difference between the carriers and the benigns. One could not discern it by comparing any two individuals from either population, but statistically he was able to determine average differences in the length of wing in each population; he named one of them, the suspected malaria carriers, “shortwings” (which has since been made over into formal Latin as atroparvus), and the other, the benign population, “longwings.” Assisted by a colleague, Abraham de Buck, he went on, in the manner of a good ethologist, to discover that between the populations the entire repertoire of behaviors differed: their feeding habits, adult mating habits, their larval breeding places—everything differed except their appearance. Linnaeus’ doctrinal shroud still blinded both these men, and they hesitated (they wrote that they considered it “inadvisable”) to give separate Latin species names to these two populations. The next step was obvious: would they interbreed and would the offspring be fertile?

      The shortwings would and did. Males would buzz any resting female within a cage no matter how small it was, and mount her. Matings between female longwings and male shortwings produced infertile offspring like horse and donkey matings. This was certainly diagnostic of a species difference. But the longwing males could not be induced to mate in small cages, or in large outdoor cages. They simply would not mate in captivity. This was curious. Were the pursuit of this curious phenomenon to be conducted purely for purposes of enlightenment, we would probably still be waiting for the answer. But a dread disease had struck a civilized nation, and so investigations leading to the solution of this problem were supported by serious men sitting on the boards of reputable institutions; it was no longer an eccentric obsession on the part of a handful of entomologists.


You can see a video on fighting malaria by Politifact at
FIGHTING MALARIA
Evolution of Species

[This excerpt is from chapter 4 of The Parable of the Beast by John N. Bleibtreu.]

      Today it is taken for granted that evolution occurs; the idea of phylogenetic development is accepted as is the idea of ontogenetic development. But our understanding of the time scales involved is completely missing. We simply have no idea how long it takes for almost anything to happen in evolution—or why rates should be so uneven. It would seem that certain populations of animals are halted in a hiatus of attentive expectancy for very long periods of time. In our order of primates, for example, there are the families of tree shrews, lemurs, lorises, tarsiers; for the most part squirrel-like insectivores which have not changed nearly so radically from their fossil forebears as have we. Darwin’s hypothesis, coupled with the discoveries of modern genetics, has given us a retrospective comprehension of what must have happened. We know that branches grow apart from the trunks of trees, and that their individual growth will depend in part on the vicissitudes of their particular location; they will either flourish if they have access to light and space, or remain stunted if they are deprived. The analogy persists in the “family trees” of animal relationships, and insofar as evolutionary theory goes, the most fascinating part of the process is the one that occurs right at the point of branching.

      How does it occur? Geographic isolation is not the answer. “San Francisco Bay,” Ernst Mayr writes, “which keeps the prisoners of Alcatraz isolated from the other [human] inhabitants of California, is not an isolating mechanism, nor is a mountain range or a stream that separates two populations that are otherwise able to interbreed.”

      The species problem, when finally examined here, at its roots involves an understanding of sexual behavior, of compatibility and incompatibility. What happens is that suddenly a splinter portion of any given population no longer chooses to interbreed with the main body. The members of this splinter party suddenly begin to interbreed exclusively with one another, rejecting potential mates from outside the group.

      As a result they share the genetic memory of their communal experiences with one another and develop their particularities aloof from the parent population. In the past fifty years the fallacies inherent in the Linnean system of classification on the basis of appearance have caused zoologists to formulate a classification based on something other than appearance. Appearance can vary with the individual. For example, prior to 1950 the weasels of North America were classified into twenty-two different species. A patient zoologist, Eugene R. Hall, after observing them for a long period, in 1951 published a 466-page paper which finally convinced his fellow taxonomists that the weasels of North America really belonged to only four separate species. There appeared the typical species gap between four weasel populations. All the rest of the varying animals were merely subspecies, or races.

      How then is the species defined if not on the basis of appearance? How is this gap between populations perceived? It is perceived in sexual terms. Ernst Mayr defines the species as follows: “The species, finally, is a genetic unit consisting of a large interconnecting gene pool.” It is the word interconnecting which is the operative word in this definition. So long as the pool is interconnecting, the population is capable of fermenting—producing its own interior variations. Only when it ceases to be interconnecting, when a discontinuous splinter becomes a separate fragment, does it become a species embarked on the road toward extraordinary differentiation—such differentiation for example as exists between the gibbon, the gorilla, and ourselves.

Pineal Body

[This excerpt is from chapter 2 of The Parable of the Beast by John N. Bleibtreu.]

      The pineal body is a small grey or white structure about a quarter of an inch long and weighing about a tenth of a grain in man. It is located at the very top of the spinal column where the neck enters the skull. It is the only structure in the brain which is not bilaterally symmetrical. Draw a mid-line down the brain from front to back and everything appearing on one side of this mid-line is duplicated on the other—except for the pineal body. This fact alone has always made it distinctive and a curiosity to anatomists. It is shaped roughly like a pine cone—therefore its name. Descartes was not entirely original in his description of the pineal body as the site of the soul. The Greek anatomist Herophilus of the fourth century B.C. described the pineal as a “sphincter which regulated the flow of thought.”

      Herophilus in turn may very well have come by his notion of pineal function from India, where speculation about this apparently useless appendage to the brain stretches back into the darkness of prehistory, perhaps as far as 3,500 years.

      The Hindus recognized something about the pineal body which had escaped the intuition of Western anatomists from Herophilus to Descartes right up to the year 1886, when by apparent coincidence two monographs on the subject were published independently, one in German by H. W. de Graaf, and one in English by E. Baldwin Spencer. The Hindus recognized from the very beginning that the pineal body was an eye! As such it is represented in oriental art and literature as the Third Eye of Enlightenment.

      The third eye of certain lizards and fish was well known; the eye was unmistakable in the Sphenodon genus of lizard, which contains the famous species Tuatara of New Zealand. In this creature the third eye is marked distinctly by the skull being opened in a central cleft, and the external scales arranged in a kind of rosette with a transparent membrane in the center. When dissected, this organ proved to have all the essential features of an eye--there was a pigmented retina, which surrounded an inner chamber filled with a globular mass analogous to a lens, but the connecting nerves were absent, and the anatomists of the nineteenth century decided that the organ was without visual function in the Tuatara. It was primarily De Graaf and Spencer who proved that this organ in the Tuatara lizards was the same organ that became buried and was rendered indistinct in function in mammals and was known as the pineal body.

      In 1958 two zoologists, Robert Stebbins and Richard Eakin from the University of California at Berkeley, did ethological studies on the common western fence lizard (Sceloporus occidentalis), capturing two hundred animals, removing the pineal eyes of one hundred, and performing a sham operation equally traumatic, but which left the pineal eye intact in the other hundred. They found that removing the pineal eye markedly affected behavior, particularly escape reactions. After recovering from the operation, the animals were released in their natural habitat, whereupon Stebbins and Eakin chased and tried to capture them. Before 10 A.M., they were able to capture 63 per cent of the pinealectomized lizards as opposed to 37 per cent of the sham operated animals. After 10 A.M. and until nightfall the percentage of captures was more equal, though a small balance of the sham operated animals always managed to escape better than the pinealectomized ones. This experiment proved that, contrary to all the previously held notions about the vestigial nature of this third eye—that it was a remnant as useless to lizards as the appendix supposedly is to humans—possession of an intact pineal eye has a marked survival value. It was most emphatically not nonfunctional.
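      For readers who like to check the arithmetic, the 63 per cent versus 37 per cent figure can be run through an ordinary two-proportion test. The short Python sketch below does so; because the excerpt reports only percentages, the denominators of one hundred animals per group are a reconstruction for illustration, not counts taken from the paper.

import math

# Morning recaptures as reported above. The denominators of 100 per group
# are assumed for illustration; the excerpt gives only percentages.
caught_pineal, n_pineal = 63, 100   # pinealectomized lizards captured
caught_sham, n_sham = 37, 100       # sham-operated lizards captured

p1 = caught_pineal / n_pineal
p2 = caught_sham / n_sham
p_pool = (caught_pineal + caught_sham) / (n_pineal + n_sham)

# Standard two-proportion z statistic.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_pineal + 1 / n_sham))
z = (p1 - p2) / se

print(f"difference = {p1 - p2:.2f}, z = {z:.1f}")
# Prints a z of about 3.7, far beyond the usual 1.96 cutoff, so under these
# assumptions the morning gap between the two groups is very unlikely to be
# a matter of chance.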

      In mammals the pineal body was recognized to be a gland by the early Latin anatomists, who rated it more important than the pituitary, which is now considered the “master gland.” The Latin writers named the pineal “glandula superior” and the pituitary “glandula inferior,” but from the nineteenth century until just within the past ten years modern anatomists have been divided into two distinct camps—those who believed the pineal was an endocrine gland producing some hormonal excretion, and those who believed it was a useless vestigial appendage of the brain, a leftover reptilian eye. It was not until 1958 that the secretion of the pineal gland was isolated and identified, and the importance of the pineal as a time-sensor in humans finally established. Strangely enough this tremendously important work was done by a dermatologist, Aaron B. Lerner of the Yale University Medical School, and not by an endocrinologist. The story of how this came to pass is sufficiently curious to warrant a brief diversion.

The World within Us

[This excerpt is from chapter 2 of The Parable of the Beast by John N. Bleibtreu.]

      Partly, this disregard of this literally painful biological fact of life stems from the ancient Judeo-Christian dichotomy, the separation of body and mind into quite separate compartments. The bodily effects of such traveling are well known to athletes who usually schedule their arrival well in advance of competition so as to allow a period of recuperation. Yet businessmen or diplomats often allow themselves no more than a quick sprucing up at a hotel before being whisked off to meet the opposition.

      The mind is as much in the body as the body is in the world. The body penetrates the mind just as the world penetrates the body. We like to believe, since we see ourselves as enclosed within a shield of skin, that we are demarcated from the world by this envelope of skin, just as a theater curtain separates the audience from the stage before the performance. But the skin is a porous membrane. Electrically and chemically the world moves right through us as though we were made of mist.

      By and large we are unaware of the presence of the outside world within us. We are even more unconscious of the breathing of the skin’s pores than we are of the intake and exhaling of the lung's breathing. We do not feel the penetration of cosmic particles. This part of the world is all but unknown to most of us. And yet it is as the world enters into us, with its force and influence, that we become one with it.

Sand Fleas

[This excerpt is from chapter 2 of The Parable of the Beast by John N. Bleibtreu.]

      An Italian zoologist, Floriano Papi of the Zoological Institute of the University of Pisa, had become interested in the sand flea, Talitrus saltator. Though called a flea, this tiny creature is not an insect, but a crustacean distantly related to the shrimp. It normally inhabits that wet band of beach sand which is just out of the battering reach of the surf. During the bright part of the day, the late morning and early afternoon, it remains hidden in underground tunnels, obtaining its food by filtering plankton from the seawater which soaks through and collects in its tunnels. But around sunset the sand fleas emerge from underground and embark upon a strange migration, traveling inland for a distance of one hundred meters or more. Papi was fascinated by several aspects of this migration. It appeared to be purposeless, at least in any economic sense. It was not a food search, for the flea is unequipped to ingest any solid food. Differing numbers of animals make the trip each night. During certain periods large numbers of animals make the voyage, while at other times only a few may be found. Papi could not discover any associated phenomenon that correlated with this change in numbers. It did not seem to occur in connection with any easily observed astronomical occurrences, and Papi suspected that perhaps weather-induced barometric or humidity changes may have triggered more fleas at one time than another.

      Around sunrise the fleas return to the beach and the water’s edge. But if they are interrupted at any point during the course of their migration, they will attempt to escape threat by returning to the sea. Papi found that they returned to the sea unerringly, even when he caught them and transported them to a different location where familiar landmarks were absent. Strong offshore winds scattered the sand and continually reshaped it into different patterns, so that even under normal circumstances it would be difficult for them ever to locate themselves by means of a stable set of Euclidean reference points the way we humans do. Suspecting that perhaps sand fleas sensed the presence of the sea by some stimulus such as the sound of the surf, or differences in humidity, or scent, Papi caught some fleas and transported them across the boot of Italy to the western shore, the Tyrrhenian seacoast, where he released them a few meters inland from the surf line. He was astonished to discover that instead of proceeding directly toward the nearby beach, they struck off the way they had come, apparently determined to march overland for more than one hundred kilometers in order to reach their familiar Adriatic beach.


You can see a video on catching sand fleas at
CATCHING SAND FLEAS
Dancing Bees

[These excerpts are from chapter 2 of The Parable of the Beast by John N. Bleibtreu.]

      This discovery came by accident. “When I wish,” he wrote, “to attract some bees for training experiments, I usually place upon a small table several sheets of paper which have been smeared with honey. Then I am often obliged to wait for many hours, sometimes for several days, until finally a bee discovers the feeding place. But as soon as one bee has found the honey, more will appear within a short time—perhaps as many as several hundred. They have all come from the same hive as the first forager; evidently this bee must have announced its discovery at home.” How was this knowledge communicated?

      Von Frisch first located the hive. Then he made arrangements to identify each individual bee, so as to determine whether there were communicators of information and receivers of information, or whether all bees in the hive were equally able to give out and receive information. He devised a two-digit color code for the abdomen and a single-digit color code for the thorax. He suspended pigments in shellac and placed dots of color on each animal until he could identify each of 999 bees in a hive.
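      The arithmetic behind that number is worth a second look. Two colour dots on the abdomen and one on the thorax behave like a three-digit number; if each dot can take roughly ten distinguishable values (an assumption made here purely for illustration, since the excerpt does not describe von Frisch's actual palette or dot positions), the scheme can label just about a thousand individuals. A minimal sketch of that idea:

# Hypothetical illustration of a three-dot numbering scheme. Ten
# distinguishable values per dot is an assumption for this sketch,
# not von Frisch's documented palette.
VALUES_PER_DOT = 10
capacity = VALUES_PER_DOT ** 3      # 10 * 10 * 10 = 1000 combinations
print(capacity - 1)                 # 999, matching the count in the excerpt

def dots_for(bee_number):
    """Encode a bee number as (abdomen hundreds, abdomen tens, thorax units)."""
    assert 0 <= bee_number < capacity
    hundreds, rest = divmod(bee_number, 100)
    tens, units = divmod(rest, 10)
    return hundreds, tens, units

print(dots_for(427))                # -> (4, 2, 7)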

      He then moved the colony into an artificial hive with glass walls, so he could observe the activity within. He noted that when a bee discovers a new source of food, it returns to the hive and “begins to perform what I have called a round dance. On the same spot, he turns around, once to the right, once to the left, repeating these circles again and again with great vigor. Often the dance continues for half a minute or longer at the same spot.” The dancer may then move to another part of the hive and repeat the dance.

      Von Frisch also noted another dance, which he called “the tail-wagging dance.” In this performance the bees “run a short distance in a straight line while wagging the abdomen very rapidly from side to side; then they make a complete 360° turn to the left, run straight ahead once more, turn to the right and repeat this pattern over and over again.”

      The deciphering of the coded message of these dances became frustratingly difficult. After von Frisch finally succeeded in breaking them, in the late 1940's, they seemed simple enough. But the initial difficulties centered around the fact that human language is digital, while animal communication is analog. Language is symbolic, and though we humans make analogies (slippery as an eel, strong as an ox, etc.), the analogies are rarely built into the structure of language. For example, if I say I am sad, or I am angry, the precise degree of sadness or anger is not known at this point. Even the addition of an adverb such as very angry, or somewhat sad is not enough. To communicate the degree, the exact point upon a sliding scale, we must resort, like animals, to analog communication devices—tones of the voice, facial expressions, gestures, postures, etc. In mathematical terms a slide rule is an analog computer, whereas an adding machine is a digital computer. When using the slide rule, the analogy or relationship of one scale to another is immediately and directly visible, while in an adding machine digits must be mechanically piled on top of one another, or removed from the pile, and no relationships are apparent.

      Since conventional code-breaking techniques were devised for deciphering digital symbolic communications, von Frisch was compelled to begin at the very beginning, trying to find the analogy within the pantomime of the bee's dance and the location of the food source. It took him many years of frustrating, painstaking effort.

      Now that we know the key, the problem seems childishly simple. But it only became apparent to von Frisch gradually, as he began moving his food source progressively farther from the hive. After the food was moved more than fifty meters from the hive, the bees stopped indicating its direction by means of the round dance, and transferred to the tail-wagging dance. Von Frisch found there were correlations between the duration of the dance and the flying time needed to reach the source. Tail winds or headwinds would alter the length of the performance even though the food source was equidistant in terms of meters.

      At last it finally happened—the connection was at last made between directional space and circadian rhythms. Von Frisch writes of this momentous discovery quite matter-of-factly: “When we watched the dances over a period of several hours, always supplying sugar at the same feeding place, we saw the direction of the straight part of the dances was not constant, but gradually shifted so that it was always quite different in the afternoon from what it had been in the morning. More detailed observations showed that the direction of the dances always changed by approximately the same angle as the earth’s rotation and the apparent motion of the sun across the sky.”

      …The next step was simple enough. The movement of the sun was regular but nonetheless unstable. If the bee were to indicate direction by analogy to the position of the sun, there would have to be some stable convention by which the analogy could be presented in dance form. Von Frisch noted that the dance always took place upon a vertical surface. So the next great leap into the breaking of the code came when von Frisch hypothesized that within the hive, in darkness, the one stable directional indicator might be the force of gravity. Working from this assumption, he finally succeeded in breaking the code completely. “If,” he writes, “the run points straight down, it means ‘Fly away from the sun to reach the food.’ If during the straight portion of the dance the bee heads 60° to the left of vertical, then the feeding place is situated 60° to the left of the sun.”
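      The convention lends itself to a simple translation rule: the angle of the straight waggle run, measured from vertical on the comb, is the angle of the food's bearing measured from the sun's azimuth. The small sketch below expresses that rule; the sign convention (positive angles to the left of vertical) and the example numbers are assumptions chosen for illustration, not values from von Frisch's work.

def food_bearing(sun_azimuth_deg, dance_angle_deg):
    """Translate a waggle-run angle into a compass bearing toward the food.

    sun_azimuth_deg -- compass bearing of the sun (0 = north, 90 = east)
    dance_angle_deg -- angle of the straight run measured from vertical on
                       the comb; positive means to the left of vertical,
                       negative to the right (an assumed sign convention)

    Straight up (0) means "fly toward the sun"; straight down (180) means
    "fly away from the sun"; 60 to the left means "fly 60 degrees to the
    left of the sun", as described in the excerpt.
    """
    return (sun_azimuth_deg - dance_angle_deg) % 360

# Example: with the sun due south (azimuth 180) and a run 60 degrees to the
# left of vertical, the food lies 60 degrees to the left of the sun, that is,
# at a compass bearing of 120 degrees.
print(food_bearing(180, 60))    # -> 120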

      However, there was another complication. Even on cloudy days, when the position of the sun was not apparent to the bees, they continued to communicate the location of food sources by the analogy of the dance. His first thought was that the bees were aware of the azimuth of the sun even when it was invisible….He tested this assumption by providing artificial sunbeams—by moving the experimental hive into the shade and diverting sunbeams from the wrong direction with a mirror—and discovered that by doing this, he could disorient the bees. Yet artificial light sources, such as a powerful flashlight beam, would not disorient them.

      At this point, in the early 1940’s, von Frisch’s work could have taken two directions: he could have explored the mystery of circadian rhythms, the mechanics of the innate sense of time—for the bees would necessarily have to be aware of the regular movement of the sun in order for them to utilize this movement as a navigational aid—or in conformity with the principle of Occam’s razor, he could assume the more simple explanation, that no complex time sense need exist innately within the bee, that merely some optical property of sunlight made the position of the sun visible to the bee when it was shrouded from human eyes by cloud cover.

      Von Frisch chose the latter course. Optics had entranced him from his youth; he had begun working with honeybees in the first place to disprove von Hess's contention that they were color-blind. His thirty-year involvement with breaking the code of their dance was kind of a side trip from his major goal. Perhaps he was happy to find a new optical problem to solve. At any rate, he writes: “Light rays coming directly from the sun consist of vibrations that occur in all directions perpendicular to the line along which sunlight travels. But the light of the blue sky has not reached us directly from the sun; it has first been scattered from particles in the atmosphere. This diffuse, scattered light coming from the sky is partially polarized, by which we mean that more of it is vibrating in one direction than in others.”


You can see a video on dancing bees by Smithsonian at
DANCING BEES
Daphnia

[This excerpt is from chapter 1 of The Parable of the Beast by John N. Bleibtreu.]

      …A very large number of organisms spend the larger part of their lives in the haploid phase—algae and fungi are examples. However, it may not be construed that the haploid phase of life—even human life—is sterile, incapable of producing diploid cells, of forming a fetus. This phenomenon, the production of diploid from haploid cells, is known as parthenogenesis and takes place quite routinely and normally in several animals. Perhaps the best known is Daphnia, a fresh-water crustacean commonly found in seasonal puddles of water in temperate zone meadows, marshes, and other similar places. These animals survive the winter in the form of eggs encased in hard leathery armor. Alternate freezing and thawing in the spring cracks these egg cases and releases the tiny shrimplike animals, which are all females. They produce young, other females, and the colony thrives and increases during the warm summer months without any need of male intervention. The transition, or metamorphosis, from haploid to diploid is completely asexual. The haploid phase is therefore in no way incomplete. It is fully capable of producing adult forms with a total complement of structural and behavioral traits.

      It is theoretically possible for human females to be similarly produced by human females, since all eggs, human as well as Daphnia eggs, produce, during the process of maturation, another genetically complete but much diminished copy of themselves known as the Polar Body. In mammals this polar body is normally expendable; it is thrown off by the egg and becomes absorbed by the tissues of the ovary. In Daphnia, however, this polar body is reintegrated into the egg, playing the role of sperm and supplying the missing genetic materials. As fall approaches and the Daphnia colony nears the end of its active locomotor phase of life, males appear. The present hypothesis holds that the shortened span of daylight causes females to produce them, though their production can be induced by chemical changes in the water as well as alterations of light. So far as is known, the sole function of males is to produce the hard leathery egg cases which only appear as a result of heterosexual copulation. The eggs produced without male intervention are soft-shelled and defenseless.

Behaviorism

[This excerpt is from chapter 1 of The Parable of the Beast by John N. Bleibtreu.]

      A great deal of valuable information was obtained from the Watson and the Pavlov studies, but since they were motivated exclusively by pragmatic considerations, they contained their own inbuilt limitations of application. The frightening social and political consequences of communities based on these models of behavior were lampooned by Aldous Huxley in Brave New World, and by George Orwell in 1984.

      In Europe animal behavior studies took an entirely different direction. They were conducted by zoologists, not psychologists. The emphasis of post-Darwinian zoology has been on evolution, on answering the very simple, basic question: How did living things get to be the way they are now? As to how forms acquired their present appearance, part of the answer is hopefully to be found in the fossil record. At least the sequence of the appearance of transitional forms is to be found there. But behavior cannot be fossilized. It can only be deduced; from the shape of an animal’s teeth, for example, one can deduce whether it was herbivorous or carnivorous, and from that, deduce, in general, its style of life—whether it was a grazer, browser, or hunter. But the hard answers to these questions come from witnessing the acts of life themselves, not from the circumstantial evidence which is often misleading.

Cattle Tick

[These excerpts are from chapter 1 of The Parable of the Beast by John N. Bleibtreu.]

      The cattle tick is a small, flat-bodied, blood-sucking arachnid with a curious life history. It emerges from the egg not yet fully developed, lacking a pair of legs and sex organs. In this state it is still capable of attacking cold-blooded animals such as frogs and lizards, which it does. After shedding its skin several times, it acquires its missing organs, mates, and is then prepared to attack warm-blooded animals.

      The eyeless female is directed to the tip of a twig on a bush by her photosensitive skin, and there she stays through darkness and light, through fair weather and foul, waiting for the moment that will fulfill her existence. In the Zoological Institute, at Rostock, prior to World War I, ticks were kept on the ends of twigs, waiting for this moment for a period of eighteen years. The metabolism of the creature is sluggish to the point of being suspended entirely. The sperm she received in the act of mating remains bundled into capsules where it, too, waits in suspension until mammalian blood reaches the stomach of the tick, at which time the capsules break, the sperm are released and they fertilize the eggs which have been reposing in the ovary, also waiting in a kind of time suspension.

      The signal for which the tick waits is the scent of butyric acid, a substance present in the sweat of all mammals. This is the only experience that will trigger time into existence for the tick.

      The tick represents, in the conduct of its life, a kind of apotheosis of subjective time perception. For a period as long as eighteen years nothing happens. The period passes as a single moment; but at any moment within this span of literally senseless existence, when the animal becomes aware of the scent of butyric acid it is thrust into a perception of time, and other signals are suddenly perceived.

      The animal then hurls itself in the direction of the scent. The object on which the tick lands at the end of this leap must be warm; a delicate sense of temperature is suddenly mobilized and so informs the creature. If the object is not warm, the tick will drop off and reclimb its perch. If it is warm, the tick burrows its head deeply into the skin and slowly pumps itself full of blood. Experiments made at Rostock with membranes filled with fluids other than blood proved that the tick lacks all sense of taste, and once the membrane is perforated the animal will drink any fluid, provided it is of the right temperature.

      The extraordinary preparedness of this creature for that moment of time during which it will re-enact the purpose of its life contrasts strikingly with the probability that this moment will ever occur. There are doubtless many bushes on which ticks perch, which are never passed by a mammal within range of the tick’s leap. As do most animals, the tick lives in an absurdly unfavorable world—at least so it would appear to the compassionate human observer. But this world is merely the environment of the animal. The world it perceives—which experimenters at Rostock called its umwelt, its perceptual world—is not at all unfavorable. A period of eighteen years, as measured objectively by the circuit of the earth around the sun, is meaningless to the tick. During this period, it is apparently unaware of temperature changes. Being blind, it does not see the leaves shrivel and fall and then renew themselves on the bush where it is affixed. Unaware of time it is also unaware of space, and the multitudes of forms and colors which appear in space. It waits, suspended in duration for its particular moment of time, a moment distinguished by being filled with a single, unique experience; the scent of butyric acid.

      Though we consider ourselves far removed as humans from such a lowly insect form as this, we too are both aware and unaware of elements which comprise our environment. We are more aware than the tick of the passage of time. We are subjectively aware of the aging process; we know that we grow older, that time is shortened by each passing moment. For the tick, however, this moment that precedes its burst of volitional activity, the moment when it scents butyric acid and is thrust into purposeful movement, is close to the end of time for the tick. When it fills itself with blood, it drops from its host, lays its eggs, and dies….

      The man who coined the term umwelt, who examined the perceptual system of the cattle tick, and who is considered by many to be the father of ethology, was an eccentric Baltic baron named Jakob Johann von Uexküll.

A Cirrus Cloud Climate Dial?

[These excerpts are from an article by Ulrike Lohmann and Blaz Gasparini in the July 21, 2017, issue of Science.]

      Climate engineering is a potential means to offset the climate warming caused by anthropogenic greenhouse gases. Suggested methods broadly fall into two categories. Methods in the first category aim to remove carbon dioxide (CO2) from the atmosphere, whereas those in the second aim to alter Earth’s radiation balance. The most prominent and best researched climate engineering approach in the second category is the injection of atmospheric aerosol particles or their precursor gases into the stratosphere, where these particles reflect solar radiation back to space. Climate engineering through cirrus cloud thinning, in contrast, mainly targets the long-wave radiation that is emitted from Earth.

      Wispy, thin, and often hardly visible to the human eye, cirrus clouds do not reflect a lot of solar radiation back to space. Because they form at high altitudes and cold temperatures, cirrus clouds emit less long-wave radiation to space than does a cloud-free atmosphere. The climate impact of cirrus clouds is therefore similar to that of greenhouse gases. Their long-wave warming (greenhouse) effect prevails over their reflected solar radiation (cooling) effect….
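      The long-wave argument can be made concrete with the Stefan-Boltzmann law. The sketch below compares the emission of a warm surface with that of a cold cirrus cloud top; the temperatures (288 K and 220 K) are round illustrative values chosen here, not figures from the article, and both are treated as ideal black bodies, which real cirrus is not.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(temperature_k):
    """Ideal black-body emission per unit area, in W/m^2."""
    return SIGMA * temperature_k ** 4

surface = blackbody_flux(288.0)     # a typical surface temperature
cirrus_top = blackbody_flux(220.0)  # a cold, high cloud top

print(f"surface ~ {surface:.0f} W/m^2, cirrus top ~ {cirrus_top:.0f} W/m^2")
# Roughly 390 versus 133 W/m^2: a scene capped by cold cirrus radiates far
# less to space, which is why these clouds warm on balance and why thinning
# them is proposed as a way to let more long-wave radiation escape.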

      If cirrus thinning works, it should be preferred over methods that target changes in solar radiation, such as stratospheric aerosol injections, because cirrus thinning would counteract greenhouse gas warming more directly. Solar radiation management methods cannot simultaneously restore temperature and precipitation at present-day levels but lead to a reduction in global mean precipitation because of the decreased solar radiation at the surface. This adverse effect on precipitation is minimized for cirrus seeding because of the smaller change in solar radiation.

Sulfur Injections for a Cooler Planet

[These excerpts are from an article by Ulrike Niemeier and Simone Tilmes in the July 21, 2017, issue of Science.]

      Achieving the Paris Agreement’s aim to limit the global temperature increase to at most 2°C above preindustrial levels will require rapid, substantial greenhouse gas emission reductions together with large-scale use of “negative emission” strategies for capturing carbon dioxide (CO2) from the air. It remains unclear, however, how or indeed whether large net-negative emissions can be achieved, and neither technology nor sufficient storage capacity for captured carbon are available. Limited commitment for sufficient mitigation efforts and the uncertainty related to net-negative emissions have intensified calls for options that may help to reduce the worst climate effects. One suggested approach is the artificial reduction of sunlight reaching Earth's surface by increasing the reflectivity of Earth's surface or atmosphere.

      Research in this area gained traction after Crutzen called for investigating the effects of continuous sulfur injections into the stratosphere—or stratospheric aerosol modification (SAM)—as one method to deliberately mitigate anthropogenic global warming. The effect is analogous to the observed lowering of temperatures after large volcanic eruptions. SAM could be seen as a last-resort option to reduce the severity of climate change effects such as heat waves, floods, droughts, and sea level rise. Another possibility could be the seeding of ice clouds—an artificial enhancement of terrestrial radiation leaving the atmosphere—to reduce climate warming.

      SAM technologies are presently not developed. Scientists are merely beginning to grasp the potential risks and benefits of these kinds of interventions….However, different models consistently identify side effects; for example, the reduction of incoming solar radiation at Earth’s surface reduces evaporation, which in turn reduces precipitation. This slowing of the hydrological cycle affects water availability, mostly in the tropics, and reduces monsoon precipitation….

      Currently, a single person, company, or state may be able to deploy SAM without in-depth assessments of the risks, potentially causing global impacts that could rapidly lead to conflict. As such, it is essential that international agreements are reached to regulate whether and how SAM should be implemented. A liability regime would rapidly become essential to resolve conflicts, especially because existing international liability rules do not provide equitable and effective compensation for potential SAM damage. Such complexities will require the establishment of international governance of climate intervention, overseeing research with frequent assessments of benefits and side effects.

      Climate intervention should only be seen as a supplement and not a replacement for greenhouse gas mitigation and decarbonization efforts because the necessary level and application time of SAM would continuously grow with the need for more cooling to counteract increasing greenhouse gas concentrations. A sudden disruption of SAM would cause an extremely fast increase in global temperature. Also, SAM does not ameliorate major consequences of the CO2 increase in the atmosphere, such as ocean acidification, which would continue to worsen.

Golding on Education

[These excerpts are from On the Crest of the Wave by William Golding.]

      But what has happened to the woman [Education]?...her face is fretted with the lines of worry and exasperation. The hand across the shoulders holds a pair of scales so that the children can see how this thing in this pan weighs more than the other one. A pair of dividers hangs from her little finger so that they can check that this thing is longer than that. Her right hand still points into the dawn, but the little girl is yawning; and the boy is looking at his feet. For the worry and exasperation in Education’s face are because she has learnt something herself: that the supply of teachable material is as limited as the supply of people who can teach; that neither can be manufactured or bought anywhere; and lastly, most important of all — the thought that turns her exasperation into panic — she is pointing the children in one direction and being moved, herself, in another.

      …Education still points to the glorious dawn, official at any rate, but has been brought to see, in a down-to-earth manner, that what we really want is technicians and civil servants and soldiers and airmen and that only she can supply them. She still calls what she is doing “education” because it is proper, a dignified word — but she should call it “training”, as with dogs. In the Wellsian concept, the phrase she had at the back of her mind was “Natural Philosophy”; but the overtones were too vast, too remote, too useless on the national scale, emphatically on the side of “knowing” rather than “doing”.

      Now I suppose that I had better admit that all this is about “Science” in quotation marks, and I do so with fear and trembling. For to attack “Science” is to be labelled reactionary; and to applaud it, the way to an easy popularity. “Science” has become a band-waggon on which have jumped parsons and television stars, novelists and politicians, every public man, in fact, who wants an easy hand-up, all men who are either incapable of thought or too selfishly careerist to use it; so that the man in the street is persuaded by persistent half-truths that “Science” is the most important thing in the world, and Education has been half-persuaded, too. But it cannot be said often enough or loudly enough that “Science” is not the most important thing. Philosophy is more important than “Science”; so is history; so is courtesy, come to that, so is aesthetic perception. I say nothing of religion, since it is outside the scope of this article; but on the national scale, we have come to pursue naked, inglorious power when we thought we were going to pursue Natural Philosophy.

      The result of this on the emotional climate is perceptible already. Mind, I have no statistical evidence to present; unless convictions that have grown out of experience may be called unconsciously statistical. I recognize that everything I say may be nothing more than an approximation to the truth, because the truth itself is so qualified on every side, so slippery that you have to grab at it as, and how, you can. But we are on the crest of the wave and can see a little way forward.

      It is possible when writing a boy’s report to admit that he is not perfect. This has always been possible; but today there is a subtle change and the emphasis is different. You can remark on his carelessness; you can note regretfully his tendency to bully anyone smaller. You can even suggest that the occasions when he removed odd coins from coats hung in the changing room are pointers to a deep unhappiness. Was he perhaps neglected round about the change of puberty? Does he not need a different father-figure; should he not therefore, try a change of psychiatrist? You can say all this; because we all live in the faith that there is some machine, some expertise that will make an artificial silk purse out of a sow’s ear. But there is one thing you must not say because it will be taken as an irremediable insult to the boy and to his parents. You must not say he is unintelligent. Say that and the parents will be after you like a guided missile. They know that intelligence cannot be bought or created. They know, too, it is the way to the good life, the shaming thing, that we pursue without admitting it, the naked power, the prestige, the two cars and the air travel. Education, pointing still, is nevertheless moving their way; to the world where it is better to be envied than ignored, better to be well-paid than happy, better be successful than good—better to be vile, than vile-esteemed.

      I must be careful. But it seems to me that an obvious truth is being neglected. Our humanity, our capacity for living together in a full and fruitful life, does not reside in knowing things for the sake of knowing them or even in the power to exploit our surroundings. At best these are hobbies and toys — adult toys, and I for one would not be without them. Our humanity rests in the capacity to make value judgments, unscientific assessments, the power to decide that this is right, that wrong, this ugly, that beautiful, this just, that unjust. Yet these are precisely the questions which “Science” is not qualified to answer with its measurement and analysis. They can be answered only by the methods of philosophy and the arts. We are confusing the immense power which the scientific method gives us with the all-important power to make the value judgments which are the purpose of human education.

      The pendulum has swung too far. There was a time in education — and I can just remember it — when science fought for its life, bravely and devotedly. Those were the days when any fool who had had Homer beaten into his arse was thought better educated than a bright and inquiring Natural Philosopher. But now the educational world is full of spectral shapes, bowing acknowledgments to religious instruction and literature but keeping an eye on the laboratory where is respect, jam tomorrow, power. The arts are becoming the poor relations. For the arts cannot cure a disease or increase production or ensure defence. They can only cure or ameliorate sicknesses so deeply seated that we begin to think of them in our new wealth as built-in: boredom and satiety, selfishness and fear. The vague picture of the future which our political parties deduce from us as a desirable thing is limitless prosperity, health to enable us to live out a dozen television charters, more of everything; and dutifully they shape our education so that the picture can be painted in.

      The side effect is to enlarge the importance of measurement and diminish the capacity to make value judgments. This is not deliberate and will be denied anyway. But when the centre of gravity is shifted away from the social virtues and the general refining and developing of human capacities, because that is not what we genuinely aspire to, no amount of lip-service and face-saving can alter the fact that the change is taking place. Where our treasure is, there are our hearts also.

Unlocking a Key to Maize’s Amazing Success

[These excerpts are from an article by Elizabeth Pennisi in the July 21, 2017, issue of Science.]

      Munching fresh corn on the cob is a summer tradition for many Americans, but they’re far from alone in their love of maize. This staple grows on every continent save Antarctica and provides food and biofuels for millions of people. Now, researchers studying ancient and modern maize have found a clue to its popularity over the millennia: maize’s easily adjustable flowering time, which enabled ancient peoples to get the plant to thrive in diverse climates, according to studies presented this month at the meeting of the Society for Molecular Biology and Evolution here. The studies found hints of the genomic shifts behind such rapid change and began to clarify when maize flowering came under farmer control….

      Ancient farmers in Mexico began cultivating maize's wild ancestor, teosinte, about 9000 years ago, but researchers have long wondered why this plant initially became so popular given that teosinte lacks some of maize’s most desirable qualities, such as an easily harvestable large cob of kernels.

      Wondering whether an adaptable flowering time explained some of the appeal, Tenaillon about 20 years ago began growing two standard maize strains and selecting only the earliest and latest bloomers for re-planting the following year. The tiny flowers enshrouded by a leaf produce the silk—long, pollen-catching strands—whereas male flowers make up the corn “tassel.” After just 13 generations, early- and late-flowering individuals in each strain had developed a 3-week difference in timing, she reported at the meeting. Both types of plants grew at about the same rate and reached about the same height, indicating the early bloomers simply followed a sped-up program for when to flower. Thus ancient farmers may have been able to more quickly breed crops, an advantage in colder places with shorter growing seasons.

How to Govern Geoengineering?

[These excerpts are from an editorial by Janos Pasztor, Cynthia Scharf and Kai-Uwe Schmidt in the July 21, 2017, issue of Science.]

      The Paris Agreement aims to limit the global temperature rise to 1.5° to 2°C above preindustrial temperature, but achieving this goal requires much higher levels of mitigation than currently planned. This challenge has focused greater attention on climate geoengineering approaches, which intentionally alter Earth’s climate system, as part of an overall response starting with radical mitigation. Yet it remains unclear how to govern research on, and potential deployment of, geoengineering technologies.

      There are two main types of geoengineering: carbon dioxide removal (CDR) from the atmosphere and solar radiation management (SRM) to cool the planet. Geoengineering does not obviate the need for radical reductions in greenhouse gas (GHG) emissions to zero, combined with adaptation to inevitable climate impacts. However, some scientists say that geoengineering could delay or reduce the overshoot. In so doing, we may expose the world to other serious risks, known and unknown.

      Since 2009, the U.K. Royal Society, the European Union, and the U.S. National Academy of Sciences have recognized the need for governance and for a strategic approach to climate geoengineering policies. However, national governments and intergovernmental actors have thus far largely ignored their recommendations….

      The world is heading to an increasingly risky future and is unprepared to address the institutional and governance challenges posed by these technologies. Geoengineering has planet-wide consequences and must therefore be discussed by national governments within intergovernmental institutions, including the United Nations. The research community has been addressing many of these issues, but the global policy community and the public largely have not. It is time to do so.

Trump’s Science Shop is Small and Waiting for Leadership

[These excerpts are from an article by Jeffrey Mervis in the July 14, 2017, issue of Science.]

      The 1976 law that created the White House Office of Science and Technology Policy (OSTP) lets presidents tailor the office to fit their priorities. Under former President Barack Obama, OSTP grew to a record size and played a role in all the administration's numerous science and technology initiatives. In contrast, President Donald Trump has all but ignored OSTP during his first 6 months in office, keeping it small and excluding it from even a cursory role in formulating science-related policies and spending plans.

      OSTP is not alone across the government in awaiting a new crop of key managers. But such leadership voids can be paralyzing for a small shop. Trump has yet to nominate an OSTP director, who traditionally also serves as the president's science adviser. Nor has he announced his choices for as many as four other senior OSTP officials who would need to be confirmed by the Senate. An administration official, however, told Science that OSTP has reshuffled its work flow—and that there’s a short list for the director’s position….

      Although much about OSTP’s future remains uncertain, Trump has renewed the charter of the National Science and Technology Council, a multiagency group that carries out much of the day-to-day work of advancing the president's science initiatives. He has also retained three offices that oversee the government’s multibillion-dollar efforts in nanotechnology, information technology research, and climate change. Still pending is the status of the President’s Council of Advisors on Science and Technology, a body of eminent scientists and high-tech industry leaders that went out of business at the end of the Obama administration.

Can We Beat Influenza?

[These excerpts are from an editorial by Wenqing Zhang and Robert G. Webster in the July 14, 2017, issue of Science.]

      …Currently, only two influenza A viruses and two influenza B clades are circulating and causing disease in humans, but 16 additional subtypes of influenza A viruses are circulating in nature (14 in birds and two in bats). Of the latter, six occasionally infect humans, providing an ever-looming pandemic threat. However, there is still a lack of fundamental knowledge to predict if and when a particular viral subtype will acquire pandemic ability. We therefore still fail to predict influenza pandemics, and this must change.

      …Problems with sharing influenza samples climaxed in 2007 but then began to be addressed in 2011 following the adoption of the WHO’s Pandemic Influenza Preparedness Framework, which places virus sharing and access to benefits on an equal footing. Fair and equitable sharing of benefits arising from the use of genetic resources under the Nagoya protocol should promote further pathogen sharing in a broader context….

      …Vaccine production has not changed much in decades; it remains a lengthy egg-based process. Furthermore, vaccine efficacy, especially in the elderly, is unsatisfactory and requires annual updates. Universal vaccines that protect against all influenza subtypes are being researched and hold promise for future infection control. Antiviral agents used to treat influenza are limited to antineuraminidase drugs, but polymerase-targeting drugs are in development, suggesting the possibility of future multidrug therapies.

      The approaching 100-year anniversary of the 1918 Spanish influenza pandemic, considered one of the greatest public health crises in history, reminds us that influenza has the potential to cause catastrophic disease at any time….As knowledge is gained and technology improves, so will our pandemic predictive ability and response capacity, but everything depends on rapid global sharing of both viruses and genomic information.

Copernicus

[This excerpt is from Copernicus by William Golding.]

      The book — Concerning the Revolutions of the Celestial Bodies — contains two elements which it is necessary to understand. One, of course, was a detailed mathematical examination of his own system and an attempt to square it with the results of observation. This was beyond the comprehension of all but a few dozen mathematicians of his time. The system did not make prediction easier; it simply happened to be nearer the truth, and if you were attuned to the aesthetic of mathematics, you might feel that. But in the other part of his book, Copernicus made a profound tactical mistake. He tried to ignore mathematics altogether and to find some ground on which uninstructed common sense might operate. In a word, he tried to meet the arguments that might be brought against him by an appeal to common sense. In so doing, he only made it easier for the average non-astronomical, non-mathematical reader to counter him at every point. For in the days of naked-eye observation, the Ptolemaic universe was the common-sense one.

      Here is an example: “It is the vault of heaven that contains all things, and why should motion not be attributed rather to the contained than the container, to the located than the locator? The latter view was certainly that of Heraclides and Ecphantus the Pythagorean, and Hicetas of Syracuse according to Cicero.”

      This gave infinite scope to those who would play at quotations. In short, his attempt to appeal to common sense was an abject failure, because no man who could not feel the aesthetic of mathematics in his bones could possibly avoid the conclusion that Copernicus was a theorist with straw in his hair.

      So when the book came out, in 1543, it gathered to itself a series of uninformed criticisms which are difficult to parallel because, in a way, the event was unparalleled.

      Thus a political thinker of the sixteenth century, Jean Bodin, said: “No one in his senses, or imbued with the slightest knowledge of physics, will ever think that the earth, heavy and unwieldy from its own weight and mass, staggers up and down around its own center and that of the sun; for at the slightest jar of the earth, we would see cities and fortresses, towns and mountains, thrown down.”

      Now this is very sensible, granted the kind of universe Bodin, and indeed Copernicus, inhabited. All Bodin needed was a lifetime’s study of mathematics.

      Martin Luther, who was used to laying down the law, dealt with Copernicus summarily in terms of Holy Writ: “This fool wishes to reverse the entire science of astronomy; but sacred Scripture tells us that Joshua commanded the sun to stand still and not the earth.”

      That is a clincher. The mind reels to think of the laborious history of other discoveries and other reassessments one would have to unravel for Martin Luther before he could have been brought to doubt the evidence of Holy Writ and his own senses.

      Copernicus got nowhere once he stepped outside the world of mathematical demonstration. Only six years after Copernicus died, Melanchthon summed up the average man’s reaction: “Now it is a want of honesty and decency to assert such notions publicly, and the example is pernicious. It is part of a good mind to accept the truth as revealed by God and to acquiesce in it.”

      The truth was that man's universe was about to turn inside out and explode. No man could introduce such an idea and be believed. Copernicus, in his appeal to common sense — it was really an appeal to intuition — had attempted the impossible.

      The history of his idea is revealing. Again it was a matter of teamwork. His system never worked exactly. Tycho Brahe who came after him was an anti-Copernican, but he knew the value of accurate observations and spent his life compiling them. Then he produced a cosmology of his own, with the earth still at the center. Kepler, his successor, returned to the ideas of Copernicus, and at last, with Brahe's accurate observations, made them work. He got rid of the last hangover from the ancient system, the idea that movement in a circle is perfect and therefore the one movement admissible in the heavens. He went from the circle to the ellipse, and in his work we find a model of the solar system which, with minor alterations has been accepted ever since.

      What was Copernicus, then? In him, the ancient and the modern meet. He inherited the work of three thousand years, and he pointed the way toward Newton and Einstein. He was, as it were, a man who lived beside a great river and built the first bridge across it.

Men’s Attire

[This excerpt is from Crosses by William Golding.]

      Who wants to see the slightly adjusted outline of an old man whose figure is beginning to ‘go’ like ice cream left in the sun? Society should be more charitable. Once men have started their decline and begun to gather their parcels of fat, they should be allowed to hide themselves under what clothing they choose. Does anyone really believe that decrepitude can be charmed out of sight or obliterated by a tailor’s sleight of hand?

      It was not always so. Once, a degree of fantasy was allowable, and it gave old men the chance to preserve a proper dignity for themselves. Male clothes were as varied then as the ones women wear now. You could hide your baldness with velvet or silk or feathers or laurels. You could wear jewels in your beard. You could have shoulders of ermine, and below them a glistening cataract which gave massiveness to your body and hid your reedy legs. You could clothe yourself in the elegance and severity of a toga, so that as you walked, your feet were moving in the folds and flounces of a full skirt. For the young, there was a brave show of leg and thigh, rather than a dun-colored drainpipe. There was the codpiece to proclaim and if possible exaggerate your virility. The slimness of your waist was accentuated by a belt, where hung a rapier which was little but a variation on the codpiece. You could expose your chest if you had hair on it. You could wear your hair in long curls, and finish off with a jaunty velvet cap and feather perched over one ear.

      The historian will probably object that you stank. But so did everyone else. And you could always wear perfume, an aid which has been defined as ‘one stink covering another’. Young men must have thought a good deal of themselves in those days; and an old one moved among them with all the dignity of a battle-ship among destroyers. But today we all wear the same uniform, the same livery of servitude to convention. Youth has not its grace, nor age its privilege. An old man exhibits his infirmities in the same clothes that do nothing but hide the graces of his grandson. We have lost both ways.

      …Until man is free of this drab convention and can dress as he likes, and habitually does so dress from one end of life to the other, we shall continue to button and zip and strap ourselves into a structure not much more becoming than a concrete wall, and about as comfortable.

Leonidas’ Gift

[This excerpt is from The Hot Gates by William Golding.]

      To most of the Persian army, this must have meant nothing. There had been, after all, nothing but a small column of dust hanging under the cliffs in one corner of the plain. If you were a Persian, you could not know that this example would lead, next year, to the defeat and destruction of your whole army at the battle of Plataea, where the cities of Greece fought side by side. Neither you nor Leonidas nor anyone else could foresee that here thirty years’ time was won for shining Athens and all Greece and all humanity.

      The column of dust diminished. The King of Kings gave an order. The huge army shrugged itself upright and began the march forward into the Hot Gates, where the last of the Spartans were still fighting with nails and feet and teeth.

      I came to myself in a great stillness, to find I was standing by the little mound. This is the mound of Leonidas, with its dust and rank grass, its flowers and lizards, its stones, scruffy laurels and hot gusts of wind. I knew now that something real happened here. It is not just that the human spirit reacts directly and beyond all argument to a story of sacrifice and courage, as a wine glass must vibrate to the sound of the violin. It is also because, way back and at the hundredth remove, that company stood in the right line of history. A little of Leonidas lies in the fact that I can go where I like and write what I like. He contributed to set us free.

Successful Evolution

[This excerpt is from chapter 5 of Full House by Stephen Jay Gould.]

      Many classic “trends” of evolution are stories of such unsuccessful groups—trees pruned to single twigs, then falsely viewed as culminations rather than lingering vestiges of former robustness. We cannot grasp the irony of life's little joke until we recognize the primacy of variation within complete systems, and the derivative nature of abstractions or exemplars chosen to represent this varied totality. The full evolutionary bush of horses is a complete system; the steamrollered “line” from Hyracotherium to Equus is one labyrinthine path among many, with no special claim beyond the fortuity of a barely continued existence.

      These conceptual errors have plagued the interpretation of horses, and the more general evolutionary messages supposedly conveyed by them, from the very beginning. Huxley himself, in the printed version of his capitulation to Marsh’s interpretation of horses as an American tale, used the supposed ladder of horses as a formal model for all vertebrates. For example, he denigrated the teleosts (modern bony fishes) as dead ends without issue: “They appear to me to be off the main line of evolution—to represent, as it were, side tracks starting from certain points of that line.” But teleosts are the most successful of all vertebrate groups. Nearly 50 percent of all vertebrate species are teleosts. They stock the world’s oceans, lakes, and rivers, and include nearly one hundred times as many species as primates (and about five times more than all mammals combined). How can we call them “off the main line” just because we can trace our own pathway back to common ancestry with theirs, more than 300 million years ago?

Fusion

[This excerpt is from chapter 24 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      It was not until 1929 that another idea was put forth: the notion that, given the prodigious temperatures and pressures of a star's interior, atoms of light elements might fuse together to form heavier atoms—that atoms of hydrogen, as a start, could fuse to form helium; that the source of cosmic energy, in a word, was thermonuclear. Huge amounts of energy had to be pumped into light nuclei to make them fuse together, but once fusion was achieved, even more energy would be given out. This would in turn heat up and fuse other light nuclei, producing yet more energy, and this would keep the thermonuclear reaction going. The inside of the sun reaches enormous temperatures, something on the order of twenty million degrees. I found it difficult to imagine a temperature like this—a stove at this temperature (George Gamow wrote in The Birth and Death of the Sun) would destroy everything around it for hundreds of miles.

      At temperatures and pressures like this, atomic nuclei — naked, stripped of their electrons — would be rushing around at tremendous speed (the average energy of their thermal motion would be similar to that of alpha particles) and continually crashing, uncushioned, into one another, fusing to form the nuclei of heavier elements.

      We must imagine the interior of the Sun [Gamow wrote] as some gigantic kind of natural alchemical laboratory where the transformation of various elements into one another takes place almost as easily as do the ordinary chemical reactions in our terrestrial laboratories.

      Converting hydrogen to helium produced a vast amount of heat and light, for the mass of the helium atom was slightly less than that of four hydrogen atoms—and this small difference in mass was totally transformed into energy, in accordance with Einstein’s famous E = mc². To produce the energy generated in the sun, hundreds of millions of tons of hydrogen had to be converted to helium each second, but the sun is composed predominantly of hydrogen, and so vast is its mass that only a small fraction of it has been consumed in the earth’s lifetime. If the rate of fusion were to decline, then the sun would contract and heat up, restoring the rate of fusion; if the rate of fusion were to become too great, the sun would expand and cool down, slowing it. Thus, as Gamow put it, the sun represented “the most ingenious, and perhaps the only possible, type of ‘nuclear machine,’” a self-regulating furnace in which the explosive force of nuclear fusion was perfectly balanced by the force of gravitation. The fusion of hydrogen to helium not only provided a vast amount of energy, but also created a new element in the world. And helium atoms, given enough heat, could be fused to make heavier elements, and these elements, in turn, to make heavier elements still.

      Thus, by a thrilling convergence, two ancient problems were solved at the same time: the shining of stars, and the creation of the elements. Bohr had imagined an aufbau, a building up of all the elements starting from hydrogen, as a purely theoretical construct—but such an aufbau was realized in the stars. Hydrogen, element 1, was not only the fuel of the universe, it was the ultimate building block of the universe, the primordial atom, as Prout had thought back in 1815. This seemed very elegant, very satisfying, that all one needed to start with was the first, the simplest of atoms.
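
The mass-defect arithmetic Sacks summarizes is easy to check. Here is a minimal sketch (my own illustration, using standard atomic masses; the specific numbers are not in the book):

# A back-of-the-envelope check of the mass-defect arithmetic described above.
# Atomic masses in unified atomic mass units (u); standard reference values.
m_hydrogen = 1.007825   # mass of a 1H atom, in u
m_helium4 = 4.002602    # mass of a 4He atom, in u

mass_defect = 4 * m_hydrogen - m_helium4     # about 0.0287 u
u_to_mev = 931.494                           # energy equivalent of 1 u, from E = mc^2
energy_mev = mass_defect * u_to_mev          # about 26.7 MeV per helium nucleus formed
fraction = mass_defect / (4 * m_hydrogen)    # about 0.7% of the mass becomes energy

print(f"{energy_mev:.1f} MeV released; {fraction:.2%} of the mass is converted")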

Niels Bohr

[This excerpt is from chapter 24 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      It was Niels Bohr, also working in Rutherford’s lab in 1913, who bridged the impossible, by bringing together Rutherford’s atomic model with Planck’s quantum theory. The notion that energy was absorbed or emitted not continuously but in discrete packets, “quanta,” had lain silently, like a time bomb, since Planck had suggested it in 1900. Einstein had made use of the idea in relation to photoelectric effects, but otherwise quantum theory and its revolutionary potential had been strangely neglected, until Bohr seized on it to bypass the impossibilities of the Rutherford atom. The classical view, the solar-system model, would allow electrons an infinity of orbits, all unstable, all crashing into the nucleus. Bohr postulated, by contrast, an atom that had a limited number of discrete orbits, each with a specific energy level or quantal state. The least energetic of these, the closest to the nucleus, Bohr called the “ground state”—an electron could stay here, orbiting the nucleus, without emitting or losing any energy, forever. This was a postulate of startling, outrageous audacity, implying as it did that the classical theory of electromagnetism might be inapplicable in the minute realm of the atom.

      There was, at the time, no evidence for this; it was a pure leap of inspiration, imagination—not unlike the leaps he now posited for the electrons themselves, as they jumped, without warning or intermediates, from one energy level to another. For, in addition to the electron's ground state, Bohr postulated, there were higher-energy orbits, higher-energy “stationary states,” to which electrons might be briefly translocated. Thus if energy of the right frequency was absorbed by an atom, an electron could move from its ground state into a higher-energy orbit, though sooner or later it would drop back to its original ground state, emitting energy of exactly the same frequency as it had absorbed—this is what happened in fluorescence or phosphorescence, and it explained the identity of spectral emission and absorption lines, which had been a mystery for more than fifty years.

      Atoms, in Bohr’s vision, could not absorb or emit energy except by these quantum jumps—and the discrete lines of their spectra were simply the expression of the transitions between their stationary states. The increments between energy levels decreased with distance from the nucleus, and these intervals, Bohr calculated, corresponded exactly to the lines in the spectrum of hydrogen (and to Balmer’s formula for these). This coincidence of theory and reality was Bohr’s first great triumph. Einstein felt that Bohr’s work was “an enormous achievement,” and, looking back thirty-five years later, he wrote, “[it] appears to me as a miracle even today. . . . This is the highest form of musicality in the sphere of thought.” The spectrum of hydrogen—spectra in general—had been as beautiful and meaningless as the markings on butterflies’ wings, Bohr remarked; but now one could see that they reflected the energy states within the atom, the quantal orbits in which the electrons spun and sang. “The language of spectra,” wrote the great spectroscopist Arnold Sommerfeld, “has been revealed as an atomic music of the spheres.”
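
The “coincidence of theory and reality” mentioned here is easy to reproduce. A minimal sketch (my own illustration; it assumes the standard Rydberg/Balmer formula, which the excerpt alludes to but does not quote):

# Energy differences between Bohr levels reproduce the visible (Balmer) lines of hydrogen.
RYDBERG = 1.0973731568e7   # Rydberg constant, per metre

def balmer_wavelength_nm(n):
    """Wavelength of the transition n -> 2 in hydrogen, in nanometres."""
    inv_wavelength = RYDBERG * (1 / 2**2 - 1 / n**2)
    return 1e9 / inv_wavelength

for n in (3, 4, 5, 6):
    print(n, round(balmer_wavelength_nm(n), 1))   # ~656.1, 486.0, 433.9, 410.1 nm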

      Could quantum theory be extended to more complex, multi-electron atoms? Could it explain their chemical properties, explain the periodic table? This became Bohr’s focus as scientific life resumed after the First World War.

      As one moved up in atomic number, as the nuclear charge or number of protons in the nucleus increased, an equal number of electrons had to be added to preserve the neutrality of the atom. But the addition of these electrons to an atom, Bohr envisaged, was hierarchical and orderly. While he had concerned himself at first with the potential orbits of hydrogen’s lone electron, he now extended his notion to a hierarchy of orbits or shells for all the elements. These shells, he proposed, had definite and discrete energy levels of their own, so that if electrons were added one by one, they would first occupy the lowest-energy orbit available, and when that was full, the next-lowest orbit, then the next, and so on. Bohr’s shells corresponded to Mendeleev’s periods, so that the first, innermost shell, like Mendeleev's first period, accommodated two elements, and two only. Once this shell was completed, with its two electrons, a second shell began, and this, like Mendeleev's second period, could accommodate eight electrons and no more. Similarly for the third period or shell. By such a building-up, or aufbau, Bohr felt, all the elements could be systematically constructed, and would naturally fall into their proper places in the periodic table.

      Thus the position of each element in the periodic table represented the number of electrons in its atoms, and each element’s reactivity and bonding could now be seen in electronic terms, in accordance with the filling of the outermost shell of electrons, the so-called valency electrons. The inert gases each had completed outer valency shells with a full complement of eight electrons, and this made them virtually unreactive. The alkali metals, in Group I, had only one electron in their outermost shell, and were intensely avid to get rid of this, to attain the stability of an inert-gas configuration; the halogens in Group VII, conversely, with seven electrons in their valency shell, were avid to acquire an extra electron and also achieve an inert-gas configuration. Thus when sodium came into contact with chlorine, there would be an immediate (indeed explosive) union, each sodium atom donating its extra electron, and each chlorine atom happily receiving it, both becoming ionized in the process.

      The placement of the transition elements and the rare-earth elements in the periodic table had always given rise to special problems. Bohr now suggested an elegant and ingenious solution to this: the transition elements, he proposed, contained an additional shell of ten electrons each; the rare-earth elements an additional shell of fourteen. These inner shells, deeply buried in the case of the rare-earth elements, did not affect chemical character in nearly so extreme a way as the outer shells; hence the relative similarity of all the transition elements and the extreme similarity of all the rare-earth elements.

      Bohr’s electronic periodic table, based on atomic structure, was essentially the same as Mendeleev's empirical one based on chemical reactivity (and all but identical with the block tables devised in pre-electronic times, such as Thomsen’s pyramidal table and Werner’s ultralong table of 1905). Whether one inferred the periodic table from the chemical properties of the elements or from the electronic shells of their atoms, one arrived at exactly the same point. Moseley and Bohr had made it absolutely clear that the periodic table was based on a fundamental numerical series that determined the number of elements in each period: two in the first period, eight each in the second and third, eighteen each in the fourth and fifth; thirty-two in the sixth and perhaps also the seventh. I repeated this series—2, 8, 8, 18, 18, 32—over and over to myself.

      At this point I started to revisit the Science Museum and spend hours once again gazing at the giant periodic table there, this time concentrating on the atomic numbers inscribed in each cubicle in red. I would look at vanadium, for example—there was a shining nugget in its pigeonhole—and think of it as element 23, a 23 consisting of 5 + 18: five electrons in an outer shell around an argon “core” of eighteen. Five electrons—hence its maximum valency of 5; but three of these formed an incomplete inner shell, and it was such an incomplete shell, I had now learned, that gave rise to vanadium's characteristic colors and magnetic susceptibilities. This sense of the quantitative did not replace the concrete, the phenomenal sense of vanadium but heightened it, because I saw it now as a revelation, in atomic terms, of why vanadium had the properties it did….
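
A small sketch of the numbers Sacks recites (my own illustration; the compact formula below simply generates the period lengths and is not Bohr's derivation):

# Period lengths of the periodic table: 2, 8, 8, 18, 18, 32, ...
from math import ceil

def period_length(n):
    return 2 * ceil((n + 1) / 2) ** 2

print([period_length(n) for n in range(1, 7)])   # [2, 8, 8, 18, 18, 32]

# Vanadium, element 23, seen as an argon "core" plus outer electrons:
atomic_number, argon_core = 23, 18
print(atomic_number - argon_core)                # 5, matching its maximum valency of 5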

Atomic Numbers

[This excerpt is from chapter 24 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      …One could round off a weight that was slightly less or slightly more than a whole number (as Dalton did), but what could one do with chlorine, for example, with its atomic weight of 35.5? This made Prout’s hypothesis difficult to maintain, and further difficulties emerged when Mendeleev made the periodic table. It was clear, for example, that tellurium came, in chemical terms, before iodine, but its atomic weight, instead of being less, was greater. These were grave difficulties, and yet throughout the nineteenth century Prout’s hypothesis never really died—it was so beautiful, so simple, many chemists and physicists felt, that it must contain an essential truth.

      Was there perhaps some atomic property that was more integral, more fundamental than atomic weight? This was not a question that could be addressed until one had a way of “sounding” the atom, sounding, in particular, its central portion, the nucleus. In 1913, a century after Prout, Harry Moseley, a brilliant young physicist working with Rutherford, set about exploring atoms with the just-developed technique of X-ray spectroscopy. His experimental setup was charming and boyish: using a little train, each car carrying a different element, moving inside a yard-long vacuum tube, Moseley bombarded each element with cathode rays, causing them to emit characteristic X-rays. When he came to plot the square roots of the frequencies against the atomic number of the elements, he got a straight line; and plotting it another way, he could show that the increase in frequency showed sharp, discrete steps or jumps as he passed from one element to the next. This had to reflect a fundamental atomic property, Moseley believed, and that property could only be nuclear charge.

      Moseley’s discovery allowed him (in Soddy's words) to “call the roll” of the elements. No gaps could be allowed in the sequence, only even, regular steps. If there was a gap, it meant that an element was missing. One now knew for certain the order of the elements, and that there were ninety-two elements and ninety-two only, from hydrogen to uranium. And it was now clear that there were seven missing elements, and seven only, still to be found. The “anomalies” that went with atomic weights were resolved: tellurium might have a slightly higher atomic weight than iodine, but it was element number 52, and iodine was 53. It was atomic number, not atomic weight, that was crucial.

      The brilliance and swiftness of Moseley’s work, which was all done in a few months of 1913-14, produced mixed reactions among chemists. Who was this young whippersnapper, some older chemists felt, who presumed to complete the periodic table, to foreclose the possibility of discovering any new elements other than the ones he had designated? What did he know about chemistry—or the long, arduous processes of distillation, filtration, crystallization that might be necessary to concentrate a new element or analyze a new compound? But Urbain, one of the greatest analytic chemists of all—a man who had done fifteen thousand fractional crystallizations to isolate lutecium—at once appreciated the magnitude of the achievement, and saw that far from disturbing the autonomy of chemistry, Moseley had in fact confirmed the periodic table and reestablished its centrality. “The law of Moseley . . . confirmed in a few days the conclusions of my twenty years of patient work.”

      Atomic numbers had been used before to denote the ordinal sequence of elements ranked by their atomic weight, but Moseley gave atomic numbers real meaning. The atomic number indicated the nuclear charge, indicated the element’s identity, its chemical identity, in an absolute and certain way. There were, for example, several forms of lead—isotopes—with different atomic weights, but all of these had the same atomic number, 82. Lead was essentially, quintessentially, number 82, and it could not change its atomic number without ceasing to be lead. Tungsten was necessarily, unavoidably, element 74. But how did its 74-ness endow it with its identity?
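
The straight line Moseley obtained follows from a simple relation. A minimal sketch (my own illustration, assuming the textbook K-alpha form of Moseley's law, which the excerpt describes but does not state):

# Moseley's law for K-alpha X-ray lines: nu = (3/4) * R * (Z - 1)**2,
# so sqrt(nu) is linear in the atomic number Z.
from math import sqrt

RYDBERG_FREQ = 3.29e15   # Rydberg frequency, Hz

def k_alpha_frequency(z):
    return 0.75 * RYDBERG_FREQ * (z - 1) ** 2

for z in (26, 29, 42, 74):                      # iron, copper, molybdenum, tungsten
    print(z, f"{sqrt(k_alpha_frequency(z)):.3e}")
# sqrt(frequency) rises by the same fixed amount for each unit step in z,
# which is why plotting it against atomic number gives a straight line.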

Rutherford’s Gold Foil

[This excerpt is from chapter 23 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      The alpha particles emitted by radioactive decay (they were later shown to be helium nuclei) were positively charged and relatively massive—thousands of times more massive than beta particles or electrons—and they traveled in undeviating straight lines, passing straight through matter, ignoring it, without any scattering or deflection (although they might lose some of their velocity in so doing). This, at least, appeared to be the case, though in 1906 Rutherford observed that there might be, very occasionally, small deflections. Others ignored this, but to Rutherford these observations were fraught with possible significance. Would not alpha particles be ideal projectiles, projectiles of atomic proportions, with which to bombard other atoms and sound out their structure? He asked his young assistant Hans Geiger and a student, Ernest Marsden, to set up a scintillation experiment using screens of thin metal foils, so that one could keep count of every alpha particle that bombarded these. Firing alpha particles at a piece of gold foil, they found that roughly one in eight thousand particles showed a massive deflection—of more than 90 degrees, and sometimes even 180 degrees. Rutherford was later to say, “It was quite the most incredible event that ever happened to me in my life. It was almost as incredible as if you fired a fifteen-inch shell at a piece of tissue paper and it came back and hit you.”

      Rutherford pondered these curious results for almost a year, and then, one day, as Geiger recorded, he “came into my room, obviously in the best of moods, and told me that now he knew what the atom looked like and what the strange scatterings signified.”

      Atoms, Rutherford had realized, could not be a homogeneous jelly of positivity stuck with electrons like raisins (as J. J. Thomson had suggested, in his “plum pudding” model of the atom), for then the alpha particles would always go through them. Given the great energy and charge of these alpha particles, one had to assume that they had been deflected, on occasion, by something even more positively charged than themselves. Yet this happened only once in eight thousand times. The other 7,999 particles might whiz through, undeflected, as if most of the gold atoms consisted of empty space; but the eight-thousandth was stopped, flung back in its tracks, like a tennis ball hitting a globe of solid tungsten. The mass of the gold atom, Rutherford inferred, had to be concentrated at the center, in a minute space, not easy to hit—as a nucleus of almost inconceivable density. The atom, he proposed, must consist overwhelmingly of empty space, with a dense, positively charged nucleus only a hundred-thousandth its diameter, and a relatively few, negatively charged electrons in orbit about this nucleus—a miniature solar system, in effect.

Radiochemistry

[This excerpt is from chapter 22 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      The half-life of radium was much longer than that of its emanation, radon—about 1,600 years. But this was still very small compared to the age of the earth—why, then, if it steadily decayed, had all the earth’s radium not disappeared long ago? The answer, Rutherford inferred, and was soon able to demonstrate, was that radium itself was produced by elements with a much longer half-life, a whole train of substances that he could trace back to the parent element, uranium. Uranium in turn had a half-life of four and a half billion years, roughly the age of the earth itself. Other cascades of radioactive elements were derived from thorium, which had an even longer half-life than uranium. Thus the earth was still living, in terms of atomic energy, on the uranium and thorium that had been present when the earth formed.
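
The puzzle Sacks poses here comes straight from the exponential decay law. A minimal sketch (my own illustration; only the half-lives are taken from the excerpt):

# Standard half-life law: N(t) = N0 * (1/2) ** (t / T)
def fraction_remaining(age_years, half_life_years):
    return 0.5 ** (age_years / half_life_years)

age_of_earth = 4.5e9  # years
print(fraction_remaining(age_of_earth, 1600))    # radium (half-life ~1,600 years): essentially nothing survives
print(fraction_remaining(age_of_earth, 4.5e9))   # uranium: about half the original amount remains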

      These discoveries had a crucial impact on a long-standing debate about the age of the earth. The great physicist Kelvin, writing in the early 1860s, soon after the publication of The Origin of Species, had argued that, based on its rate of cooling, and assuming no source of heat other than the sun, the earth could be no more than twenty million years old, and that in another five million years it would become too cold to support life. This calculation was not only dismaying in itself, but was impossible to reconcile with the fossil record, which indicated that life had been present for hundreds of millions of years—and yet there seemed no way of rebutting it. Darwin was greatly disturbed by this.

      It was only with the discovery of radioactivity that the conundrum was solved. The young Rutherford, it was said, nervously facing the famous Lord Kelvin, now eighty years old, suggested that Kelvin’s calculation had been based on a false assumption. There was another source of warmth besides the sun, Rutherford said, and a very important one for the earth. Radioactive elements (chiefly uranium and thorium, and their breakdown products, but also a radioactive isotope of potassium) had served to keep the earth warm for billions of years and to protect it from the premature heat-death that Kelvin had predicted. Rutherford held up a piece of pitchblende, the age of which he had estimated from the amount of helium it contained. This piece of the earth, he said, was at least 500 million years old.

      Rutherford and Soddy were ultimately able to delineate three separate radioactive cascades, each containing a dozen or so breakdown products emanating from the disintegration of the original parent elements. Could all of these breakdown products be different elements? There was no room in the periodic table for three dozen elements between bismuth and thorium—room for half a dozen, perhaps, but not much more. Only gradually did it become clear that many of the elements were just versions of one another; the emanations of radium and thorium and actinium, for example, though they had widely differing half-lives, were chemically identical, all the same element, though with slightly different atomic weights. (Soddy later named these isotopes.) And the end points of each series were similar—radium G, actinium E, and thorium E, so-called, were all isotopes of lead.

      Every substance in these cascades of radioactivity had its own unique radio signature, a half-life of fixed and invariable duration, as well as a characteristic radiation emission, and it was this which allowed Rutherford and Soddy to sort them all out, and in so doing to found the new science of radiochemistry.

      The idea of atomic disintegration, first raised and then retreated from by Marie Curie, could no longer be denied. It was evident that every radioactive substance disintegrated in the act of giving off energy and turned into another element, that transmutation lay at the heart of radioactivity.

      I loved chemistry in part because it was a science of transformations, of innumerable compounds based on a few dozen elements, themselves fixed and invariant and eternal. The feeling of the elements’ stability and invariance was crucial to me psychologically, for I felt them as fixed points, as anchors, in an unstable world. But now, with radioactivity, came transformations of the most incredible sort. What chemist would have conceived that out of uranium, a hard, tungsteny metal, there could come an alkaline earth metal like radium; an inert gas like radon; a tellurium-like element, polonium; radioactive forms of bismuth and thallium; and, finally, lead—exemplars of almost every group in the periodic table?

      No chemist would have conceived this (though an alchemist might), because the transformations lay beyond the sphere of chemistry. No chemical process, no chemical attack, could ever alter the identity of an element, and this applied to the radioactive elements too. Radium, chemically, behaved similarly to barium; its radioactivity was a different property altogether, wholly unrelated to its chemical or physical properties. Radioactivity was a marvelous (or terrible) addition to these, a wholly other property (and one that annoyed me at times, for I loved the tungstenlike density of metallic uranium, and the fluorescence and beauty of its minerals and salts, but I felt I could not handle them safely for long; similarly I was infuriated by the intense radioactivity of radon, which otherwise would have made an ideal heavy gas).

      Radioactivity did not alter the realities of chemistry, or the notion of elements; it did not shake the idea of their stability and identity. What it did do was to hint at two realms in the atom—a relatively superficial and accessible realm governing chemical reactivity and combination, and a deeper realm, inaccessible to all the usual chemical and physical agents and their relatively small energies, where any change produced a fundamental alteration of the element’s identity.


You can see radiation in a spinthariscope on YouTube!
This is what Sacks observed.
Cuttlefish

[This excerpt is from chapter 22 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      We all adopted particular zoological groups: Eric became enamored of sea cucumbers, holothurians; Jonathan of iridescent bristled worms, polychaetes; and I of squids and cuttlefish, octopuses, cephalopods—the most intelligent and, to my eyes, the most beautiful of invertebrates. One day we all went down to the seashore, to Hythe in Kent, where Jonathan's parents had taken a house for the summer, and went out for a day's fishing on a commercial trawler. The fishermen would usually throw back the cuttlefish that ended up in their nets (they were not popular eating in England). But I, fanatically, insisted that they keep them for me, and there must have been dozens of them on the deck by the time we came in. We took all the cuttlefish back to the house in pails and tubs, put them in large jars in the basement, and added a little alcohol to preserve them. Jonathan’s parents were away, so we did not hesitate. We would be able to take all the cuttlefish back to school, to Sid—we imagined his astonished smile as we brought them in—and there would be a cuttlefish apiece for everyone in the class to dissect, two or three apiece for the cephalopod enthusiasts. I myself would give a little talk about them at the Field Club, dilating on their intelligence, their large brains, their eyes with erect retinas, their rapidly changing colors.

      A few days later, the day Jonathan's parents were due to return, we heard dull thuds emanating from the basement, and going down to investigate, we encountered a grotesque scene: the cuttlefish, insufficiently preserved, had putrefied and fermented, and the gases produced had exploded the jars and blown great lumps of cuttlefish all over the walls and floor; there were even shreds of cuttlefish stuck to the ceiling. The intense smell of putrefaction was awful beyond imagination. We did our best to scrape off the walls and remove the exploded, impacted lumps of cuttlefish. We hosed down the basement, gagging, but the stench was not to be removed, and when we opened windows and doors to air out the basement, it extended outside the house as a sort of miasma for fifty yards in every direction.

      Eric, always ingenious, suggested we mask the smell, or replace it, by an even stronger, but pleasant smell—a coconut essence, we decided, would fill the bill. We pooled our resources and bought a large bottle of this, which we used to douche the basement, and then distributed liberally through the rest of the house and its grounds.

      Jonathan’s parents arrived an hour later and, advancing toward the house, hit an overwhelming scent of coconut. But as they drew nearer they hit a zone dominated by the stench of putrefied cuttlefish—the two smells, the two vapors, for some curious reason, had organized themselves in alternating zones about five or six feet wide. By the time they reached the scene of our accident, our crime, the basement, the smell was insupportable for more than a few seconds. The three of us were all in deep disgrace over the incident, I especially, since it had arisen from my greed in the first place (would not a single cuttlefish have done?) and my folly in not realizing how much alcohol they would need. Jonathan’s parents had to cut short their holiday and leave the house (the house itself, we heard, remained uninhabitable for months). But my love of cuttlefish remained unimpaired.

Spontaneous Radiation

[These excerpts are from chapter 21 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      What had been a mild puzzle with uranium had become a much more acute one with the isolation of radium, a million times more radioactive. While uranium could darken a photographic plate (though this took several days) or discharge an ultrasensitive gold-leaf electroscope, radium did this in a fraction of a second; it glowed spontaneously with the fury of its own activity; and, as became increasingly evident in the new century, it could penetrate opaque materials, ozonize the air, tint glass, induce fluorescence, and burn and destroy the living tissues of the body, in a way that could be either therapeutic or destructive.

      With radiation of every other sort, going all the way from X-rays to radio waves, energy had to be provided by an external source; but radioactive elements, apparently, had their own power and could emit energy without decrement for months or years, and neither heat nor cold nor pressure nor magnetic fields nor irradiation nor chemical reagents made the least difference to this.

      Where did this immense amount of energy come from? The firmest principles in the physical sciences were the principles of conservation —that matter and energy could neither be created nor destroyed. There had never been any serious suggestion that these principles could ever be violated, and yet radium at first appeared to do exactly that—to be a perpetuum mobile, a free lunch, a steady and inexhaustible source of energy.

      One escape from this quandary was to suppose that the energy of radioactive substances had an exterior source; this indeed was what Becquerel first suggested, on the analogy of phosphorescence—that radioactive substances absorbed energy from something, from somewhere, and then reemitted it, slowly, in their own way. (He coined the term hyperphosphorescence for this.)

      Notions of an outside source—perhaps an X-ray-like radiation bathing the earth—had been entertained briefly by the Curies, and they had sent a sample of a radium concentrate to Hans Geitel and Julius Elster in Germany. Elster and Geitel were close friends (they were known as “the Castor and Pollux of physics”), and they were brilliant investigators, who had already shown radioactivity to be unaffected by vacua, cathode rays, or sunlight. When they took the sample down a thousand-foot mine in the Harz Mountains—a place where no X-rays could reach—they found its radioactivity undiminished…..

      But if it was imaginable—just—that a slow dribble of energy such as uranium emitted might come from an outside source, such a notion became harder to believe when faced with radium, which (as Pierre Curie and Albert Laborde would show, in 1903) was capable of raising its own weight of water from freezing to boiling in an hour. It was harder still when faced with even more intensely radioactive substances, such as pure polonium (a small piece of which would spontaneously become red-hot) or radon, which was 200,000 times more radioactive than radium itself—so radioactive that a pint of it would instantly vaporize any vessel in which it was contained. Such a power to heat was unintelligible with any etheric or cosmic hypothesis.
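
The water-boiling figure is easy to restate in everyday units. A back-of-the-envelope sketch (my own arithmetic; only the claim that radium heats its own weight of water in an hour comes from the text):

# Heat needed to take 1 g of water from 0 to 100 degrees C, delivered over one hour.
specific_heat_water = 4.187                     # J per gram per degree C
energy_per_gram = specific_heat_water * 100     # ~419 J per gram of water
power_per_gram = energy_per_gram / 3600         # spread over one hour
print(f"about {energy_per_gram:.0f} J per hour, i.e. roughly {power_per_gram:.2f} W per gram of radium")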

      With no plausible external source of energy, the Curies were forced to return to their original thought that the energy of radium had to have an internal origin, to be an “atomic property”—although a basis for this was hardly imaginable. As early as 1898, Marie Curie added a bolder, even outrageous thought, that radioactivity might come from the disintegration of atoms, that it could be “an emission of matter accompanied by a loss of weight of the radioactive substances”—a hypothesis even more bizarre, it might have seemed, than its alternative, for it had been axiomatic in science, a fundamental assumption, that atoms were indestructible, immutable, unsplittable—the whole of chemistry and classical physics was built on that faith….

      All scientific tradition, from Democritus to Dalton, from Lucretius to Maxwell, insisted upon this principle, and one can readily understand how, after her first bold thoughts about atomic disintegration, Marie Curie withdrew from the idea, and (using unusually poetic language) ended her thesis on radium by saying, “the cause of this spontaneous radiation remains a mystery . . . a profound and wonderful enigma.”

The Curies, Polonium and Radium

[This excerpt is from chapter 21 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Eve Curie’s biography of her mother—which my own mother gave me when I was ten—was the first portrait of a scientist I ever read, and one that deeply impressed me. It was no dry recital of a life’s achievements, but full of evocative, poignant images—Marie Curie plunging her hands into the sacks of pitchblende residue, still mixed with pine needles from the Joachimsthal mine; inhaling acid fumes as she stood amid vast steaming vats and crucibles, stirring them with an iron rod almost as big as herself; transforming the huge, tarry masses to tall vessels of colorless solutions, more and more radioactive, and steadily concentrating these, in turn, in her drafty shed, with dust and grit continually getting into the solutions and undoing the endless work. (These images were reinforced by the film Madame Curie, which I saw soon after reading the book.)

      Even though the rest of the scientific community had ignored the news of Becquerel's rays, the Curies were galvanized by it: this was a phenomenon without precedent or parallel, the revelation of a new, mysterious source of energy; and nobody, apparently, was paying any attention to it. They wondered at once whether there were any substances besides uranium that emitted similar rays, and started on a systematic search (not confined, as Becquerel’s had been, to fluorescent substances) of everything they could lay their hands on, including samples of almost all the seventy known elements in some form or other. They found only one other substance besides uranium that emitted Becquerel’s rays, another element of very high atomic weight—thorium. Testing a variety of pure uranium and thorium salts, they found the intensity of the radioactivity seemed to be related only to the amount of uranium or thorium present; thus one gram of metallic uranium or thorium was more radioactive than one gram of any of their compounds.

      But when they extended their survey to some of the common minerals containing uranium and thorium, they found a curious anomaly, for some of these were actually more active than the element itself. Samples of pitchblende, for instance, might be up to four times as radioactive as pure uranium. Could this mean, they wondered, in an inspired leap, that another, as-yet-unknown element was also present in small amounts, one that was far more radioactive than uranium itself?

      In 1897 the Curies launched upon an elaborate chemical analysis of pitchblende, separating the many elements it contained into analytic groups: salts of alkali metals, of alkaline earth elements, of rare-earth elements—groups basically similar to those of the periodic table—to see if the unknown radioactive element had chemical affinities with any of them. Soon it became clear that a good part of the radioactivity could be concentrated by precipitation with bismuth.

      They continued rendering their pitchblende residue down, and in July of 1898 they were able to make a bismuth extract four hundred times more radioactive than uranium itself. Knowing that spectroscopy could be thousands of times more sensitive than traditional chemical analysis, they now approached the eminent rare-earth spectroscopist Eugene Demarcay to see if they could get a spectroscopic confirmation of their new element. Disappointingly, no new spectral signature could be obtained at this point; but nonetheless, the Curies wrote,

      we believe the substance we have extracted from pitchblende contains a metal not yet observed, related to bismuth by its analytical properties. If the existence of this new metal is confirmed we propose to call it polonium, from the name of the original country of one of us.

      They were convinced, moreover, that there must be still another radioactive element waiting to be discovered, for the bismuth extraction of polonium accounted for only a portion of the pitchblende’s radioactivity.

      They were unhurried—no one else, after all, it seemed, was even interested in the phenomenon of radioactivity, apart from their good friend Becquerel—and at this point took off on a leisurely summer holiday. (They were unaware at the time that there was another eager and intense observer of Becquerel’s rays, the brilliant young New Zealander Ernest Rutherford, who had come to work in J. J. Thomson’s lab in Cambridge.) In September the Curies returned to the chase, concentrating on precipitation with barium—this seemed particularly effective in mopping up the remaining radioactivity, presumably because it had close chemical affinities with the second as-yet-unknown element they were now seeking. Things moved swiftly, and within six weeks they had a bismuth-free (and presumably polonium-free) barium chloride solution which was nearly a thousand times as radioactive as uranium. Demarcay's help was sought once again, and this time, to their joy, he found a spectral line (and later several lines: “two beautiful red bands, one line in the blue-green, and two faint lines in the violet”) belonging to no known element. Emboldened by this, the Curies claimed a second new element a few days before the close of 1898. They decided to call it radium, and since there was only a trace of it mixed in with the barium, they felt its radioactivity “must therefore be enormous.”

      It was easy to claim a new element: there had been more than two hundred such claims in the course of the nineteenth century, most of which turned out to be cases of mistaken identity, either “discoveries” of already known elements or mixtures of elements. Now, in a single year, the Curies had claimed the existence of not one but two new elements, solely on the basis of a heightened radioactivity and its material association with bismuth and barium (and, in the case of radium, a single new spectral line). Yet neither of their new elements had been isolated, even in microscopic amounts.

      Pierre Curie was fundamentally a physicist and theorist (though dexterous and ingenious in the lab, often devising new and original apparatus—one such was an electrometer, another a delicate balance based on a new piezo-electric principle—both subsequently used in their radioactivity studies). For him, the incredible phenomenon of radioactivity was enough—it invited a vast new realm of research, a new continent where countless new ideas could be tested.

      But for Marie, the emphasis was different: she was clearly enchanted by the physicality of radium as well as its strange new powers; she wanted to see it, to feel it, to put it in chemical combination, to find its atomic weight and its position in the periodic table.

      Up to this point the Curies’ work had been essentially chemical, removing calcium, lead, silicon, aluminum, iron, and a dozen rare-earth elements—all the elements other than barium—from the pitchblende. Finally, after a year of this, there came a time when chemical methods alone no longer sufficed. There seemed no chemical way of separating radium from barium, so Marie Curie now began to look for a physical difference between their compounds. It seemed probable that radium would be an alkaline earth element like barium and might therefore follow the trends of the group. Calcium chloride is highly soluble; strontium chloride less so; barium chloride still less so—radium chloride, Marie Curie predicted, would be virtually insoluble. Perhaps one could make use of this to separate the chlorides of barium and radium, using the technique of fractional crystallization. As a warm solution is cooled, the less soluble solute will crystallize out first, and this was a technique which had been pioneered by the rare-earth chemists, striving to separate elements that were chemically almost indistinguishable. It was one that required great patience, for hundreds, even thousands, of fractional crystallizations might be needed, and it was this repetitive and tantalizingly slow process that now caused the months to extend into years.
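
[A minimal numerical sketch in Python, added for illustration and not from Sacks: it assumes a hypothetical per-step enrichment factor to suggest why hundreds or thousands of crystallizations could be needed; the starting ratio and separation factor are invented numbers.]

# Illustrative model of repeated fractional crystallization; all values are
# assumptions, not Marie Curie's data.
def steps_to_purify(start_ratio=1e-6, target_ratio=1.0, separation_factor=1.02):
    """Count crystallization passes until the radium:barium ratio reaches the target."""
    ratio, steps = start_ratio, 0
    while ratio < target_ratio:
        ratio *= separation_factor   # keep the radium-enriched crystal fraction each pass
        steps += 1
    return steps

print(steps_to_purify())   # roughly 700 passes under these assumed numbers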

      The Curies had hoped they might isolate radium by 1900, but it was to take nearly four years from the time they announced its probable existence to obtain a pure radium salt, a decigram of radium chloride—less than a ten-millionth part of the original. Fighting against all manner of physical difficulties, fighting the doubts and skepticisms of most of their peers, and sometimes their own hopelessness and exhaustion; fighting (although they did not know it) against the insidious effects of radioactivity on their own bodies, the Curies finally triumphed and obtained a few grains of pure white crystalline radium chloride—enough to calculate radium’s atomic weight (226), and to give it its rightful place, below barium, in the periodic table.

      To obtain a decigram of an element from several tons of ore was an achievement with no precedent; never had an element been so hard to obtain. Chemistry alone could not have succeeded in this, nor could spectroscopy alone, for the ore had to be concentrated a thousandfold before the first faint spectral lines of radium could even be seen. It had required a wholly new approach—the use of radioactivity itself—to identify the infinitesimal concentration of radium in its vast mass of surrounding material, and to monitor it as it was slowly, reluctantly, forced into a state of purity.

      With this achievement, public interest in the Curies exploded, spreading equally to their magical new element and the romantic, heroic husband-and-wife team who had dedicated themselves so totally to its exploration. In 1903, Marie Curie summarized the work of the previous six years in her doctoral thesis, and in the same year she received (with Pierre Curie and Becquerel) the Nobel Prize in physics.

Spectroscopy

[This excerpt is from chapter 17 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Kirchhoff and others (and especially Lockyer himself) went on to identify a score of other terrestrial elements in the sun, and now the Fraunhofer mystery—the hundreds of black lines in the solar spectrum—could be understood as the absorption spectra of these elements in the outermost layers of the sun, as they were transilluminated from within. On the other hand, a solar eclipse, it was predicted, with the central brilliance of the sun obscured and only its brilliant corona visible, would produce instead dazzling emission spectra corresponding to the dark lines….

      At this point, Bunsen and Kirchhoff turned their attention away from the heavens, to see if they could find any new or undiscovered elements on the earth using their new technique. Bunsen had already observed the great power of the spectroscope to resolve complex mixtures—to provide, in effect, an optical analysis of chemical compounds. If lithium, for example, was present in small amounts along with sodium, there was no way, with conventional chemical analysis, to detect it. Nor were flame colors of help here, because the brilliant yellow flame of sodium tended to flood out other flame colors. But with a spectroscope, the characteristic spectrum of lithium could be seen immediately, even if it was mixed with ten thousand times its weight of sodium.

      This enabled Bunsen to show that certain mineral waters rich in sodium and potassium also contained lithium (this had been completely unsuspected, the only sources hitherto having been certain rare minerals). Could they contain other alkali metals too? When Bunsen concentrated his mineral water, rendering down 600 quintals (about 44 tons) to a few liters, he saw, amid the lines of many other elements, two remarkable blue lines, close together, which had never been seen before. This, he felt, must be the signature of a new element. “I shall name it cesium because of its beautiful blue spectral line,” he wrote, announcing its discovery in November 1860.

      Three months later, Bunsen and Kirchhoff discovered another new alkali metal; they called this rubidium, from “the magnificent dark red color of its rays.”

      Within a few decades of Bunsen and Kirchhoff’s discoveries twenty more elements were discovered with the aid of spectroscopy—indium and thallium (which were also named for their brilliantly colored spectral lines), gallium, scandium, and germanium (the three elements Mendeleev had predicted), all the remaining rare-earth elements, and, in the 1890s, the inert gases.

      But perhaps the most romantic story of all, certainly the one that most appealed to me as a boy, had to do with the discovery of helium. It was Lockyer himself who, during a solar eclipse in 1868, was able to see a brilliant yellow line in the sun's corona, a line near the yellow sodium lines, but clearly distinct from them. He surmised that this new line must belong to an element unknown on earth, and named it helium (he gave it the metallic suffix of -ium because he assumed it was a metal). This finding aroused great wonder and excitement, and it was even speculated by some that every star might have its own special elements. It was only twenty-five years later that certain terrestrial (uranium) minerals were found to contain a strange, light gas, readily released, and when this was submitted to spectroscopy it proved to be the selfsame helium.

Ida Noddack

[This excerpt is from chapter 16 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Ida Tacke Noddack was one of a team of German scientists who found element 75, rhenium, in 1925-26. Noddack also claimed to have found element 43, which she called masurium. But this claim could not be supported, and she was discredited. In 1934, when Fermi shot neutrons at uranium and thought he had made element 93, Noddack suggested that he was wrong, that he had in fact split the atom. But since she had been discredited with element 43, no one paid any attention to her. Had she been listened to, Germany would probably have had the atomic bomb and the history of the world would have been different. (This story was told by Glenn Seaborg when he was presenting his recollections at a conference in November 1997.)

Mendeleev

[This excerpt is from chapter 16 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Like my own parents, Mendeleev had come from a huge family—he was the youngest, I read, of fourteen children. His mother must have recognized his precocious intelligence, and when he reached fourteen, feeling that he would be lost without a proper education, she walked thousands of miles from Siberia with him—first to the University of Moscow (from which, as a Siberian, he was barred) and then to St. Petersburg, where he got a grant to train as a teacher. (She herself, apparently, nearing sixty at the time, died from exhaustion after this prodigious effort. Mendeleev, profoundly attached to her, was later to dedicate the Principles to her memory.)

      Even as a student in St. Petersburg, Mendeleev showed not only an insatiable curiosity, but a hunger for organizing principles of all kinds. Linnaeus, in the eighteenth century, had classified animals and plants, and (much less successfully) minerals, too. Dana, in the 1830s, had replaced the old physical classification of minerals with a chemical classification of a dozen or so main categories (native elements, oxides, sulfides, and so on). But there was no such classification for the elements themselves, and there were now some sixty elements known. Some elements, indeed, seemed almost impossible to categorize. Where did uranium go, or that puzzling, ultralight metal, beryllium? Some of the most recently discovered elements were particularly difficult—thallium, for example, discovered in 1862, was in some ways similar to lead, in others to silver, in others to aluminum, and in yet others to potassium.

      It was nearly twenty years from Mendeleev's first interest in classification to the emergence of his periodic table in 1869. This long pondering and incubation (so similar, in a way, to Darwin’s before he published On the Origin of Species) was perhaps the reason why, when Mendeleev finally published his Principles, he could bring a vastness of knowledge and insight far beyond that of any of his contemporaries—some of them also had a clear vision of periodicity, but none of them could marshal the overwhelming detail he could.

      Mendeleev described how he would write the properties and atomic weights of the elements on cards and ponder and shuffle these constantly on his long railway journeys through Russia, playing a sort of patience or (as he called it) “chemical solitaire,” groping for an order, a system that might bring sense to all the elements, their properties and atomic weights.

      There was another crucial factor. There had been considerable confusion, for decades, about the atomic weights of many elements. It was only when this was cleared up finally, at the Karlsruhe conference in 1860, that Mendeleev and others could even think of achieving a full taxonomy of the elements. Mendeleev had gone to Karlsruhe with Borodin (this was a musical as well as a chemical journey, for they stopped at many churches en route, trying out the local organs for themselves). With the old, pre-Karlsruhe atomic weights one could get a sense of local triads or groups, but one could not see that there was a numerical relationship between the groups themselves. Only when Cannizzaro showed how reliable atomic weights could be obtained and showed, for example, that the proper atomic weights for the alkaline earth metals (calcium, strontium, and barium) were 40, 88, and 137 (not 20, 44, and 68, as formerly believed) did it become clear how close these were to those of the alkali metals—potassium, rubidium, and cesium. It was this closeness, and in turn the closeness of the atomic weights of the halogens—chlorine, bromine, and iodine—which incited Mendeleev, in 1868, to make a small grid juxtaposing the three groups:

      Cl    35.5      K     39      Ca    40
      Br    80        Rb    85      Sr    88
      I     127       Cs    133     Ba    137

      And it was at this point, seeing that arranging the three groups of elements in order of atomic weight produced a repetitive pattern—a halogen followed by an alkali metal, followed by an alkaline earth metal—that Mendeleev, feeling this must be a fragment of a larger pattern, leapt to the idea of a periodicity governing all the elements—a Periodic Law.

      Mendeleev’s first small table had to be filled in, and then extended in all directions, as if filling up a crossword puzzle; this in itself required some bold speculations. What element, he wondered, was chemically allied with the alkaline earth metals, yet followed lithium in atomic weight? No such element apparently existed—or could it be beryllium, usually considered to be trivalent, with an atomic weight of 14.5? What if it was bivalent instead, with an atomic weight, therefore, not of 14.5 but 9? Then it would follow lithium and fit into the vacant space perfectly.

      Moving between conscious calculation and hunch, between intuition and analysis, Mendeleev arrived within a few weeks at a tabulation of thirty-odd elements in order of ascending atomic weight, a tabulation that now suggested there was a recapitulation of properties with every eighth element. And on the night of February 16, 1869, it is said, he had a dream in which he saw almost all of the known elements arrayed in a grand table. The following morning, he committed this to paper.

      The logic and pattern of Mendeleev's table were so clear that certain anomalies stood out at once. Certain elements seemed to be in the wrong places, while certain places had no elements. On the basis of his enormous chemical knowledge, he repositioned half a dozen elements, in defiance of their accepted valency and atomic weights. In doing this, he displayed an audacity that shocked some of his contemporaries (Lothar Meyer, for one, felt it was monstrous to change atomic weights simply because they did not “fit”).

      In an act of supreme confidence, Mendeleev reserved several empty spaces in his table for elements “as yet unknown.” He asserted that by extrapolating from the properties of the elements above and below (and also, to some extent, from those to either side) one might make a confident prediction as to what these unknown elements would be like. He did exactly this in his 1871 table, predicting in great detail a new element (“eka-aluminum”) which would come below aluminum in Group III. Four years later just such an element was found, by the French chemist Lecoq de Boisbaudran, and named (either patriotically, or in sly reference to himself, gallus, the cock) gallium.

      The exactness of Mendeleev's prediction was astonishing: he predicted an atomic weight of 68 (Lecoq got 69.9) and a specific gravity of 5.9 (Lecoq got 5.94) and correctly guessed at a great number of gallium's other physical and chemical properties—its fusibility, its oxides, its salts, its valency. There were some initial discrepancies between Lecoq’s observations and Mendeleev’s predictions, but all of these were rapidly resolved in favor of Mendeleev. Indeed, it was said that Mendeleev had a better grasp of the properties of gallium—an element he had never even seen—than the man who actually discovered it.

      Suddenly Mendeleev was no longer seen as a mere speculator or dreamer, but as a man who had discovered a basic law of nature, and now the periodic table was transformed from a pretty but unproven scheme to an invaluable guide which could allow a vast amount of previously unconnected chemical information to be coordinated. It could also be used to suggest all sorts of research in the future, including a systematic search for “missing” elements. “Before the promulgation of this law,” Mendeleev was to say nearly twenty years later, “chemical elements were mere fragmentary, incidental facts in Nature; there was no special reason to expect the discovery of new elements.”

      Now, with Mendeleev’s periodic table, one could not only expect their discovery, but predict their very properties. Mendeleev made two more equally detailed predictions, and these were also confirmed with the discovery of scandium and germanium a few years later. Here, as with gallium, he made his predictions on the basis of analogy and linearity, guessing that the physical and chemical properties of these unknown elements, and their atomic weights, would be between those of the neighboring elements in their vertical groups.
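
[A small arithmetic illustration in Python, added here and not from Sacks; it uses modern atomic weights rather than Mendeleev's 1871 figures: interpolating between the vertical neighbors of the gap below aluminum lands close to gallium's later-measured atomic weight.]

# Interpolate between the neighbors above and below the vacant space in the group.
aluminum, indium = 26.98, 114.82              # modern atomic weights of the vertical neighbors
predicted_eka_aluminum = (aluminum + indium) / 2
print(predicted_eka_aluminum)                 # ~70.9; gallium was later measured at ~69.7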

      The keystone to the whole table, curiously, was not anticipated by Mendeleev, and perhaps could not have been, for this was not a question of a missing element, but of an entire family or group. When argon was discovered in 1894—an element which did not seem to fit anywhere in the table—Mendeleev denied at first that it could be an element and thought it was a heavier form of nitrogen (N3, analogous to ozone, O3). But then it became apparent that there was a space for it, right between chlorine and potassium, and indeed, for a whole group coming between the halogens and the alkali metals in every period. This was realized by Lecoq, who went on to predict the atomic weights of the other yet-to-be-discovered gases—and these, indeed, were discovered in short order. With the discovery of helium, neon, krypton, and xenon, it was clear that these gases formed a perfect periodic group, a group so inert, so modest, so unobtrusive, as to have escaped for a century the chemist's attention. The inert gases were identical in their inability to form compounds; they had a valency, it seemed, of zero.

Alkali Metals

[This excerpt is from chapter 11 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Lavoisier, making his list of elements in 1789, had included the “alkaline earths” (magnesia, lime, and baryta) because he felt they contained new elements—and to these Davy added the alkalis (soda and potash), for these, he suspected, contained new elements too. But there were as yet no chemical means sufficient to isolate them. Could the radically new power of electricity, Davy wondered, succeed here where ordinary chemistry had failed? First he attacked the alkalis, and early in 1807 performed the famous experiments that isolated metallic potassium and sodium by electric current. When this occurred, Davy was so ecstatic, his lab assistant recorded, that he danced with joy around the lab.

      One of my greatest delights was to repeat Davy’s original experiments in my own lab, and I so identified with him that I could almost feel I was discovering these elements myself. Having read how he first discovered potassium, and how it reacted with water, I diced a little pellet of it (it cut like butter, and the cut surface glittered a brilliant silver-white—but only for an instant; it tarnished at once). I lowered it gently into a trough full of water and stood back—hardly fast enough, for the potassium caught fire instantly, melted, and as a frenzied molten blob rushed round and round in the trough, with a violet flame above it, spitting and crackling loudly as it threw off incandescent fragments in all directions. In a few seconds the little globule had burned itself out, and tranquility settled again over the water in the trough. But now the water felt warm, and soapy; it had become a solution of caustic potash, and being alkaline, it turned a piece of litmus paper blue.

      Sodium was much cheaper and not quite as violent as potassium, so I decided to look at its action outdoors. I obtained a good-sized lump of it—about three pounds—and made an excursion to the Highgate Ponds in Hampstead Heath with my two closest friends, Eric and Jonathan. When we arrived, we climbed up a little bridge, and then I pulled the sodium out of its oil with tongs and flung it into the water beneath. It took fire instantly and sped around and around on the surface like a demented meteor, with a huge sheet of yellow flame above it. We all exulted—this was chemistry with a vengeance!

      There were other members of the alkali metal family even more reactive than sodium and potassium, metals like rubidium and cesium (there was also the lightest and least reactive, lithium). It was fascinating to compare the reactions of all five by putting small lumps of each into water. One had to do this gingerly, with tongs, and to equip oneself and one’s guests with goggles: lithium would move about the surface of the water sedately, reacting with it, emitting hydrogen, until it was all gone; a lump of sodium would move around the surface with an angry buzz, but would not catch fire if a small lump was used; potassium, in contrast, would catch fire the instant it hit the water, burning with a pale mauve flame and shooting globules of itself everywhere; rubidium was still more reactive, spluttering violently with a reddish violet flame; and cesium, I found, exploded when it hit the water, shattering its glass container. One never forgot the properties of the alkali metals after this.


Watch the reaction of alkali metals on YouTube!
Watch another video of the reaction of alkali metals on YouTube!

What’s the Damage from Climate Change?

[These excerpts are from an article by William A. Pizer in the June 30, 2017, issue of Science.]

      Questions of environmental regulation typically involve trade-offs between economic activity and environmental protection. A tally of these trade-offs, put into common monetary terms—that is, a cost-benefit analysis (CBA)—has been required for significant regulations (e.g., those having an annual effect on the economy of $100 million or more) by the U.S. government for more than four decades. Ethical debate over the role of CBA is at least as old as the requirement itself, but the practical reality is that it pervades government policy-making. If estimates of environmental impacts and valuation are absent or questionable, the case for environmental protection is weakened….

      Between 2009 and 2016, the U.S. government established an interagency working group to produce improved estimates of the cost associated with carbon dioxide emissions. It made use of the only three models, based on peer-reviewed research, that put together the four key components necessary to value the benefits of reducing climate change: projections of population, economic activity, and emissions; climate models to predict how small perturbations to baseline emissions affect the climate; damage models that translate climate change into impacts measured against the baseline economic activity and population; and a discounting model to translate the future damages associated with current incremental emissions into an appropriate damage value today…. The damage component is arguably the most challenging: Information must be combined from numerous studies, covering multiple climate change impacts, spanning a range of disciplines, and often requiring considerable work to make them fit together.
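
[A minimal sketch in Python, added for illustration and not from the article; the damage stream and discount rate are assumed values, chosen only to show what the discounting component does: translate future damages from today's incremental emissions into a present-day value.]

# Present value of an assumed stream of future damages from one extra ton of CO2.
def present_value(damages_by_year, discount_rate=0.03):
    """Discount each future year's damages back to today and sum them."""
    return sum(d / (1 + discount_rate) ** t for t, d in enumerate(damages_by_year))

print(round(present_value([1.0] * 100), 2))   # $1/year for 100 years at 3% -> about 32.55 today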

Déjà vu for U.S. Nuclear Waste

[This editorial by Allison Macfarlane and Rod Ewing is in the June 30, 2017, issue of Science.]

      With the arrival of the 115th U.S. Congress, the House of Representatives began hearings on the Nuclear Waste Policy Amendments Act of 2017. The legislation restarts Yucca Mountain as a repository for highly radioactive waste. In 1987, Congress amended the Nuclear Waste Policy Act of 1982, selecting Yucca Mountain, Nevada, as the only site to be studied, expecting the repository to open in 1998. It did not. Over the past 30 years, not much has changed. But going forward—or not—with Yucca Mountain will not address the systemic problems of the U.S. nuclear waste program, and this may well lead to continued failure.

      Spent nuclear fuel from power plants has accumulated at more than 70 sites in 35 states, and highly radioactive waste from defense programs remains at U.S. Department of Energy (DOE) sites. Used fuel is transferred to casks, where it may wait for decades to cool to temperatures required for transport. Clearly, the back end of the nuclear fuel cycle is broken.

      There are major obstacles to the current nuclear waste program. Nuclear facilities, whether for disposal or interim storage, take decades to plan, license, and build. Moreover, sustained opposition to a nuclear facility can prevail, simply because opponents only need to succeed occasionally to derail large, complicated projects. The United States needs a strategy that can persist over decades, not just until the next election.

      Key to any strategy is the creation of a new organization for the management of nuclear waste. The DOE, designated by law to lead the nation on this issue, has failed, in part due to changing political winds. Creating a new organization to manage this problem is not a new recommendation. The 2012 Blue Ribbon Commission on America’s Nuclear Future suggested a new, single-purpose, government-chartered corporation for the task. A series of recent meetings (Reset of U.S. Nuclear Waste Management Strategy and Policy) considered a utility-owned organization to have important advantages. Indeed, four of the most advanced nuclear waste management programs—in Canada, Finland, Sweden, and Switzerland—have placed responsibility for the management and disposal of spent fuel with a single nonprofit organization owned by the nuclear utilities. Consequently, these companies have strong technical and financial incentives to make decisions focused on the final goal—geological disposal.

      Funding a new organization is critically important. Although U.S. ratepayers have paid a fee ($0.001/kWh) for nuclear waste disposal, the U.S. program has not moved forward because congressional appropriations from the Nuclear Waste Fund are subject to statutory (Budget Control Act of 2011) and procedural (congressional budget resolutions) limits, in addition to political ones, that restrict availability of the funds. The Nuclear Waste Fund, now $35 billion, is used instead to offset the federal debt. A utility-owned management organization would not suffer from the vagaries of the political process. Fees could be collected and used by the utility-owned organization.

      Also essential is trust. Although a new organization would exist within a web of oversight entities (federal regulator, state agencies, independent scientific review, and public interest groups), it could only operate successfully with the trust of all affected parties. The organization must direct a robust science program and manage a major engineering and construction project under intense public scrutiny and engagement over many decades.

      A new U.S. program also should pay attention to the successes of other programs, particularly in Sweden, Finland, Canada, and France. A well-designed process with technical criteria for site selection and for public engagement and approval has been key to their success.

      Without addressing these issues, the U.S. program cannot expect to succeed. Otherwise, in 30 years, Congress will be holding hearings on yet another generation of amendments to the Nuclear Waste Policy Act.

Lavoisier’s Chemistry

[This excerpt is from chapter 10 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Lavoisier’s demonstration that combustion was a chemical process—oxidation, as it could now be called—implied much else, and was for him only a fragment of a much wider vision, the revolution in chemistry that he had envisaged. Roasting metals in closed retorts, showing that there was no ghostly weight gain from “particles of fire” or weight loss from loss of phlogiston, had demonstrated to him that there was neither creation nor loss of matter in such processes. This principle of conservation, moreover, applied not only to the total mass of products and reactants, but to each of the individual elements involved. When one fermented sugar with yeast and water in a closed vessel to yield alcohol, as in one of his experiments, the total amounts of carbon and hydrogen and oxygen always stayed the same. They might be reaggregated chemically, but their amounts were unchanged.

      The conservation of mass implied a constancy of composition and decomposition. Thus Lavoisier was led to define an element as a material that could not be decomposed by existing means, and this enabled him (with de Morveau and others) to draw up a list of genuine elements—thirty-three distinct, undecomposable, elementary substances, replacing the four Elements of the ancients. This in turn allowed Lavoisier to draw up a “balance sheet,” as he called it, a precise accounting of each element in a reaction.
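
[A small sketch in Python, added here and not from Sacks: in the spirit of Lavoisier's “balance sheet,” it tallies each element on both sides of the fermentation mentioned above (glucose to ethanol and carbon dioxide); the formulas are standard, but the code itself is purely illustrative.]

# Check that carbon, hydrogen, and oxygen are conserved in
# glucose -> 2 ethanol + 2 carbon dioxide.
from collections import Counter

def atoms(species):
    """Sum elemental counts over (formula_dict, number_of_molecules) pairs."""
    total = Counter()
    for counts, n in species:
        for element, k in counts.items():
            total[element] += k * n
    return total

glucose = {"C": 6, "H": 12, "O": 6}
ethanol = {"C": 2, "H": 6, "O": 1}
co2     = {"C": 1, "O": 2}

print(atoms([(glucose, 1)]) == atoms([(ethanol, 2), (co2, 2)]))   # True: nothing created or lost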

      The language of chemistry, Lavoisier now felt, had to be transformed to go with his new theory, and he undertook a revolution of nomenclature, too, replacing the old, picturesque but uninformative terms—like butter of antimony, jovial bezoar, blue vitriol, sugar of lead, fuming liquor of Libavius, flowers of zinc—with precise, analytic, self-explanatory ones. If an element was compounded with nitrogen, phosphorus, or sulfur, it became a nitride, a phosphide, a sulfide. If acids were formed, through the addition of oxygen, one might speak of nitric acid, phosphoric acid, sulfuric acid; and of the salts of these as nitrates, phosphates, and sulfates. If smaller amounts of oxygen were present, one might speak of nitrites or phosphites instead of nitrates and phosphates, and so on. Every substance, elementary or compound, would have its true name, denoting its composition and chemical character, and such names, manipulated as in an algebra, would instantly indicate how they might interact or behave in different circumstances. (Although I was keenly conscious of the advantages of the new names, I missed the old ones, too, for they had a poetry, a strong feeling of their sensory qualities or hermetic antecedents, which was entirely missing from the new, systematic and scentless chemical names.)

      Lavoisier did not provide symbols for the elements, nor did he use chemical equations, but he provided the essential background to these, and I was thrilled by his notion of a balance sheet, this algebra of reality, for chemical reactions. It was like seeing language, or music, written down for the first time. Given this algebraic language, one might not need an actual afternoon in the lab—one could in effect do chemistry on a blackboard, or in one’s head.

      All of Lavoisier’s enterprises—the algebraic language, the nomenclature, the conservation of mass, the definition of an element, the formation of a true theory of combustion—were organically interlinked, formed a single marvelous structure, a revolutionary refounding of chemistry such as he had dreamed of, so ambitiously, in 1773. The path to his revolution was not easy or direct, even though he presents it as obvious in the Elements of Chemistry; it required fifteen years of genius time, fighting his way through labyrinths of presupposition, fighting his own blindnesses as he fought everyone else’s.

      There had been violent disputes and conflicts during the years in which Lavoisier was slowly gathering his ammunition, but when the Elements was finally published—in 1789, just three months before the French Revolution—it took the scientific world by storm. It was an architecture of thought of an entirely new sort, comparable only to Newton’s Principia. There were a few holdouts—Cavendish and Priestley were the most eminent of these—but by 1791 Lavoisier could say, “all young chemists adopt the theory and from that I conclude that the revolution in chemistry has come to pass.”

      Three years later Lavoisier’s life was ended, at the height of his powers, on the guillotine. The great mathematician Lagrange, lamenting the death of his colleague and friend, said: “It required only a moment to sever his head, and one hundred years, perhaps, may not suffice to produce another like it.”

Antoine Lavoisier’s Scientific Achievement

[This excerpt is from chapter 10 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      In his biography of Lavoisier, Douglas McKie includes an exhaustive list of Lavoisier's scientific activities which paints a vivid picture of his times, no less than his own remarkable range of mind: “Lavoisier took part,” McKie writes,

      …in the preparation of reports on the water supply of Paris, prisons, mesmerism, the adulteration of cider, the site of the public abattoirs, the newly-invented “aerostatic machines of Montgolfier” (balloons), bleaching, tables of specific gravity, hydrometers, the theory of colors, lamps, meteorites, smokeless grates, tapestry making, the engraving of coats-of-arms, paper, fossils, an invalid chair, a water-driven bellows, tartar, sulfur springs, the cultivation of cabbage and rape seed and the oils extracted thence, a tobacco grater, the working of coal mines, white soap, the decomposition of nitre, the manufacture of starch . . . the storage of fresh water on ships, fixed air, a reported occurrence of oil in spring water . . . the removal of oil and grease from silks and woollens, the preparation of nitrous ether by distillation, ethers, a reverberatory hearth, a new ink and inkpot to which it was only necessary to add water in order to maintain the supply of ink . . . the estimation of alkali in mineral waters, a powder magazine for the Paris Arsenal, the mineralogy of the Pyrenees, wheat and flour, cesspools and the air arising from them, the alleged occurrence of gold in the ashes of plants, arsenic acid, the parting of gold and silver, the base of Epsom salt, the winding of silk, the solution of tin used in dyeing, volcanoes, putrefaction, fire-extinguishing liquids, alloys, the rusting of iron, a proposal to use “inflammable air” in a public firework display (this at the request of the police), coal measures, dephlogisticated marine acid, lamp wicks, the natural history of Corsica, the mephitis of the Paris wells, the alleged solution of gold in nitric acid, the hygrometric properties of soda, the iron and salt works of the Pyrenees, argentiferous lead mines, a new kind of barrel, the manufacture of plate glass, fuels, the conversion of peat into charcoal, the construction of corn mills, the manufacture of sugar, the extraordinary effects of a thunder bolt, the retting of flax, the mineral deposits of France, plated cooking vessels, the formation of water, the coinage, barometers, the respiration of insects, the nutrition of vegetables, the proportion of the components in chemical compounds, vegetation, and many other subjects, far too many to be described here, even in the briefest terms.

Robert Hooke – Robert Boyle’s Assistant

[This excerpt is from chapter 10 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Hooke himself was to become a marvel of scientific energy and ingenuity, abetted by his mechanical genius and mathematical ability. He kept voluminous, minutely detailed journals and diaries, which provide an incomparable picture not only of his own ceaseless mental activity, but of the whole intellectual atmosphere of seventeenth-century science. In his Micrographia, Hooke illustrated his compound microscope, along with drawings of the intricate, never-before-seen structures of insects and other creatures (including a famous picture of a Brobdingnagian louse, attached to a human hair as thick as a barge pole). He judged the frequency of flies’ wingbeats by their musical pitch. He interpreted fossils, for the first time, as the relics and impressions of extinct animals. He illustrated his designs for a wind gauge, a thermometer, a hygrometer, a barometer. And he showed an intellectual audacity sometimes even greater than Boyle's, as with his understanding of combustion, which, he said, “is made by a substance inherent, and mixt with the Air.” He identified this with “that property in the Air which it loses in the Lungs.” This notion of a substance present in limited amounts in the air that is required for and gets used up in combustion and respiration is far closer to the concept of a chemically active gas than Boyle’s theory of igneous particles.

      Many of Hooke’s ideas were almost completely ignored and forgotten, so that one scholar observed in 1803, “I do not know a more unaccountable thing in the history of science than the total oblivion of this theory of Dr. Hooke, so clearly expressed, and so likely to catch attention.” One reason for this oblivion was the implacable enmity of Newton, who developed such a hatred of Hooke that he would not consent to assume the presidency of the Royal Society while Hooke was still alive, and did all he could to extinguish Hooke's reputation. But deeper than this is perhaps what Gunther Stent calls “prematurity” in science, that many of Hooke’s ideas (and especially those on combustion) were so radical as to be unassimilable, even unintelligible, in the accepted thinking of his time.

Robert Boyle

[This excerpt is from chapter 10 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Chemistry as a true science, I read, made its first emergence with the work of Robert Boyle in the middle of the seventeenth century. Twenty years Newton’s senior, Boyle was born at a time when the practice of alchemy still held sway, and he still maintained a variety of alchemical beliefs and practices, side by side with his scientific ones. He believed that gold could be created, and that he had succeeded in creating it (Newton, also an alchemist, advised him to keep silent about this). He was a man of immense curiosity (of “holy curiosity,” in Einstein’s phrase), for all the wonders of nature, Boyle felt, proclaimed the glory of God, and this led him to examine a huge range of phenomena.

      He examined crystals and their structure, and was the first to discover their cleavage planes. He explored color, and wrote a book on this which influenced Newton. He devised the first chemical indicator, a paper soaked with syrup of violets which would turn red in the presence of acid fluids, green with alkaline ones. He wrote the first book in English on electricity. He prepared hydrogen, without realizing it, by putting iron nails in sulfuric acid. He found that although most fluids contracted when frozen, water expanded. He showed that a gas (later realized to be carbon dioxide) was evolved when he poured vinegar on powdered coral, and that flies would die if kept in this “artificial air.” He investigated the properties of blood and was interested in the possibility of blood transfusion. He experimented with the perception of odors and tastes. He was the first to describe semipermeable membranes. He provided the first case history of acquired achromatopsia, a total loss of color vision following a brain infection.

      All these investigations and many others he described in language of great plainness and clarity, utterly different from the arcane and enigmatic language of the alchemists. Anyone could read him and repeat his experiments; he stood for the openness of science, as opposed to the closed, hermetic secrecy of alchemy.

      Although his interests were universal, chemistry seemed to hold a very special appeal for him (even as a youth he called his own chemical laboratory “a kind of Elysium”). He wished, above all, to understand the nature of matter, and his most famous book, The Sceptical Chymist, was written to debunk the mystical doctrine of the Four Elements, and to unite the enormous, centuries-old empirical knowledge of alchemy and pharmacy with the new, enlightened rationality of his age.

Malodorous Chemicals

[This excerpt is from chapter 8 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      The bad smells, the stenches, always seemed to come from compounds containing sulfur (the smells of garlic and onion were simple organic sulfides, as closely related chemically as they were botanically), and these reached their climax in the sulfuretted alcohols, the mercaptans. The smell of skunks was due to butyl mercaptan, I read—this was pleasant, refreshing, when very dilute, but appalling, overwhelming, at close quarters. (I was delighted, when I read Antic Hay a few years later, to find that Aldous Huxley had named one of his less delectable characters Mercaptan.)

      Thinking of all the malodorous sulfur compounds and the atrocious smell of selenium and tellurium compounds, I decided that these three elements formed an olfactory as well as a chemical category, and thought of them thereafter as the “stinkogens.”

      I had smelled a bit of hydrogen sulfide in Uncle Dave's lab—it smelled of rotten eggs and farts and (I was told) volcanoes. A simple way of making it was to pour dilute hydrochloric acid on ferrous sulfide. (The ferrous sulfide, a great chunky mass of it, I made myself by heating iron and sulfur together till they glowed and combined.) The ferrous sulfide bubbled when I poured hydrochloric acid on it, and instantly emitted a huge quantity of stinking, choking hydrogen sulfide. I threw open the doors into the garden and staggered out, feeling very queer and ill, remembering how poisonous the gas was. Meanwhile, the infernal sulfide (I had made a lot of it) was still giving off clouds of toxic gas, and this soon permeated the house. My parents were, by and large, amazingly tolerant of my experiments, but they insisted, at this point, on having a fume cupboard installed and on my using, for such experiments, less generous quantities of reagents.

      When the air had cleared, morally and physically, and the fume cupboard had been installed, I decided to make other gases, simple compounds of hydrogen with other elements besides sulfur. Knowing that selenium and tellurium were closely akin to sulfur, in the same chemical group, I employed the same basic formula: compounding the selenium or tellurium with iron, and then treating the ferrous selenide or ferrous telluride with acid. If the smell of hydrogen sulfide was bad, that of hydrogen selenide was a hundred times worse—an indescribably horrible, disgusting smell that caused me to choke and tear, and made me think of putrefying radishes or cabbage (I had a fierce hatred of cabbage and brussels sprouts at this time, for, boiled, overboiled, they had been staples at Braefield).

      Hydrogen selenide, I decided, was perhaps the worst smell in the world. But hydrogen telluride came close, was also a smell from hell. An up-to-date hell, I decided, would have not just rivers of fiery brimstone, but lakes of boiling selenium and tellurium, too.

Risks of Chemistry

[This excerpt is from chapter 8 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Chemical exploration, chemical discovery, was all the more romantic for its dangers. I felt a certain boyish glee in playing with these dangerous substances, and I was struck, in my reading, by the range of accidents that had befallen the pioneers. Few naturalists had been devoured by wild animals or stung to death by noxious plants or insects; few physicists had lost their eyesight gazing at the heavens, or broken a leg on an inclined plane; but many chemists had lost their eyes, limbs, and even their lives, usually through producing inadvertent explosions or toxins. All the early investigators of phosphorus had burned themselves severely. Bunsen, investigating cacodyl cyanide, lost his right eye in an explosion, and very nearly his life. Several later experimenters, like Moissan, trying to make diamond from graphite in intensely heated, high-pressure “bombs,” threatened to blow themselves and their fellow workers to kingdom come. Humphry Davy, one of my particular heroes, had been nearly asphyxiated by nitrous oxide, poisoned himself with nitrogen peroxide, and severely inflamed his lungs with hydrofluoric acid. Davy also experimented with the first “high” explosive, nitrogen trichloride, which had cost many people fingers and eyes. He discovered several new ways of making the combination of nitrogen and chlorine, and caused a violent explosion on one occasion while he was visiting a friend. Davy himself was partially blinded, and did not recover fully for another four months. (We were not told what damage was done to his friend's house.)

      The Discovery of the Elements devoted an entire section to “The Fluorine Martyrs.” Although elemental chlorine had been isolated from hydrochloric acid in the 1770s, its far more active cousin, fluorine, was not so easily obtained. All the early experimenters, I read, “suffered the frightful torture of hydrofluoric acid poisoning,” and at least two of them died in the process. Fluorine was only isolated in 1886, after almost a century of dangerous trying.

The Unaffordable Urban Paradise

[This excerpt is from an article by Richard Florida in the July/August 2017 issue of Technology Review.]

      …Urban areas provide the diversity, creative energy, cultural richness, vibrant street life, and openness to new ideas that attract startup talent. Their industrial and warehouse buildings also provide employees with flexible and reconfigurable work spaces. Cities and startups are a natural match.

      For years, economists, mayors, and urbanists believed that high-tech development was an unalloyed good thing, and that more high-tech startups and more venture capital investment would “lift all boats.” But the reality is that high-tech development has ushered in a new phase of what I call winner-take-all urbanism, where a relatively small number of metro areas, and a small number of neighborhoods within them, capture most of the benefits.

      Middle-class neighborhoods have been hollowed out in the process. In 1970, roughly two-thirds of Americans lived in middle-class neighborhoods; today less than 40 percent of us do. The middle-class share of the population shrank in a whopping 203 out of 229 U.S. metro areas between 2000 and 2014. And places where the middle class is smallest include such superstar cities and tech hubs as New York, San Francisco, Boston, Los Angeles, Houston, and Washington, D.C.

      Despite all this, it wouldn’t make any sense to put the brakes on high-tech development. Doing so would only cut off a huge source of innovation and economic development. High-tech industry remains a major driver of economic progress and jobs, and it provides much-needed tax revenues that cities can use to address and mitigate the problems that come with financial success.

      But if high-tech development causes problems, and stopping it doesn’t solve those problems, what comes next?

      High-tech companies should—out of self-interest, if for no other reason—embrace a shift to a kind of urbanism that allows many more people, especially blue-collar and service workers, to share in the gains of urban development. The superstar cities they’ve helped create cannot survive when nurses, EMTs, teachers, police officers, and other service providers can no longer afford to live in them.

      Here’s how they can do it. First, they can work with cities to help build more housing, which would reduce housing prices. They can support efforts to liberalize outdated zoning and building codes to enable more housing construction, and invest in the development of more affordable housing for service and blue-collar workers.

      Second, they can work for, support, and invest in the development of more and better public transit to connect outlying areas to booming cores and tech clusters where employment is—and to spur and generate denser real estate and business development around those stops and stations.

      Third, they can engage the wider business community and government to upgrade the jobs of low-wage service workers—who now make up more than 45 percent of the national workforce—into higher-paying, family-supporting work.

      This last idea might seem outlandish, but it's analogous to how the U.S. turned low-paying manufacturing jobs of the early 20th century into middle-class jobs in the 1950s and 1960s….

East African Turmoil Imperils Giraffes

[These excerpts are from an article by Jane Qiu in the June 23, 2017, issue of Science.]

      In recent months, drought and overgrazing in northern Kenya have sent thousands of herders and their livestock into national parks and other protected areas, intensifying tensions over land and grazing. Violence has taken the lives of several rangers, and a surge in wildlife killings is devastating populations of one of East Africa’s most majestic beasts: giraffes. “This affects all wildlife, but giraffes may be particularly hard hit,” says Fred Bercovitch, a zoologist….

      For hunters, “giraffes are an easy target,” he notes. And as scientists have recognized only recently, giraffes comprise multiple species, and several populations are already in serious decline. In the past 30 years, populations of two East African varieties, the Nubian and reticulated giraffes, have plunged by 97% and 78%, respectively, and the International Union for the Conservation of Nature may soon declare them critically endangered….

      The biggest threats to the animals are rapid human population growth and the influx of herders, along with refugees fleeing regional conflicts. In the refugee camps bordering Kenya and Somalia, for instance, bush meat, including giraffes, is an important source of food for half a million destitute people.

      A traditional predator, the lion, may also be taking a growing toll….

      Adding to the pressure is exponential growth in mining and infrastructure development—highways, railways, oil pipelines, and industrial compounds—which often encroach on key giraffe habitats, including those in national parks. The newly opened $3.2 billion Mombasa-Nairobi railway, for instance, cuts through Kenya’s Tsavo National Park….

Table Salt

[This excerpt is from chapter 7 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      It was from Griffin that I first gained a clear idea of what was meant by “acids” and “alkalis” and how they combined to produce “salts.” Uncle Dave demonstrated the opposition of acids and bases by measuring out precise quantities of hydrochloric acid and caustic soda, which he mixed in a beaker. The mixture became extremely hot, but when it cooled, he said, “Now try it, drink it.” Drink it—was he mad? But I did so, and tasted nothing but salt. “You see,” he explained, “an acid and a base come together, and they neutralize each other; they combine and make a salt.”

      Could this miracle happen in reverse, I asked? Could salty water be made to produce the acid and the base all over again? “No,” Uncle said, “that would require too much energy. You saw how hot it got when the acid and base reacted— the same amount of heat would be needed to reverse the reaction. And salt,” he added, “is very stable. The sodium and chloride hold each other tightly, and no ordinary chemical process will break them apart. To break them apart you have to use an electric current.”

      He showed me this more dramatically one day by putting a piece of sodium in a jar full of chlorine. There was a violent conflagration, the sodium caught fire and burned, weirdly, in the yellowish green chlorine—but when it was over, the result was nothing more than common salt. I had a heightened respect for salt, I think, after that, having seen the violent opposites that came together in its making and the strength of the energies, the elemental forces, that were now locked in the compound.


Watch this reaction on YouTube!

Chemistry of Minerals

[This excerpt is from chapter 6 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      The eighteenth century, Uncle told me, had been a grand time for the discovery and isolation of new metals (not only tungsten, but a dozen others, too), and the greatest challenge to eighteenth-century chemists was how to separate these new metals from their ores. This is how chemistry, real chemistry, got on its feet, investigating countless different minerals, analyzing them, breaking them down, to see what they contained. Real chemical analysis—seeing what minerals would react with, or how they behaved when heated or dissolved—of course required a laboratory, but there were elementary observations one could do almost anywhere. One could weigh a mineral in one's hand, estimate its density, observe its luster, the color of its streak on a porcelain plate. Hardness varied hugely, and one could easily get a rough approximation—talc and gypsum one could scratch with a fingernail; calcite with a coin; fluorite and apatite with a steel knife; and orthoclase with a steel file. Quartz would scratch glass, and corundum would scratch anything but diamond.

Burning Aluminum

[This excerpt is from chapter 4 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      On one visit, Uncle Dave showed me a large bar of aluminum. After the dense platinum metals, I was amazed at how light it was, scarcely heavier than a piece of wood. “I’ll show you something interesting,” he said. He took a smaller lump of aluminum, with a smooth, shiny surface, and smeared it with mercury. All of a sudden—it was like some terrible disease—the surface broke down, and a white substance like a fungus rapidly grew out of it, until it was a quarter of an inch high, then half an inch high, and it kept growing and growing until the aluminum was completely eaten up. “You’ve seen iron rust—oxidizing, combining with the oxygen in the air,” Uncle said. “But here, with the aluminum, it’s a million times faster. That big bar is still quite shiny, because it’s covered by a fine layer of oxide, and that protects it from further change. But rubbing it with mercury destroys the surface layer, so then the aluminum has no protection, and it combines with the oxygen in seconds.”

      I found this magical, astounding, but also a little frightening—to see a bright and shiny metal reduced so quickly to a crumbling mass of oxide….


Watch this reaction on YouTube!
Burning Aluminum
Waning Woods

[This brief article by Jason G. Goldman is in the July 2017 issue of Scientific American.]

      We humans have left our mark on the entire planet; not a single ecosystem remains completely untouched. But some landscapes have been affected less than others. And the extent to which the earth can provide habitats for plants and animals, sequester atmospheric carbon and regulate the flow of freshwater depends on the vastness of the least affected regions. These tracts, where human influence is still too weak to be detected easily by satellite, are prime targets for conservation. Using satellite imagery, a group of researchers mapped the global decline between 2000 and 2013 of such “intact forest landscapes” (IFLs), defined as forested or naturally treeless ecosystems of 500 square kilometers or more. Around half of the area of the world’s IFLs is in the tropics, and a third can be found in the boreal forests of North America and Eurasia. Logging, agriculture, mining and wildfires contributed to the drop, as reported in January in Science Advances.

      The bright side? Landscapes under formal protection, such as national parks, were more likely to remain intact.

Probiotics Are No Panacea

[These excerpts are from an article by Ferris Jabr in the July 2017 issue of Scientific American.]

      Walk into any grocery store, and you will likely find more than a few “probiotic” products brimming with so-called beneficial bacteria that are supposed to treat everything from constipation to obesity to depression. In addition to foods traditionally prepared with live bacterial cultures (such as yogurt and other fermented dairy products), consumers can now purchase probiotic capsules and pills, fruit juices, cereals, sausages, cookies, candy, granola bars and pet food. Indeed, the popularity of probiotics has grown so much in recent years that manufacturers have even added the microorganisms to cosmetics and mattresses.

      A closer look at the science underlying microbe-based treatments, however, shows that most of the health claims for probiotics are pure hype. The majority of studies to date have failed to reveal any benefits in individuals who are already healthy. The bacteria seem to help only those people suffering from a few specific intestinal disorders….

      The popular frenzy surrounding probiotics is fueled in large part by surging scientific and public interest in the human microbiome: the overlapping ecosystems of bacteria and other microorganisms found throughout the body. The human gastrointestinal system contains about 39 trillion bacteria, according to the latest estimate, most of which reside in the large intestine. In the past 15 years researchers have established that many of these commensal microbes are essential for health. Collectively, they crowd out harmful microbial invaders, break down fibrous foods into more digestible components and produce vitamins such as K and B12.

      The idea that consuming probiotics can boost the ability of already well-functioning native bacteria to promote general health is dubious for a couple of reasons. Manufacturers of probiotics often select specific bacterial strains for their products because they know how to grow them in large numbers, not because they are adapted to the human gut or known to improve health. The particular strains of Bifidobacterium or Lactobacillus that are typically found in many yogurts and pills may not be the same kind that can survive the highly acidic environment of the human stomach and from there colonize the gut.

      Even if some of the bacteria in a probiotic managed to survive and propagate in the intestine, there would likely be far too few of them to dramatically alter the overall composition of one’s internal ecosystem. Whereas the human gut contains tens of trillions of bacteria, there are only between 100 million and a few hundred billion bacteria in a typical serving of yogurt or a microbe-filled pill. Last year a team of scientists at the University of Copenhagen published a review of seven randomized, placebo-controlled trials (the most scientifically rigorous types of studies researchers know how to conduct) investigating whether probiotic supplements—including biscuits, milk-based drinks and capsules—change the diversity of bacteria in fecal samples. Only one study—of 34 healthy volunteers—found a statistically significant change, and there was no indication that it provided a clinical benefit….
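
To put the two figures quoted above side by side, here is a rough order-of-magnitude comparison (my arithmetic, not the article's; the dose is simply the high end of the range the author gives):

```python
# Back-of-the-envelope comparison: one probiotic serving vs. the resident gut population.
# Both figures come from the excerpt above; the dose is the high end of "a few hundred billion."
gut_bacteria = 39e12       # ~39 trillion bacteria in the human gastrointestinal system
probiotic_dose = 3e11      # ~300 billion bacteria, a generous single serving
fraction = probiotic_dose / gut_bacteria
print(f"A large probiotic dose is about {fraction:.1%} of the resident population")  # ~0.8%
```

Even at the high end, a serving amounts to well under one percent of the bacteria already present, which is consistent with the article's point that a single serving is unlikely to shift the overall composition.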

      Despite a growing sense that probiotics do not offer anything of substance to individuals who are already healthy, researchers have documented some benefits for people with certain conditions.

      In the past five years, for example, several combined analyses of dozens of studies have concluded that probiotics may help prevent some common side effects of treatment with antibiotics. Whenever physicians prescribe these medications, they know they stand a good chance of annihilating entire communities of beneficial bacteria in the intestine, along with whatever problem-causing microbes they are trying to dispel. Normally the body just needs to grab a few bacteria from the environment to reestablish a healthy microbiome. But sometimes the emptied niches get filled up with harmful bacteria that secrete toxins, causing inflammation in the intestine and triggering diarrhea. Adding yogurt or other probiotics—especially the kinds that contain Lactobacillus—during and after a course of antibiotics seems to decrease the chances of subsequently developing these opportunistic infections….

      Probiotics also seem to ameliorate irritable bowel syndrome, a chronic disease characterized by abdominal pain, bloating, and frequent diarrhea or constipation (or a mix of the two)…

      …Put another way, treatments for microbe-related disorders are most successful when they work in tandem with the human body’s many microscopic citizens, not just against them.

Tapping the Trash

[These excerpts are from an article by Michael E. Webber in the July 2017 issue of Scientific American.]

      On December 20, 2015, a mountain of urban refuse collapsed in Shenzhen, China, killing at least 69 people and destroying dozens of buildings. The disaster brought to life the towers of waste depicted in the 2008 dystopian children’s movie WALL-E, which portrayed the horrible yet real idea that our trash could pile up uncontrollably, squeezing us out of our habitat. A powerful way to transform an existing city into a sustainable one—a city that preserves the earth rather than ruining it—is to reduce all the waste streams and then use what remains as a resource. Waste from one process becomes raw material for another.

      Many people continue to migrate to urban centers worldwide, which puts cities in a prime position to solve global resource problems. Mayors are taking more responsibility for designing solutions simply because they have to, especially in countries where national enthusiasm for tackling environmental issues has cooled off. International climate agreements forged in Paris in December 2015 also acknowledged a central role for cities. More than 1,000 mayors flocked to the French capital during the talks to share their pledges to reduce emissions. Changing building codes and investing in energy efficiency are just two starting points that many city leaders said they could initiate much more quickly than national governments.

      It makes sense for cities to step up. Some of them—New York City, Mexico City, Beijing—house more people than entire countries do. And urban landscapes are where the challenges of managing our lives come crashing together in concentrated form. Cities can lead because they can quickly scale up solutions and because they are living laboratories for improving quality of life without using up the earth’s resources, polluting its air and water, and harming human health in the process.

      Cities are rife with wasted energy, wasted carbon dioxide, wasted food, wasted water, wasted space and wasted time. Reducing each waste stream and managing it as a resource—rather than a cost—can solve multiple problems simultaneously, creating a more sustainable future for billions of people….

      One obvious place to start reducing waste is leaky water pipes. A staggering 10 to 40 percent of a city’s water is typically lost in pipes. And because the municipality has cleaned that water and powered pumps to move it, the leaks throw away energy, too.

      Energy consumption itself is incredibly wasteful. More than half the energy a city consumes is released as waste heat from smokestacks, tailpipes, and the backs of heaters, air conditioners and appliances. Making all that equipment more efficient reduces how much energy we need to produce, distribute and clean up.

      Refuse is another waste stream to consolidate. The U.S. generates more than four pounds of solid waste per person every day. Despite efforts to compost, recycle or incinerate some of it, a little more than half is still dumped in landfills….

      Once cities reduce waste streams, they should use waste from one urban process as a resource for another. This arrangement is rare, but compelling projects are rising. Modern waste-to-energy systems, such as one in Zurich, burn trash cleanly, and some, including one in Palm Beach, Fla., recover more than 95 percent of the metals in the gritty ash that is left by the combustion….

      Municipalities also need to help residents become smarter citizens because each individual makes resource decisions every time he or she buys a product or flips a switch. Access to education and data will be paramount. Connecting those citizens also requires collaboration and neighborly interactions: parks, playgrounds, shared spaces, schools, and religious and community centers—all of which were central tenets of centuries-old designs for thriving cities. The more modern and smart our cities become, the more we might need these old-world elements to keep us together.

Raise Alcohol Taxes, Reduce Violence

[These excerpts are from an article by Kunmi Sobowale in the July 2017 issue of Scientific American.]

      …alcohol is a common instigator of violence against others, as well as harm to oneself.

      This link between alcohol and violence has been shown in multiple countries. In 1998 the Bureau of Justice Statistics reported that in the U.S., two thirds of violent attacks on intimate partners occurred in the context of alcohol abuse. Drinking increases the perpetration of physical and sexual violence. Alcohol use also reportedly increases the severity of violent assaults. Although drinking alcohol does not always lead to violence and is not a prerequisite for violence to occur, the link between alcohol and violence is undeniable.

      The victims are overwhelmingly women. But children are also harmed. Parents who drink heavily are more likely to physically abuse their child. Youngsters who live in neighborhoods with more bars or liquor stores are more likely to be maltreated….

      Violent, drunken men fall victim, too. They are as likely to die from alcohol-related firearm incidents as from drunk-driving accidents. For all the effort put into preventing drunk driving, we have utterly failed to appreciate that being intoxicated while in possession of a firearm is an equally dangerous situation. Nevertheless, many states permit customers to carry firearms into establishments that serve alcohol.

      …Suicide attempts are often impulsive acts. When sober, many patients regret these efforts to take their own life. Unfortunately, alcohol intoxication increases the risk that people will attempt suicide with a firearm, and because guns are the most lethal suicide method in the U.S., it is often too late for regrets….

      Compared with other approaches to violence prevention, higher taxes on alcohol seem more politically feasible—certainly they will get more support than gun-control measures. Taxes are more effective than most other alcohol-consumption interventions, and they garner revenue for local governments. States can use that money to support programs that aid victims of violence. Taxation also drives down youth drinking, which, in turn, lowers the chance that young people will grow into heavy drinkers. One common argument against taxing alcohol is that it disproportionately affects poor people—but a recent study in the journal Preventing Chronic Disease suggests that is not true. Moreover, in general, evidence suggests that less alcohol access does not lead people to use other drugs.

      If policy makers are serious about violence prevention—to say nothing of reducing car and other accidents—they need to reduce alcohol use. Taxation is a simple and powerful way to do so.

Planned Parenthood Remains a Vital Link in Health System

[These excerpts are from an editorial in the March 13, 2017, issue of the Portland Press Herald in Augusta, ME.]

      A woman who wants to have two children will spend about three years of her life pregnant, trying to get pregnant or postpartum. She will also spend three decades of her life trying to avoid an unintended pregnancy – and that’s true whether she votes Democratic, Republican, none of the above or not at all.

      Since family planning services are a pillar of any serious public health program, it’s strange to see Republicans in Washington insist that making them harder to get is essential to their idea of health care reform….

      About half of all pregnancies are unintended, but that neat statistic does not provide a true picture.

      Poor women are five times more likely to find themselves in that situation than women who are more well off, largely because of the way that we ration health care in this country. A woman with private insurance who gets regular checkups is much more likely to have birth control and family planning counseling than one who gets medical care only when she’s sick.

      Planned Parenthood works with anyone who walks through their doors, including people with private health insurance, Medicaid or no health insurance at all. It provides abortions – a constitutional right – but it also delivers the kind of care that makes abortions unnecessary….

A Composite Window into Human History

[These excerpts are from an article by Niels N. Johansen, Greger Larson, David J. Meltzer and Marc Vander Linden in the June 16, 2017, issue of Science.]

      Most obviously, there is far more to human history than our biology, especially over the past ~100,000 years, during which culture has played an increasingly dominant role in human evolution. Many of archaeology’s “grand challenges” concern the understanding of human cultures and cultural change. Although aDNA [ancient DNA] research can contribute to addressing these challenges, its potential has yet to be fully realized. The main obstacles are fourfold: the problem of scale; the challenge of aligning different analytical units; the difficulty of discerning causal connections between population and cultural changes; and the frequent lack of genuine collaboration between fields.

      Integration of aDNA and archaeology is arguably least complicated at the smallest spatial scale….

      A central problem is the methodological challenge of aligning very different types of evidence. On the one hand, past populations identified by ancient genomes are biological units whose members were related by degrees that can be precisely measured. These populations can be complex entities: Differences in class, religion, language, and culture may (or may not) limit mating and gene flow. On the other hand, archaeological cultures are classificatory labels based on material culture that can vary widely over space and time and even within cultural units. We cannot assume that individuals who shared material culture traits were part of the same biological population….

      Geneticists are often keen to use aDNA to understand the causes and mechanisms of demographic and cultural change. But archaeologists long ago abandoned the idea that migrations or encounters between populations are a necessary or sufficient explanation of cultural change….

      The days when virtually every aDNA study revealed facets of heretofore unknown population histories are ending. And by now, the finding that human history is a story of dispersal and admixture is no longer revelatory. We have an opportunity to jointly explore human history more deeply, and we should make the most of it.

Pittsburgh Myth, Paris Reality

[These excerpts are from an editorial by Patrick Gallagher in the June 16, 2017, issue of Science.]

      When announcing his decision to withdraw the United States from the Paris Agreement, President Trump reminded the world that, “I was elected to represent the citizens of Pittsburgh, not Paris.” In doing so, he repeated a tired trope: that Pittsburgh is a rusty urban relic—a manufacturing city of steel that has fallen on hard times, held back by unfair global competition and onerous environmental regulation. But such a nostalgic version of Pittsburgh, and of many other communities across the country, is a myth. If the president truly wants to represent the interests of Americans, he should learn from the real histories of these regions and promote economic and environmental progress through research, education, and innovation.

      Biographer James Parton, visiting Pittsburgh in its manufacturing heyday, described the smoky, sooty landscape as “hell with the lid taken off.” By the early 1940s, after decades of leading the nation in steel production, the city was paying a heavy price for its economic success. Industry leaders, realizing that environmental catastrophe would be bad for business, partnered with local government in one of the country’s first clean air initiatives.

      Environmental regulations did not drive the region’s coal industry—long the engine of manufacturing—to collapse. That industry’s fate is more intricately tied to the availability of low-cost natural gas, whose rise—including the shale gas boom—was buoyed by U.S. research efforts during the oil embargo of the 1970s. A lack of innovation and investment was the true linchpin of Pittsburgh’s economic distress. Its aging and inefficient factories were unable to compete with foreign firms. The city lost nearly half of its population, unemployment peaked at 17% in 1983, and Pittsburgh became an economic shadow of its former self.

      The region clawed back from its economic breakdown by refocusing on technology innovation fueled by federally funded research at its major universities, especially Carnegie Mellon University and the University of Pittsburgh. Today, Pittsburgh is home to one of the most vibrant technology and health care markets in the country. It is teeming with startup companies and is an internationally recognized research leader in medicine, robotics, advanced manufacturing, big data, and autonomous systems. It is no accident that the top of the city’s tallest building now advertises the University of Pittsburgh Medical Center—not U.S. Steel.

      Such history, seen in Pittsburgh and elsewhere across the country, reminds us that economic and environmental narratives are intertwined….

      Instead of shielding domestic businesses from this opportunity, the United States should be increasing its investments in climate- and energy-related research and supporting the most innovative companies. Training and education should be bolstered so that all Americans can thrive in this rapidly changing economy. The draconian cuts proposed in these areas by the Trump administration augur a less-competitive economic future—even if environmental restrictions are lifted.

      The real story of Pittsburgh, and the real story of the United States, points to an economic approach to the challenge of climate change that is drastically different from that voiced by the president. It’s a story that says, from a place of hard-earned experience: Be the innovation leader.

The Messy Truth about Weight Loss

[These excerpts are from an article by Susan B. Roberts and Sai Krupa Das in the June 2017 issue of Scientific American.]

      …Obesity increases the risk of all the major noncommunicable diseases, including type 2 diabetes, heart disease, stroke and several types of cancers—enough to decrease a person’s potential life span by as much as 14 years. Research shows that excessive weight also interferes with our body's ability to fight off infections, sleep deeply and age well, among other problems. It is long past time for us to understand how to combat this epidemic.

      …For decades health experts figured that it did not matter too much how you created that deficit: as long as you got the right nutrients, you could safely lose weight with any combination of increased exercise and reduced consumption of food. But this assumption does not take into account the complexities of human physiology and psychology and so quickly falls apart when tested against real-world experience….

      The formula for maintaining a stable weight—consume no more calories than the body needs for warmth, basic functioning and physical activity—is just another way of saying that the first law of thermodynamics still holds for biological systems: the total amount of energy taken into a closed system (in this case, the body) must equal the total amount expended or stored. But there is nothing in that law that requires the body to use all sources of food with the same efficiency. Which brings us to the issue of whether all calories contribute equally to weight gain….
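
In symbols, the energy balance the authors describe is simply (a restatement of the excerpt, not a formula from the article):

```latex
E_{\text{intake}} = E_{\text{expended}} + \Delta E_{\text{stored}},
\qquad \text{so weight is stable only when } \Delta E_{\text{stored}} = 0 .
```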

      Food does not come to us as pure protein, carbohydrate or fat, of course. Salmon consists of protein and fat. Apples contain carbohydrates and fiber. Milk contains fat, protein, carbohydrates and a lot of water. It turns out that a food’s physical properties and composition play a greater role in how completely the body can digest and absorb calories than investigators had anticipated.

      In 2012, for example, David Baer of the U.S. Department of Agriculture’s Beltsville Human Nutrition Research Center in Maryland proved that the body is unable to extract all the calories that are indicated on a nutritional label from some nuts, depending on how they are processed. Raw whole almonds, for example, are harder to digest than Atwater would have predicted, so we get about a third fewer calories from them, whereas we can metabolize all the calories found in almond butter.

      Whole grains, oats and high-fiber cereals are also digested less efficiently than we used to think….

      One of the most common pieces of advice that people get when they are trying to lose weight is that they should exercise more. And physical activity certainly helps to keep your heart, brain, bones and other body parts in good working order. But detailed measurements conducted in our lab and others show that physical activity is responsible for only about one third of total energy expenditure (assuming a stable body weight). The body’s basal metabolism—that is, the energy it needs to maintain itself while at rest—makes up the other two thirds. Intriguingly, the areas of the body with the greatest energy requirement are the brain and certain internal organs, such as the heart and kidneys—not the skeletal muscle, although strength training can boost basal metabolism modestly.

      In addition, as anyone who has ever reached middle age understands all too well, metabolism changes over time. Older people need fewer calories to keep their body running than they did in their youth. Metabolic rate also differs among individuals….The inescapable conclusion: when it comes to metabolic rate—and your ability to lose or maintain your weight—parentage makes a difference.

      But let us suppose that you have started to lose some weight. Naturally, your metabolic rate and calorie requirements must fall as your body becomes smaller, meaning that weight loss will slow down. That is just a matter of physics: the first law of thermodynamics still applies. But the human body is also subject to the pressures of evolution, which would have favored those who could hold on to their energy stores by becoming even more fuel-efficient. And indeed, studies show that metabolic rate drops somewhat more than expected during active weight loss. Once a person’s weight has stabilized at a new, lower level, exercise can help in weight management by compensating for the reduced energy requirement of a smaller body….

      In other words, the role of hunger has long been to keep us alive. Thus, there is no point in fighting it directly. Instead one of the keys to successful weight management is to prevent hunger and temptation from happening in the first place….

Knowledge Is Infrastructure

[These excerpts are from an article by Robbert Dijkgraaf in the June 2017 issue of Scientific American.]

      Curiosity-driven basic research has brought truly revolutionary transformations, such as the rapid growth of computer-based intelligence and the discovery of the genetic basis of life. Albert Einstein’s century-old theory of relativity is used every day in our GPS devices. Perhaps the best U.S. government investment ever was the $4.5-million grant from the National Science Foundation that led to the Google search algorithm—an investment that has multiplied by more than 100,000 times.

      Basic research not only radically alters our deep understanding of the world, it also leads to new tools and techniques that spread throughout society, such as the World Wide Web, originally developed for particle physicists to foster scientific collaboration. It trains the sharpest minds on the toughest challenges, and its products are widely used by industry and society. No one can exclusively capture its rewards—it is a truly public good….

      It is human to focus on necessities in times of stress. But investing in basic research, just like saving for retirement, is a prerequisite for ensuring welfare, innovation and societal progress. Long-term investments in basic research are crucial and lead to an even higher goal: the global benefits of embracing the scientific culture of accuracy, truth seeking, critical questioning and dialogue, healthy skepticism, respect for facts and uncertainties, and wonder at the richness of nature and the human spirit.

Science without Walls

[These excerpts are from an article by the editors in the June 2017 issue of Scientific American.]

      The U.S. appears to be plunging headlong into a new era of isolationism. The White House wants to pull out of international agreements, including the Paris climate deal and the North American Free Trade Agreement. It has issued executive orders trying to halt or slow the flow of refugees and immigrants to the nation.

      This is bad for the U.S. and terrible for hundreds of thousands of desperate people across the planet. And it will strangle science. The choke hold will leave us more vulnerable to emerging, deadly viruses and will hamper efforts to explore space and control global threats such as climate change.

      Research depends on ideas shared across political borders—including among countries in conflict. Even as the cold war was raging, hostility between the U.S. and the Soviet Union was put aside when American medical researcher Albert B. Sabin and his Soviet counterparts tested a live-virus, oral polio vaccine in the U.S.S.R. That successful trial provided the scientific proof needed for the vaccine's use around the world and ultimately helped to eradicate polio in most countries….

      Louis Pasteur once declared that “science knows no country, because knowledge belongs to humanity, and is the torch which illuminates the world.” Nations have repeatedly seen the wisdom of his words…

      In recent years the U.S. has taken some crucial steps to strengthen our science diplomacy: In 2009 President Barack Obama spoke in Cairo about working with scientists in the Muslim world to develop novel sources of energy, create green jobs, digitize records, provide clean water and grow new crops….

      Yet the future of the envoy program under President Donald Trump remains unclear. Trump’s travel bans have thrown researchers’ plans into disarray—making foreign scientists and scholars question whether they should attempt to come to the U.S. for jobs or conferences and raising doubts about whether foreign scientists working here can risk visiting relatives in Muslim-majority countries, lest they be prevented from returning.

      That is unfortunate because better science—and dialogue about science—benefits us all….

      Let’s resist the urge to turn inward and isolate ourselves. Instead we must continue to forge strong ties worldwide, using science as a diplomatic wedge. We gain far more from these partnerships than we risk. Weakening them will hurt us all.

History and Tyranny

[This excerpt is from an article by Timothy Snyder in the Summer 2017 issue of American Educator.]

      History does not repeat, but it does instruct. As the Founding Fathers debated our Constitution, they took instruction from the history they knew. Concerned that the democratic republic they envisioned would collapse, they contemplated the descent of ancient democracies and republics into oligarchies and empires. As they knew, Aristotle warned that inequality brought instability, while Plato believed that demagogues exploited free speech to install themselves as tyrants. In founding a democratic republic upon law and establishing a system of checks and balances, the Founding Fathers sought to avoid the evil that they, like the ancient philosophers, called tyranny. They had in mind the usurpation of power by a single individual or group, or the circumvention of law by rulers for their own benefit. Much of the succeeding political debate in the United States has concerned the problem of tyranny within American society: over slaves and women, for example.

      It is thus a primary American tradition to consider history when our political order seems imperiled. If we worry today that the American experiment is threatened by tyranny, we can follow the example of the Founding Fathers and contemplate the history of other democracies and republics. The good news is that we can draw upon more recent and relevant examples than ancient Greece and Rome. The bad news is that the history of modern democracy is also one of decline and fall. Since the American colonies declared their independence from a British monarchy that the Founders deemed “tyrannical,” European history has seen three major democratic moments: after the First World War in 1918, after the Second World War in 1945, and after the end of Communism in 1989. Many of the democracies founded at these junctures failed, in circumstances that in some important respects resemble our own.

      History can familiarize, and it can warn. In the late 19th century, just as in the late 20th century, the expansion of global trade generated expectations of progress. In the early 20th century, as in the early 21st, these hopes were challenged by new visions of mass politics in which a leader or a party claimed to directly represent the will of the people. European democracies collapsed into right-wing authoritarianism and fascism in the 1920s and ‘30s. The communist Soviet Union, established in 1922, extended its model into Europe in the 1940s. The European history of the 20th century shows us that societies can break, democracies can fall, ethics can collapse, and ordinary men can find themselves standing over death pits with guns in their hands. It would serve us well today to understand why.

      Both fascism and communism were responses to globalization: to the real and perceived inequalities it created, and the apparent helplessness of the democracies in addressing them. Fascists rejected reason in the name of will, denying objective truth in favor of a glorious myth articulated by leaders who claimed to give voice to the people. They put a face on globalization, arguing that its complex challenges were the result of a conspiracy against the nation. Fascists ruled for a decade or two, leaving behind an intact intellectual legacy that grows more relevant by the day. Communists ruled for longer, for nearly seven decades in the Soviet Union, and more than four decades in much of Eastern Europe. They proposed rule by a disciplined party elite with a monopoly on reason that would guide society toward a certain future according to supposedly fixed laws of history.

      We might be tempted to think that our democratic heritage automatically protects us from such threats. This is a misguided reflex. In fact, the precedent set by the Founders demands that we examine history to understand the deep sources of tyranny, and to consider the proper responses to it. Americans today are no wiser than the Europeans who saw democracy yield to fascism, Nazism, or communism in the 20th century. Our one advantage is that we might learn from the experience. Now is a good time to do so…

Making School Lunch Climate-Friendly

[This article by Kari Hamerschlag and Christopher D. Cook is in the Spring 2017 issue of the Friends of the Earth Newsmagazine.]

      Imagine if kids’ school lunches were good for their health and for the planet. What if school cafeterias could cut greenhouse gas emissions while saving money and improving kids' health? A new study by Friends of the Earth shows it's not such a pie-in-the-sky idea - in fact, it’s already happening.

      Teaming up with Oakland public schools, Friends of the Earth produced a study with inspiring results: one of California’s largest school districts reduced its impact on climate change, and saved money, by serving less and better meat and dairy.

      Industrial meat production and meat-centered diets generate a major portion of climate-harming greenhouse gases, and guzzle huge amounts of water.

      By reducing the meat and cheese in lunches and serving more nutritious, plant-based meals, the Oakland Unified School District significantly reduced its carbon and water footprint.

      Over a two-year period, Oakland’s school food service cut its meals’ carbon footprint by 14 percent, and reduced water use by six percent. Meanwhile, the district saved money and was able to purchase better quality and more sustainable meat from organic, grass-fed dairy cows.

      If every California K-12 school food service matched Oakland’s carbon footprint reductions, they would reduce their footprint by 80 million kg of CO2 emissions - equivalent to Californians driving almost 200 million fewer miles per year.
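
As a quick check of that equivalence (my arithmetic, not the study's; the emission factor is an assumed typical value of roughly 0.4 kg of CO2 per passenger-vehicle mile):

```python
# Rough check of the CO2-to-miles equivalence claimed above.
# ASSUMPTION: ~0.4 kg CO2 per passenger-vehicle mile (a typical published figure, not from the study).
co2_saved_kg = 80_000_000        # 80 million kg of CO2 per year
kg_co2_per_mile = 0.4            # assumed passenger-vehicle emission factor
miles = co2_saved_kg / kg_co2_per_mile
print(f"Equivalent to about {miles:,.0f} vehicle miles per year")  # ~200,000,000 miles
```

At that assumed rate the savings work out to roughly 200 million miles of driving per year, in line with the article's figure.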

      Kids can be picky eaters. So it’s even more impressive that Oakland's effort increased student satisfaction by serving local, fresh and tasty meals. Equally important, these plant-based meals met or exceeded USDA meal pattern requirements. While reducing meat and dairy consumption, OUSD increased its purchases of fruits, vegetables and legumes by approximately 10 percent.

      Other school districts across the country have followed Oakland’s lead. More than 200 school districts have adopted Meatless Mondays, and sustainable food procurement standards like the Good Food Purchasing Program emphasize the importance of reducing animal foods. In an effort to promote less and better meat, Friends of the Earth has teamed up with the Humane Society and other organizations to create www.forwardfood.org, a website offering tools, case studies and recipes.

      At a time when people are hungry for positive solutions, the Oakland school lunch story provides an inspiring model of sustainability across California and the nation. Check out our full study at http://bit.ly/SchoolFoodFootprint - and spread the good word!

Oldest Members of our Species Discovered in Morocco

[This excerpt is from an article by Ann Gibbons in the June 9, 2017, issue of Science.]

      For decades, researchers seeking the origin of our species have scoured the Great Rift Valley of East Africa. Now, their quest has taken an unexpected detour west to Morocco: Researchers have redated a long-overlooked skull from a cave called Jebel Irhoud to a startling 300,000 years ago, and unearthed new fossils and stone tools. The result is the oldest well-dated evidence of Homo sapiens, pushing back the appearance of our kind by 100,000 years….

      The discoveries, reported in Nature, suggest that our species came into the world face-first, evolving modern facial traits while the back of the skull remained elongated like those of archaic humans. The findings also suggest that the earliest chapters of our species’s story may have played out across the African continent….

The Dishonest HONEST Act

[This excerpt is from an editorial by David Michaels and Thomas Burke in the June 9, 2017, issue of Science.]

      The Trump administration aims to eliminate many regulations and make it more difficult to adopt new ones. More subtle and dangerous are attempts in Congress to undermine public health and environmental protections by limiting the use of scientific evidence under the guise of increased transparency. This effort, which as envisioned by U.S. Environmental Protection Agency (EPA) leadership would greatly reduce the amount of science used in decision-making, undermines the credibility and application of scientific evidence, weakens the scientific enterprise, and imperils public and environmental health.

      The Honest and Open New EPA Science Treatment (HONEST) Act, in the Senate after passing the House of Representatives in March, would prohibit the EPA from using studies for agency decision-making unless raw data, computer codes, and virtually everything used by scientists to conduct the study are provided to the agency and made publicly available online. Transparency and reproducibility are long-standing priorities in science, and we welcome good-faith efforts to evaluate scientific evidence for use in public policy. But on these issues, the Act is dishonest—an attempt by politicians to override scientific judgment and dictate narrow standards by which science is deemed valuable for policy. It imposes burdens that will detract from scientists’ ability to do research and to have it influence decision-making, all aimed at bringing the process to a standstill, minimizing the role of science, and limiting regulations.

      Federal agencies must already adhere to strict standards of transparency and quality while considering a broad body of scientific evidence, and uncertainties therein. Polluters and manufacturers of dangerous products have taken a page from the tobacco industry playbook, magnifying those uncertainties to prolong the review of scientific data, slow the regulatory process, and evade liability. By writing narrow data standards into law, the Act will provide another avenue for such challenges to regulations and to the underlying science.

When Dinosaurs Went Bad

[These excerpts are from an article by Gemma Tarlach in the May 2017 issue of Discover.]

      In 1842, English anatomist Richard Owen proposed the term dinosauria for the strange animal fossils he and colleagues had begun to study. Owen drew from ancient Greek to create the word: deinos, meaning “terrible” in the awesome-to-behold sense, and sauros, “reptile” or “lizard.”…

      “Dinosaurs were very alien, very different,” says University of Leicester paleontologist David Unwin. “[Paleontologists] tried to force them to fit into paradigms that didn’t exist then.”…

      “Remember the origin of the word dinosaur predates the theory of evolution,” Lamanna says. “Ideas about animal [species] being transitional had yet to materialize. Now we know that dinosaurs are sort of bizarre croc-birds, but back then the concept would have been very hard to imagine.”

      Early on, a few great minds did suspect that science might be getting dinosaurs wrong. Comparative anatomist Thomas Henry Huxley, for example, noticed similarities in the body plans of dinosaurs and birds as early as the 1860s. He thought there might be an indirect evolutionary relationship, though he never claimed birds descended from dinosaurs.

      But Huxley — often called Darwin’s bulldog for his staunch support of evolution — couldn’t rally others to the idea. It would be more than a century before the dinosaur-bird connection gained traction…

      Research waned during and immediately after the world wars, but at the same time, dinosaurs’ sheer weirdness made for perfect escapist fare in movies such as 1933’s King Kong. In fact, Hollywood and pop culture’s embrace of dinosaurs may have set research back….

      In 1969, he laid out a case for Deinonychus as an “active and very agile predator” that was potentially warmblooded. In subsequent research, Ostrom went a big step further: He compared his famous find with specimens of the earliest-known bird, Archaeopteryx, and made the link Huxley had stopped short of a century earlier: Birds evolved from dinosaurs.

      Ostrom’s theory, bolstered by additional finds, reignited interest in the field….

      In 1995, a farmer in northeastern China found a feathered dinosaur. “It was the final piece of the puzzle,” Lamanna says. “The skeptics had to admit that dinosaurs were progenitors of birds because feathers are such a uniquely avian characteristic. It was one of those rare moments in science where the answer is so clear it's like getting hit over the head with a two-by-four.”

      …Unwin agrees, adding that digitization allows paleontologists to model movement in a way that would have been impossible a few decades ago. “We might not be able to tell how T. rex moved yet, but increasingly we can tell you how T. rex didn’t move,” he says.

Rosalind Franklin

[These excerpts are from an article by Carl Engelking in the May 2017 issue of Discover.]

      In 1962, Francis Crick, James Watson and Maurice Wilkins shared the Nobel Prize for describing DNA’s double-helix structure — arguably the greatest discovery of the 20th century. But no one mentioned Rosalind Franklin — arguably the greatest snub of the 20th century….

      Franklin was also a brilliant chemist and a master of X-ray crystallography, an imaging technique that reveals the molecular structure of matter based on the pattern of scattered X-ray beams. Her early research into the microstructures of carbon and graphite is still cited, but her work with DNA was the most significant and it may have won three men a Nobel.

      While at King’s College London in the early 1950s, Franklin was close to proving the double-helix theory after capturing “photograph #51,” considered the finest image of a DNA molecule at the time. But then both Watson and Crick got a peek at Franklin’s work: Her colleague, Wilkins, showed Watson photograph #51, and Max Perutz, a member of King’s Medical Research Council, handed Crick unpublished data from a report Franklin submitted to the council. In 1953, Watson and Crick published their iconic paper in Nature, loosely citing Franklin, whose “supporting” study also appeared in that issue.

      Franklin left King's in 1953 in a long-planned move to join J.D. Bernal’s lab at Birkbeck College, where she discovered the structure of the tobacco mosaic virus. But in 1956, in the prime of her career, she developed ovarian cancer—perhaps due to her extensive X-ray work. Franklin continued working in the lab until her death in 1958 at age 37….

Carl Linnaeus

[This excerpt is from an article by Gemma Tarlach in the May 2017 issue of Discover.]

      It started in Sweden: a functional, user-friendly innovation that took over the world, bringing order to chaos. No, not an Ikea closet organizer. We’re talking about the binomial nomenclature system, which has given us clarity and a common language, devised by Carl Linnaeus.

      Linnaeus, born in southern Sweden in 1707, was an “intensely practical” man, according to Sandra Knapp, a botanist and taxonomist at the Natural History Museum in London. He lived at a time when formal scientific training was scant and there was no system for referring to living things. Plants and animals had common names, which varied from one location and language to the next, and scientific “phrase names,” cumbersome Latin descriptions that could run several paragraphs.

      The 18th century was also a time when European explorers were fanning out across the globe, finding ever more plants and animals new to science.

      “There got to be more and more things that needed to be described, and the names were becoming more and more complex,” says Knapp. Linnaeus, a botanist with a talent for noticing details, first used what he called “trivial names” in the margins of his 1753 book Species Plantarum. He intended the simple Latin two-word construction for each plant as a kind of shorthand, an easy way to remember what it was.

      “It reflected the adjective-noun structure in languages all over the world,” Knapp says of the trivial names, which today we know as genus and species. The names moved quickly from the margins of a single book to the center of botany, and then all of biology. Linnaeus started a revolution, but it was an unintentional one.

      Today we regard Linnaeus as the father of taxonomy, which is used to sort the entire living world into evolutionary hierarchies, or family trees. But the systematic Swede was mostly interested in naming things rather than ordering them, an emphasis that arrived the next century with Charles Darwin.

Pythagoras

[This excerpt is from an article by Mark Barna in the May 2017 issue of Discover.]

      Pythagoras, a sixth-century B.C. Greek philosopher and mathematician, is credited with inventing his namesake theorem and various proofs. But forget about the certainty.

      Babylonian and Egyptian mathematicians used the equation centuries before Pythagoras, says Karen Eva Carr, a retired historian at Portland State University, though many scholars leave open the possibility he developed the first proof. Moreover, Pythagoras’ students often attributed their own mathematical discoveries to their master, making it impossible to untangle who invented what.

      Even so, we know enough to suspect Pythagoras was one of the great mathematicians of antiquity. His influence was widespread and lasting. Theoretical physicist James Overduin sees an unbroken chain from Pythagoras to Albert Einstein, whose work on curving space and time Overduin calls “physics as geometry!”

      Even today, the sea of numerical formulas typically on physicists’ blackboards suggests the Pythagorean maxim “All is number,” an implication that everything can be explained, organized and, in many cases, predicted through mathematics….

Ada Lovelace

[This brief article by Lacy Schley was in the May 2017 issue of Discover.]

      To say she was ahead of her time would be an understatement. Ada Lovelace earned her place in history as the first computer programmer — a full century before today’s computers emerged.

      She couldn’t have done it without British mathematician, inventor and engineer Charles Babbage. Their collaboration started in the early 1830s, when Lovelace was just 17 and still known by her maiden name of Byron. (She was the only legitimate child of poet Lord Byron.) Babbage had drawn up plans for an elaborate machine he called the Difference Engine — essentially, a giant mechanical calculator. In the middle of his work on it, the teenage Lovelace met Babbage at a party.

      There, he showed off an incomplete prototype of his machine. According to a family friend who was there: “While other visitors gazed at the working of this beautiful instrument with the sort of expression ... that some savages are said to have shown on first seeing a looking-glass or hearing a gun ... Miss Byron, young as she was, understood its working, and saw the great beauty of the invention.”

      It was mathematical obsession at first sight. The two struck up a working relationship and eventual close friendship that would last until Lovelace’s death in 1852, when she was only 36. Babbage abandoned his Difference Engine to brainstorm a new Analytical Engine — in theory, capable of more complex number crunching — but it was Lovelace who saw that engine’s true potential.

      The Analytical Engine was more than a calculator — its intricate mechanisms and the fact that the user fed it commands via a punch card meant the engine could perform nearly any mathematical task ordered. Lovelace even wrote instructions for solving a complex math problem, should the machine ever see the light of day. Many historians would later deem those instructions the first computer program, and Lovelace the first programmer. While she led a raucous life of gambling and scandal, it's her work in “poetical science,” as she called it, that defines her legacy.

      In the words of Babbage himself, Lovelace was an “enchantress who has thrown her magical spell around the most abstract of Sciences and has grasped it with a force which few masculine intellects ... could have exerted over it.”

Galileo Galilei

[These excerpts are from an article by Eric Betz in the May 2017 issue of Discover.]

      Around Dec. 1, 1609, Italian mathematician Galileo Galilei pointed a telescope at the moon and created modern astronomy. His subsequent observations turned up four satellites — massive moons orbiting Jupiter — and showed that the Milky Way's murky light shines from many dim stars. Galileo also found sunspots upon the surface of our star and discovered the phases of Venus, which confirmed that the planet circles the sun inside Earth’s own orbit.

      “I give infinite thanks to God, who has been pleased to make me the first observer of marvelous things,” he wrote.

      The 45-year-old Galileo didn’t invent the telescope, and he wasn't the first to point one at the sky. But his conclusions changed history. Galileo knew he'd found proof for the theories of Polish astronomer Nicolaus Copernicus (1473-1543), who had launched the Scientific Revolution with his sun-centered solar system model.

      Galileo’s work wasn't all staring at the sky, either: His studies of falling bodies showed that objects dropped at the same time will hit the ground at the same time, barring air resistance — gravity doesn't depend on their size. And his law of inertia allowed for Earth itself to rotate.

      But all this heavenly motion contradicted Roman Catholic doctrine, which was based in Aristotle’s incorrect views of the cosmos. The church declared the sun-centered model heretical, and an inquisition in 1616 ordered Galileo to stop promoting these views. The real blow from religious officials came in 1633, after Galileo published a comparison of the Copernican (sun-centered) and Ptolemaic (Earth-centered) systems that made the latter’s believers look foolish. They placed him under house arrest until his death in 1642, the same year Isaac Newton was born.

Charles Darwin

[These excerpts are from an article by Nathaniel Scharping in the May 2017 issue of Discover.]

      Charles Darwin would not have been anyone’s first guess for a revolutionary scientist.

      As a young man, his main interests were collecting beetles and studying geology in the countryside, occasionally skipping out on his classes at the University of Edinburgh Medical School to do so. It was a chance invitation in 1831 to join a journey around the world that would make Darwin, who had once studied to become a country parson, the father of evolutionary biology.

      Aboard the HMS Beagle, between bouts of seasickness, Darwin spent his five-year trip studying and documenting geological formations and myriad habitats throughout much of the Southern Hemisphere, as well as the flora and fauna they contained.

      Darwin’s observations pushed him to a disturbing realization — the Victorian-era theories of animal origins were all wrong. Most people in Darwin’s time still adhered to creationism, the idea that a divine being was responsible for the diversity of life we find on Earth.

      Darwin’s observations implied a completely different process. He noticed small differences between members of the same species that seemed to depend upon where they lived. The finches of the Galapagos are the best-known example: From island to island, finches of the same species possessed differently shaped beaks, each adapted to the unique sources of food available on each island.

      This suggested not only that species could change — already a divisive concept back then — but also that the changes were driven purely by environmental factors, instead of divine intervention. Today, we call this natural selection….

      Through it all, the theory of evolution was never far from his mind, and the various areas of research he pursued only strengthened his convictions. Darwin slowly amassed overwhelming evidence in favor of evolution in the 20 years after his voyage.

      All of his observations and musings eventually coalesced into the tour de force that was On the Origin of Species, published in 1859 when Darwin was 50 years old. The 500-page book sold out immediately, and Darwin would go on to produce six editions, each time adding to and refining his arguments.

      In non-technical language, the book laid out a simple argument for how the wide array of Earth’s species came to be. It was based on two ideas: that species can change gradually over time, and that all species face difficulties brought on by their surroundings. From these basic observations, it stands to reason that those species best adapted to their environments will survive and those that fall short will die out….

Nikola Tesla

[These excerpts are from an article by Eric Betz in the May 2017 issue of Discover.]

      Nikola Tesla grips his hat in his hand. He points his cane toward Niagara Falls and beckons bystanders to turn their gaze to the future. This bronze Tesla — a statue on the Canadian side — stands atop an induction motor, the type of engine that drove the first hydroelectric power plant.

      We owe much of our modern electrified life to the lab experiments of the Serbian-American engineer, born in 1856 in what's now Croatia. His designs advanced alternating current at the start of the electric age and allowed utilities to send current over vast distances, powering American homes across the country. He developed the Tesla coil — a high-voltage transformer — and techniques to transmit power wirelessly. Cellphone makers (and others) are just now utilizing the potential of this idea.

      Tesla is perhaps best known for his eccentric genius. He once proposed a system of towers that he believed could pull energy from the environment and transmit signals and electricity around the world, wirelessly. But his theories were unsound, and the project was never completed. He also claimed he had invented a “death ray.”

      …And Tesla didn’t actually discover alternating current, as everyone thinks. It was around for decades. But his ceaseless theories, inventions and patents made Tesla a household name, rare for scientists a century ago….

Isaac Newton

[This excerpt is from an article by Bill Andrews in the May 2017 issue of Discover.]

      Isaac Newton was born on Christmas Day, 1642. Never the humble sort, he would have found the date apt: The gift to humanity and science had arrived. A sickly infant, his mere survival was an achievement. Just 23 years later, with his alma mater Cambridge University and much of England closed due to plague, Newton discovered the laws that now bear his name. (He had to invent a new kind of math along the way: calculus.) The introverted English scholar held off on publishing those findings for decades, though, and it took the Herculean efforts of friend and comet discoverer Edmund Halley to get Newton to publish. The only reason Halley knew of Newton’s work? A bet the former had with other scientists on the nature of planetary orbits. When Halley mentioned the orbital problem to him, Newton shocked his friend by giving the answer immediately, having long ago worked it out.

      Halley persuaded Newton to publish his calculations, and the result was the Philosophiae Naturalis Principia Mathematica, or just the Principia, in 1687. Not only did it describe for the first time how the planets moved through space and how projectiles on Earth traveled through the air; the Principia showed that the same fundamental force, gravity, governs both. Newton united the heavens and the Earth with his laws. Thanks to him, scientists believed they had a chance of unlocking the universe’s secrets….

Marie Curie

[These excerpts are from an article by Lacy Schley in the May 2017 issue of Discover.]

      Despite her French name, Marie Curie’s story didn’t start in France. Her road to Paris and success was a hard one, every bit as worthy of admiration as her scientific accomplishments.

      Born Maria Salomea Sklodowska in 1867 in Warsaw, Poland, she faced some daunting hurdles, both because of her gender and her family’s poverty, which stemmed from the political turmoil at the time. Her parents, deeply patriotic Poles, lost most of their money supporting their homeland in its struggle for independence from Russian, Austrian and Prussian regimes. Her father, a math and physics professor, and her mother, headmistress of a respected boarding school in Russian-occupied Warsaw, instilled in their five kids a love of learning. They also imbued them with an appreciation of Polish culture, which the Russian government discouraged.

      When Curie and her three sisters finished regular schooling, they couldn’t carry on with higher education like their brother. The local university didn’t let women enroll, and their family didn’t have the money to send them abroad. Their only options were to marry or become governesses. Curie and her sister Bronislawa found another way.

      The pair took up with a secret organization called Flying University, or sometimes Floating University. Fittingly, given the English abbreviation, the point of FU was to stick it to the Russian government and provide a pro-Polish education, in Polish — expressly forbidden in Russian-controlled Poland.

      Eventually, the sisters hatched a plan that would help them both get the higher education they so desperately wanted. Curie would work as a governess and support Bronislawa’s medical school studies. Then, Bronislawa would return the favor once she was established. Curie endured years of misery as a governess, but the plan worked. In 1891, she packed her bags and headed to Paris and her bright future.

      At the University of Paris, Curie was inspired by French physicist Henri Becquerel. In 1896, he discovered that uranium emitted something that looked an awful lot like — but not quite the same as — X-rays, which had been discovered only the year before. Intrigued, Curie decided to explore uranium and its mysterious rays as a Ph.D. thesis topic.

      Eventually, she realized whatever was producing these rays was happening at an atomic level, an important first step to discovering that atoms weren’t the smallest form of matter. It was a defining moment for what Curie would eventually call radioactivity….

      In 1903, Curie, her husband and Becquerel won the Nobel Prize in Physics for their work on radioactivity, making Curie the first woman to win a Nobel.

      …In 1911 Curie won her second Nobel Prize, this time in chemistry, for her work with polonium and radium. She remains the only person to win Nobel prizes in two different sciences….

      She died in 1934 from a type of anemia that very likely stemmed from her exposure to such extreme radiation during her career. In fact, her original notes and papers are still so radioactive that they’re kept in lead-lined boxes, and you need protective gear to view them.

The Global Warming Wild Card

[These excerpts are from an article by Varun Sivaram in the May 2017 issue of Scientific American.]

      A shimmering waterfall beckoned visitors into the India pavilion at the 2015 Paris Climate Change Conference. Inside, multimedia exhibits and a parade of panelists proclaimed that the nation’s clean energy future was fast approaching. Prime Minister Narendra Modi went even further, announcing that his country would lead a new International Solar Alliance to ramp up solar power in 120 countries. Indian officials resolved to be leaders in battling global climate change.

      I had arrived in Paris after a research trip that crisscrossed India, and I struggled to square that confident optimism with the facts I had seen on the ground: heavy reliance on coal power plants; a failing electrical grid that could not handle large additions of wind or solar electricity; and a widespread attitude that India, as a developing country, should not have to reduce its carbon emissions and should be able to grow using fossil fuels, as other major countries have done. Still, by the end of the conference, India and 194 other countries, along with the European Union, had adopted the Paris Agreement, which commits the world to limit global warming to two degrees Celsius. In November 2016 the agreement went into legal force, making each country’s pledge binding under international law.

      Despite the lofty rhetoric of India's leaders, their vision of a clean energy future is far from assured. Even though India’s pledge set ambitious targets for solar and wind power, its overall commitment to curb emissions was underwhelming. If the government just sat on its hands, emissions would rise rapidly yet stay within the sky-high limits the country set for itself in Paris.

      That would be disastrous for the world. India has one of the fastest-growing major economies on the planet, with a population expected to rise to 1.6 billion by 2040. By that time, electricity demand could quadruple. If the nation does not take drastic measures, by midcentury it could well be the largest greenhouse gas emitter (it is third now, behind China and the U.S.), locked into a fossil-fuel infrastructure that would likely ruin the world's quest to contain climate change. If it adds coal power at the rate needed to keep up with its skyrocketing demand, for example, its greenhouse gas emissions could double by 2040.

      Yet India is in some ways starting with a clean slate. Unlike the developed world, where the challenge is replacing dirty fossil-fuel infrastructure with clean energy, most of India’s infrastructure has not been built yet….

      India will not achieve a low-carbon transition alone. It will need help developing new technologies and financing their deployment. Some signs are encouraging: India has partnerships with the U.S. on clean energy research and development, with Germany on financing grid infrastructure and with multilateral development banks on deploying renewable energy.

      The scale of assistance will need to increase by at least an order of magnitude. Otherwise, India will most likely continue to install inefficient coal plants, guzzle foreign oil and struggle with rickety grids. Rather than hoping that India builds a low-carbon future, foreign leaders need to step up to help India make that choice. There is a strong financial incentive to do so: by accelerating India's energy transition, countries can open a lucrative export market for their clean energy industries. And there is a larger imperative: the fate of the planet hangs in the balance.

Don’t Pass the Weed or Say “Guns”

[These excerpts are from an article by Steve Mirsky in the May 2017 issue of Scientific American.]

      According to [David] Hemenway, on average in the U.S., more than 300 people get shot daily. A third of them die. “Since I graduated from college [in 1966], there have been more civilian deaths from guns in the United States than combat … deaths on the battlefield in all the wars in United States history, including the Civil War and World War II.” (And, of course, the war that started brewing after the Boston Massacre.)

      “Twenty years ago [the CDC was] doing a tiny amount of funding for firearms research ... $2.6 million a year total,” Hemenway said. “This was too much for the gun lobby and Republicans in Congress, and they attacked the CDC. And now the CDC does no funding of firearms research. Zero.”…

      The National Institutes of Health also feels the chill of a congressional freeze-out. Hemenway talked about research that examined grants given by the NIH during a 40-year period. “How many deaths were from cholera, diphtheria, polio and rabies in the United States? And the answer was 2,000. How many research awards were given by the NIH during that period to cholera, diphtheria, polio and rabies: 486. During the same 40-year-period, how many people were shot in the United States with guns? The answer's four million. How many research awards were there about guns and gun issues? Three.”…

      Another group of researchers at the AAAS meeting who deal with some unfriendly feds are those trying to determine if cannabis can be good medicine. In fact, in the parts of the country where medical marijuana is legal, opioid deaths and prescription pain medication use are way down—so there’s at least some evidence that pot seems to help manage pain.

      But marijuana is still what is known as a Schedule I substance—the Food and Drug Administration does not recognize a legitimate medical use for it, so scientists have to jump through flaming hoops to get any to study.

Aspirin vs. Cancer

[These excerpts are from an article by Viviane Callier in the May 2017 issue of Scientific American.]

      If ever there was a wonder drug, aspirin might be it. Originally derived from the leaves of the willow tree, this mainstay of the family medicine cabinet has been used successfully for generations to treat conditions ranging from arthritis to fever, as well as to prevent strokes, heart attacks and even some types of cancer, among other ills. Indeed, the drug is so popular that annual consumption worldwide totals about 120 billion tablets.

      In recent years scientists have discovered another possible use for aspirin: stopping the spread of cancer cells in the body after an initial tumor has already formed. The research is still developing, but the findings hint that the drug could one day form the basis for a powerful addition to current cancer therapies.

      Not everyone responds equally well to the drug, however, and for some people it can be downright dangerous. Investigators are thus trying to develop genetic tests to determine who is most likely to benefit from long-term use of aspirin. The latest research into the drug's cancer-inhibiting activity is generating findings that could possibly guide those efforts.

      During the past century researchers demonstrated that aspirin inhibits the production of certain hormone-like substances called prostaglandins. Depending on where in the body these prostaglandins are produced, they may trigger pain, inflammation, fever or blood clotting.

      Obviously no one wants to block these natural responses all the time—particularly as they help the body to heal from cuts, bruises, infections and other injuries. But sometimes they linger for too long, causing more harm than good. Long-lasting, or chronic, inflammation, for example, increases the risk of developing heart disease and cancer by causing repeated damage to otherwise normal tissue. Eventually the damaged tissue, depending on where it is located and a host of other factors, may become a vessel-clogging plaque in a coronary artery or a tiny tumor hidden deep within the body. By turning down the prostaglandin spigot, aspirin prevents thousands of heart attacks every year and probably stops a significant number of tumors from forming in the first place.

      In 2000 scientists discovered a second major mechanism of action for aspirin in the body. The drug boosts the production of molecules called resolvins, which also help to quench the fires of inflammation.

      More recently, investigators have started to elucidate a third way that aspirin works—one that interferes with the ability of cancer cells to spread, or metastasize, through the body. Intriguingly, in this case, the drug's anti-inflammatory properties do not appear to play the starring role.

      Metastasis is a complex process that, somewhat counterintuitively, requires a certain amount of cooperation between tumor cells and their host. Some number of malignant cells must break away from the original tumor, cross the walls of a nearby blood vessel to enter the bloodstream and avoid getting detected by immune system defenders as they travel about the body. Those that survive this gauntlet must then cross the walls of another blood vessel at a different location in the body, nestle into surrounding tissue that is completely different from their original birthplace and start to grow….

      How does aspirin stop tumor cells from hijacking platelets to do their bidding? Instead of blocking a single compound (a prostaglandin, for example), in this case the drug seems to turn entire groups of genes on or off in the nuclei of certain blood cells….

      To date, the only way to know for sure that a patient is resistant to aspirin's anticlotting effects is to test the person's blood after several weeks of therapy to see if it takes longer to form clots than it once did—an expensive proposition that is not very practical. Genetic tests would presumably be less expensive, but they are a long way off….

      Nevertheless, aspirin cannot make up for a lifetime of bad habits. Quitting smoking—or better yet, never starting—eating moderately, keeping your body lean and remaining physically active may be as effective as—or even more effective than—taking aspirin on a daily basis for keeping lots of health problems, including heart disease and cancer, at bay. Aspirin may well be an amazing drug, but it is still not a cure for everything that ails you.

On Witches and Terrorists

[This excerpt is from an article by Michael Shermer in the May 2017 issue of Scientific American.]

      As recounted by author and journalist Daniel P. Mannix, during the European witch craze the Duke of Brunswick in Germany invited two Jesuit scholars to oversee the Inquisition's use of torture to extract information from accused witches. “The Inquisitors are doing their duty. They are arresting only people who have been implicated by the confession of other witches,” the Jesuits reported. The duke was skeptical. Suspecting that people will say anything to stop the pain, he invited the Jesuits to join him at the local dungeon to witness a woman being stretched on a rack. “Now, woman, you are a confessed witch,” he began. “I suspect these two men of being warlocks. What do you say? Another turn of the rack, executioners.” The Jesuits couldn’t believe what they heard next. “No, no!” the woman groaned. “You are quite right. I have often seen them at the Sabbat. They can turn themselves into goats, wolves and other animals.... Several witches have had children by them. One woman even had eight children whom these men fathered. The children had heads like toads and legs like spiders.” Turning to the flabbergasted Jesuits, the duke inquired, “Shall I put you to the torture until you confess?”

      One of these Jesuits was Friedrich Spee, who responded to this poignant experiment on the psychology of torture by publishing a book in 1631 entitled Cautio Criminalis, which played a role in bringing about the end of the witch mania and in demonstrating why torture as a tool to obtain useful information doesn’t work. This is why, in addition to its inhumane elements, it is banned in all Western nations, including the U.S., where the Eighth Amendment to the Constitution prohibits “cruel and unusual punishments.”

Busting Myths of Origin

[These excerpts are from an article by Ann Gibbons in the May 19, 2017, issue of Science.]

      When the first busloads of migrants from Syria and Iraq rolled into Germany 2 years ago, some small towns were overwhelmed. The village of Sumte, population 102, had to take in 750 asylum seekers. Most villagers swung into action, in keeping with Germany’s strong Willkommenskultur, or “welcome culture.” But one self-described neo-Nazi on the district council told The New York Times that by allowing the influx, the German people faced “the destruction of our genetic heritage" and risked becoming “a gray mishmash!”

      In fact, the German people have no unique genetic heritage to protect. They—and all other Europeans—are already a mishmash, the children of repeated ancient migrations, according to scientists who study ancient human origins. New studies show that almost all indigenous Europeans descend from at least three major migrations in the past 15,000 years, including two from the Middle East. Those migrants swept across Europe, mingled with previous immigrants, and then remixed to create the peoples of today.

      …in the 15th century, German nationalists resurrected the myth of Arminius, who is often depicted as a blond, muscular young chieftain and known as Hermann. Hailed as the first “German” hero, he was said to have united the Germanic tribes and driven the Romans from their territory. That was considered the start of a period when fearsome Germanic tribes such as the Vandals swept around Europe, wresting territory from Romans and others.

      In the 20th century, the Nazis added their own dark spin to that origin story, citing Arminius as part of an ancient pedigree of a “master race” from Germany and northern Europe that they called Aryans. They used their view of prehistory and archaeology to justify claims to the tribes’ ancient homelands in Poland and Austria.

      Scholars agree that there was indeed a real battle that sent shock waves through the Roman Empire, which then stretched from the island of Britain to Egypt. But much of the rest of Arminius's story is myth: The Romans persisted deep in Germania until at least the third century C.E., as shown by the recent discovery of a third-century Roman battlefield in Harzhorn, Germany. And Arminius by no means united the more than 50 Germanic tribes of the time. He persuaded five tribes to join him in battle, but members of his own tribe soon killed him.

      Moreover, Arminius and his kin were not pure “Aryan” if that term means a person whose ancestors lived solely in what is now Germany or Scandinavia. The Cherusci tribe, like all Europeans of their day and later, were themselves composites, built from serial migrations into the heart of Europe and then repeatedly remixed.

The End of Sand

[These excerpts are from an article by David Owen in the May 29, 2017, issue of The New Yorker.]

      Sand covers so much of the earth’s surface that shipping it across borders—even uncontested ones—seems extreme. But sand isn’t just sand, it turns out. In the industrial world, it’s “aggregate,” a category that includes gravel, crushed stone, and various recycled materials. Natural aggregate is the world’s second most heavily exploited natural resource, after water, and for many uses the right kind is scarce or inaccessible. In 2014, the United Nations Environment Programme published a report titled “Sand, Rarer Than One Thinks,” which concluded that the mining of sand and gravel “greatly exceeds natural renewal rates” and that “the amount being mined is increasing exponentially, mainly as a result of rapid economic growth in Asia.”

      Pascal Peduzzi, a Swiss scientist and the director of one of the U.N.’s environmental groups, told the BBC last May that China's swift development had consumed more sand in the previous four years than the United States used in the past century. In India, commercially useful sand is now so scarce that markets for it are dominated by “sand mafias”—criminal enterprises that sell material taken illegally from rivers and other sources, sometimes killing to safeguard their deposits. In the United States, the fastest-growing uses include the fortification of shorelines eroded by rising sea levels and more and more powerful ocean storms—efforts that, like many attempts to address environmental challenges, create environmental challenges of their own.

      Geologists define sand not by composition but by size, as grains between 0.0625 and two millimetres across. Just below sand on the size scale is silt; just above it is gravel. Most sand consists chiefly of quartz, the commonest form of silica, but there are other kinds. Sand on ocean beaches usually includes a high proportion of shell pieces and, increasingly, bits of decomposing plastic trash; Hawaii's famous black sand is weathered fragments of volcanic glass; the sand in the dunes at White Sands National Monument, in New Mexico, is mainly gypsum. Sand is almost always formed through the gradual disintegration of bigger rocks, by the action of ice, water, wind, and time…

      Aggregate is the main constituent of concrete (eighty per cent) and asphalt (ninety-four per cent)…. A mile-long section of a single lane of an American interstate highway requires thirty-eight thousand tons. The most dramatic global increase in aggregate consumption is occurring in parts of the world where people who build roads are trying to keep pace with people who buy cars. Chinese officials have said that by 2030 they hope to have completed a hundred and sixty-five thousand miles of roads—a national network nearly three and a half times as long as the American interstate system.

      Windowpanes, wineglasses, and cellphone screens are made from melted sand. Sand is used for filtration in water-treatment facilities, septic systems, and swimming pools. Oil and gas drillers inject large quantities of hard, round sand into fracked rock formations in order to hold the cracks open, like shoving a foot in the door. Railroad locomotives drop angular sand onto the rails in front of their wheels as they brake, to improve traction. Australia and India are major exporters of garnet sand, which is crushed to make an abrasive material used in sandblasting and by water-jet cutters. Foundries use sand to form the molds for iron bolts, manhole covers, engine blocks, and other cast-metal objects….

      …Florida lies on top of a vast limestone formation, but most of the stone is too soft to be used in construction. “The whole Gulf Coast is starved for aggregate,” William Langer, the research geologist, told me. “So they import limestone from Mexico, from a quarry in the Yucatan, and haul it by freighter across the Caribbean.” Even that stone is wrong for some uses. “You can build most of a road with limestone from Mexico,” he continued, “but it doesn’t have much skid resistance. So to get that they have to use granitic rock, which they ship down the East Coast from quarries in Nova Scotia or haul by train from places like inland Georgia.” When Denver International Airport was being built, in the nineteen-nineties, local quarries were unable to supply crushed stone as rapidly as it was needed, so vast quantities were brought from a quarry in Wyoming whose principal product was stone ballast for railroad tracks. The crushed stone was delivered by a freight train that ran in a continuous loop between the quarry and the work site.

      Deposits of sand, gravel, and stone can be found all over the United States, but many of them are untouchable, because they're covered by houses, shopping malls, or protected land. Regulatory approval for new quarries is more and more difficult to obtain: people don't want to live near big, noisy holes, even if their lives are effectively fabricated from the products of those holes. The scarcity of alternatives makes existing quarries increasingly valuable….

      Unfortunately for Dubai's builders and real-estate developers, desert sand is also unsuitable for construction and, indeed, for almost any human use. The grains don't have enough fractured faces for concrete and asphalt, and they’re too small and round for water-filtration systems….

Sticky Schools: How to Find and Keep Teachers in the Classroom

[These excerpts are from an article by Anne Podolsky, Tara Kini, Joseph Bishop and Linda Darling-Hammond in the May 2017 issue of Phi Delta Kappan.]

      While few enter teaching with expectations of becoming wealthy, teachers do expect to earn a salary that allows them to live a middle-class lifestyle in the community where they teach. Teacher compensation affects the supply of teachers, including the distribution of teachers across districts, as well as the quantity and quality of individuals preparing to be teachers. Salaries also appear to influence teacher attrition: Teachers are more likely to quit when they work in districts with lower wages.

      Although salaries vary by state, teacher salaries in the U.S. are generally lower than those offered to other college graduates….

      Teacher working conditions — which might also be described as student learning conditions — are a strong predictor of teacher decisions about where to teach and whether to stay. “Working conditions” is a broad term that captures all the school environment factors that affect student and adult learning, including leadership, opportunities for collaboration, accountability systems, class sizes, facilities, and instructional resources such as books and access to technology. Teaching and learning conditions are often much worse in high-poverty than in low-poverty schools and contribute to high rates of teacher turnover.

      To improve teacher working conditions, the federal government, states, and districts should consider adopting policies that address three issues in particular: school leadership and administrative support, resources for teaching and learning, and opportunities for professional collaboration and shared decision making.

Solving the Teacher Shortage

[These excerpts are from an article by Barnett Berry and Patrick M. Shields in the May 2017 issue of Phi Delta Kappan.]

      …Current teacher shortages vary somewhat more by region and subject area, but they are just as serious today as in the 1990s. In the 2015-16 school year, for example, 48 states and the District of Columbia reported shortages of teachers in special education, 42 reported shortages of math teachers, 40 reported shortages of science teachers, and 30 reported shortages of bilingual education/ESL teachers. There are several reasons why the demand for teachers exceeds the supply once again:

      • Student enrollment is on an upward trend — and expected to grow by 3 million in the next decade.

      • Many districts and schools are trying to restore teacher positions and course offerings cut during the Great Recession.

      • Fewer individuals are entering the profession: Between 2009 and 2014, enrollments in teacher preparation programs dropped 35% nationwide (from 691,000 in 2009 to 451,000 just five years later).

      • The U.S. loses about 8% of its teachers annually; the attrition rate in this country is about two times as high as it is in top-performing nations like Finland and Singapore.

      The declining interest in teaching likely has much to do with subtle shifts in the nature of the profession. As top-down school reform increased under No Child Left Behind, teaching became less attractive to young people. For example, a 2014 Gallup poll showed that teachers scored “dead last” among 12 occupational groups in agreeing with the statement that their opinions count at work. One poll in Georgia found that teachers who leave the profession tend to report feeling “devalued” by recent policies and “under constant stress,” fueled by high-stakes testing and unfair and inaccurate teaching evaluations. Similarly, teachers have experienced a steep decline in professional autonomy, particularly in high-poverty schools. In response to a 2003 survey by the U.S. Department of Education, a majority of teachers said they enjoyed a high degree of professional autonomy. By 2012, however, the reverse was true, with the majority reporting they had little autonomy….

     …In California, one recent poll found that 60% of the state's voters would give public school teachers a letter grade of A or B for their teaching, and 77% believe the state should “spend more on schools.” In North Carolina, 62% of poll respondents said the best way to improve the state's schools would be to increase funding for public education, particularly to increase teacher salaries.

      In short, the time is now for teachers to speak out. With help from parents and the many other Americans who trust and support them, they can and should become more forceful advocates — in their local districts, statehouses, and national forums alike — for the kinds of teacher recruitment, preparation, and support that are known to strengthen the profession and yield powerful results for students.

Saying Goodbye to Glaciers

[This excerpt is from an article by Twila Moon in the May 12, 2017, issue of Science.]

      Global glacier volume is shrinking. This loss of Earth’s land ice is of international concern. Rising seas, to which melting ice is a key contributor, are expected to displace millions of people within the lifetime of many of today’s children. But the problems of glacier loss do not stop at sea level rise; glaciers are also crucial water sources, integral parts of Earth's air and water circulation systems, nutrient and shelter suppliers for flora and fauna, and unique landscapes for contemplation or exploration.

      With the finding that ice sheets can respond to climate change on subannual to decadal time scales, glaciology research surged in the early 21st century. Scientists now study the world’s glaciers at many scales, from centimeter-scale in situ measurements to worldwide satellite-based monitoring campaigns.

      Field studies facilitate detailed spatiotemporal sampling and deployment of coincident measurement suites, such as Global Positioning System (GPS), weather stations, seismometers, time-lapse cameras, and radar instruments. The results elucidate glacier hydrology, subsurface environments, and glacier dynamics on time scales of minutes to months. Aerial surveys cover inaccessible regions such as crevassed zones and even help with data acquisition between satellite missions.

      To cover regions as large as ice sheets, space-borne satellite data are indispensable. Gravity-measuring satellites help estimate ice volume variations, altimetry satellites detect changing surface elevations, and optical and radar imaging satellites measure ice motion, monitor glacier advance and retreat, and observe surface properties, including melt. Growing in situ and satellite archives, along with glacier and ice sheet reconstruction efforts, are beginning to provide the longer records needed to separate glacier “climate” from glacier “weather”. The results from this surge in data and scientific effort point clearly to rapid and largely irreversible ice loss.

Newest Member of Human Family Is Surprisingly Young

[These excerpts are from an article by Ann Gibbons in the May 12, 2017, issue of Science.]

      …The new fossils reinforce a picture of a small-brained, small-bodied creature, which makes the dates reported in one paper all the more startling: 236,000 to 335,000 years ago. That means a creature reminiscent of much earlier human ancestors such as H. habilis lived at the same time as modern humans were emerging in Africa and Neandertals were evolving in Europe….

      First announced in 2015, H. naledi was a puzzle from the start. Fossils from 15 individuals, including fragile parts of the face that are preserved in the new skull, show that the species combines primitive traits such as a small brain, flat midface, and curving fingers with more modern-looking features in its teeth, jaw, thumb, wrist, and foot. Berger's team put it in our genus, Homo.

      But where it really fit in our family tree “hinged on the date,” says paleoanthropologist William Kimbel of Arizona State University in Tempe. Dating cave specimens is notoriously difficult because debris falling from cave walls or ceilings can mix with sediments around a fossil and skew the dates. And these fossils likely were moved over time by rising and falling groundwater, so identifying the sediments where they were originally buried is a challenge….

      Berger says the search for stone tools and other evidence to test whether H. naledi was capable of modern symbolic behavior is his top priority. “We’re going after all these critical questions—is there fire in there, is there DNA?” he says. His team began new forays into the caves last week.

It Doesn’t Matter How You Feel…

[These excerpts are from an article by Michael P. Jansen in the May issue of Chem 13 News.]

      In addition to our professional conduct and ethics, we owe it to our constituents — students, parents and the general public — to look like professionals.

      Now, more than ever, optics count.

      It’s unfortunate that we are frequently judged on appearance. It can be difficult to command respect if we don’t look the part. Jeans, T-shirts and sweat pants should not be part of our teaching wardrobe. How can we command respect if we're dressed for lounging at home or raking leaves?...

      At a minimum, our attire should be at least one step up from our students. If teenagers are required to wear a uniform — with pride in their school — then the same should apply to us. We must set a good example….

      Teachers can’t have it both ways. If we want respect — we need to begin by looking like we deserve it.

American Studies

[These excerpts are from an article by Jonathan Blitzer in the May 22, 2017, issue of The New Yorker.]

      …the Georgia Board of Regents, which oversees the university system, had instituted a policy barring undocumented students from the state's top five public schools. Georgia had thirty-five public colleges, serving about three hundred and ten thousand students, of whom some five hundred were undocumented; only twenty-nine undocumented students were enrolled at the top five schools. Nevertheless, the state legislature wanted the Board of Regents to send a message. As a state senator's spokesman said, “We can’t afford to have illegal immigrants taking a taxpayer-subsidized spot in our colleges.” Two other states—South Carolina and Alabama—ban undocumented students from public universities.

      Each year, about three thousand undocumented students graduate from high school in Georgia, but their opportunities for college are severely limited. At the public universities they're still allowed to attend, they must pay out-of-state tuition, more than double what state residents pay. To matriculate at private colleges, they have to apply as international students, and often that doesn't allow them to qualify for the financial aid they may need. Many of them have given up on applying altogether….

      The University of Georgia, in Athens, did not accept black students until 1961. The following year, in an effort to maintain segregation, the state spent four hundred and fifty thousand dollars on grants and scholarships to send black students from Georgia to institutions in other states. Among the last schools to desegregate were the elite universities that now barred undocumented students….

      In the nineteen-fifties and sixties, despite the Supreme Court’s decision in Brown v. Board of Education, school systems remained segregated, and black institutions were drastically underfunded. Between 1954 and 1965, black children in Mississippi made up fifty-seven per cent of school-aged students, but received only thirteen per cent of the state’s spending on education. Throughout the South, civil-rights activists created informal institutions, called freedom schools, to educate and organize students in desperate need of academic support.

      In Prince Edward County, Virginia, in 1959, the local government shut down the public-school system in order to resist integration….

In a First, Natural Selection Defeats a Biocontrol Insect

[These excerpts are from an article by Elizabeth Pennisi in the May 12, 2017, issue of Science.]

      Twenty years ago, Stephen Goldson thought his group had beaten the Argentine stem weevil, an invasive insect devastating New Zealand's pastures. An entomologist, Goldson had scoured the South American countryside and come up with an efficient weevil killer: a parasitoid wasp that laid eggs in the insect, leaving its larvae to eat the host from within. It worked—one of three biocontrol successes that made Goldson nationally famous and led to his appointment as a strategist for New Zealand's science adviser in 2009.

      But in 2011, the weevil seemed to be making a comeback. So Goldson went back to work at the Christchurch location of AgResearch, the New Zealand government's pastoral research agency, and at Lincoln University. Now, after combing through 2 decades of field data from across New Zealand, he and his colleagues have found that the pest is indeed winning the battle against the wasp…. They conclude that it has outevolved the parasite—something never before documented among thousands of biocontrol agents around the world….

      It’s not yet clear whether the weevil’s comeback is a rare case, perhaps fueled by New Zealand’s unusual agricultural ecosystem or the specific wasp used, or whether resistance could thwart other biocontrols….

Of Meat and Men

[These excerpts are from an article by Angus Chen in the May 2017 issue of Scientific American.]

      “Women and lowly men are so hard to handle. If you let them too close to you, they become disobedient. But if you keep them at a distance, they become resentful,” Confucius is quoted as saying in the Analects, a collection dating back to the fifth century B.C. Confucius did not invent gender bias, of course, nor did he devise its systemic expression in patriarchy. But the answer to when the concentration of social power in men first arose, and why, may lie in the bones of his ancestors.

      The clue shows up in connective tissue, or collagen, examined during a recent study involving bones from 175 Neolithic and Bronze Age people who lived in China. A carbon signature in this protein suggests the types of grains the people consumed, and a nitrogen signature reveals the proportion of meat in their diet….

      The bone chemistry indicates that male and female diets were similar during the Neolithic period, which started about 10,000 years ago and in which agriculture began. Both sexes ate meats and grains….

      The menu shift began at the end of the Neolithic and continued through the Bronze Age, often estimated to have begun in China around 1700 B.C. People there increasingly planted wheat, which leaves a carbon signature distinct from that of the millet they had already been growing. The osteoanalysis shows that between 771 and 221 B.C. men continued eating millet and meat—but the latter disappeared from women’s diets and was replaced with wheat. Women’s bones also began showing cribra orbitalia, a type of osteoporosis and an indicator of childhood malnutrition….

      Some anthropologists have a theory for why the balance of power tipped just as wheat was introduced, as well as other commodities such as cattle and bronze. These new resources afforded opportunities for wealth to accumulate and may have provided an opening for men to take control of the novel foods and wares—and to use their new power to suppress women.

      Violence may have played a role, too…. In civilizations rife with bloodshed, a warrior class often inflates the value of men…

      Nevertheless, the early bias evidence in China extends beyond bones. Women’s graves started to include fewer burial treasures than men's during the Bronze Age, suggesting females were also treated poorly in death….

Moving Forward after the March

[This excerpt is from an editorial by Rush D. Holt in the May 3, 2017, issue of Science.]

      On 22 April, many thousands of people took part in demonstrations, teach-ins, museum open houses, and science festivals in hundreds of places around the world—an unparalleled show of support for science. Some journalists have tried to portray the march as yet another political demonstration against President Trump and Congress. Yet, neither appeared to be the target for most marchers—not in the United States and even less so around the world. That the March for Science saw the scientific community and the wider public come together in unprecedented numbers signaled that the day was not just a protest by scientists with concerns about their funding or job security. The multitude of T-shirt slogans, placards, and impassioned remarks by marchers and speakers of all ages, backgrounds, and professions spoke volumes—something serious was going on.

      Scientists are not usually demonstrative. What drove so many to say that they can, should, and will venture into the public square? And what drove so many nonscientists (according to organizers, more than two-thirds of the marchers) to join them earnestly? It may take some time to draw the real message out of the cacophony, if indeed a single message can be discerned. I think it was a deep concern about the state of science in our societies and our governments that propelled people to speak out. To me, 22 April was a manifestation of long-simmering cultural change.
