Interesting Excerpts
The following excerpts are from articles or books that I have recently read. They caught my interest and I hope that you will find them worth reading. If one does spark an action on your part and you want to learn more or you choose to cite it, I urge you to actually read the article or source so that you better understand the perspective of the author(s).


North Atlantic Right Whales Face Extinction

[These excerpts are from an article by Elizabeth Pennisi in the November 10, 2017, issue of Science.]

      In a sad reversal of fortune, the North Atlantic right whale is in deep trouble again after rebounding in recent decades from centuries of hunting. Recent population trends are so dire that experts predict the whale could vanish within 20 years, making it the first great whale to go extinct in modern times.

      …whale experts reported that roughly 100 reproductively mature females remain, but they are not living long enough or reproducing quickly enough for the species to survive. Ship strikes have long been a threat, and fatal entanglements in commercial fishing gear are taking an increasing toll. And researchers have found that even when an entangled female doesn’t die, dragging ropes, buoys, or traps can exhaust her, making her less likely to reproduce.

      …Eubalaena glacialis, the North Atlantic right whale—so-called by 18th-century whalers because it was easy to kill and rich in valuable blubber—is one of three right whale species. It is found along North America’s east coast, breeding in the winter in waters off of Florida and migrating to summer feeding waters off New England and northeastern Canada. Its accessible habitat has made it one of the world’s best-documented large whales. But its range is also in one of the most industrialized stretches of ocean in the world, crowded with threats including ships, fishing operations, and energy infrastructure.

      Over the past few decades, right whale numbers appeared to be slowly climbing, from roughly 300 to about 500. Governments helped it along by taking steps to prevent ship strikes, such as imposing speed limits on or rerouting larger vessels in some waters, and installing sensors that can warn mariners when the whales are nearby….

      Entanglement, however, appears to be taking a growing toll because of increased fishing in areas where the whales are foraging….

Science for Global Understanding

[This excerpt is from an editorial by Flavia Schlegel in the November 10, 2017, issue of Science.]

      Resilience is also at the core of the debate at the UN Climate Change Conference (COP23) currently taking place in Bonn. The growing pressures of climate change and stress on natural resources through pollution, overuse, and mismanagement are fueling conflicts and violent extremism and forcing an increasing number of people to flee their homes. This calls for sound, inclusive science, technology, and innovation (STI), cooperative approaches between the sciences and among different knowledge systems, and standing up to climate change deniers among scientists and policy-makers.

      The United Nations’ Agenda 2030 for sustainable development recognizes the central role of STI in enabling the international community to respond to global challenges….

Going Negative

[These excerpts are from an article by Elizabeth Kolbert in the November 20, 2017, issue of The New Yorker.]

      This past April, the concentration of carbon dioxide in the atmosphere reached a record four hundred and ten parts per million. The amount of CO2 in the air now is probably greater than it’s been at any time since the mid-Pliocene, three and a half million years ago, when there was a lot less ice at the poles and sea levels were sixty feet higher. This year’s record will be surpassed next year, and next year’s the year after that. Even if every country fulfills the pledges made in the Paris climate accord—and the United States has said that it doesn’t intend to—carbon dioxide could soon reach levels that, it’s widely agreed, will lead to catastrophe, assuming it hasn’t already done so.

      Carbon-dioxide removal is, potentially, a trillion-dollar enterprise because it offers a way not just to slow the rise in CO2 but to reverse it. The process is sometimes referred to as “negative emissions”: instead of adding carbon to the air, it subtracts it. Carbon-removal plants could be built anywhere, or everywhere. Construct enough of them and, in theory at least, CO2 emissions could continue unabated and still we could avert calamity. Depending on how you look at things, the technology represents either the ultimate insurance policy or the ultimate moral hazard.

      …In fact, fossil fuels currently provide about eighty per cent of the world's energy. Proportionally, this figure hasn’t changed much since the mid-eighties, but, because global energy use has nearly doubled, the amount of coal, oil, and natural gas being burned today is almost two times greater.

      …they argued that self-replicating machines could solve the world’s energy problem and, more or less at the same time, clean up the mess humans have made by burning fossil fuels. The machines would be powered by solar panels, and as they multiplied they’d produce more solar panels, which they’d assemble using elements, like silicon and aluminum, extracted from ordinary dirt. The expanding collection of panels would produce ever more power, at a rate that would increase exponentially. An array covering three hundred and eighty-six thousand square miles—an area larger than Nigeria but, as Lackner and Wendt noted, “smaller than many deserts”—could supply all the world’s electricity many times over.

      This same array could be put to use scrubbing carbon dioxide from the atmosphere….

      …Carbon dioxide should be regarded the same way we view other waste products, like sewage or garbage. We don’t expect people to stop producing waste….At the same time, we don’t let them shit on the sidewalk or toss their empty yogurt containers into the street….

      One of the reasons we’ve made so little progress on climate change, he contends, is that the issue has acquired an ethical charge, which has polarized people. To the extent that emissions are seen as bad, emitters become guilty….If CO2 is treated as just another form of waste, which has to be disposed of, then people can stop arguing about whether it’s a problem and finally start doing something.

      Carbon dioxide was “discovered,” by a Scottish physician named Joseph Black, in 1754. A decade later, another Scotsman, James Watt, invented a more efficient steam engine, ushering in what is now called the age of industrialization but which future generations may dub the age of emissions. It is likely that by the end of the nineteenth century human activity had raised the average temperature of the earth by a tenth of a degree Celsius (or nearly two-tenths of a degree Fahrenheit).

      As the world warmed, it started to change, first gradually and then suddenly. By now, the globe is at least one degree Celsius (1.8 degrees Fahrenheit) warmer than it was in Black’s day, and the consequences are becoming ever more apparent. Heat waves are hotter, rainstorms more intense, and droughts drier. The wildfire season is growing longer, and fires, like the ones that recently ravaged Northern California, more numerous. Sea levels are rising, and the rate of rise is accelerating. Higher sea levels exacerbated the damage from Hurricanes Harvey, Irma, and Maria, and higher water temperatures probably also made the storms more ferocious….

      Meanwhile, still more warming is locked in. There’s so much inertia in the climate system, which is as vast as the earth itself, that the globe has yet to fully adjust to the hundreds of billions of tons of carbon dioxide that have been added to the atmosphere in the past few decades. It’s been calculated that to equilibrate to current CO2 levels the planet still needs to warm by half a degree. And every ten days another billion tons of carbon dioxide are released….

      No one can say exactly how warm the world can get before disaster—the inundation of low-lying cities, say, or the collapse of crucial ecosystems, like coral reefs—becomes inevitable. Officially, the threshold is two degrees Celsius (3.6 degrees Fahrenheit) above preindustrial levels. Virtually every nation signed on to this figure at a round of climate negotiations held in Cancun in 2010.

      Meeting in Paris in 2015, world leaders decided that the two-degree threshold was too high; the stated aim of the climate accord is to hold “the increase in the global average temperature to well below 2°C” and to try to limit it to 1.5°C. Since the planet has already warmed by one degree and, for all practical purposes, is committed to another half a degree, it would seem impossible to meet the latter goal and nearly impossible to meet the former. And it is nearly impossible, unless the world switches course and instead of just adding CO2 to the atmosphere also starts to remove it….

      BECCS, which stands for “bioenergy with carbon capture and storage,” takes advantage of the original form of carbon engineering: photosynthesis. Trees and grasses and shrubs, as they grow, soak up CO2 from the air. (Replanting forests is a low-tech form of carbon removal.) Later, when the plants rot or are combusted, the carbon they have absorbed is released back into the atmosphere. If a power station were to burn wood, say, or cornstalks, and use C.C.S. to sequester the resulting CO2, this cycle would be broken. Carbon would be sucked from the air by the green plants and then forced underground. BECCS represents a way to generate negative emissions and, at the same time, electricity. The arrangement, at least as far as the models are concerned, could hardly be more convenient.

      …Photovoltaic cells have been around since the nineteen-fifties, but for decades they were prohibitively expensive. Then the prices started to drop, which increased demand, which led to further price drops, to the point where today, in many parts of the world, the cost of solar power is competitive with the cost of power from new coal plants. …

      BECCS doesn’t make big technological demands; instead, it requires vast tracts of arable land. Much of this land would, presumably, have to be diverted from food production, at a time when the global population—and therefore global food demand—is projected to be growing. (It’s estimated that to do BECCS on the scale envisioned by some below-two-degrees scenarios would require an area larger than India.)….

      One of the peculiarities of climate discussions is that the strongest argument for any given strategy is usually based on the hopelessness of the alternatives: this approach must work, because clearly the others aren’t going to. This sort of reasoning rests on a fragile premise—what might be called solution bias. There has to be an answer out there somewhere, since the contrary is too horrible to contemplate.

      Early last month, the Trump Administration announced its intention to repeal the Clean Power Plan, a set of rules aimed at cutting power plants’ emissions. The plan, which had been approved by the Obama Administration, was eminently achievable. Still, according to the current Administration, the cuts were too onerous. The repeal of the plan is likely to result in hundreds of millions of tons of additional emissions.

      A few weeks later, the United Nations Environment Programme released its annual Emissions Gap Report. The report labeled the difference between the emissions reductions needed to avoid dangerous climate change and those which countries have pledged to achieve as “alarmingly high.”…

      As a technology of last resort, carbon removal is, almost by its nature, paradoxical. It has become vital without necessarily being viable. It may be impossible to manage and it may also be impossible to manage without.

The Seven Deadly Sins of AI Predictions

[This list is excerpted from an article by Rodney Brooks in the November/December 2017 issue of Technology Review.]

      1. Overestimating and underestimating

      2. Imagining magic

      3. Performance versus competence

      4. Suitcase words

      5. Exponentials

      6. Hollywood scenarios

      7. Speed of deployment

We Need Computers with Empathy

[These excerpts are from an article by Rana el Kaliouby in the November/December 2017 issue of Technology Review.]

      …But Alexa was oblivious to my annoyance. Like the majority of virtual assistants and other technology out there, she’s clueless about what we’re feeling.

      We’re now surrounded by hyper-connected smart devices that are autonomous, conversational, and relational, but they're completely devoid of any ability to tell how annoyed or happy or depressed we are. And that's a problem.

      What if, instead, these technologies—smart speakers, autonomous vehicles, television sets, connected refrigerators, mobile phones—were aware of your emotions? What if they sensed nonverbal behavior in real time? Your car might notice that you look tired and offer to take the wheel. Your fridge might work with you on a healthier diet. Your wearable fitness tracker and TV might team up to get you off the couch. Your bathroom mirror could sense that you’re stressed and adjust the lighting while turning on the right mood-enhancing music. Mood-aware technologies would make personalized recommendations and encourage people to do things differently, better, or faster.

      Today, an emerging category of AI—artificial emotional intelligence, or emotion AI—is focused on developing algorithms that can identify not only basic human emotions such as happiness, sadness, and anger but also more complex cognitive states such as fatigue, attention, interest, confusion, distraction, and more….

      …In online learning environments, it is often hard to tell whether a student is struggling. By the time test scores are lagging, it’s often too late—the student has already quit. But what if intelligent learning systems could provide a personalized learning experience? These systems would offer a different explanation when the student is frustrated, slow down in times of confusion, or just tell a joke when it’s time to have some fun….

Using Bricks to Store Electricity

[These excerpts are from an article by David L. Chandler in the November/December 2017 issue of Technology Review.]

      Firebricks have been part of humanity's technological arsenal for at least three millennia, since the era of the Hittites. Created with special heat-resistant clays fired at high temperatures, firebricks can withstand temperatures of up to 1,600 °C. Now a proposal from MIT researchers shows that this ancient invention could play a key role in helping the world shift away from fossil fuels.

      The idea is to store excess electricity produced when demand is low—for example, from wind farms at night—by using electric resistance heaters, the same kind found in electric ovens or clothes dryers, which convert electricity into heat. The devices would use the excess electricity to heat up a large mass of firebricks, which can retain the heat for many hours, given sufficient insulation. Later, the heat could be used directly for industrial processes, or it could feed generators that convert it back to electricity.

      The technology itself is old, but its usefulness in this context is due to the rapid recent rise of intermittent renewable energy sources and the peculiarities of the way electricity prices are set….

      Forsberg says the demand for industrial heat is actually larger than the total demand for electricity, and unlike the demand for electricity, it is constant. Factories can make use of extra heat whenever it’s available, providing an almost limitless market for it….

The Long View

[These excerpts are from an article by Robert L. Hampel in the November 2017 issue of Phi Delta Kappan.]

      Spurning the option of admission by certificate, several prestigious colleges and universities created the College Board [in 1899], which gave examinations meant to ensure that high school students met particular requirements in various academic fields. By spelling out those requirements in detail, the College Board shaped the corresponding high school courses. For instance, the eight-part syllabus for physics stipulated 51 different experiments.

      Furthermore, prominent university leaders and other educators convened blue-ribbon groups (most notably the Committee of Ten in 1893) to sketch what a true high school should and should not teach. And when the business mogul/philanthropist Andrew Carnegie endowed pensions for college faculty, it became important to decide the true meaning of “college” as well — the foundation needed to decide which faculty would be eligible for a pension. To the foundation officers, a genuine college had several attributes, including a four-year time span and admission restricted to graduates of four-year secondary schools. They even specified how often a course should meet in a real high school and college – hence the familiar Carnegie unit and the emphasis on time served and credits accumulated as the basis for promotion and graduation.

      …By the 1920s, it was much easier to differentiate secondary education and higher education.

      …there were disadvantages to making the boundaries less blurry. I point out three, in particular.

      First, the prestige of high school declined. The word “secondary” began to convey not just a part of a sequence but a lesser status. By the mid-20th century, high school teachers were rarely called professor, and their salaries lagged behind what college faculty earned. The growing reliance on SAT and ACT tests also downplayed the importance of what students achieved in high school — what mattered was their apparent potential for future success. Rising enrollments meant that high school graduation was no longer the badge of merit for a small fraction of teenagers (in contrast, the status of college graduation eroded less severely when enrollments soared after the mid-20th century).

      Second, getting an education took longer. Neither high schools nor colleges showed much interest in reducing their traditional four-year time spans….Instead, Americans came to believe that more time in school and college was not only academically useful but also psychologically crucial. Close friendships, athletic glory, extracurricular success, and opportunities to date attractive young men and women rivaled the formal curriculum….

      Third, the new boundaries could be illusory. By the 1920s, the messy array of 19th-century schools and colleges had been cleaned up, and the system looked relatively logical, rational, and streamlined. But the overlap between school and college was still substantial. In the 1930s, for instance, many college students knew less than high school students. In one study, 22% of high school seniors surpassed the test score of the average college sophomore, and 10% did better than the average college senior. Within each grade, the variations were vast….

People Are Not Dying Because of Opioids

[This excerpt is from an article by Carl L. Hart in the November 2017 issue of Scientific American.]

      I am concerned that declaring the opioid crisis a national emergency will serve primarily to increase law-enforcement budgets, precipitating an escalation of this same sort of routine racial discrimination. Recent federal data show that more than 80 percent of those who are convicted for heroin trafficking are either black or Latino, even though whites use opioids at higher rates than other groups and tend to buy drugs from individuals within their racial group.

      The president also claimed that the opioid crisis “is a world-wide problem.” It isn’t. Throughout Europe and other regions where opioids are readily available, people are not dying at rates comparable to those in the U.S., largely because addiction is treated not as a crime but as a public health problem.

      It is certainly possible to die from an overdose of an opioid alone, but this accounts for a minority of the thousands of opioid-related deaths. Many are caused when people combine an opioid with another sedative (such as alcohol), an antihistamine (such as promethazine) or a benzodiazepine (such as Xanax or Klonopin). People are not dying because of opioids; they are dying because of ignorance.

      There is now one more opioid in the mix—fentanyl, which produces a heroinlike high but is considerably more potent. To make matters worse, according to some media reports, illicit heroin is sometimes adulterated with fentanyl. This, of course, can be problematic—even fatal—for unsuspecting heroin users who ingest too much of the substance thinking that it is heroin alone. One simple solution is to offer free, anonymous drug-purity testing services. If a sample contains adulterants, users would be informed. These services already exist in places such as Belgium, Portugal, Spain and Switzerland, where the first goal is to keep users safe. Law-enforcement officers should also do such testing whenever they confiscate street drugs, and they should notify the community whenever potentially dangerous adulterants are found. In addition, the opioid overdose antidote naloxone should be made more affordable and readily available not just to first responders but also to opioid users and to their family and friends.

Nip Misinformation in the Bud

[This excerpt is from an editorial by Rick Weiss in the October 27, 2017, issue of Science.]

      The democratization of journalism through crowd sourcing, blogging, and social media has proven to be a sharp, double-edged sword. The internet has vastly expanded the sourcing of news and information, capturing stories that might otherwise go untold and delivering a diversity of perspectives that no single media outlet could hope to offer. At the same time, this new and open model has given anyone with web access a global platform to propagate information that is mistakenly or intentionally false. This is especially problematic when it comes to scientific information, which is critical to rational policy-making in areas like health, environmental protection, and national security, and which, even at its best, is often misinterpreted by the lay public. Yet recent years have seen a reduction in specialized science pages and reporters in the nation’s newsrooms in favor of reliance on general assignment staffers, even as deadlines have grown shorter—reducing opportunities to ensure accuracy and clarity before publication.

      Postpublication fact checking is helping. From the “Pinocchios” that the Washington Post awards to those caught stretching the truth, to the day-to-day debunkings posted by organizations like FactCheck, Snopes, and PolitiFact, the recent explosion in fact-checking initiatives is a welcome response to this bubbling new environment. But memes take root quickly and die hard. So, in the fight against misinformation, fact checking is often too little, too late. When it comes to stories about science—or about legislation, economics, or other domains where science can be informative—it would be far better to help journalists and the public get it right before having to call in the truth squads….

The Exercise Pill

[These excerpts are from an article by Nicola Twilley in the November 6, 2017, issue of The New Yorker.]

      …the biologist Ron Evans introduced me to two specimens: Couch Potato Mouse and Lance Armstrong Mouse.

      Couch Potato Mouse had been raised to serve as a proxy for the average American. Its daily exercise was limited to an occasional waddle toward a bowl brimming with pellets of laboratory standard “Western Diet,” which consists almost entirely of fat and sugar and is said to taste like cookie dough. The mouse was lethargic, lolling in a fresh layer of bedding, rolls of fat visible beneath thinning, greasy-looking fur. Lance Armstrong Mouse had been raised under exactly the same conditions, yet, despite its poor diet and lack of exercise, it was lean and taut, its eyes and coat shiny as it snuffled around its cage. The secret to its healthy appearance and youthful energy, Evans explained, lay in a daily dose of GW501516: a drug that confers the beneficial effects of exercise without the need to move a muscle. [Referred to henceforth as 516.]

      …The drug works by mimicking the effect of endurance exercise on one particular gene: PPAR-delta. Like all genes, PPAR-delta issues instructions in the form of chemicals—protein-based signals that tell cells what to be, what to burn for fuel, which waste products to excrete, and so on. By binding itself to the receptor for this gene, 516 reconfigures it in a way that alters the messages the gene sends—boosting the signal to break down and burn fat and simultaneously suppressing instructions related to breaking down and burning sugar….

      …Evans refers to the compound as “exercise in a pill.” But although Evans understands the mechanism behind 516’s effects at the most minute level, he doesn't know what molecule triggers that process naturally during exercise….For all the known benefits of a short loop around the park, scientists are, for the most part, incapable of explaining how exercise does what it does.

      …The company was about to embark on Phase III trials—the large, expensive, double-blind, placebo-controlled trials that are required for F.D.A. approval—when the results of a long-term-toxicity test came in. Mice that had been given large doses of the drug over the course of two years (a lifetime for a lab rodent) developed cancer at a higher rate than their dope-free peers. Tumors appeared all over their bodies….the only way to conclusively prove that even a lower dose would not have a similar effect on humans would be to run a seventy-year trial….

      …516 is not the only “exercise pill” in development…. Compound 14 caused the blood-glucose levels of obese, sedentary mice on a high-fat diet to approach normal levels in just a week, while melting away five per cent of their body weight. It works, he explained, by fooling cells into thinking that they are running out of energy, causing them to burn through more of the body's fuel reserves.

      …Bruce Spiegelman, a Harvard cell biologist, has discovered two potent exercise hormones. One of them, irisin, turns metabolically inert white fat in mice into mitochondria-packed, energy-burning brown fat, and Spiegelman said that he's seen evidence that it may also boost levels of healthy proteins in the area of the brain associated with learning and memory….

      Even if everything goes smoothly, however, 516 is multiple trials and several years away from reaching the market. And although Evans is convinced that his improved version of the drug is safe, any molecule that affects metabolic processes is necessarily interacting with a variety of other molecules throughout the body, in ways that we don’t yet understand. Nonetheless, Evans, James, and Spiegelman are all confident that legal drugs mimicking some of the effects of exercise are on their way, sometime within the next ten to fifteen years….

      Although 516 has not been approved as a drug, plenty of people are taking it. Once the structure of a new compound has been published, chemical-supply laboratories are free to synthesize it for sale, “for research purposes only.” 516 is easy and relatively cheap to make, and it is readily available online….

Get Toxic Chemicals Out of Cosmetics

[This editorial by the editors is from the November 2017 issue of Scientific American.]

      Earlier this year a group of more than a dozen health advocacy groups and individuals petitioned the U.S. Food and Drug Administration to ban lead acetate from hair dyes. The compound, a suspected neurotoxin, is found in many hair products—Grecian Formula, for example. Lead acetate has been outlawed for nearly a decade in Canada and Europe. Studies show it is readily absorbed through the skin and can cause toxic levels of lead to accumulate in the blood.

      How is it possible that this chemical is still being sold to U.S. consumers in cosmetic products? The main reason is that petitions such as the one calling out lead acetate are one of the few ways, under current law, that the agency charged with ensuring food, drug and cosmetic safety can even start to limit dangerous chemicals used on our faces and in our bodies. We need to do better.

      Under the Federal Food, Drug, and Cosmetic Act and the Fair Packaging and Labeling Act, the FDA can regulate cosmetic chemicals. But it only steps in if it has “reliable information” that there is a problem. In practice, that has often meant that nothing is done before a public outcry. Years can pass while the FDA investigates and deliberates. Aside from these situations, the safety of cosmetics and personal care products is the responsibility of the companies that make them. The law requires no specific tests before a company brings a new product with a new chemical to market, and it does not require companies to release whatever safety data they may collect.

      The result is that several chemicals with realistic chances of causing toxic effects can be found in everything from shampoo to toothpaste. One is formaldehyde, a carcinogenic by-product released by the preservatives used in cosmetics. In 2011 the National Toxicology Program at the Department of Health and Human Services declared formaldehyde a known human carcinogen, demonstrated by human and animal studies to cause cancer of the nose, head, neck and lymphatic system. Other research indicates it can be dangerous at the levels found in cosmetics, and nearly one fifth of cosmetic products contain the chemical. Other risky substances include phthalates, parabens (often found in moisturizers, makeup and hair products) and triclosan, which the FDA banned from hand soaps in 2016 yet is still allowed in other cosmetics. At exposures typical of cosmetic users, several of these chemicals have been linked to cancer, impaired reproductive ability and compromised neurodevelopment in children.

      A recent study published online by Ami R. Zota of George Washington University and Bhavna Shamasunder of Occidental College in the American Journal of Obstetrics & Gynecology showed that women of color are at especially high risk of exposure. In an attempt to adhere to Caucasian beauty ideals, the researchers found, women of color are more likely to use chemical hair straighteners and skin lighteners, which disproportionately expose them to high doses of phthalates, parabens, mercury and other toxic substances.

      The U.S. should protect its citizens. One worthwhile approach is to emulate the European Union's directive on cosmetics, which has banned more than 1,300 chemicals from personal health or cosmetic products. In some cases, the E.U. has acted after seeing only preliminary toxicity data. This is a prime example of the “precautionary principle” that has guided U.S. health agencies in setting acceptable levels of exposure to other potentially hazardous substances, such as lead.

      Right now the number of studies on cosmetics is limited, and the FDA does not have the resources or directive to initiate broad tests. This past May senators Dianne Feinstein of California and Susan Collins of Maine reintroduced the Personal Care Products Safety Act in Congress. The bill would require, among other things, that all cosmetics makers pay annual fees to the agency to help finance new safety studies and enforcement—totaling approximately $20 million a year. With that money, the FDA would be required to assess the safety of at least five cosmetics chemicals a year. The bill also gives the agency the authority to pull products off the shelves immediately when customers have reported bad reactions, without waiting for a review that can take multiple years.

      Consumers should not be forced to scrutinize the ingredient lists in their medicine cabinets and report adverse reactions. That should be the FDA’s job. The Feinstein-Collins bill empowers the agency to make efficient determinations from sound science.

The Seventh Sense

[This excerpt is from the introduction to Eats, Shoots & Leaves by Lynne Truss.]

      Punctuation has been defined many ways. Some grammarians use the analogy of stitching: punctuation as the basting that holds the fabric of language in shape. Another writer tells us that punctuation marks are the traffic signals of language: they tell us to slow down, notice this, take a detour, and stop. I have even seen a rather fanciful reference to the full stop and comma as “the invisible servants in fairy tales — the ones who bring glasses of water and pillows, not storms of weather or love”. But best of all, I think, is the simple advice given by the style book of a national newspaper: that punctuation is “a courtesy designed to help readers to understand a story without stumbling”.

      Isn’t the analogy with good manners perfect? Truly good manners are invisible: they ease the way for others, without drawing attention to themselves. It is no accident that the word “punctilious” (“attentive to formality or etiquette”) comes from the same original root word as punctuation. As we shall see, the practice of “pointing” our writing has always been offered in a spirit of helpfulness, to underline meaning and prevent awkward misunderstandings between writer and reader. In 1644 a schoolmaster from Southwark, Richard Hodges, wrote in his The English Primrose that “great care ought to be had in writing, for the due observing of points: for, the neglect thereof will pervert the sense”, and he quoted as an example, “My Son, if sinners intise [entice] thee consent thou, not refraining thy foot from their way.” Imagine the difference to the sense, he says, if you place the comma after the word “not”: “My Son, if sinners intise thee consent thou not, refraining thy foot from their way.” This was the 1644 equivalent of Ronnie Barker in Porridge, reading the sign-off from a fellow lag's letter from home, “Now I must go and get on my lover”, and then pretending to notice a comma, so hastily changing it to, “Now I must go and get on, my lover.”

      To be fair, many people who couldn’t punctuate their way out of a paper bag are still interested in the way punctuation can alter the sense of a string of words. It is the basis of all “I’m sorry, I’ll read that again” jokes. Instead of “What would you with the king?” you can have someone say in Marlowe’s Edward II, “What? Would you? With the king?” The consequences of mispunctuation (and re-punctuation) have appealed to both great and little minds, and in the age of the fancy-that email a popular example is the comparison of two sentences:

      A woman, without her man, is nothing.

      A woman: without her, man is nothing.

      Which, I don’t know, really makes you think, doesn’t it? Here is a popular “Dear Jack” letter that works in much the same fundamentally pointless way:

      Dear Jack,

      I want a man who knows what love is all about. You are generous, kind, thoughtful. People who are not like you admit to being useless and inferior. You have ruined me for other men. I yearn for you. I have no feelings whatsoever when we're apart. I can be forever happy — will you let me be yours?

      Jill

      Dear Jack,

      I want a man who knows what love is. All about you are generous, kind, thoughtful people, who are not like you. Admit to being useless and inferior. You have ruined me. For other men I yearn! For you I have no feelings whatsoever. When we're apart I can be forever happy. Will you let me be?

      Yours,

      Jill

      But just to show there is nothing very original about all this, five hundred years before email a similarly tiresome puzzle was going round:

      Every Lady in this Land

      Hath 20 Nails on each Hand;

      Five & twenty on Hands and Feet;

      And this is true, without deceit.

      (Every lady in this land has twenty nails. On each hand, five; and twenty on hands and feet.)

Our Next Billion Years

[This excerpt is from an article by Max Tegmark in the November 2017 issue of Discover.]

      Seth Lloyd, an MIT quantum computer pioneer, showed that computing speed is limited by energy. This means that a 1-kilogram computer, equivalent to a small laptop, can perform at most 5 × 10^50 operations per second — that’s a whopping 36 orders of magnitude more than the computer on which I’m typing these words. We’ll get there in a couple of centuries if computational power keeps doubling every couple of years. He also showed that a 1 kg computer can store up to 10^31 bits, which is about one billion billion times better than my laptop.

      Actually attaining these limits may be challenging, even for superintelligent life. However, Lloyd is optimistic that the practical limits aren’t that far from the ultimate ones. Indeed, existing quantum computer prototypes have already miniaturized their memory by storing 1 bit per atom. Scaling that up would allow storing about 10^25 bits per kilogram — a trillion times better than my laptop. Moreover, using electromagnetic radiation to communicate between these atoms would permit about 5 × 10^40 operations per second — 31 orders of magnitude better than my CPU.

      The potential for future life to compute and figure things out is truly mind-boggling: In terms of orders of magnitude, today's best supercomputers are much further from the ultimate 1 kg computer than they are from the blinking turn signal on a car, a device that stores merely 1 bit of information, flipping it between on and off about once per second.
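[The arithmetic behind these figures can be checked directly. Lloyd’s speed limit comes from the Margolus-Levitin bound, which caps a computer carrying energy E at 2E/(πℏ) operations per second; for a 1-kilogram computer, E = mc². The short Python sketch below reproduces the article’s numbers. The laptop figures in it (about 10^14 operations per second and roughly 1 terabyte of storage) are my own rough assumptions, not Tegmark’s.]

import math

H_BAR = 1.0546e-34   # reduced Planck constant, J*s
C = 2.998e8          # speed of light, m/s
U = 1.661e-27        # atomic mass unit, kg

# Margolus-Levitin bound for a 1 kg computer: 2*E/(pi*h_bar), with E = m*c^2.
energy = 1.0 * C**2
max_ops = 2 * energy / (math.pi * H_BAR)
print(f"max ops/sec for 1 kg: {max_ops:.1e}")   # ~5.4e50, the article's 5 x 10^50

laptop_ops = 1e14   # assumed speed of an ordinary laptop (rough guess)
gap = max_ops / laptop_ops
print(f"orders of magnitude ahead: {math.log10(gap):.0f}")              # ~36-37
print(f"years at one doubling every 2 years: {2 * math.log2(gap):.0f}")  # ~240, a couple of centuries

# Storage at one bit per atom, for 1 kg of silicon (mass number ~28):
atoms_per_kg = 1 / (28 * U)
laptop_bits = 8e12   # assumed ~1 TB laptop drive, in bits
print(f"bits per kg at 1 bit/atom: {atoms_per_kg:.1e}")                  # ~2 x 10^25
print(f"improvement over laptop: {atoms_per_kg / laptop_bits:.1e}")      # ~a trillion times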

Untangling Spider Biology

[These excerpts are from an article by Elizabeth Pennisi in the October 20, 2017, issue of Science.]

      For a display of nature's diabolical inventiveness, it’s hard to beat spiders. Take the reclusive ogre-faced spider, with its large fangs and bulging, oversized middle eyes. Throughout the tropics these eight-legged monsters hang from twigs, an expandable silk net stretched between their front legs so they can cast it, lightning-fast, over their victims. Showy peacock spiders, in contrast, flaunt rainbow-colored abdomens to attract mates, while their outsized eyes discern fine detail and color—the better to see both strutting mates and unsuspecting prey. Bolas spiders, named for the South American weapon made of cord and weights, specialize in mimicry. By night, the female bolas swings a silken line with a sticky ball at its end while emitting the scent of a female moth to lure and nab male moths.

      …Spiders’ universal ability to make silk helps explain their global success—an estimated 90,000 species thrive on every continent except Antarctica. This material, used for capturing prey, rappelling from high places, and building egg cases and dwellings, is itself fantastically diverse, its makeup varying from species to species. The same goes for venom, another universal spider attribute—each species makes a different concoction of up to 1000 different compounds.

      …Based mainly on fossil evidence and specimens preserved in amber, biologists concluded long ago that spiders descended from a many-legged, scorpionlike ancestor that by 380 million years ago had a long tail but looked quite spiderlike and may even have had silk glands. By 300 million years ago, fossils show, eight-legged creatures with spiderlike mouth parts, primitive silk glands, and stumpy abdomens had emerged. Those abdomens were still segmented, not fused as in today’s spiders. But what happened afterward to produce the explosion of spider diversity seen now has been mysterious.

      Today, taxonomists recognize three spider groups. The Mygalomorphae—ground-dwelling creatures characterized by fangs that point straight down—include about 2500 species, including tarantulas and so-called trapdoor and funnel-web spiders. Another group, Liphistiidae, consists of 97 species, many of which also build trapdoors to capture prey. The third group, the Araneomorphae, includes 5500 jumping spiders, 4500 dwarf spiders, 2400 wolf spiders, and thousands of web spinners.

Evolution Accelerated when Life Set Foot on Land

[These excerpts are from an article by Elizabeth Pennisi in the October 13, 2017, issue of Science.]

      Life probably originated in water, but nature did some of its best work once organisms made landfall. That's what Geerat Vermeij has concluded after surveying fossils and family trees to discover where and when some of life's greatest modern advances evolved. Almost all of these seemingly out-of-the-blue innovations, from fungus farming by insects to the water transport systems that made tall trees possible, came about after plants and animals learned how to survive on land some 440 million years ago….

      Many researchers have focused on how newly land-based organisms coped with gravity and the threat of desiccation. But Vermeij wondered instead how the move to land might have changed the pace of evolution. He compiled a list of key innovations that showed up in several groups of organisms and provided a big competitive edge, such as herbivory by vertebrates, flight, echolocation, and warm-bloodedness. Existing fossil evidence enabled him to date the origin of a dozen of these adaptations.

      Nine appeared first on land and later in the sea….

From Students to Scientists

[This excerpt is from an article by Olivia Ho-Shing in the Fall 2017 issue of American Educator.]

      What does it mean to be a scientist? In the most basic of terms, a scientist is someone who does scientific research. But what personal qualities does it take to do scientific research?

      In his book Letters to a Young Scientist, renowned biologist Edward O. Wilson recounts his own coming-of-age story as a scientist, and distills the motivating qualities of science down to curiosity and creativity. Individuals become scientists when they are curious about a phenomenon in the world around them and ask about the real nature of that phenomenon: What are its origins, its causes, or its consequences? Scientists then employ some creativity to answer their questions through a systematic testing of hypotheses (the scientific method), and form some conclusion based on their findings.

      This explanation of how scientists approach research highlights something very powerful: anybody with curiosity and creativity, by subscribing to the scientific method, can do science and discover something new about our natural world. From an early age, children brim with questions and sometimes come up with overly creative methods to test a hypothesis (say, using a magnifying glass to start a fire). It becomes incumbent upon teachers, then, to continually help foster students’ curiosity and creativity as critical aspects of their learning, particularly in science.

      Wilson describes the broad field of science as a “culture of illuminations dedicated to the most effective way ever conceived of acquiring factual knowledge.” His description points to another critical aspect in becoming a scientist: not only acquiring some knowledge but contributing that knowledge to a shared culture and community. Scientists engage with others in their field through collaborations, presentations, and publication, thereby strengthening their own findings and assessing information within a broader context of knowledge….

Message Control

[These excerpts are from an article by Brooke Borel in the October 2017 issue of Scientific American.]

      Science doesn’t happen in a vacuum. But historically, many researchers haven't done a great job of confronting—or even acknowledging—the entangled relation between their work and how it is perceived once it leaves the lab….When communication breaks down between science and the society it serves, the resulting confusion and distrust muddies everything from research to industry investment to regulation.

      In the emerging era of CRISPR and gene drives, scientists don’t want to repeat the same mistakes. These new tools give researchers an unprecedented ability to edit the DNA of any living thing—and, in the case of gene drives, to alter the DNA of wild populations. The breakthroughs could address big global problems, curtailing health menaces such as malaria and breeding crops that better withstand climate change. Even if the expectations of CRISPR and gene drives do come to fruition—and relevant products are safe for both people and the environment—what good is the most promising technology if the public rejects it?

      …To avoid that outcome, some researchers are taking a new tack. Rather than dropping fully formed technology on the public, they are proactively seeking comments and reactions, sometimes before research even starts….By opening an early dialogue with regulators, environmental groups and communities where the tools may be deployed, scientists are actually tweaking their research plans while wresting more control over the narrative of their work.

The Roots of Science Denial

[These excerpts are from an article by Katharine Hayhoe, as told to Jen Schwartz, in the October 2017 issue of Scientific American.]

      Science denial is basically anti-intellectualism. It’s a thread that has run through American society for decades, possibly even centuries. Back in 1980, Isaac Asimov said that it’s “nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’” Today we’re dealing with its most recent manifestation, at its peak.

      Climate change is a special case of science denial, which of course goes back to Galileo. The Catholic Church didn’t push back on Galileo until he stuck his head out of the ivory tower and published in Italian rather than in Latin, so that he could tell the common people something that was in direct opposition to the church’s official program. Same with Darwin. The church didn't have a problem with his theory of evolution until he published a popular book that everyone could read.

      Similarly, we’ve known about the relationship between carbon dioxide and global warming since the 1890s. It’s been about 50 years since scientists warned President Lyndon B. Johnson about the dangers of a changing climate. But scientists back then didn't get the deluge of hate mail that I get now. So what shifted? It started, possibly, with [Columbia University climate scientist] James Hansen's testimony before Congress in 1988. He announced that a resource we all rely on—and makes many of the world's biggest companies rich—is harming not just the environment but all of humanity. I think it’s no accident that Hansen is the most vilified and attacked climate scientist in the U.S. because he was the first person to emerge from the ivory tower and start talking about global warming in a sphere where its implications became apparent for policy and politics.

      So you can see that the problem people have with science is never the actual science. People have a problem with the implications of science for their worldview and, even more important, for their ideology. When anti-intellectualism rises to the surface, it's because there are new, urgent results coming out of the scientific community that challenge the perspective and status quo of people with power. Renewable energy is now posing a very significant threat to them. The more viable the technologies, the greater the pushback. It’s a last-ditch effort to resist change, which is why denial is at a fever pitch.

      Today, although many of the objections to climate science are couched in science-y terms—it’s just a natural cycle, scientists aren’t sure, global cooling, could it be volcanoes—or even religious-y terms—God is in control—99 percent of the time, that language is just a smokescreen. If you refuse to engage these arguments and push through for even five minutes, the conversation will naturally lead to profound objections to climate change solutions....

      Even in the science community, there’s so much confusion over how to communicate. The deficit model—just give them the facts!—does not work in public discourse unless everybody is politically neutral. That’s why social science is increasingly important. I was the experimental method in a recent paper where a researcher asked me to speak at an evangelical Christian college. He asked the students about global warming before and after my talk and found statistically significant differences in their perspectives. Many people are now doing this kind of message testing. How humans interact with information is an emerging area of research that's desperately important.

      Scientists also tend to understate the impact of climate change….

      Look, we can’t fix all these issues—cultural, political, psychological—before we take necessary action on climate change. People say to me, “Well, if you could just get everyone onboard with the science....” I’m like, good luck with that! How did that work out the past few centuries? This climate problem is urgent. The window is closing. We have to fix it with the flawed, imperfect society we have today….

      My number-one piece of advice for people doing climate—or any science—outreach is, Don’t focus on the dismissive people. They’re really a very small part of the population, and they're primarily older white men….

Better Batteries

[This excerpt is from an article by Matthew Sedacca in the October 2017 issue of Scientific American.]

      Some owners of Samsung Galaxy Note7 smartphones learned the hard way last year that lithium-ion batteries, commonly used in many consumer electronics, can be flammable and even explosive. Such batteries typically rely on liquid electrolytes, which are made up of an organic solvent and dissolved salts. These liquids enable ions to flow between electrodes separated by a porous membrane, thus creating a current. But the fluid is prone to forming dendrites—microscopic lithium fibers that can cause batteries to short-circuit and heat up rapidly. Now research suggests that gas-based electrolytes could yield a more powerful and safer battery.

      …recently tested electrolytes composed of liquefied fluoromethane gas solvents, which can absorb lithium salts as well as their conventional liquid-based counterparts do. After the experimental battery was fully charged and drained 400 times, it held a charge nearly as long as it did when new; a conventional lithium-ion battery tends to last nearly 20 percent as long. The condensed-gas battery also generated no dendrites. The findings were published earlier this year in Science.

      If a standard lithium-ion battery is punctured—and the membrane separating the electrodes is pierced—the electrodes can come into contact and short-circuit. This causes the battery to overheat in the presence of its reactive lithium electrolyte and possibly catch fire (which is exacerbated by oxygen entering from outside). But fluoromethane liquefies only under pressure, so if the new batteries are punctured, the pressure releases, the liquid reverts to a gas and the gas can escape…

      The batteries perform well in temperatures as low as -60 degrees Celsius, unlike standard lithium-ion batteries, so they could power instruments in high-altitude drones and long-range spacecraft….

      Donald Sadoway, a professor of materials chemistry at the Massachusetts Institute of Technology, who was not involved in the study, says the new concept “opens our eyes to a class of liquids that has been understudied.” But, he adds, the researchers need to ensure that excessive heat does not cause the batteries’ liquefied gas to expand rapidly and lead to a dangerous increase in pressure.

The Distracted Student Mind – Enhancing Its Focus and Attention

[These excerpts are from an article by Larry D. Rosen in the October 2017 issue of Kappan.]

      For more than three decades….my research team and I have watched Americans move from an initial fear of computers to a state of wary acceptance to eager adaptation to what has become more or less an obsession with the tiny devices we now carry in our purses and pockets.

      What does this obsession mean for today’s students? Recent research findings are sobering:

      • Typically, college students unlock their phones 50 times a day, using them for close to 4½ hours out of every 24-hour cycle. Put another way, they check their phones every 15 minutes — all day long (and sometimes all night) — and they look at them for about five minutes each time.

      • Teenagers are almost always attempting to multitask, even when they know full well that they cannot do so effectively.

      • When teenagers have their phones taken away, they become highly anxious (and visibly agitated within just a few minutes).

      • The average adolescent or young adult finds it difficult to study for 15 minutes at a time; when forced to do so, they will spend at least five of those minutes in a state of distraction.

      …consider how many decades it took for wired telephones to fully penetrate American society. Cell phones took hold much more quickly, but even so, it took a couple of decades before cell phone use reached 50 million users (the benchmark for penetrating society, according to consumer scientists). Then came the World Wide Web, which hit 50 million users in just four years. More recently, MySpace took 2.5 years to do so, Facebook did it in two years, YouTube took just a single year, and Instagram hit the mark in a matter of months. If that seems fast, consider that both Angry Birds and Pokemon GO took just one month to garner 50 million users….

      The question is, what does this increasingly rapid influx of media and technologies do to us mentally, physically, and neurologically? More specifically…as young people are buffeted by one new communications technology after another, what happens to their ability to focus on the present?...

      This is a huge problem, given that sleep plays an absolutely critical role in learning, allowing us to consolidate important information, rid ourselves of unwanted information, and dispose of stray toxic molecules left in the brain during the day. The human body includes hormonal mechanisms that ensure that it gets the sleep it needs — as day turns to dusk, the pineal gland starts to secrete melatonin, which is a hormone that gradually makes people sleepy. However, most electronic devices emit light in the blue part of the light spectrum, which tells the pineal gland to shut down the melatonin and orders the adrenal gland to secrete cortisol, which wakes people up. The closer one holds the device to one’s eyes, the more blue light is absorbed and the more difficult it is to get a good night's sleep. The upshot is that 80% of today's teens say they rarely or never sleep well. The National Sleep Foundation recommends nine hours per night, but most teens now get far less than that. Most weeks, they accrue 12 hours of sleep debt, which can only be repaid by sleeping during the day (often in class)….

      What can educators do for students who’ve become used to accessing their smartphones all day long, are constantly distracted by texts and alerts, spend countless hours on social media, use their phones right up to bedtime, and rarely get a good night’s sleep? There’s no simple solution. For example, studies suggest that if we take away their phones, that only makes them anxious, impeding their learning. Plus, online conversations are their lifeblood, accounting for much, if not most, of their social lives. However, some simple strategies can help. Drawing from my own and others’ research, here’s what I recommend:

      #1. Make sure students understand that their brains need the occasional "reset."…

      #2. Help students build stamina for studying with tech breaks.…

      #3. Advise students to treat sleep as sacred….

      #4. Tell students to minimize the alerts and notifications….

      #5. Advise parents to create specific tech-free zones….

      I am hopeful, though, that with conscious effort we can help students strengthen their powers of attention. I've heard from many educators who have implemented the strategies described above and have seen students become less distracted by fears of missing the latest text or update. While these strategies require diligence, they are not difficult or complicated. And if you're skeptical that they can help students, then try them on yourself and your own family first — it shouldn’t take long before you begin to feel better able to control your “human-ware” and less like your hardware and software are controlling you.

Giant Shape-Shifters

[These excerpts are from an article by Annie Sneed in the October 2017 issue of Scientific American.]

      Paleontologists unearthed a strange sight in Newfoundland in the early 2000s: an ancient fossil bed of giant, frond-shaped marine organisms. Researchers had discovered these mysterious extinct creatures—called rangeomorphs—before, but they continue to defy categorization. Now scientists believe the Newfoundland fossils and their brethren could help answer key questions about life on Earth.

      Rangeomorphs date back to the Ediacaran period, which lasted from about 635 million to 541 million years ago. They had stemlike bodies that sprouted fractal-like branches and were soft like jellyfish. Scientists think these creatures grew to sizes until then unseen among animals—up to two meters long. After they went extinct, the planet saw an explosion of diverse large animal life during the Cambrian. “Rangeomorphs are part of the broader context of what was going on at this time in Earth’s history,” says study coauthor Jennifer Hoyal Cuthill, a paleobiology research fellow at the Tokyo Institute of Technology. Figuring out how rangeomorphs grew to such great sizes could help provide context for understanding how big, diverse animals originated and how conditions on Earth—which were shifting around this time—may have affected the evolution of life….

      The researchers examined various aspects of the rangeomorphs’ stems and branches, then used mathematical models to investigate the relation between the fossils’ surface areas and volumes. Their models, combined with the fossil observations, revealed that the organisms’ size and shape appeared to be governed by the amount of available nutrients….This may explain why they could reach such large sizes during a period when Earth's geochemistry was changing.
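
      [The article does not give the model’s details. As a generic illustration of the geometry involved: an organism that takes up nutrients across its surface needs a high surface-area-to-volume ratio, and for a compact body that ratio falls as size grows. The sphere comparison below is my own sketch of that trade-off, not the authors’ model.]

import math

def sphere_sa_to_v(radius_m):
    """Surface-area-to-volume ratio of a sphere; works out to 3/radius."""
    surface = 4 * math.pi * radius_m**2
    volume = (4 / 3) * math.pi * radius_m**3
    return surface / volume

print(sphere_sa_to_v(0.005))  # ~600 per meter for a 5 mm body
print(sphere_sa_to_v(1.0))    # ~3 per meter for a meter-scale body
# A compact meter-scale body has a far lower relative surface area, which is
# why fractal branching (maximizing surface per unit volume) would matter for
# organisms thought to feed through their surfaces.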

      But other experts are hesitant to generalize in this way….

Angels and Men

[These excerpts are from an article by Claudia Roth Pierpont in the October 16, 2017, issue of The New Yorker.]

      …There is little doubt that Leonardo was arrested. Although any time he may have spent in jail was brief, and the case was dismissed….It is impossible to know if this affected the artist’s habit, later cited as a mark of his character, of buying caged birds from the market, just to set them free. But it does seem connected with the drawings he made, during the next few years, of two fantastical inventions: a machine that he explained was meant “to open a prison from the inside,” and another for tearing the bars off windows.

      These drawings are part of a vast treasure of texts and images, amounting to more than seven thousand surviving pages, now dispersed across several countries and known collectively as “Leonardo’s notebooks”—which is precisely what they were. Private notebooks of all sizes, some carried about for quick sketches and on-the-spot observations, others used for long-term, exacting studies in geology, botany, and human anatomy, to name just a few of the areas in which he posed fundamental questions, and reached answers that were often hundreds of years ahead of his time. Why is the sky blue? How does the heart function? What are the differences in air pressure above and beneath a bird’s wing, and how might this knowledge enable man to make a flying machine? Music, engineering, astronomy. Fossils and the doubt they cast on the Biblical story of creation….He intended publication, but never got around to it; there was always something more to learn. In the following centuries, at least half the pages were lost. What survives is an unparalleled record of a human mind at work, as fearless and dogged as it was brilliant….

      A chariot fitted with enormous whirling blades, slicing men in half or cutting off their legs, leaving pieces scattered; guns with multiple barrels arranged like organ pipes to increase the speed and intensity of firing; a colossal missile-launching crossbow. Leonardo made many such frightening drawings…. He had never demonstrated any military skills before, and his intention in these drawings remains a matter of dispute. Was he an unworldly visionary or a conscienceless inventor?...

Scaredy-Cats

[These excerpts are from an article by Jason G. Goldman in the October 2017 issue of Scientific American.]

      Humans kill large carnivores—a category of animals that includes wolves, bears, lions, tigers and pumas—at more than nine times their mortality rate in the wild. Although they may not be our prey in the traditional sense, new research shows that some of the world’s biggest carnivores are responding to humans in a way that resembles how prey animals react to predators. Biologists at the Santa Cruz Puma Project, an ongoing research effort in the mountains of California’s central coast, report that even the formidable puma, or mountain lion, shows its fearful side when people are around.

      In a recent study, the researchers followed 17 mountain lions outfitted with GPS collars to the animals’ deer kill sites. Once the cats naturally left the scene between feedings, ecologist Justine A. Smith…and her team trained motion-activated cameras on the prey carcasses. On the animals’ return, the cameras triggered nearby speakers, which broadcast recordings of either frogs croaking or humans conversing.

      The pumas almost always fled immediately on hearing the human voices, and many never returned to resume feeding or took a long time to do so. But they only rarely stopped eating or fled when they heard the frogs. They also spent less than half as much time feeding during the 24 hours after first hearing human chatter, compared with hearing the frogs….

      The human presence in such a situation has far-reaching consequences. A previous study found that Santa Cruz pumas living near residential areas killed 36 percent more deer than those in less populated places. The new finding could explain why: if the cats are scared away from their kills before they finish feeding, they may be taking more prey to compensate. And fewer deer could mean more plants go uneaten, according to Chris Darimont, a professor of conservation science at the University of Victoria in British Columbia, who was not involved in the study. Thus, fear of humans may alter the entire food chain.

      “Humans are the most significant source of mortality for pumas in this population even though [the cats are] not [legally] hunted” for food or sport, Smith says. Many are hunted illegally, struck by vehicles or legally killed by governmental agencies as a means of protecting livestock. “So they have good reason to be fearful of us,” she adds. Darimont predicts other large carnivores would show similar responses because humans have effectively become the planet's apex predators—even if we often do not eat what we kill. “I expect this to be common because the human predator preys on just about every medium-to-large vertebrate on the planet,” he says. “And at very high rates.”

Put Science Back in Congress

[This excerpt is from an editorial in the October 2017 issue of Scientific American.]

      The White House and Congress have lost their way when it comes to science. Notions unsupported by evidence are informing decisions about environmental policy and other areas of national interest, including public health, food safety, mental health and biomedical research. Evidently, the president has not asked for much advice from his Office of Science and Technology Policy.

      The congressional committees that craft legislation on these matters do not even have formal designated science advisers. That’s a big problem. Take the House Committee on Science, Space, and Technology. Its leader, Republican Representative Lamar Smith of Texas, clearly misunderstands the scientific process, which includes assessment by independent peer reviewers prior to publication. The result has been a nakedly antiscience agenda. The committee has packed its hearings with industry members as witnesses instead of independent researchers. Democratic members have felt compelled to hold alternative hearings because they feel Smith has not allowed the real experts to speak. Smith’s misinformed leadership has made it clear that congressional science committees need to be guided by genuinely objective experts.

      So far this year, Smith and fellow committee member Representative Frank Lucas of Oklahoma have each introduced bills that would seriously weaken the Environmental Protection Agency. Lucas’s bill would help stack the EPA’s Science Advisory Board with industry representatives and supporters. Smith’s—the Honest and Open New EPA Science Treatment (HONEST) Act—would make it harder for the EPA to create rules based on good research. As Rush Holt, CEO of the American Association for the Advancement of Science, a former representative and a nuclear physicist, said of an earlier version of the bill, this sort of legislation is nothing less than an attempt to “fundamentally substitute a [political] process for the scientific process.”

      This is lunacy. We should not allow elected officials—especially the heads of congressional science committees—to interfere with the scientific process, bully researchers or deny facts that fit poorly with their political beliefs. Instead of seeing science as a threat, officials should recognize it as an invaluable tool for improving legislation.

The Most Effective Climate Change Solutions Are Rarely Discussed

[These excerpts are from a current news article in the October 2017 issue of The Science Teacher.]

      Governments and schools are not communicating the most effective ways for individuals to reduce their carbon footprints, according to new research. The four actions that most substantially decrease an individual’s carbon footprint are eating a plant-based diet, avoiding air travel, living car-free, and having smaller families.

      The study found that the incremental changes advocated by governments are unlikely to reduce greenhouse gas emissions to the levels needed to prevent 2°C of climate warming…

      Lead author Seth Wynes said: “We found that [the four actions] could result in substantial decreases in an individual’s carbon footprint. For example, living car-free saves about 2.4 tons of CO2 equivalent per year, while eating a plant-based diet saves 0.8 tons of CO2 equivalent a year.

      “These actions have much greater potential to reduce emissions than commonly promoted strategies like comprehensive recycling (which is four times less effective than a plant-based diet) or changing household lightbulbs (eight times less effective).”
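
      [The stated ratios imply rough figures for the weaker strategies: one quarter and one eighth of the plant-based diet’s 0.8 tons. A short sketch of that arithmetic; the derived values are inferred from the ratios, not printed in the article.]

# Annual savings in tons of CO2 equivalent, using the article's figures.
savings = {
    "living car-free": 2.4,              # stated in the article
    "plant-based diet": 0.8,             # stated in the article
    "comprehensive recycling": 0.8 / 4,  # "four times less effective" -> ~0.2
    "changing lightbulbs": 0.8 / 8,      # "eight times less effective" -> ~0.1
}
for action, tons in sorted(savings.items(), key=lambda kv: -kv[1]):
    print(f"{action}: {tons:.1f} tons CO2e per year")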

      The researchers also found that neither Canadian school textbooks nor government resources from the E.U., U.S., Canada, or Australia highlight these actions, instead focusing on incremental changes with much smaller potential to reduce emissions.

      Study co-author Kimberly Nicholas said: “We recognize these are deeply personal choices. But we can't ignore the climate effect our lifestyle actually has. Personally, I've found it really positive to make many of these changes. It's especially important for young people establishing lifelong patterns to be aware which choices have the biggest impact. We hope this information sparks discussion and empowers individuals.”…

Shellfish Secrets Could Help Save Soldiers

[These excerpts are from an article by David L. Chandler in the September/October 2017 issue of Technology Review.]

      The shells of marine organisms take a beating as they get propelled onto rocky shores by storms and tides or chomped by sharp-toothed predators. But recent research has shown that one type of shell stands above all others in its toughness: the conch. Now, an MIT team has explored the secrets behind these shells’ extraordinary impact resilience—and they've shown that this strength could be reproduced in engineered materials, leading to superior protective headgear and body armor.

      Conch shells “have this really unique architecture….” This internal structure makes the material 10 times as resistant to fractures as nacre, or mother-of-pearl—the shell’s basic material. The key is that it forms a “zigzag matrix….”

      Protective helmets and other impact-resistant gear require a combination of strength and toughness….Strength refers to a material’s ability to resist damage, which steel does well, for example. Toughness, on the other hand, refers to a material's ability to dissipate energy, as rubber does. Traditional helmets use a metal shell for strength and a flexible liner for both comfort and energy dissipation. But in the new composite material, this combination of qualities is distributed through the whole material.

      The printing technology would make it possible to form the conch-inspired material into individualized helmets or body armor….

The Myth of the Skills Gap

[These excerpts are from an article by Andrew Weaver in the September/October 2017 issue of Technology Review.]

      The contention that America’s workers lack the skills employers demand is an article of faith among analysts, politicians, and pundits of every stripe, from conservative tax cutters to liberal advocates of job training. Technology enthusiasts and entrepreneurs are among the loudest voices declaiming this conventional wisdom….

      Two recent developments have heightened debate over the idea of a “skills gap”: an unemployment rate below 5 percent, and the growing fear that automation will render less-skilled workers permanently unemployable.

      Proponents of the idea tell an intuitively appealing story: information technology has hit American firms like a whirlwind, intensifying demand for technical skills and leaving unprepared American workers in the dust. The mismatch between high employer requirements and low employee skills leads to bad outcomes such as high unemployment and slow economic growth.

      The problem is, when we look closely at the data, this story doesn’t match the facts. What’s more, this view of the nation’s economic challenges distracts us from more productive ways of thinking about skills and economic growth while promoting unproductive hand-wringing and a blinkered focus on only the supply side of the labor market—that is, the workers.

      Although much research touches on this topic, almost none of the existing studies directly measure skills, the key quantity of interest. I have conducted a series of nationally representative skill surveys covering a range of technical occupations: manufacturing production workers, IT help-desk technicians, and laboratory technologists. The surveys specifically target managers with knowledge of both hiring and operations at their businesses. The basic strategy is to ask: what skills do employers demand, and do the employers that demand high skill levels have trouble hiring workers?

      The results yield a number of surprises. First, persistent hiring problems are less widespread than many pundits and industry representatives claim….

      …Given a tighter labor market and higher educational requirements for these entry-level technical jobs, it would be reasonable to expect hiring to be more difficult. Not so. Only 15 percent of IT help desks report extended vacancies in technician positions. While the results do show higher levels of long-term lab-tech openings, it turns out that many of these are concentrated in the overnight shift and thus reflect inadequate compensation for difficult working conditions, not a structural skill deficiency. A little over a quarter of clinical diagnostic labs report at least one long-term vacancy.

      The survey results do show some hiring challenges, but not for the reasons posited by the conventional skill-gap narrative. In fact, the data reveal that high-tech and cutting-edge establishments do not have greater hiring difficulties than other establishments. Furthermore, the data imply that we should be careful about calling for more technical skills without specifying which skills we are talking about. It is quite common to hear advocates—and even academics—assert that the answer to the nation's labor-market and economic-growth challenges is for workers to acquire more science, technology, engineering, and mathematics (STEM) skills. However, my data show that employers looking for higher-level computer skills generally do not have a harder time filling job openings. Manufacturers requiring higher-level math do sometimes have more hiring challenges, but math requirements are not a problem for IT help desks or clinical labs.

      So what are the skill requirements most consistently associated with hiring difficulties? In manufacturing, it’s higher-level reading, while for help-desk technicians it's higher-level writing. Proponents of the skill-gap theory sometimes assert that the problem, if not a lack of STEM skills, is actually the result of a poor attitude or inadequate soft skills among younger workers. But while demand for a few soft skills—like the ability to initiate new tasks without guidance from management—is occasionally predictive of hiring problems, most soft-skill demands, including requirements for cooperation and teamwork, are not.

      This is not to say that STEM or soft skills are not enormously useful. However, specific recommendations and courses of study need to be tightly connected to particular occupational requirements and employer needs….

      Even economists and labor-market experts don’t know the exact mix or level of skills that particular occupations demand….

      The danger is not that we will run out of tasks humans can usefully perform or that required skill levels will be catastrophically high; it's that misguided anxiety about skill gaps will lead us to ignore the need to improve coordination between workers and employers. It’s this bad coordination—not low-quality workers—that presents the real challenge.

A Quick Fix for Rush Hour

[These excerpts are from an article by Peter Dizikes in the September/October 2017 issue of Technology Review.]

      Cities plagued with terrible traffic may be overlooking a simple, low-cost solution: high-occupancy-vehicle (HOV) policies can reduce traffic drastically, according to a new study coauthored by MIT economists.

      In Jakarta, Indonesia, travel delays became 46 percent worse during the morning rush hour and 87 percent worse during the evening rush hour after a policy requiring three or more individuals in a car was discontinued on important city-center roads….

      Jakarta instituted HOV regulations in 1992 to address its notoriously bad traffic, requiring three people in each vehicle on some major roads between 7 and 10 A.M. and between 4:30 and 7 P.M. However, many commuters picked up so-called jockeys—people willing to ride in their cars for a small fee—instead of carpooling. Skeptics contended that the policy therefore wasn’t actually reducing the number of vehicles on the roads. So Jakarta scrapped it in 2016—first for a week, then for a month, and then permanently….

      After the HOV policy was abandoned, the average speed of Jakarta’s rush-hour traffic declined from about 17 to 12 miles per hour in the mornings, and from about 13 to seven miles per hour in the evenings. By comparison, people usually walk at around three miles per hour.
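
      [The speed and delay figures are consistent with each other: travel time over a fixed route scales inversely with speed, so these drops imply trips roughly 42 and 86 percent longer, close to the reported 46 and 87 percent increases in delay. A quick check of the arithmetic; note that "delay" and total travel time are not identical measures, so the match is approximate.]

# Travel time over a fixed distance scales as 1/speed.
speeds_mph = {"morning": (17, 12), "evening": (13, 7)}  # before/after repeal
for period, (before, after) in speeds_mph.items():
    increase_pct = (before / after - 1) * 100
    print(f"{period}: trips ~{increase_pct:.0f}% longer")
# morning: ~42% longer; evening: ~86% longer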

Sleeping with the Jellyfish

[This brief article appeared in the September 29, 2017, issue of Science.]

      We think of sleep as restoring our brains: a time to process memories, cleanse our cells of toxins, and prepare for a new day. But even animals that lack brains need to snooze. Biologists have discovered that, like humans, jellyfish appear to enter a daily state of rest and are groggy when they “wake up.” Scientists observed that 23 jellyfish in the genus Cassiopea slowed their rhythmic pulsing from 60 pulses per minute during the day to just 39 per minute at night, and were slow to respond when they were moved around their tanks during “resting” hours. When the researchers put them through the jellyfish version of sleep deprivation—squirting water at them every 20 minutes for 6 to 12 hours every night—the jellyfish were not nearly as active the next morning, they reported last week in Current Biology. Because these creatures are very low on the animal family tree, the work suggests that the ability to sleep evolved quite early.

Russia Heightens Defenses against Climate Change

[This excerpt is from an article by Angela Davydova in the September 23, 2017, issue of Science.]

      When a squall tore through Moscow at the end of May, the toll was unusually high: The fierce gales killed 16 people and injured scores more, officials say, and inflicted about $3.5 billion in damages in Russia’s capital region….

      The new charge to the environment ministry reflects a sea change in Russia’s views about climate change and how the nation must respond. Politicians have acknowledged that extreme weather events have doubled over the past 25 years, to 590 in 2016, and that average temperatures are rising, particularly in the Arctic. Yet until recently, tackling climate change was a low priority for the federal government. One reason is complacence, because Russia's greenhouse gas emissions have already plummeted since the collapse of the Soviet Union. Another is political: Russia’s economy depends heavily on pumping oil and gas out of the ground. Many influential voices here routinely debunked climate change, and some Russian newspapers in recent years chalked up climate variability to a mythical U.S. weapon aimed at Russia, or to a foreign plot aimed at Russia’s energy exports….

      Unease spread nationwide this summer, after forest fires razed 4.6 million hectares of Siberian taiga and flooding ravaged the Far East. The mosquito-borne West Nile virus has made gains in southern Russia, and tick-borne encephalitis and Lyme disease are spreading in the north. Officials as well as scientists blame those disturbing patterns on climate change….

Refilling the Coral Reef Glass

[These excerpts are from an editorial by David Obura in the September 23, 2017, issue of Science.]

      Coral reefs around the world have suffered from 3 years of coral bleaching, following three decades of record high temperatures. It is now clear that coral reefs cannot survive, unchanged, under climate change. Their final state will depend not only on societal conviction to restore coral health but also on the ability to sustain investments and action that support this commitment.

      For the past 50 years, warnings of anthropogenic climate change and evidence of the impacts of increasing populations, resource consumption, and energy use worldwide went largely unheeded. During this time, local impacts were transformed into global ones. But in the past 2 years, world leaders signaled a sea change by signing the Paris Agreement and by adopting the United Nations Sustainable Development Goals (SDGs). The Paris Agreement's target of less than 2°C increase in global temperatures provides the only chance for coral reef survival. If the agreement is fully implemented, temperatures will eventually decline, improving conditions for surviving reef corals and for reef rescue….

      Support and expansion of conservation efforts in all ocean basins are urgently needed, but they may only succeed if temperatures stabilize under low greenhouse gas emission scenarios. Two things will eliminate any chance for coral reef survival: not dealing with nonclimate stresses that erode reef resilience, and runaway warming above 2°C. If these occur, it is virtually certain that major reef systems will not survive in the Anthropocene.

Eliminating the Human

[These excerpts are from an op-ed article by David Byrne in the September/October 2017 issue of Technology Review.]

      I have a theory that much recent tech development and innovation over the last decade or so has an unspoken overarching agenda. It has been about creating the possibility of a world with less human interaction. This tendency is, I suspect, not a bug—it’s a feature. We might think Amazon was about making books available to us that we couldn’t find locally—and it was, and what a brilliant idea—but maybe it was also just as much about eliminating human contact.

      The consumer technology I am talking about doesn't claim or acknowledge that eliminating the need to deal with humans directly is its primary goal, but it is the outcome in a surprising number of cases. I’m sort of thinking maybe it is the primary goal, even if it was not aimed at consciously. Judging by the evidence, that conclusion seems inescapable….

      Human interaction is often perceived, from an engineer’s mind-set, as complicated, inefficient, noisy, and slow. Part of making something “frictionless” is getting the human part out of the way. The point is not that making a world to accommodate this mind-set is bad, but that when one has as much power over the rest of the world as the tech sector does over folks who might not share that worldview, there is the risk of a strange imbalance. The tech world is predominantly male—very much so. Testosterone combined with a drive to eliminate as much interaction with real humans as possible for the sake of “simplicity and efficiency”—do the math, and there’s the future….

      Minimizing interaction has some knock-on effects—some of them good, some not. The externalities of efficiency, one might say.

      For us as a society, less contact and interaction—real interaction—would seem to lead to less tolerance and understanding of difference, as well as more envy and antagonism. As has been in evidence recently, social media actually increases divisions by amplifying echo effects and allowing us to live in cognitive bubbles. We are fed what we already like or what our similarly inclined friends like (or, more likely now, what someone has paid for us to see in an ad that mimics content). In this way, we actually become less connected—except to those in our group.

      Social networks are also a source of unhappiness. A study earlier this year by two social scientists, Holly Shakya at UC San Diego and Nicholas Christakis at Yale, showed that the more people use Facebook, the worse they feel about their lives. While these technologies claim to connect us, then, the surely unintended effect is that they also drive us apart and make us sad and envious….

      We have evolved as social creatures, and our ability to cooperate is one of the big factors in our success. I would argue that social interaction and cooperation, the kind that makes us who we are, is something our tools can augment but not replace.

      When interaction becomes a strange and unfamiliar thing, then we will have changed who and what we are as a species. Often our rational thinking convinces us that much of our interaction can be reduced to a series of logical decisions—but we are not even aware of many of the layers and subtleties of those interactions. As behavioral economists will tell us, we don't behave rationally, even though we think we do….

      Humans are capricious, erratic, emotional, irrational, and biased in what sometimes seem like counterproductive ways. It often seems that our quick-thinking and selfish nature will be our downfall. There are, it would seem, lots of reasons why getting humans out of the equation in many aspects of life might be a good thing.

      But I’d argue that while our various irrational tendencies might seem like liabilities, many of those attributes actually work in our favor. Many of our emotional responses have evolved over millennia, and they are based on the probability that they will, more likely than not, offer the best way to deal with a situation….

      We’re a social species—we benefit from passing discoveries on, and we benefit from our tendency to cooperate to achieve what we cannot alone….

      “We” do not exist as isolated individuals. We, as individuals, are inhabitants of networks; we are relationships. That is how we prosper and thrive.

My Zero-Waste Life

[These excerpts are from an article by Bradley Layton in the September/October 2017 issue of Technology Review.]

      …Aren’t we all persistently, consistently, and incessantly metabolizing organic and inorganic matter?

      Life itself is defined by metabolism. Fail to metabolize, and you die. The deep ecological question of our hypersuccessful global species now becomes: how do we bring every molecule, every atom, full circle indefinitely as we shuttle matter back and forth between Earth’s biosphere and humanity’s technosphere?...

      Along the way, it’s become abundantly clear to me that we cannot afford to keep extracting and using fossil fuels 10 million times faster than they were created—particularly when their use generates waste in the form of carbon dioxide, methane, and oceans of plastic. (It does not take an MIT degree to understand this concept; when I talk to high schoolers about it, they quickly catch on.)…

      As our population approaches 10 billion, it's more critical than ever that we live sustainably—and clean up after ourselves.

The Unsustainable Scientist

[These excerpts are from an article by Jeffrey J. McDonnell in the September 15, 2017, issue of Science.]

      But how? A sustainable scientist is still a hard-working scientist. Combining hard work with laserlike focus and ruthless time management is an important step toward making your life sustainable. Even more important is opportunity management.

      Early on, I worried that each new opportunity….I felt like I couldn't say no. I now understand that early-career opportunities, like gray hairs, don’t stop appearing, and that sometimes it’s important to turn them down so that you can complete the things you’ve already said yes to. As you work on learning to say no, the “want to-need to” matrix can be a useful tool: Say yes only to the things you both need and want to do, and say no if you either do not need or do not want to do something.

      Similarly, developing a personal work philosophy can help you allocate your most precious resource: time…

Darwin, the Crowdsourcer

[These excerpts are from a book review by Christopher Kemp in the September 15, 2017, issue of Science.]

      At Down House in Kent, Charles Darwin is instructing his manservant Parslow to lower another dead pigeon into a foul-smelling pot. It is February 1856. Darwin is 47 years old. Endlessly curious, he has begun to suspect that the skeletons of different pigeon varieties will support his secretly held ideas about how species are related and have changed throughout history….

      Things began inauspiciously enough. In 1831, inspired by the works of naturalist Alexander von Humboldt, Darwin—a medical school dropout—embarked on a 5-year voyage aboard the HMS Beagle.

      He was 22 years old, was constantly seasick, and was Captain FitzRoy’s third choice for the position of ship's naturalist. But Darwin’s keen mind questioned everything….

      After his return, Darwin’s ideas about evolution began to take shape, but he knew that he needed evidence to support them.

      Whenever possible, he crowdsourced his data, enlisting the help of his children (he had ten, three of whom died in childhood), for example. Often, they were sent across the Kentish fields with specific work orders: “Collect a hundred Lythrum plants and bring them home,” or “Track the routes of the bees that crisscross the clover-studded meadows.”

      Parslow the manservant helped, too, as did Darwin’s long-suffering wife, Emma, who watched with dismay as he carpeted the hallway of Down House with paper covered in frog spawn. Catherine Thorley, the children's governess, assisted in the completion of a painstaking plant survey of nearby meadows. Schoolmaster Ebenezer Norman tabulated all of Darwin’s data for him. The local vicar helped build beehives.

      Researchers also offered vital information. Kew Gardens botanist Joseph Hooker, for example, provided Darwin with fresh orchid specimens, sent express from London in sealed cans. (Despite ample evidence, Victorian scientists simply refused to believe that many plant species reproduce sexually. The idea that the orchids might be having sex in the parlor—in front of the children!—was simply too horrid to entertain. But it was so. And, with Hooker’s help, Darwin proved it.)

Defending Science

[This excerpt is from the start of Defending Science – within Reason, by Susan Haack.]

      Attitudes to science range all the way from uncritical admiration at one extreme, through distrust, resentment, and envy, to denigration and outright hostility at the other. We are confused about what science can and what it can't do, and about how it does what it does; about how science differs from literature or art; about whether science is really a threat to religion; about the role of science in society and the role of society in science. And we are ambivalent about the value of science. We admire its theoretical achievements, and welcome technological developments that improve our lives; but we are disappointed when hoped-for results are not speedily forthcoming, dismayed when scientific discoveries threaten cherished beliefs about ourselves and our place in the universe, distrustful of what we perceive as scientists’ arrogance or elitism, disturbed by the enormous cost of scientific research, and disillusioned when we read of scientific fraud, misconduct, or incompetence.

A Looming Tragedy of the Sand Commons

[These excerpts are from an article by Aurora Torres, Jodi Brandt, Kristen Lear and Jianguo Liu in the September 8, 2017, issue of Science.]

      Between 1900 and 2010, the global volume of natural resources used in buildings and transport infrastructure increased 23-fold. Sand and gravel are the largest portion of these primary material inputs (79% or 28.6 gigatons per year in 2010) and are the most extracted group of materials worldwide, exceeding fossil fuels and biomass. In most regions, sand is a common-pool resource, i.e., a resource that is open to all because access can be limited only at high cost. Because of the difficulty in regulating their consumption, common-pool resources are prone to tragedies of the commons as people may selfishly extract them without considering long-term consequences, eventually leading to overexploitation or degradation. Even when sand mining is regulated, it is often subject to rampant illegal extraction and trade. As a result, sand scarcity is an emerging issue with major sociopolitical, economic, and environmental implications.

      Rapid urban expansion is the main driver of increasing sand appropriation, because sand is a key ingredient of concrete, asphalt, glass, and electronics. Urban development is thus putting more and more strain on limited sand deposits, causing conflicts around the world. Further strains on sand deposits arise from escalating transformations in the land-sea interface as a result of burgeoning coastal populations, land scarcity, and rising threats from climate change and coastal erosion. Even hydraulic fracturing is among the plethora of activities that demand the use of increasing amounts of sand. In the following, we identify linkages between sand extraction and other global sustainability challenges.

      Sand extraction from rivers, beaches, and seafloors affects ecosystem integrity through erosion, physical disturbance of benthic habitats, and suspended sediments. Thus, extensive mining is likely to place enormous burdens on habitats, migratory pathways, ecological communities, and food webs….

      Such environmental impacts have cascading effects on the provisioning of ecosystem services and human well-being. For example, sand mining is a frequent cause of shoreline and river erosion and destabilization, which undermine human resilience to natural hazards such as storm surges and tsunami events, especially as sea level continues to rise…

Reasoning versus Post-truth

[These excerpts are from a commentary article by Wayne Melville in the September 2017 issue of The Science Teacher.]

      Empirical evidence and reasoning have not always been at the heart of the scientific enterprise. Evidence-based reasoning evolved in response to beliefs that were increasingly untenable to early natural philosophers. In the early 1600s, the first scientific academies were established in part to uphold the primacy of experiment in questions about the natural world. Such a stance was counter to scholasticism, the dominant medieval method of learning “rooted in Aristotle and endorsed by the Church ….”

      Synthesizing Christianity and Aristotelian thought, scholasticism viewed the universe as simultaneously religious and physical. The scholastic reaction to the heliocentrism put forth in the 1543 publication of De revolutionibus orbium coelestium is entirely understandable: Copernicus challenged not just a “scientific” model of the universe but also a view of man’s place in creation.

      The difficulty that philosopher and scientist Francis Bacon had with deductive scholasticism was that it was static, not permitting new knowledge to develop. By introducing and promoting induction as a method for studying nature, Bacon profoundly influenced the course of scientific inquiry….

      …After Galileo died in 1642, both Grand Duke Ferdinando II and his brother Prince (later Cardinal) Leopoldo recognized the political value of continuing to support Galileo’s experimental practices.

      This led to Leopoldo’s creation of the scientific Accademia del Cimento in 1657. In 1664, the Accademician Francesco Redi recorded that Leopoldo was interested in science “not for vain or idle diversion, but rather to find in things the naked, pure, genuine truth…” Leopoldo’s commitment to experimentation was captured in the Accademia’s motto: Provando e riprovando (Test and Test Again)….

      The Accademicians regarded experimentation as central to the practice of science, directly in contrast to both Aristotle and the Church….

      …In doing so, the Accademicians worked to remove any reference to philosophy or mythological cosmology from experimental science and so establish the authority of experimentation in questions about the natural world. This was an overt challenge to the prevailing scholastic view of the natural world.

      To reason from evidence is not simple, as it opens the evidence to speculation and argumentation. The Accademicians often struggled to reconcile their interpretations of the experimental data….A particular point of contention within the Accademia was the range of views about the relationship between experimentation and the still powerful approach of Aristotle….

      The work of the Accademia set out the need for replicable tests, the control of variables, and the standardization of measurement and instrumentation. It also demonstrated that modern science is more than just knowledge; science is a human endeavor based on curiosity about the natural world, observation, argument, creativity, and reason….As science teachers, we must model, teach, and practice these qualities if we are to engage our students with the need for evidence and reasoned argument.

      …As educators, our challenge is to use our authority in the classroom to engage, alongside our students and as learners ourselves, with all of the practices of science, and thus build trust in those practices.

      Post-truth relies on the distrust of both the sources and value of information. This loss of trust in institutions and academic disciplines—including science—along with the wide availability of misinformation that conforms to what people want to hear, diminishes expertise and learning. Drawing from history, we can give students the tools and attitudes needed to challenge those who would devalue reason so that reasoned decision-making can triumph. Just as the Accademicians challenged scholasticism and eventually prevailed, so must we challenge the very idea of post-truth.

With Ice Cream, Fattier May Not Be Tastier

[These excerpts are from a “Current News” article in the September 2017 issue of The Science Teacher.]

      Researchers have found that people generally cannot tell the difference between fat levels in ice creams.

      In a series of taste tests, participants could not distinguish a 2% difference in fat levels in two vanilla ice cream samples as long as the samples were in the 6-12% fat-level range. While the subjects could detect a 4% difference between ice cream with 6% and 10% fat levels, they could not detect a 4% fat difference in samples between 8% and 12% fat….

      The researchers also found that fat levels did not significantly sway consumers' preferences in taste. The consumers’ overall liking of the ice cream did not change when fat content dropped from 14% to 6%, for example….

      The study may challenge some ice-cream marketing that suggests ice creams with high fat levels are higher-quality and better-tasting products, according to researchers….

How Civilization Started

[This excerpt is from an article by John Lanchester in the September 18, 2017, issue of The New Yorker.]

      Science and technology: we tend to think of them as siblings, perhaps even as twins, as parts of STEM (for “science, technology, engineering, and mathematics”). When it comes to the shiniest wonders of the modern world—as the supercomputers in our pockets communicate with satellites—science and technology are indeed hand in glove. For much of human history, though, technology had nothing to do with science. Many of our most significant inventions are pure tools, with no scientific method behind them. Wheels and wells, cranks and mills and gears and ships’ masts, clocks and rudders and crop rotation: all have been crucial to human and economic development, and none historically had any connection with what we think of today as science. Some of the most important things we use every day were invented long before the adoption of the scientific method. I love my laptop and my iPhone and my Echo and my G.P.S., but the piece of technology I would be most reluctant to give up, the one that changed my life from the first day I used it, and that I'm still reliant on every waking hour—am reliant on right now, as I sit typing—dates from the thirteenth century: my glasses. Soap prevented more deaths than penicillin. That’s technology, not science.

      In “Against the Grain: A Deep History of the Earliest States,” James C. Scott, a professor of political science at Yale, presents a plausible contender for the most important piece of technology in the history of man. It is a technology so old that it predates Homo sapiens and instead should be credited to our ancestor Homo erectus. That technology is fire. We have used it in two crucial, defining ways. The first and the most obvious of these is cooking. As Richard Wrangham has argued in his book “Catching Fire,” our ability to cook allows us to extract more energy from the food we eat, and also to eat a far wider range of foods. Our closest animal relative, the chimpanzee, has a colon three times as large as ours, because its diet of raw food is so much harder to digest. The extra caloric value we get from cooked food allowed us to develop our big brains, which absorb roughly a fifth of the energy we consume, as opposed to less than a tenth for most mammals' brains. That difference is what has made us the dominant species on the planet.

      The other reason fire was central to our history is less obvious to contemporary eyes: we used it to adapt the landscape around us to our purposes. Hunter-gatherers would set fires as they moved, to clear terrain and make it ready for fast-growing, prey-attracting new plants. They would also drive animals with fire. They used this technology so much that, Scott thinks, we should date the human-dominated phase of earth, the so-called Anthropocene, from the time our forebears mastered this new tool.

Our Connected Struggle

[These excerpts are from an article by Michelle Chan in the Summer 2017 issue of Friends of the Earth Newsmagazine.]

      Environmentalists understand one thing above all others: in any ecosystem, everything is interconnected. Bears rely on salmon, which rely on rivers, which rely on trees - all the way down to the microorganisms that sustain our soil. Every part plays a critical role, even if we don’t understand it. All of life - and our fates - are intertwined….

      Nature inspires hope and awe: The mind-boggling 3,000-mile migration of the monarch butterfly, alight on paper-thin wings; the damp quiet of ancient redwoods, which have stood in witness to history for thousands of years….

      …Excludes nothing and no one. Our fraternity with one another, and with nature, cannot exclude Muslims, immigrants, people of color, or those who disagree with us politically. This communion does not build walls or prohibition lists; it doesn't divide the world between “worthy” and “unworthy” people, or into places to be protected and those to be sacrificed. As our nation becomes increasingly divided and laden with hate, fear and militarism, this flies against nature’s most powerful and enduring lesson: everything and everyone is intrinsically valuable and vital to the whole….

      Our environmental ethic requires us to consider not only our planet's crisis, but our role in creating and solving it - which includes both ecological and social dimensions. We begin by making changes in our own lives to get in “right relationship” with the planet and with others. Then we take that to the systemic level, to transform the institutions that perpetuate environmental and social injustice - understanding that these are interconnected….

      Nature teaches us that all things are interconnected and intrinsically valuable. As environmentalists, we have the opportunity, and responsibility, to let our interconnectedness inspire and animate us to overcome these turbulent and dangerous times. Let us exclude nothing and no one.

Measuring and Managing Bias

[These excerpts are from an editorial by Jeremy Berg in the September 1, 2017, issue of Science.]

      As someone who grew up with a mother who was a medical researcher, who has been married to a woman very active in scientific research for more than 30 years, and who has had many female colleagues and students, I was surprised when I first took a test to measure implicit gender bias and found that I have a strong automatic association between being male and being involved in science. We all carry a range of biases that are products of our culture and experiences or, in some cases, of outcomes that we desire. Fortunately, many such biases can be measured and, in some cases, effectively managed. A key is to first acknowledge their presence and then to take steps to minimize their influence on important decisions and results.

      Implicit biases—those that we are not consciously aware of—might seem difficult to demonstrate or quantify. However, implicit association tests…can be a useful tool for achieving this. IATs are based on measuring the times needed to classify attributes in a simple computer exercise that takes about 5 minutes to complete….I have found the same strong automatic association between male and science and between female and liberal arts. The good news is that direct awareness of one's own implicit biases can reduce their impact on outcomes, at least in some circumstances. I sometimes catch myself assuming that a scientist is male, and then remind myself of my implicit bias test results, and try to think deliberately to avoid making such assumptions.
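
      [The editorial describes IATs only at a high level. One standard way to summarize such timing data is a standardized latency difference: the mean response time in the "incongruent" pairing block minus that in the "congruent" block, divided by the pooled standard deviation. The sketch below illustrates that idea in simplified form; it is not the full published IAT scoring algorithm, which adds trial filtering and error penalties, and all the latencies are hypothetical.]

from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Simplified IAT-style D score: standardized latency difference.
    Positive values mean slower responses in the incongruent block,
    i.e., a stronger automatic association in the congruent pairing."""
    pooled = list(congruent_ms) + list(incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / stdev(pooled)

# Hypothetical response times in milliseconds for the two pairing blocks:
congruent = [610, 650, 590, 700, 640]    # e.g., male+science / female+arts
incongruent = [820, 760, 900, 840, 780]  # e.g., male+arts / female+science
print(round(iat_d_score(congruent, incongruent), 2))  # ~1.71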

      Implicit biases are intrinsic human characteristics that should be acknowledged and managed, rather than denied or ignored. Everyone should consider taking one or more IATs to understand this approach and measure his or her own implicit biases….

Postmodernism vs. Science

[This article by Michael Shermer is in the September 2017 issue of Scientific American.]

      In a 1946 essay in the London Tribune entitled “In Front of Your Nose,” George Orwell noted that “we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.”

      The intellectual battlefields today are on college campuses, where students' deep convictions about race, ethnicity, gender and sexual orientation and their social justice antipathy toward capitalism, imperialism, racism, white privilege, misogyny and “cissexist heteropatriarchy” have bumped up against the reality of contradictory facts and opposing views, leading to campus chaos and even violence. Students at the University of California, Berkeley, and outside agitators, for example, rioted at the mere mention that conservative firebrands Milo Yiannopoulos and Ann Coulter had been invited to speak (in the end, they never did). Middlebury College students physically attacked libertarian author Charles Murray and his liberal host, professor Allison Stanger, pulling her hair, twisting her neck and sending her to the ER.

      One underlying cause of this troubling situation may be found in what happened at Evergreen State College in Olympia, Wash., in May, when biologist and self-identified “deeply progressive” professor Bret Weinstein refused to participate in a “Day of Absence” in which “white students, staff and faculty will be invited to leave the campus for the day’s activities.” Weinstein objected, writing in an e-mail: “on a college campus, one’s right to speak—or to be—must never be based on skin color.” In response, an angry mob of 50 students disrupted his biology class, surrounded him, called him a racist and insisted that he resign. He claims that campus police informed him that the college president told them to stand down, but he has been forced to stay off campus for his safety’s sake.

      How has it come to this? One of many trends was identified by Weinstein in a Wall Street Journal essay: “The button-down empirical and deductive fields, including all the hard sciences, have lived side by side with ‘critical theory,’ postmodernism and its perception-based relatives. Since the creation in the 1960s and ’70s of novel, justice-oriented fields, these incompatible worldviews have repelled one another.”

      In an article for Quillette.com on “Methods Behind the Campus Madness,” graduate researcher Sumantra Maitra of the University of Nottingham in England reported that 12 of the 13 academics at U.C. Berkeley who signed a letter to the chancellor protesting Yiannopoulos were from “Critical theory, Gender studies and Post-Colonial/Postmodernist/Marxist background.” This is a shift in Marxist theory from class conflict to identity politics conflict; instead of judging people by the content of their character, they are now to be judged by the color of their skin (or their ethnicity, gender, sexual orientation, et cetera). “Postmodernists have tried to hijack biology, have taken over large parts of political science, almost all of anthropology, history and English,” Maitra concludes, “and have proliferated self-referential journals, citation circles, non-replicable research, and the curtailing of nuanced debate through activism and marches, instigating a bunch of gullible students to intimidate any opposing ideas.”

      Students are being taught by these postmodern professors that there is no truth, that science and empirical facts are tools of oppression by the white patriarchy, and that nearly everyone in America is racist and bigoted, including their own professors, most of whom are liberals or progressives devoted to fighting these social ills. Of the 58 Evergreen faculty members who signed a statement “in solidarity with students” calling for disciplinary action against Weinstein for “endangering” the community by granting interviews in the national media, I tallied only seven from the sciences. Most specialize in English, literature, the arts, humanities, cultural studies, women’s studies, media studies, and “quotidian imperialisms, intermetropolitan geography [and] detournement.” A course called “Fantastic Resistances” was described as a “training dojo for aspiring ‘social justice warriors’” that focuses on “power asymmetries.”

      If you teach students to be warriors against all power asymmetries, don’t be surprised when they turn on their professors and administrators. This is what happens when you separate facts from values, empiricism from morality, science from the humanities.

Life before Roe

[These excerpts are from an article by Rachel Benson Gold and Megan K. Donovan in the September 2017 issue of Scientific American.]

      When she went before the U.S. Supreme Court for the first time in 1971, the 26-year-old Sarah Weddington became the youngest attorney to successfully argue a case before the nine justices—a distinction she still holds today.

      Weddington was the attorney for Norma McCorvey, the pseudonymous “Jane Roe” of the 1973 Roe v. Wade decision that recognized the constitutional right to abortion—one of the most notable decisions ever handed down by the justices….

      The pre-Roe era is more than just a passing entry in the history books. More than 40 years after Roe v. Wade, antiabortion politicians at the state level have succeeded in re-creating a national landscape in which access to abortion depends on where a woman lives and the resources available to her. From 2011 to 2016 state governments enacted a stunning 338 abortion restrictions, and the onslaught continues with more than 50 new restrictions so far this year. At the federal level, the Trump administration and congressional leaders are openly hostile to abortion rights and access to reproductive health care more generally. This antagonism is currently reflected in an agenda that seeks to eliminate insurance coverage of abortion and roll back public funding for family-planning services nationwide.

      Restrictions that make it more difficult for women to get an abortion infringe on their health and legal rights. But they do nothing to reduce unintended pregnancy, the main reason a woman seeks an abortion. As the pre-Roe era demonstrates, women will still seek the necessary means to end a pregnancy. Cutting off access to abortion care has a far greater impact on the options available and the type of care a woman receives than it does on whether or not she ends a pregnancy.

      The history of abortion underscores the reality that the procedure has always been with us, whether or not it was against the law. At the nation's founding, abortion was generally permitted by states under common law. It only started becoming criminalized in the mid-1800s, although by 1900 almost every state had enacted a law declaring most abortions to be criminal offenses.

      Yet despite what was on the books, abortion remained common because there were few effective ways to prevent unwanted pregnancies. Well into the 1960s, laws restricted or prohibited outright the sale and advertising of contraceptives, making it impossible for many women to obtain—or even know about—effective birth control. In the 1950s and 1960s between 200,000 and 1.2 million women underwent illegal abortions each year in the U.S., many in unsafe conditions. According to one estimate, extrapolating data from North Carolina to the nation as a whole, 699,000 illegal abortions occurred in the U.S. during 1955, and 829,000 illegal procedures were performed in 1967.

      A stark indication of the risk in seeking abortion in the pre-Roe era was the death toll. As late as 1965, illegal abortion accounted for an estimated 17 percent of all officially reported pregnancy-related deaths—a total of about 200 in just that year. The actual number may have been much higher, but many deaths were officially attributed to other causes, perhaps to protect women and their families. (In contrast, four deaths resulted from complications of legally induced abortion in 2012, out of a total of about one million procedures.)

      The burden of injuries and deaths from unsafe abortion did not fall equally on everyone in the pre-Roe era. Because abortion was legal under certain circumstances in some states, women of means were often able to navigate the system and obtain a legal abortion with help from their private physician. Between 1951 and 1962, 88 percent of legal abortions performed in New York City were for patients of private physicians rather than for women accessing public health services.

      In contrast, many poor women and women of color had to go outside the system, often under dangerous and deadly circumstances. Low-income women in New York in the 1960s were more likely than affluent ones to be admitted to hospitals for complications following an illegal procedure. In a study of low-income women in New York from the same period, one in 10 said they had tried to terminate a pregnancy illegally.

      State and federal laws were slow to catch up to this reality. It was only in 1967 that Colorado became the first state to reform its abortion law, permitting the procedure on grounds that included danger to the pregnant woman’s life or health. By 1972, 13 states had similar statutes, and an additional four, including New York, had repealed their antiabortion laws completely. Then came Roe v. Wade in 1973—and the accompanying Doe v. Bolton decision—both of which affirmed abortion as a constitutional right.

      The 2016 Supreme Court decision in Whole Woman’s Health v. Hellerstedt reaffirmed a woman’s constitutional right to abortion. But the future of Roe is under threat as a result of President Donald Trump’s commitment to appointing justices to the Supreme Court who he says will eventually overturn Roe. Should that happen, 19 states already have laws on the books that could be used to restrict the legal status of abortion, and experts at the Center for Reproductive Rights estimate that the right to abortion could be at risk in as many as 33 states and the District of Columbia….

      Instead of repeating the mistakes of the past, we need to protect and build on gains already made. Serious injury and death from abortion are rare today, but glaring injustices still exist. Stark racial, ethnic and income disparities persist in sexual and reproductive health outcomes. As of 2011, the unintended pregnancy rate among poor women was five times that of women with higher incomes, and the rate for black women was more than double that for whites. Abortion restrictions—including the discriminatory Hyde Amendment, which prohibits the use of federal dollars to cover abortion care for women insured through Medicaid—fall disproportionately on poor women and women of color.

      These realities are indefensible from a moral and a public health standpoint. The time has come for sexual and reproductive health care to be a right for all, not a privilege for those who can afford it.

Is There a “Female” Brain?

[These excerpts are from an article by Lydia Denworth in the September 2017 issue of Scientific American.]

      As she continued reading, Joel came across a paper contradicting that idea. The study, published in 2001 by Tracey Shors and her colleagues at Rutgers University, concerned a detail of the rat brain: tiny protrusions on brain cells, called dendritic spines, that regulate transmission of electrical signals. The researchers showed that when estrogen levels were elevated, female rats had more dendritic spines than males did. Shors also found that when male and female rats were subjected to the acutely stressful event of having their tail shocked, their brains responded in opposite ways: males grew more spines; females ended up with fewer.

      From this unexpected finding, [Daphna] Joel developed a hypothesis about sex differences in the brain that has stirred up new controversy in a field already steeped in it. Instead of contemplating brain areas that differ between females and males, she suggested that we should consider our brain as a “mosaic” (repurposing a term that had been used by others), arranged from an assortment of variable, sometimes changeable, masculine and feminine features. That variability itself and the behavioral overlap between the sexes—aggressive females and empathetic males and even men and women who display both traits—suggest that brains cannot be lumped into one of two distinct, or dimorphic, categories. That three-pound mass lodged underneath the skull is neither male nor female, Joel says….Joel tested her idea by analyzing MRI scans of more than 1,400 brains and demonstrated that most of them did indeed contain both masculine and feminine characteristics. “We all belong to a single, highly heterogeneous population,” she says….

      In the late 1800s, long before MRI was a gleam in any scientist’s eye, the primary measurable difference between male and female brains was their weight (assessed postmortem, naturally). Because women's brains were, on average, five ounces lighter than men's, scientists declared that women must be less intelligent….

      For much of the next century concrete sex differences in the brain were the province not of neuroscientists but of endocrinologists, who studied sex hormones and mating behavior. Sex determination is a complex process that begins when a combination of genes on the X and Y chromosomes act in utero, flipping the switch on feminization or masculinization. But beyond reproduction and distinguishing boy versus girl, reports persisted of psychological and cognitive sex differences. Between the 1960s and early 1980s Stanford University psychologist Eleanor Maccoby found fewer differences than assumed: girls had stronger verbal abilities than boys, whereas boys did better on spatial and mathematical tests….

      Making the leap from brain to behavior provokes the most strident disagreements. The most recent high-profile study accused of playing to stereotypes (and labeled “neurosexist”) was a 2014 paper….It found that males had stronger connections within the left and right hemispheres of the brain and that females had more robust links between hemispheres. The researchers concluded that “the results suggest that male brains are structured to facilitate connectivity between perception and coordinated action, whereas female brains are designed to facilitate communication between analytical and intuitive processing modes.” (Counterclaim: the study did not correct for brain size.)

Promiscuous Men, Chaste Women and Other Gender Myths

[These excerpts are from an article by Cordelia Fine and Mark A. Elgar in the September 2017 issue of Scientific American.]

      The stereotype of the daring, promiscuous male—and his counterpart, the cautious, chaste female—is deeply entrenched. Received wisdom holds that behavioral differences between men and women are hardwired, honed by natural selection over millennia to maximize their differing reproductive potentials. In this view, men, by virtue of their innate tendencies toward risk-taking and competitiveness, are destined to dominate at the highest level of every realm of human endeavor, whether it is art, politics or science.

      But a closer look at the biology and behavior of humans and other creatures shows that many of the starting assumptions that have gone into this account of sex differences are wrong. For example, in many species, females benefit from being competitive or playing the field. And women and men often have similar preferences where their sex lives are concerned. It is also becoming increasingly clear that inherited environmental factors play a role in the development of adaptive behaviors; in humans, these factors include our gendered culture. All of which means that equality between the sexes might be more attainable than previously supposed.

      The origin of the evolutionary explanation of past and present gender inequality is Charles Darwin’s theory of sexual selection. His observations as a naturalist led him to conclude that, with some exceptions, in the arena of courtship and mating, the challenge to be chosen usually falls most strongly on males. Hence, males, rather than females, have evolved characteristics such as a large size or big antlers to help beat off the competition for territory, social status and mates. Likewise, it is usually the male of the species that has evolved purely aesthetic traits that appeal to females, such as stunning plumage, an elaborate courtship song or an exquisite odor.

      It was, however, British biologist Angus Bateman who, in the middle of the 20th century, developed a compelling explanation of why being male tends to lead to sexual competition. The goal of Bateman’s research was to test an important assumption from Darwin’s theory. Like natural selection, sexual selection results in some individuals being more successful than others. Therefore, if sexual selection acts more strongly on males than females, then males should have a greater range of reproductive success, from dismal failures to big winners. Females, in contrast, should be much more similar in their reproductive success. This is why being the animal equivalent of a brilliant artist, as opposed to a mediocre one, is far more beneficial for males than for females….

      Scholars mostly ignored Bateman’s study at first. But some two decades later evolutionary biologist Robert Trivers, now at Rutgers University, catapulted it into scientific fame. He expressed Bateman’s idea in terms of greater female investment in reproduction—the big, fat egg versus the small, skinny sperm—and pointed out that this initial asymmetry can go well beyond the gametes to encompass gestation, feeding (including via lactation, in the case of mammals) and protecting. Thus, just as a consumer takes far more care in the selection of a car than of a disposable, cheap trinket, Trivers suggests that the higher-investing sex—usually the female—will hold out for the best possible partner with whom to mate. And here is the kicker: the lower-investing sex—typically the male—will behave in ways that, ideally, distribute cheap, abundant seed as widely as possible.

      The logic is so elegant and compelling it is hardly surprising that contemporary research has identified many species to which the so-called Bateman-Trivers principles seem to apply, including species in which, unusually, it is males that are the higher-investing sex….

      In our own species, the traditional story is additionally complicated by the inefficiency of human sexual activity. Unlike many other species, in which coitus is hormonally coordinated to a greater or lesser degree to ensure that sex results in conception, humans engage in a vast amount of nonreproductive sex. This pattern has important implications. First, it means that any one act of coitus has a low probability of giving rise to a baby, a fact that should temper overoptimistic assumptions about the likely reproductive return on seed spreading. Second, it suggests that sex serves purposes beyond reproduction—strengthening relationships, for example….

      Meanwhile the feminist movement increased women’s opportunities to enter, and excel in, traditionally masculine domains. In 1920 there were just 84 women studying at the top 12 law schools that admitted women, and those female lawyers found it nearly impossible to find employment. In the 21st century women and men are graduating from law school in roughly equal numbers, and women made up about 18 percent of equity partners in 2015.

      …It is hard to see how a young female lawyer, looking first at the many young women at her level and then at the very few female partners and judges, can be as optimistic about the likely payoff of leaning in and making sacrifices for her career as a young male lawyer. And this is before one considers the big-picture evidence of sexism, sexual harassment and sex discrimination in traditionally masculine professions such as law and medicine….

      Although sex certainly influences the brain, this argument overlooks the growing recognition in evolutionary biology that offspring do not just inherit genes. They also inherit a particular social and ecological environment that can play a critical role in the expression of adaptive traits. For example, adult male moths that hailed, as larvae, from a dense population develop particularly large testes. These enhanced organs stand the moths in good stead for engaging in intense copulatory competition against the many other males in the population. One would be forgiven for assuming that these generously sized gonads are a genetically determined adaptive trait. Yet adult male moths of the same species raised as larvae in a lower-density population instead develop larger wings and antennae, which are ideal for searching for widely dispersed females.

      …men now place more importance on a female partner's financial prospects, education and intelligence—and care less about her culinary and housekeeping skills—than they did several decades ago. Meanwhile the cliché of the pitiable bluestocking spinster is a historical relic: although wealthier and better-educated women were once less likely to marry, now they are more likely to do so.

A Moth’s Eye

[These excerpts are from an article by Morgen Peck in the September 2017 issue of Scientific American.]

      It is a summer night, and the moths are all aflutter. Despite being drenched in moonlight, their eyes do not reflect it—and soon the same principle could help you see your cell-phone screen in bright sunlight.

      Developing low-reflectivity surfaces for electronic displays has been an area of intensive research. So-called transflective liquid-crystal displays reduce glare by accounting for both backlighting and ambient illumination. Another approach, called adaptive brightening control, uses sensors to boost the screen’s light. But both technologies guzzle batteries, and neither is completely effective. The anatomy of the moth eye presents a far more elegant solution….

      When light moves from one medium to another, it bends and changes speed as the result of differences in a material property called refractive index. If the difference is sharp—as when light moving through air suddenly hits a pane of glass—much of the light is reflected. But a moth’s eye is coated with tiny, uniform bumps that gradually bend (or refract) incoming light. The light waves interfere with one another and cancel one another out, rendering the eyes dark.
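
      To put rough numbers on that explanation, here is a minimal Python sketch (my own illustration, not from the article) of the standard Fresnel formula for reflection at normal incidence. It compares an abrupt air-to-glass boundary with the same index change spread across many small steps, which is roughly what the moth eye's graded bumps accomplish; the sketch simply sums per-step reflections and ignores the wave interference that, in the real eye, cancels the reflected light even more completely.

    def fresnel_reflectance(n1, n2):
        # Fraction of light reflected at normal incidence between media
        # with refractive indices n1 and n2.
        return ((n1 - n2) / (n1 + n2)) ** 2

    n_air, n_glass = 1.0, 1.5

    # Abrupt interface, as with an ordinary pane of glass.
    print(f"abrupt interface: {fresnel_reflectance(n_air, n_glass):.2%}")

    # The same index change split into 100 tiny steps, moth-eye style.
    steps, total = 100, 0.0
    for i in range(steps):
        n_a = n_air + (n_glass - n_air) * i / steps
        n_b = n_air + (n_glass - n_air) * (i + 1) / steps
        total += fresnel_reflectance(n_a, n_b)
    print(f"graded interface: {total:.2%}")

      Run as written, the abrupt boundary reflects about 4 percent of the light while the graded one reflects on the order of 0.04 percent, which is the moth's trick in miniature.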

The Oldest Homo sapiens?

[These excerpts are from an article by Kate Wong in the September 2017 issue of Scientific American.]

      The year was 1961. A barite mining operation at the Jebel Irhoud massif in Morocco, some 100 kilometers west of Marrakech, turned up a fossil human skull. Subsequent excavation uncovered more bones from other individuals, along with animal remains and stone tools. Originally thought to be 40,000-year-old Neandertals, the fossils were later reclassified as Homo sapiens—and eventually dated to roughly 160,000 years ago. Still, the Jebel Irhoud fossils remained something of a mystery because in some respects they looked more primitive than older H. sapiens fossils.

      Now new evidence is rewriting the Jebel Irhoud story again. A team led by Jean-Jacques Hublin of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, has recovered more human fossils and stone tools, along with compelling evidence that the site is far older than the revised estimate….If the fossils do in fact represent H. sapiens, as the team argues, the finds push back the origin of our species by more than 100,000 years and challenge leading ideas about where and how our lineage evolved. But other scientists disagree over exactly what the new findings mean….

      Experts have long agreed that H. sapiens got its start in Africa. Up to this point, the oldest commonly accepted traces of our species were 195,000-year-old remains from the site of Omo Kibish and 160,000-year-old fossils from Herto, both in Ethiopia. Yet DNA evidence and some enigmatic fossils hinted that our species might have deeper roots.

      In their recent work, Hublin and his colleagues unearthed fossils of several other individuals from a part of the Jebel Irhoud site that the miners left undisturbed. The team's finds include skull and lower jaw bones, as well as stone tools and the remains of animals the humans hunted. Multiple techniques date the rock layer containing the fossils and artifacts to between 350,000 and 280,000 years ago….

      Hublin noted that the findings do not imply that Morocco was the cradle of modern humankind. Instead, taken together with other fossil discoveries, they suggest that the emergence of H. sapiens was a pan-African affair. By 300,000 years ago early H. sapiens had spread across the continent. This dispersal was helped by the fact that Africa was quite different back then—the Sahara was green, not the forbidding desert barrier that it is today…

      The new fossils “raise major questions about what features define our species,” observes paleoanthropologist Marta Mirazon Lahr of the University of Cambridge. “[Is] it the globular skull, with its implications [for] brain reorganization, that makes a fossil H. sapiens? If so, the Irhoud population [represents] our close cousins” rather than members of our species. But if, on the other hand, a small face and the shape of the lower jaw are the key traits, then the Jebel Irhoud remains could be from our actual ancestors—and thus shift the focus of scientists who study modern human origins from sub-Saharan Africa to the Mediterranean, Mirazon Lahr says.

      Either way, the discoveries could fan debate over who invented the artifacts of Africa’s Middle Stone Age cultural period, which spanned the time between roughly 300,000 and 40,000 years ago. If H. sapiens was around 300,000 years ago, it could be a contender. But other human species were on the scene back then, too, including Homo heidelbergensis and Homo naledi.

Africa’s CDC Can End Malaria

[These excerpts are from an article by Carl Manlan in the September 2017 issue of Scientific American.]

      More than 65 years ago Americans found a way to ensure that no one would have to die from malaria ever again. The disease was eliminated in the U.S. in 1951, thanks to strategies created through the Office of Malaria Control in War Areas, formed in 1942, and the Communicable Disease Center (now the U.S. Centers for Disease Control and Prevention), founded in 1946.

      The idea for Africa’s own Centers for Disease Control and Prevention (Africa CDC) was devised in 2013 and formalized after the worst Ebola outbreak in history the following year. The Africa CDC, which was officially launched in January of this year, is a growing partnership that aims to build countries’ capacity to help create a world that is safe and secure from infectious disease threats….

      Malaria and other preventable diseases continue to challenge our ability to transform our economies at the pace required to support our population growth. Ultimately, for Africa to achieve malaria eradication, it is necessary to translate the Africa CDC’s mandate from the African Union into a funded mechanism to inform health investment.

      Ending malaria was the impetus that led to a strong and reliable CDC in the U.S., and now Africa has an opportunity to repeat that success—ideally by 2030, when the world gathers to assess progress toward achieving the U.N.’s Sustainable Development Goals. We have the opportunity to save many, many lives through the Africa CDC. Let's make it happen.

End the Assault on Women’s Health

[This excerpt by the editors is from the September 2017 issue of Scientific American.]

      Current events are just the latest insult in a long history of male-centric medicine, often driven not by politicians but by scientists and physicians. Before the National Institutes of Health Revitalization Act of 1993, which required the inclusion of women and minorities in final-stage medication and therapy trials, women were actively excluded from such tests because scientists worried that female hormonal cycles would interfere with the results. The omission meant women did not know how drugs would affect them. They respond differently to illness and medication than men do, and even today those differences are inadequately understood. Women report more intense pain than men in almost every category of disease, and we do not know why. Heart disease is the number-one killer of women in the U.S., yet only a third of clinical trial subjects in cardiovascular research are female—and fewer than a third of trials that include women report results by sex.

      The Republican assault on health care will just make things worse. The proposed legislation includes provisions that would let states eliminate services known as “essential health benefits,” which include maternity care. Before the ACA made coverage mandatory, eight out of 10 insurance plans for individuals and small businesses did not cover such care. The proposed cuts would have little effect on reducing insurance premiums, and the cost would be shifted to women and their families—who would have to take out private insurance or go on Medicaid (which the proposed bill greatly limits)—or to hospitals, which are required by law to provide maternity care to uninsured mothers.

      The bill, in its current form, would also effectively block funding for Planned Parenthood, which provides reproductive health services to 2.4 million women and men. The clinics are already banned from using federal funding for abortions except in cases of rape or incest or when the mother's life is in danger, in accordance with the federal Hyde Amendment. So the Planned Parenthood cuts would primarily affect routine health services such as gynecological exams, cancer screenings, STD testing and contraception—and these clinics are sometimes the only source for such care. Regardless of which side you are on in the pro-life/pro-choice debate, these attempts to remove access to such basic services should alarm us all.

      The Trump administration also has been chipping away at the ACA’s birth-control mandate. A proposed regulation leaked in May suggested the White House was working to create an exemption to allow almost any employer to opt out of covering contraception on religious or moral grounds. Nationwide, women are increasingly turning to highly effective long-acting reversible contraceptives (LARCs) such as intrauterine devices (IUDs). The percentage of women aged 15 to 44 using LARCs increased nearly fivefold from 2002 to 2013. Decreased coverage for contraceptives translates to less widespread use and will likely mean more unintended pregnancies and abortions.

      And abortions will become harder to obtain. After Roe v. Wade, many states tried to put in place laws to hamstring abortion clinics. These efforts have only ramped up in recent years, as many states have enacted so-called TRAP laws (short for targeted regulation of abortion providers), unnecessarily burdensome regulations that make it very difficult for these clinics to operate. Recognizing this fact, the Supreme Court struck down some of these laws in Texas in 2016, but many are still in place in other states. Rather than making women safer, as proponents claim, these restrictions interfere with their Supreme Court-affirmed right to safely terminate a pregnancy.

      Whether or not the repeal-and-replace legislation passes this year, these attacks are part of a larger war on women’s health that is not likely to abate anytime soon. We must resist this assault. Never mind “America First”—it’s time to put women first.

Beyond DNA

[These excerpts are from an article by Gemma Tarlach in the September 2017 issue of Discover.]

      The study of ancient proteins, paleoproteomics, is an emerging interdisciplinary field that draws from chemistry and molecular biology as much as from paleontology, paleoanthropology and archaeology. Its applications for understanding human evolution are broad:

      One 2016 study used ancient collagen, a common protein, to identify otherwise unidentifiable bone fragments as Neanderthal; another identified which animals were butchered at a desert oasis 250,000 years ago based on protein residues embedded in stone tools.

      Paleoproteomic research can also build evolutionary family trees based on shared or similar proteins, and reveal aspects of an individual’s physiology beyond what aDNA might tell us….

      Thanks to ancient proteins surviving far longer than aDNA — in January, one team claimed to have found evidence of collagen in a dinosaur fossil that’s 195 million years old — researchers are able to read those cheap molecular newspapers from deep time.

      The roots of paleoproteomics actually predate its sister field, paleogenomics. In the 1930s, archaeologists attempted (with little success) to determine the blood types of mummies by identifying proteins with immunoassays, which test for antibody-antigen reactions.

      A couple of decades later, geochemists found that amino acids, the building blocks of proteins, could survive in fossils for millions of years. But it wasn’t until this century that paleoproteomics established itself as a robust area of research.

      In 2000, researchers identified proteins in fossils using a type of mass spectrometer that, unlike earlier methods, left amino acid sequences more intact and readable. Much of today’s research uses a refined version of that method: zooarchaeology by mass spectrometry (ZooMS)….

      And in 2016, Welker, Collins and colleagues used ZooMS to determine that otherwise unidentifiable bone fragments in the French cave Grotte du Renne belonged to Neanderthals, settling a debate over which member of Homo occupied the site about 40,000 years ago. Given how closely related Neanderthals are to our own species, the researchers' ability to identify a single protein sequence specific to our evolutionary cousins is stunning.

      ZooMS is not a perfect methodology. Analyzing proteins within a fossil requires destroying a piece of the specimen, something unthinkable for precious ancient hominin remains.

      That’s why the most significant applications for ZooMS may be to identify fragmentary fossils and to learn more about ancient hominins’ environments — especially the ones they created….

A Matter of Choice

[These excerpts are from an article by Peg Tyre in the August 2017 issue of Scientific American.]

      …President Donald Trump’s secretary of education, Betsy DeVos, is preparing to give the scheme its first national rollout in the U.S. She has made voucher programs the centerpiece of her efforts to enhance educational outcomes for students, saying they offer parents freedom to select institutions outside their designated school zone. “The secretary believes that when we put the focus on students, and not buildings or artificially constructed boundaries, we will be on the right path to ensuring every child has access to the education that fits their unique needs,” says U.S. Department of Education spokesperson Elizabeth Hill.

      Because the Trump administration has championed vouchers as an innovative way to improve education in the U.S., SCIENTIFIC AMERICAN examined the scientific research on voucher programs to find out what the evidence says about Friedman’s idea. To be sure, educational outcomes are a devilishly difficult thing to measure with rigor. But by and large, studies have found that vouchers have mixed to negative academic outcomes and, when adopted widely, can exacerbate income inequity. On the positive side, there is some evidence that students who use vouchers are more likely to graduate high school and to perceive their schools as safe.

      DeVos’s proposal marks a profound change of direction for American education policy. In 2002, under President George W. Bush’s No Child Left Behind Act, the federal education mantra was “what gets tested, gets taught,” and the nation’s public schools became focused on shaping their curriculum around state standards in reading and math. Schools where students struggled to perform at grade level in those subjects were publicly dubbed “failing schools.” Some were sanctioned. Others were closed. During those years, networks of privately operated, publicly funded charter schools, many of them with a curriculum that was rigorously shaped around state standards, opened, and about 20 percent of them flourished, giving parents in some low-income communities options about where to enroll their child. While charter schools got much of the media attention, small voucher programs were being piloted in Washington, D.C., and bigger programs were launched in Indiana, Wisconsin, Louisiana and Ohio….

      Until now only a handful of American cities and states have experimented with voucher programs. Around 500,000 of the country’s 56 million schoolchildren use voucher-type programs to attend private or parochial schools. The results have been spotty. In the 1990s studies of small voucher programs in New York City, Washington, D.C., and Dayton, Ohio, found no demonstrable academic improvement among children using vouchers and high rates of churn—many students who used vouchers dropped out or transferred schools, making evaluation impossible. One study of 2,642 students in New York City who attended Catholic schools in the 1990s under a voucher plan saw an uptick in African-American students who graduated and enrolled in college but no such increases among Hispanic students.

      In 2004 researchers began studying students in a larger, more sustained voucher plan that had just been launched in Washington, D.C. This is the country's first and so far only federally sponsored voucher program. There, 2,300 students were offered scholarships, and 1,700 students used those scholarships mostly to attend area Catholic schools. The analysts compared academic data on those who did and did not opt for parochial school and found that voucher users showed no significant reading or math gains over those who remained in public school. But graduation rates for voucher students were higher, at 82 percent compared with 70 percent for the control group, as reported by parents. A new one-year study of the Washington, D.C., program published in April showed that voucher students actually did worse in math and reading than students who applied for vouchers through a citywide lottery but did not receive them. Math scores among students who used vouchers were around 7 percentage points lower than among students who did not use vouchers. Reading scores for voucher students were 4.9 percentage points lower. The study authors hypothesized that the negative outcomes may be partly related to the fact that public schools offer more hours of instruction in reading and math than private schools, many of which cover a wider diversity of subjects such as art and foreign languages….

      Voucher proponents say parents, even those using tax dollars to pay tuition, should be able to use whatever criteria for school choice they see fit. A provocative idea, but if past evidence can predict future outcomes, expanding voucher programs seems unlikely to help U.S. schoolchildren keep pace with a technologically advancing world.

Life Springs

[These excerpts are from an article by Martin J. Van Kranendonk, David W. Deamer and Tara Djokic in the August 2017 issue of Scientific American.]

      …Some of the rocks are wrinkled orange and white layers, called geyserite, which were created by a volcanic geyser on Earth's surface. They revealed bubbles formed when gas was trapped in a sticky film, most likely produced by a thin layer of bacterialike microorganisms. The surface rocks and indications of biofilms support a new idea about one of the oldest mysteries on the planet: how and where life got started. The evidence pointed to volcanic hot springs and pools, on land, about 3.5 billion years ago.

      This is a far different picture of life's origins from the one scientists have been sketching since 1977. That was the year the research submarine Alvin discovered hydrothermal vents at the bottom of the Pacific Ocean pumping out minerals containing iron and sulfur and gases such as methane and hydrogen sulfide, surrounded by primitive bacteria and large worms. It was a thriving ecosystem. Biologists have since theorized that such vents, protected from the cataclysms wracking Earth's surface about four billion years ago, could have provided the energy, nutrients and a safe haven for life to begin. But the theory has problems. The big one is that the ocean has a lot of water, and in it the needed molecules might spread out too quickly to interact and form cell membranes and primitive metabolisms.

      Now we and others believe land pools that repeatedly dry out and then get wet again could be much better places. The pools have heat to catalyze reactions, dry spells in which complex molecules called polymers can be formed from simpler units, wet spells that float these polymers around, and further drying periods that maroon them in tiny cavities where they can interact and even become concentrated in compartments of fatty acids—the prototypes of cell membranes.

      …Charles Darwin had suggested, back in 1871, that microbial life originated in “some warm little pond.” A number of scientists from different fields now think that the author of On the Origin of Species had intuitively hit on something important….

      …simple molecular building blocks might join into longer information-carrying polymers like nucleic acids—needed for primitive life to grow and replicate—when exposed to the wet-dry cycles characteristic of land-based hot springs. Other key polymers, peptides, might form from amino acids under the same conditions. Crucially, still other building blocks called lipids might assemble into microscopic compartments to house and protect the information-carrying polymers. Life would need all the compounds to get started….

      Both on land and in the sea, chemical and physical laws have provided a very useful frame around this particular puzzle, and the geologic and chemical discoveries described here fill in different areas. But before we can see a clear picture of the origin of life, many more pieces need to be put in place. What is exciting, however, is that now we can see a path forward to the solution.

Technology as Magic

[This excerpt is from an article by David Pogue in the August 2017 issue of Scientific American.]

      We the people have always been helplessly drawn to the concept of magic: the notion that you can will something to happen by wiggling your nose, speaking special words or waving your hands a certain way. We've spent billions of dollars for the opportunity to see what real magic might look like, in the form of Harry Potter movies, superhero films and TV shows, from Bewitched on down.

      It should follow, then, that any time you can offer real magical powers for sale, the public will buy it. That’s exactly what’s been going on in consumer technology. Remember Arthur C. Clarke’s most famous line? “Any sufficiently advanced technology is indistinguishable from magic.” Well, I’ve got a corollary: “Any sufficiently magical product will be a ginormous hit.”

      Anything invisible and wireless, anything that we control with our hands or our voices, anything we can operate over impossible distances—those are the hits because they most resemble magic. You can now change your thermostat from thousands of miles away, ride in a car that drives itself, call up a show on your TV screen by speaking its name or type on your phone by speaking to it. Magic.

      For decades the conventional wisdom in product design has been to “make it simpler to operate” and “make it easier for the consumer.” And those are admirable goals, for sure. Some of the biggest technical advancements in the past 30 years—miniaturization, wireless, touch screens, artificial intelligence, robotics—have been dedicated to “simpler” and “easier.”

      But that’s not enough to feel magical. Real tech magic is simplicity plus awe. The most compelling tech conventions—GPS apps telling you when to turn, your Amazon Echo answering questions for you, your phone letting you pay for something by waving it at that product—feel kind of amazing every single time.

      The awe component is important. It's the difference between magic and mere convenience. You could say to your butler, “Jeeves, lock all the doors”—and yes, that’d be convenient. But saying, “Alexa, lock all the doors,” and then hearing the dead-bolts all over the house click by themselves? Same convenience, but this time it’s magical.

Plastic-Eating Worms

[These excerpts are from an article by Matthew Sedacca in the August 2017 issue of Scientific American.]

      Humans produce more than 300 million metric tons of plastic every year. Almost half of that winds up in landfills, and up to 12 million metric tons pollute the oceans. So far there is no sustainable way to get rid of it, but a new study suggests an answer may lie in the stomachs of some hungry worms.

      Researchers in Spain and England recently found that the larvae of the greater wax moth can efficiently degrade polyethylene, which accounts for 40 percent of plastics. The team left 100 wax worms on a commercial polyethylene shopping bag for 12 hours, and the worms consumed and degraded about 92 milligrams, or roughly 3 percent, of it. To confirm that the larvae’s chewing alone was not responsible for the polyethylene breakdown, the researchers ground some grubs into a paste and applied it to plastic films. Fourteen hours later the films had lost 13 percent of their mass—presumably broken down by enzymes from the worms’ stomachs.
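
      A quick back-of-the-envelope check (my arithmetic, not the study's) puts those figures in perspective; the short Python sketch below computes the implied mass of the bag and the average degradation rate per worm.

    worms, hours = 100, 12
    degraded_mg, fraction = 92, 0.03

    # The 92 milligrams consumed was about 3 percent of the bag.
    bag_mass_g = degraded_mg / fraction / 1000
    print(f"implied bag mass: about {bag_mass_g:.1f} g")

    # Average rate per worm across the 12-hour trial.
    rate_mg_per_hour = degraded_mg / (worms * hours)
    print(f"per-worm rate: about {rate_mg_per_hour:.2f} mg per hour")

      At under a tenth of a milligram per worm per hour, a single shopping bag occupies a hundred larvae for half a day, which helps explain the preference, noted below, for harnessing the enzyme rather than deploying millions of worms.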

      When inspecting the degraded plastic films, the team also found traces of ethylene glycol, a product of polyethylene breakdown, signaling true biodegradation….

      …the larvae’s ability to break down their dietary staple—beeswax—also allows them to degrade plastic. “Wax is a complex mixture of molecules, but the basic bond in polyethylene, the carbon-carbon bond, is there as well….The wax worm evolved a mechanism to break this bond.”

      Jennifer DeBruyn, a microbiologist at the University of Tennessee, who was not involved in the study, says it is not surprising that an organism evolved the capacity to degrade polyethylene. But compared with previous studies, she finds the speed of biodegradation in this one exciting. The next step, DeBruyn says, will be to pinpoint the cause of the breakdown. Is it an enzyme produced by the worm itself or by its gut microbes? Bertocchini agrees and hopes her team's findings might one day help harness the enzyme to break down plastics in landfills, as well as those scattered throughout the ocean. But she envisions using the chemical in some kind of industrial process—not simply “millions of worms thrown on top of the plastic.”

Awash in Plastic

[These excerpts are from an article by Jesse Greenspan in the August 2017 issue of Scientific American.]

      Henderson Island, a tiny, unpopulated coral atoll in the South Pacific, could scarcely be more remote. The nearest city of any size lies some 5,000 kilometers away. Yet when Jennifer Lavers, a marine biologist at the Institute for Marine and Antarctic Studies in Tasmania, ventured there two years ago to study invasive rodent-eradication efforts, she found the once pristine UNESCO World Heritage Site inundated with trash: 17.6 metric tons of it, she conservatively estimates—pretty much all of it plastic. (The rubbish originates elsewhere but hitches a ride to Henderson on wind or ocean currents.) One particularly spoiled stretch of beach yielded 672 visible pieces of debris per square meter, plus an additional 4,497 items per square meter buried in the sand….

      By comparing the data with a study of the nearby Ducie and Oeno atolls conducted in 1991, the team extrapolated that there is between 200 and 2,000 times more trash on Henderson now than there was on those neighboring islands back then. Unidentifiable plastic fragments, resin pellets and fishing gear make up the bulk of the total, but the researchers also came across toothbrushes, baby pacifiers, hard hats, bicycle pedals and a sex toy. Thousands of new items wash up daily and make any cleanup attempt impractical, according to Lavers, who specializes in studying plastic pollution. Meanwhile many of the world's other coastlines could face a similar threat…

Find My Elephant

[These excerpts are from an article by Rachel Nuwer in the August 2017 issue of Scientific American.]

      How does one protect elephants from poachers in an African reserve the size of a small country? This daunting task typically falls to park rangers who may spend weeks patrolling the bush on foot, sometimes lacking basic gear such as radios, tents or even socks. They are largely losing to ivory poachers, as attested by the latest available data on Africa’s two species of elephant, both threatened: savanna elephant populations fell 30 percent between 2007 and 2014, and those of forest elephants plummeted by 62 percent between 2002 and 2011.

      To stem the losses, conservationists are increasingly turning to technology. The latest tool in the arsenal: real-time tracking collars, developed by the Kenya-based nonprofit Save the Elephants and currently being used on more than 325 animals in 10 countries. The organization’s researchers wrote algorithms that use signals from the collars to automatically detect when an animal stops moving (indicating it may be dead), slows down (suggesting it may be injured) or heads toward a danger zone, such as an area known for rampant poaching. Experimental accelerometers embedded in the collars detect aberrant behaviors such as “streaking”—sudden, panicked flight that might signal an attack. Unlike traditional tracking collars, many of which send geographical coordinates infrequently or store them onboard for later retrieval, these devices’ real-time feeds enable rangers to react quickly. In several cases, they have led to arrests….
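
      The collar logic described above lends itself to a simple illustration. The Python sketch below is hypothetical (the thresholds, names and structure are my own guesses, not Save the Elephants' actual algorithms): it scores the most recent GPS fixes from a collar and returns the kind of alert the article describes.

    from dataclasses import dataclass

    @dataclass
    class GPSFix:
        t: float  # timestamp, hours
        x: float  # east position, km
        y: float  # north position, km

    def speed_kmh(a, b):
        # Straight-line speed between two consecutive fixes.
        dt = b.t - a.t
        dist = ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
        return dist / dt if dt > 0 else 0.0

    def classify(track, danger_zones):
        # track: list of GPSFix, oldest first.
        # danger_zones: (center_x, center_y, radius_km) circles.
        # Thresholds are illustrative guesses, not calibrated values.
        v = speed_kmh(track[-2], track[-1])
        if v < 0.05:
            return "immobile: possible mortality"
        if v < 0.5:
            return "slow-moving: possible injury"
        last = track[-1]
        for cx, cy, r in danger_zones:
            if (last.x - cx) ** 2 + (last.y - cy) ** 2 <= r ** 2:
                return "inside danger zone"
        return "normal"

    track = [GPSFix(0.0, 0.0, 0.0), GPSFix(1.0, 0.30, 0.20), GPSFix(2.0, 0.31, 0.21)]
    print(classify(track, [(5.0, 5.0, 2.0)]))  # immobile: possible mortality

      A real system would smooth over noisy fixes and fold in the accelerometer stream for “streaking” detection, but the rule structure is the same: each new fix is scored the moment it arrives, which is what makes the real-time feed actionable.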

      An early version of the program is being tested at four sites in Africa, with a 10-site expansion planned for September. At Lewa Wildlife Conservancy in Kenya, DAS is already seen as a game changer after its launch less than a year ago, says Batian Craig, director of 51 Degrees, a private company that oversees security operations at Lewa: “Being able to visualize all this information in one place and in real time makes a massive difference to protected-area managers.”

Aussie Invaders

[These excerpts are from an article by Erin Biba in the August 2017 issue of Scientific American.]

      The Australian government unleashed a strain of a hemorrhagic disease virus into the wild earlier this year, hoping to curb the growth of the continent’s rabbit population. This move might sound barbaric, but the government estimates that the animals—brought by British colonizers in the late 18th century—gnaw through about $115 million in crops every year. And the rabbits are not the only problem. For more than a century Australians have battled waves of invasive species with many desperate measures—including introducing nonnative predators—to little avail.

      Australia is not the only country with invasive creatures. But because it is an isolated continent, most of its wildlife is endemic—and its top predators are long extinct. This gives alien species a greater opportunity to thrive….But the Tasmanian tiger, the marsupial lion and Megalania (a 1,300-pound lizard) are gone. The only top predator left, the Australian wild dog, or dingo…is under threat from humans because of its predilection for eating sheep.

      Along with rabbits, Australia is trying to fend off red foxes (imported for hunting), feral cats (once kept as pets), carp (brought in for fish farms) and even camels (used for traversing the desert). Wildlife officials have attempted to fight these invaders by releasing viruses, spreading poisons, building thousands of miles of fences, and sometimes hunting from helicopters. In one famous case, the attempted solution became its own problem: the cane toad was introduced in 1935 to prey on beetles that devour sugarcane. But the toads could not climb cane plants to reach the insects and are now a thriving pest species themselves.

      Despite scientists’ protestations, the government plans to introduce another virus later this year to try reducing the out-of-control carp population….

Scientists Can’t Be Silent

[This excerpt is from an editorial by Christopher Coons in the August 4, 2017, issue of Science.]

      In an era of rapid technological change and an increasingly global economy, investments in research and development are crucial for spurring economic growth and sustaining competitiveness. Yet, across the U.S. federal government, scientists are playing a decreasing role in the policy-making that supports this investment, often being pushed out by a political agenda that is stridently anti-science. Meanwhile, Americans are becoming more distrustful of democratic institutions, the scientific method, and basic facts—three core beliefs on which the research enterprise depends. The United States remains the unquestioned global leader in science and innovation, but given a White House that disregards the value of science and an American public that questions the very concept of scientific consensus, sustaining the U.S. commitment to science won't happen without a fight.

      Many Americans take for granted the ways in which the United States supports its scientists, but that hasn’t always been the case. Before 1940, the United States had only 13 Nobel laureates in science. Since World War II, however, the country has won over 180 scientific Nobel Prizes, far more than any other nation. That's not a direct proxy for achievement, but it reflects a fundamental change in the way Americans understand the value of research. This transformation didn’t happen by accident. Immigration laws have allowed aspiring scientists from around the world to study and innovate in the United States. Long-term, sustained investments in research and development have been supported by a network of universities, national laboratories, and federal research institutions such as the National Institutes of Health. Strong intellectual property laws have evolved to protect groundbreaking ideas. These efforts haven’t just won Nobel Prizes. Federal investments in diverse scientific and human capital have unleashed economic growth, created tens of millions of jobs, and returned taxpayer money invested many times over.

      Sadly, media headlines (and recent election results) reflect a growing distrust of science and scientists. Look no further than efforts to undermine the nearly unanimous scientific consensus on the impacts of climate change, the use of genetically modified organisms, or the importance of vaccinations. These trends predate the current administration, but President Trump has already taken steps that threaten scientific progress. In its 2018 budget proposal, the White House is seeking to cut overall federal research funding by nearly 17%. Dozens of key scientific positions throughout federal agencies remain unfilled. The administration has sought to shutter innovative programs such as the Department of Energy's Advanced Research Projects Agency-Energy.

      How should the scientific enterprise respond? In August 1939, Albert Einstein wrote to President Franklin D. Roosevelt, urging him to monitor the development of atomic weapons and consider new investments and partnerships with universities and industry. Einstein's letter galvanized federal involvement in creating a world-class scientific ecosystem. Scientists today should follow Einstein's lead. They should make the case for science with the public through online communities and in local meetings and media. Scientists should fight for scientific literacy by advocating for science, technology, engineering, and mathematics (STEM) education, as well as for women and minorities in STEM fields.

Inevitable or Improbable?

[This excerpt is from Adrian Woolfson’s review of Jonathan Losos’s book “Improbable Destinies” in the July 28, 2017, issue of Science.]

      In their seminal book Evolution and Healing, Randolph Nesse and George C. Williams describe the design of human bodies as “simultaneously extraordinarily precise and unbelievably slipshod!” Indeed, they conclude that our inconsistencies are so incongruous that one could be forgiven for thinking that we had been “shaped by a prankster.”

      By what agency did this unfortunate state of affairs come into being, and how might we amend it? Gene editing and synthetic biology offer the possibility of, respectively, “correcting” or “rewriting” human nature, allowing us to expunge unfavorable aspects of ourselves—such as our susceptibility to diseases and aging—while enabling the introduction of more appealing features. The legitimacy of such enterprises, however, to some extent depends on whether the evolution of humans on Earth was inevitable.

      If our origin and nature were deterministically programmed into life’s history, it would be hard to argue that we should be any other way. If, on the other hand, we are the improbable products of a historically contingent evolutionary process, then human exceptionalism is compromised, and the artificial modification of our genomes may be perceived by some as being less of an affront to the natural order. In his compelling book Improbable Destinies, Jonathan Losos addresses this issue, recasting previous dialogues in the light of an experimental evolutionary agenda and, in so doing, arrives at a novel conclusion.

      Until recently, the evolutionary determinism debate focused on two contrary interpretations of an outcrop of rock located in a small quarry in the Canadian Rocky Mountains known as the Burgess Shale. Contained within the Burgess Shale, and uniquely preserved by as-yet-unknown processes, are the fossilized remains of a bestiary of animals, both skeletal and soft-bodied. These fossils are remarkable in that they appear to have originated in a geological instant 570 to 530 million years ago during the Cambrian. They comprise a bizarre zoo of outlandish body plans, some of which appeared to be unrepresented in living species.

      In his 1989 book Wonderful Life, the Harvard biologist Stephen Jay Gould argued that the apparently arbitrary deletion of distinct body plans in the Cambrian suggests that life's history was deeply contingent, underwritten by multiple chance events. As such, if the tape of life could be rewound back to the beginning and replayed again, it would be vanishingly unlikely that anything like humans would emerge again. The Cambridge paleontologist Simon Conway Morris, on the other hand, would have none of this.

      Citing a long list of examples to illustrate the ubiquity of convergence—the phenomenon whereby unrelated species evolve a similar structure—Conway Morris claimed that the evolution of humanlike organisms would be a near inevitability of any replay. In his scheme, articulated in 2003 in Life’s Solution, nature’s deep self-organizing forces narrowly constrain potential evolutionary outcomes, resulting in a relatively sparse sampling of genetic space.

      Losos closes the loop on this contentious debate, marshaling data from the burgeoning research area of experimental evolution. Unlike Darwin, who perceived the process of evolution to be imperceptibly slow and therefore inaccessible to direct experimentation, contemporary evolutionary biologists have realized that evolution can occur in rapid bursts and may consequently be captured on the wing.

      Given that microbes have a generation time of 20 minutes or less, in 1988, the evolutionary biologist Richard Lenski reasoned that the bacterium Escherichia coli would comprise the perfect model experimental system to study condensed evolutionary time scales. Bacteria could additionally be frozen, allowing multiple parallel replays to be run again and again from any time point in their history. Twenty-eight years and 64,000 bacterial generations later, he concluded that the history of life owes its complexity both to repeatability and contingency.

      Losos and other investigators have demonstrated a similar degree of repeatability in the natural evolution of the Anolis lizard, three-spined sticklebacks, guppies, and deer mice. Importantly, however, when experimental populations evolve in divergent environments, novel outcomes are more commonly observed than convergence.

      These experiments were not a replay of the tape of life in time so much as a replay in space, but the findings were surprising in that they emerged within a relatively short time frame—a far cry from what one might have expected would be necessary to falsify the predictability hypothesis.

      Losos concludes that “both sets of forces—the random and the predictable ... together give rise to what we call history!” With this, humans are humbled once again, cast firmly into the sea of ordered indeterminism. Although he does not attempt to use this as a justification for human genomic modification, Losos argues that the genetic principles underlying life's multifarious convergent solutions might, among other things, be coopted to rescue imperiled species.

Nitrogen Stewardship in the Anthropocene

[These excerpts are from an article by Sybil P. Seitzinger and Leigh Phillips in the July 28, 2017, issue of Science.]

      Nitrogen compounds, mainly from agriculture and sewage, are causing widespread eutrophication of estuaries and coastal waters. Rapid growth of algal blooms can deprive ecosystems of oxygen when the algae decay, with sometimes extensive ecological and economic effects. Nitrogen oxides from fossil fuel combustion also contribute to eutrophication, and nitrous oxide, N2O, is an extremely powerful greenhouse gas (GHG)….climate change is worsening nitrogen pollution, notably coastal eutrophication. The results highlight the urgent need to control nitrogen pollution. Solutions may be found by drawing on decarbonization efforts in the energy sector.

      Increased precipitation and greater frequency and intensity of extreme rainfall events will see increased leaching and run-off—or nitrogen loading—in many agricultural areas….in the Mississippi-Atchafalaya River Basin, a business-as-usual emission scenario leads to an 18% increase in nitrogen loading by the end of the 21st century, driven by projected increases in both total and extreme precipitation. To counter this, a 30% reduction in anthropogenic nitrogen inputs in the region would be required. Farmers here already are trying to achieve a 20% loading reduction target imposed by the U.S. Environmental Protection Agency (EPA), requiring a 32% reduction in nitrogen inputs to the land. To offset the climate-induced boost to nitrogen pollution in addition to meeting this target would thus require a 62% reduction in nitrogen inputs, a colossal challenge for any farmer.
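
      The arithmetic behind that last figure is worth making explicit. A minimal sketch of my reading of the excerpt's numbers (not the paper's actual model):

    # Input cuts quoted for the Mississippi-Atchafalaya River Basin.
    epa_target_cut = 0.32      # cut needed for the EPA's 20% loading target
    climate_offset_cut = 0.30  # further cut to cancel the projected 18% rise

    # The two demands stack, as the excerpt notes.
    print(f"combined cut in nitrogen inputs: {epa_target_cut + climate_offset_cut:.0%}")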

      …regions with historically substantial rainfall, high nitrogen inputs, and projected robust increases in rainfall are most likely to experience increased coastal nitrogen loading. This includes great swaths of east, south, and southeast Asia, India, and China—regions where coastal eutrophication already occurs.

      Solutions to coastal eutrophication and climate change are closely intertwined. To stay below a 2°C increase in global average temperature will require attaining net zero GHG emissions soon after mid-century. To achieve this goal, fossil fuel combustion must plummet, thus also reducing production, deposition, and consequent eutrophication due to nitrogen oxides associated with fossil fuel combustion. However, nitrogen and carbon GHG emissions from agriculture (such as N2O and CH4) are some of the most difficult to reduce. In the energy sector, humanity needs to perform one action: Stop burning fossil fuels. And, at least in principle, there are clean-energy alternatives. In agriculture, many different actions are required. There is no alternative to eating.

      Nevertheless, we can begin to structure our thinking about solutions for the sector in a way that mirrors the fuel-switching solutions for other GHG sources: a rule of thumb that everyone from policy-makers to farmers, communities, and even children can comprehend and help put into practice….

      Although the challenges are great, successful commercialization of technologies currently in development and increased nitrogen efficiency in agriculture could help to reduce the pressures on coastal ecosystems and to reduce GHG emissions while permitting increased production to feed a growing population. There is potentially an exciting optimistic story to be told about global nitrogen stewardship in the Anthropocene.

We Still Need to Beat HIV

[These excerpts are from an editorial by Francois Dabis and Linda-Gail Bekker in the July 28, 2017, issue of Science.]

      Despite remarkable advances in HIV treatment and prevention, the limited political will and leadership in many countries—particularly in West and Central Africa and Eastern Europe—have fallen short of translating these gains into action. As a result, nearly 2 million infections occurred in 2016, creating a situation that is challenging to counter. This week in Paris, the International AIDS Society (IAS) convened researchers, health experts, and policy-makers to discuss the global state of this epidemic. It has been more than three decades since AIDS was clinically observed and associated with HIV infection. Since then, HIV has accounted for 35 million deaths worldwide. Today, about 37 million people are infected. IAS and the French Research Agency on HIV and Viral Hepatitis (ANRS) have now released the Paris Statement…to remind world leaders why HIV science matters, how it should be strengthened, and why it should be funded globally and durably so that new evidence can be translated into policy.

      The good news is that scientific breakthroughs have led to new biomedical interventions and practices and, consequently, substantive reductions in morbidity, mortality, and new infections. Full-scale implementation of these new measures could eliminate HIV/AIDS as a health crisis. The bad news is that this effort is fragile because funding is under threat, health care systems of many countries are not equipped to deliver, and political will and leadership are lacking. Even if current efforts reduce new infections by 90% over the next decade, there would still be 200,000 new infections each year, with a worldwide lifelong treatment target of 40 million individuals living with HIV.

      Antiretroviral therapy (ART) has been the catalyst for change in how HIV infection is treated and prevented. A single, once-a-day, multidrug tablet, with few side effects, has converted HIV from a death sentence to a chronic, manageable disease for millions. It reduces viral levels so that the risk of transmission plummets. ART also has made mother-to-child transmission a rare event in many places. Pre-exposure prophylaxis and voluntary male circumcision have joined other behavioral measures (condom promotion, universal testing, and lifestyle changes) as cornerstones of prevention. Yet, ART is a lifelong and costly regimen, and many people in the developed and the developing world cannot access it. Many nations do not have the health care infrastructure or the community engagement to support the robust new ways to prevent transmission or diagnose infection. Although we have come far, the benefits of these new approaches are not universal….

      Containment of this infectious disease is attainable. We encourage all to join in the global advocacy for HIV/AIDS research and development.

Charlottesville, VA

[This letter to the MIT community was sent out on the morning of August 15, 2017, by L. Rafael Reif, the President of MIT.]

      History teaches us that human beings are capable of evil. When we see it, we must call it what it is, repudiate it and reject it.

      This weekend in Charlottesville, Virginia, we witnessed a strand of hatred. White supremacy and anti-Semitism, whether embodied by neo-Nazis, the Ku Klux Klan or others, are bankrupt ideologies with a wicked aim as plain as our need to repel it.

      The United States must remain a country of freedom, tolerance and liberty for all. To keep that promise, it must always remain a place where those who hold radically opposing views can voice them. However, when an ideology contends that some people are less human than others, and when that ideology commands violence in the name of racial purity, we must reject that ideology as evil.

      I write to you this morning because I believe that the events of this weekend embody a threat of direct concern to our community.

      A great glory of the United States is the enduring institutions and ideals of our civil society. The independent judiciary. The free press. The universities. Free speech. The rule of law. The belief that we are all created equal. Each one reinforces and draws strength from the others. When those pillars come under attack, society is endangered. I believe we all have a responsibility to protect them—with a sense of profound gratitude for the freedoms they guarantee.

      At MIT, let us with one voice reject hatred—whatever its form. Let us unite in mourning those who lost their lives to this struggle in Charlottesville. And let us work for goodwill among us all.

Territory

[These excerpts are from chapter 6 of The Parable of the Beast by John N. Bleibtreu.]

      Domestic animals have, over the course of their domestication, lost much of their reliance on territory for subsistence and mating opportunities. Any wild animal removed from its territory and placed in another, even though the new area may be superficially identical, undergoes a great emotional crisis. Zoo animals transported from a cage in a zoo in one city to a cage in another may often, in their fright and panic, do themselves physical injury, or go through a long period of weight loss and apathy in their new quarters. Fully adult animals, caught in the wild and brought into captivity, often do not survive this combined shock of losing a familiar ecological territory and all the rituals of behavior that are enacted within it. For a successful life in captivity, wild animals must be captured young, before their patterns of life become enmeshed with their familiar territory. Domestic animals can be transported around from one racetrack or show ring to another with a minimum of psychic injury. This is also true of wild circus animals, which have never been allowed to become familiar with any special territory and must adapt to being without any territory whatever from an early age. But once an animal has become accustomed to a territory, even the reduced territory of a zoo cage, it is hard indeed for it to adjust to a new territory.

      Most mammals affiliate themselves with their environment by means of chemical sense-impressions—exceptions being the higher primates, including man, such marine mammals as whales, porpoises, seals, etc., and the occasional special case like the giraffe, whose scent receptor is removed by height from likely contact with scent traces in its territory.

      They identify their territories by scent markers, which they deposit themselves. The systems of scent markings vary with their relative usefulness to the animal concerned. For those so constructed as to be able to travel fairly rapidly while at the same time keeping their noses to the ground (as for instance foxes, wolves, jackals, cats, hippopotamuses and rhinoceroses), scent traces are deposited on the ground. The most convenient scent traces are the feces and urine of the animal concerned. One “housebreaks” a domestic dog or cat by defining its territory in relation to that of its master. The interior of the house is the master’s territory, the animal's territorial orientation accommodates to this fact and it restricts its fecal and urinary marking to its own territory elsewhere. Primates (including human infants) are notoriously difficult to housebreak. Territorial affiliation is not directly connected with olfactory marking, though a good case can be made for the persistence of marking habits, which among normal humans consist of optical markings (initials, etc.). In pathological mental states, however, they may well revert to fecal or urinary marking rituals.

      The treatment of feces and urine varies as usual with the species. Hippopotamuses and rhinoceroses distribute their feces; hippopotamuses with the special musculature of their tails, which whirl almost like miniature propellers. The rhinoceros roots about in his feces pile with his horn, distributing it and giving rise to an African legend that the rhinoceros has lost his horn in his feces and attempts to find it again—a legend that should be of some curious interest to depth psychologists, especially in view of the symbolic connection, which appears repeatedly in many cultures, associating the horn of the rhinoceros and human priapean energy. For many years, because of the fame of the rhinoceros' horn as an aphrodisiac, it was hunted hard, almost to the point of extinction, vast prices being paid, particularly in China, for powdered rhinoceros horn….

      We still possess, in the structure of our shoulder bones and muscles, evidence that our ancestors were arboreal. The baboon, a terrestrial monkey, cannot hang from his arms as we can. Anthropologists surmise that our human ancestors, possibly several species of apes rather than merely one apish ancestor, descended from the trees at some later time than did the ancestors of the modern baboons. Fecal marking is not very useful to arboreal creatures; some of the lemurs mark with urine, others with special scent glands usually located under the tail. But in the higher reaches of the primate order, territories are established by visual recognition, a form almost unique among mammals.

      That traces of territorial marking instincts still persist among humans can be established by a wealth of detail. For example the “Kilroy Was Here” drawings of World War II are undoubtedly territorial markings. The keepers of public monuments fight a losing battle against the scrawled, carved, scratched legends that visitors leave. But perhaps the most directly territorial marking by humans is the urinal graffiti. As with animals, it is the male of the human species who is the most ardent marker; and the compulsion to mark insulting legends on the walls of urinals seemingly transects all economic classes and educational levels. We have all seen graffiti on walls where we should never expect to see them, in universities and exclusive clubs. Mating area markings are a commonplace among animals. Among tigers the female leaves a special urine trace on the male’s marking spot as a sexual invitation, and on their next encounter he is prepared for her presentation. Among humans the park benches and tree trunks adjoining a popular assignation area are overlaid with a mosaic of carved initials often joined with Cupid’s arrow and enclosed within the outline of a heart.

      Among humans the association between the related ideas of geographic and personal possession is demonstrated by the word property, which includes both real estate and personal belongings. For young American males of mating age the first territorial possession is not a geographic area but an automobile, the sexual significance of which is well understood by manufacturers and their advertising agents. In this respect the behavior of the American male is not very different from that of many birds and mammals. The acquisition of a territory is an indispensable commencement for any courtship activity. The American male must often get a car before he can get a mate.

Earthworms

[This excerpt is from chapter 5 of The Parable of the Beast by John N. Bleibtreu.]

      Space is far more subjectively appreciated by organisms than is time, which passes for us all, great beasts and tiny animalcules alike, in roughly the same way, since the circuit of the earth, the alternation of light and dark, affects all living things on this planet, directly or indirectly. The drive of an animal to acquire time by flight is readily seen by the most casual observer. The drive to acquire space is not quite so easily or readily seen. The discovery that there is no form of life that does not manifest some behavior that could be considered territorial has only lately come to the attention of biologists. Laboratory psychologists have been concerned with the functional basis of comparative systems of space perception, but even though von Uexküll pondered the problem at the beginning of this century, an account of the various behaviors by which the organism expresses this perception in its natural environment was slow in coming.

      In an attempt to discover whether the basic element of space-perception, form, was perceived as such by the eyeless earthworm, von Uexküll performed an experiment based on an observation of Darwin's. “Darwin early pointed out,” wrote von Uexküll, “that the earthworm drags leaves and pine needles into its narrow cave. They serve it both for protection and for food. Most leaves spread out if one tries to pull them into a narrow tube petiole [stem] foremost. On the other hand, they roll up easily and offer no resistance if seized at the tip. Pine needles on the other hand, which always fall in pairs, must be grasped at their base, not their tip, if they are to be dragged into narrow holes with ease.

      “It was inferred from the earthworm's spontaneously correct handling of leaves and needles that the form of these objects which plays a decisive part in its effector world must exist as a receptor in its perceptual world.

      “This assumption has been proven false. It was possible to show that identical small sticks dipped in gelatine were pulled into the earthworms’ holes indiscriminately by either end. But as soon as one end was covered with powder from the tip of a dried cherry leaf, the other with powder from its base, the earthworms differentiated between the two ends of the stick exactly as they do between the tip and base of the leaf itself. Although earthworms handle leaves in keeping with their form, they are not guided by the shape, but by the taste of the leaves. This arrangement has evidently been adopted for the reason that the receptor organs of earthworms are built too simply to fashion sensory cues of shape. This example shows how nature is able to overcome difficulties which seem to us utterly insurmountable.”


You can see a video on earthworm activity at night by IMPACT Product Development at
ACTIVE EARTHWORMS

Insect Chemicals

[This excerpt is from chapter 5 of The Parable of the Beast by John N. Bleibtreu.]

      …Since what was known about chemical communication systems suggested that they transmitted information via a binary code, Butler hypothesized the following sequence: If the absence of the queen-chemical produced certain behaviors, then her presence, and the presence of this chemical, inhibited these behaviors. The logic may seem odd when related to human behavior, but one could say about humans that if the absence of oxygen produces suffocation, then the presence of oxygen inhibits suffocation.

      Suspecting, from the delay in the communal reaction to the presence or absence of the queen, that chemical information about her presence or absence was transmitted along with food particles from bee to bee in the ritual act of food sharing, Butler's first step was to exclude other factors which could conceivably be operative. In one adroit experiment he proved that neither the sight of the queen in the flesh, nor sounds made by her, nor a generalized scent exuded by her which might have pervaded the entire hive, informed the colony of her presence. He enclosed the queen in her own colony within a double-walled cage of wire mesh, through which no other bee could make physical contact with her. Within a matter of hours, though the members of the colony could see her, hear her, and smell her, the colony reacted with the classic symptoms of queen loss.

      Butler's next step was to begin a detailed study of the various kinds of physical contact exchanged between the queen and the members of her colony. The queen is normally surrounded by a suite or an entourage. These words are the technical terms used to describe this social arrangement—a refreshing change from the usual run of bastardized Latin generally employed in scientific usage. This entourage generally contains ten or twelve nurses. Butler wished first to determine whether these nurses composed a special caste within the colony, or whether any worker bee could perform nurse duties. To this end he removed the queen with a forceps, placing her in various random locations within the hive. Each time this was done, the suite disbanded as the queen was removed, the nurses going about other worker tasks immediately, while at the new location a new suite formed itself around the queen immediately. In order to gather harder statistics (the kind that impress other scientists), he repeated the basic experiment, except that instead of using crude forceps he devised a complicated, electrically-powered transport cage, which moved the queen about the hive at a predetermined pace. Using this procedure, he demonstrated that information concerning the presence of the queen was transmitted via the nurses who comprised her entourage to other bees via ritual food sharing, and from these others to still others, until, like a chain letter, the number of bees receiving a chemical trace of the queen's presence multiplied geometrically.
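
The chain-letter image can be made concrete with a small sketch. The colony size and the number of food-sharing partners per round below are hypothetical, chosen only to show how quickly geometric spread saturates a hive:

```python
# Illustrative sketch of the chain-letter dynamic described above: each bee
# carrying a trace of queen substance passes it to a few others during food
# sharing. All numbers here are invented for illustration.

colony_size = 40_000    # hypothetical hive population
partners_per_round = 2  # hypothetical food-sharing contacts per informed bee

informed = 12           # roughly the size of the queen's entourage
rounds = 0
while informed < colony_size:
    informed = min(colony_size, informed * (1 + partners_per_round))
    rounds += 1
print(f"whole colony reached after {rounds} rounds of food sharing")
```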

      The suite of the queen bee surrounds her completely. The bees directly in front of the queen feed her continuously. Those bees to her rear and sides stroke and caress her with their antennae or lick at her with their proboscises. Butler discovered that those bees who caressed the queen with their antennae then touched antennae with other members of the colony when they exchanged food. Those who fed the queen took in this queen substance via their proboscises and distributed it in the same way to others during food sharing. In 1962, ten years after Ribbands had stimulated his curiosity, Butler finally isolated the queen bee substance. He found that it was produced by the mandibular glands of the queen, and distributed all over her body through the act of grooming. The molecule, as he finally identified it, had a remarkable chemical resemblance to the ovary-inhibiting hormone of prawns. Butler, suspecting that its ovary-atrophying effects might well work on other phyla, experimented with various arthropods and vertebrates in hopes that this substance might be a hormone of universal application, but sorrowfully, he found no support for this supposition.

      It would be very interesting to discover whether or not the literal communication entity transmits an identical meaning across phylum barriers; whether, in effect, certain words in the chemical language mean the same thing to different classes of animals. In some instances, we know that they do. Insect alarm signals, the chemical exudations which comprise warning signals, are grossly similar in classes of insects. This seems to be generally true in animal communication. Courtship signals, for example, whether auditory, visual (behavioral), chemical, or a combination, seem to be highly specific. No other bird will respond affirmatively to a blue jay's courtship call, except another blue jay. But almost all the birds and mammals within range of its alarm call will respond alertly as the blue jay shrieks its warning through the woods. The same is true of insect chemical communication, and because of this it was possible for humans to devise insect repellents (which are essentially distillations of insect chemical alarm signals) which are effective across a wide spectrum of genera. The average personal insect repellent acts upon such a disparate collection of creatures as chiggers, ticks, and mosquitoes, and perhaps others as well.

Gypsy Moth

[This excerpt is from chapter 5 of The Parable of the Beast by John N. Bleibtreu.]

      During the summer of 1869 a French painter and amateur naturalist named Leopold Trouvelot took a house for the season in Medford, Massachusetts—at 27 Myrtle Street, to be exact. The address has become notorious in the annals of entomology. M. Trouvelot wanted to do some illustrations of silkworms in various stages of their development, and to that end had imported several eggs from Europe. He left them on a table in his work-room and waited for the larvae to emerge. It must have been a warm summer, and the table on which the eggs were located must have been near a window. Perhaps the rustling of a curtain blowing in the wind brushed them from M. Trouvelot's worktable and out the window into the yard. M. Trouvelot missed the eggs on the afternoon of their disappearance and made efforts to find them, for among the eggs were several belonging to a European moth, Porthetria dispar, known in England as the gypsy moth, perhaps because the color of the wings of the male imago resembles the color of tanned, windburned skin. Knowing full well this moth's European reputation as a destroyer of leafy trees, M. Trouvelot “gave public notice of [the eggs'] disappearance.” It was a pity that the public took no notice at the time.

      Within two years of this event, a vigorous colony of the moths had established itself on Myrtle Street and the citizens of the town of Medford found themselves living in a nightmare.

      There was a public hearing sometime later at which citizens spoke of their impressions of that summer of 1871. One Mrs. Belcher reported as follows: “My sister cried out one day, ‘They [the caterpillars] are marching up the street!’ I went to the front door and sure enough the street was black with them coming across from my neighbor, Mrs. Clifford, and heading straight for our yard.”

      A Mrs. Spinner stated: “I lived in Cross Street . . . and in June of that year [1871] I was out of town for three days. When I went away the trees in our yard were in splendid condition and there was not a sign of insect devastation upon them. When I returned there was scarcely a leaf upon the trees. The gypsy moth caterpillars were over everything.” A neighbor of Mrs. Spinner, Mr. D. M. Richardson, said: “The gypsy moth appeared in Spinner's place in Cross Street and after stripping the trees there, started across the street. It was about five o'clock in the evening that they started across in a great flock and they left a plain path across the road. They struck into the first apple tree in our yard and the next morning I took four quarts of caterpillars off one limb.” Mrs. Hamlin continues: “When they got their growth, these caterpillars were bigger than your little finger and would crawl very fast. It seemed as if they could go from here to Park Street in less than half an hour.”

      Mr. Sylvester Lacy reported: “I lived in Spring Street . . . and the place simply teemed with them and I used to fairly dread going down the street to the station. It was like running a gauntlet. I used to turn up my coat collar and run down the middle of the street. One morning in particular, I remember that I was completely covered with caterpillars inside my coat as well as out. The street trees were completely stripped down to the bark. . . . The worst place on Spring Street was at the houses of Messrs Plunket and Harman. The fronts of these houses were black with caterpillars and the sidewalks were a sickening sight, covered as they were with the crushed bodies of the pest.”

      The destruction in Medford was horrible. The landscape looked like a burned-out battlefield. In their desperation the insects had devoured even the grasses down to the bare dirt. The Massachusetts National Guard was called out to drench the countryside with Paris green, an arsenic compound. Artillery caissons were converted into spray wagons, but the moths continued to flourish.

      The plague spread very slowly. Female moths, though they do not appear markedly different from the males, are flightless. When they emerge from the pupa case, they can crawl only a few feet from that spot; there they are sought out by flying males and fertilized, so each new generation of eggs is laid not far from the previous one.

      Dr. Charles Fernald, the resident entomologist at the Massachusetts State Agricultural Station at Hatch, was the first professional zoologist to become involved with the moth. He was not at home when the first delegation of citizens from Medford appeared on his doorstep with specimens of the caterpillar and the moth. His wife searched vainly through his collection of American moths in an attempt to identify the animal, and it was finally his young son who discovered Linnaeus’ original description of the moth in a European book. Fernald found the havoc caused by the moth utterly incredible until he performed some tests of his own with lettuce leaves and found that a fully mature caterpillar would consume well over ten square inches of foliage in one twenty-four hour feeding period.

      He wondered how the male moths located their flightless females. Was it random accident? The vernacular name for the moth in France was “Zigzag,” which described accurately their erratic, wavering flight. Fernald suspected there might be some truth to the folklore account that it was a scent exuded by the female which attracted the males, and that their lumbering flight pattern had evolved in response to this need. In traveling from one point to another the moth lumbers wildly off course, to and fro and up and down: it would thus more likely encounter air-borne scent traces than if it flew sharp and straight like a swallow.

      Fernald devised a trap, a wire-mesh box with a funnel entrance, which would admit insects but not permit them exit, and using live females for bait, he attracted numbers of males into it. He began using these traps as a gauge of infestation in any given area. By placing the traps in an open location and leaving them for several days, he could get an impression of the population density from the number of males captured. By this time the plague of caterpillars had spread from Medford to other Massachusetts towns, probably carried by travelers such as Mr. Lacy who dreaded going to the railroad station covered with caterpillars, but went nonetheless.

      In all of this work by Fernald and in the work to follow, there was very little of the philosophical speculation which so marked the studies stimulated by von Uexküll or even Marston Bates working with his anopheles. It was all eminently practical. By 1915 the moths had spread as far north as Maine, and in August of that year entomologists had become curious about the range of this sex attractant. Traps containing live females were placed on several uninhabited islands off the coast of Maine, in Casco Bay. In one trap placed on Outer Green Island, better than two miles from the nearest moth, and left there for three weeks, two males were found when it was retrieved. In another trap placed on Billingsgate Island off the coast of Cape Cod, four males were found, though this trap was two and a quarter miles from the nearest point of infestation.

      To assure themselves that these males had actually flown the distance and not been blown by winds into the general vicinity of the traps, follow-up tests were conducted in a closed, air-still room twelve by fourteen feet, and the moth's flight speed was timed at approximately 150 feet per minute. Virgin males rarely flew uninterruptedly for much more than a mile before alighting, but a small number of sexually experienced males flew constantly for several days, covering well over two miles before eventually collapsing in death.
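
Simple arithmetic on these figures shows that the island traps were within reach of a persistent flier. The sketch below assumes straight-line flight at the timed speed; the moth's zigzag course would stretch the time considerably:

```python
# How long a moth flying at the timed speed of roughly 150 feet per minute
# would need to cover the distances to the island traps, assuming an
# idealized straight-line course.

FEET_PER_MILE = 5280
speed_ft_per_min = 150

for miles in (2.0, 2.25):
    minutes = miles * FEET_PER_MILE / speed_ft_per_min
    print(f"{miles} miles at 150 ft/min: about {minutes:.0f} minutes of flight")
```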

      This is curious in that prior sexual experience would seem to be a powerful motivating force for males. Fernald timed a number of gypsy moth copulations and reported that they ranged in time from twenty-five minutes to three hours and eighteen minutes, with the average time being about one and a quarter hours. He writes that “after mating, the male is quite stupid but in about one-half hour regains his normal activity” and is quite eager to mate again.

      The next episode in the annals of the study of the sex stuff of the gypsy moth is almost comic in its foolishness. For the amount of sophisticated effort that was lavished upon obtaining the information, very little was finally learned. But the gigantic power of the United States Government was now bent upon the project, and the elephant labored to produce the mouse. By 1927 the gypsy moth was no longer a regional problem; it had become a national concern, and the United States Department of Agriculture was asked to aid in its control.

      It was imperative now for the Government to obtain a census of the moth. Fernald's system of using live females as bait in traps was considered too dangerous: if, through some inadvertence, a fertile female should escape, the horrors of Medford might be re-enacted somewhere else. Two government entomologists, Charles W. Collins and Samuel F. Potts, were assigned the task of devising an extract of female scent to be used for bait. After dissecting thousands of moths and baiting traps with sections of tissue taken from various parts of the female anatomy, they finally discovered (not to anyone's great surprise) that the scent was produced in the tissues immediately surrounding the female genitalia, particularly the area immediately surrounding the opening of the copulatory pouch. But though the Government contracted the services of several eminent university chemistry professors, no one was able to isolate and identify the chemical responsible. However, the Government is obsessed with the establishment of standards and specifications, and the notion of utilizing female genitalia (even if only that of a moth) as official United States Government equipment must have been repellent to the authorities in charge of the program, for thirty years were spent in the attempt to isolate the chemical in its pure form. Finally in 1960 three chemists, Martin Jacobson, Morton Beroza, and William A. Jones, working at the Beltsville, Maryland, United States Government Agricultural Research Station, succeeded. They had dissected the genitalia of well over half a million female moths in order to determine the official specifications for the scent bait to be used in United States Government gypsy moth census traps. For the record, the name of the chemical is dextrorotatory 10-acetoxy-1-hydroxy-cis-7-hexadecene.


You can see a video on gypsy moths by YouTube at
GYPSY MOTHS

Marine Worms

[This excerpt is from chapter 5 of The Parable of the Beast by John N. Bleibtreu.]

      Since the most primitive existing organisms inhabit the sea, one of the earliest and most curious studies of chemical communication was conducted by two marine biologists, Frank Lillie and Ernest Everett Just, in 1913. One summer night during the dark of the moon they set out from the Woods Hole Biological Laboratory in a rowboat with a carbide lamp swinging from the bow. They wanted to know more about a most peculiar phenomenon, the swarms of minute marine worms which covered the surface of the sea at certain times like a red carpet, rising and falling in the gentle ocean swell. Lillie wrote the report: “The swarming usually begins with the appearance of a few males, readily distinguished by their red anterior segments and their white sexual segments darting rapidly through the water in round paths in and out of the circle of light cast by the lantern. The much larger females then begin to appear, usually in small numbers, swimming laboriously through the water. Both sexes rapidly increase in numbers during the next fifteen minutes and in an hour or an hour and a half all have disappeared into the night.”

      Lillie and Just imagined that this assembly was connected with reproduction. Males and females were obviously swollen with sperm and eggs; but they were unable to observe the mating act under these field conditions, so they captured several specimens and brought them back to the laboratory, putting them in separate containers to see what would happen. They believed it likely that a lunar rhythm controlled both the assembly and the actual spawning activity. They wanted to see whether the females dropped their eggs first, to be fertilized immediately thereafter by the males shedding sperm, or whether perhaps the sequence would be reversed. But they were disappointed. Nothing happened until, as is so often the case in science, there was a procedural accident. “One day,” Lillie writes, “a male was dropped accidentally into a bowl of seawater which had previously contained a female. He immediately began to shed sperm and swam round and round the bowl very rapidly casting sperm until the entire 200 cubic centimeters was opalescent.” The female had obviously left some chemical trace of her presence in the water, which stimulated the male. But the female had not previously shed any eggs into this water. Lillie then placed a female in the bowl in which the male had shed his sperm, and the female immediately cast her eggs.

      Lillie and Just were unable to extract or identify this substance. They did determine, however, that it was highly species-specific. The worm they worked with was Nereis limbata, and when they attempted to repeat the experiment with closely related species of worms, there was no response. This particular chemical language was understood only by N. limbata. No other worm could comprehend it. In 1951 a team of German biochemists attempted to analyze this (as they called it) “sex stuff,” without too much success. They did discover that it was a protein substance which oozed out from the entire body of the egg-bearing female. This stimulates the male into shedding sperm. Chemical elements in the semen stimulate the female into casting her eggs into this floating mass of sperm. The system is very effective. It does not require any auditory or visual communication system.

Malaria and Mosquitoes

[These excerpts are from chapter 4 of The Parable of the Beast by John N. Bleibtreu.]

      No less an authority than Sir William Osler, the famous British physician and medical historian, has called malaria “the single greatest destroyer of mankind,” and considering the vast array of ills that human flesh is heir to, including the malice of other humans, the odd little parasite which produces the disease has accomplished a remarkable record. It may yet prove to have been the single greatest destroyer of vertebrates generally, and therefore one of the most potent pruning hooks of the natural selection process, for contrary to popular notion, it is neither restricted to the tropic zones nor to warm-blooded mammals. Fish, reptiles (particularly lizards), and birds have all been discovered suffering from the disease. It has spread so far from the tropics, which may well have been its original birthplace, that penguins living well below the Antarctic circle have been discovered suffering from it, though in this latter instance, instead of a mosquito vector being the intermediate host, a louse or mite is suspected of carrying the parasite.

      Several authorities believe it has had an incalculable effect on human history, particularly the development of modern civilizations in the temperate zones where there is a shorter summer activity season for the adult female mosquito. The disease is also considered partially responsible for the decline of several great Mediterranean cultures, particularly that of Athens between the fifth and third centuries B.C., whose literature abounds with excellent clinical descriptions of the classic symptoms. It may have worked analogously in other animal phyla, sapping its victims of energy, creating brain damage with resulting sluggish neurological reactions, so that many populations of animals which were superbly fitted to cope with other elements of their environment may have perished because of their inability to withstand the ravages of this disease. Paul F. Russell, Chairman of the World Health Organization's Committee on Malaria, estimates that the total number of contemporary cases of human malaria, as recently as 1952, ran about 350 million, or roughly 6.3 per cent of the world's population….

      In its vertebrate host, the animal produces the major portion of its population inside the red blood corpuscles. There is a little-understood transitional phase when the animal enters the host and resides in one of the organs (the spleen and liver are favored locations in primates), and there may be some reproduction—enough to give the population a start—somewhere other than in the red blood cell. But this latter location is where the vast bulk of the population resides. The red corpuscles are living cells shaped like a concave disk, composed mainly of hemoglobin. They are produced by specialized cells within the marrow of the long bones of the body. The corpuscles live for about fifty to seventy days, and upon dying, their corpses are destroyed by processes occurring in the liver, the spleen, and the lymph nodes. These latter organs are the first to suffer from the effects of the disease. They become enlarged and overworked in an attempt to cope with the increased mortality of corpuscles, and if the parasite population is not stabilized by one means or another, they ultimately fail in their functions. But before this happens, other unpleasant effects are felt. Hemoglobin, the stuff on which the parasite lives, is an oxygen transport material transferring the oxygen taken in by the lungs to the various body tissues requiring it. If a person dies of malaria, his body literally smothers to death. The capillaries become distended, clogged with dead and dying cells, blood flow is impeded, and the brain—one of the most voracious users of oxygen—will be damaged by oxygen deprivation. If the damage is not too extensive, the brain can transfer functions from destroyed tissues to intact ones, but in many cases of malaria fatality, the mortal blow to the individual was dealt in the brain itself.

      The parasite enters into its vertebrate host as a tiny hairlike creature, called a sporozoite, which matured within the stomach of a female mosquito. The sporozoite is injected into the vertebrate as the mosquito spits its irritating saliva into the wound created by its proboscis mouth. In the late nineteenth century Alphonse Laveran, a French medical officer stationed in a military hospital in Algeria, first discovered the creature. Considering the tremendous technological advances which have occurred since then, surprisingly little more is known about the parasite….

      The crescent bodies which Laveran described were the asexually reproducing forms of the protozoa, now known as schizonts. This schizont form is the one that the animal assumes to best exploit its vertebrate host. As it grows within the red blood cell, it produces forms called pseudopodia—literally false feet—which grow out in seemingly haphazard fashion from any part of the original schizont until the cell is completely filled with this Medusa-head mesh of living matter. These false feet now break away from one another; and as this happens they are given another name, merozoites. They become as active as a swarm of snakes, and the blood cell ruptures under the pressure of their writhing. This cycle may take anywhere from one to three or four days—the period required is one of the species criteria. As the corpuscle ruptures, the merozoites escape into the surrounding plasma and each one actively squirms its way into another cell. Sometimes, if the population is dense, two or three may enter a single cell, but in that case future growth is stunted. The number of these merozoites which derive from any given schizont is also a species criterion. In the most virulent human form of the disease, the production averages about sixteen merozoites per schizont.
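
A rough sketch of what sixteen merozoites per schizont implies for the infected-cell count follows; it ignores the immune response and the finite supply of red cells, both of which check real infections:

```python
# Idealized growth implied by the figure quoted above: each schizont yields
# about 16 merozoites, each of which invades a fresh red blood cell, so an
# unchecked population would grow as 16**n after n synchronous cycles.

merozoites_per_schizont = 16

for cycle in range(1, 6):
    infected = merozoites_per_schizont ** cycle
    print(f"after cycle {cycle}: ~{infected:,} infected cells from one schizont")
```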

      Almost the entire population of parasites reproduces simultaneously; this causes the classic symptoms of malaria—a sudden chill which leaves the victim blue-lipped and shaking with an ague which can last a few minutes or an hour, and which is followed immediately by a furious rise in temperature, up to 107°F, often accompanied by delirium, frightful aches in the bones, vomiting, etc. This paroxysm ends after about four hours, and is followed by a relatively tranquil sweating stage during which the patient may fall into an exhausted sleep. The victim may then be relatively without symptoms until the next period of schizont reproduction, which occurs two or three or four days hence, depending on the species of parasite. The majority of these paroxysms occur, at least in the tropic forms of malaria, after midnight and before dawn. The reproductive cycle of the parasites is tuned to the favored time for the mosquitoes to seek out food. In temperate zones, or locations where the mosquitoes feed in the daytime or early evening, the symptoms of the disease occur then.

      For some reason that we still do not understand, a few members of this parasite population do not assume the asexual schizont form. These are the “oval forms” described by Laveran, and it is in their interest that the cycle of the schizonts occurs when it does, during the mosquito's favored feeding time, for if these oval forms remain in the blood of a vertebrate they will perish. They are sexual animals, and in order to complete their mating activities, they require a special setting, the unique and particular environment that obtains within the gut of a female mosquito. Not any mosquito—but a particular species of mosquito—and it was this host preference of the malaria parasite that led obtuse human entomologists to separate into species categories several populations of anopheles mosquitoes that had appeared identical.

      The particular set of conditions which prevail within the mosquito gut have never been reproduced in the laboratory, so no one has any idea what these requirements may be. But the full cycle of parasite mating activities can only occur in this setting. Some of the preliminary transformations of these oval forms into free-living sexual animals can occur outside the mosquito's gut—on a microscope slide—though it is not believed that they can occur in the bloodstream of the vertebrate host. The transformation happens fairly quickly, within a span of ten minutes or so. Some of the oval bodies contain male spermatozoa; they burst, rupturing the corpuscle as they do, and releasing a swarm of swallow-tailed sperm into the surrounding plasma. These active creatures, which swim about lashing their double tails, are known as microgametes. The other oval bodies are females. Under the microscope, if one is careful about the staining procedures, one can differentiate between the two types of oval bodies. One can sometimes see within the oval envelope which surrounds the microgametes the compressed community of males, swarming and writhing, at least at a certain stage of maturity shortly before the males are released.

      The female oval bodies, known as macrogametocytes, appear dense and more coherent; they also grow larger and finally explode out from the cell which surrounds them through the sheer increase in bulk. Once released from the blood cell they are mobile, capable of some movement; given a surface to cling to, they screw themselves about with a hideous scrunching worm-like movement. In the mosquito gut, and occasionally on a microscope slide, they will develop still further, growing an odd humping protuberance, which seems to exert an attraction for the males. If any microgametocyte finds himself in the vicinity of this hump, he lashes himself toward it and enters it like a battering ram. Then genetic materials are exchanged and the female is fertilized. As this happens, she becomes endowed by scientists with still another name, oocyte, and from this point on any further development must occur within the gut of an appropriate species of mosquito.

      As the mosquito sucks up its meal of blood, it takes into its gut, along with the sexual forms of plasmodium, the asexual forms: schizonts and any free-swimming merozoites which happen to be about. These creatures are digested along with the ingested vertebrate's blood. Only the sexual forms resist digestion.

      After being fertilized, the female screws herself deeply into the mosquito’s gut wall where, if conditions are proper, she will be enclosed within a cyst. She now grows to a huge size, large enough to be visible to the naked eye. She mushrooms through the mosquito’s gut wall, and continues growing from the other, outer, side. Dissecting a mosquito to remove the gut is not as difficult a procedure as may be imagined. It can be done by anyone who knows the trick with two ordinary pins on any smooth surface. The gut comes right out like a tiny piece of brown spaghetti, and the mature oocytes can clearly be seen protruding like mushrooms on small stalks from its outer wall. Eventually these oocytes burst, releasing a horde of small hairlike animals similar in appearance to microgametes. But their lashing tails, instead of extending from the rear of the body, extend fore and aft; the creature looks like a transparent snake with its opaque head carried amidships. The sporozoites now swim actively around within the fluids inside the mosquito’s body cavities until eventually some of them enter the salivary glands. Once there they seem content to remain even though, after a period of time, the population may become quite dense. There they collect and wait for the mosquito to spit them out along with its irritating saliva into the body of a vertebrate animal.

      It is believed by some malariologists that sporozoites have the ability to penetrate tissue, finding their way into the nearest capillary by burrowing right through the flesh. It is thus conceivable for a person to contract malaria even if he is not bitten by a mosquito—even if he squashes the insect on his skin before it has a chance to bite, the sporozoites may still be able to enter into his bloodstream and infect him. Once in the bloodstream the parasites migrate rapidly to the liver or the spleen (in the case of primates), where they begin reproducing, and eventually the population spills out into the red blood cells, where the parasites travel throughout the body as a whole, each enclosed in its hospitable capsule, a red blood cell.

      Interesting as this recital of the malaria parasite sequence may be, both in itself and as an example of parasitic adaptation, it may still seem far removed from the species problem. Yet it was largely through the concerted efforts of several malariologists that the modern “sexual” concept of the species, the so-called New Systematics, developed. Mounting field trips into remote areas with special equipment to study animal sexual relationships is an expensive business. While involved in their mating activities, many animals are peculiarly vulnerable to predators; they make strenuous efforts to accomplish the mating act under conditions of great privacy—in darkness, in inaccessible locations, etc.—and at this time more than any other, they resist observation.

      But since detailed and intimate knowledge of the entire generative cycle of both the mosquito and its infectious parasitic passenger was of such crucial importance to the health of mankind, no expense was spared in sending men and equipment anywhere in the world where the mosquito was suspected to exist so that all the information possible about the mosquito’s habits generally, its round of daily activities—its feeding habits, resting habits, those habits connected with copulation and oviposition—could be obtained for the purpose of destroying the mosquito more effectively and economically; and with their destruction, hopefully, control of the disease they carry.

      As with most blood-sucking insects, it is the female who requires the blood for the maturation of her eggs. The male anopheles drinks the juices of plants and fruits. In the laboratory it feeds happily on apples and raisins; the skins of these fruits represent the limit of the penetrating powers of its proboscis, which is blunter and more flaccid than the female’s.

      The female proboscis is a complicated instrument composed of comparatively rigid members—the jaws, which have grown together into a tube during the course of evolution. The lips remain flexible members; the insect uses them as a retractable guide-sleeve while introducing the proboscis into tissue, for the act of biting is an introduction, an insinuation; it is not a stab. The tip of the proboscis, for the final third of its length, is more flexible than the rest, something like the tip of a fishing rod, and once the main entrance into the tissue has been made, this tip searches for a capillary at an angle of about 45°, first probing in this direction, then being withdrawn and redirected into another direction, and so on, until it strikes a capillary. Then the saliva, which contains some decongestant properties, is injected to “thin out” the blood and the insect begins sucking it up, often taking blood in the amount of its own body weight. After this it flies away to a resting place where it shall remain for at least two days (depending on species) until the blood is digested and the eggs matured. The next act is that of oviposition, and for this the insect must fly to a body of water (a rain-filled hoofprint is enough) to lay its eggs, for the larva which shall emerge from the egg is an aquatic creature.

      It has been reckoned that the lifespan of the average mature female anopheles is somewhere around a week or so. One specimen was maintained in the laboratory for eighty-six days, during which time it had six blood meals and laid six clutches of eggs; but this is considered exceptional by malariologists. The general feeling is that only the rare mosquito has more than two blood meals during the course of her lifetime. Here is another example of how absurdly unfavorable the environment of the plasmodium parasite would seem to be: the time required for the development of the oocysts in the gut of the mosquito varies with environmental temperature, shortening as the temperature rises, but on the average it seems to take about a week. The average parasite has then only one opportunity to re-establish its population within a new vertebrate host. And yet the prevalence of the disease testifies to the operational effectiveness of this seemingly unforgiving system. One chance for success is all that most animals get, and this one chance is sufficient.

      The species problem only entered into malaria studies as the carrier of Europe, Anopheles maculipennis, began to be intensively studied during the 1920's and 1930's. A. maculipennis is unmistakable, a small, rather darkly speckled insect sitting high on its spindly legs and pitched at a distinctly downward angle. Its rearmost legs are normally held up in the air off the surface, even when the insect is resting, not preparing to bite, and the body is tilted downward as though it intended to stand on its head. Most resting mosquitoes carry their body horizontal to the surface. As malariologists all over Europe concentrated their attention on A. maculipennis, certain inconsistencies became apparent. The first disturbing note appeared in 1920 when the French malariologist Emile Roubaud published a paper noting the abundance of A. maculipennis in many parts of France where malaria had never been reported. Quite independently, in the next year, Carl Wesenberg-Lund reported the same situation in Denmark, as did Battista Grassi for Italy. Wesenberg-Lund believed that his Danish mosquitoes had changed their food preferences. Roubaud and Grassi were closer to the truth in their speculations; they believed they had discovered a new “race” or subspecies of maculipennis. They believed they were dealing with a “race” rather than a new species, because there was nothing in the appearance of the adult to differentiate these benign insects from the disease carriers.

      But then a Dutch entomologist, Nicholas H. Swellengrebel, given financial support for his studies as the result of an outbreak of malaria in the Netherlands, found that there was a difference between the carriers and the benigns. One could not discern it by comparing any two individuals from either population, but statistically he was able to determine average differences in the length of wing in each population; he named one of them, the suspected malaria carriers, “shortwings” (which has since been made over into formal Latin as atroparvus), and the other, the benign population, “longwings.” Assisted by a colleague, Abraham de Buck, he went on, in the manner of a good ethologist, to discover that between the populations the entire repertoire of behaviors differed: their feeding habits, adult mating habits, their larval breeding places—everything differed except their appearance. Linnaeus’ doctrinal shroud still blinded both these men, and they hesitated (they wrote that they considered it “inadvisable”) to give separate Latin species names to these two populations. The next step was obvious: would they interbreed and would the offspring be fertile?
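
Swellengrebel's statistical point lends itself to a small illustration. The wing lengths below are invented, not his data; they show how two populations can overlap so much that no single pairwise comparison settles the question, while the averages still separate cleanly:

```python
# Hypothetical wing-length distributions for two overlapping populations,
# illustrating how a difference invisible in any two individuals shows up
# reliably in the population means. All numbers are invented for illustration.

import random
random.seed(1)

shortwings = [random.gauss(4.0, 0.25) for _ in range(200)]  # lengths in mm
longwings = [random.gauss(4.3, 0.25) for _ in range(200)]   # lengths in mm

def mean(xs):
    return sum(xs) / len(xs)

# Fraction of "shortwings" nevertheless longer than the longwing average:
overlap = sum(s > mean(longwings) for s in shortwings) / len(shortwings)

print(f"mean shortwing: {mean(shortwings):.2f} mm")
print(f"mean longwing:  {mean(longwings):.2f} mm")
print(f"shortwings longer than the longwing average: {overlap:.0%}")
```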

      The shortwings would and did. Males would buzz any resting female within a cage no matter how small it was, and mount her. Matings between female longwings and male shortwings produced infertile offspring like horse and donkey matings. This was certainly diagnostic of a species difference. But the longwing males could not be induced to mate in small cages, or in large outdoor cages. They simply would not mate in captivity. This was curious. Were the pursuit of this curious phenomenon to be conducted purely for purposes of enlightenment, we would probably still be waiting for the answer. But a dread disease had struck a civilized nation, and so investigations leading to the solution of this problem were supported by serious men sitting on the boards of reputable institutions; it was no longer an eccentric obsession on the part of a handful of entomologists.


You can see a video on fighting malaria by Politifact at
FIGHTING MALARIA

Evolution of Species

[This excerpt is from chapter 4 of The Parable of the Beast by John N. Bleibtreu.]

      Today it is taken for granted that evolution occurs; the idea of phylogenetic development is accepted as is the idea of ontogenetic development. But our understanding of the time scales involved is completely missing. We simply have no idea how long it takes for almost anything to happen in evolution—or why rates should be so uneven. It would seem that certain populations of animals are halted in a hiatus of attentive expectancy for very long periods of time. In our order of primates, for example, there are the families of tree shrews, lemurs, lorises, tarsiers; for the most part squirrel-like insectivores which have not changed nearly so radically from their fossil forebears as have we. Darwin's hypothesis, coupled with the discoveries of modern genetics, has given us a retrospective comprehension of what must have happened. We know that branches grow apart from the trunks of trees, and that their individual growth will depend in part on the vicissitudes of their particular location; they will either flourish if they have access to light and space, or remain stunted if they are deprived. The analogy persists in the “family trees” of animal relationships, and insofar as evolutionary theory goes, the most fascinating part of the process is the one that occurs right at the point of branching.

      How does it occur? Geographic isolation is not the answer. “San Francisco Bay,” Ernst Mayr writes, “which keeps the prisoners of Alcatraz isolated from the other [human] inhabitants of California, is not an isolating mechanism, nor is a mountain range or a stream that separates two populations that are otherwise able to interbreed.”

      The species problem, when finally examined at its roots, involves an understanding of sexual behavior, of compatibility and incompatibility. What happens is that suddenly a splinter portion of any given population no longer chooses to interbreed with the main body. The members of this splinter party suddenly begin to interbreed exclusively with one another, rejecting potential mates from outside the group.

      As a result they share the genetic memory of their communal experiences with one another and develop their particularities aloof from the parent population. In the past fifty years the fallacies inherent in the Linnean system of classification on the basis of appearance have caused zoologists to formulate a classification based on something other than appearance. Appearance can vary with the individual. For example, prior to 1950 the weasels of North America were classified into twenty-two different species. A patient zoologist, Eugene R. Hall, after observing them for a long period, published a 466-page paper in 1951 which finally convinced his fellow taxonomists that the weasels of North America really belonged to only four separate species. The typical species gap appeared between these four weasel populations. All the rest of the varying animals were merely subspecies, or races.

      How then is the species defined if not on the basis of appearance? How is this gap between populations perceived? It is perceived in sexual terms. Ernst Mayr defines the species as follows: “The species, finally, is a genetic unit consisting of a large interconnecting gene pool.” Interconnecting is the operative word in this definition. So long as the pool is interconnecting, the population is capable of fermenting—producing its own interior variations. Only when it ceases to be interconnecting, when a discontinuous splinter becomes a separate fragment, does it become a species embarked on the road toward extraordinary differentiation—such differentiation, for example, as exists between the gibbon, the gorilla, and ourselves.

Pineal Body

[This excerpt is from chapter 2 of The Parable of the Beast by John N. Bleibtreu.]

      The pineal body is a small grey or white structure about a quarter of an inch long and weighing about a tenth of a grain in man. It is located at the very top of the spinal column where the neck enters the skull. It is the only structure in the brain which is not bilaterally symmetrical. Draw a mid-line down the brain from front to back and everything appearing on one side of this mid-line is duplicated on the other—except for the pineal body. This fact alone has always made it distinctive and a curiosity to anatomists. It is shaped roughly like a pine cone—hence its name. Descartes was not entirely original in his description of the pineal body as the site of the soul. The Greek anatomist Herophilus of the fourth century B.C. described the pineal as a “sphincter which regulated the flow of thought.”

      Herophilus in turn may very well have come by his notion of pineal function from India, where speculation about this apparently useless appendage to the brain stretches back into the darkness of prehistory, perhaps as far as 3,500 years.

      The Hindus recognized something about the pineal body which had escaped the intuition of Western anatomists from Herophilus to Descartes right up to the year 1886, when by apparent coincidence two monographs on the subject were published independently, one in German by H. W. de Graaf, and one in English by E. Baldwin Spencer. The Hindus recognized from the very beginning that the pineal body was an eye! As such it is represented in oriental art and literature as the Third Eye of Enlightenment.

      The third eye of certain lizards and fish was well known; the eye was unmistakable in the Sphenodon genus of lizard, which contains the famous species Tuatara of New Zealand. In this creature the third eye is marked distinctly by the skull being opened in a central cleft, and the external scales arranged in a kind of rosette with a transparent membrane in the center. When dissected, this organ proved to have all the essential features of an eye: there was a pigmented retina, which surrounded an inner chamber filled with a globular mass analogous to a lens, but the connecting nerves were absent, and the anatomists of the nineteenth century decided that the organ was without visual function in the Tuatara. It was primarily de Graaf and Spencer who proved that this organ in the Tuatara lizards was the same organ that became buried and was rendered indistinct in function in mammals and was known as the pineal body.

      In 1958 two zoologists, Robert Stebbins and Richard Eakin from the University of California at Berkeley, did ethological studies on the common western fence lizard (Sceloporus occidentalis), capturing two hundred animals, removing the pineal eyes of one hundred, and performing a sham operation equally traumatic, but which left the pineal eye intact in the other hundred. They found that removing the pineal eye markedly affected behavior, particularly escape reactions. After recovering from the operation, the animals were released in their natural habitat, whereupon Stebbins and Eakin chased and tried to capture them. Before 10 A.M., they were able to capture 63 per cent of the pinealectomized lizards as opposed to 37 per cent of the sham-operated animals. After 10 A.M. and until nightfall the percentage of captures was more equal, though the sham-operated animals always managed to escape somewhat better than the pinealectomized ones. This experiment proved that, contrary to all the previously held notions about the vestigial nature of this third eye—that it was a remnant as useless to lizards as the appendix supposedly is to humans—possession of an intact pineal eye has a marked survival value. It was most emphatically not nonfunctional.
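
      The morning capture figures above are worth a quick check. The following is a minimal sketch, my own illustration rather than anything from the original 1958 study: a standard two-proportion z-test on the 63-of-100 versus 37-of-100 capture counts, showing that a split this lopsided is very unlikely to be chance.

```python
# Two-proportion z-test on the capture counts quoted above
# (63 of 100 pinealectomized vs. 37 of 100 sham-operated lizards).
# Illustration only; the original study is not quoted as using this test.
from math import sqrt, erfc

x1, n1 = 63, 100   # pinealectomized lizards captured before 10 A.M.
x2, n2 = 37, 100   # sham-operated lizards captured before 10 A.M.

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)                      # pooled capture rate
z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
p_value = erfc(abs(z) / sqrt(2))                    # two-sided normal tail
print(f"z = {z:.2f}, two-sided p = {p_value:.1e}")  # z = 3.68, p = 2.4e-04
```

      A difference this large in samples of one hundred would arise by chance only a few times in ten thousand, which supports the authors' reading that the intact pineal eye carries real survival value.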

      In mammals the pineal body was recognized to be a gland by the early Latin anatomists, who rated it more important than the pituitary, which is now considered the “master gland.” The Latin writers named the pineal “glandula superior” and the pituitary “glandula inferior,” but from the nineteenth century until just within the past ten years modern anatomists have been divided into two distinct camps—those who believed the pineal was an endocrine gland producing some hormonal excretion, and those who believed it was a useless vestigial appendage of the brain, a leftover reptilian eye. It was not until 1958 that the secretion of the pineal gland was isolated and identified, and the importance of the pineal as a time-sensor in humans finally established. Strangely enough this tremendously important work was done by a dermatologist, Aaron B. Lerner of the Yale University Medical School, and not by an endocrinologist. The story of how this came to pass is sufficiently curious to warrant a brief diversion.

The World within Us

[This excerpt is from chapter 2 of The Parable of the Beast by John N. Bleibtreu.]

      Partly, this disregard of a literally painful biological fact of life stems from the ancient Judeo-Christian dichotomy, the separation of body and mind into quite separate compartments. The bodily effects of such traveling are well known to athletes, who usually schedule their arrival well in advance of competition so as to allow a period of recuperation. Yet businessmen or diplomats often allow themselves no more than a quick sprucing up at a hotel before being whisked off to meet the opposition.

      The mind is as much in the body as the body is in the world. The body penetrates the mind just as the world penetrates the body. We like to believe, since we see ourselves as enclosed within a shield of skin, that we are demarcated from the world by this envelope of skin, just as a theater curtain separates the audience from the stage before the performance. But the skin is a porous membrane. Electrically and chemically the world moves right through us as though we were made of mist.

      By and large we are unaware of the presence of the outside world within us. We are even more unconscious of the breathing of the skin’s pores than we are of the intake and exhaling of the lungs. We do not feel the penetration of cosmic particles. This part of the world is all but unknown to most of us. And yet it is as the world enters into us, with its force and influence, that we become one with it.

Sand Fleas

[This excerpt is from chapter 2 of The Parable of the Beast by John N. Bleibtreu.]

      An Italian zoologist, Floriano Papi of the Zoological Institute of the University of Pisa, had become interested in the sand flea, Talitrus saltator. Though called a flea, this tiny creature is not an insect, but a crustacean distantly related to the shrimp. It normally inhabits that wet band of beach sand which is just out of the battering reach of the surf. During the bright part of the day, the late morning and early afternoon, it remains hidden in underground tunnels, obtaining its food by filtering plankton from the seawater which soaks through and collects in its tunnels. But around sunset the sand fleas emerge from underground and embark upon a strange migration, traveling inland for a distance of one hundred meters or more. Papi was fascinated by several aspects of this migration. It appeared to be purposeless, at least in any economic sense. It was not a food search, for the flea is unequipped to ingest any solid food. Differing numbers of animals make the trip each night. During certain periods large numbers of animals make the voyage, while at other times only a few may be found. Papi could not discover any associated phenomenon that correlated with this change in numbers. It did not seem to occur in connection with any easily observed astronomical occurrences, and Papi suspected that perhaps weather-induced barometric or humidity changes might have triggered more fleas at one time than another.

      Around sunrise the fleas return to the beach and the water’s edge. But if they are interrupted at any point during the course of their migration, they will attempt to escape threat by returning to the sea. Papi found that they returned to the sea unerringly, even when he caught them and transported them to a different location where familiar landmarks were absent. Strong offshore winds scattered the sand and continually reshaped it into different patterns, so that even under normal circumstances it would be difficult for them ever to locate themselves by means of a stable set of Euclidean reference points the way we humans do. Suspecting that perhaps sand fleas sensed the presence of the sea by some stimulus such as the sound of the surf, or differences in humidity, or scent, Papi caught some fleas and transported them across the boot of Italy to the eastern shore, the Adriatic seacoast, where he released them a few meters inland from the surf line. He was astonished to discover that instead of proceeding directly toward the nearby beach, they struck off the way they had come, apparently determined to march overland for more than one hundred kilometers in order to reach their familiar Tyrrhenian beach.


Dancing Bees

[These excerpts are from chapter 2 of The Parable of the Beast by John N. Bleibtreu.]

      This discovery came by accident. “When I wish,” he wrote, “to attract some bees for training experiments, I usually place upon a small table several sheets of paper which have been smeared with honey. Then I am often obliged to wait for many hours, sometimes for several days, until finally a bee discovers the feeding place. But as soon as one bee has found the honey, more will appear within a short time—perhaps as many as several hundred. They have all come from the same hive as the first forager; evidently this bee must have announced its discovery at home.” How was this knowledge communicated?

      Von Frisch first located the hive. Then he made arrangements to identify each individual bee, so as to determine whether there were communicators of information and receivers of information, or whether all bees in the hive were equally able to give out and receive information. He devised a two-digit color code for the abdomen and a single-digit color code for the thorax. He suspended pigments in shellac and placed dots of color on each animal until he could identify each of 999 bees in a hive.
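
      The marking scheme is easy to picture in code. Below is a minimal sketch, my own illustration of the arithmetic rather than von Frisch's actual paint layout: one thorax digit for the hundreds place and two abdomen digits for the tens and units together cover exactly 999 bees.

```python
# Sketch of a two-digit abdomen / one-digit thorax numbering scheme.
# The digit-to-paint-dot mapping is assumed, not von Frisch's actual layout.

def mark(bee_id: int) -> tuple[int, tuple[int, int]]:
    """Split a bee's ID (1-999) into one thorax digit and two abdomen digits."""
    if not 1 <= bee_id <= 999:
        raise ValueError("the code covers at most 999 bees")
    thorax = bee_id // 100                         # hundreds digit on the thorax
    abdomen = ((bee_id % 100) // 10, bee_id % 10)  # tens and units on the abdomen
    return thorax, abdomen

print(mark(347))  # (3, (4, 7)): thorax dot "3", abdomen dots "4" and "7"
```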

      He then moved the colony into an artificial hive with glass walls, so he could observe the activity within. He noted that when a bee discovers a new source of food, it returns to the hive and “begins to perform what I have called a round dance. On the same spot, he turns around, once to the right, once to the left, repeating these circles again and again with great vigor. Often the dance continues for half a minute or longer at the same spot.” The dancer may then move to another part of the hive and repeat the dance.

      Von Frisch also noted another dance, which he called “the tail-wagging dance.” In this performance the bees “run a short distance in a straight line while wagging the abdomen very rapidly from side to side; then they make a complete 360° turn to the left, run straight ahead once more, turn to the right and repeat this pattern over and over again.”

      The deciphering of the coded messages of these dances proved frustratingly difficult. After von Frisch finally succeeded in breaking them, in the late 1940's, they seemed simple enough. But the initial difficulties centered on the fact that human language is digital, while animal communication is analog. Language is symbolic, and though we humans make analogies (slippery as an eel, strong as an ox, etc.), the analogies are rarely built into the structure of language. For example, if I say I am sad, or I am angry, the precise degree of sadness or anger is not conveyed. Even the addition of an adverb, as in very angry or somewhat sad, is not enough. To communicate the degree, the exact point upon a sliding scale, we must resort, like animals, to analog communication devices—tones of the voice, facial expressions, gestures, postures, etc. In mathematical terms a slide rule is an analog computer, whereas an adding machine is a digital computer. When using the slide rule, the analogy or relationship of one scale to another is immediately and directly visible, while in an adding machine digits must be mechanically piled on top of one another, or removed from the pile, and no relationships are apparent.

      Since conventional code-breaking techniques were devised for deciphering digital symbolic communications, von Frisch was compelled to begin at the very beginning, trying to find the analogy between the pantomime of the bee's dance and the location of the food source. It took him many years of frustrating, painstaking effort.

      Now that we know the key, the problem seems childishly simple. But it only became apparent to von Frisch gradually, as he began moving his food source progressively farther from the hive. After the food was moved more than fifty meters from the hive, the bees stopped indicating its direction by means of the round dance, and transferred to the tail-wagging dance. Von Frisch found there were correlations between the duration of the dance and the flying time needed to reach the source. Tail winds or headwinds would alter the length of the performance even though the food source was equidistant in terms of meters.

      Then it finally happened—the connection was made between directional space and circadian rhythms. Von Frisch writes of this momentous discovery quite matter-of-factly: “When we watched the dances over a period of several hours, always supplying sugar at the same feeding place, we saw the direction of the straight part of the dances was not constant, but gradually shifted so that it was always quite different in the afternoon from what it had been in the morning. More detailed observations showed that the direction of the dances always changed by approximately the same angle as the earth’s rotation and the apparent motion of the sun across the sky.”

      …The next step was simple enough. The movement of the sun was regular, but its position was never constant. If the bee were to indicate direction by analogy to the position of the sun, there would have to be some stable convention by which the analogy could be presented in dance form. Von Frisch noted that the dance always took place upon a vertical surface. So the next great leap in the breaking of the code came when von Frisch hypothesized that within the hive, in darkness, the one stable directional indicator might be the force of gravity. Working from this assumption, he finally succeeded in breaking the code completely. “If,” he writes, “the run points straight down, it means ‘Fly away from the sun to reach the food.’ If during the straight portion of the dance the bee heads 60° to the left of vertical, then the feeding place is situated 60° to the left of the sun.”
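
      The convention von Frisch describes reduces to one line of arithmetic. The sketch below is my illustration of the rule exactly as quoted above: straight up on the comb stands for the sun's direction, and the waggle run's angle from vertical equals the food's angle from the sun's azimuth. The sun azimuth used in the example is hypothetical.

```python
# Decoding the tail-wagging dance by the gravity convention quoted above:
# straight up = toward the sun; an angle from vertical = the same angle
# from the sun's azimuth.

def food_bearing(sun_azimuth_deg: float, dance_angle_deg: float) -> float:
    """Compass bearing to the food.

    dance_angle_deg is measured clockwise from vertical, so -60 means
    60 degrees to the LEFT of vertical (hence 60 degrees left of the sun).
    """
    return (sun_azimuth_deg + dance_angle_deg) % 360

sun = 135.0                       # hypothetical sun azimuth, in the southeast
print(food_bearing(sun, 180.0))   # run straight down -> away from the sun: 315.0
print(food_bearing(sun, -60.0))   # run 60 deg left of vertical -> 75.0
```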

      However, there was another complication. Even on cloudy days, when the position of the sun was not apparent to the bees, they continued to communicate the location of food sources by the analogy of the dance. His first thought was that the bees were aware of the azimuth of the sun even when it was invisible….He tested this assumption by providing artificial sunbeams—by moving the experimental hive into the shade and using a mirror to make sunbeams arrive from the wrong direction—and discovered that by doing this, he could disorient the bees. Yet artificial light sources, such as a powerful flashlight beam, would not disorient them.

      At this point, in the early 1940’s, von Frisch’s work could have taken two directions: he could have explored the mystery of circadian rhythms, the mechanics of the innate sense of time—for the bees would necessarily have to be aware of the regular movement of the sun in order for them to utilize this movement as a navigational aid—or in conformity with the principle of Occam’s razor, he could assume the simpler explanation, that no complex time sense need exist innately within the bee, that merely some optical property of sunlight made the position of the sun visible to the bee when it was shrouded from human eyes by cloud cover.

      Von Frisch chose the latter course. Optics had entranced him from his youth; he had begun working with honeybees in the first place to disprove von Hess's contention that they were color-blind. His thirty-year involvement with breaking the code of their dance was a kind of side trip from his major goal. Perhaps he was happy to find a new optical problem to solve. At any rate, he writes: “Light rays coming directly from the sun consist of vibrations that occur in all directions perpendicular to the line along which sunlight travels. But the light of the blue sky has not reached us directly from the sun; it has first been scattered from particles in the atmosphere. This diffuse, scattered light coming from the sky is partially polarized, by which we mean that more of it is vibrating in one direction than in others.”


Daphnia

[This excerpt is from chapter 1 of The Parable of the Beast by John N. Bleibtreu.]

      …A very large number of organisms spend the larger part of their lives in the haploid phase—algae and fungi are examples. However, it must not be construed that the haploid phase of life—even human life—is sterile, incapable of producing diploid cells, of forming a fetus. This phenomenon, the production of diploid from haploid cells, is known as parthenogenesis and takes place quite routinely and normally in several animals. Perhaps the best known is Daphnia, a fresh-water crustacean commonly found in seasonal puddles of water in temperate-zone meadows, marshes, and other similar places. These animals survive the winter in the form of eggs encased in hard leathery armor. Alternate freezing and thawing in the spring cracks these egg cases and releases the tiny shrimplike animals, which are all females. They produce young, other females, and the colony thrives and increases during the warm summer months without any need of male intervention. The transition, or metamorphosis, from haploid to diploid is completely asexual. The haploid phase is therefore in no way incomplete. It is fully capable of producing adult forms with a total complement of structural and behavioral traits.

      It is theoretically possible for human females to be similarly produced by human females, since all eggs, human as well as Daphnia eggs, produce, during the process of maturation, another genetically complete but much diminished copy of themselves known as the polar body. In mammals this polar body is normally expendable; it is thrown off by the egg and becomes absorbed by the tissues of the ovary. In Daphnia, however, this polar body is reintegrated into the egg, playing the role of sperm and supplying the missing genetic materials. As fall approaches and the Daphnia colony nears the end of its active locomotor phase of life, males appear. The present hypothesis holds that the shortened span of daylight causes females to produce them, though their production can be induced by chemical changes in the water as well as alterations of light. So far as is known, the sole function of males is to produce the hard leathery egg cases which only appear as a result of heterosexual copulation. The eggs produced without male intervention are soft-shelled and defenseless.

Behaviorism

[This excerpt is from chapter 1 of The Parable of the Beast by John N. Bleibtreu.]

      A great deal of valuable information was obtained from the Watson and the Pavlov studies, but since they were motivated exclusively by pragmatic considerations, they contained their own inbuilt limitations of application. The frightening social and political consequences of communities based on these models of behavior were lampooned by Aldous Huxley in Brave New World, and by George Orwell in 1984.

      In Europe animal behavior studies took an entirely different direction. They were conducted by zoologists, not psychologists. The emphasis of post-Darwinian zoology has been on evolution, on answering the very simple, basic question: How did living things get to be the way they are now? As to how forms acquired their present appearance, part of the answer is hopefully to be found in the fossil record. At least the sequence of the appearance of transitional forms is to be found there. But behavior cannot be fossilized. It can only be inferred: from the shape of an animal’s teeth, for example, one can deduce whether it was herbivorous or carnivorous and, from that, deduce, in general, its style of life—whether it was a grazer, browser, or hunter. But the hard answers to these questions come from witnessing the acts of life themselves, not from circumstantial evidence, which is often misleading.

Cattle Tick

[These excerpts are from chapter 1 of The Parable of the Beast by John N. Bleibtreu.]

      The cattle tick is a small, flat-bodied, blood-sucking arachnid with a curious life history. It emerges from the egg not yet fully developed, lacking a pair of legs and sex organs. In this state it is still capable of attacking cold-blooded animals such as frogs and lizards, which it does. After shedding its skin several times, it acquires its missing organs, mates, and is then prepared to attack warm-blooded animals.

      The eyeless female is directed to the tip of a twig on a bush by her photosensitive skin, and there she stays through darkness and light, through fair weather and foul, waiting for the moment that will fulfill her existence. In the Zoological Institute at Rostock, prior to World War I, ticks were kept on the ends of twigs, waiting for this moment, for a period of eighteen years. The metabolism of the creature is sluggish to the point of being suspended entirely. The sperm she received in the act of mating remains bundled into capsules where it, too, waits in suspension until mammalian blood reaches the stomach of the tick, at which time the capsules break, the sperm are released, and they fertilize the eggs which have been reposing in the ovary, also waiting in a kind of time suspension.

      The signal for which the tick waits is the scent of butyric acid, a substance present in the sweat of all mammals. This is the only experience that will trigger time into existence for the tick.

      The tick represents, in the conduct of its life, a kind of apotheosis of subjective time perception. For a period as long as eighteen years nothing happens. The period passes as a single moment; but at any moment within this span of literally senseless existence, when the animal becomes aware of the scent of butyric acid, it is thrust into a perception of time, and other signals are suddenly perceived.

      The animal then hurls itself in the direction of the scent. The object on which the tick lands at the end of this leap must be warm; a delicate sense of temperature is suddenly mobilized and so informs the creature. If the object is not warm, the tick will drop off and reclimb its perch. If it is warm, the tick burrows its head deeply into the skin and slowly pumps itself full of blood. Experiments made at Rostock with membranes filled with fluids other than blood proved that the tick lacks all sense of taste, and once the membrane is perforated the animal will drink any fluid, provided it is of the right temperature.

      The extraordinary preparedness of this creature for that moment of time during which it will re-enact the purpose of its life contrasts strikingly with the probability that this moment will ever occur. There are doubtless many bushes on which ticks perch which are never passed by a mammal within range of the tick’s leap. Like most animals, the tick lives in an absurdly unfavorable world—at least so it would appear to the compassionate human observer. But this world is merely the environment of the animal. The world it perceives—which experimenters at Rostock called its umwelt, its perceptual world—is not at all unfavorable. A period of eighteen years, as measured objectively by the circuit of the earth around the sun, is meaningless to the tick. During this period, it is apparently unaware of temperature changes. Being blind, it does not see the leaves shrivel and fall and then renew themselves on the bush where it is affixed. Unaware of time, it is also unaware of space, and the multitudes of forms and colors which appear in space. It waits, suspended in duration for its particular moment of time, a moment distinguished by being filled with a single, unique experience: the scent of butyric acid.

      Though we consider ourselves far removed as humans from such a lowly creature as this, we too are both aware and unaware of elements which comprise our environment. We are more aware than the tick of the passage of time. We are subjectively aware of the aging process; we know that we grow older, that time is shortened by each passing moment. For the tick, however, this moment that precedes its burst of volitional activity, the moment when it scents butyric acid and is thrust into purposeful movement, is close to the end of time. When it fills itself with blood, it drops from its host, lays its eggs, and dies….

      The man who coined the term umwelt, who examined the perceptual system of the cattle tick, and who is considered by many to be the father of ethology, was an eccentric Baltic baron named Jakob Johann von Uexküll.

A Cirrus Cloud Climate Dial?

[These excerpts are from an article by Ulrike Lohmann and Blaz Gasparini in the July 21, 2017, issue of Science.]

      Climate engineering is a potential means to offset the climate warming caused by anthropogenic greenhouse gases. Suggested methods broadly fall into two categories. Methods in the first category aim to remove carbon dioxide (CO2) from the atmosphere, whereas those in the second aim to alter Earth’s radiation balance. The most prominent and best researched climate engineering approach in the second category is the injection of atmospheric aerosol particles or their precursor gases into the stratosphere, where these particles reflect solar radiation back to space. Climate engineering through cirrus cloud thinning, in contrast, mainly targets the long-wave radiation that is emitted from Earth.

      Wispy, thin, and often hardly visible to the human eye, cirrus clouds do not reflect a lot of solar radiation back to space. Because they form at high altitudes and cold temperatures, cirrus clouds emit less long-wave radiation to space than does a cloud-free atmosphere. The climate impact of cirrus clouds is therefore similar to that of greenhouse gases. Their long-wave warming (greenhouse) effect prevails over their reflected solar radiation (cooling) effect….

      If cirrus thinning works, it should be preferred over methods that target changes in solar radiation, such as stratospheric aerosol injections, because cirrus thinning would counteract greenhouse gas warming more directly. Solar radiation management methods cannot simultaneously restore temperature and precipitation at present-day levels but lead to a reduction in global mean precipitation because of the decreased solar radiation at the surface. This adverse effect on precipitation is minimized for cirrus seeding because of the smaller change in solar radiation.

Sulfur Injections for a Cooler Planet

[These excerpts are from an article by Ulrike Niemeier and Simone Tilmes in the July 21, 2017, issue of Science.]

      Achieving the Paris Agreement’s aim to limit the global temperature increase to at most 2°C above preindustrial levels will require rapid, substantial greenhouse gas emission reductions together with large-scale use of “negative emission” strategies for capturing carbon dioxide (CO2) from the air. It remains unclear, however, how or indeed whether large net-negative emissions can be achieved, and neither technology nor sufficient storage capacity for captured carbon is available. Limited commitment to sufficient mitigation efforts and the uncertainty related to net-negative emissions have intensified calls for options that may help to reduce the worst climate effects. One suggested approach is the artificial reduction of sunlight reaching Earth's surface by increasing the reflectivity of Earth's surface or atmosphere.

      Research in this area gained traction after Crutzen called for investigating the effects of continuous sulfur injections into the stratosphere—or stratospheric aerosol modification (SAM)—as one method to deliberately mitigate anthropogenic global warming. The effect is analogous to the observed lowering of temperatures after large volcanic eruptions. SAM could be seen as a last-resort option to reduce the severity of climate change effects such as heat waves, floods, droughts, and sea level rise. Another possibility could be the seeding of ice clouds—an artificial enhancement of terrestrial radiation leaving the atmosphere—to reduce climate warming.

      SAM technologies are presently not developed. Scientists are merely beginning to grasp the potential risks and benefits of these kinds of interventions….However, different models consistently identify side effects; for example, the reduction of incoming solar radiation at Earth’s surface reduces evaporation, which in turn reduces precipitation. This slowing of the hydrological cycle affects water availability, mostly in the tropics, and reduces monsoon precipitation….

      Currently, a single person, company, or state may be able to deploy SAM without in-depth assessments of the risks, potentially causing global impacts that could rapidly lead to conflict. As such, it is essential that international agreements be reached to regulate whether and how SAM should be implemented. A liability regime would rapidly become essential to resolve conflicts, especially because existing international liability rules do not provide equitable and effective compensation for potential SAM damage. Such complexities will require the establishment of international governance of climate intervention, overseeing research with frequent assessments of benefits and side effects.

      Climate intervention should only be seen as a supplement and not a replacement for greenhouse gas mitigation and decarbonization efforts because the necessary level and application time of SAM would continuously grow with the need for more cooling to counteract increasing greenhouse gas concentrations. A sudden disruption of SAM would cause an extremely fast increase in global temperature. Also, SAM does not ameliorate major consequences of the CO2 increase in the atmosphere, such as ocean acidification, which would continue to worsen.

Golding on Education

[These excerpts are from On the Crest of the Wave by William Golding.]

      But what has happened to the woman [Education]?...her face is fretted with the lines of worry and exasperation. The hand across the shoulders holds a pair of scales so that the children can see how this thing in this pan weighs more than the other one. A pair of dividers hangs from her little finger so that they can check that this thing is longer than that. Her right hand still points into the dawn, but the little girl is yawning; and the boy is looking at his feet. For the worry and exasperation in Education’s face are because she has learnt something herself: that the supply of teachable material is as limited as the supply of people who can teach; that neither can be manufactured or bought anywhere; and lastly, most important of all — thought that turns her exasperation into panic — she is pointing the children in one direction and being moved, herself, in another.

      …Education still points to the glorious dawn, officially at any rate, but has been brought to see, in a down-to-earth manner, that what we really want is technicians and civil servants and soldiers and airmen and that only she can supply them. She still calls what she is doing “education” because it is proper, a dignified word — but she should call it “training”, as with dogs. In the Wellsian concept, the phrase she had at the back of her mind was “Natural Philosophy”; but the overtones were too vast, too remote, too useless on the national scale, emphatically on the side of “knowing” rather than “doing”.

      Now I suppose that I had better admit that all this is about “Science” in quotation marks, and I do so with fear and trembling. For to attack “Science” is to be labelled reactionary; and to applaud it, the way to an easy popularity. “Science” has become a band-waggon on which have jumped parsons and television stars, novelists and politicians, every public man, in fact, who wants an easy hand-up, all men who are either incapable of thought or too selfishly careerist to use it; so that the man in the street is persuaded by persistent half-truths that “Science” is the most important thing in the world, and Education has been half-persuaded, too. But it cannot be said often enough or loudly enough that “Science” is not the most important thing. Philosophy is more important than “Science”; so is history; so is courtesy, come to that, so is aesthetic perception. I say nothing of religion, since it is outside the scope of this article; but on the national scale, we have come to pursue naked, inglorious power when we thought we were going to pursue Natural Philosophy.

      The result of this on the emotional climate is perceptible already. Mind, I have no statistical evidence to present; unless convictions that have grown out of experience may be called unconsciously statistical. I recognize that everything I say may be nothing more than an approximation to the truth, because the truth itself is so qualified on every side, so slippery that you have to grab at it as, and how, you can. But we are on the crest of the wave and can see a little way forward.

      It is possible when writing a boy’s report to admit that he is not perfect. This has always been possible; but today there is a subtle change and the emphasis is different. You can remark on his carelessness; you can note regretfully his tendency to bully anyone smaller. You can even suggest that the occasions when he removed odd coins from coats hung in the changing room are pointers to a deep unhappiness. Was he perhaps neglected round about the change of puberty? Does he not need a different father-figure; should he not, therefore, try a change of psychiatrist? You can say all this; because we all live in the faith that there is some machine, some expertise that will make an artificial silk purse out of a sow’s ear. But there is one thing you must not say because it will be taken as an irremediable insult to the boy and to his parents. You must not say he is unintelligent. Say that and the parents will be after you like a guided missile. They know that intelligence cannot be bought or created. They know, too, it is the way to the good life, the shaming thing, that we pursue without admitting it, the naked power, the prestige, the two cars and the air travel. Education, pointing still, is nevertheless moving their way; to the world where it is better to be envied than ignored, better to be well-paid than happy, better to be successful than good—better to be vile than vile-esteemed.

      I must be careful. But it seems to me that an obvious truth is being neglected. Our humanity, our capacity for living together in a full and fruitful life, does not reside in knowing things for the sake of knowing them or even in the power to exploit our surroundings. At best these are hobbies and toys — adult toys, and I for one would not be without them. Our humanity rests in the capacity to make value judgments, unscientific assessments, the power to decide that this is right, that wrong, this ugly, that beautiful, this just, that unjust. Yet these are precisely the questions which “Science” is not qualified to answer with its measurement and analysis. They can be answered only by the methods of philosophy and the arts. We are confusing the immense power which the scientific method gives us with the all-important power to make the value judgments which are the purpose of human education.

      The pendulum has swung too far. There was a time in education — and I can just remember it — when science fought for its life, bravely and devotedly. Those were the days when any fool who had had Homer beaten into his arse was thought better educated than a bright and inquiring Natural Philosopher. But now the educational world is full of spectral shapes, bowing acknowledgments to religious instruction and literature but keeping an eye on the laboratory where is respect, jam tomorrow, power. The arts are becoming the poor relations. For the arts cannot cure a disease or increase production or ensure defence. They can only cure or ameliorate sicknesses so deeply seated that we begin to think of them in our new wealth as built-in: boredom and satiety, selfishness and fear. The vague picture of the future which our political parties deduce from us as a desirable thing is limitless prosperity, health to enable us to live out a dozen television charters, more of everything; and dutifully they shape our education so that the picture can be painted in.

      The side effect is to enlarge the importance of measurement and diminish the capacity to make value judgments. This is not deliberate and will be denied anyway. But when the centre of gravity is shifted away from the social virtues and the general refining and developing of human capacities, because that is not what we genuinely aspire to, no amount of lip-service and face-saving can alter the fact that the change is taking place. Where our treasure is, there are our hearts also.

Unlocking a Key to Maize’s Amazing Success

[These excerpts are from an article by Elizabeth Pennisi in the July 21, 2017, issue of Science.]

      Munching fresh corn on the cob is a summer tradition for many Americans, but they’re far from alone in their love of maize. This staple grows on every continent save Antarctica and provides food and biofuels for millions of people. Now, researchers studying ancient and modern maize have found a clue to its popularity over the millennia: maize’s easily adjustable flowering time, which enabled ancient peoples to get the plant to thrive in diverse climates, according to studies presented this month at the meeting of the Society for Molecular Biology and Evolution here. The studies found hints of the genomic shifts behind such rapid change and began to clarify when maize flowering came under farmer control….

      Ancient farmers in Mexico began cultivating maize's wild ancestor, teosinte, about 9000 years ago, but researchers have long wondered why this plant initially became so popular given that teosinte lacks some of maize’s most desirable qualities, such as an easily harvestable large cob of kernels.

      Wondering whether an adaptable flowering time explained some of the appeal, Tenaillon about 20 years ago began growing two standard maize strains and selecting only the earliest and latest bloomers for re-planting the following year. The tiny female flowers enshrouded by a leaf produce the silk—long, pollen-catching strands—whereas male flowers make up the corn “tassel.” After just 13 generations, early- and late-flowering individuals in each strain had developed a 3-week difference in timing, she reported at the meeting. Both types of plants grew at about the same rate and reached about the same height, indicating the early bloomers simply followed a sped-up program for when to flower. Thus ancient farmers may have been able to more quickly breed crops, an advantage in colder places with shorter growing seasons.

How to Govern Geoengineering?

[These excerpts are from an editorial by Janos Pasztor, Cynthia Scharf and Kai-Uwe Schmidt in the July 21, 2017, issue of Science.]

      The Paris Agreement aims to limit the global temperature rise to 1.5° to 2°C above preindustrial temperature, but achieving this goal requires much higher levels of mitigation than currently planned. This challenge has focused greater attention on climate geoengineering approaches, which intentionally alter Earth’s climate system, as part of an overall response starting with radical mitigation. Yet it remains unclear how to govern research on, and potential deployment of, geoengineering technologies.

      There are two main types of geoengineering: carbon dioxide removal (CDR) from the atmosphere and solar radiation management (SRM) to cool the planet. Geoengineering does not obviate the need for radical reductions in greenhouse gas (GHG) emissions to zero, combined with adaptation to inevitable climate impacts. However, some scientists say that geoengineering could delay or reduce the overshoot. In so doing, we may expose the world to other serious risks, known and unknown.

      Since 2009, the U.K. Royal Society, the European Union, and the U.S. National Academy of Sciences have recognized the need for governance and for a strategic approach to climate geoengineering policies. However, national governments and intergovernmental actors have thus far largely ignored their recommendations….

      The world is heading to an increasingly risky future and is unprepared to address the institutional and governance challenges posed by these technologies. Geoengineering has planet-wide consequences and must therefore be discussed by national governments within intergovernmental institutions, including the United Nations. The research community has been addressing many of these issues, but the global policy community and the public largely have not. It is time to do so.

Trump’s Science Shop is Small and Waiting for Leadership

[These excerpts are from an article by Jeffrey Mervis in the July 14, 2017, issue of Science.]

      The 1976 law that created the White House Office of Science and Technology Policy (OSTP) lets presidents tailor the office to fit their priorities. Under former President Barack Obama, OSTP grew to a record size and played a role in all the administration's numerous science and technology initiatives. In contrast, President Donald Trump has all but ignored OSTP during his first 6 months in office, keeping it small and excluding it from even a cursory role in formulating science-related policies and spending plans.

      OSTP is not alone across the government in awaiting a new crop of key managers. But such leadership voids can be paralyzing for a small shop. Trump has yet to nominate an OSTP director, who traditionally also serves as the president's science adviser. Nor has he announced his choices for as many as four other senior OSTP officials who would need to be confirmed by the Senate. An administration official, however, told Science that OSTP has reshuffled its work flow—and that there’s a short list for the director’s position….

      Although much about OSTP’s future remains uncertain, Trump has renewed the charter of the National Science and Technology Council, a multiagency group that carries out much of the day-to-day work of advancing the president's science initiatives. He has also retained three offices that oversee the government’s multibillion-dollar efforts in nanotechnology, information technology research, and climate change. Still pending is the status of the President’s Council of Advisors on Science and Technology, a body of eminent scientists and high-tech industry leaders that went out of business at the end of the Obama administration.

Can We Beat Influenza?

[These excerpts are from an editorial by Wenqing Zhang and Robert G. Webster in the July 14, 2017, issue of Science.]

      …Currently, only two influenza A viruses and two influenza B clades are circulating and causing disease in humans, but 16 additional subtypes of influenza A viruses are circulating in nature (14 in birds and two in bats). Of the latter, six occasionally infect humans, providing an ever-looming pandemic threat. However, there is still a lack of fundamental knowledge to predict if and when a particular viral subtype will acquire pandemic ability. We therefore still fail to predict influenza pandemics, and this must change.

      …Problems with sharing influenza samples climaxed in 2007 but then began to be addressed in 2011 following the adoption of the WHO’s Pandemic Influenza Preparedness Framework, which places virus sharing and access to benefits on an equal footing. Fair and equitable sharing of benefits arising from the use of genetic resources under the Nagoya protocol should promote further pathogen sharing in a broader context….

      …Vaccine production has not changed much in decades; it remains a lengthy egg-based process. Furthermore, vaccine efficacy, especially in the elderly, is unsatisfactory and requires annual updates. Universal vaccines that protect against all influenza subtypes are being researched and hold promise for future infection control. Antiviral agents used to treat influenza are limited to antineuraminidase drugs, but polymerase-targeting drugs are in development, suggesting the possibility of future multidrug therapies.

      The approaching 100-year anniversary of the 1918 Spanish influenza pandemic, considered one of the greatest public health crises in history, reminds us that influenza has the potential to cause catastrophic disease at any time….As knowledge is gained and technology improves, so will our pandemic predictive ability and response capacity, but everything depends on rapid global sharing of both viruses and genomic information.

Copernicus

[This excerpt is from Copernicus by William Golding.]

      The book — Concerning the Revolutions of the Celestial Bodies — contains two elements which it is necessary to understand. One, of course, was a detailed mathematical examination of his own system and an attempt to square it with the results of observation. This was beyond the comprehension of all but a few dozen mathematicians of his time. The system did not make prediction easier; it simply happened to be nearer the truth, and if you were attuned to the aesthetic of mathematics, you might feel that. But in the other part of his book, Copernicus made a profound tactical mistake. He tried to ignore mathematics altogether and to find some ground on which uninstructed common sense might operate. In a word, he tried to meet the arguments that might be brought against him by an appeal to common sense. In so doing, he only made it easier for the average non-astronomical, non-mathematical reader to counter him at every point. For in the days of naked-eye observation, the Ptolemaic universe was the common-sense one.

      Here is an example: “It is the vault of heaven that contains all things, and why should motion not be attributed rather to the contained than the container, to the located than the locator? The latter view was certainly that of Heraclides and Ecphantus the Pythagorean, and Hicetas of Syracuse according to Cicero.”

      This gave infinite scope to those who would play at quotations. In short, his attempt to appeal to common sense was an abject failure, because no man who could not feel the aesthetic of mathematics in his bones could possibly avoid the conclusion that Copernicus was a theorist with straw in his hair.

      So when the book came out, in 1543, it gathered to itself a series of uninformed criticisms which are difficult to parallel because, in a way, the event was unparalleled.

      Thus a political thinker of the sixteenth century, Jean Bodin, said: “No one in his senses, or imbued with the slightest knowledge of physics, will ever think that the earth, heavy and unwieldy from its own weight and mass, staggers up and down around its own center and that of the sun; for at the slightest jar of the earth, we would see cities and fortresses, towns and mountains, thrown down.”

      Now this is very sensible, granted the kind of universe Bodin, and indeed Copernicus, inhabited. All Bodin needed was a lifetime’s study of mathematics.

      Martin Luther, who was used to laying down the law, dealt with Copernicus summarily in terms of Holy Writ: “This fool wishes to reverse the entire science of astronomy; but sacred Scripture tells us that Joshua commanded the sun to stand still and not the earth.”

      That is a clincher. The mind reels to think of the laborious history of other discoveries and other reassessments one would have to unravel for Martin Luther before he could have been brought to doubt the evidence of Holy Writ and his own senses.

      Copernicus got nowhere once he stepped outside the world of mathematical demonstration. Only six years after Copernicus died, Melanchthon summed up the average man’s reaction: “Now it is a want of honesty and decency to assert such notions publicly, and the example is pernicious. It is part of a good mind to accept the truth as revealed by God and to acquiesce in it.”

      The truth was that man's universe was about to turn inside out and explode. No man could introduce such an idea and be believed. Copernicus, in his appeal to common sense — it was really an appeal to intuition — had attempted the impossible.

      The history of his idea is revealing. Again it was a matter of teamwork. His system never worked exactly. Tycho Brahe, who came after him, was an anti-Copernican, but he knew the value of accurate observations and spent his life compiling them. Then he produced a cosmology of his own, with the earth still at the center. Kepler, his successor, returned to the ideas of Copernicus, and at last, with Brahe's accurate observations, made them work. He got rid of the last hangover from the ancient system, the idea that movement in a circle is perfect and therefore the one movement admissible in the heavens. He went from the circle to the ellipse, and in his work we find a model of the solar system which, with minor alterations, has been accepted ever since.

      What was Copernicus, then? In him, the ancient and the modern meet. He inherited the work of three thousand years, and he pointed the way toward Newton and Einstein. He was, as it were, a man who lived beside a great river and built the first bridge across it.

Men’s Attire

[This excerpt is from Crosses by William Golding.]

      Who wants to see the slightly adjusted outline of an old man whose figure is beginning to ‘go’ like ice cream left in the sun? Society should be more charitable. Once men have started their decline and begun to gather their parcels of fat, they should be allowed to hide themselves under what clothing they choose. Does anyone really believe that decrepitude can be charmed out of sight or obliterated by a tailor’s sleight of hand?

      It was not always so. Once, a degree of fantasy was allowable, and it gave old men the chance to preserve a proper dignity for themselves. Male clothes were as varied then as the ones women wear now. You could hide your baldness with velvet or silk or feathers or laurels. You could wear jewels in your beard. You could have shoulders of ermine, and below them a glistening cataract which gave massiveness to your body and hid your reedy legs. You could clothe yourself in the elegance and severity of a toga, so that as you walked, your feet were moving in the folds and flounces of a full skirt. For the young, there was a brave show of leg and thigh, rather than a dun-colored drainpipe. There was the codpiece to proclaim and if possible exaggerate your virility. The slimness of your waist was accentuated by a belt, where hung a rapier which was little but a variation on the codpiece. You could expose your chest if you had hair on it. You could wear your hair in long curls, and finish off with a jaunty velvet cap and feather perched over one ear.

      The historian will probably object that you stank. But so did everyone else. And you could always wear perfume, an aid which has been defined as ‘one stink covering another’. Young men must have thought a good deal of themselves in those days; and an old one moved among them with all the dignity of a battle-ship among destroyers. But today we all wear the same uniform, the same livery of servitude to convention. Youth has not its grace, nor age its privilege. An old man exhibits his infirmities in the same clothes that do nothing but hide the graces of his grandson. We have lost both ways.

      …Until man is free of this drab convention and can dress as he likes, and habitually does so dress from one end of life to the other, we shall continue to button and zip and strap ourselves into a structure not much more becoming than a concrete wall, and about as comfortable.

Leonidas's Gift

[This excerpt is from The Hot Gates by William Golding.]

      To most of the Persian army, this must have meant nothing. There had been, after all, nothing but a small column of dust hanging under the cliffs in one corner of the plain. If you were a Persian, you could not know that this example would lead, next year, to the defeat and destruction of your whole army at the battle of Plataea, where the cities of Greece fought side by side. Neither you nor Leonidas nor anyone else could foresee that here thirty years’ time was won for shining Athens and all Greece and all humanity.

      The column of dust diminished. The King of Kings gave an order. The huge army shrugged itself upright and began the march forward into the Hot Gates, where the last of the Spartans were still fighting with nails and feet and teeth.

      I came to myself in a great stillness, to find I was standing by the little mound. This is the mound of Leonidas, with its dust and rank grass, its flowers and lizards, its stones, scruffy laurels and hot gusts of wind. I knew now that something real happened here. It is not just that the human spirit reacts directly and beyond all argument to a story of sacrifice and courage, as a wine glass must vibrate to the sound of the violin. It is also because, way back and at the hundredth remove, that company stood in the right line of history. A little of Leonidas lies in the fact that I can go where I like and write what I like. He contributed to set us free.

Successful Evolution

[This excerpt is from chapter 5 of Full House by Stephen Jay Gould.]

      Many classic “trends” of evolution are stories of such unsuccessful groups—trees pruned to single twigs, then falsely viewed as culminations rather than lingering vestiges of former robustness. We cannot grasp the irony of life's little joke until we recognize the primacy of variation within complete systems, and the derivative nature of abstractions or exemplars chosen to represent this varied totality. The full evolutionary bush of horses is a complete system; the steamrollered “line” from Hyracotherium to Equus is one labyrinthine path among many, with no special claim beyond the fortuity of a barely continued existence.

      These conceptual errors have plagued the interpretation of horses, and the more general evolutionary messages supposedly conveyed by them, from the very beginning. Huxley himself, in the printed version of his capitulation to Marsh’s interpretation of horses as an American tale, used the supposed ladder of horses as a formal model for all vertebrates. For example, he denigrated the teleosts (modern bony fishes) as dead ends without issue: “They appear to me to be off the main line of evolution—to represent, as it were, side tracks starting from certain points of that line.” But teleosts are the most successful of all vertebrate groups. Nearly 50 percent of all vertebrate species are teleosts. They stock the world’s oceans, lakes, and rivers, and include nearly one hundred times as many species as primates (and about five times more than all mammals combined). How can we call them “off the main line” just because we can trace our own pathway back to common ancestry with theirs, more than 300 million years ago?

Fusion

[This excerpt is from chapter 24 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      It was not until 1929 that another idea was put forth: the notion that, given the prodigious temperatures and pressures of a star's interior, atoms of light elements might fuse together to form heavier atoms—that atoms of hydrogen, as a start, could fuse to form helium; that the source of cosmic energy, in a word, was thermonuclear. Huge amounts of energy had to be pumped into light nuclei to make them fuse together, but once fusion was achieved, even more energy would be given out. This would in turn heat up and fuse other light nuclei, producing yet more energy, and this would keep the thermonuclear reaction going. The inside of the sun reaches enormous temperatures, something on the order of twenty million degrees. I found it difficult to imagine a temperature like this—a stove at this temperature (George Gamow wrote in The Birth and Death of the Sun) would destroy everything around it for hundreds of miles.

      At temperatures and pressures like this, atomic nuclei—naked, stripped of their electrons—would be rushing around at tremendous speed (the average energy of their thermal motion would be similar to that of alpha particles) and continually crashing, uncushioned, into one another, fusing to form the nuclei of heavier elements.

      We must imagine the interior of the Sun [Gamow wrote] as some gigantic kind of natural alchemical laboratory where the transformation of various elements into one another takes place almost as easily as do the ordinary chemical reactions in our terrestrial laboratories.

      Converting hydrogen to helium produced a vast amount of heat and light, for the mass of the helium atom was slightly less than that of four hydrogen atoms—and this small difference in mass was totally transformed into energy, in accordance with Einstein’s famous E = mc². To produce the energy generated in the sun, hundreds of millions of tons of hydrogen had to be converted to helium each second, but the sun is composed predominantly of hydrogen, and so vast is its mass that only a small fraction of it has been consumed in the earth’s lifetime. If the rate of fusion were to decline, then the sun would contract and heat up, restoring the rate of fusion; if the rate of fusion were to become too great, the sun would expand and cool down, slowing it. Thus, as Gamow put it, the sun represented “the most ingenious, and perhaps the only possible, type of ‘nuclear machine,’” a self-regulating furnace in which the explosive force of nuclear fusion was perfectly balanced by the force of gravitation. The fusion of hydrogen to helium not only provided a vast amount of energy, but also created a new element in the world. And helium atoms, given enough heat, could be fused to make heavier elements, and these elements, in turn, to make heavier elements still.
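
      The mass-defect arithmetic Sacks describes is easy to check. Here is a minimal Python sketch; the atomic masses and constants are standard textbook values, not figures taken from the excerpt.

    # Energy released when four hydrogen atoms fuse into one helium
    # atom, from the mass defect and E = mc^2. Illustrative sketch.
    m_H = 1.00783        # mass of a hydrogen atom, atomic mass units (u)
    m_He = 4.00260       # mass of a helium atom, u
    u = 1.66054e-27      # kilograms per atomic mass unit
    c = 2.998e8          # speed of light, m/s

    defect = 4 * m_H - m_He            # mass lost in the fusion, u
    fraction = defect / (4 * m_H)      # as a share of the input mass
    energy = defect * u * c ** 2       # joules per helium atom formed

    print(f"mass defect: {defect:.5f} u ({fraction:.2%} of the input)")
    print(f"energy released: {energy:.2e} J per helium atom")

      Only about 0.7 percent of the mass disappears, which is why, as the excerpt says, hundreds of millions of tons of hydrogen must be consumed each second to power the sun.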

      Thus, by a thrilling convergence, two ancient problems were solved at the same time: the shining of stars, and the creation of the elements. Bohr had imagined an aufbau, a building up of all the elements starting from hydrogen, as a purely theoretical construct—but such an aufbau was realized in the stars. Hydrogen, element 1, was not only the fuel of the universe, it was the ultimate building block of the universe, the primordial atom, as Prout had thought back in 1815. This seemed very elegant, very satisfying, that all one needed to start with was the first, the simplest of atoms.

Niels Bohr

[This excerpt is from chapter 24 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      It was Niels Bohr, also working in Rutherford’s lab in 1913, who bridged the impossible, by bringing together Rutherford’s atomic model with Planck’s quantum theory. The notion that energy was absorbed or emitted not continuously but in discrete packets, “quanta,” had lain silently, like a time bomb, since Planck had suggested it in 1900. Einstein had made use of the idea in relation to photoelectric effects, but otherwise quantum theory and its revolutionary potential had been strangely neglected, until Bohr seized on it to bypass the impossibilities of the Rutherford atom. The classical view, the solar-system model, would allow electrons an infinity of orbits, all unstable, all crashing into the nucleus. Bohr postulated, by contrast, an atom that had a limited number of discrete orbits, each with a specific energy level or quantal state. The least energetic of these, the closest to the nucleus, Bohr called the “ground state”—an electron could stay here, orbiting the nucleus, without emitting or losing any energy, forever. This was a postulate of startling, outrageous audacity, implying as it did that the classical theory of electromagnetism might be inapplicable in the minute realm of the atom.

      There was, at the time, no evidence for this; it was a pure leap of inspiration, imagination—not unlike the leaps he now posited for the electrons themselves, as they jumped, without warning or intermediates, from one energy level to another. For, in addition to the electron's ground state, Bohr postulated, there were higher-energy orbits, higher-energy “stationary states,” to which electrons might be briefly translocated. Thus if energy of the right frequency was absorbed by an atom, an electron could move from its ground state into a higher-energy orbit, though sooner or later it would drop back to its original ground state, emitting energy of exactly the same frequency as it had absorbed—this is what happened in fluorescence or phosphorescence, and it explained the identity of spectral emission and absorption lines, which had been a mystery for more than fifty years.

      Atoms, in Bohr’s vision, could not absorb or emit energy except by these quantum jumps—and the discrete lines of their spectra were simply the expression of the transitions between their stationary states. The increments between energy levels decreased with distance from the nucleus, and these intervals, Bohr calculated, corresponded exactly to the lines in the spectrum of hydrogen (and to Balmer’s formula for these). This coincidence of theory and reality was Bohr’s first great triumph. Einstein felt that Bohr’s work was “an enormous achievement,” and, looking back thirty-five years later, he wrote, “[it] appears to me as a miracle even today. . . . This is the highest form of musicality in the sphere of thought.” The spectrum of hydrogen—spectra in general—had been as beautiful and meaningless as the markings on butterflies’ wings, Bohr remarked; but now one could see that they reflected the energy states within the atom, the quantal orbits in which the electrons spun and sang. “The language of spectra,” wrote the great spectroscopist Arnold Sommerfeld, “has been revealed as an atomic music of the spheres.”
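
      The correspondence Sacks describes, between Bohr's energy levels and Balmer's formula, can be reproduced in a few lines. A sketch: the Rydberg formula below is standard physics, not something quoted in the excerpt.

    # Visible (Balmer) lines of hydrogen: transitions from n = 3..6
    # down to n = 2, via 1/wavelength = R * (1/2^2 - 1/n^2).
    R = 1.0968e7  # Rydberg constant for hydrogen, 1/m

    for n in range(3, 7):
        wavelength = 1 / (R * (1 / 2 ** 2 - 1 / n ** 2))
        print(f"n = {n} -> 2: {wavelength * 1e9:.1f} nm")
    # Prints roughly 656, 486, 434, and 410 nm -- the lines Balmer
    # had fitted empirically decades before Bohr explained them.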

      Could quantum theory be extended to more complex, multi-electron atoms? Could it explain their chemical properties, explain the periodic table? This became Bohr’s focus as scientific life resumed after the First World War.

      As one moved up in atomic number, as the nuclear charge or number of protons in the nucleus increased, an equal number of electrons had to be added to preserve the neutrality of the atom. But the addition of these electrons to an atom, Bohr envisaged, was hierarchical and orderly. While he had concerned himself at first with the potential orbits of hydrogen’s lone electron, he now extended his notion to a hierarchy of orbits or shells for all the elements. These shells, he proposed, had definite and discrete energy levels of their own, so that if electrons were added one by one, they would first occupy the lowest-energy orbit available, and when that was full, the next-lowest orbit, then the next, and so on. Bohr’s shells corresponded to Mendeleev’s periods, so that the first, innermost shell, like Mendeleev's first period, accommodated two elements, and two only. Once this shell was completed, with its two electrons, a second shell began, and this, like Mendeleev's second period, could accommodate eight electrons and no more. Similarly for the third period or shell. By such a building-up, or aufbau, Bohr felt, all the elements could be systematically constructed, and would naturally fall into their proper places in the periodic table.

      Thus the position of each element in the periodic table represented the number of electrons in its atoms, and each element’s reactivity and bonding could now be seen in electronic terms, in accordance with the filling of the outermost shell of electrons, the so-called valency electrons. The inert gases each had completed outer valency shells with a full complement of eight electrons, and this made them virtually unreactive. The alkali metals, in Group I, had only one electron in their outermost shell, and were intensely avid to get rid of this, to attain the stability of an inert-gas configuration; the halogens in Group VII, conversely, with seven electrons in their valency shell, were avid to acquire an extra electron and also achieve an inert-gas configuration. Thus when sodium came into contact with chlorine, there would be an immediate (indeed explosive) union, each sodium atom donating its extra electron, and each chlorine atom happily receiving it, both becoming ionized in the process.

      The placement of the transition elements and the rare-earth elements in the periodic table had always given rise to special problems. Bohr now suggested an elegant and ingenious solution to this: the transition elements, he proposed, contained an additional shell of ten electrons each; the rare-earth elements an additional shell of fourteen. These inner shells, deeply buried in the case of the rare-earth elements, did not affect chemical character in nearly so extreme a way as the outer shells; hence the relative similarity of all the transition elements and the extreme similarity of all the rare-earth elements.

      Bohr’s electronic periodic table, based on atomic structure, was essentially the same as Mendeleev's empirical one based on chemical reactivity (and all but identical with the block tables devised in pre-electronic times, such as Thomsen’s pyramidal table and Werner’s ultralong table of 1905). Whether one inferred the periodic table from the chemical properties of the elements or from the electronic shells of their atoms, one arrived at exactly the same point. Moseley and Bohr had made it absolutely clear that the periodic table was based on a fundamental numerical series that determined the number of elements in each period: two in the first period, eight each in the second and third, eighteen each in the fourth and fifth; thirty-two in the sixth and perhaps also the seventh. I repeated this series—2, 8, 8, 18, 18, 32—over and over to myself.
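
      The series Sacks memorized has a compact closed form, assuming the standard 2n² shell capacities: each capacity after the first appears twice. The function below is one common way to state this, not anything given in the excerpt.

    # Period lengths of the periodic table: the shell capacities
    # 2n^2 (2, 8, 18, 32, ...) each occur twice after the first.
    def period_length(k):
        """Number of elements in period k."""
        return 2 * (k // 2 + 1) ** 2

    print([period_length(k) for k in range(1, 8)])
    # -> [2, 8, 8, 18, 18, 32, 32]

      Note that the closed form also gives 32 for the seventh period, the “perhaps” of the excerpt, which has since been borne out.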

      At this point I started to revisit the Science Museum and spend hours once again gazing at the giant periodic table there, this time concentrating on the atomic numbers inscribed in each cubicle in red. I would look at vanadium, for example—there was a shining nugget in its pigeonhole—and think of it as element 23, a 23 consisting of 5 + 18: five electrons in an outer shell around an argon “core” of eighteen. Five electrons—hence its maximum valency of 5; but three of these formed an incomplete inner shell, and it was such an incomplete shell, I had now learned, that gave rise to vanadium's characteristic colors and magnetic susceptibilities. This sense of the quantitative did not replace the concrete, the phenomenal sense of vanadium but heightened it, because I saw it now as a revelation, in atomic terms, of why vanadium had the properties it did….

Atomic Numbers

[This excerpt is from chapter 24 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      …One could round off a weight that was slightly less or slightly more than a whole number (as Dalton did), but what could one do with chlorine, for example, with its atomic weight of 35.5? This made Prout’s hypothesis difficult to maintain, and further difficulties emerged when Mendeleev made the periodic table. It was clear, for example, that tellurium came, in chemical terms, before iodine, but its atomic weight, instead of being less, was greater. These were grave difficulties, and yet throughout the nineteenth century Prout’s hypothesis never really died—it was so beautiful, so simple, many chemists and physicists felt, that it must contain an essential truth.

      Was there perhaps some atomic property that was more integral, more fundamental than atomic weight? This was not a question that could be addressed until one had a way of “sounding” the atom, sounding, in particular, its central portion, the nucleus. In 1913, a century after Prout, Harry Moseley, a brilliant young physicist working with Rutherford, set about exploring atoms with the just-developed technique of X-ray spectroscopy. His experimental setup was charming and boyish: using a little train, each car carrying a different element, moving inside a yard-long vacuum tube, Moseley bombarded each element with cathode rays, causing them to emit characteristic X-rays. When he came to plot the square roots of the frequencies against the atomic number of the elements, he got a straight line; and plotting it another way, he could show that the increase in frequency showed sharp, discrete steps or jumps as he passed from one element to the next. This had to reflect a fundamental atomic property, Moseley believed, and that property could only be nuclear charge.
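
      Moseley's straight line is easy to mimic. A sketch, using the modern statement of Moseley's law for the K-alpha X-ray line; the formula and constant are standard physics, not quoted from the excerpt.

    # Moseley's law for the K-alpha line: f = (3/4) * R * (Z - 1)^2,
    # so sqrt(f) climbs in equal steps as atomic number Z increases.
    import math

    R = 3.29e15  # Rydberg frequency, Hz

    def k_alpha(Z):
        return 0.75 * R * (Z - 1) ** 2

    for Z in range(20, 26):  # calcium through manganese
        print(f"Z = {Z}: sqrt(f) = {math.sqrt(k_alpha(Z)):.3e}")
    # Each step in Z raises sqrt(f) by the same increment -- the
    # straight line Moseley plotted in 1913.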

      Moseley’s discovery allowed him (in Soddy's words) to “call the roll” of the elements. No gaps could be allowed in the sequence, only even, regular steps. If there was a gap, it meant that an element was missing. One now knew for certain the order of the elements, and that there were ninety-two elements and ninety-two only, from hydrogen to uranium. And it was now clear that there were seven missing elements, and seven only, still to be found. The “anomalies” that went with atomic weights were resolved: tellurium might have a slightly higher atomic weight than iodine, but it was element number 52, and iodine was 53. It was atomic number, not atomic weight, that was crucial.

      The brilliance and swiftness of Moseley’s work, which was all done in a few months of 1913-14, produced mixed reactions among chemists. Who was this young whippersnapper, some older chemists felt, who presumed to complete the periodic table, to foreclose the possibility of discovering any new elements other than the ones he had designated? What did he know about chemistry—or the long, arduous processes of distillation, filtration, crystallization that might be necessary to concentrate a new element or analyze a new compound? But Urbain, one of the greatest analytic chemists of all—a man who had done fifteen thousand fractional crystallizations to isolate lutecium—at once appreciated the magnitude of the achievement, and saw that far from disturbing the autonomy of chemistry, Moseley had in fact confirmed the periodic table and reestablished its centrality. “The law of Moseley . . . confirmed in a few days the conclusions of my twenty years of patient work.”

      Atomic numbers had been used before to denote the ordinal sequence of elements ranked by their atomic weight, but Moseley gave atomic numbers real meaning. The atomic number indicated the nuclear charge, indicated the element’s identity, its chemical identity, in an absolute and certain way. There were, for example, several forms of lead—isotopes—with different atomic weights, but all of these had the same atomic number, 82. Lead was essentially, quintessentially, number 82, and it could not change its atomic number without ceasing to be lead. Tungsten was necessarily, unavoidably, element 74. But how did its 74-ness endow it with its identity?

Rutherford’s Gold Foil

[This excerpt is from chapter 23 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      The alpha particles emitted by radioactive decay (they were later shown to be helium nuclei) were positively charged and relatively massive—thousands of times more massive than beta particles or electrons—and they traveled in undeviating straight lines, passing straight through matter, ignoring it, without any scattering or deflection (although they might lose some of their velocity in so doing). This, at least, appeared to be the case, though in 1906 Rutherford observed that there might be, very occasionally, small deflections. Others ignored this, but to Rutherford these observations were fraught with possible significance. Would not alpha particles be ideal projectiles, projectiles of atomic proportions, with which to bombard other atoms and sound out their structure? He asked his young assistant Hans Geiger and a student, Ernest Marsden, to set up a scintillation experiment using screens of thin metal foils, so that one could keep count of every alpha particle that bombarded these. Firing alpha particles at a piece of gold foil, they found that roughly one in eight thousand particles showed a massive deflection—of more than 90 degrees, and sometimes even 180 degrees. Rutherford was later to say, “It was quite the most incredible event that ever happened to me in my life. It was almost as incredible as if you fired a fifteen-inch shell at a piece of tissue paper and it came back and hit you.”

      Rutherford pondered these curious results for almost a year, and then, one day, as Geiger recorded, he “came into my room, obviously in the best of moods, and told me that now he knew what the atom looked like and what the strange scatterings signified.”

      Atoms, Rutherford had realized, could not be a homogeneous jelly of positivity stuck with electrons like raisins (as J. J. Thomson had suggested, in his “plum pudding” model of the atom), for then the alpha particles would always go through them. Given the great energy and charge of these alpha particles, one had to assume that they had been deflected, on occasion, by something even more positively charged than themselves. Yet this happened only once in eight thousand times. The other 7,999 particles might whiz through, undeflected, as if most of the gold atoms consisted of empty space; but the eight-thousandth was stopped, flung back in its tracks, like a tennis ball hitting a globe of solid tungsten. The mass of the gold atom, Rutherford inferred, had to be concentrated at the center, in a minute space, not easy to hit—as a nucleus of almost inconceivable density. The atom, he proposed, must consist overwhelmingly of empty space, with a dense, positively charged nucleus only a hundred-thousandth its diameter, and a relatively few, negatively charged electrons in orbit about this nucleus—a miniature solar system, in effect.

Radiochemistry

[This excerpt is from chapter 22 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      The half-life of radium was much longer than that of its emanation, radon—about 1,600 years. But this was still very small compared to the age of the earth—why, then, if it steadily decayed, had all the earth’s radium not disappeared long ago? The answer, Rutherford inferred, and was soon able to demonstrate, was that radium itself was produced by elements with a much longer half-life, a whole train of substances that he could trace back to the parent element, uranium. Uranium in turn had a half-life of four and a half billion years, roughly the age of the earth itself. Other cascades of radioactive elements were derived from thorium, which had an even longer half-life than uranium. Thus the earth was still living, in terms of atomic energy, on the uranium and thorium that had been present when the earth formed.
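
      The arithmetic behind this paragraph is simple exponential decay. A minimal sketch, using the standard decay law and the half-lives quoted above:

    # Fraction of a radioactive substance remaining after time t:
    # N(t)/N0 = 0.5 ** (t / half_life).
    def fraction_left(t_years, half_life_years):
        return 0.5 ** (t_years / half_life_years)

    # Radium (half-life ~1,600 years) from the earth's formation would
    # be long gone were it not replenished by decaying uranium:
    print(fraction_left(1e6, 1600))      # after a mere million years: ~1e-188
    # Uranium-238's half-life is roughly the age of the earth itself:
    print(fraction_left(4.5e9, 4.5e9))   # -> 0.5, about half of it remains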

      These discoveries had a crucial impact on a long-standing debate about the age of the earth. The great physicist Kelvin, writing in the early 1860s, soon after the publication of The Origin of Species, had argued that, based on its rate of cooling, and assuming no source of heat other than the sun, the earth could be no more than twenty million years old, and that in another five million years it would become too cold to support life. This calculation was not only dismaying in itself, but was impossible to reconcile with the fossil record, which indicated that life had been present for hundreds of millions of years—and yet there seemed no way of rebutting it. Darwin was greatly disturbed by this.

      It was only with the discovery of radioactivity that the conundrum was solved. The young Rutherford, it was said, nervously facing the famous Lord Kelvin, now eighty years old, suggested that Kelvin’s calculation had been based on a false assumption. There was another source of warmth besides the sun, Rutherford said, and a very important one for the earth. Radioactive elements (chiefly uranium and thorium, and their breakdown products, but also a radioactive isotope of potassium) had served to keep the earth warm for billions of years and to protect it from the premature heat-death that Kelvin had predicted. Rutherford held up a piece of pitchblende, the age of which he had estimated from the amount of helium it contained. This piece of the earth, he said, was at least 500 million years old.

      Rutherford and Soddy were ultimately able to delineate three separate radioactive cascades, each containing a dozen or so breakdown products emanating from the disintegration of the original parent elements. Could all of these breakdown products be different elements? There was no room in the periodic table for three dozen elements between bismuth and thorium—room for half a dozen, perhaps, but not much more. Only gradually did it become clear that many of the elements were just versions of one another; the emanations of radium and thorium and actinium, for example, though they had widely differing half-lives, were chemically identical, all the same element, though with slightly different atomic weights. (Soddy later named these isotopes.) And the end points of each series were similar—radium G, actinium E, and thorium E, so-called, were all isotopes of lead.

      Every substance in these cascades of radioactivity had its own unique radio signature, a half-life of fixed and invariable duration, as well as a characteristic radiation emission, and it was this which allowed Rutherford and Soddy to sort them all out, and in so doing to found the new science of radiochemistry.

      The idea of atomic disintegration, first raised and then retreated from by Marie Curie, could no longer be denied. It was evident that every radioactive substance disintegrated in the act of giving off energy and turned into another element, that transmutation lay at the heart of radioactivity.

      I loved chemistry in part because it was a science of transformations, of innumerable compounds based on a few dozen elements, themselves fixed and invariant and eternal. The feeling of the elements’ stability and invariance was crucial to me psychologically, for I felt them as fixed points, as anchors, in an unstable world. But now, with radioactivity, came transformations of the most incredible sort. What chemist would have conceived that out of uranium, a hard, tungsteny metal, there could come an alkaline earth metal like radium; an inert gas like radon; a tellurium-like element, polonium; radioactive forms of bismuth and thallium; and, finally, lead—exemplars of almost every group in the periodic table?

      No chemist would have conceived this (though an alchemist might), because the transformations lay beyond the sphere of chemistry. No chemical process, no chemical attack, could ever alter the identity of an element, and this applied to the radioactive elements too. Radium, chemically, behaved similarly to barium; its radioactivity was a different property altogether, wholly unrelated to its chemical or physical properties. Radioactivity was a marvelous (or terrible) addition to these, a wholly other property (and one that annoyed me at times, for I loved the tungstenlike density of metallic uranium, and the fluorescence and beauty of its minerals and salts, but I felt I could not handle them safely for long; similarly I was infuriated by the intense radioactivity of radon, which otherwise would have made an ideal heavy gas).

      Radioactivity did not alter the realities of chemistry, or the notion of elements; it did not shake the idea of their stability and identity. What it did do was to hint at two realms in the atom—a relatively superficial and accessible realm governing chemical reactivity and combination, and a deeper realm, inaccessible to all the usual chemical and physical agents and their relatively small energies, where any change produced a fundamental alteration of the element’s identity.


You can see radiation in a spinthariscope on YouTube!
This is what Sacks observed.
Cuttlefish

[This excerpt is from chapter 22 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      We all adopted particular zoological groups: Eric became enamored of sea cucumbers, holothurians; Jonathan of iridescent bristled worms, polychaetes; and I of squids and cuttlefish, octopuses, cephalopods—the most intelligent and, to my eyes, the most beautiful of invertebrates. One day we all went down to the seashore, to Hythe in Kent, where Jonathan's parents had taken a house for the summer, and went out for a day's fishing on a commercial trawler. The fishermen would usually throw back the cuttlefish that ended up in their nets (they were not popular eating in England). But I, fanatically, insisted that they keep them for me, and there must have been dozens of them on the deck by the time we came in. We took all the cuttlefish back to the house in pails and tubs, put them in large jars in the basement, and added a little alcohol to preserve them. Jonathan’s parents were away, so we did not hesitate. We would be able to take all the cuttlefish back to school, to Sid—we imagined his astonished smile as we brought them in—and there would be a cuttlefish apiece for everyone in the class to dissect, two or three apiece for the cephalopod enthusiasts. I myself would give a little talk about them at the Field Club, dilating on their intelligence, their large brains, their eyes with erect retinas, their rapidly changing colors.

      A few days later, the day Jonathan's parents were due to return, we heard dull thuds emanating from the basement, and going down to investigate, we encountered a grotesque scene: the cuttlefish, insufficiently preserved, had putrefied and fermented, and the gases produced had exploded the jars and blown great lumps of cuttlefish all over the walls and floor; there were even shreds of cuttlefish stuck to the ceiling. The intense smell of putrefaction was awful beyond imagination. We did our best to scrape off the walls and remove the exploded, impacted lumps of cuttlefish. We hosed down the basement, gagging, but the stench was not to be removed, and when we opened windows and doors to air out the basement, it extended outside the house as a sort of miasma for fifty yards in every direction.

      Eric, always ingenious, suggested we mask the smell, or replace it, by an even stronger, but pleasant smell—a coconut essence, we decided, would fill the bill. We pooled our resources and bought a large bottle of this, which we used to douche the basement, and then distributed liberally through the rest of the house and its grounds.

      Jonathan’s parents arrived an hour later and, advancing toward the house, hit an overwhelming scent of coconut. But as they drew nearer they hit a zone dominated by the stench of putrefied cuttlefish—the two smells, the two vapors, for some curious reason, had organized themselves in alternating zones about five or six feet wide. By the time they reached the scene of our accident, our crime, the basement, the smell was insupportable for more than a few seconds. The three of us were all in deep disgrace over the incident, I especially, since it had arisen from my greed in the first place (would not a single cuttlefish have done?) and my folly in not realizing how much alcohol they would need. Jonathan’s parents had to cut short their holiday and leave the house (the house itself, we heard, remained uninhabitable for months). But my love of cuttlefish remained unimpaired.

Spontaneous Radiation

[These excerpts are from chapter 21 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      What had been a mild puzzle with uranium had become a much more acute one with the isolation of radium, a million times more radioactive. While uranium could darken a photographic plate (though this took several days) or discharge an ultrasensitive gold-leaf electroscope, radium did this in a fraction of a second; it glowed spontaneously with the fury of its own activity; and, as became increasingly evident in the new century, it could penetrate opaque materials, ozonize the air, tint glass, induce fluorescence, and burn and destroy the living tissues of the body, in a way that could be either therapeutic or destructive.

      With radiation of every other sort, going all the way from X-rays to radio waves, energy had to be provided by an external source; but radioactive elements, apparently, had their own power and could emit energy without decrement for months or years, and neither heat nor cold nor pressure nor magnetic fields nor irradiation nor chemical reagents made the least difference to this.

      Where did this immense amount of energy come from? The firmest principles in the physical sciences were the principles of conservation —that matter and energy could neither be created nor destroyed. There had never been any serious suggestion that these principles could ever be violated, and yet radium at first appeared to do exactly that—to be a perpetuum mobile, a free lunch, a steady and inexhaustible source of energy.

      One escape from this quandary was to suppose that the energy of radioactive substances had an exterior source; this indeed was what Becquerel first suggested, on the analogy of phosphorescence—that radioactive substances absorbed energy from something, from somewhere, and then reemitted it, slowly, in their own way. (He coined the term hyperphosphorescence for this.)

      Notions of an outside source—perhaps an X-ray-like radiation bathing the earth—had been entertained briefly by the Curies, and they had sent a sample of a radium concentrate to Hans Geitel and Julius Elster in Germany. Elster and Geitel were close friends (they were known as “the Castor and Pollux of physics”), and they were brilliant investigators, who had already shown radioactivity to be unaffected by vacua, cathode rays, or sunlight. When they took the sample down a thousand-foot mine in the Harz Mountains—a place where no X-rays could reach—they found its radioactivity undiminished….

      But if it was imaginable—just—that a slow dribble of energy such as uranium emitted might come from an outside source, such a notion became harder to believe when faced with radium, which (as Pierre Curie and Albert Laborde would show, in 1903) was capable of raising its own weight of water from freezing to boiling in an hour. It was harder still when faced with even more intensely radioactive substances, such as pure polonium (a small piece of which would spontaneously become red-hot) or radon, which was 200,000 times more radioactive than radium itself—so radioactive that a pint of it would instantly vaporize any vessel in which it was contained. Such a power to heat was unintelligible with any etheric or cosmic hypothesis.
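
      The water-heating claim corresponds to a definite power output, which a couple of lines of arithmetic make explicit. The only constant assumed below is the specific heat of water:

    # "Its own weight of water from freezing to boiling in an hour":
    # heating 1 g of water by 100 C takes about 418 J, so a gram of
    # radium must supply roughly that much energy every hour.
    joules = 1 * 4.184 * 100   # grams * specific heat (J/g/C) * delta T
    watts = joules / 3600      # delivered over one hour
    print(f"{watts:.3f} W per gram")  # -> ~0.116 W, the steady power
                                      # implied by the excerpt's claim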

      With no plausible external source of energy, the Curies were forced to return to their original thought that the energy of radium had to have an internal origin, to be an “atomic property”—although a basis for this was hardly imaginable. As early as 1898, Marie Curie added a bolder, even outrageous thought, that radioactivity might come from the disintegration of atoms, that it could be “an emission of matter accompanied by a loss of weight of the radioactive substances”—a hypothesis even more bizarre, it might have seemed, than its alternative, for it had been axiomatic in science, a fundamental assumption, that atoms were indestructible, immutable, unsplittable—the whole of chemistry and classical physics was built on that faith….

      All scientific tradition, from Democritus to Dalton, from Lucretius to Maxwell, insisted upon this principle, and one can readily understand how, after her first bold thoughts about atomic disintegration, Marie Curie withdrew from the idea, and (using unusually poetic language) ended her thesis on radium by saying, “the cause of this spontaneous radiation remains a mystery . . . a profound and wonderful enigma.”

The Curies, Polonium and Radium

[This excerpt is from chapter 21 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Eve Curie’s biography of her mother—which my own mother gave me when I was ten—was the first portrait of a scientist I ever read, and one that deeply impressed me. It was no dry recital of a life’s achievements, but full of evocative, poignant images—Marie Curie plunging her hands into the sacks of pitchblende residue, still mixed with pine needles from the Joachimsthal mine; inhaling acid fumes as she stood amid vast steaming vats and crucibles, stirring them with an iron rod almost as big as herself; transforming the huge, tarry masses to tall vessels of colorless solutions, more and more radioactive, and steadily concentrating these, in turn, in her drafty shed, with dust and grit continually getting into the solutions and undoing the endless work. (These images were reinforced by the film Madame Curie, which I saw soon after reading the book.)

      Even though the rest of the scientific community had ignored the news of Becquerel's rays, the Curies were galvanized by it: this was a phenomenon without precedent or parallel, the revelation of a new, mysterious source of energy; and nobody, apparently, was paying any attention to it. They wondered at once whether there were any substances besides uranium that emitted similar rays, and started on a systematic search (not confined, as Becquerel’s had been, to fluorescent substances) of everything they could lay their hands on, including samples of almost all the seventy known elements in some form or other. They found only one other substance besides uranium that emitted Becquerel’s rays, another element of very high atomic weight—thorium. Testing a variety of pure uranium and thorium salts, they found the intensity of the radioactivity seemed to be related only to the amount of uranium or thorium present; thus one gram of metallic uranium or thorium was more radioactive than one gram of any of their compounds.

      But when they extended their survey to some of the common minerals containing uranium and thorium, they found a curious anomaly, for some of these were actually more active than the element itself. Samples of pitchblende, for instance, might be up to four times as radioactive as pure uranium. Could this mean, they wondered, in an inspired leap, that another, as-yet-unknown element was also present in small amounts, one that was far more radioactive than uranium itself?

      In 1897 the Curies launched upon an elaborate chemical analysis of pitchblende, separating the many elements it contained into analytic groups: salts of alkali metals, of alkaline earth elements, of rare-earth elements—groups basically similar to those of the periodic table—to see if the unknown radioactive element had chemical affinities with any of them. Soon it became clear that a good part of the radioactivity could be concentrated by precipitation with bismuth.

      They continued rendering their pitchblende residue down, and in July of 1898 they were able to make a bismuth extract four hundred times more radioactive than uranium itself. Knowing that spectroscopy could be thousands of times more sensitive than traditional chemical analysis, they now approached the eminent rare-earth spectroscopist Eugene Demarcay to see if they could get a spectroscopic confirmation of their new element. Disappointingly, no new spectral signature could be obtained at this point; but nonetheless, the Curies wrote,

      we believe the substance we have extracted from pitchblende contains a metal not yet observed, related to bismuth by its analytical properties. If the existence of this new metal is confirmed we propose to call it polonium, from the name of the original country of one of us.

      They were convinced, moreover, that there must be still another radioactive element waiting to be discovered, for the bismuth extraction of polonium accounted for only a portion of the pitchblende’s radioactivity.

      They were unhurried—no one else, after all, it seemed, was even interested in the phenomenon of radioactivity, apart from their good friend Becquerel—and at this point took off on a leisurely summer holiday. (They were unaware at the time that there was another eager and intense observer of Becquerel’s rays, the brilliant young New Zealander Ernest Rutherford, who had come to work in J. J. Thomson’s lab in Cambridge.) In September the Curies returned to the chase, concentrating on precipitation with barium—this seemed particularly effective in mopping up the remaining radioactivity, presumably because it had close chemical affinities with the second as-yet-unknown element they were now seeking. Things moved swiftly, and within six weeks they had a bismuth-free (and presumably polonium-free) barium chloride solution which was nearly a thousand times as radioactive as uranium. Demarcay's help was sought once again, and this time, to their joy, he found a spectral line (and later several lines: “two beautiful red bands, one line in the blue-green, and two faint lines in the violet”) belonging to no known element. Emboldened by this, the Curies claimed a second new element a few days before the close of 1898. They decided to call it radium, and since there was only a trace of it mixed in with the barium, they felt its radioactivity “must therefore be enormous.”

      It was easy to claim a new element: there had been more than two hundred such claims in the course of the nineteenth century, most of which turned out to be cases of mistaken identity, either “discoveries” of already known elements or mixtures of elements. Now, in a single year, the Curies had claimed the existence of not one but two new elements, solely on the basis of a heightened radioactivity and its material association with bismuth and barium (and, in the case of radium, a single new spectral line). Yet neither of their new elements had been isolated, even in microscopic amounts.

      Pierre Curie was fundamentally a physicist and theorist (though dexterous and ingenious in the lab, often devising new and original apparatus—one such was an electrometer, another a delicate balance based on a new piezo-electric principle—both subsequently used in their radioactivity studies). For him, the incredible phenomenon of radioactivity was enough—it invited a vast new realm of research, a new continent where countless new ideas could be tested.

      But for Marie, the emphasis was different: she was clearly enchanted by the physicality of radium as well as its strange new powers; she wanted to see it, to feel it, to put it in chemical combination, to find its atomic weight and its position in the periodic table.

      Up to this point the Curies’ work had been essentially chemical, removing calcium, lead, silicon, aluminum, iron, and a dozen rare-earth elements—all the elements other than barium—from the pitchblende. Finally, after a year of this, there came a time when chemical methods alone no longer sufficed. There seemed no chemical way of separating radium from barium, so Marie Curie now began to look for a physical difference between their compounds. It seemed probable that radium would be an alkaline earth element like barium and might therefore follow the trends of the group. Calcium chloride is highly soluble; strontium chloride less so; barium chloride still less so—radium chloride, Marie Curie predicted, would be virtually insoluble. Perhaps one could make use of this to separate the chlorides of barium and radium, using the technique of fractional crystallization. As a warm solution is cooled, the less soluble solute will crystallize out first, and this was a technique which had been pioneered by the rare-earth chemists, striving to separate elements that were chemically almost indistinguishable. It was one that required great patience, for hundreds, even thousands, of fractional crystallizations might be needed, and it was this repetitive and tantalizingly slow process that now caused the months to extend into years.
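
      The power of repeated crystallization comes from compounding a small preference at each step. Here is a toy model; the retention fractions 0.7 and 0.5 are illustrative assumptions, not measured values.

    # Toy model of fractional crystallization. Suppose each step's
    # crystals retain 70% of the radium but only 50% of the barium
    # (hypothetical numbers); repetition compounds the difference.
    def enrich(ra, ba, steps, ra_keep=0.7, ba_keep=0.5):
        for _ in range(steps):
            ra *= ra_keep
            ba *= ba_keep
        return ra, ba

    ra, ba = enrich(1e-6, 1.0, steps=20)   # a trace of radium in barium
    print(f"Ra:Ba ratio improved {ra / ba / 1e-6:,.0f}-fold")  # ~800-fold

      The yield shrinks as the ratio improves, which is why the Curies needed tons of ore to end with a decigram of radium chloride.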

      The Curies had hoped they might isolate radium by 1900, but it was to take nearly four years from the time they announced its probable existence to obtain a pure radium salt, a decigram of radium chloride—less than a ten-millionth part of the original. Fighting against all manner of physical difficulties, fighting the doubts and skepticisms of most of their peers, and sometimes their own hopelessness and exhaustion; fighting (although they did not know it) against the insidious effects of radioactivity on their own bodies, the Curies finally triumphed and obtained a few grains of pure white crystalline radium chloride—enough to calculate radium’s atomic weight (226), and to give it its rightful place, below barium, in the periodic table.

      To obtain a decigram of an element from several tons of ore was an achievement with no precedent; never had an element been so hard to obtain. Chemistry alone could not have succeeded in this, nor could spectroscopy alone, for the ore had to be concentrated a thousandfold before the first faint spectral lines of radium could even be seen. It had required a wholly new approach—the use of radioactivity itself—to identify the infinitesimal concentration of radium in its vast mass of surrounding material, and to monitor it as it was slowly, reluctantly, forced into a state of purity.

      With this achievement, public interest in the Curies exploded, spreading equally to their magical new element and the romantic, heroic husband-and-wife team who had dedicated themselves so totally to its exploration. In 1903, Marie Curie summarized the work of the previous six years in her doctoral thesis, and in the same year she received (with Pierre Curie and Becquerel) the Nobel Prize in physics.

Spectroscopy

[This excerpt is from chapter 17 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Kirchhoff and others (and especially Lockyer himself) went on to identify a score of other terrestrial elements in the sun, and now the Fraunhofer mystery—the hundreds of black lines in the solar spectrum—could be understood as the absorption spectra of these elements in the outermost layers of the sun, as they were transilluminated from within. On the other hand, a solar eclipse, it was predicted, with the central brilliance of the sun obscured and only its brilliant corona visible, would produce instead dazzling emission spectra corresponding to the dark lines….

      At this point, Bunsen and Kirchhoff turned their attention away from the heavens, to see if they could find any new or undiscovered elements on the earth using their new technique. Bunsen had already observed the great power of the spectroscope to resolve complex mixtures—to provide, in effect, an optical analysis of chemical compounds. If lithium, for example, was present in small amounts along with sodium, there was no way, with conventional chemical analysis, to detect it. Nor were flame colors of help here, because the brilliant yellow flame of sodium tended to flood out other flame colors. But with a spectroscope, the characteristic spectrum of lithium could be seen immediately, even if it was mixed with ten thousand times its weight of sodium.

      This enabled Bunsen to show that certain mineral waters rich in sodium and potassium also contained lithium (this had been completely unsuspected, the only sources hitherto having been certain rare minerals). Could they contain other alkali metals too? When Bunsen concentrated his mineral water, rendering down 600 quintals (about 44 tons) to a few liters, he saw, amid the lines of many other elements, two remarkable blue lines, close together, which had never been seen before. This, he felt, must be the signature of a new element. “I shall name it cesium because of its beautiful blue spectral line,” he wrote, announcing its discovery in November 1860.

      Three months later, Bunsen and Kirchhoff discovered another new alkali metal; they called this rubidium, from “the magnificent dark red color of its rays.”

      Within a few decades of Bunsen and Kirchhoff’s discoveries twenty more elements were discovered with the aid of spectroscopy—indium and thallium (which were also named for their brilliantly colored spectral lines), gallium, scandium, and germanium (the three elements Mendeleev had predicted), all the remaining rare-earth elements, and, in the 1890s, the inert gases.

      But perhaps the most romantic story of all, certainly the one that most appealed to me as a boy, had to do with the discovery of helium. It was Lockyer himself who, during a solar eclipse in 1868, was able to see a brilliant yellow line in the sun's corona, a line near the yellow sodium lines, but clearly distinct from them. He surmised that this new line must belong to an element unknown on earth, and named it helium (he gave it the metallic suffix of -ium because he assumed it was a metal). This finding aroused great wonder and excitement, and it was even speculated by some that every star might have its own special elements. It was only twenty-five years later that certain terrestrial (uranium) minerals were found to contain a strange, light gas, readily released, and when this was submitted to spectroscopy it proved to be the selfsame helium.

Ida Noddack

[This excerpt is from chapter 16 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Ida Tacke Noddack was one of a team of German scientists who found element 75, rhenium, in 1925-26. Noddack also claimed to have found element 43, which she called masurium. But this claim could not be supported, and she was discredited. In 1934, when Fermi shot neutrons at uranium and thought he had made element 93, Noddack suggested that he was wrong, that he had in fact split the atom. But since she had been discredited with element 43, no one paid any attention to her. Had she been listened to, Germany would probably have had the atomic bomb and the history of the world would have been different. (This story was told by Glenn Seaborg when he was presenting his recollections at a conference in November 1997.)

Mendeleev

[This excerpt is from chapter 16 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Like my own parents, Mendeleev had come from a huge family—he was the youngest, I read, of fourteen children. His mother must have recognized his precocious intelligence, and when he reached fourteen, feeling that he would be lost without a proper education, she walked thousands of miles from Siberia with him—first to the University of Moscow (from which, as a Siberian, he was barred) and then to St. Petersburg, where he got a grant to train as a teacher. (She herself, apparently, nearing sixty at the time, died from exhaustion after this prodigious effort. Mendeleev, profoundly attached to her, was later to dedicate the Principles to her memory.)

      Even as a student in St. Petersburg, Mendeleev showed not only an insatiable curiosity, but a hunger for organizing principles of all kinds. Linnaeus, in the eighteenth century, had classified animals and plants, and (much less successfully) minerals, too. Dana, in the 1830s, had replaced the old physical classification of minerals with a chemical classification of a dozen or so main categories (native elements, oxides, sulfides, and so on). But there was no such classification for the elements themselves, and there were now some sixty elements known. Some elements, indeed, seemed almost impossible to categorize. Where did uranium go, or that puzzling, ultralight metal, beryllium? Some of the most recently discovered elements were particularly difficult—thallium, for example, discovered in 1862, was in some ways similar to lead, in others to silver, in others to aluminum, and in yet others to potassium.

      It was nearly twenty years from Mendeleev's first interest in classification to the emergence of his periodic table in 1869. This long pondering and incubation (so similar, in a way, to Darwin’s before he published On the Origin of Species) was perhaps the reason why, when Mendeleev finally published his Principles, he could bring a vastness of knowledge and insight far beyond any of his contemporaries—some of them also had a clear vision of periodicity, but none of them could marshal the overwhelming detail he could.

      Mendeleev described how he would write the properties and atomic weights of the elements on cards and ponder and shuffle these constantly on his long railway journeys through Russia, playing a sort of patience or (as he called it) “chemical solitaire,” groping for an order, a system that might bring sense to all the elements, their properties and atomic weights.

      There was another crucial factor. There had been considerable confusion, for decades, about the atomic weights of many elements. It was only when this was cleared up finally, at the Karlsruhe conference in 1860, that Mendeleev and others could even think of achieving a full taxonomy of the elements. Mendeleev had gone to Karlsruhe with Borodin (this was a musical as well as a chemical journey, for they stopped at many churches en route, trying out the local organs for themselves). With the old, pre-Karlsruhe atomic weights one could get a sense of local triads or groups, but one could not see that there was a numerical relationship between the groups themselves. Only when Cannizzaro showed how reliable atomic weights could be obtained and showed, for example, that the proper atomic weights for the alkaline earth metals (calcium, strontium, and barium) were 40, 88, and 137 (not 20, 44, and 68, as formerly believed) did it become clear how close these were to those of the alkali metals—potassium, rubidium, and cesium. It was this closeness, and in turn the closeness of the atomic weights of the halogens—chlorine, bromine, and iodine—which incited Mendeleev, in 1868, to make a small grid juxtaposing the three groups:

      Cl  35.5    K   39      Ca  40
      Br  80      Rb  85      Sr  88
      I   127     Cs  133     Ba  137

      And it was at this point, seeing that arranging the three groups of elements in order of atomic weight produced a repetitive pattern—a halogen followed by an alkali metal, followed by an alkaline earth metal—that Mendeleev, feeling this must be a fragment of a larger pattern, leapt to the idea of a periodicity governing all the elements—a Periodic Law.
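
      With the nine elements of the little grid, the pattern can be shown mechanically: sort by atomic weight and the chemical families cycle. A sketch using only the values in the grid above:

    # Mendeleev's fragment of periodicity: order the grid's nine
    # elements by atomic weight and the chemical families repeat.
    elements = [
        ("Cl", 35.5, "halogen"), ("K", 39, "alkali metal"), ("Ca", 40, "alkaline earth"),
        ("Br", 80, "halogen"), ("Rb", 85, "alkali metal"), ("Sr", 88, "alkaline earth"),
        ("I", 127, "halogen"), ("Cs", 133, "alkali metal"), ("Ba", 137, "alkaline earth"),
    ]

    for symbol, weight, family in sorted(elements, key=lambda e: e[1]):
        print(f"{weight:>6}  {symbol:<2}  {family}")
    # halogen, alkali metal, alkaline earth, halogen, alkali metal,
    # alkaline earth, ... -- the repetition that suggested a Periodic Law.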

      Mendeleev’s first small table had to be filled in, and then extended in all directions, as if filling up a crossword puzzle; this in itself required some bold speculations. What element, he wondered, was chemically allied with the alkaline earth metals, yet followed lithium in atomic weight? No such element apparently existed—or could it be beryllium, usually considered to be trivalent, with an atomic weight of 14.5? What if it was bivalent instead, with an atomic weight, therefore, not of 14.5 but 9? Then it would follow lithium and fit into the vacant space perfectly.

      Moving between conscious calculation and hunch, between intuition and analysis, Mendeleev arrived within a few weeks at a tabulation of thirty-odd elements in order of ascending atomic weight, a tabulation that now suggested there was a recapitulation of properties with every eighth element. And on the night of February 16, 1869, it is said, he had a dream in which he saw almost all of the known elements arrayed in a grand table. The following morning, he committed this to paper.

      The logic and pattern of Mendeleev's table were so clear that certain anomalies stood out at once. Certain elements seemed to be in the wrong places, while certain places had no elements. On the basis of his enormous chemical knowledge, he repositioned half a dozen elements, in defiance of their accepted valency and atomic weights. In doing this, he displayed an audacity that shocked some of his contemporaries (Lothar Meyer, for one, felt it was monstrous to change atomic weights simply because they did not “fit”).

      In an act of supreme confidence, Mendeleev reserved several empty spaces in his table for elements “as yet unknown.” He asserted that by extrapolating from the properties of the elements above and below (and also, to some extent, from those to either side) one might make a confident prediction as to what these unknown elements would be like. He did exactly this in his 1871 table, predicting in great detail a new element (“eka-aluminum”) which would come below aluminum in Group III. Four years later just such an element was found, by the French chemist Lecoq de Boisbaudran, and named (either patriotically, or in sly reference to himself, gallus, the cock) gallium.

      The exactness of Mendeleev's prediction was astonishing: he predicted an atomic weight of 68 (Lecoq got 69.9) and a specific gravity of 5.9 (Lecoq got 5.94) and correctly guessed at a great number of gallium's other physical and chemical properties—its fusibility, its oxides, its salts, its valency. There were some initial discrepancies between Lecoq’s observations and Mendeleev’s predictions, but all of these were rapidly resolved in favor of Mendeleev. Indeed, it was said that Mendeleev had a better grasp of the properties of gallium—an element he had never even seen—than the man who actually discovered it.

      Suddenly Mendeleev was no longer seen as a mere speculator or dreamer, but as a man who had discovered a basic law of nature, and now the periodic table was transformed from a pretty but unproven scheme to an invaluable guide which could allow a vast amount of previously unconnected chemical information to be coordinated. It could also be used to suggest all sorts of research in the future, including a systematic search for “missing” elements. “Before the promulgation of this law,” Mendeleev was to say nearly twenty years later, “chemical elements were mere fragmentary, incidental facts in Nature; there was no special reason to expect the discovery of new elements.”

      Now, with Mendeleev’s periodic table, one could not only expect their discovery, but predict their very properties. Mendeleev made two more equally detailed predictions, and these were also confirmed with the discovery of scandium and germanium a few years later. Here, as with gallium, he made his predictions on the basis of analogy and linearity, guessing that the physical and chemical properties of these unknown elements, and their atomic weights, would be between those of the neighboring elements in their vertical groups.
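
      One simple version of this "analogy and linearity" is plain interpolation within a group. A sketch for eka-aluminum: the neighbor weights are standard values, and the bare averaging below is a simplification of what Mendeleev actually did.

    # Interpolating a missing element's atomic weight from its
    # vertical neighbors in the group -- a simplified version of
    # Mendeleev's method.
    def interpolate(above, below):
        return (above + below) / 2

    # Eka-aluminum sat between aluminum (27.0) and indium (114.8):
    print(interpolate(27.0, 114.8))  # -> 70.9; Mendeleev predicted 68,
                                     # and gallium came in at 69.9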

      The keystone to the whole table, curiously, was not anticipated by Mendeleev, and perhaps could not have been, for this was not a question of a missing element, but of an entire family or group. When argon was discovered in 1894—an element which did not seem to fit anywhere in the table—Mendeleev denied at first that it could be an element and thought it was a heavier form of nitrogen (N3, analogous to ozone, O3). But then it became apparent that there was a space for it, right between chlorine and potassium, and indeed, for a whole group coming between the halogens and the alkali metals in every period. This was realized by Lecoq, who went on to predict the atomic weights of the other yet-to-be-discovered gases—and these, indeed, were discovered in short order. With the discovery of helium, neon, krypton, and xenon, it was clear that these gases formed a perfect periodic group, a group so inert, so modest, so unobtrusive, as to have escaped for a century the chemist's attention. The inert gases were identical in their inability to form compounds; they had a valency, it seemed, of zero.

Alkali Metals

[This excerpt is from chapter 11 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Lavoisier, making his list of elements in 1789, had included the “alkaline earths” (magnesia, lime, and baryta) because he felt they contained new elements—and to these Davy added the alkalis (soda and potash), for these, he suspected, contained new elements too. But there were as yet no chemical means sufficient to isolate them. Could the radically new power of electricity, Davy wondered, succeed here where ordinary chemistry had failed? First he attacked the alkalis, and early in 1807 performed the famous experiments that isolated metallic potassium and sodium by electric current. When this occurred, Davy was so ecstatic, his lab assistant recorded, that he danced with joy around the lab.

      One of my greatest delights was to repeat Davy’s original experiments in my own lab, and I so identified with him that I could almost feel I was discovering these elements myself. Having read how he first discovered potassium, and how it reacted with water, I diced a little pellet of it (it cut like butter, and the cut surface glittered a brilliant silver-white—but only for an instant; it tarnished at once). I lowered it gently into a trough full of water and stood back—hardly fast enough, for the potassium caught fire instantly, melted, and as a frenzied molten blob rushed round and round in the trough, with a violet flame above it, spitting and crackling loudly as it threw off incandescent fragments in all directions. In a few seconds the little globule had burned itself out, and tranquility settled again over the water in the trough. But now the water felt warm, and soapy; it had become a solution of caustic potash, and being alkaline, it turned a piece of litmus paper blue.
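
The chemistry behind this scene, in the standard textbook form:

      $$\mathrm{2\,K + 2\,H_2O \;\rightarrow\; 2\,KOH + H_2\uparrow}$$

The heat of the reaction melts the potassium and ignites the hydrogen, and the potassium hydroxide (caustic potash) left behind is the alkali that turned the litmus paper blue.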

      Sodium was much cheaper and not quite as violent as potassium, so I decided to look at its action outdoors. I obtained a good-sized lump of it—about three pounds—and made an excursion to the Highgate Ponds in Hampstead Heath with my two closest friends, Eric and Jonathan. When we arrived, we climbed up a little bridge, and then I pulled the sodium out of its oil with tongs and flung it into the water beneath. It took fire instantly and sped around and around on the surface like a demented meteor, with a huge sheet of yellow flame above it. We all exulted—this was chemistry with a vengeance!

      There were other members of the alkali metal family even more reactive than sodium and potassium, metals like rubidium and cesium (there was also the lightest and least reactive, lithium). It was fascinating to compare the reactions of all five by putting small lumps of each into water. One had to do this gingerly, with tongs, and to equip oneself and one’s guests with goggles: lithium would move about the surface of the water sedately, reacting with it, emitting hydrogen, until it was all gone; a lump of sodium would move around the surface with an angry buzz, but would not catch fire if a small lump was used; potassium, in contrast, would catch fire the instant it hit the water, burning with a pale mauve flame and shooting globules of itself everywhere; rubidium was still more reactive, spluttering violently with a reddish violet flame; and cesium, I found, exploded when it hit the water, shattering its glass container. One never forgot the properties of the alkali metals after this.


What’s the Damage from Climate Change?

[These excerpts are from an article by William A. Pizer in the June 30, 2017, issue of Science.]

      Questions of environmental regulation typically involve trade-offs between economic activity and environmental protection. A tally of these trade-offs, put into common monetary terms—that is, a cost-benefit analysis (CBA)—has been required for significant regulations (e.g., those having an annual effect on the economy of $100 million or more) by the U.S. government for more than four decades. Ethical debate over the role of CBA is at least as old as the requirement itself, but the practical reality is that it pervades government policy-making. If estimates of environmental impacts and valuation are absent or questionable, the case for environmental protection is weakened….

      Between 2009 and 2016, the U.S. government established an interagency working group to produce improved estimates of the cost associated with carbon dioxide emissions. It made use of the only three models, based on peer-reviewed research, that put together the four key components necessary to value the benefits of reducing climate change: projections of population, economic activity, and emissions; climate models to predict how small perturbations to baseline emissions affect the climate; damage models that translate climate change into impacts measured against the baseline economic activity and population; and a discounting model to translate the future damages associated with current incremental emissions into an appropriate damage value today….The damage component is arguably the most challenging: Information must be combined from numerous studies, covering multiple climate change impacts, spanning a range of disciplines, and often requiring considerable work to make them fit together.
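
Schematically, the discounting component turns a stream of future marginal damages into a present value; one conventional way to write it (an editorial sketch, not the working group’s exact formulation) is

      $$\mathrm{SCC}_t \;=\; \sum_{s=t}^{T} \frac{\Delta D_s}{(1+r)^{s-t}}$$

where $\Delta D_s$ is the additional damage in year $s$ caused by one extra ton of carbon dioxide emitted in year $t$, and $r$ is the discount rate.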

Déjà vu for U.S. Nuclear Waste

[This editorial by Allison Macfarlane and Rod Ewing is in the June 30, 2017, issue of Science.]

      With the arrival of the 115th U.S. Congress, the House of Representatives began hearings on the Nuclear Waste Policy Amendments Act of 2017. The legislation restarts Yucca Mountain as a repository for highly radioactive waste. In 1987, Congress amended the Nuclear Waste Policy Act of 1982, selecting Yucca Mountain, Nevada, as the only site to be studied, expecting the repository to open in 1998. It did not. Over the past 30 years, not much has changed. But going forward—or not—with Yucca Mountain will not address the systemic problems of the U.S. nuclear waste program, and this may well lead to continued failure.

      Spent nuclear fuel from power plants has accumulated at more than 70 sites in 35 states, and highly radioactive waste from defense programs remains at U.S. Department of Energy (DOE) sites. Used fuel is transferred to casks, where it may wait for decades to cool to temperatures required for transport. Clearly, the back end of the nuclear fuel cycle is broken.

      There are major obstacles to the current nuclear waste program. Nuclear facilities, whether for disposal or interim storage, take decades to plan, license, and build. Moreover, sustained opposition to a nuclear facility can prevail, simply because opponents only need to succeed occasionally to derail large, complicated projects. The United States needs a strategy that can persist over decades, not just until the next election.

      Key to any strategy is the creation of a new organization for the management of nuclear waste. The DOE, designated by law to lead the nation on this issue, has failed, in part due to changing political winds. Creating a new organization to manage this problem is not a new recommendation. The 2012 Blue Ribbon Commission on America’s Nuclear Future suggested a new, single-purpose, government-chartered corporation for the task. A series of recent meetings (Reset of U.S. Nuclear Waste Management Strategy and Policy) considered a utility-owned organization to have important advantages. Indeed, four of the most advanced nuclear waste management programs—in Canada, Finland, Sweden, and Switzerland—have placed responsibility for the management and disposal of spent fuel with a single nonprofit organization owned by the nuclear utilities. Consequently, these companies have strong technical and financial incentives to make decisions focused on the final goal—geological disposal.

      Funding a new organization is critically important. Although U.S. ratepayers have paid a fee ($0.001/kWh) for nuclear waste disposal, the U.S. program has not moved forward because congressional appropriations from the Nuclear Waste Fund are subject to statutory (Budget Control Act of 2011) and procedural (congressional budget resolutions) limits, in addition to political ones, that restrict availability of the funds. The Nuclear Waste Fund, now $35 billion, is used instead to offset the federal debt. A utility-owned management organization would not suffer from the vagaries of the political process. Fees could be collected and used by the utility-owned organization.
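
The figures above are roughly self-consistent. Assuming U.S. nuclear plants generate on the order of $8\times10^{11}$ kWh per year (an editorial assumption, not a number from the article):

      $$\$0.001/\text{kWh} \times 8\times10^{11}\ \text{kWh/yr} \;\approx\; \$0.8\ \text{billion per year}$$

which, accumulated over roughly three decades with interest, is consistent with the $35 billion fund cited above.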

      Also essential is trust. Although a new organization would exist within a web of oversight entities (federal regulator, state agencies, independent scientific review, and public interest groups), it could only operate successfully with the trust of all affected parties. The organization must direct a robust science program and manage a major engineering and construction project under intense public scrutiny and engagement over many decades.

      A new U.S. program also should pay attention to the successes of other programs, particularly in Sweden, Finland, Canada, and France. A well-designed process with technical criteria for site selection and for public engagement and approval has been key to their success.

      Without addressing these issues, the U.S. program cannot expect to succeed. Otherwise, in 30 years, Congress will be holding hearings on yet another generation of amendments to the Nuclear Waste Policy Act.

Lavoisier’s Chemistry

[This excerpt is from chapter 10 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Lavoisier’s demonstration that combustion was a chemical process—oxidation, as it could now be called—implied much else, and was for him only a fragment of a much wider vision, the revolution in chemistry that he had envisaged. Roasting metals in closed retorts, showing that there was no ghostly weight gain from “particles of fire” or weight loss from loss of phlogiston, had demonstrated to him that there was neither creation nor loss of matter in such processes. This principle of conservation, moreover, applied not only to the total mass of products and reactants, but to each of the individual elements involved. When one fermented sugar with yeast and water in a closed vessel to yield alcohol, as in one of his experiments, the total amounts of carbon and hydrogen and oxygen always stayed the same. They might be reaggregated chemically, but their amounts were unchanged.

      The conservation of mass implied a constancy of composition and decomposition. Thus Lavoisier was led to define an element as a material that could not be decomposed by existing means, and this enabled him (with de Morveau and others) to draw up a list of genuine elements—thirty-three distinct, undecomposable, elementary substances, replacing the four Elements of the ancients. This in turn allowed Lavoisier to draw up a “balance sheet,” as he called it, a precise accounting of each element in a reaction.

      The language of chemistry, Lavoisier now felt, had to be transformed to go with his new theory, and he undertook a revolution of nomenclature, too, replacing the old, picturesque but uninformative terms—like butter of antimony, jovial bezoar, blue vitriol, sugar of lead, fuming liquor of Libavius, flowers of zinc—with precise, analytic, self-explanatory ones. If an element was compounded with nitrogen, phosphorus, or sulfur, it became a nitride, a phosphide, a sulfide. If acids were formed, through the addition of oxygen, one might speak of nitric acid, phosphoric acid, sulfuric acid; and of the salts of these as nitrates, phosphates, and sulfates. If smaller amounts of oxygen were present, one might speak of nitrites or phosphites instead of nitrates and phosphates, and so on. Every substance, elementary or compound, would have its true name, denoting its composition and chemical character, and such names, manipulated as in an algebra, would instantly indicate how they might interact or behave in different circumstances. (Although I was keenly conscious of the advantages of the new names, I missed the old ones, too, for they had a poetry, a strong feeling of their sensory qualities or hermetic antecedents, which was entirely missing from the new, systematic and scentless chemical names.)

      Lavoisier did not provide symbols for the elements, nor did he use chemical equations, but he provided the essential background to these, and I was thrilled by his notion of a balance sheet, this algebra of reality, for chemical reactions. It was like seeing language, or music, written down for the first time. Given this algebraic language, one might not need an actual afternoon in the lab—one could in effect do chemistry on a blackboard, or in one’s head.

      All of Lavoisier’s enterprises—the algebraic language, the nomenclature, the conservation of mass, the definition of an element, the formation of a true theory of combustion—were organically interlinked, formed a single marvelous structure, a revolutionary refounding of chemistry such as he had dreamed of, so ambitiously, in 1773. The path to his revolution was not easy or direct, even though he presents it as obvious in the Elements of Chemistry; it required fifteen years of genius time, fighting his way through labyrinths of presupposition, fighting his own blindnesses as he fought everyone else’s.

      There had been violent disputes and conflicts during the years in which Lavoisier was slowly gathering his ammunition, but when the Elements was finally published—in 1789, just three months before the French Revolution—it took the scientific world by storm. It was an architecture of thought of an entirely new sort, comparable only to Newton’s Principia. There were a few holdouts—Cavendish and Priestley were the most eminent of these—but by 1791 Lavoisier could say, “all young chemists adopt the theory and from that I conclude that the revolution in chemistry has come to pass.”

      Three years later Lavoisier’s life was ended, at the height of his powers, on the guillotine. The great mathematician Lagrange, lamenting the death of his colleague and friend, said: “It required only a moment to sever his head, and one hundred years, perhaps, may not suffice to produce another like it.”

Antoine Lavoisier’s Scientific Achievement

[This excerpt is from chapter 10 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      In his biography of Lavoisier, Douglas McKie includes an exhaustive list of Lavoisier's scientific activities which paints a vivid picture of his times, no less than his own remarkable range of mind: “Lavoisier took part,” McKie writes,

      …in the preparation of reports on the water supply of Paris, prisons, mesmerism, the adulteration of cider, the site of the public abattoirs, the newly-invented “aerostatic machines of Montgolfier” (balloons), bleaching, tables of specific gravity, hydrometers, the theory of colors, lamps, meteorites, smokeless grates, tapestry making, the engraving of coats-of-arms, paper, fossils, an invalid chair, a water-driven bellows, tartar, sulfur springs, the cultivation of cabbage and rape seed and the oils extracted thence, a tobacco grater, the working of coal mines, white soap, the decomposition of nitre, the manufacture of starch…the storage of fresh water on ships, fixed air, a reported occurrence of oil in spring water…the removal of oil and grease from silks and woollens, the preparation of nitrous ether by distillation, ethers, a reverberatory hearth, a new ink and inkpot to which it was only necessary to add water in order to maintain the supply of ink…the estimation of alkali in mineral waters, a powder magazine for the Paris Arsenal, the mineralogy of the Pyrenees, wheat and flour, cesspools and the air arising from them, the alleged occurrence of gold in the ashes of plants, arsenic acid, the parting of gold and silver, the base of Epsom salt, the winding of silk, the solution of tin used in dyeing, volcanoes, putrefaction, fire-extinguishing liquids, alloys, the rusting of iron, a proposal to use “inflammable air” in a public firework display (this at the request of the police), coal measures, dephlogisticated marine acid, lamp wicks, the natural history of Corsica, the mephitis of the Paris wells, the alleged solution of gold in nitric acid, the hygrometric properties of soda, the iron and salt works of the Pyrenees, argentiferous lead mines, a new kind of barrel, the manufacture of plate glass, fuels, the conversion of peat into charcoal, the construction of corn mills, the manufacture of sugar, the extraordinary effects of a thunder bolt, the retting of flax, the mineral deposits of France, plated cooking vessels, the formation of water, the coinage, barometers, the respiration of insects, the nutrition of vegetables, the proportion of the components in chemical compounds, vegetation, and many other subjects, far too many to be described here, even in the briefest terms.

Robert Hooke – Robert Boyle’s Assistant

[This excerpt is from chapter 10 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Hooke himself was to become a marvel of scientific energy and ingenuity, abetted by his mechanical genius and mathematical ability. He kept voluminous, minutely detailed journals and diaries, which provide an incomparable picture not only of his own ceaseless mental activity, but of the whole intellectual atmosphere of seventeenth-century science. In his Micrographia, Hooke illustrated his compound microscope, along with drawings of the intricate, never-before-seen structures of insects and other creatures (including a famous picture of a Brobdingnagian louse, attached to a human hair as thick as a barge pole). He judged the frequency of flies’ wingbeats by their musical pitch. He interpreted fossils, for the first time, as the relics and impressions of extinct animals. He illustrated his designs for a wind gauge, a thermometer, a hygrometer, a barometer. And he showed an intellectual audacity sometimes even greater than Boyle's, as with his understanding of combustion, which, he said, “is made by a substance inherent, and mixt with the Air.” He identified this with “that property in the Air which it loses in the Lungs.” This notion of a substance present in limited amounts in the air that is required for and gets used up in combustion and respiration is far closer to the concept of a chemically active gas than Boyle’s theory of igneous particles.

      Many of Hooke’s ideas were almost completely ignored and forgotten, so that one scholar observed in 1803, “I do not know a more unaccountable thing in the history of science than the total oblivion of this theory of Dr. Hooke, so clearly expressed, and so likely to catch attention.” One reason for this oblivion was the implacable enmity of Newton, who developed such a hatred of Hooke that he would not consent to assume the presidency of the Royal Society while Hooke was still alive, and did all he could to extinguish Hooke's reputation. But deeper than this is perhaps what Gunther Stent calls “prematurity” in science, that many of Hooke’s ideas (and especially those on combustion) were so radical as to be unassimilable, even unintelligible, in the accepted thinking of his time.

Robert Boyle

[This excerpt is from chapter 10 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Chemistry as a true science, I read, made its first emergence with the work of Robert Boyle in the middle of the seventeenth century. Twenty years Newton’s senior, Boyle was born at a time when the practice of alchemy still held sway, and he still maintained a variety of alchemical beliefs and practices, side by side with his scientific ones. He believed that gold could be created, and that he had succeeded in creating it (Newton, also an alchemist, advised him to keep silent about this). He was a man of immense curiosity (of “holy curiosity,” in Einstein’s phrase), for all the wonders of nature, Boyle felt, proclaimed the glory of God, and this led him to examine a huge range of phenomena.

      He examined crystals and their structure, and was the first to discover their cleavage planes. He explored color, and wrote a book on this which influenced Newton. He devised the first chemical indicator, a paper soaked with syrup of violets which would turn red in the presence of acid fluids, green with alkaline ones. He wrote the first book in English on electricity. He prepared hydrogen, without realizing it, by putting iron nails in sulfuric acid. He found that although most fluids contracted when frozen, water expanded. He showed that a gas (later realized to be carbon dioxide) was evolved when he poured vinegar on powdered coral, and that flies would die if kept in this “artificial air.” He investigated the properties of blood and was interested in the possibility of blood transfusion. He experimented with the perception of odors and tastes. He was the first to describe semipermeable membranes. He provided the first case history of acquired achromatopsia, a total loss of color vision following a brain infection.

      All these investigations and many others he described in language of great plainness and clarity, utterly different from the arcane and enigmatic language of the alchemists. Anyone could read him and repeat his experiments; he stood for the openness of science, as opposed to the closed, hermetic secrecy of alchemy.

      Although his interests were universal, chemistry seemed to hold a very special appeal for him (even as a youth he called his own chemical laboratory “a kind of Elysium”). He wished, above all, to understand the nature of matter, and his most famous book, The Sceptical Chymist, was written to debunk the mystical doctrine of the Four Elements, and to unite the enormous, centuries-old empirical knowledge of alchemy and pharmacy with the new, enlightened rationality of his age.

Malodorous Chemicals

[This excerpt is from chapter 8 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      The bad smells, the stenches, always seemed to come from compounds containing sulfur (the smells of garlic and onion were simple organic sulfides, as closely related chemically as they were botanically), and these reached their climax in the sulfuretted alcohols, the mercaptans. The smell of skunks was due to butyl mercaptan, I read—this was pleasant, refreshing, when very dilute, but appalling, overwhelming, at close quarters. (I was delighted, when I read Antic Hay a few years later, to find that Aldous Huxley had named one of his less delectable characters Mercaptan.)

      Thinking of all the malodorous sulfur compounds and the atrocious smell of selenium and tellurium compounds, I decided that these three elements formed an olfactory as well as a chemical category, and thought of them thereafter as the “stinkogens.”

      I had smelled a bit of hydrogen sulfide in Uncle Dave's lab—it smelled of rotten eggs and farts and (I was told) volcanoes. A simple way of making it was to pour dilute hydrochloric acid on ferrous sulfide. (The ferrous sulfide, a great chunky mass of it, I made myself by heating iron and sulfur together till they glowed and combined.) The ferrous sulfide bubbled when I poured hydrochloric acid on it, and instantly emitted a huge quantity of stinking, choking hydrogen sulfide. I threw open the doors into the garden and staggered out, feeling very queer and ill, remembering how poisonous the gas was. Meanwhile, the infernal sulfide (I had made a lot of it) was still giving off clouds of toxic gas, and this soon permeated the house. My parents were, by and large, amazingly tolerant of my experiments, but they insisted, at this point, on having a fume cupboard installed and on my using, for such experiments, less generous quantities of reagents.

      When the air had cleared, morally and physically, and the fume cupboard had been installed, I decided to make other gases, simple compounds of hydrogen with other elements besides sulfur. Knowing that selenium and tellurium were closely akin to sulfur, in the same chemical group, I employed the same basic formula: compounding the selenium or tellurium with iron, and then treating the ferrous selenide or ferrous telluride with acid. If the smell of hydrogen sulfide was bad, that of hydrogen selenide was a hundred times worse—an indescribably horrible, disgusting smell that caused me to choke and tear, and made me think of putrefying radishes or cabbage (I had a fierce hatred of cabbage and brussels sprouts at this time, for, boiled and overboiled, they had been staples at Braefield).
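
The reactions behind the procedure Sacks describes, written in standard form for both the sulfide and its selenium analogue:

      $$\mathrm{Fe + S \rightarrow FeS} \qquad \mathrm{FeS + 2\,HCl \rightarrow FeCl_2 + H_2S\uparrow}$$
      $$\mathrm{Fe + Se \rightarrow FeSe} \qquad \mathrm{FeSe + 2\,HCl \rightarrow FeCl_2 + H_2Se\uparrow}$$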

      Hydrogen selenide, I decided, was perhaps the worst smell in the world. But hydrogen telluride came close, was also a smell from hell. An up-to-date hell, I decided, would have not just rivers of fiery brimstone, but lakes of boiling selenium and tellurium, too.

Risks of Chemistry

[This excerpt is from chapter 8 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      Chemical exploration, chemical discovery, was all the more romantic for its dangers. I felt a certain boyish glee in playing with these dangerous substances, and I was struck, in my reading, by the range of accidents that had befallen the pioneers. Few naturalists had been devoured by wild animals or stung to death by noxious plants or insects; few physicists had lost their eyesight gazing at the heavens, or broken a leg on an inclined plane; but many chemists had lost their eyes, limbs, and even their lives, usually through producing inadvertent explosions or toxins. All the early investigators of phosphorus had burned themselves severely. Bunsen, investigating cacodyl cyanide, lost his right eye in an explosion, and very nearly his life. Several later experimenters, like Moissan, trying to make diamond from graphite in intensely heated, high-pressure “bombs,” threatened to blow themselves and their fellow workers to kingdom come. Humphry Davy, one of my particular heroes, had been nearly asphyxiated by nitrous oxide, poisoned himself with nitrogen peroxide, and severely inflamed his lungs with hydrofluoric acid. Davy also experimented with the first “high” explosive, nitrogen trichloride, which had cost many people fingers and eyes. He discovered several new ways of making the combination of nitrogen and chlorine, and caused a violent explosion on one occasion while he was visiting a friend. Davy himself was partially blinded, and did not recover fully for another four months. (We were not told what damage was done to his friend's house.)

      The Discovery of the Elements devoted an entire section to “The Fluorine Martyrs.” Although elemental chlorine had been isolated from hydrochloric acid in the 1770s, its far more active cousin, fluorine, was not so easily obtained. All the early experimenters, I read, “suffered the frightful torture of hydrofluoric acid poisoning,” and at least two of them died in the process. Fluorine was only isolated in 1886, after almost a century of dangerous trying.

The Unaffordable Urban Paradise

[This excerpt is from an article by Richard Florida in the July/August 2017 issue of Technology Review.]

      …Urban areas provide the diversity, creative energy, cultural richness, vibrant street life, and openness to new ideas that attract startup talent. Their industrial and warehouse buildings also provide employees with flexible and reconfigurable work spaces. Cities and startups are a natural match.

      For years, economists, mayors, and urbanists believed that high-tech development was an unalloyed good thing, and that more high-tech startups and more venture capital investment would “lift all boats.” But the reality is that high-tech development has ushered in a new phase of what I call winner-take-all urbanism, where a relatively small number of metro areas, and a small number of neighborhoods within them, capture most of the benefits.

      Middle-class neighborhoods have been hollowed out in the process. In 1970, roughly two-thirds of Americans lived in middle-class neighborhoods; today less than 40 percent of us do. The middle-class share of the population shrank in a whopping 203 out of 229 U.S. metro areas between 2000 and 2014. And places where the middle class is smallest include such superstar cities and tech hubs as New York, San Francisco, Boston, Los Angeles, Houston, and Washington, D.C.

      Despite all this, it wouldn’t make any sense to put the brakes on high-tech development. Doing so would only cut off a huge source of innovation and economic development. High-tech industry remains a major driver of economic progress and jobs, and it provides much-needed tax revenues that cities can use to address and mitigate the problems that come with financial success.

      But if high-tech development causes problems, and stopping it doesn’t solve those problems, what comes next?

      High-tech companies should—out of self-interest, if for no other reason—embrace a shift to a kind of urbanism that allows many more people, especially blue-collar and service workers, to share in the gains of urban development. The superstar cities they’ve helped create cannot survive when nurses, EMTs, teachers, police officers, and other service providers can no longer afford to live in them.

      Here’s how they can do it. First, they can work with cities to help build more housing, which would reduce housing prices. They can support efforts to liberalize outdated zoning and building codes to enable more housing construction, and invest in the development of more affordable housing for service and blue-collar workers.

      Second, they can work for, support, and invest in the development of more and better public transit to connect outlying areas to booming cores and tech clusters where employment is—and to spur and generate denser real estate and business development around those stops and stations.

      Third, they can engage the wider business community and government to upgrade the jobs of low-wage service workers—who now make up more than 45 percent of the national workforce—into higher-paying, family-supporting work.

      This last idea might seem outlandish, but it's analogous to how the U.S. turned low-paying manufacturing jobs of the early 20th century into middle-class jobs in the 1950s and 1960s….

East African Turmoil Imperils Giraffes

[These excerpts are from an article by Jane Qiu in the June 23, 2017, issue of Science.]

      In recent months, drought and overgrazing in northern Kenya have sent thousands of herders and their livestock into national parks and other protected areas, intensifying tensions over land and grazing. Violence has taken the lives of several rangers, and a surge in wildlife killings is devastating populations of one of East Africa’s most majestic beasts: giraffes. “This affects all wildlife, but giraffes may be particularly hard hit,” says Fred Bercovitch, a zoologist….

      For hunters, “giraffes are an easy target,” he notes. And as scientists have recognized only recently, there are multiple giraffe species, and several populations are already in serious decline. In the past 30 years, populations of two East African varieties, the Nubian and reticulated giraffes, have plunged by 97% and 78%, respectively, and the International Union for the Conservation of Nature may soon declare them critically endangered….

      The biggest threats to the animals are rapid human population growth and the influx of herders, along with refugees fleeing regional conflicts. In the refugee camps bordering Kenya and Somalia, for instance, bush meat, including giraffes, is an important source of food for half a million destitute people.

      A traditional predator, the lion, may also be taking a growing toll….

      Adding to the pressure is exponential growth in mining and infrastructure development—highways, railways, oil pipelines, and industrial compounds—which often encroach on key giraffe habitats, including those in national parks. The newly opened $3.2 billion Mombasa-Nairobi railway, for instance, cuts through Kenya’s Tsavo National Park….

Table Salt

[This excerpt is from chapter 7 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      It was from Griffin that I first gained a clear idea of what was meant by “acids” and “alkalis” and how they combined to produce “salts.” Uncle Dave demonstrated the opposition of acids and bases by measuring out precise quantities of hydrochloric acid and caustic soda, which he mixed in a beaker. The mixture became extremely hot, but when it cooled, he said, “Now try it, drink it.” Drink it—was he mad? But I did so, and tasted nothing but salt. “You see,” he explained, “an acid and a base come together, and they neutralize each other; they combine and make a salt.”

      Could this miracle happen in reverse, I asked? Could salty water be made to produce the acid and the base all over again? “No,” Uncle said, “that would require too much energy. You saw how hot it got when the acid and base reacted—the same amount of heat would be needed to reverse the reaction. And salt,” he added, “is very stable. The sodium and chloride hold each other tightly, and no ordinary chemical process will break them apart. To break them apart you have to use an electric current.”

      He showed me this more dramatically one day by putting a piece of sodium in a jar full of chlorine. There was a violent conflagration, the sodium caught fire and burned, weirdly, in the yellowish green chlorine—but when it was over, the result was nothing more than common salt. I had a heightened respect for salt, I think, after that, having seen the violent opposites that came together in its making and the strength of the energies, the elemental forces, that were now locked in the compound.
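
In equations, the three transformations described in this excerpt take the standard textbook forms:

      $$\mathrm{HCl + NaOH \;\rightarrow\; NaCl + H_2O + heat} \quad \text{(neutralization)}$$
      $$\mathrm{2\,Na + Cl_2 \;\rightarrow\; 2\,NaCl} \quad \text{(direct combination)}$$
      $$\mathrm{2\,NaCl \;\xrightarrow{\;electrolysis\;}\; 2\,Na + Cl_2} \quad \text{(reversal, e.g. of the molten salt)}$$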


Chemistry of Minerals

[This excerpt is from chapter 6 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      The eighteenth century, Uncle told me, had been a grand time for the discovery and isolation of new metals (not only tungsten, but a dozen others, too), and the greatest challenge to eighteenth-century chemists was how to separate these new metals from their ores. This is how chemistry, real chemistry, got on its feet, investigating countless different minerals, analyzing them, breaking them down, to see what they contained. Real chemical analysis—seeing what minerals would react with, or how they behaved when heated or dissolved—of course required a laboratory, but there were elementary observations one could do almost anywhere. One could weigh a mineral in one's hand, estimate its density, observe its luster, the color of its streak on a porcelain plate. Hardness varied hugely, and one could easily get a rough approximation—talc and gypsum one could scratch with a fingernail; calcite with a coin; fluorite and apatite with a steel knife; and orthoclase with a steel file. Quartz would scratch glass, and corundum would scratch anything but diamond.
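
A small sketch of the scratch-test logic in this passage; the Mohs thresholds are approximate editorial values (fingernail about 2.5, coin about 3.5, steel knife about 5.5, steel file about 6.5), not numbers from the text:

      # Editorial sketch: bracket a mineral's Mohs hardness by which
      # common tools will scratch it, per the field test described above.
      def bracket_hardness(fingernail, coin, knife, file_):
          """Return a rough Mohs bracket from scratch-test results."""
          if fingernail:
              return "2.5 or less (talc, gypsum)"
          if coin:
              return "about 2.5-3.5 (calcite)"
          if knife:
              return "about 3.5-5.5 (fluorite, apatite)"
          if file_:
              return "about 5.5-6.5 (orthoclase)"
          return "above 6.5 (quartz, corundum, diamond)"

      print(bracket_hardness(False, False, True, True))  # about 3.5-5.5 (fluorite, apatite)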

Burning Aluminum

[This excerpt is from chapter 4 of Uncle Tungsten, an autobiography by Oliver Sacks.]

      On one visit, Uncle Dave showed me a large bar of aluminum. After the dense platinum metals, I was amazed at how light it was, scarcely heavier than a piece of wood. “I’ll show you something interesting,” he said. He took a smaller lump of aluminum, with a smooth, shiny surface, and smeared it with mercury. All of a sudden—it was like some terrible disease—the surface broke down, and a white substance like a fungus rapidly grew out of it, until it was a quarter of an inch high, then half an inch high, and it kept growing and growing until the aluminum was completely eaten up. “You’ve seen iron rust—oxidizing, combining with the oxygen in the air,” Uncle said. “But here, with the aluminum, it’s a million times faster. That big bar is still quite shiny, because it’s covered by a fine layer of oxide, and that protects it from further change. But rubbing it with mercury destroys the surface layer, so then the aluminum has no protection, and it combines with the oxygen in seconds.”
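
In outline, the chemistry Uncle Dave demonstrates: the mercury amalgamates the surface so that no coherent oxide film can form, and the bare metal then oxidizes continuously:

      $$\mathrm{4\,Al + 3\,O_2 \;\rightarrow\; 2\,Al_2O_3}$$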

      I found this magical, astounding, but also a little frightening—to see a bright and shiny metal reduced so quickly to a crumbling mass of oxide….


Waning Woods

[This brief article by Jason G. Goldman is in the July 2017 issue of Scientific American.]

      We humans have left our mark on the entire planet; not a single ecosystem remains completely untouched. But some landscapes have been affected less than others. And the extent to which the earth can provide habitats for plants and animals, sequester atmospheric carbon and regulate the flow of freshwater depends on the vastness of the least affected regions. These tracts, where human influence is still too weak to be easily detected by satellite, are prime targets for conservation. Using satellite imagery, a group of researchers mapped the global decline between 2000 and 2013 of such “intact forest landscapes” (IFLs), defined as forested or naturally treeless ecosystems of 500 square kilometers or more. Around half of the area of the world’s IFLs is in the tropics, and a third can be found in the boreal forests of North America and Eurasia. Logging, agriculture, mining and wildfires contributed to the drop, as reported in January in Science Advances.

      The bright side? Landscapes under formal protection, such as national parks, were more likely to remain intact.

Probiotics Are No Panacea

[These excerpts are from an article by Ferris Jabr in the July 2017 issue of Scientific American.]

      Walk into any grocery store, and you will likely find more than a few “probiotic” products brimming with so-called beneficial bacteria that are supposed to treat everything from constipation to obesity to depression. In addition to foods traditionally prepared with live bacterial cultures (such as yogurt and other fermented dairy products), consumers can now purchase probiotic capsules and pills, fruit juices, cereals, sausages, cookies, candy, granola bars and pet food. Indeed, the popularity of probiotics has grown so much in recent years that manufacturers have even added the microorganisms to cosmetics and mattresses.

      A closer look at the science underlying microbe-based treatments, however, shows that most of the health claims for probiotics are pure hype. The majority of studies to date have failed to reveal any benefits in individuals who are already healthy. The bacteria seem to help only those people suffering from a few specific intestinal disorders….

      The popular frenzy surrounding probiotics is fueled in large part by surging scientific and public interest in the human microbiome: the overlapping ecosystems of bacteria and other microorganisms found throughout the body. The human gastrointestinal system contains about 39 trillion bacteria, according to the latest estimate, most of which reside in the large intestine. In the past 15 years researchers have established that many of these commensal microbes are essential for health. Collectively, they crowd out harmful microbial invaders, break down fibrous foods into more digestible components and produce vitamins such as K and B12.

      The idea that consuming probiotics can boost the ability of already well-functioning native bacteria to promote general health is dubious for a couple of reasons. Manufacturers of probiotics often select specific bacterial strains for their products because they know how to grow them in large numbers, not because they are adapted to the human gut or known to improve health. The particular strains of Bifidobacterium or Lactobacillus that are typically found in many yogurts and pills may not be the same kind that can survive the highly acidic environment of the human stomach and from there colonize the gut.

      Even if some of the bacteria in a probiotic managed to survive and propagate in the intestine, there would likely be far too few of them to dramatically alter the overall composition of one’s internal ecosystem. Whereas the human gut contains tens of trillions of bacteria, there are only between 100 million and a few hundred billion bacteria in a typical serving of yogurt or a microbe-filled pill. Last year a team of scientists at the University of Copenhagen published a review of seven randomized, placebo-controlled trials (the most scientifically rigorous types of studies researchers know how to conduct) investigating whether probiotic supplements—including biscuits, milk-based drinks and capsules—change the diversity of bacteria in fecal samples. Only one study—of 34 healthy volunteers—found a statistically significant change, and there was no indication that it provided a clinical benefit….
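
The arithmetic makes the point starkly. Taking the figures above (roughly $4\times10^{13}$ gut bacteria versus at most a few hundred billion per serving):

      $$\frac{3\times10^{11}\ \text{bacteria per serving}}{4\times10^{13}\ \text{gut bacteria}} \;\approx\; 0.008$$

so even the most generous serving amounts to well under one percent of the resident population.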

      Despite a growing sense that probiotics do not offer anything of substance to individuals who are already healthy, researchers have documented some benefits for people with certain conditions.

      In the past five years, for example, several combined analyses of dozens of studies have concluded that probiotics may help prevent some common side effects of treatment with antibiotics. Whenever physicians prescribe these medications, they know they stand a good chance of annihilating entire communities of beneficial bacteria in the intestine, along with whatever problem-causing microbes they are trying to dispel. Normally the body just needs to grab a few bacteria from the environment to reestablish a healthy microbiome. But sometimes the emptied niches get filled up with harmful bacteria that secrete toxins, causing inflammation in the intestine and triggering diarrhea. Adding yogurt or other probiotics—especially the kinds that contain Lactobacillus—during and after a course of antibiotics seems to decrease the chances of subsequently developing these opportunistic infections….

      Probiotics also seem to ameliorate irritable bowel syndrome, a chronic disease characterized by abdominal pain, bloating, and frequent diarrhea or constipation (or a mix of the two)…

      …Put another way, treatments for microbe-related disorders are most successful when they work in tandem with the human body’s many microscopic citizens, not just against them.

Tapping the Trash

[These excerpts are from an article by Michael E. Webber in the July 2017 issue of Scientific American.]

      On December 20, 2015, a mountain of urban refuse collapsed in Shenzhen, China, killing at least 69 people and destroying dozens of buildings. The disaster brought to life the towers of waste depicted in the 2008 dystopian children’s movie WALL-E, which portrayed the horrible yet real idea that our trash could pile up uncontrollably, squeezing us out of our habitat. A powerful way to transform an existing city into a sustainable one—a city that preserves the earth rather than ruining it—is to reduce all the waste streams and then use what remains as a resource. Waste from one process becomes raw material for another.

      Many people continue to migrate to urban centers worldwide, which puts cities in a prime position to solve global resource problems. Mayors are taking more responsibility for designing solutions simply because they have to, especially in countries where national enthusiasm for tackling environmental issues has cooled off. International climate agreements forged in Paris in December 2015 also acknowledged a central role for cities. More than 1,000 mayors flocked to the French capital during the talks to share their pledges to reduce emissions. Changing building codes and investing in energy efficiency are just two starting points that many city leaders said they could initiate much more quickly than national governments.

      It makes sense for cities to step up. Some of them—New York City, Mexico City, Beijing—house more people than entire countries do. And urban landscapes are where the challenges of managing our lives come crashing together in concentrated form. Cities can lead because they can quickly scale up solutions and because they are living laboratories for improving quality of life without using up the earth’s resources, polluting its air and water, and harming human health in the process.

      Cities are rife with wasted energy, wasted carbon dioxide, wasted food, wasted water, wasted space and wasted time. Reducing each waste stream and managing it as a resource—rather than a cost—can solve multiple problems simultaneously, creating a more sustainable future for billions of people….

      One obvious place to start reducing waste is leaky water pipes. A staggering 10 to 40 percent of a city’s water is typically lost in pipes. And because the municipality has cleaned that water and powered pumps to move it, the leaks throw away energy, too.

      Energy consumption itself is incredibly wasteful. More than half the energy a city consumes is released as waste heat from smokestacks, tailpipes, and the backs of heaters, air conditioners and appliances. Making all that equipment more efficient reduces how much energy we need to produce, distribute and clean up.

      Refuse is another waste stream to consolidate. The U.S. generates more than four pounds of solid waste per person every day. Despite efforts to compost, recycle or incinerate some of it, a little more than half is still dumped in landfills….

      Once cities reduce waste streams, they should use waste from one urban process as a resource for another. This arrangement is rare, but compelling projects are rising. Modern waste-to-energy systems, such as one in Zurich, burn trash cleanly, and some, including one in Palm Beach, Fla., recover more than 95 percent of the metals in the gritty ash that is left by the combustion….

      Municipalities also need to help residents become smarter citizens because each individual makes resource decisions every time he or she buys a product or flips a switch. Access to education and data will be paramount. Connecting those citizens also requires collaboration and neighborly interactions: parks, playgrounds, shared spaces, schools, and religious and community centers—all of which were central tenets of centuries-old designs for thriving cities. The more modern and smart our cities become, the more we might need these old-world elements to keep us together.

Raise Alcohol Taxes, Reduce Violence

[These excerpts are from an article by Kunmi Sobowale in the July 2017 issue of Scientific American.]

      …alcohol is a common instigator of violence against others, as well as harm to oneself.

      This link between alcohol and violence has been shown in multiple countries. In 1998 the Bureau of Justice Statistics reported that in the U.S., two thirds of violent attacks on intimate partners occurred in the context of alcohol abuse. Drinking increases the perpetration of physical and sexual violence. Alcohol use also reportedly increases the severity of violent assaults. Although drinking alcohol does not always lead to violence and is not a prerequisite for violence to occur, the link between alcohol and violence is undeniable.

      The victims are overwhelmingly women. But children are also harmed. Parents who drink heavily are more likely to physically abuse their child. Youngsters who live in neighborhoods with more bars or liquor stores are more likely to be maltreated….

      Violent, drunken men fall victim, too. They are as likely to die from alcohol-related firearm incidents as from drunk-driving accidents. For all the effort put into preventing drunk driving, we have utterly failed to appreciate that being intoxicated while in possession of a firearm is an equally dangerous situation. Nevertheless, many states permit customers to carry firearms into establishments that serve alcohol.

      …Suicide attempts are often impulsive acts. When sober, many patients regret these efforts to take their own life. Unfortunately, alcohol intoxication increases the risk that people will attempt suicide with a firearm, and because guns are the most lethal suicide method in the U.S., it is often too late for regrets….

      Compared with other approaches to violence prevention, higher taxes on alcohol seem more politically feasible—certainly they will get more support than gun-control measures. Taxes are more effective than most other alcohol-consumption interventions, and they garner revenue for local governments. States can use that money to support programs that aid victims of violence. Taxation also drives down youth drinking, which, in turn, lowers the chance that young people will grow into heavy drinkers. One common argument against taxing alcohol is that it disproportionately affects poor people—but a recent study in the journal Preventing Chronic Disease suggests that is not true. Moreover, in general, evidence suggests that less alcohol access does not lead people to use other drugs.

      If policy makers are serious about violence prevention—to say nothing of reducing car and other accidents—they need to reduce alcohol use. Taxation is a simple and powerful way to do so.
