Breakthrough Dialogue 2017: Biography and Meal Form

Michael Shellenberger to Speak at 18th Annual POWER Conference on March 22

Is Paris Good or Bad?

And here’s me agreeing with Josh Barro, who says Paris is toothless.

Do I contradict myself? I don’t think so.

Barro was reacting to this week’s news that Syria, following fellow holdouts Iran, Nicaragua, and Nigeria, has signed onto the Paris Agreement. President Trump announced his intention to leave the agreement earlier this year, which would make the United States the only country in the world not formally committed to Paris’s climate goals.

What’s attractive about the Paris Agreement is the same thing some find disappointing about it: it’s pragmatic. Signatory countries pledge to achieve emissions goals equivalent to or slightly better than business-as-usual: in other words, countries pledge to do what’s possible, which may not be inspiring but is also the only known form of effective policy.

As Brad Plumer and Nadja Popovich showed at the New York Times this week, the Paris targets don’t bring us much closer to the increasingly unlikely 2 degree climate target. The flip side of that wet blanket is that Paris is a framework upon which to build future emissions progress.

And climate policy, such as it is, won’t usually be the next or most important step. Remember that emissions have tended to decline more slowly after major climate policies have been enacted than before. Technological change and macroeconomic trends will have far greater impact on emissions than explicit climate policy, so advocates of steeper emissions reductions should focus on technology and macroeconomics (for instance, the speed and nature of the transition to service economies).

At the time of its original signing, I wrote that the Paris Agreement represented the culmination of decades of progress towards a pragmatic, bottom-up, technology-based climate framework. While pragmatism can’t always withstand black swans like President Trump, that remains the case. But even if some future president recommits the United States to Paris, as I think would be preferable, global climate agreements will always be a reflection of technological reality, not a driver of it.

Sustainable Gastronomy

Senator Lisa Murkowski Announced as Breakthrough’s 2017 T.O.P. Award Recipient


TOP AWARD | TALENT. OPTIMISM. PRAGMATISM.

 

We are proud to announce that Breakthrough’s 2017 TOP Award will go to Senator Lisa Murkowski.

Breakthrough’s vision is a future that is good for people and for nature. Such a future is only possible if our leaders have the talent, optimism, and pragmatism to create positive and lasting change in the world. In that spirit, the TOP Award recognizes public servants, entrepreneurs, activists, and thought leaders who exemplify these qualities.

Recipients of the TOP Award are chosen in recognition of their commitment to work across the aisle, push back against tribal ideology and litmus tests, and find ways to do good for everyone.

Senator Murkowski has been an exemplary public servant, passionately committed to representing Alaska in the U.S. Senate. She is chair of the Senate Energy and Natural Resources Committee and a member of the Senate Appropriations, Indian Affairs, and Health, Education, Labor and Pensions committees.

In particular, we wish to honor Senator Murkowski’s commitment to energy innovation. From her support for innovation programs like the Advanced Research Projects Agency–Energy (ARPA-E) to her leadership on reforming the nuclear innovation system in the United States, Senator Murkowski is one of the most important and influential voices of her generation in the effort to transition to cheaper, cleaner, more modern energy technologies. Under both Republican and Democratic leadership, she has worked across the aisle and advocated for increased and improved federal investments in the next generation of U.S. energy technology and entrepreneurship.

Untapped Potential

What Intersectionality Tells Us about That Gender Problem

Jennifer Bernstein’s essay raises the important critique that environmentalism’s knee-jerk reaction to modernization echoes the tendency to naturalize an essential woman whose biology is connected to the nonhuman world. And yet, while it is true that many women throughout the world have experienced progress in terms of their roles in the workforce, home, and daily life, intersectional feminism tells us that we must be cautious in exchanging one essentialized category for another. In this response, we suggest thinking critically about the ways we categorize experiences, practices, and technologies, and we advise against overdetermining women one way or the other. 

Using cooking as an example, Bernstein takes to task the Michael Pollan/Mark Bittman discourse around getting back to the kitchen as a way to democratize our household tasks and redefine our relationship to each other and the food we eat. The Pollan boys make a case that is pretty compelling on its surface, but Bernstein highlights the class- and gender-based assumptions behind who can actually make the transition from food produced by modern, industrial systems to, say, other approaches to cooking and more “natural” foods (that is, the ways authorities like Pollan and Bittman would tell us how and what to eat; Julie Guthman’s 2007 commentary on teaching food strikes a chord here).

In response, Bernstein turns to the very technology that environmentalists love to critique, which she argues can truly democratize the roles and responsibilities within (and, by extension, outside of) the household. The potentially liberating role that technology plays, Bernstein points out, can be especially valuable for more marginalized and diverse families, for whom “natural” foods are still out of reach. To critique these technologies wholesale reinforces the systemic inequities already in place, hidden behind a veil of healthy eating. So, Bernstein asks, what would environmentalism look like if it took these realities seriously? Well, it would support technology that benefits working-class women and families, rather than criticizing them for using it.

Bernstein certainly presents an important critique of the naturalizing character of our discourses about the environment. To say that women should “get back” to any kind of cooking reinforces a biologically driven social Darwinism that still assumes women are “naturally” connected to nonhuman nature (and as a consequence, are more caring, etc.). And if we follow this to its logical conclusion, this determinist line of thinking suggests that patriarchy is a natural part of that order too, and we must accommodate it rather than working toward smashing it. In other words, boys will be boys, and the only thing we can do is remind them that their behavior is bad, as opposed to examining the structures that reinforce and maintain inequities.

We suggest it is not only society’s discourses and choices that need to be reexamined; systemic inequality and sexism need to be deconstructed too. The story is incomplete if we just talk about specific technologies that women choose without simultaneously talking about institutions and historical context. In other words, we have the capacity to simultaneously respect women’s choices—both those who choose to use so-called modern technology and those who opt for alternative approaches to cooking—and be critical of the patriarchal systems that limit those choices.

Take the different types of women that Bernstein presents to make her argument. These actors are either the middle- and upper-class women who “choose” to use more time-intensive methods to cook for their families (with the associated positive health and social/cultural/political benefits) or the working-class and poor women who “choose” modern technology to manage an anxious and uncertain economic life. In our view these choices are situational, and valorizing or critiquing one or the other sets up a false dichotomy, particularly when one group is praised or blamed at the expense of the other.

Intersectional feminism tells us that, while women likely share some experiences, their differing roles and identities based on ideas about race or class result in divergent experiences, and thus, different choices that make sense to them. To generally valorize one choice over the other is problematic, especially without taking those experiences into account.

Bernstein’s critique doesn’t seem to cut both ways. If it is unreasonable to critique a working-class woman for choosing technology, then it seems equally unreasonable to critique the more privileged woman for choosing a time-intensive cooking approach. Valorizing the working-class woman’s technology choices feels a bit knee-jerk too. Nobody’s decisions are without histories and contexts. People do not live in a world that is a free market of choices.

We don’t disagree with Bernstein’s general point. It is elitist and sexist to critique our working-class comrades for their consumption choices, particularly if it reinforces privilege over others. But a critique is only worthy if it is pointed at our subjects of study and ourselves. To point feminism outward without pointing it back at ourselves (or the communities we are valorizing) is a form of orientalism. One community’s necessary choices do not make another community’s choices good or bad.

So what would these feminists say about the modernist/environmentalist dichotomy? Well, it is more complicated. Both systems and individual choices are implicated in socio-ecological challenges. Costs, benefits, choices, systems, and privileges should all be a part of our analysis. And fortunately we have the tools to wrestle with this. Intersectional feminism compels us to look at different choices through the diverse, intersecting identities that women embody just as we work to deconstruct the institutions and histories that limit those choices. It is our job to continually nuance these frameworks and advance a system that appropriately provides choices while simultaneously respecting how people choose to provide nourishment and sustenance for their families.

For further reference:

Might Feedlots Be the Sustainable Option?

The new research, published by the Food Climate Research Network, finds that under the right conditions, best-practice cattle grazing can sequester more carbon in the soil than is emitted over the life cycle of the production system. But these cases are rare and cannot be scaled up across much of the world’s grazing land. This stands in stark contrast to the views of advocates like Allan Savory, who has claimed that so-called “carbon grazing” could sequester not only all the carbon emitted by livestock, but all the carbon emitted across all sectors of the economy.

The authors of this work, and many of the journalists covering it, have used its findings to echo a long-heralded environmental imperative: we should all eat less beef.  

Abstaining from beef or eating lower on the food chain has obvious environmental upsides. But few in this debate have been willing to seriously consider another uncomfortable conclusion: that beef from feedlot cattle might have a lower carbon footprint than beef from cattle that have spent their entire lives on pasture.

In a recent paper in Science of the Total Environment, we show that intensive systems where cattle spend their last few months in feedlots typically involve lower land use, greenhouse gas emissions, and other environmental impacts per unit of meat than systems where cattle spend their whole lives on pasture.


Figure originally published in Science of The Total Environment

The new report doesn’t compare these systems directly, but highlights the need to more carefully assess the environmental impacts of pasture and feedlots.

Some argue that although raising cattle entirely on pasture may emit more methane and other greenhouse gases than feedlot cattle, the additional carbon sequestration makes the former more climate-friendly. But we have to remember that even cattle finished in feedlots spend most of their lives on pasture, so any benefits of soil carbon sequestration accrue to both systems. The only source of environmental impacts not accounted for in our recent paper is the additional pasture land needed for grass-finished cattle. For this to offset the lower impacts of finishing cattle in feedlots, the additional pasture would need to have substantially higher rates of carbon sequestration than the land use it replaces.
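The offset arithmetic above can be made concrete with a back-of-envelope sketch. Every number below is an illustrative assumption chosen for round arithmetic, not a figure from our paper or the FCRN report:

```python
# Back-of-envelope comparison of beef carbon footprints.
# Every number here is an illustrative assumption, not a published estimate.

def net_footprint(direct_emissions, pasture_ha, seq_per_ha):
    """Net footprint (kg CO2e per kg beef): direct lifecycle emissions
    minus soil carbon sequestered on the pasture used."""
    return direct_emissions - pasture_ha * seq_per_ha

# Assumed direct lifecycle emissions, kg CO2e per kg beef
feedlot_direct = 25.0  # shorter finishing period, less enteric methane
grass_direct = 32.0    # longer time on pasture, more methane

# Assumed pasture area used per kg of beef, hectares
feedlot_pasture = 0.010  # feedlot cattle still spend most of life grazing
grass_pasture = 0.025    # extra land needed for grass finishing

seq = 200.0  # assumed soil sequestration, kg CO2e per hectare per year

feedlot_net = net_footprint(feedlot_direct, feedlot_pasture, seq)  # 23.0
grass_net = net_footprint(grass_direct, grass_pasture, seq)        # 27.0

# Sequestration rate at which the extra pasture would just break even
breakeven = (grass_direct - feedlot_direct) / (grass_pasture - feedlot_pasture)
print(feedlot_net, grass_net, round(breakeven))
```

With these assumed numbers, the extra grass-finishing pasture would need to sequester roughly 470 kg CO2e per hectare per year before the two systems break even, well above the assumed baseline rate. The real question, which only empirical lifecycle data can answer, is whether plausible sequestration rates ever reach that threshold.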

While the new report does not settle the question, its findings imply that this is unlikely to be true in many situations. Instead, the authors warn that “significant expansion [of grazing] would cause catastrophic land use change and other environmental damage.”

Given wide differences in feedlots and grazing across countries, it’s almost inevitable that feedlots will be the climate-smart choice in some circumstances just as pasture will be in others. At Breakthrough, we’re currently looking into the best empirical research on lifecycle greenhouse impacts from beef production, including under what circumstances finishing cattle in feedlots versus pasture has the lowest impacts. It’s mighty complex, and worth a healthy dose of care and nuance.

But ultimately, an evidence-based look at this terrain demands that we reconsider what we have cast as the traditional environmental villains. While we’re giving people the uncomfortable advice to eat less beef, we should be honest about the uncomfortable evidence in favor of feedlots.

Potential Energy

So argues Breakthrough Senior Fellow Siddhartha Shome in a new essay for the Breakthrough Journal that urges us to reconsider hydropower as a source of ready electricity in the developing world and, somewhat surprisingly, a driver of forest conservation to boot. Not only does hydropower supply more than half of the world’s renewable energy, Shome argues, but historically, the need to protect hydrologic resources has also motivated and institutionalized hugely effective conservation measures in countries such as Costa Rica, that gem of green virtue known for its verdant primary forests, wide swaths of protected areas, and abundant renewable power—80 percent, in fact, of the country’s total electricity, about 80 percent of which comes from hydro.

Of course, Shome has no illusions that hydropower comes without trade-offs, nor without controversy. Environmentally, he acknowledges, dams can hugely disrupt ecosystems and displace species. Hydroelectric projects can displace humans as well, and have, forcing involuntary relocations and imposing economic burdens borne most heavily by indigenous communities. And of late, there has even been discussion in academic circles of the greenhouse gas emissions that dams themselves likely generate.

But confronting each challenge in turn, Shome ultimately demonstrates that such questions attend virtually all energy development. Social and environmental displacement almost always accompany development to some degree, and forgoing development is not an option for countries in Africa, Central and South America, and Asia looking to achieve modern living standards. Managing those trade-offs well requires wise governance, responsible institutions, and equitable management of complex social and environmental processes.

But again, the example of Costa Rica suggests that good governance of the challenges that attend hydroelectric development remains within the realm of possibility. No indigenous populations have been displaced, Shome shows, by hydroelectric projects in the country’s history. The one dam that did require substantial relocation was largely a success story, with the help of a governmental task force, financial aid, and other forms of institutional support. And since then, local communities have largely benefited from the economic and social development that the nation’s hydro developments have made possible.

The exceptional conservation gains that hydropower has generated in Costa Rica are equally important to acknowledge. Thirty percent of its land, we learn, falls under some form of national protection. Fifty percent is made up of forest. We have hydropower to thank, Shome says, for Monteverde, for Corcovado National Park, and for the five percent of Earth’s terrestrial species that make Costa Rica’s tropical rainforests their home.

Those places where deforestation poses the greatest threat today are often also the regions where energy development will and must take precedence. We would be remiss, Shome suggests, to disregard hydroelectricity, a power source with the massive potential, if appropriately harnessed, to address both challenges at once.

Kim Elliott

Rick Perry’s One Step Forward, Three Steps Back

American energy policy should move the country toward lower emissions, denser fuels, and more affordable energy. This rule change would cause the most drastic power market regulatory reform in decades, all for one step forward and many steps back.

According to the Department of Energy, the proposed rule is designed to “ensure that certain reliability and resilience attributes of electric generation resources are fully valued.” On Monday, the proposal was “fast-tracked” by the Federal Energy Regulatory Commission. The rule provides cost recovery to power plants with more than 90 days of fuel on site. Only coal and nuclear plants usually have this much supply, so the rule amounts to a federal subsidy for coal and nuclear generation.

But in 2017, coal power is fundamentally uneconomic. Largely thanks to the US shale gas revolution, coal plants are being shut down based on costs alone. As my colleague Michael Goff has recently shown, replacing coal with gas has been the primary driver of lowered US greenhouse gas emissions. Since gas plants emit roughly half as much CO2 as coal plants, the transition from coal to gas and renewables has resulted in a 25% decline in US power sector emissions over the past ten years.
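The rough arithmetic behind that decline can be sketched as follows. The emission factors and generation mixes here are assumed round numbers for illustration, not figures from Goff’s analysis:

```python
# Rough sketch of how coal-to-gas switching drives down power-sector CO2.
# Emission factors and generation mixes are assumed round numbers,
# not figures from any cited analysis.

EF = {"coal": 1.00, "gas": 0.45, "zero_carbon": 0.0}  # tCO2 per MWh, assumed

def sector_emissions(mix_twh):
    """Total emissions (Mt CO2) for a generation mix given in TWh."""
    return sum(EF[source] * twh for source, twh in mix_twh.items())

# Hypothetical national-scale mixes, a decade apart (TWh)
before = {"coal": 2000, "gas": 800, "zero_carbon": 1200}
after = {"coal": 1150, "gas": 1400, "zero_carbon": 1450}

e0, e1 = sector_emissions(before), sector_emissions(after)
print(f"emissions decline: {100 * (e0 - e1) / e0:.0f}%")
```

Because every megawatt-hour shifted from coal to gas roughly halves its emission factor, the mix change alone produces a decline on the order of a quarter in this sketch; no explicit climate policy appears anywhere in the calculation.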

Throughout most of the 20th century, coal power was the cheapest source on the market. But as cleaner and more advanced forms of electricity generation have become available, it’s appropriate for policy to accelerate the transition away from coal. That’s what “making clean energy cheap” is all about. This policy does the opposite—it props up one of the dirtiest, costliest energy sources we have.

Protecting coal power doesn’t even guarantee reliability or resiliency in times of crisis. One of the country’s largest coal plants, Texas’s 2.5 gigawatt WA Parish plant, was partially shuttered and partially converted to gas during Hurricane Harvey. Meanwhile, the collapse of Puerto Rico’s transmission network after Maria shows that other steps, like undergrounding power lines, are much more important for improving reliability. A recent analysis from the Rhodium Group found that 96% of the electricity disruptions in the US over the past five years were caused by severe weather disrupting transmission and distribution. Fuel supply problems, on the other hand, caused a mere 0.0007%.

Undoubtedly, the proposed rule would also help nuclear plants: thanks to the high density of nuclear fuel, plants typically have 18-24 months of fuel on hand. Since both nuclear and coal plants can produce electricity at any time of day, they are grouped as “baseload” sources to be protected by the rule.

But that’s where the similarities end.

Nuclear is substantially cheaper to operate, more reliable, less polluting, and lower-carbon than coal. Deep decarbonization will almost certainly require protecting and expanding nuclear capacity, as nuclear plants provide consistent low-carbon power when intermittent renewables like wind and solar cannot. But meeting any reasonable climate goal will also require eliminating the remaining emissions from coal plants over the next few decades. Ultimately, the rule provides a much-needed lift to American nuclear power at the cost of a poorly designed subsidy for dirty, expensive, outdated coal.

What should DOE and FERC do instead? There are policies that energy regulators should consider to protect existing nuclear and accelerate cheap, reliable power-sector decarbonization. These include expanding renewable portfolio standards into low-carbon portfolio standards, advancing demonstration and commercialization policies for next-generation nuclear technologies, and investing in clean energy RD&D across the board. These policies would move the US power system in the right direction. DOE’s proposed rule change would not.

The Modern Joy of Cooking

Technology, increased leisure time, shifting social structures, and widening economic opportunity are together changing the way we think about work. Part of the progress of modernity is that the drudgery of household chores can transform into uplifting activities. Take cooking: once the burden of housewives systemically kept out of the workforce, cooking today has become a more egalitarian, enjoyable, and creative endeavor.

Jennifer Bernstein is skeptical. In her essay “On Mother Earth and Earth Mothers,” Bernstein asks what environmentalism would look like if it took feminism seriously. She wants to believe that it does, but its regressive rhetoric around cooking stretches the limits of her credulity. Innovations like the microwave and frozen pizzas were, in recent memory, revered as emblems of feminist progress for their ability to lighten women's domestic workloads. These same products are anathema to greens, who turn up their noses at anything “your great-great-grandmother wouldn’t recognize as food” (Michael Pollan’s famous admonition). Good environmentalists shop at the farmers’ market, always have a pot simmering on the stove, and if they’re really good, ferment their own kimchi.

These nostalgic fantasies make Bernstein wonder if the vision of environmentalism is compatible with feminism. Environmentalism has a tendency to idealize the past as a time when humans lived in greater harmony with nature. Advancing women’s rights, on the other hand, means challenging the traditional gender roles of agrarian societies and embracing the liberating potential of new technologies.

The environmentalist call to the kitchen is meant as both a joyous return to sensory pleasure and a rebellion against the destructive forces of capitalism. In Cooked, Michael Pollan repeatedly describes cooking as a protest “on behalf of the senses” and against “the homogenization of flavors.” This “slow food revolution” involves a lot more domestic labor, and women, Bernstein argues, are the ones who end up doing it. Women are still responsible for the bulk of unpaid work in the home, she finds—even among affluent couples, even in dual-income households. So when environmentalists ask us to cook more, in effect they’re asking women to cook more. It’s negligent, Bernstein suggests, for environmentalists not to grapple with this reality any further than naively hoping that both genders pitch in equally.

Bernstein casts a broad net, contending that these environmental views ignore the realities of women all over the world: from subsistence farmers in Africa to families below the poverty line in America to affluent suburban mothers filled with guilt for not puréeing their own baby food. I have no major beef with the argument that we cannot expect poor women to bear the burden of protecting the environment, and that instead we must find solutions that improve welfare and save the planet at the same time (though socioeconomics seems a more relevant lens here than gender). I’m less convinced, however, of the harm done to women who have the means and time to make environmental choices.

Middle-class and affluent women already experience gender-specific guilt in deciding how to divide their time between home and work. Bernstein resents the way figures like Pollan romanticize the domestic option, making the choice even more difficult. A world in which no mother feels guilty about serving frozen pizza for dinner might be better for women—but only if they are by default the ones responsible for putting dinner on the table.

“Lightening women’s workloads” is not the point of feminism, but rather a corollary to gender equality. While women have yet to achieve that aim, their domestic workload has doubtless been lightened; progress owes in part to time-saving technological advancements, but also to men pitching in more than they used to. While 70 percent of women cook compared to only 43 percent of men, both the proportion of women who cook and the amount of time they spend doing so are lower than they were four decades ago (women now spend 71 minutes a day cooking, compared to 101 minutes 40 years ago). The opposite is true for men, who now spend 49 minutes cooking compared to 38 minutes 40 years ago. This kind of shift is a central part of the feminist vision: not just that women can play traditionally male roles, but that men can also play traditionally female ones.

Compared to other chores, cooking provides a shining example of men moving into a traditionally female sphere. Though Bernstein condemns environmentalist nostalgia, her own argument is rooted in outdated notions of cooking, which she considers a chore no different from laundry or vacuuming. Food preparation is no longer drudgery in the same way that other chores are. You’d never ask someone over on a date to mop the floor, but roasting a chicken together is a standard feature of millennial courtship. Cooking can be a way to unwind at the end of the day, something to do with friends, a creative outlet. It isn’t always like this, but laundry never is. And the less we see cooking as a chore, the more acceptable it becomes for men to do it.

Changing floor plans provide physical testimony to the way our attitudes around cooking have shifted. My house was built in 1936, and its dark little kitchen is tucked away like a laundry room. It is closed off from the rest of the house by not one but three doors, god forbid the shameful scent of sizzling onions might waft into the living room through an open threshold. Modern kitchens, in contrast, blend seamlessly with living and dining rooms.

Overall, kitchen work has gained higher status, regarded less and less as pure drudgery by the middle and upper classes. Shifting class structures are more responsible for this than environmentalist notions about how households should work: a growing middle class can afford to make meals into events, but not to hire help. The growing awareness that cooking is, at least on occasion, an uplifting activity has contributed to men cooking more than they ever have—not to mention the removal of the literal walls in our homes that separate female-chore spaces from male-socializing ones.

Environmentalists aren’t responsible for these changing attitudes around cooking (as much as they might like to be), nor are they guilty of any crimes against feminism. Men and women cook for reasons entirely unrelated to environmentalism. The impact of this cooking on the most pressing environmental problems, as Bernstein acknowledges, is trivial at best. Environmentalism doesn’t seem to have a gender problem, but it does have an agenda problem: it can’t come up with relevant and effective recommendations that people actually listen to. 

The Allure of Do-It-Yourself

Before Michael Pollan, there was Deadwood, Oregon. Located in a dense green valley in Oregon’s Coast Range, the small pioneer community became a magnet for the back-to-the-land movement of the 1970s. After a decade of chaotic uproar, young hippies were looking for a novel way to protest consumerism and conformity. Many self-styled rebels moved to the country to create alternative communities based on a commitment to “Mother Nature,” organic gardening, and a do-it-yourself ethos. Deadwood’s remote location and seemingly fertile soil made it appealing. My parents set up camp in 1976, and I grew up with a shovel in hand. We grew a rambling garden; we canned fruit, vegetables, and fish; we sat down together every evening to eat an elaborate home-cooked meal.

In her new essay “On Mother Earth and Earth Mothers: Why Environmentalism Has a Gender Problem,” Jennifer Bernstein criticizes the very foundation of my family’s lifestyle: the notion that time-consuming methods of growing and cooking food are somehow environmentally, culturally, and ethically superior. “The glorification of nature and farming and the romanticizing of the home, domestic life, and the woman at the center of it are ultimately nostalgias that cover up the brutality of rural life and drudgery of domestic labor in a perfume of freshly cut hay and caramelizing onions,” she writes, skewering Pollan and other environmentalists for championing unrealistic expectations and stigmatizing labor-saving inventions such as microwave ovens and frozen foods.

As I read Bernstein’s essay, I thought back on my hippie childhood. Many of our neighbors kept livestock; they made cheese and yogurt and were able to raise the bulk of the meat and vegetables they needed for a year. In my family, food was everything. Dinner was typically five or six courses, lovingly prepared with garden and wild ingredients or food we’d preserved for winter. Although we lived well below the poverty line, we ate like Michael Pollan.

Bernstein would likely not be impressed. “Like the household,” she writes, “the smallholder farm idealized as pastoral fantasy disconnected from the capitalist system is contingent upon free labor. In most parts of the world, smallholder farms are economically viable only because women (and often children) provide their labor at no cost.”

As a child, I was certainly no stranger to free labor. I weeded the garden, hauled firewood, worked for our small business, and helped with household chores. Although my dad was the cook in our house, my mother spent much of her time engaged in traditionally female labor. She spun wool, wove blankets and coats, sewed clothing, gardened, and canned tomatoes, peaches, and pickles. Many of the women we knew did all this in addition to cooking, cleaning, washing cloth diapers, and taking care of the kids.

Bernstein is correct in pointing out that this homegrown, DIY lifestyle is not tenable for the average working mother. Our hippie neighbors had time to raise animals and make food from scratch because most didn’t have full-time jobs. They had small businesses (making candles, growing blueberries, construction, carpentry) or they worked seasonally (tree planting, trimming weed). Many people supplemented their incomes with cannabis, though Deadwood was never home to the big plantations that would later dominate southern Oregon and Humboldt County. Deadwoodians were limited by a rainier climate and a certain culture of restraint. They grew discreetly and barely eked out a living.

There is some irony to the trajectory of the back-to-the-land movement. Originally inspired by an idealistic vision of a natural rural utopia, these urban and suburban kids became entrenched in the practical details of living in the middle of nowhere. Rickety houses and water systems, mountains of wood to be hauled and split, gardens to be hoed, and the eternal beating back of the underbrush—thimbleberry, salmonberry, and blackberry—that could stifle a house within weeks if left unattended.

But I don’t think my mother and her friends felt particularly oppressed by the required labor. To them it was a form of independence from a “mainstream” culture they viewed as fatally flawed. Living a life that wasn’t reliant on processed foods or pesticides was a rejection of destructive values and hollow cultural norms. Although my mother was prone to complaining about needing to pull weeds or kill chickens, she ultimately found it empowering. When she taught me how to sew a hemline and can peaches, she didn’t believe she was yoking me to the patriarchy—she was empowering me with fundamental skills that I’d need to live what she considered a meaningful and fulfilling life.

My mother didn’t come from a wealthy background, but she had the privilege of being educated, beautiful, talented, and white. She could have had a career or married a man with money. But, like many women of her generation, she didn’t want to work for “the man,” or marry him either. She chose a life of poverty not because she wanted to be poor (she didn’t) but because she wanted to live in the woods and have a certain freedom of movement. Despite her interest in spinning and weaving, she considered herself a strong feminist and certainly “wore the pants” in our family.

Bernstein might disagree. “At bottom, feminist thought and action are incompatible with poverty, agrarianism, and neoprimitivism,” she writes. Although she does not address the back-to-the-land movement of the ’70s and instead focuses on the present trend toward the DIY ethos, she is critical of DIY as a creed. “A growing chorus of voices argues that to be proper environmentalists and nurturing parents, each night should involve a home-cooked meal of fresh, organic, unprocessed ingredients,” Bernstein states, later adding, “At a moment in our history when increasing numbers of women have liberated themselves from many of the demands of unpaid domestic labor, prominent environmental thinkers are advocating a return to the very domestic labor that stubbornly remains the domain of women.”

We didn’t really need to sew our own clothing. We didn’t need to can peaches. We didn’t need to make six-course meals from scratch. My parents didn’t need to go to quite so much trouble to follow the two guiding hippie mantras of their time: “tune in, turn on, and drop out” and “reduce, reuse, recycle.” If their real goal was a more environmentally conscientious lifestyle, we could have subsided on brown rice and kale. We could have gotten all of our clothing from the free box. But their point wasn’t entirely ethical. It was really the point that Pollan would later make. Slow down, skip the shortcuts, enjoy the process, and savor superior results. And they did. We all did. It was lovely to sit down around our big round table and commune with friends and family over giant garden salads, elk stroganoff, plum wine, and home-baked bread. It was glorious to have a pantry full of shining mason jars of peaches and cherries.

In the end, the allure would lead me back home. After spending my 20s in Seattle, I eventually returned to rural Oregon to live in the house where I grew up. My reasons were twofold: part a sincere affection for the homegrown lifestyle, and part economic necessity. Bernstein points out that neoprimitivism is not economically feasible without free labor, but in my case it made sense.

My husband and I were both laid off during the economic depression of 2008. After six months of vainly applying for jobs, we left the city and drove south. In rural Oregon, the rent was cheaper and we could supplement our diet by gardening, allowing us to subsist on odd jobs and my nascent career as a freelance writer. It worked for us: our rural lifestyle gave us flexibility, creative fulfillment (in my case, anyway), and the satisfaction of growing and cooking good food. That said, it was difficult. Sometimes we ran out of cooking oil. Sometimes we didn’t have money for beer. Sometimes it was indescribably dreary to be stuck in the middle of nowhere, hours from the nearest source of Chinese takeout.

As a child, I had been humiliated by our poverty, and as an adult I went to ridiculous lengths to avoid going on food stamps. We arrived in Oregon in November and didn’t yet have a garden, so I taught myself to forage for mushrooms and wild plants. I spent hours scouring the hillsides and researching to correctly identify edible plants. And that was before I even got down to cooking from scratch. It was not an efficient way to put food on the table, but I chose it, and I enjoyed it. The experience certainly didn’t make me feel any less a feminist.

But is our rural lifestyle really low-impact? We eat more locally grown food than many city dwellers. We don’t have a flush toilet. But that’s likely negated by our commuting. Due to economic necessity, my husband is currently working a seasonal job maintaining county parks. We live an hour from his job site, so he now drives 80 miles a day.

When all is said and done, I agree with Bernstein that a DIY lifestyle is not a practical solution to environmental problems, nor should it be viewed as a moral high ground, nor is it feasible for the bulk of society. My parents relied on food stamps to make ends meet, and the back-to-the-land movement would likely have foundered if hippies hadn’t figured out the ultimate cash crop: cannabis. Obviously, government assistance and the black market drug trade should not be viewed as a sustainable way to fund a better diet, nor as a realistic solution for the bulk of the population.

Sure we lived without indoor plumbing. Sure we worked legitimately hard to make ends meet. But in the end, my parents had the privilege of education and ancestry. They chose a difficult row to hoe. And, ultimately, that’s what we should strive for. A world in which more people have the power to choose the way of life that works for them, the lifestyle that brings them joy. I think Bernstein’s right in saying that we won’t get there by committing ourselves to making our own kombucha. 

Why Environmentalism Has a Gender Problem: A Breakthrough Debate

The Allure of Do-It-Yourself

Before Michael Pollan, there was Deadwood, Oregon. Located in a dense green valley in Oregon’s Coast Range, the small pioneer community became a magnet for the back-to-the-land movement of the 1970s. After a decade of chaotic uproar, young hippies were looking for a novel way to protest consumerism and conformity. Many self-styled rebels moved to the country to create alternative communities based on a commitment to “Mother Nature,” organic gardening, and a do-it-yourself ethos. Deadwood’s remote location and seemingly fertile soil made it appealing. My parents set up camp in 1976, and I grew up with a shovel in hand. We grew a rambling garden; we canned fruit, vegetables, and fish; we sat down together every evening to eat an elaborate home-cooked meal.

In her new essay “On Mother Earth and Earth Mothers: Why Environmentalism Has a Gender Problem,” Jennifer Bernstein criticizes the very foundation of my family’s lifestyle: the notion that time-consuming methods of growing and cooking food are somehow environmentally, culturally, and ethically superior. “The glorification of nature and farming and the romanticizing of the home, domestic life, and the woman at the center of it are ultimately nostalgias that cover up the brutality of rural life and drudgery of domestic labor in a perfume of freshly cut hay and caramelizing onions,” she writes, skewering Pollan and other environmentalists for championing unrealistic expectations and stigmatizing labor-saving inventions such as microwave ovens and frozen foods.

As I read Bernstein’s essay, I thought back on my hippie childhood. Many of our neighbors kept livestock; they made cheese and yogurt and were able to raise the bulk of the meat and vegetables they needed for a year. In my family, food was everything. Dinner was typically five or six courses, lovingly prepared with garden and wild ingredients or food we’d preserved for winter. Although we lived well below the poverty line, we ate like Michael Pollan.

Bernstein would likely not be impressed. “Like the household,” she writes, “the smallholder farm idealized as pastoral fantasy disconnected from the capitalist system is contingent upon free labor. In most parts of the world, smallholder farms are economically viable only because women (and often children) provide their labor at no cost.”

As a child, I was certainly no stranger to free labor. I weeded the garden, hauled firewood, worked for our small business, and helped with household chores. Although my dad was the cook in our house, my mother spent much of her time engaged in traditionally female labor. She spun wool, wove blankets and coats, sewed clothing, gardened, and canned tomatoes, peaches, and pickles. Many of the women we knew did all this in addition to cooking, cleaning, washing cloth diapers, and taking care of the kids.

Bernstein is correct in pointing out that this homegrown, DIY lifestyle is not tenable for the average working mother. Our hippie neighbors had time to raise animals and make food from scratch because most didn’t have full-time jobs. They had small businesses (making candles, growing blueberries, construction, carpentry) or they worked seasonally (tree planting, trimming weed). Many people supplemented their incomes with cannabis, though Deadwood was never home to the big plantations that would later dominate southern Oregon and Humboldt County. Deadwoodians were limited by a rainier climate and a certain culture of restraint. They grew discreetly and barely eked out a living.

There is some irony to the trajectory of the back-to-the-land movement. Originally inspired by an idealistic vision of a natural rural utopia, these urban and suburban kids became entrenched in the practical details of living in the middle of nowhere. Rickety houses and water systems, mountains of wood to be hauled and split, gardens to be hoed, and the eternal beating back of the underbrush—thimbleberry, salmonberry, and blackberry—that could stifle a house within weeks if left unattended.

But I don’t think my mother and her friends felt particularly oppressed by the required labor. To them it was a form of independence from a “mainstream” culture they viewed as fatally flawed. Living a life that wasn’t reliant on processed foods or pesticides was a rejection of destructive values and hollow cultural norms. Although my mother was prone to complaining about needing to pull weeds or kill chickens, she ultimately found it empowering. When she taught me how to sew a hemline and can peaches, she didn’t believe she was yoking me to the patriarchy—she was empowering me with fundamental skills that I’d need to live what she considered a meaningful and fulfilling life.

My mother didn’t come from a wealthy background, but she had the privilege of being educated, beautiful, talented, and white. She could have had a career or married a man with money. But, like many women of her generation, she didn’t want to work for “the man,” or marry him either. She chose a life of poverty not because she wanted to be poor (she didn’t) but because she wanted to live in the woods and have a certain freedom of movement. Despite her interest in spinning and weaving, she considered herself a strong feminist and certainly “wore the pants” in our family.

Bernstein might disagree. “At bottom, feminist thought and action are incompatible with poverty, agrarianism, and neoprimitivism,” she writes. Although she does not address the back-to-the-land movement of the ’70s and instead focuses on the present trend toward the DIY ethos, she is critical of DIY as a creed. “A growing chorus of voices argues that to be proper environmentalists and nurturing parents, each night should involve a home-cooked meal of fresh, organic, unprocessed ingredients,” Bernstein states, later adding, “At a moment in our history when increasing numbers of women have liberated themselves from many of the demands of unpaid domestic labor, prominent environmental thinkers are advocating a return to the very domestic labor that stubbornly remains the domain of women.”

We didn’t really need to sew our own clothing. We didn’t need to can peaches. We didn’t need to make six-course meals from scratch. My parents didn’t need to go to quite so much trouble to follow the two guiding hippie mantras of their time: “turn on, tune in, and drop out” and “reduce, reuse, recycle.” If their real goal was a more environmentally conscientious lifestyle, we could have subsisted on brown rice and kale. We could have gotten all of our clothing from the free box. But their point wasn’t entirely ethical. It was really the point that Pollan would later make. Slow down, skip the shortcuts, enjoy the process, and savor superior results. And they did. We all did. It was lovely to sit down around our big round table and commune with friends and family over giant garden salads, elk stroganoff, plum wine, and home-baked bread. It was glorious to have a pantry full of shining mason jars of peaches and cherries.

In the end, the allure would lead me back home. After spending my 20s in Seattle, I eventually returned to rural Oregon to live in the house where I grew up. My reasons were twofold: part a sincere affection for the homegrown lifestyle, and part economic necessity. Bernstein points out that neoprimitivism is not economically feasible without free labor, but in my case it made sense.

My husband and I were both laid off during the recession of 2008. After six months of vainly applying for jobs, we left the city and drove south. In rural Oregon, the rent was cheaper and we could supplement our diet by gardening, allowing us to subsist on odd jobs and my nascent career as a freelance writer. It worked for us: our rural lifestyle gave us flexibility, creative fulfillment (in my case, anyway), and the satisfaction of growing and cooking good food. That said, it was difficult. Sometimes we ran out of cooking oil. Sometimes we didn’t have money for beer. Sometimes it was indescribably dreary to be stuck in the middle of nowhere, hours from the nearest source of Chinese takeout.

As a child, I had been humiliated by our poverty, and as an adult I went to ridiculous lengths to avoid going on food stamps. We arrived in Oregon in November and didn’t yet have a garden, so I taught myself to forage for mushrooms and wild plants. I spent hours scouring the hillsides and researching how to correctly identify edible plants. And that was before I even got down to cooking from scratch. It was not an efficient way to put food on the table, but I chose it, and I enjoyed it. The experience certainly didn’t make me feel any less a feminist.

But is our rural lifestyle really low-impact? We eat more locally grown food than many city dwellers. We don’t have a flush toilet. But that’s likely negated by our commuting. Due to economic necessity, my husband is currently working a seasonal job maintaining county parks. We live an hour from his job site, so he now drives 80 miles a day.

When all is said and done, I agree with Bernstein that a DIY lifestyle is not a practical solution to environmental problems, nor should it be viewed as a moral high ground, nor is it feasible for the bulk of society. My parents relied on food stamps to make ends meet, and the back-to-the-land movement would likely have foundered if hippies hadn’t figured out the ultimate cash crop: cannabis. Obviously, government assistance and the black market drug trade should not be viewed as a sustainable way to fund a better diet, nor as a realistic solution for the bulk of the population.

Sure we lived without indoor plumbing. Sure we worked legitimately hard to make ends meet. But in the end, my parents had the privilege of education and ancestry. They chose a difficult row to hoe. And, ultimately, that’s what we should strive for. A world in which more people have the power to choose the way of life that works for them, the lifestyle that brings them joy. I think Bernstein’s right in saying that we won’t get there by committing ourselves to making our own kombucha. 

The Modern Joy of Cooking

Technology, increased leisure time, shifting social structures, and widening economic opportunity are together changing the way we think about work. One mark of this progress is that the drudgery of household chores can be transformed into uplifting activities. Take cooking: once the burden of housewives systematically kept out of the workforce, cooking today has become a more egalitarian, enjoyable, and creative endeavor.

Jennifer Bernstein is skeptical. In her essay “On Mother Earth and Earth Mothers,” Bernstein asks what environmentalism would look like if it took feminism seriously. She wants to believe that it does, but its regressive rhetoric around cooking stretches the limits of her credulity. Innovations like the microwave and frozen pizzas were, in recent memory, revered as emblems of feminist progress for their ability to lighten women's domestic workloads. These same products are anathema to greens, who turn up their noses at anything “your great-great-grandmother wouldn't recognize as food” (Michael Pollan’s famous admonition). Good environmentalists shop at the farmers’ market, always have a pot simmering on the stove, and if they’re really good, ferment their own kimchi.

These nostalgic fantasies make Bernstein wonder if the vision of environmentalism is compatible with feminism. Environmentalism has a tendency to idealize the past as a time when humans lived in greater harmony with nature. Advancing women’s rights, on the other hand, means challenging the traditional gender roles of agrarian societies and embracing the liberating potential of new technologies.

The environmentalist call to the kitchen is meant as both a joyous return to sensory pleasure and a rebellion against the destructive forces of capitalism. In Cooked, Michael Pollan repeatedly describes cooking as a protest “on behalf of the senses” and against “the homogenization of flavors.” This “slow food revolution” involves a lot more domestic labor, and women, Bernstein argues, are the ones who end up doing it. Women are still responsible for the bulk of unpaid work in the home, she finds—even among affluent couples, even in dual-income households. So when environmentalists ask us to cook more, in effect they're asking women to cook more. It's negligent, Bernstein suggests, for environmentalists not to grapple with this reality any further than naively hoping that both genders pitch in equally.

Bernstein casts a broad net, contending that these environmental views ignore the realities of women all over the world: from subsistence farmers in Africa to families below the poverty line in America to affluent suburban mothers filled with guilt for not puréeing their own baby food. I have no major beef with the argument that we cannot expect poor women to bear the burden of protecting the environment, and that instead we must find solutions that improve welfare and save the planet at the same time (though socioeconomics seems a more relevant lens here than gender). I'm less convinced, however, of the harm done to women who have the means and time to make environmental choices.

Middle-class and affluent women already experience gender-specific guilt in deciding how to divide their time between home and work. Bernstein resents the way figures like Pollan romanticize the domestic option, making the choice even more difficult. A world in which no mother feels guilty about serving frozen pizza for dinner might be better for women—but only if they are by default the ones responsible for putting dinner on the table.

“Lightening women’s workloads” is not the point of feminism, but rather a corollary to gender equality. While women have yet to achieve that aim, their domestic workload has doubtless been lightened; progress owes in part to time-saving technological advancements, but also to men pitching in more than they used to. While 70 percent of women cook compared to only 43 percent of men, both the proportion of women who cook and the amount of time they spend doing so are lower than they were four decades ago (women now spend 71 minutes a day cooking, compared to 101 minutes 40 years ago). The opposite is true for men, who now spend 49 minutes cooking compared to 38 minutes 40 years ago. This kind of shift is a central part of the feminist vision: not just that women can play traditionally male roles, but that men can also play traditionally female ones.

Compared to other chores, cooking provides a shining example of men moving into a traditionally female sphere. Though Bernstein condemns environmentalist nostalgia, her own argument is rooted in outdated notions of cooking, which she considers a chore no different from laundry or vacuuming. Food preparation is no longer drudgery in the same way that other chores are. You'd never ask someone over on a date to mop the floor, but roasting a chicken together is a standard feature of millennial courtship. Cooking can be a way to unwind at the end of the day, something to do with friends, a creative outlet. It isn't always like this, but laundry never is. And the less we see cooking as a chore, the more acceptable it becomes for men to do it.

Changing floor plans provide physical testimony to the way our attitudes around cooking have shifted. My house was built in 1936, and its dark little kitchen is tucked away like a laundry room. It is closed off from the rest of the house by not one but three doors, god forbid the shameful scent of sizzling onions might waft into the living room through an open threshold. Kitchens today, in contrast, blend seamlessly with living and dining rooms.

Overall, kitchen work has gained higher status, regarded less and less as pure drudgery by the middle and upper classes. Changing class structures are responsible for this shift—a growing middle class, who can afford to make meals into events but not to hire help—more than environmentalist notions about how households should work. The growing awareness that cooking is, at least on occasion, an uplifting activity has contributed to men cooking more than they ever have—not to mention the removal of the literal walls in our homes that separate female-chore spaces from male-socializing ones.

Environmentalists aren’t responsible for these changing attitudes around cooking (as much as they might like to be), nor are they guilty of any crimes against feminism. Men and women cook for reasons entirely unrelated to environmentalism. The impact of this cooking on the most pressing environmental problems, as Bernstein acknowledges, is trivial at best. Environmentalism doesn’t seem to have a gender problem, but it does have an agenda problem: it can’t come up with relevant and effective recommendations that people actually listen to. 

David Biello

Tim Searchinger

The Power of Progress

Many on the left of the political spectrum, following Marx, have long imagined that the immiseration of the poor and working classes would lead to revolution. And while there are a few examples where declining economic fortunes have led to a transformative politics that advanced the cause of justice, equity, liberty, and democracy, most notably the New Deal response to the Great Depression, more often than not, the loss of economic and social status for those at the bottom has turned the losers in the economic game against one another.

When people lose hope in a better future, when they conclude, as demagogues on both sides of the political spectrum have told them so persistently, that society and the economy are rigged against them, the response is more likely to be sectarian or fatalistic than progressive and inclusive.

It turns out that to make a better future, you have to believe in a better future.

And to believe in a better future, it is essential that one appreciate the ways that the present too is better than the past. Scholars like Hans Rosling, Max Roser, Steven Pinker, and Charles Kenny have provided the empirical basis for that view, and suggest the social, political, and economic processes that have made that progress possible. What I want to talk about today is why that story is so important to tell.

Now, just so we’re clear, to recognize that by most social and material metrics, almost everyone in the world is better off today than they were a generation ago, much less a century ago, does not mean that there is not still enormous work to do. A billion people globally are still mired in deep agrarian poverty. Feudal and patriarchal social arrangements and traditions still condemn women around the world to second-class citizenship. Civil strife and famine, although on the decline, still plague many regions of the world. Even in the rich world, slowing economic growth and rising economic inequality threaten to upend the institutions and social norms that have provided the foundation for peace, prosperity, and comity since the end of World War II.

It is also true that solving old problems more often than not creates new ones. The era of cheap food and rising incomes that brought an end to hunger in the developed world also brought us higher rates of obesity. The fossil fuels that made modern societies possible brought us first smothering air pollution and now global warming. An increasingly educated populace, with an encyclopedia of answers to any query at its fingertips, has proven ever more adept at finding facts, some real, some alternative, to justify its prejudices, biases, and ideological priors.

But for the most part, the new problems are better than the old problems. Climate change is a major threat to human well-being, especially for the poor. But given the choice, poor nations still reliably elect to pursue fossil-fueled development in order to lift their populations out of poverty in the present and take their chances down the road with climate change. The techno-narcissism that fuels our increasingly uncivil democracy still trumps ignorance and autocracy. A beer gut, when all is said and done, is a better problem to have than a swollen belly.

Some would tell us that to recognize these facts risks complacency or the belief, as per Dr. Pangloss in Candide, that we live in the best of all possible worlds. But I want to suggest that the opposite is the case. To appreciate the world that our ancestors built, to express gratitude for the good fortune to have been born at a time when human prosperity, freedom, and possibility are greater than at any time since the remarkable human journey on this planet began 200,000 years ago, is precisely the posture that will be necessary to take on the great challenges that we face in the 21st century.

Meeting here in modern California, it is easy to forget that our great coastal cities, the great agricultural valley just to the east of us, and the great waterworks that have made life for 30 million people in this arid and beautiful place possible were built by people who were much poorer than we are.

But even after a great depression and a great war, they were optimistic about the future. They had seen the world torn apart and then transformed in the space of less than a generation. They understood from their own experience what progress was and what a common faith in a providential future could accomplish.

That same generation extended the franchise to all Americans and passed the laws that brought us clean air and water. And it transformed this valley from a sleepy mosaic of orchards and farms to a great center of technology and learning.

Today, by contrast, the apocalyptic style in American politics is ascendant. Many on the Left would have us believe that addressing climate change, racial inequity, and economic inequality requires nothing short of an end to capitalism, while the Right insists that we must “charge the cockpit or die.” Captivated by fever dreams of apocalypse, Left and Right compete to see which vision of a dystopian future can more effectively galvanize public revulsion at our democratic institutions.  

The danger is not so much that either vision will prevail. Advanced developed democracies are proving more resilient to demagogues and populists than many feared a year ago. And conservatives are learning today the same hard lesson that progressives learned during the Obama years. “Change we can believe in” requires engaging interests, values, and perspectives in the world that we might prefer to ignore.

No, the danger in these notions is that we lose our collective sense of gratitude for the remarkable world that those who came before us built, and, with it, a working knowledge of how they did so. When we become so consumed by the world’s problems and its failings that we can’t allow ourselves to see progress, we lose sight of our assets and capabilities. We become entitled, not empowered.

If the perfect is the enemy of the good, then gratitude, for the shared sacrifices that our forebears made, and for the progress that those sacrifices have brought, is the antidote to cynicism and grievance. And that will be our objective today: to deepen our commitment to the practice of progress by making a good world better, creating the conditions in which we can all become our best selves, and remembering in the face of great challenges how much we have already overcome.

Stuck in the S-Curve?

Here’s what solar generation looks like in six leading solar markets around the world:

The top five countries in solar penetration have each achieved 4.5% or more of their total electricity production from solar. Of these, Italy, Greece, Germany, and Spain have shown stagnation in their solar shares since 2014. California, a state with nearly four times the population of Greece, has beaten every country in the BP data set with solar penetration close to 10%, or 13% if distributed solar is also counted.

In Germany, what looked like exponential growth in, say, 2011 looks a lot more like logistic growth today (in other words, an S-curve rather than a steepening slope). Time will tell if California and Japan follow the trend on display in Germany, Spain, Greece, and Italy, but there are reasons to expect this saturation to plague solar around the world.

Fundamentally, the problem comes down to the economics of the power grid. The more solar there is on the grid, the more power is produced when the sun is out, which depresses midday wholesale prices and can force solar or some other power source to be curtailed. These losses cut into solar’s economics. For this reason, Jesse Jenkins and Alex Trembath have observed that solar tends to be limited at or near a penetration equal to its nominal capacity factor on any given grid. In other words, if the capacity factor of solar in California is about 27%, then California will have a hard time getting more than 27% of its electricity from solar.
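The capacity-factor heuristic can be sketched with a toy model: with flat demand and any excess solar curtailed, solar’s energy share equals its capacity factor exactly when nameplate capacity matches peak demand, and it grows only slowly (with mounting curtailment) beyond that. The hourly profile below is illustrative, not real grid data.

```python
import numpy as np

# Toy model of the "capacity factor threshold": flat demand, a stylized
# solar profile, and curtailment of any output above instantaneous load.
# The profile and numbers are illustrative, not real grid data.

hours = np.arange(24)
# Stylized clear-sky output per unit of nameplate: zero at night, peaking at noon.
profile = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)
cap_factor = profile.mean()  # ~0.32 for this shape

demand = np.ones(24)  # flat demand of 1.0 (arbitrary units); peak demand = 1.0

def solar_share(nameplate):
    """Fraction of daily demand served by solar after curtailing excess."""
    served = np.minimum(nameplate * profile, demand)
    return served.sum() / demand.sum()

# When nameplate equals peak demand, solar's share equals its capacity factor.
for n in (0.5, 1.0, 2.0, 4.0):
    print(f"nameplate = {n:.1f} x peak demand -> solar share = {solar_share(n):.2f}")
```

Doubling nameplate from 1.0 to 2.0 times peak demand raises the share only from about 0.32 to about 0.42 in this toy model, because a growing fraction of midday output must be curtailed.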

Now, 27% would be a lot! We should be so lucky: Greece, Spain, Germany, and Italy have all seen solar deployment saturate well below their own “capacity factor threshold,” at least for the time being.  

Of course, the picture for renewables looks better if we bring wind into the equation. But for similar reasons—they’re both intermittent generation technologies—wind shows a similar, if less pronounced, saturation:

Each of the top five wind-producing countries has achieved 15% or more of its annual electricity from wind. Of these, Denmark, Portugal, Ireland, and Spain have shown stagnation, while Lithuania continues to grow without yet showing a clear peak. Lithuania imports about two-thirds of the electricity it uses and has good grid interconnections and cost-competitive wind resources, creating the market conditions for high wind production.

Additional growth in variable renewable energy in the world’s leading countries could be difficult. One option for smoothing out the variability of renewable sources is connecting grids with high-voltage direct-current (HVDC) cables. So far, grid-level batteries have not been an economical way to store large amounts of energy. Cheaper lithium-ion batteries could change this, although recent grid-level trials have shown the economic challenges that remain.
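One way to see the economic challenge for batteries is a back-of-envelope levelized cost of storage: capital cost spread over every kilowatt-hour the battery delivers across its cycle life. All of the input figures below are assumptions chosen for illustration, not quoted prices from any trial.

```python
# Back-of-envelope levelized cost of storage, ignoring financing costs,
# O&M, and degradation. Every input figure is an illustrative assumption.

def lcos_per_kwh(capital_per_kwh, cycle_life, depth_of_discharge, round_trip_eff):
    """Capital cost per kWh actually delivered over the battery's cycle life."""
    delivered_kwh = cycle_life * depth_of_discharge * round_trip_eff
    return capital_per_kwh / delivered_kwh

# Hypothetical lithium-ion system: $300/kWh installed, 3,000 cycles,
# 90% usable depth of discharge, 85% round-trip efficiency.
adder = lcos_per_kwh(300.0, 3000, 0.90, 0.85)
print(f"Storage adder: ~${adder:.2f} per kWh delivered")
```

Under these assumptions the battery adds roughly $0.13 to every kilowatt-hour it time-shifts, several times a typical wholesale energy price, which is why both capital cost and cycle life have to keep improving for grid storage to pencil out.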

While the progress in wind and solar power has been a valuable achievement, excessive hype about renewable energy technologies creates false optimism and leads policymakers to foreclose other important solutions, such as advanced nuclear power and carbon capture and storage. It is therefore necessary to keep a pragmatic view of the limitations of renewable energy and the technological innovation and infrastructure that will be required to overcome those limitations.

Where’s the Fake Beef?

In this post I’ll take a quick look at the significant environmental benefits of meatless meat that we would forgo through knee-jerk opposition to genetic engineering.

As a matter of first principles, the activists’ opposition to the burger appears unfounded. The Impossible Burger doesn’t contain any genetically engineered ingredients. Rather, one of the ingredients, heme, is produced by a genetically engineered yeast—similar to the way insulin is produced for people suffering from Type 1 diabetes. The company has also voluntarily gone through a rigorous food safety review with the Food and Drug Administration that many companies avoid and is conducting further food safety tests.

But more importantly, the activists’ opposition may obstruct progress on one of the world’s top environmental challenges—reducing the impact of meat production. As Breakthrough’s Ted Nordhaus observed in USA Today last week, this is another unfortunate example of environmentalist symbolism obstructing environmental progress.

Most hamburgers are made from ground beef. The Impossible Burger mostly contains coconut oil and protein from wheat and potato. Comparing the resources and pollution involved with each food, as I do below, demonstrates the potential for meat substitutes to benefit the planet.

The United States consumes more beef in total than any other country: nearly 25 billion pounds of ground beef and other beef products each year, equivalent in weight to about 50 skyscrapers.

Regional Meat Consumption (per capita), 1961–2011

And beef has a disproportionately large environmental footprint. Pound for pound, it has a higher carbon, land, and water footprint than other widely consumed foods. For instance, pasture for cattle grazing takes up over 400 million acres in the US, and grazing land covers roughly a quarter of the world’s land area. Additionally, over one-third of the corn grown in the US is used for animal feed, with millions of acres of corn farmland devoted to growing feed for beef cattle. Some of this land has replaced or is encroaching on prairie, grassland, and wetland ecosystems.

Plant-based burgers and other meatless alternatives, like in vitro meat, promise the taste and experience of a beef burger with a far smaller environmental impact. Research has shown that producing an Impossible Burger requires one-quarter to one-tenth of the water use, greenhouse gas emissions, and land use compared to a typical beef-based burger. For instance, an Impossible Burger requires only a few square meters of cropland to grow the wheat, coconut, soy, and other ingredients involved. Other analyses of plant-based meats have found similar environmental benefits.

Hamburgers: An Ecological Comparison

Plant-based burgers could become widely popular if companies can improve their taste and texture and bring prices down. About one-quarter of beef in the US is sold by restaurants and fast food chains as burgers and ground beef. Imagine if these chains, as Google has done, replaced this beef with a 50:50 blend of beef and non-meat ingredients. Blending may get around the fact that plant-based burgers still don’t taste quite like beef burgers, and could substantially reduce beef demand.

How would this help the environment? Based on existing estimates, this replacement would reduce US agricultural greenhouse gas emissions by 24 to 58 million metric tons CO2 equivalent, or by 4.6 to 11.1 percent. Although not a huge percentage, it would be a meaningful change—the equivalent of taking 5 to 12 million cars off the road. Just as importantly, it could reduce total land use by 22 million acres, freeing up land for people and animals to enjoy.
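For the curious, the cars-off-the-road equivalence is simple arithmetic. The sketch below is my own back-of-the-envelope check, not the post's underlying analysis; it assumes EPA's commonly cited figure of roughly 4.6 metric tons of CO2 per typical passenger vehicle per year and a rough US agricultural emissions total of about 520 million metric tons CO2e.

```python
# Back-of-the-envelope check of the blended-burger scenario.
# Assumed inputs (mine, not from the post's underlying analyses):
CAR_T_CO2_PER_YEAR = 4.6   # EPA figure for a typical passenger vehicle, t CO2/yr
US_AG_EMISSIONS_T = 520e6  # rough US agricultural GHG total, t CO2e/yr

for cut_t in (24e6, 58e6):  # low and high reduction estimates from the post
    cars_millions = cut_t / CAR_T_CO2_PER_YEAR / 1e6
    percent_of_ag = 100 * cut_t / US_AG_EMISSIONS_T
    print(f"{cut_t/1e6:.0f} Mt CO2e ≈ {cars_millions:.1f}M cars, "
          f"{percent_of_ag:.1f}% of US agricultural emissions")
```

Run as written, this reproduces the post's ranges to within rounding: roughly 5 to 13 million cars and about 5 to 11 percent of agricultural emissions.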

With doctors, researchers, and even celebrities calling for people to eat less meat—but rates of pure vegetarianism or veganism stubbornly low—the Impossible Burger and other plant-based burgers seem like a win for people and a win for the planet. There is clearly an enormous potential for low-impact foods that taste like meat to satisfy Americans’ appetite and spare the environment. Are we willing to forgo these benefits because activists don’t approve of the genetic engineering technology used to make one ingredient?

2017 Breakthrough Dialogue East Registration

Sarah Evanega

Varun Sivaram

Jessica Jewell

Ariane de Bremond

Carly Vynne Baker

Deena Shanker

Amy Harder

Last Plant Standing

As we think about the future of power utilities, there is a lingering question as to how these large, but necessary, investments in infrastructure and low-carbon energy will be made in deregulated power markets. Southern Company is unique not because it’s a vertically integrated utility in a regulated market—most utilities in the South are—but because of its commitment to innovation and public-private partnerships, something that most utilities gave up in the 1970s.

Southern Company is one of the only utilities to have a dedicated research and development department, and it has invested over $2 billion in energy innovation since 1970. It invests in projects across the spectrum, from industrial battery storage and carbon capture to advanced nuclear. And it’s this investment in advanced nuclear in particular, something nearly unheard of among other utilities, that made its decision to stay the course on the Vogtle project more logical. Nuclear isn’t just another way to make steam for Southern—it’s part of a broader strategy to innovate across the power sector.

In 2016, Southern Company announced a partnership with GE-Hitachi to work on its sodium-cooled fast-spectrum PRISM reactor. It also announced a partnership with X-energy to develop another advanced nuclear concept, a high-temperature gas-cooled pebble bed modular reactor. Also in 2016, DOE awarded $40 million to a Southern Company-led public-private partnership to develop molten salt reactors, a partnership that includes Oak Ridge National Laboratory, TerraPower, the Electric Power Research Institute, and Vanderbilt University. Southern Nuclear has stated that it plans to have a demonstration reactor online by 2025, with commercial deployment in the early 2030s.

As forward-thinking as Southern’s strategy has been, the Vogtle plants have been plagued with difficulties. The new Vogtle reactors were originally scheduled to come online in 2016 and 2017 but, after a series of delays, are now not expected until 2019 and 2020 at the earliest. Westinghouse took over construction at Vogtle in 2015 when it purchased the previous contractor’s nuclear construction business, CB&I Stone & Webster, but CB&I then sued Westinghouse over the distribution of $2 billion in cost overruns across the four AP1000s under construction in the US, of which Vogtle’s share was $1.3 billion. Westinghouse’s miscalculations on timeline and project management led to many of these challenges, and Southern Company has stated that construction has been more productive since it took control in June.

Still, with prospects for increased government investment in energy innovation uncertain, the role of private investment in low-carbon technologies has never been more critical. While the success of the Vogtle project remains up in the air, dependent on extension of the production tax credit and additional loan guarantees, the significance of a public utility willing to make a big, long-term investment in clean energy innovation should be reason for hope.

But if the United States is ever to actually see a nuclear renaissance, we’ll need more than one innovative company and one successful plant. An advanced nuclear industry will likely mean smaller reactors, entrepreneurial firms, new leaders, and new policy.

See our recent report How to Make Nuclear Innovative for more.

Breakthrough Dialogue East 2017

Reducing the Environmental Impact of Global Diets

Accounting for nearly one-third of global land use and 14 percent of anthropogenic greenhouse gas emissions, meat production weighs heavily on the global environment, and beef production most of all. Comprising 41 percent of total livestock sector emissions, beef production is responsible for pasture that spans a quarter of the world’s land area and threatens further expansion in biodiversity hotspots like the Brazilian Amazon. It makes sense, therefore, that calls to reduce global beef consumption have gained increasing prominence among environmentalists as a way to decrease emissions and spare high-priority land.

But according to a new peer-reviewed paper in Science of the Total Environment—co-authored by Breakthrough’s Marian Swain, James McNamara, and Linus Blomqvist, along with ecologist William Ripple at Oregon State—demand-side efforts to reduce meat consumption will not be nearly enough to confront the environmental impacts of livestock over the next century, especially as meat demand in the developing world continues to grow.

Read the paper: Reducing the Environmental Impact of Global Diets

Improving the environmental efficiency of livestock production systems through intensification, on the other hand, while little discussed, holds significant potential to mitigate the impact of the sector. Intensive beef production in particular, in which cattle finish their lives on grain-based feeds in highly controlled environments, dramatically reduces time to slaughter, and thus methane emissions, as compared with extensive, pasture-only systems.

GHG emissions for animal and vegetable protein sources


Data source: Nijdam et al. 2012

Intensive beef production systems also boast lower land-use intensity than do extensive systems, an especially important finding for developing regions, where intensification could prevent pasture expansion and land conversion even as meat demand continues to rise.

Land-use intensity of animal and vegetable protein sources


Data source: Nijdam et al. 2012

There are certainly trade-offs to intensification, including excessive antibiotic use, local pollution, and animal welfare. But there are also surprising synergies that emerge from some of the practices that improve productivity, including selective breeding and modern veterinary care. With appropriate management and effective policies in place, intensification has the potential to vastly improve the environmental performance of meat production around the world. Dietary preferences will also play a role, but for the sake of global conservation and the climate, it will be essential to keep intensification in view.

 

To access the article, please email the authors directly.

 

Read more:

Decoupling without Disconnection

We live in an increasingly urbanized world. More than half of the people on the planet live in cities. By 2050, it is predicted that 64 percent of the developing world and 86 percent of the developed world will be urbanized.1 Cities will continue to be the centers of global power, and the voting patterns, consumption decisions, and other political and economic practices of urban citizens will shape conservation in the 21st century. The urban is where a democratic politics of conservation for the Anthropocene needs to start.

Some argue that urbanization is good news for conservation. Aging rural populations and accelerated urban migration can lead to land abandonment. As farming ceases, new spaces are opened up for conservation territories and rewilding.2 Urbanization also permits the “decoupling” of society from environmental damage by reducing inefficient, traditional activities and enabling agricultural intensification and resource substitution—like a shift from wood fuel to renewable energy.3 The North American experience suggests that urbanization can drive support for conservation as well. People only become interested in the intrinsic value of wildlife once they are freed from precarious and tedious agricultural living.4

Proponents of the environmental benefits of urbanization have focused less, however, on the everyday lives of urban residents. While urbanization is generally associated with improvements in health and life expectancy,5 it can also lead to increases in certain diseases. Urban diseases come in different forms with stark and unequal geographies. New arrivals in poor areas of cities in the global south may be exposed to increased levels of infectious disease and environmental pollution.6 Urban residents in cities in the global north, meanwhile, have higher rates of allergy, autoimmune, and inflammatory diseases.7 These trends have been linked to changes in the microbiome associated with modern, urban hygiene practices (like water treatment, sanitation, caesarean sections, and antibiotic use).8

Urbanization clearly reconfigures our ecological connections to the environment. Some argue that urban life also leads to pathological social and psychological disconnections from nature. Urban residents all over the world have fewer educational encounters with the types of wildlife that concern conservationists, and they develop less working knowledge of the processes of food production. Some go so far as to suggest that urban residents are alienated and suffer from “ecological boredom”9 or “nature-deficit disorders”10 that are manifest in a range of mental health conditions.

There is seemingly a paradox in this story: we might want to make space for nature in the countryside by encouraging people to move to the city, but the urban is presented as a deeply unnatural place to live. I make this argument not to romanticize rural life, but to identify a deep ambivalence about the urban in different strands of contemporary urban, health, and environmental theory. Looking ahead, the challenge for academics and policy makers is to identify and support ways of decoupling society from environmental impacts, without disconnecting groups of people from beneficial encounters with nature and from democratic involvement in the political process through which these might be sustained.


Decoupled, urban living doesn't have to be a post-nature wasteland.

One way to move beyond the paradox of urban disconnection is to avoid a binary spatial imagination in which the world can be divided into clear urban or rural spaces. In reality, the urban has a much more varied geography. There are urban villages, urban farms, parks and gardens, and post-industrial urban wilds. There is also a wide range of ecological networks through which urban spaces are connected to their rural hinterlands. Rivers, canyons, ridges, railways, roads, and other bits of infrastructure all bisect the city and allow wildlife into and out of urban areas. Finally, some urbanization happens through low-density suburban sprawl. The urban and the rural are not so easily separated.

This more varied geography offers different ways of thinking about the place of the urban in conservation and about the opportunities for connecting urban residents with environmental management. Encounters between people and wildlife in the city can give some pointers for a democratic model of conservation in which citizens have access to nature and a stake in how it is managed. There are many different urban public green spaces from which one could start. Here, I focus on three types, using examples taken from North American and European cities.

The first is municipal parks—like Central Park or Golden Gate Park, and a range of less famous examples. Many of these were created during the rapid expansion of cities during the Industrial Revolution. Green spaces were set aside as a result of political pressure by social reformers and public health advocates concerned about the mental and physical health of the urban industrial working classes.11 In some ways we can understand these urban parks as a legacy of 19th- and 20th-century concerns about the pathologies of urban life. The provision of urban green space has since moved in and out of fashion in urban planning. It declined during the early- to mid-20th century, when the emphasis was on industrial development, private suburban gardens, and concrete and high-density vertical living in city centers.12 A focus on public green space returned toward the end of the 20th century in association with the rise of “new” or “green” urbanism.13 This movement emphasizes lower-density, “liveable cities,”14 the restoration of former industrial sites (like London’s Olympic Park or the High Line in New York), and planning obligations toward the building of green infrastructure—like green walls and roofs.15


The High Line in New York City. Image credit: David Berkowitz.

Parks and reclaimed urban green space are often found in more affluent parts of cities, and may serve as a catalyst for raising property prices and for urban gentrification. Park budgets have been especially strained during the recent period of austerity in public finances, and the management of many parks is shifting from municipal or state control to community or trust arrangements. Some parks are being forced to pay for themselves through the hosting of private events, or even the introduction of entry charges. Nonetheless, the history of parks demonstrates how valuable land was set aside to provide public spaces for the urban poor. Space was created in which people living in densely packed urban housing could connect with nature and gather to relax, converse, exercise, protest, and engage in a wide range of civic practices necessary for the successful functioning of a democratic society.

A second example would be community gardens and allotments. These are parcels of land set aside or reclaimed from urban development and allocated to local residents to grow food. Allotments have a rich political history in the UK.16 In rural areas, they emerge from the backlash against the 17th-century enclosure of common land, and the Diggers movement that asserted the “right to dig.” Urban allotments emerge from political forces similar to those that drove the creation of urban parks. They are strongly linked to political anxieties about the implications of urbanization and industrialization for the laboring poor. Before the advent of the welfare state, allotments were allocated as a cheap way to feed households on incomes below a living wage. Demand for allotments is strongly indexed to periods of economic crisis and austerity, reaching peaks in the UK during World War I and again as a result of the World War II “Dig for Victory” campaign. Demand for allotments fell away toward the end of the 20th century, but has seen a resurgence in the last two decades as a result of the rise of environmentalism amongst largely middle-class urban publics. Even in the current era of cheap food, demand for allotments outstrips supply in many UK cities.

Allotments take the form of small private spaces, arranged in such density that they require communal interaction. In contrast, many community gardens are managed in common, with the deliberate aim of engendering social inclusion and cohesion. Community gardens have provided important social and agricultural spaces for immigrant groups new to cities. They enable the unemployed, elderly, those with mental health problems, and other vulnerable groups to develop skills, relax, and socialize.17 Community gardens have also been used as means to protest against unwanted urban development and to begin the regeneration of areas scarred by industrial decline, middle-class suburban flight, and housing abandonment. In short, while allotments and community gardens are not a means to enable urban self-sufficiency, they can help fill some local food gaps. More significantly, they make it possible for sometimes-marginal citizens to gather, socialize, and organize in public space, and to maintain a productive connection to the environment.


A community garden in New South Wales. Image credit: https://www.flickr.com/photos/d-olwen-dee/8201694319/in/album-72157632057433482/.

Third, and finally, is the less celebrated role of what the British naturalist Richard Mabey calls the “unofficial countryside”18: the scruffy bits of land on the edges and amidst the infrastructure of the city. These may become parks and community gardens, but they currently stand as neglected “edgelands,”19 feral spaces that are outside private or state control. They include post-industrial brownfield sites, disused parking lots, old graveyards, military installations, and the like. These spaces are often denigrated as toxic wastelands, or eyesores. They may be sites of transgression and illegal activity, where squatters, travelers, and the homeless make informal settlements. These are areas planners, policymakers, and many publics would rather not see, where those at the very bottom of urban society are left to decay. There is little that is aspirational or affirmative in popular perceptions of these places.

But in some instances, these sites come to comprise an urban commons with important democratic and ecological potential. The biographies of contemporary British natural history writers are littered with accounts of their transformative childhood experiences in such locations.20 Postwar bombsites, abandoned factories, power stations, and railway lines were the places in which young, working-class nature enthusiasts learned to explore and discover the natural world. These were accessible places close to home, and full of interesting wildlife. They offered places for a feral, anti-establishment model of urban natural history that defined itself in opposition to the perceived elitism and exclusivity of rural nature reserves and the privatized landscapes of the agricultural countryside. Writing today, these authors and activists are concerned that the forms of green urbanism and ecological restoration described above might tame these spaces, “greenwashing”21 them into monotonous parks and surveilling and sanitizing them such that they deter youthful adventure. Some environmental NGOs and educationalists couple these trends with the marked decline in the time children spend unsupervised outdoors. They predict a broader crisis in popular understanding and support for conservation amongst urban citizens.22


“‘The unofficial countryside’: the scruffy bits of land on the edges and amidst the infrastructure of the city.”

Urbanization is a seemingly inevitable force with significant global environmental potential. While urban areas have their problems, the urban is not necessarily a space lost to nature. Nor will urbanization inevitably lead to human disconnection from the environment. Humans, plants, and animals can thrive in cities, and urban green spaces offer diverse examples of ways in which citizens can be politically connected through environmental management. It is possible to decouple society from damaging environmental processes without disconnecting people from the political and ecological processes of living on a planet marked by human impacts. It is in the urban that we might find forms of conservation that recognize common claims to land and democratic models of land management fit for the Anthropocene.

Can We Love Nature and Let It Go?

Decoupled, Not Detached

A similar criticism came from Giorgos Kallis, a professor at the Universitat Autònoma de Barcelona and a prominent degrowth advocate, in an online response. “It seems that the manifesto calls for less material connection and more ‘emotional’ connection to nature. Yet it is unclear how the latter will come without the former in the urban, genetically modified paradises envisaged. Playing Tarzan video games?”

The manifesto had made clear that what we meant by decoupling referred to our material dependence on nature, not our spiritual or emotional connection to it. Camping, gardening, wilderness, and parks were good. Trying to grow all our food with low-productivity smallholder farms was not. But that seemed not much to matter to these and other critics. Conservationists whose lives had literally been dedicated to walling off nature by creating nature preserves and various other sorts of protected areas attacked the manifesto for proposing to do exactly that.

In part, this was because much of the focus of the manifesto was on measures that would improve resource productivity such that humans would need less nature to meet their needs, rather than on placing restrictions upon human activities. In part it was because we did not suggest degrowing the economy or otherwise restricting human living standards or economic prosperity as a key measure to protect the environment.

But perhaps more than anything, the claim that ecomodernists propose creating a fully synthetic world, entirely detached from nature, was a useful fiction, allowing its purveyors to recast the pragmatic and empirical environmental case for intensification of human settlements, food, and energy systems as a recipe for a dystopian future. This idea, that technology alienates us from nature, and hence our authentic selves, has animated environmental thought and ethics from its 19th-century origins onwards, a conviction drawn, in turn, from the movement’s Romantic antecedent. Humans in nature are authentic. Humans using technology to mediate, manipulate, or modify nature are alienated.

It is this dichotomy, more than anything, that explains the revulsion at the manifesto shared by both traditional conservationists who deify nature and political ecologists like Kallis who reject concepts like nature and wilderness altogether. Human technological progress cannot be the environment’s salvation because it is, a priori, its antithesis. If we don’t depend on nature for our sustenance, in this view, we won’t value it for its intrinsic worth.

In a new essay for the Breakthrough Journal, environmental writer Emma Marris challenges this idea on social, philosophical, and practical grounds. She reminds us that modern conservation ethics only emerge once people have been liberated from the hard physical labor of scraping a living from the natural world, and that it is precisely the Tarzan video games, National Geographic specials, and other sorts of virtual experiences of nature that Kallis mocks that account for much of our contemporary appreciation of the natural world. And she argues that appreciating nature for its intrinsic value remains a deeply anthropocentric act. We value nature neither because some ethicists have decided that all creatures have “existence rights” nor because forests provide water filtration services (Marris reminds us that just about any kind of forest, including a tree plantation, can provide those services just as well) but because in uncountable and ineffable ways, having nature in our lives enriches our experiences and our physical and emotional well-being.

But beyond that, Marris offers an explicit and specific vision for precisely how we might decouple human material well-being from nature while maintaining, indeed deepening, our spiritual and emotional connections to it. She calls this framework “interwoven decoupling,” envisioning urban centers pulsing with human and natural life, “interwoven” with urban farms, community gardens, city parks, wilder, untended lots, and other “tendrils of nature” reaching in from the wilderness beyond the city’s reach.

But having these things also means recognizing precisely what function they serve—not any meaningful form of production, but rather connection. A world with both more wilderness and small farms, gardens, parks, and urban wildlife that connect us, as we put it in the manifesto, “to our deep evolutionary history” is only possible if the vast majority of food that we produce is grown and raised through hyper-efficient forms of agriculture. “The paradox of the ‘natural upscale’ lifestyle,” Marris writes, is that modes of living and farming that remain tightly coupled with nature—eating local, organic, grass-fed, and the like—are simply less efficient modes of production and hence, by most metrics, worse for the environment.

Marris’s essay is an important read for all who care about conservation and who wish to deepen human connection and commitment to an ecologically vibrant planet. It is also, of no less import, a useful corrective to those who suggest that there is no alternative to the binary choice between recoupling human societies to natural systems or accepting a future that is sterile, synthetic, and fully detached from nature.

As always, your thoughts and responses are welcome.

Farming Better

With world population expected to exceed nine billion people by 2050, global food production needs to increase dramatically. One of the central challenges of this century will be to feed this growing population without degrading the natural environment. Food production already accounts for over 25 percent of anthropogenic greenhouse gas emissions, takes up more land than any other human use, is a primary cause of habitat loss, and contributes to several types of water and air pollution that harm humans and ecosystems.

Some advocates and academics have proposed that low-input production such as organic agriculture could feed the world. Many assume that growing food with less synthetic fertilizer, energy, and other inputs would leave a smaller environmental footprint. But a growing body of research suggests that the reality is more complex.

In a recent study, researchers Michael Clark and David Tilman analyzed over 700 food production systems, assessing greenhouse gas emissions, land use, energy use, and contributions to water and air pollution in order to compare the relative merits of different systems. These included conventional farms, which often use synthetic inputs such as chemical fertilizer and pesticides, and organic farms, which abide by organic certification standards and forgo those synthetic inputs.

Notably, they found that organic agriculture requires more land, contributes to greater eutrophication (a harmful type of water pollution), and generates similar greenhouse gas emissions per unit of food as conventional agriculture. This is because most organic farms are less productive and use a lot of manure to fertilize their crops. Lower productivity means that organic farms need more land to produce the same amount of food as conventional farms. And while using manure is a resourceful way to recycle waste, it has its downsides. Manure provides nutrients to crops less efficiently than conventional fertilizer, and thus a greater portion of it seeps into waterways and is emitted as nitrous oxide, a potent greenhouse gas.

Clark and Tilman’s work therefore suggests that a dramatic and widespread shift from conventional to organic agriculture and other low-input systems will not reduce many of agriculture’s environmental impacts. This is not to argue, however, for the status quo. Conventional farming has its share of problems; Clark and Tilman find that conventional farms use more energy and fossil fuels than organic farms and note that they tend to apply high amounts of pesticides that harm humans and ecosystems.

This underscores the importance of accelerating sustainable intensification. We need to help all types of farmers—organic and conventional alike—to adopt practices and inputs that improve their productivity while minimizing their environmental impacts. Some of these practices are already common among organic farms, as others are among conventional ones. Cover cropping and multi-cropping, for example, common in organic agriculture, can increase productivity and reduce land use and nutrient loss. Likewise, emerging precision agriculture technologies that many people associate with large-scale conventional agriculture, such as fertigation and variable-rate fertilization, have already begun to reduce agriculture’s environmental footprint while boosting yields.

Existing research indicates that developing and spreading such best practices will require greater investment in research and efforts to provide farmers with the information and inputs they need. Where this is not enough, Clark and Tilman point to reducing food waste and meat consumption. “It’s essential we take action,” Tilman says, “to increase public adoption of low-impact and healthy food, as well as the adoption of low-impact, high-efficiency agricultural production systems.” In order to do so, it will be critical to think beyond the dichotomy of conventional versus organic, and focus instead on the uptake and development of better ways to farm.

Breakthrough Dialogue East Announced

We’re bringing the Breakthrough Dialogue to Washington, DC!

For the last seven years, the Breakthrough Institute has hosted a unique conversation with scholars, technologists, business leaders, philanthropists, and policy-makers in the Bay Area about how to build a future that is good for people and the environment. The Dialogue has been described as the anti-Davos, a place where leading thinkers on energy, conservation, farming, and innovation from across the political spectrum ask hard questions of their own assumptions, philosophical commitments, and ideological priors.
 
This November, we’re bringing that conversation to the East Coast. In the face of new global environmental challenges, and at a moment of intense political polarization in the United States, we hope you’ll join us for a day designed to help us all step away from the policy debates and political controversies of the moment, to consider together what we really know about the relationship between human well-being, environmental change, technological progress, and economic and political modernization.
 
Past Dialogues have helped launch cross-cutting new initiatives to develop advanced nuclear energy technologies, promote technological innovation and infrastructure planning to advance biodiversity and conservation, and develop an environmental politics that explicitly rejects both neo-Malthusianism and pastoral romanticism.
 
Topics will include energy transitions, agriculture for nine billion people, conservation on a used planet, and top-down versus bottom-up environmentalism.
 
 

A Glimpse of the Breakthrough Dialogue:

Charles Mann

Breakthrough Dialogue 2017

Breakthrough Dialogue 2017: Democracy in the Anthropocene took place on June 21–23, 2017

In a world in which humans have become the dominant ecological force on the planet, good outcomes for people and the environment increasingly depend upon the decisions we collectively make. How we grow food, produce energy, utilize natural resources, and organize human settlements and economic enterprises will largely determine what kind of planet we leave to future generations. Depending upon those many decisions, the future earth could be hotter or cooler; host more or less biodiversity; be more or less urbanized, connected, and cosmopolitan; and be characterized by vast tracts of wild lands where human influences are limited, or by virtually none at all.

If the promise of the Anthropocene is, to paraphrase Stewart Brand’s famous coinage, that “we are as gods,” and might get good at it, the risk is that we are not very good at it and might be getting worse. A “Good Anthropocene” will require foresight, planning, and well-managed institutions. But what happens when the planners and institutions lose their social license? When utopian civil society ideals conflict with practical measures needed to assure better outcomes for people and the environment? When the large-scale and long-term social and economic transformations associated with ecological modernization fail to accommodate the losers in those processes in a just and equitable manner?

If the enormous global ecological challenges that human societies face today profoundly challenge small-is-beautiful, soft energy, and romantic agrarian environmentalism, the checkered history of top-down technocratic modernization challenges its ecomodern alternative. It is easy enough to advocate that everybody live in cities, much harder to achieve that transition in fair and non-coercive fashion. Nuclear energy has mostly been successfully deployed by state fiat. It is less clear that it can succeed in a world that has increasingly liberalized economically and decentralized politically. Global conservation efforts have become expert at mapping biodiversity hotspots but still struggle to reconcile global conservation objectives with local priorities, diverse stakeholders, and development imperatives in poor economies. Rich-world prejudices about food and agricultural systems, meanwhile, frequently undermine agricultural modernization in the poor world.

Where contemporary environmentalism was born of civil society reaction to the unintended consequences of industrialization and modernity, the great environmental accomplishments of modernity—the Green Revolution, the development and deployment of a global nuclear energy fleet, the rewilding and reforestation of vast areas thanks to energy transitions, and rising agricultural productivity—proceeded either out of view or over the objections of civil society environmental discourse. Today, the Green Revolution, nuclear energy, and the transition from biomass to fossil energy are broadly viewed as ecological disasters in many quarters, despite their not insignificant environmental benefits.

At the 2017 Breakthrough Dialogue, we tackled these questions head-on. Attitudes towards urbanization, nuclear energy, GMOs, and agricultural modernization are beginning to shift, as the magnitude of change needed to reconcile ecological concerns with global development imperatives has begun to come fully into view. Can a Good Anthropocene be achieved in bottom-up, decentralized fashion? Can there be a robust and vocal civil society constituency for ecomodernization? What should we do when not everyone wants to be modern, and what is to be done when political identities and ideological commitments trump facts on the ground? If it turns out, in short, that we’re not very good at being gods, is it possible to get better at it? 

Breakthrough Paradigm Award 2017: Calestous Juma

Ted Nordhaus presents Calestous Juma with the 2017 Breakthrough Paradigm award.

 

Plenary Sessions

Democracy and Ecomodernism

Human societies have alternated between scarcity and abundance, disruptive social change and stable institutions, periods of peace and prosperity and episodes of violence and collapse for as long as there have been human societies. Over the last few centuries, and especially since World War II, the world has become increasingly stable, thanks to rising affluence, democratization, and shared investments in science, technology, and infrastructure. But today, many of the institutions that have made those arrangements possible appear to be under assault, from illiberal populism on the right and postmodern relativism on the left. With faith in social authority, government, and politics waning in many parts of the world, what will be necessary to sustain social, economic, and environmental progress?

Speakers:

  • Steven Pinker, author, The Better Angels of Our Nature
  • Ruth DeFries, professor, The Earth Institute at Columbia University

  • Nils Gilman, vice president of programs, Berggruen Institute

  • Moderator: Oliver Morton, senior editor, The Economist

 

When Is Big Beautiful?

The failure of systems and institutions that are too big to fail has become a cautionary tale for our age: of systems too complex and large-scale to manage, authority too remote to account for local conditions or incorporate local knowledge, and technocrats too swept up in their own overweening ambitions. Stoked by Hayekian fears of collectivization on the right and Schumacherian dreams of localized economies on the left, we have talked small across the political spectrum for the last half-century even as the breadth and scope of human enterprises, the complexity of our technological systems, and the scale of political institutions necessary to manage both have only grown. Few voices have been willing to defend bigness. But on a planet of seven-going-on-nine billion people, is big inevitable? The long-term shift toward greater centralization—mega-cities, centralized electrical grids, industrial agriculture—has brought not insignificant benefits alongside the high-profile risks and failures with which so much contemporary debate seems obsessed. How should we balance our desires for greater control and “small-d” democracy with the increasing scale of social organization in the Anthropocene?

Speakers:

  • Robert Atkinson, president, Information Technology and Innovation Foundation
  • Luis Bettencourt, director, Mansueto Institute for Urban Innovation at the University of Chicago

  • Susanna Hecht, professor, UCLA and Geneva Institute for Advanced Study of International Development

  • Moderator: Alex Trembath, director of communications, The Breakthrough Institute

 

The Ecomodern Economy

Ecomodernism envisions a dematerializing economy that makes fewer demands upon natural resources and ecosystems. Promising trends suggest such a future might be possible. Population growth is slowing and is flat or even falling in many parts of the world. Material consumption for many goods and services is saturating in developed economies. Rising resource productivity has enabled us to produce greater material output with less material input. But what are the consequences of these developments for the economy? Slowing population growth and saturating demand for goods and services have brought slower economic growth. Rising resource productivity has been closely coupled with rising labor productivity. The dark side of slowing economic growth, demand saturation, and rising resource and labor productivity could be secular stagnation, the jobless recovery, and rising inequality. What will be necessary to assure that a resource-efficient, decoupled global economy will be a prosperous or equitable one? Could nature-liberating technological change be as great a threat to shared economic prosperity as efforts to restrict economic activity in the name of the environment?

Speakers:

  • Erik Brynjolfsson, co-author, The Second Machine Age 

  • Andrew McAfee, co-author, The Second Machine Age

  • Dario Gil, vice president of science and solutions, IBM

  • Moderator: Eduardo Porter, columnist, The New York Times

Eating Ecologically

The last decade has seen growing attention to the sustainability of food systems, with much of the discussion led by prominent chefs, food critics, and journalists. The basic argument has been that food that is organic, local, grass-fed, and wild is more sustainable than that produced through the large-scale, industrial systems that dominate food production today. However, in recent years a number of careful studies have begun to suggest that the sustainable choice might not be so obvious. Humans use roughly 40% of the ice-free land on the planet to grow food and raise livestock, comprising the vast majority of our direct land footprint, and the expansion of cropland and pasture is the biggest driver of biodiversity loss. In this panel, we take a hard look at what kind of farming systems might practically bring the best outcomes for both people and the environment. Do the largely arbitrary definitions of “organic” and “conventional” serve us? How might we apply the best attributes of industrial and organic production systems in order to produce more and healthier food with lower attendant impacts on the environment?

Speakers:

  • Jayson Lusk, Regents Professor and Willard Sparks Endowed Chair, Oklahoma State University

  • Danielle Nierenberg, president, Food Tank

  • Pedro Sanchez, professor, Institute for Sustainable Food Systems at the University of Florida

  • Moderator: Tamar Haspel, journalist, The Washington Post

Democracy and Conservation

The historic legacy of conservation has often been characterized by the displacement of people, truncation of rights, and blaming of victims. The litany of confusions and tragedies that have arisen from well-intentioned conservation efforts is long and ongoing. What would a progressive pro-people conservation look like? How might win-win conservation strategies facilitate the movement of people out of conservation zones while assuring them better land, water, and living conditions, and accelerate land-sparing modernization and access to markets and infrastructure?

Speakers:

  • Krithi Karanth, conservation scientist, Wildlife Conservation Society

  • Stephanie Romañach, research ecologist, Wetland and Aquatic Research Center

  • Jamie Lorimer, professor, University of Oxford

  • Moderator: Paul Robbins, director, Nelson Institute for Environmental Studies at the University of Wisconsin-Madison

 

Concurrent Session Topics:

Biotech and Conservation

To date, biotechnology has mostly been deployed in the agriculture sector, but now the same genetic tools are being harnessed for conservation outcomes. Scientists are working on novel applications of gene editing to improve the natural world’s resilience to human threats—for example, by altering genes to improve genetic diversity in endangered species. New techniques like gene drive also open the door to radical ecosystem interventions like killing off rodents that threaten native bird species, or eliminating the mosquitoes that spread Zika virus. “Genetic rescue” even offers the possibility of bringing back extinct species like the passenger pigeon or woolly mammoth. New genetic tools are emerging faster than the conservation community can debate them. Should humans intervene at the DNA level to preserve biodiversity, or leave nature alone? What is the state of the art with these cutting-edge technologies, and how should we understand the ethical and environmental debates that rage around them?

 

Climate Policy Beyond Two Degrees

Sometime later this century, global carbon emissions will push atmospheric temperatures above the international target of 2°C above preindustrial levels. Ignoring this basic reality has allowed experts to craft goals and policies that do not consider a post-2°C world. Considering that world is the work of this panel. What sorts of social and economic impacts can be expected with warming above this target? How can climate policy shift to embrace a more pragmatic approach to resilience and adaptation on a hotter planet? Should we adopt a new target—2.5°C? 3°C?—or abandon targets and timetables altogether?

 

What’s the Next Wave of the Energy Access Push in Africa?   

Energy is central to modern life, yet Africans consume just a tiny fraction of the energy used by citizens of wealthier regions. Demand for affordable and reliable access to energy is growing rapidly across the continent, driven by both economics and politics. In both Nigeria and Ghana, for instance, electricity has become a visible and powerful signal of the government’s ability (or not) to deliver jobs and a better life. In both countries, the current government won recent elections in part on the back of frustration at blackouts and slow progress on electrification. Our panel will consider different future energy paths for African countries and the linkages to job creation, public expectations, and democratic accountability.

 

Is Centrism Dead?

The problem of America’s widening political polarization remains stubbornly unsolved. It is universally acknowledged that the Left and Right no longer share any basis for policymaking or even understanding of reality, but how to mitigate this—whether to return to a previous era of bipartisan cooperation, or to forge a new framework to break through hyper-partisanship—is an open question. Some organizations have proposed explicitly centrist, bipartisan, moderate, and/or unlabeled political coalitions that take the “best of both sides” to defuse partisan tensions. Others reject this approach, insisting that the only way forward on policy priorities (for the Left or the Right) is to soundly defeat the opposition. The election of Donald Trump threw America’s polarization into the sharpest relief yet, raising the question: is centrism dead? It is an essential question for a 21st-century American politics, in which social policy, health care, a shifting labor force, and environmental challenges all require innovative and unprecedented government action. Can we move forward as a nation with our politics as broken as they are?

 

Science of Communication

Disagreements around scientific issues seem to be growing as a source of contention in political discourse. A lack of science literacy is blamed for a lack of action on a range of environmental and public health issues. Many advocates are calling for more science education and better science communication, and are demanding respect for the field of science and for scientists’ expertise. But how we communicate is also a science, with a dynamic and growing understanding of the best ways to engage the public on complex technical issues. In this session, we’ll explore what the latest science says about how best to communicate around these issues.

 

Values in Science and Policy: Whose Nature?

Social values are inextricably entangled in environmental science and policy alike. Whether in shaping how wilderness is managed, what role science should play in policy, or which people and priorities ought to be privileged in land-use decisions, environmental questions are inevitably values questions. But acknowledging that these values exist is one thing; grappling with how they should be accommodated in our environmental decisions is another. Practically speaking, how should diverse and competing values be identified and brought into conversation? What institutions offer promising practices for integrating the values of various stakeholders with scientific expertise in a productive, actionable fashion? And how can we do so at a moment when the “commons,” expert credibility, and democratic institutions appear to have lost favor?

 

Decarbonization Beyond the Power Sector

While much of the conversation on global decarbonization focuses on electricity, the power sector is only responsible for about a quarter of global greenhouse gas emissions. In comparison with other sectors, reducing emissions in the power sector seems easy. In this session, we’ll explore the options and challenges for decarbonizing heavy industry, transportation, and agriculture.

 

YIMBY (Yes In My Backyard)

The world’s population is continuing to grow and urbanize, and the need for strong cities has never been greater. The way we build cities has profound economic, environmental, and social implications. As ecomodernists, we recognize the centrality of cities to conservation, innovation, and human prosperity.

The cost of renting and home buying continues to escalate, leading to an affordability crisis in many cities around the world, especially in hot markets like San Francisco and New York City. The YIMBY movement believes the problem is that cities have too little housing and that the solution is to build more housing. But is it really that simple? Our panel of experts will discuss the political and economic challenges surrounding the housing market. They will address whether we can build enough housing to meet demand without unacceptably impinging on the needs and desires of established residents.

 

Bottom-Up Meets Top-Down in Africa

There are large, seemingly intractable tensions between African aspirations for bottom-up democracy and the centralizing tendencies of strong development states. How can aspiring African development states move fast to deliver (often highly centralized) modernization projects while retaining the legitimacy of their populations, many of whom have well-justified skepticism toward centralized power following long legacies of its abuse? How can governments balance the need for public buy-in with the imperative to move quickly and decisively—in particular, where it concerns large infrastructure projects that necessarily involve trade-offs between the collective good and groups of people who will lose out from modernization, at least in the short term (for example, those displaced by the construction of dams or the conversion of informal urban slums into new housing developments)?

 

Nature Needs Half: The Ethics and Practicalities of Protecting Half the Earth

Building on provisions set out in the Convention on Biological Diversity to increase global protected areas, a growing number of scientists argue that only by setting aside half of the Earth’s surface, land, and water for biodiversity conservation can we ensure the survival of the world’s species, habitats, and the ecological services they provide. However, increasing the amount of land protected globally from today’s 15% to 50% by mid-century could have consequences for people, which in turn could jeopardize conservation objectives, unless due consideration is given to human development and the rights of local communities. This session will explore the challenges, opportunities, and risks of realizing the Nature Needs Half vision. What does it mean for communities living at the frontline of conservation? What are the implications for how we develop and manage human-dominated landscapes like farmland and cities? And how should it be done to ensure that both people and nature benefit?

 

Emerging Nuclear Economies

Over forty countries around the world are interested in starting commercial nuclear power programs. These countries come from a broad range of geographies, economies, and governing structures. As the United Arab Emirates prepares to open its first commercial reactor this year, which countries will be next, and what challenges will they face? On this panel, we’ll explore why new countries are interested in developing nuclear power and the biggest challenges they face.

 

What Green Can Learn from Pink: Five Lessons from the Success of the Gay Rights Movement for the Environmental Movement

Over the last 25 years, during which the politics of climate change have become ever more polarized, there has been a sea change in attitudes toward gay and queer Americans. How did LGBT rights win, while environmentalists have lost ground? There are important overlaps: how voters identify in terms of LGBT rights and climate change have become core questions of political and personal identity, and there are extremely well-funded social movements to advance change on both issues. But there may be limitations to the comparison—the economic and technical obstacles to climate action have no parallel in the LGBT debate. On this panel, two veterans of LGBT rights campaigns and two veteran observers of the climate wars will discuss what Green can learn from Pink and what lessons might not apply.

Global Conservation on a Used Planet

In so doing, Ellis has complicated two longstanding environmental ideas: first, the notion that human transformation of the biosphere is a relatively recent development; and second, relatedly, that there is some scientifically discernible baseline to which nature, as distinct from humanity, might be returned.

For doing so, Ellis has been demonized in some quarters, accused of counseling complacency in the face of new and catastrophic ecological threats. If human transformation of the planet is an age-old phenomenon, then why worry about present-day affronts to the environment? And if there is no baseline or original state to which nature might be returned, then why bother with conservation?

In a new Breakthrough Journal essay, Ellis offers a convincing case for why we should care about conservation, what it will take to preserve our natural inheritance as human societies evolve through the 21st century, and how conservation must be understood to be an ongoing and long-term social and cultural project. Far from being sanguine about the loss of biodiversity, Ellis challenges us to create a future rich with wildness. “If you aspire to live on a planet where wild creatures roam unhindered across wild landscapes,” he writes in the opening paragraphs, “this is not the planet you are making.”

Ellis argues that because Nature has co-evolved with human societies, it is as much a product of human social and cultural evolution as are our societies. For this reason, global conservation, he suggests, “is far more likely to emerge as a shared social project evolving from the bottom-up aspirations of the world’s people, their societies, and their dynamic environments over the very long term” than through top-down technocratic initiatives. That will depend in turn, Ellis says, on our ability to inspire and promote collective aspirations based on “the abiding human love and concern for wild nature.”

Ellis’s detractors have predicated their attacks on his work upon a rhetorical sleight of hand, transforming Ellis’s conviction that continuing social learning and technological innovation might allow for good outcomes for people and nature in the Anthropocene into the much stronger statement that those capabilities will assure good outcomes.

Ellis’s latest piece should establish once and for all that he makes no such claim. Good outcomes in the Anthropocene will depend upon the decisions that humans collectively make. Will we be able to grow food and produce energy in ways that will allow us to leave more room for nature? And will we choose to do so with the foresight to not only leave more room for nature but also to connect those areas across human-dominated landscapes that will continue to be densely settled and intensively farmed, so that biodiversity can flourish on a used planet with a changing climate?

Ellis’s vision is expansive and ambitious. But it also recognizes that there is no path toward global conservation objectives that can avoid the messiness, uncertainty, compromise, and negotiation that all democratic politics requires. Your thoughts, comments, and responses as always are welcome.

Read more from Breakthrough Journal, No. 7
Democracy in the Anthropocene
Featuring pieces by Erle Ellis, Emma Marris, Calestous Juma, Jennifer Bernstein, and Siddhartha Shome

Nature for the People

In Search of a Feminist Environmentalism

The proximate target of Bernstein’s opening stanzas is lifestyle gurus like Michael Pollan, who urge women back into the kitchen in search of social connection, domestic harmony, and healthy families. But her broader beef is with an environmental ethic that continues to feminize nature, masculinize technology, and romanticize the home, the kitchen, and the small farm as places untainted by technology, industry, and commerce.

Reading Bernstein reminds us that those places were also historically the sites of patriarchy and ritual violence against women. When the self-styled ecofeminist Vandana Shiva valorizes subsistence agriculture because it places women closer to nature, the source, she claims, of all wealth, she is in reality condemning them to lives of hard physical labor and often brutal oppression.

A similar claim was, not incidentally, precisely the argument that was used to justify American slavery. And yet for this, along with her rejection of genetically modified seeds, Shiva has become a hero to many environmentalists who style themselves progressive.

These sorts of sentiments were once antithetical to leftists and progressives. “At least since Virginia Woolf identified ‘a room of her own and five hundred a year’ as the necessary preconditions for a woman to achieve personal and professional empowerment,” Bernstein observes, “feminists have advocated for those fruits of modernization—individuation, privacy, education, and civil rights—that have enabled the relative gender equality the majority of the developed world experiences today.” Marx, meanwhile, famously described rural life as a form of idiocy.

But whether born of nostalgia for rural idylls that never existed or dogmatic insistence that peasant life must be virtuous precisely because it sits outside “hegemonic neoliberalism,” progressives today too often end up defending the indefensible.

There is, of course, nothing inherently wrong with spending an afternoon shopping at a farmers’ market and preparing an elaborate meal for one’s family if one has the time, money, and inclination to do so. But that must be understood as a privilege, not a virtue, one built upon several centuries of agricultural modernization, technological progress, and economic growth that have allowed most of us in the wealthy, developed world to garden, search out ingredients for meals, and cook for our families because we want to, not because we have to.
 


How Natural Gas and Wind Decarbonize the Grid

Cheap natural gas has reduced carbon emissions on the US electricity grid more than anything else over the past decade.

That’s the core conclusion of our new analysis. But while the coal-to-gas shift continues to lead power sector decarbonization, wind is playing a bigger and bigger role.

Over 2007–2015, the period we studied for this analysis, the replacement of coal with cheaper natural gas was responsible for a cumulative 443 million tons of carbon emissions reductions. That compares to 316 million tons of reduced emissions due to lower demand, and 294 million tons due to increased wind generation. See Figures 1 and 2 for details.

 

 

Figure 1: Decarbonization benefit of new generation and demand reduction in 2015, compared to 2007 emissions

 

Figure 2: Cumulative decarbonization of new generation and demand reduction, 2008–2015, relative to 2007 emissions

 

These conclusions build on a 2014 Breakthrough analysis in which we found that the decline in coal was overwhelmingly due to natural gas, while wind replaced a much more diverse base of resources over the 2007–2013 period.

How can natural gas, a fossil fuel with significantly greater emissions than renewable sources, decarbonize the power sector more than wind?

If natural gas generation replaces coal generation while wind replaces hydroelectric, then gas has a greater decarbonizing benefit. And that does happen, for instance, in the American Midwest, where gas has displaced a lot of coal, while wind has displaced some hydro in the West. But on average, a megawatt-hour generated by wind reduced more carbon than one generated by gas. The biggest reason that gas has reduced carbon emissions more than wind is simply that there’s a lot more new gas generation than wind.

It’s a complicated question, and we need more than national data to answer it. So, as in our 2014 analysis, we broke down annual generation into the 10 North American Electric Reliability Corporation (NERC) regions. Although the NERC regions do not perfectly represent autonomous electricity markets, they allow for more precise analysis of the evolution of the grid than would be possible from looking at US electricity as a whole.

Similar to the 2007–2013 change, in 2014 and 2015 we continued to observe major declines in coal production and increases in natural gas. Wind was the second largest source of growth in generation. Solar saw big growth in 2014 and 2015, mostly in the WECC (Western Electricity Coordinating Council) region, but remains under 1% of total electricity nationally (our figures, taken from the EIA Form 923, count only utility-scale solar and not distributed solar).

In each of the mainland NERC regions, coal production fell, while gas production increased in all regions except WECC (gas and solar also fell in ASCC, the Alaska Systems Coordinating Council). Gas was the largest source of new generation, while wind was the second largest. Changes in each major source are illustrated in Figure 3.
 

Figure 3


 

To estimate the greenhouse gas benefit (or detriment) of new power sources, we first estimate the intensity of displaced sources by taking the average of the carbon intensities of all sources that decreased, weighted by the amount of decrease. The greenhouse gas intensity benefit or cost of a fuel that grew on the grid is the difference between the displaced intensity and the intensity of that fuel. If a power source that grew has a greater carbon intensity than the sources it displaced, then the greenhouse gas benefit is negative, or, in other words, deployment of the source led to re-carbonization instead of decarbonization.

This calculation is based on the assumption that a power source that grows displaces sources that decreased in the proportions by which they decreased. However, if a grid grows in size overall, then only a portion of the increase in a power source displaces other sources, and the remainder satisfies new demand. The benefits or costs of adding new sources are reduced accordingly.

To calculate the total decarbonization benefit of a given source over all eight mainland North American grids, we take the weighted average of the decarbonization benefit over all grids for which that source grew, weighted by the amount of growth. Emissions factors for each power source are noted in the Appendix. These decarbonization benefits are shown in Table 1.
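The estimation procedure described above can be sketched in a few lines of code. This is a minimal illustration for a single grid: the generation changes and carbon intensities below are hypothetical placeholder numbers, not the EIA Form 923 data or the emissions factors listed in the Appendix.

```python
# Hypothetical change in annual generation (MWh) on one grid, by source.
changes = {"coal": -50_000, "hydro": -10_000, "gas": 40_000, "wind": 15_000}

# Hypothetical carbon intensities (tons CO2 per MWh); see the Appendix
# for the factors used in the actual analysis.
intensity = {"coal": 1.0, "hydro": 0.0, "gas": 0.45, "wind": 0.0}


def decarb_benefit(changes, intensity):
    """Tons of CO2 avoided per MWh for each source that grew on a grid."""
    decreases = {s: -d for s, d in changes.items() if d < 0}
    total_decrease = sum(decreases.values())
    # Intensity of displaced generation: average intensity of all sources
    # that shrank, weighted by how much each shrank.
    displaced = sum(intensity[s] * d for s, d in decreases.items()) / total_decrease

    total_increase = sum(d for d in changes.values() if d > 0)
    # If the grid grew overall, only part of each addition displaces other
    # sources; the remainder serves new demand and earns no credit.
    net_growth = max(total_increase - total_decrease, 0)
    displacing_fraction = 1 - net_growth / total_increase

    # Benefit = displaced intensity minus the source's own intensity;
    # a negative value means the growing source re-carbonized the grid.
    return {
        s: (displaced - intensity[s]) * displacing_fraction
        for s, d in changes.items()
        if d > 0
    }


benefits = decarb_benefit(changes, intensity)
```

With these placeholder numbers, wind's per-megawatt-hour benefit exceeds gas's (gas carries its own emissions), but gas's absolute benefit, per-MWh benefit times MWh added, is larger, mirroring the article's headline finding. The cross-grid totals are then the per-grid benefits averaged over all grids where a source grew, weighted by the amount of growth.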
 

Table 1


So as a whole, each megawatt-hour of natural gas reduced carbon less than an average megawatt-hour of wind or solar. But given the sheer volume of new natural gas generation, compared with the relatively smaller amount of wind and the still-marginal amount of solar, natural gas reduced more carbon in absolute terms over the eight-year period we examined.

From 2007 to 2015, overall generation fell in all grids except FRCC (Florida Reliability Coordinating Council), TRE (Texas Reliability Entity), and SPP (Southwest Power Pool). As seen in Table 1, that demand reduction is responsible for a significant portion of reduced emissions. But obviously, demand reduction reduces emissions more on grids with higher carbon baselines, and as the country has emerged from the Great Recession, demand reduction has slowed.

Meanwhile, wind energy is doing a lot more decarbonization work than it has in previous years.

All of this is good news. Power-sector carbon emissions are 14% lower now than they were a decade ago, mostly as a result of the shale gas revolution, the build-out of wind energy, and demand reduction. As we look toward deeper decarbonization, there are some important implications.

First, there’s only so far that demand reduction can take us. Much of the “decarbonization” of the last decade was actually just slower economic growth following the Great Recession—that’s something we cannot (and, we think, should not) hope for in the future. Further, as other sectors of the economy—including transportation and industrial sources—electrify, total electricity demand is almost certain to rise substantially this century. With that in mind, we need to double down on low- and zero-carbon power generation to reach deep decarbonization.

Second, the coal-to-gas transition will run out of runway eventually. If we want to reach the roughly 30% reductions by 2030 envisioned by the Clean Power Plan, then coal-to-gas is a good way to get there. But if we want to reach 80% or higher reductions by 2050, we’ll need to replace that natural gas with renewables, nuclear, and/or carbon capture in the long term.

Third, renewables—especially wind but also solar—are a growing force for decarbonization. That will continue as wind and solar get steadily cheaper with further deployment. But even as wind turbines and solar panels get cheaper, greater penetration on electric grids leads to value deflation: at high penetrations, intermittent renewables swing between generating nothing and overgenerating, which raises system costs. You can read more about this challenge here.

If we put all that together, we arrive at our ultimate conclusion: we need better tools, technologies, and policies to meet our ambitious climate goals. Follow our work in the future for more on deep decarbonization.
 

The author would like to thank Eric Gimon for a fruitful discussion and helpful suggestions on this project.

How Natural Gas and Wind Decarbonize the Grid

On Mother Earth and Earth Mothers

Democracy in the Anthropocene

“Democracy, tolerance, and pluralism,” my coauthors and I wrote in early 2015 when we published An Ecomodernist Manifesto, hold the “keys to achieving a great Anthropocene.” At the time, it was the notion of a great Anthropocene that seemed preposterous to some. In the face of looming ecological catastrophe, the only choice, according to many critics, was between a future that would be bad and one that would be worse. But today, it is our faith in democracy, tolerance, and pluralism that perhaps seems more audacious.

The ecomodern project is predicated upon a notion of providence — that life for humans is getting better, as social scientists like Hans Rosling, Max Roser, Steven Pinker, and Ruth DeFries have reminded us in recent years. Despite the depredations of the Great Recession, Western publics remain, from top to bottom, among the wealthiest human beings to have ever lived. The freedoms, identities, privileges, and opportunities available to even the poorest among us are simply unprecedented in the history of the human species.

With unprecedented knowledge, technology, and resources at our disposal, human societies have the capability to make a planet that is good for both humans and nature. The question today, and not just for ecomodernists, is whether, in the face of the proliferating values, ideologies, and priorities that come with material prosperity, we will be capable of charting a course that can assure wise development of the global commons.
 


From Brexit to the election of Donald Trump, it has become apparent that in increasingly affluent, postmaterial, and unequal societies, the center cannot hold. A rising tide, over more than two centuries, may have floated all boats, but it has also set them against each other in new ways. Unmoored from the disciplining project of modernization and lacking real external enemies, politics has become a zero-sum competition for status, recognition, and identity.

Since the 2007 financial crisis and the deep global recession that followed, these fractures have galvanized the wave of populist revulsion — at the “establishment,” the status quo, shadowy elites, and international cabals of bankers, capitalists, and globalists — that has swept across advanced developed economies, as publics around the world have sought strong leaders and strong medicine to restore the nation, the people, and the economy to their rightful place in the world.

Elections in France and the Netherlands have stanched the momentum of populist movements for the moment. But nowhere is there much confidence that the formula that has sustained peace, prosperity, equality, and freedom over the seven decades since the end of World War II is capable of addressing the grievances that are fueling the retreat to populism and authoritarianism among polities around the world.

Today, liberal democracy, internationalism, global trade, and multiethnic societies are all under duress. The Left rails at something called “neoliberalism,” while the Right blames immigrants, internationalism, multiculturalism, and postmodernism. Lacking anything capable of uniting increasingly fractious and polarized polities, politics, in the ordinary sense of the word, becomes impossible. The social solidarity necessary for all nation-building projects, or even increasingly just the simple exercise of basic governance, recedes from view.

In this issue of the Breakthrough Journal, we consider the many consequences of this postmaterial, postindustrial, and postmodern moment for ecomodernism. How might the long-term planning horizons necessary for good outcomes in the Anthropocene be reconciled with bottom-up, democratic governance? And what is to be done when late-modern, postmaterial publics lose track of both the civil and material conditions that have made the extraordinary prosperity and freedoms we take for granted possible?

1.

“What does environmentalism look like when it takes women’s realities seriously?” geographer Jennifer Bernstein asks in a bristling new essay, “On Mother Earth and Earth Mothers.” Tracing the troubled gender dynamics of modern environmentalism and the biological determinism of ecofeminist discourse, Bernstein documents the ways in which calls for a return to the kitchen by food and lifestyle gurus like Michael Pollan, and a return to the farm by ecofeminists like Vandana Shiva, disempower women and naturalize traditional gender roles.

“At a moment in our history when increasing numbers of women have liberated themselves from many of the demands of unpaid domestic labor,” Bernstein writes, “prominent environmental thinkers are advocating a return to the very domestic labor that stubbornly remains the domain of women.”

The consequences of this sort of “new naturalism” for women are not insignificant. In developed countries, as lower- and middle-class working women attempt to make the best of stagnating wages, packed schedules, and the demands of their “second shift,” moralizing calls for “a languorous, technology-free, larger-than-life cooking experience” only add to the burdens of women’s work. In the developing world, Bernstein continues, those burdens are all the more onerous on women, condemning them to often brutal and patriarchal rural livelihoods that bear little resemblance to the pastoral idyll romanticized by those who would reject modern industrial society for an agrarian alternative.
 


What will it take for environmentalism to reconcile itself with feminism? At bottom, environmentalists will need to rid themselves, once and for all, of the pastoral romanticism that has animated environmental ethics for more than a century. “Modern notions of rights, identity, and agency cannot be reconciled with premodern social, economic, and political arrangements,” Bernstein concludes. “Environmental ethics that reject those prerequisites in the name of the natural and pastoral are, simply put, irreconcilable with feminism.”

Abandoning our nostalgia for pastoral utopias doesn’t mean we must abandon a personal connection to farms, food production, or wild nature, nature writer Emma Marris argues in “Can We Love Nature and Let it Go?” But it does require that we reconsider why we love those things and what we want from them. Too often, the case for conservation has been made with species counts or calculations of the economic value of so-called “ecosystem services.” But it is the intrinsic value of nature, Marris observes, “which we variously characterize as love, respect, connection, awe, sense of place,” that she sees as “by far the most powerful motivator for those who work to protect nature.” This becomes evident in the abundance of farmers’ markets, urban farms, and community gardens in our midst today, which produce virtually no food at scale but fulfill an essential need for connection to nature within increasingly urbanized societies.
 


Decoupling human well-being from environmental impacts shouldn’t entail decoupling people from experiencing nature in all its manifold expressions, whether “out there” in wilderness or just down the street in a community garden. But it does require recognizing that we aren’t going to feed the world with small-scale, low-intensity agriculture. Were such a thing even possible, it would require turning all the world’s forests and wetlands and savannas into smallholder farms and pastures. The answer, rather, lies in what Marris calls “interwoven decoupling,” a world in which most food is produced intensively and at large scale, and most people live in dense, urban settlements, but in which nature of all sorts – protected areas, “tendrils of wildness” reaching into cities, small farms, and community gardens – abounds, not because we need it, but because we want it.

The key to continuing to love nature even as we let it go will be to decouple our spiritual, emotional, and cultural connections with it from our dependence on it for material sustenance. “If we use up nature,” Marris writes, “we will be miserable. If we wall ourselves off from nature, we will be miserable. The path to joy is to allow nonhuman nature to thrive by reducing our demands upon it, while loving ourselves enough to allow ourselves to remain within it.”

2.

Imagining a future in which a prosperous, modern, and technologically advanced society pulls back from nature because it no longer depends on it for material well-being is the sort of first-world concern that much of the planet would love to have. Liberating most people from backbreaking agrarian labor and creating enough societal prosperity that thoughts can turn to what kinds of nature we would like to keep around is a project that, for most people around the world, is still unfinished.

In “Leapfrogging Progress,” the great Harvard development scholar Calestous Juma reminds us that despite the promise of cell phones and solar panels, there is still no substitute for infrastructure and industrialization. Africa has, over the last decade, experienced a mobile communications revolution. Hundreds of millions of Africans now have access to smart phones. Mobile banking and a variety of other telecommunications services are widely available throughout the continent. But Africans today are still primarily consumers, rather than producers, of those technologies and services. Until that changes, Juma argues, Africa will remain poor and underdeveloped.

The mobile revolution might have provided a foundation for industrial growth and economic diversification. But it was wrongly seized upon as an opportunity to leapfrog industrialization altogether. As a result, African countries remain wedded to their legacy economies, pursuing value addition to raw materials as a means to development — a strategy that in the end limits their capacity to become dynamic learning economies. It is infrastructure, Juma writes, and the institutional growth and technological learning that go along with building and maintaining it, that generates innovation and diversification truly capable of transforming African economies.
 


The missed opportunities of the mobile revolution hold important lessons for those who argue that it might be replicated for Africans through the provision of solar-powered distributed energy systems. Unless African economies are able to capture a significant share of the value chain associated with solar manufacturing and distribution, solar panels may provide minimal amounts of energy for African households but are unlikely to significantly raise incomes or contribute to Africa’s economic development.

Innovation, Juma argues, is the primary driver of long-term economic transformation, and infrastructure, counterintuitively for some, provides the foundation for learning and innovation. “If anything, the evolution of the mobile sector demonstrates the continued importance of industrial development as the source and catalyst for innovation and economic growth,” Juma concludes. “Leapfrogging particular technologies, such as landlines, may in some cases be an option. But industrialization itself, and the innovation and development it generates, cannot be skipped over.”

If distributed solar isn’t likely to offer a path out of poverty and toward modern living standards, then what energy source might provide Africans with cheap and abundant clean energy, capable of powering development and modernization? In “Untapped Potential,” Breakthrough Senior Fellow Sid Shome suggests it might be time to reconsider an older source of renewable energy — hydroelectric power.

Because of its social and environmental costs, hydro development has long been viewed skeptically in most environmental quarters. But it also remains the largest source of renewable electricity production globally and the world’s largest source of low-carbon power. Shome offers a surprising case to demonstrate his point: the “Green Republic” of Costa Rica, in which hydro not only accounts for two-thirds of the country’s total electric power generation but has also played a central and largely unrecognized role in its extraordinary history of forest conservation.
 


Hydro is worth a second look, Shome argues, because it offers a cheap and abundant source of low-carbon, dependable, on-demand energy for nations with relatively modest technological and engineering capabilities. For this reason, it has historically played an outsized role for many nations around the world in their transition to becoming modern energy economies. Hydro development has brought with it significant collateral damage to communities, especially indigenous communities, and habitat in and around flooded areas. But the Costa Rica case demonstrates that with wise planning and good institutions, the impacts of hydro development can be significantly, if not entirely, mitigated, with significant benefits for both local communities and surrounding forests and habitat.

Of course, given Africa’s tragic colonial history and complex challenges in the postcolonial era, wise planning and strong institutions are no certainty. But there is little reason to think that Africa can successfully leapfrog functioning institutions any more than it might leapfrog industrialization or infrastructure. Good institutions, industrialization, infrastructure, wise energy development, and conservation are a package deal. Pursuing those outcomes piecemeal is unlikely to bring good outcomes for either people or the environment.

3.

If the last two decades of effort to address global environmental challenges have proven anything, it is that global, top-down restrictions upon human activity, be it greenhouse gas emissions or conversion of tropical forests for agriculture, are a dead end. Yet even as the United States withdraws from the Paris agreement, supposedly science-based technocratic frameworks to limit human environmental impact continue to proliferate. Whether framed in terms of global carbon caps, planetary boundaries, or restricting humanity to half the planet so that the rest may be reserved for nature, the notion that technocratic limits might be imposed upon a growing global population from above by a self-regarding and moralizing expert class is neither practical nor wise.

Yet the same critique might also be lobbed at ecomodernists advocating a good Anthropocene. The vision of an urbanized planet in which human societies are powered by nuclear energy and fed with large-scale, technological agriculture is no less prone to top-down technocratic regimes than is the idea that various sorts of scientifically prescribed boundaries might limit human endeavors. In a world in which human activities and impacts have achieved unprecedented planetary scale, it is difficult to even comprehend the challenges and opportunities we face in the Anthropocene without both thinking at global scales and relying upon science and expertise to make those challenges and opportunities visible.

In “Nature for the People,” leading Anthropocene theorist Erle Ellis offers a framework for thinking about nature and wilderness in the Anthropocene that is both sweeping and bottom up. Sustaining and promoting biodiversity, he argues, will require a global, threefold approach: enhancing the productivity of the lands we use around the world for our own purposes, protecting the areas this intensification opens up, and — “the greatest challenge of them all, the grand challenge of the Anthropocene” — reconnecting habitat across both on a planetary scale.
 


Doing so, Ellis acknowledges, will involve trade-offs, as well as an unprecedented level of social coordination. Such collaborative work, if it is to be effective and equitable, must emerge and evolve from the bottom up, through intricate and prolonged social learning and cultural negotiation across local and regional societies and private and public institutions. But perhaps most important to the success of such an ambitious collective project, Ellis suggests, will be the force of its ideals and aspirations.

“Protection and connection at the planetary scales needed to sustain wild creatures and wild spaces through the Anthropocene will not succeed without connecting deeply with the abiding human love and concern for wild nature,” Ellis insists. The forms these values take are many and at times conflicting, and the decisions ahead will be complex and open-ended, but such is the nature of a democratic and aspirational conservation ethic up to the challenge, and true to the potential, of a good Anthropocene.

In these ways, Ellis makes the paradoxical nature of the Anthropocene clear. Humans today have greater powers than ever before to craft our planetary future. But at the moment those powers have begun to come into view, it is not at all clear that we are capable of finding sufficient consensus across borders, ethnicities, values, or ideologies to use those powers with foresight and wisdom.

For now, long-term processes of industrialization and modernity continue to chug along, decoupling economic growth from land, energy, and resource use in relative and sometimes absolute terms. But despite these impressive gains, it has also become clear that the challenges we face require a more rapid and intentional acceleration of these processes. That, in turn, would seem to demand some renegotiation of the social, cultural, and political arrangements that today appear increasingly incapable of delivering it.

In the face of those challenges, now is not the time to retreat to our technocratic toolboxes, nor into our ideological corners. As Ellis maintains, the planet we occupy, “is a social construct, shaped physically and culturally by the perceptions, values, aspirations, tools, and institutions of societies past and present.” These structures and processes are impossibly complex, but that should not imply that they are out of our control.

Faced with great social, economic, and environmental challenges and increasingly cacophonous global polities, one can understand the temptation to put ourselves in the hands of some cadre of social, economic, or scientific elites. As the fray appears to descend into madness and tribalism, the search for anyone who might be above that fray is not just limited to populists convinced that a wealthy and successful man, not beholden to “special interests,” might do the public’s bidding. The same desires motivate liberal elites to want scientists and technocrats, whom they imagine to be detached from popular political passions or corrupting economic interests, to guide environmental policy making. But there is no real alternative to democratic accountability and self-governance. Democracy, pluralism, and tolerance still represent the only path to a good Anthropocene.

Read more from Breakthrough Journal, No. 7
Democracy in the Anthropocene
Featuring pieces by Erle Ellis, Emma Marris, Calestous
Juma, Jennifer Bernstein, and Siddhartha Shome

Democracy in the Anthropocene

Jennifer Bernstein

On Mother Earth and Earth Mothers

Not so long ago, technologies like microwaves and frozen foods were understood to be liberatory. Along with washing machines, dishwashers, vacuum cleaners, and a host of other inventions, these household innovations allowed women to unshackle themselves from many of the demands of domestic labor. It didn’t all work out as hoped. With labor-saving technology at hand, cleanliness and other domestic standards rose. Today, women still perform the lion’s share of domestic work, even among affluent couples, and even within a rising share of dual-income households.1

But it is also true that domestic labor demands upon women in affluent economies have declined dramatically. In the 1960s, women spent an average of 28 hours per week on housework; by 2011, they averaged 15.2 These gains are all the more important as wages in the United States stagnate and the number of single-parent households has grown.3 With many women taking on more than one job, facing longer commutes, and working irregular hours to make ends meet,4 the technological progress that has enabled something so handy as a 30-minute meal only eases the burden of the “second shift,” those unpaid post-work chores that still fall overwhelmingly to women.

And yet, today, a growing chorus of voices argues that to be proper environmentalists and nurturing parents, each night should involve a home-cooked meal of fresh, organic, unprocessed ingredients. “We’re doing so little home cooking now,” food guru Michael Pollan says, “the family meal is truly endangered.”5 Chastising the typical household for spending a mere 27 minutes a day preparing food, Pollan champions increasingly time-consuming methods of food production in defense of the allegedly life-enriching experience of cooking he fears is rapidly being lost.6

The juxtaposition is jarring, if not much remarked upon. At a moment in our history when increasing numbers of women have liberated themselves from many of the demands of unpaid domestic labor, prominent environmental thinkers are advocating a return to the very domestic labor that stubbornly remains the domain of women.

For women of lower socioeconomic status, the demands of a time-intensive, low-technology approach to food preparation are even more onerous. In a critique of this return-to-the-kitchen narrative, authors Sarah Bowen, Sinikka Elliott, and Joslyn Brenton describe interviews they conducted with mothers from a variety of ethnic and socioeconomic groups, whose experiences could not have been more unlike the idealized vision offered up by Pollan—in which the cook finds herself “in that sweet spot where the frontier between work and play disappears in a cloud of bread flour or fragrant steam rising from a boiling kettle of wort.”7 Rather, they were juggling tight schedules, picky children, and the cost of fresh ingredients.4

The idea of a languorous, technology-free, larger-than-life cooking experience is consistent with the longstanding characterization of the kitchen (and the associated garden and farm) as a premodern oasis disconnected from the broader forces of technology and industrialization.8 Like the household, the smallholder farm idealized as pastoral fantasy disconnected from the capitalist system is contingent upon free labor. In most parts of the world, smallholder farms are economically viable only because women (and often children) provide their labor at no cost.9
 


Many of those beckoning women back to the kitchen and the smallholder farm have tried, clumsily, to integrate women’s issues and environmental concerns. Too often, however, they fall back on tired tropes that conflate the two, characterizing both women and nature as disenfranchised, marginalized entities subject to the much stronger force of patriarchal capitalism. Ecofeminists like Vandana Shiva argue that the low status and exploitation of poor women on smallholder farms are a function of capitalism and Western science and technology, not low-intensity, low-input agriculture that requires uncompensated labor to be viable.10 Pollan argues that the solution to the heavy domestic burdens that women still bear is to simply urge men to take on more responsibilities, not to forgo time-consuming food preparation.11

Both defenses are blind to the class implications of eschewing technology in favor of more labor-intensive, “earth-friendly” practices, and implicit in both are gendered ideas about caretaking and nature that are antithetical to feminism. The earth becomes another dependent that women must find time and energy to care for.

This raises the question — what would an environmentalism that takes feminism seriously look like?

Rather than naturalize the connection between women and the environment, or push pseudo-environmental practices that result in further gender inequity, there must be a way for women of all classes and ethnicities to actualize their personal and professional goals without having to perform a set of time-intensive activities that constrain them to traditional gender roles.

1.

Modern environmentalism has had a complicated relationship with gender. From Rachel Carson onward, strong, independent women have featured prominently. Iconic figures like Lois Gibbs, Erin Brockovich, and Karen Silkwood demanded corporate accountability in the face of efforts to silence them, and were among the first to bring environmental issues into the public sphere.12

But in a variety of other ways, environmental discourse remains heavily gendered. Wilderness was first “discovered” and then conquered by strapping white men like Lewis and Clark and Daniel Boone.13 From Henry David Thoreau to Edward Abbey to Yvon Chouinard, wilderness has remained, in the environmental imagination, a place where men retreat in search of solace and themselves. The polemicized claims of contemporary conservation discourse, from monkeywrenching to biocentrism, have tended to further inscribe wilderness as a place that men discover, describe, decamp to, and defend.

Women, by contrast, are cast as caretakers and nurturers, of their families, their communities, and by extension, the earth. The role is passive and reactive. Women don’t seek out nature; environmental degradation finds them. Rachel Carson watches the birds die. Lois Gibbs learns that her subdivision was built atop a toxic waste dump. The women of India who would inspire the Chipko movement hug trees to keep a timber concessionaire from cutting them down.
 


From the 1970s onwards, ecofeminist theorists like Carolyn Merchant and Karen Warren have attempted to recast gendered roles within the environmental movement in a more positive light,14 suggesting that because women are more likely to have been subject to other forms of oppression, they are more aware of environmental injustice; that because they are more likely to be poor and disenfranchised, they are more likely to be on the receiving end of environmental impacts; and that because they are more likely to be caretakers, they are more sensitized to environmental health concerns.12

There is some evidence for all of these claims. But too often, those arguments suggest an implicit, and in some cases explicit, biological determinism that would leave most feminists aghast. Many ecofeminists have argued that women are more connected to nature on account of their biological systems (menstruation and reproductive capacity) and “mystical” connectedness with the earth.15 Characterized in this way, women lack agency over their bodies, at the mercy of preordained systems (in this case, biological) beyond their control. Gender, in popular ecofeminist discourse, and too often in the scholarly literature as well, is portrayed as essentialist and deterministic rather than a social construction.

The gendering of environmental discourse is further projected onto the earth, which is cast in the feminine, subject to the rapacious exploitation of masculine techno-industrial society. “Mother Earth” is always nurturing, there to provide, cleaning up when humans make a mess, even acting out dramatically when humans err too much.

The conflation between women and nature has concrete implications for how women are positioned within the environmental movement. Women are cast as closer to the earth and thus biologically possessing different priorities. The market, technology, capitalism, and the modern world are for men. Women are tasked with manifesting all that capitalist culture isn’t. In both the developed and developing world, women are expected to eschew time-saving technological solutions and the petty world of wage-based labor in favor of actions that prioritize the environment at the expense of their own priorities and objectives. The problem is that this increases women’s economic vulnerability, excludes middle- and lower-class women, and perpetuates existing power differentials.

2.

Since the birth of the women’s movement, feminists have struggled to navigate the complex relationships between sex, biology, social and cultural norms, and social and economic power. These questions have revolved not around whether biological differences are real but rather to what degree they can account for differences in roles, attitudes, and status between men and women.

Biological differences between men and women have long been invoked by defenders of the status quo to justify male privilege and power and the second-class status of women. More recently, many of those arguments have been repurposed by some feminist thinkers as their own, to argue that female biology, and the cultures and mindsets that come with it, are superior. Women may not be stronger or more aggressive than men. But due to their capacity as caretakers, they are more emotionally intelligent and more peace-loving, prone to compromise and collaboration, not bellicosity and conflict. Alone among the post-’60s identity-based movements, some feminists positioned women not only as victims of patriarchal male oppression, deserving of justice and protection, but as possessed of special knowledge accessible only to them, derived not from experience but from biology.14

Generations of feminist scholars have broadly rejected this sort of biological determinism, from Simone de Beauvoir’s publication of The Second Sex in 1949, and her now-famous claim that one is not born but rather becomes a woman, through to Donna Haraway and Judith Butler’s work on the ways that gender is performed and constructed at the end of the 20th century.16 Latter-day ecofeminism, too, in response to the numerous charges of gender essentialism leveled against it, has attempted to absorb some of these critiques.17 But the uncritical assumption of an “ethic of care” in response to environmental ills continues.18

French feminist theorist Elisabeth Badinter takes sharp exception to this new naturalism, and what she sees as the reinscription of maternalism in modern society and ecological consciousness. In her book Le Conflit: La Femme et la Mère, Badinter attacks “motherhood fundamentalism” for being “a movement dressed in the guise of a modern, moral cause that worships all things natural.” To Badinter, “feminism of victimhood” represents the surrendering of women to the inevitable tasks of one’s gender, which she sees as an attempt by conspirators to drive women out of the workplace and thus out of power. She chastises practices that rob women of their agency, pitting women and the environment against one another in a zero-sum game. “Between the protection of trees and the liberty of women, my choice is clear,” Badinter says, “powdered milk, jars of baby food, and disposable nappies were all stages in the liberation of women.” She sees the unabashed elevation of breastfeeding, one of the few acts that remain solely “women’s work,” and the rest of the attachment parenting formula as onerous demands that detract from a woman’s ability to pursue personal and professional ends, and thus as anathema to the feminist cause.19
 

The same, of course, could be said of the unpaid labor that comes with “all-natural,” home-cooked meals. The first line of defense to this sort of critique has been to argue that domestic labor should not fall to women alone. Anticipating criticism of his call for home cooking, Michael Pollan argues in Cooked that “by now it should be possible to make a case for the importance of cooking without defending the traditional division of domestic labor. Indeed, that argument will probably get nowhere unless it challenges the traditional arrangements of domesticity — and assumes a prominent role for men in the kitchen, as well as children.”7 And perhaps, in stable, two-parent families, that is the case. But for many households, even in an affluent nation like the United States, that sort of stable and equitable domesticity is not an option. Bowen, Elliott, and Brenton interviewed one working-class African-American couple who were employed by two different fast-food restaurants 45 minutes apart, and rarely knew their schedules ahead of time. The lack of reliable public transportation, compounded by uncertain work hours, made meal planning and cooking a challenge. Middle-class women, meanwhile, even those with relatively egalitarian household arrangements, would arrive home at six in the evening and attempt to balance spending time with their children and cooking a meal from unprocessed ingredients, as they had been told was best for their family and the environment.4 For poorer families, the cost of plant-focused meals advocated by lifestyle environmentalists was often another barrier,20 as was the difficulty of accommodating various food preferences, which made trying new foods a financially risky venture.

While family meals of wholesome, home-cooked foods are increasingly romanticized, it’s likely that they are romanticized by those on the receiving end of those meals. Historically, once a family improves its socioeconomic status, it outsources labor-intensive activities to lower-class workers. According to a 1937 survey conducted by Fortune magazine, “70 percent of the rich, 42 percent of the upper middle class, 14 percent of the lower middle class, and 6 percent of the poor reported hiring some.”21 Today, upper-class women who appear to be “doing it all” are likely reliant on at least some domestic help, who clean, cook, watch the children, and then disappear from sight.22

For the women interviewed by Bowen and her colleagues, shopping and cooking occasionally added joy but just as often added stress, burdens, and trade-offs. Ironically, the practices advocated by Pollan, Mark Bittman,23 and other popular food and lifestyle gurus in the name of sustainability and a rich and fulfilling home life turn out to be practical only for women who have benefited the most from industrial society.

But the demands that contemporary environmental ethics place upon women do not end with Pollanesque gatherings around the family table. Young mothers are told to forgo processed baby food, relying as it does on far-flung commodity chains and nonlocal ingredients. Instead, they should make their own,24 reject formula in favor of breastfeeding,25 and replace disposable diapers with cloth.26 All, women are told, are necessary to raise healthy babies on a healthy planet. Each prescription combines claims of environmental benefit, however minor (given the water- and chemical-intensive processes associated with producing and reusing cloth diapers, for instance, they are only marginally better for the environment), with increased domestic demands.

Upon leaving the home, women face another series of charges from lifestyle greens. The choice to ride a bike instead of drive,27 for instance, isn’t so simple for women disproportionately tasked with shopping and transporting children from place to place.28 Little wonder that women ride bicycles as transportation at less than one-third the rate of men.29

In these and a variety of other ways, green ideology tells women that tasks that can be automated should be rejected in the name of processes that are closer to nature, without any recognition of the broader social and structural context in which these activities occur. Women perform the bulk of unpaid labor while being beseeched to perform that labor in ways that are more difficult and time-intensive and bring at best minor benefits to the environment or the well-being of their families. The “natural is better” formula and the romanticization of domesticity as untainted by capitalism allow the larger systems in which women and the environment are embedded to escape scrutiny.

3.

The utopian vision of the small farm as pastoral ideal outside the realm of capitalism, largely conjured up by those far removed from the realities of agrarian life, is even more problematic for the status of women in the developing world than is the proliferation of environmental demands upon women in affluent economies. Small farms in the developing world are indeed outside the realm of capitalism. Despite the fact that small farms produce up to 80 percent of the food supply in Asia and sub-Saharan Africa, their share of the formal market is marginal, and very few farmers earn enough to escape poverty.30 In sub-Saharan Africa, three-quarters of malnourished children live on small farms.31 And women, who are disproportionately employed in farm-based labor, are likely to suffer as advocates of smallholder farms ignore the economic realities of small-farm production capacities.32

None of this has prevented ecofeminists like Vandana Shiva from rhapsodizing about the ways in which rural agrarian life allows women to live in harmony with nature. “Nature,” as Shiva writes in Staying Alive: Women, Ecology and Development, is “the creator and source of wealth, and rural women, peasants and tribals who live in, and derive sustenance from nature, have a systematic and deep knowledge of nature’s processes.” These “Third World women,” in their resurgence, “are laying the foundations for the recovery of the feminine principle in nature and society, and through it the recovery of the earth as sustainer and provider.”10

But these sorts of idealizations misrepresent the realities of rural life in developing economies. Seventy percent of the world’s poor live and work in rural areas, mostly in subsistence agriculture.33 Women, in turn, by some estimates constitute as much as 70 percent of the global poor and two-thirds of the world’s nonliterate adults.34 Nearly 70 percent of employed women in southern Asia and over 60 percent of employed women in sub-Saharan Africa work in agriculture, primarily as unpaid or contributing family workers.35 In these contexts, women are also excessively responsible for unpaid caregiving, including running the household and providing food for family members. The International Fund for Agricultural Development’s 2011 Rural Poverty Report indicates that these responsibilities often require a 16-hour workday for rural women in developing countries, with “important consequences for women’s time-poverty and health.”33
 

Some have suggested that with sufficient policy support, smallholder farms might be able to generate higher incomes while reducing the demands for uncompensated labor from women and children.36 But the more likely path for women and children out of agrarian poverty is the one that most have followed historically — leaving agriculture altogether.37 While still imperfect, the gender equity and expanded personal and professional horizons that women have attained in Western developed economies are simply not possible in subsistence agrarian economies.

Of course, it is also true that women have not been liberated from subservient roles in subsistence agriculture all at once. Incremental approaches that marshal technology in service of empowering women socially and economically while also addressing environmental and resource challenges can help support a virtuous cycle of income growth, education, empowerment, and environmental protection.

Take the much-publicized Green Belt Movement, established by Wangari Maathai. Women in rural Kenya are traditionally responsible for the gathering of fuelwood for cooking, a time-consuming activity that reduces native forest cover, adversely affects women’s health due to smoke inhalation and back pain, and inhibits the pursuit of more economically viable livelihoods. In response, the introduction of biogas digesters, which process livestock manure anaerobically to generate gas, has led to a marked reduction in fuelwood use and positive health impacts.38 The solution simultaneously addresses female empowerment, deforestation, and economic agency.

In contrast to efforts that focus centrally on environmental conservation, conservation in this context comes as a cobenefit of technological progress and political economic change that enable societies to intensify production, address poverty, and empower women, while reducing pressure on biological systems. What is critical about these solutions is that they are not contingent on the contribution of free labor by women, the conflation of women and nature, or the sacrifice of female economic empowerment. Rather, female empowerment — not what is “natural” — is seen as a critical component of environmental problem solving.

4.

So what does environmentalism look like when it takes women’s realities seriously? To start, it will need to finally come to terms with modernization. From a feminist perspective, this would hardly be a radical departure. At least since Virginia Woolf identified “a room of her own and five hundred a year” as the necessary preconditions for a woman to achieve personal and professional empowerment,39 feminists have advocated for those fruits of modernization — individuation, privacy, education, and civil rights — that have enabled the relative gender equality that the majority of the developed world experiences today. Modern technologies like oral birth control, for instance, have further served to relieve women of the burdens of unwanted pregnancy, the health risks of multiple pregnancies, and abortion; and indeed, today the pill is widely heralded for allowing women control over their reproductive cycles, and subsequently their lives.

In the developing world, it is also modernization, and in particular the transition from agrarian to urban livelihoods, that has the most potential to transform women’s realities. Moving to cities enables women to work for an income outside of the home, to escape discrimination more rampant in rural regions, to access education and health care, and to participate in the public sphere. Smaller gains like access to piped water and public transportation can also have an outsized impact on women’s ability to pursue opportunities beyond their domestic responsibilities. Fertility rates, finally, tend to fall with urbanization and the access to education, paid work, modern values, and other means of empowerment that go along with it, further relieving women of domestic burdens and granting them greater options, opportunities, and financial stability.40
 

Women’s liberation has always had a dual and linked meaning, referring to both liberation from patriarchal male oppression and from the physical and psychological demands of domestic labor. In the developed world, the household devices of the 19th and 20th centuries — refrigerators, washing machines, cake mixers — reduced the amount of labor needed to run a household, which in turn decreased the weekly workload of the average American woman.41 Peg Bracken’s irreverent The I Hate to Cook Book, a celebration of the canned, powdered, and store-bought published in 1960, stands as a reminder of this legacy.42 While domestic responsibilities still burden women disproportionately, these conveniences have undoubtedly left women with more time for other pursuits — including environmental advocacy.

In the developing world, the intensification of agriculture leads to greater yields on smaller amounts of land with less labor, enabling women to pursue nonagricultural employment while also freeing up land for wildlife habitat.43 In both cases, technology is practically required to address environmental concerns without doing so on the backs of women.

Many greens will not embrace technology overnight, as it is seen as inextricably bound up with a capitalist system culpable for environmental degradation. But conflating technological innovations with the social and political context in which they were deployed forecloses possibilities for problem solving.

At bottom, feminist thought and action are incompatible with poverty, agrarianism, and neoprimitivism. Modern notions of rights, identity, and agency cannot be reconciled with premodern social, economic, and political arrangements. Female empowerment, in the long term, requires modern agriculture, energy, and infrastructure. Environmental ethics that reject those prerequisites in the name of the natural and pastoral are, simply put, irreconcilable with feminism.

The glorification of nature and farming and the romanticizing of the home, domestic life, and the woman at the center of it are ultimately nostalgias that cover up the brutality of rural life and drudgery of domestic labor in a perfume of freshly cut hay and caramelizing onions. While the new domestics advocating home brewing, fermenting kombucha, and churning butter are likely aware of the irony of doing so in an era of unprecedented technological progress, this nostalgia does little to further the goals of middle- and lower-class women in the developed world.

However fervently Pollan may argue that cooking is selfless, nonalienated, and leisurely, his ideals stand in stark contrast to the lived experience of women, wherein cooking, while often satisfying, is equally often frustrating, rushed, and increasingly moralized. The popularity of the 30-minute meals Pollan lambasts does not represent laziness, but rather a desire to still cook in the context of the social and economic realities of the modern world. 

An environmentalism that makes daily life harder for a certain segment of the population is not ethical. Romanticizing unpaid labor disregards the burdens on the populations that perform it. Ultimately, environmental change will require far more than the calls of charismatic men (and occasionally women) to return to the kitchen, or to the farm. Instead, it will be those structural and technological changes that alter the lived realities of women in developed and developing countries alike that will succeed in meaningfully, and equitably, addressing both environmental and feminist concerns.

Read more from Breakthrough Journal, No. 7
Democracy in the Anthropocene
Featuring pieces by Erle Ellis, Emma Marris,
Calestous Juma, and Siddhartha Shome

Leapfrogging Progress

Within two years of its launch in 2007, money transfers through M-Pesa, a cell-phone-based mobile banking application, already equaled 10 percent of Kenya’s GDP. What started as a local system to serve populations too poor for traditional banking has since grown into a global industry, one that threatens to disrupt traditional banking systems around the world. Today, M-Pesa’s network includes 30 million users across 10 countries, and its services have expanded to include international transfers, loans, and even health care.1

 

Image credit: CNN, “M-Pesa: Kenya’s mobile money success story turns 10” (2017), http://www.cnn.com/2017/02/21/africa/mpesa-10th-anniversary/.

The wide adoption of mobile phones in Africa, along with applications like M-Pesa that it has enabled, has created remarkable technological enthusiasm on the continent. Symbolizing the great potential that lies in technological catch-up and leapfrogging, M-Pesa has served as an inspirational example of what Africa could accomplish in other sectors like energy, education, health, transportation, and agriculture. Indeed, countries such as Rwanda are already using drones to transport medical supplies, while the dramatic drop in the cost of solar energy points to the widespread adoption of the technology across Africa.2 Overall, the mobile revolution has given hope to Africans that they too can be dynamic and innovative players in the global economy, transcending the continent’s current reliance on raw material exports.

But while cases such as M-Pesa offer inspiration, the promise of leapfrogging remains largely unfulfilled. The mobile revolution has hardly served as a stimulus for broader industrial development and appears to have had little impact on African innovation policy. Africa still lags behind in manufacturing and has not made major steps to move to the production of technologies related to the mobile industry. Today, African economies are in the middle of their worst downturn in two decades, exposing the persistent dominance of legacy economies. Compounded by the demographic dynamics of rapid population growth and a large youth population with limited skills and employment opportunities, poverty remains the norm in Africa, despite high economic growth rates in some countries.

The failure of the mobile revolution to stimulate industrial development in Africa is the result, in part, of a faulty narrative that assumes that Africa can leap into the service economy without first building a manufacturing base. This view ignores the fact that service industries are closely linked to industrial sectors, many of which may be located in other regions of the world. By accepting this popular perception, Africa may be forgoing the opportunity to invest in core infrastructure and engineering capabilities that would enable it to meet the needs of other sectors such as health, education, and agriculture.

Infrastructure is both the backbone of the economy and the motherboard of technological innovation.3 African countries need adequate infrastructure to realize their full potential; the continent’s low economic performance and weak integration into the global economy stem partly from inadequate investment in and development of energy, transportation, telecommunications, water and sanitation, and irrigation infrastructures.

It is not too late for Africa to become a dynamic and entrepreneurial region driven by innovation. The continent is certainly right to keep its sights set on technological innovation as an essential driver of economic growth, and as the key to moving beyond the vagaries of commodity exports. But such innovation will depend on industrial development — and the infrastructure and technical capacity it enables — that cannot be leapfrogged. This, in turn, will require a new industrial policy for Africa.

1.

The mobile handset in the hands of an ordinary African has become the symbol of leapfrogging. There is some basis for this imagery — the business model that made it possible for Africa to rapidly adopt mobile telephony did involve the availability of low-cost handsets. But it was the establishment of new telecommunications infrastructure — signaled by the spread of mobile phone towers across the continent — that represented on a more fundamental level what the mobile revolution was about.4 The handsets were merely part of the larger and more complex engineering system that made mobile communication, and further industrial diversification, possible.

Indeed, creating such a system involved dramatic changes in legislation, higher education, and infrastructure, including reforming laws across Africa to create the entrepreneurial space for new infrastructure,5 founding institutions to train new professionals to work in the mobile sector, and installing fiber optic cables throughout the continent. From a legal standpoint, the policy champions of the disruptive technology faced down the incumbent landline industry to introduce new business models, including prepayment and low-cost handsets, that enabled the poor to be included in the revolution. Innovation in higher education came in the form of new telecoms universities in countries such as Egypt, Kenya, and Ghana, and analogous institutions such as the Digital Bridge Institute in Nigeria.

As for infrastructure, while early mobile phone systems were connected to the rest of the world through satellite links, with undersea fiber optic cables available to only a small number of West African cities, today all of continental Africa and Indian Ocean states have access to fiber optic cables with significantly higher bandwidth.

Image credit: Steve Song, “African Undersea Cables” (2016), https://manypossibilities.net/african-undersea-cables/.

The majority of these countries, however, have not yet leveraged this broadband infrastructure to foster innovation or development, an indication of the lack of complementary evolution in innovation policy.6 In some cases, telecoms operators have yet to migrate from satellite links to fiber optic cables, leaving the promise of low communication costs out of reach. Even where migration has occurred, access charges remain prohibitive.

Those who can access high bandwidth and the services it enables mostly remain consumers, not producers of technology. These are generally young people in the acquisitive stages of their lives, according to a survey of residents of the Nigerian cities of Lagos, Abuja, and Port Harcourt by the bank Renaissance Capital. Nearly 50 percent are in the market for durable consumer goods such as refrigerators and other household appliances, but these are largely imported because Nigeria’s manufacturing sector has failed to keep pace with the rising demand for finished goods.7

The mobile revolution did in fact offer Africa a unique opportunity to catalyze industrial development and diversification, but that opportunity has not been seized, in large part due to the mistaken view that the introduction of mobile phones and extension of telecommunications services across the continent would allow Africa to leapfrog industrialization rather than build out a new industrial base. This, in the end, defined the continent as a source of consumers rather than producers of technologies. The failure of the mobile revolution is that it has not succeeded in establishing an infrastructural base for economic development, nor for deploying adjacent emerging technologies. Until this lesson is learned from an innovation policy standpoint, the popular call for technological leapfrogging will amount to too little.

2.

Infrastructure projects are inherently technological in nature — they represent bundles of scientific and technical knowledge that are embodied in both equipment and human capabilities. Talk of infrastructure projects may tend to focus on rates of return on investment, impact on public finances, formation of public-private partnerships, identification of sources of funding, or environmental and social costs, but what is often overlooked is the role of infrastructure as the foundation for innovation and economic transformation.

Agriculture in Africa illustrates this point: Africa’s low agricultural productivity levels stem in part from inadequate roads, energy supply, and irrigation. Without rural roads, farmers are condemned to growing crops close to their homes, and as a result can hardly provide adequate food for themselves, let alone surpluses for local trade. Compared with 60 percent of rural people in middle-income countries around the world, only 44 percent of rural Kenyans live within two kilometers of an all-season road. Elsewhere in Africa, this rate of access is even worse: 42 percent in Angola, 38 percent in Malawi, 38 percent in Tanzania, and 32 percent in Ethiopia.8

In addition to facilitating economic activities and generating employment, infrastructure projects are reservoirs for engineering capabilities. The development of geothermal energy in Kenya, for example, has resulted in the creation of a large pool of experts working locally and abroad. These types of infrastructure projects offer Africa a unique opportunity to build the necessary engineering and managerial capabilities for the design, construction, and maintenance of future projects. These projects can then be used as a basis for designing new engineering courses and research activities; in South Korea, for instance, developing the high-speed rail led to the creation of the Korean Railroad Research Institute in 1996.

Policymakers must recognize the potential to tap this knowledge to benefit the wider economy. Ironically, this vision existed in much of colonial Africa. When the British built the Kenya–Uganda railway in the late 19th century, they included a technical facility for repair and maintenance. Over the years, however, African infrastructure projects have increasingly been delinked from technological training and are underperforming as a result. Unfortunately, the design of such projects in Africa tends to focus on awarding contracts to the lowest bidder rather than on maximizing technological capacity.
 

To industrialize, Africa will need a large pool of appropriately trained engineers — some foreign, but most local — to help with the design, construction, and maintenance of infrastructure. It is routine maintenance and additional construction that will require significant and timely creation of local capacity, including entrepreneurs who can identify business opportunities associated with new infrastructure projects that will contribute to sustained economic growth and the spread of prosperity. This will in turn involve considerable accumulation of knowledge and capabilities.

At face value, Africa’s engineering challenges are daunting. Leading economies such as South Africa and Nigeria suffer from critical shortages that are worsened by international skill migration. South Africa is estimated to lose nearly as many engineers to migration as it trains annually. Worse, no African country maintains reliable records on the training and deployment of engineers.

There are, however, strategic measures that such countries can take to ramp up their capacity. At bottom, they begin with the recognition of innovation as the most important driver of long-term economic transformation, and of infrastructure as the foundation for innovation. They will also depend on the understanding that the key objective of infrastructure projects should extend beyond the provision of services, to the acquisition, domestication, and local diffusion of technological capacities.

3.

While leapfrogging gets most of the attention, African policymakers continue to pursue a rather more old-fashioned approach to industrial development: adding value to the natural resources and raw materials that define Africa’s legacy economies.

The African Union’s ten-year Science, Technology and Innovation Strategy for Africa 2024 has attempted to reposition the continent as a technology-driven economy, with an important focus on leveraging emerging and available technologies to generate products and services that are relevant to local economies. But adding value to resource-centered exports remains the continent’s primary strategy for economic growth.

Value addition appeals for a number of reasons: at present, the continent’s commodity systems earn producers negligible profits. Africa produces nearly 75 percent of the world’s cocoa but gets only about 2 percent of the $100 billion global chocolate market. In 2014, Africa exported $2.4 billion of coffee, while Germany, which is not a producer but a processor of this commodity, re-exported nearly $3.8 billion worth of coffee worldwide. The standard response to this disparity is to call on Africa to add value to its commodities itself.

To make value addition the primary model for industrial diversification, however, would be a mistake. The common misunderstanding that industrialized countries advanced largely because they exploited low-cost natural resources from their African colonies leads many in Africa to believe that they too can industrialize and grow by making better use of their natural resources. But to do so would be to ignore important lessons from economic history.

In general, there is little evidence to suggest that countries industrialize by adding value to their raw materials. Rather, the causality runs the other way — countries add value to raw materials because they already have local industries with the capacity to turn raw materials into products. Initial industrial development thus becomes the driver of demand for raw material and value addition rather than the other way around.9 Countries such as the United States, Canada, and Australia experienced commodity booms not because they added value to raw material but because they possessed nascent industries that required raw material. As a result, they designed policy instruments for resource exploration, improved prospecting technologies, and invested in commodity-based research,10 and it was these innovation-oriented measures that then resulted in value addition. Africa’s traditional focus on raw minerals rather than innovation has caused it to lag far behind in such efforts.

Value-addition strategies in Africa must also overcome the tariffs that trading partners impose on African exports. In many of African states’ largest export markets, tariff rates escalate with the degree of processing. Nor does reducing or removing these barriers necessarily help: raw material exporters still need time to build up complementary industrial processing capabilities. The temptation for these countries, then, is to enter into joint ventures with importing companies, or to encourage importing countries to set up enterprises through which they can leverage international joint ventures in the way that China developed its automobile industry.11

Such joint ventures can benefit Africa if they are guided by efforts to build local capabilities, mostly through linkages with universities as well as domestic suppliers of parts. Without such measures, however, joint ventures serve only as vehicles for the transfer of revenue to the foreign partners’ home firms, with minimal benefit to African countries.

Fortunately, African nations do have the benefit of being latecomers — the world is full of useful examples of economic diversification that they can follow. In fact, many countries that have recently transitioned to learning economies started off with significantly fewer resources (in terms of finance and research facilities) than the majority of African countries have today. Take the case of Taiwan: in the early 1960s, the country was a world leader in mushroom exports, a high-volume, low-value, perishable export commodity that greatly limited its prospects of industrial learning.
 

Acquiring the ability to create new technological combinations is the key to industrial development.


It was only when Taiwan transformed itself into a learning economy that it was able to emerge as a modern semiconductor powerhouse. Taiwan viewed the emergence of semiconductors as a basis for industrial development, not simply the provision of services, and as a result pursued technology partnerships with the United States that involved training Taiwanese youth.12 Taiwan also combined four local research institutes left behind by the Japanese occupiers into the world-class Industrial Technology Research Institute (ITRI), which played a critical role in this transition, spawning many of the country’s leading semiconductor firms. Importantly, the institute was founded not for the purpose of adding value to exports but instead as an explicit focal point of Taiwan’s policy decision to reinvent itself as a learning economy.

Instead of leveraging raw materials for value addition, Taiwan leveraged an existing technology for economic diversification and technological learning, prioritizing an initial use of that technology that could be readily combined with other platforms to generate increasingly diverse products. As economist Ricardo Hausmann has argued, industrial growth proceeds like a game of Scrabble: nations start off with minimum technological capabilities that they recombine to create more technologies in the same way that letters are used to create new words in a Scrabble game.13 Not all letters are created equal. Some have higher values, like J, Q, X, and Z, but they are difficult to use, and players often have to substitute more versatile letters to create new words. Similarly, some technological capabilities generate more combinations than others. Semiconductor and chemical industries are examples of generic technologies that do so.

The attraction of adding value to an economy’s material endowments is that it is a strategy to leverage high-value letters, valuable raw materials, into words that bring high scores. But as with Scrabble, focusing on producing words that utilize the letter Q or X greatly limits the number of words that the player can use and forgoes acquiring technological capabilities that have higher recombinant value. Focusing technological learning around natural resource endowments actually limits the field of play for acquiring new capabilities and platforms. Taiwan didn’t concentrate its efforts on semiconductors because it was well endowed with sand or silicon; it did so because semiconductors at that time represented a nascent technological platform with endless possibilities for recombination.

As in Scrabble, industrial development involves considerable learning, not just about letters but also about vocabulary and strategies for thinking about creating new words. Acquiring the ability to create new technological combinations — through an emphasis on infrastructure and technical education — is the key to industrial development.14

4.

Taking full advantage of infrastructure’s technological potential will require a more sophisticated approach to policy, procurement practices, and project design. The first step will be to recognize the magnitude of the challenge and the associated opportunities. The African Development Bank has estimated that Africa will need to invest $93 billion annually over the next decade to meet its infrastructure needs. The estimate for Nigeria is $15 billion a year. South Africa envisages investing nearly $462 billion from 2012 to 2027.15

A large part of this investment will come from overseas, as evidenced by China’s investment in African infrastructure projects, mainly in transport. The recent creation of the China-led Asian Infrastructure Investment Bank (AIIB) will strengthen the country’s role as a source of funding not only for Africa but also for many other regions of the world, including industrialized countries.

From a governance standpoint, African countries and cities will need to capitalize on the critical role that infrastructure plays in entrepreneurship and development. The most inspirational opportunity today lies in making broadband — the low-hanging fruit of the mobile revolution — more accessible and affordable to young entrepreneurs.16 Indeed, cities in Kenya, Nigeria, Rwanda, and South Africa have begun to experiment along these lines, providing free Wi-Fi to stimulate entrepreneurial activities.17 Generally speaking, urban centers house the highest concentration of infrastructure facilities and, as such, will continue to lead the way as the most creative and dynamic regions. Cities such as Lagos, Nairobi, Accra, Pretoria, and Cairo set inspiring examples of regions maximizing infrastructure and policy for innovation and development.
 

Leapfrogging industrial development is not an option.


There has been great optimism over the emergence of information and technology hubs in major urban areas across Africa,18 many of which feature young entrepreneurs producing new technologies designed to solve Africa’s problems. But these hubs’ distance from centers of research and learning signals the need to foster more integrated innovation ecosystems that bring together business, academia, and government. Significant measures will need to be adopted to expand engineering training. These might include upgrading training institutes to offer certified engineering training, strengthening engineering training within private and public enterprises, and forging stronger international education partnerships. Doing so will also require greater overall funding and policy support for technology-based ventures.

Specific engineering education objectives should also be integrated into major infrastructure projects themselves, as was the case with the expansion of telecoms infrastructure and the creation of new technology universities in Egypt, Ghana, and Kenya. As armed forces form one of the most important sources of engineering capacity, carefully designing programs to repurpose sections of the military to support infrastructure construction can also help to foster local capacity. All such investments pay off in the long run through reductions in maintenance costs.

These policy priorities, of course, will require presidential champions. Fortunately, there is growing consensus among African countries on the centrality of infrastructure to development, as reflected in Agenda 2063 of the African Union. Perhaps more important has been the recent focus on integrating the continent’s economies without seeking to create political unions. Talks are under way to create a Continental Free Trade Area (CFTA), which will cover more than one billion people in 54 countries with a combined GDP of over $3.4 trillion. It is also expected to create opportunities for trade in agricultural machinery and associated services.19

The mobile revolution, along with other emerging technologies like 3-D printing, drones, and solar energy, offers important starting points for innovation along such paths. Many of the key elements for such a process are emerging in Africa; Egypt, Kenya, and Ghana, for example, have set up new universities to train professionals for the mobile sector on the initiative of telecoms authorities rather than education ministries. Ethiopia has also begun local assembly of mobile handsets with the intent of moving up the value chain by incrementally producing components locally.

But leapfrogging industrial development is not an option; if anything, the evolution of the mobile sector demonstrates the continued importance of industrial development as the source and catalyst for innovation and economic growth. It also offers important lessons for how government, industry, and academia can collaborate to create new industries, expand manufacturing, create jobs, and stimulate the structural transformation of African economies.

The term “industrial policy” may continue to evoke the ideological debates of the last century, when it denoted subsidies to infant industries, direct state intervention, and government selection of industrial sectors. New approaches will need to be pursued, as outlined above, to ensure that the past failures of industrial policies are not repeated.

Moving Africa from its current focus on raw material exports, value addition, and technology consumption to becoming a learning economy and technology producer will require 21st-century industrial policy that supports continuous interactions among government, industry, and academia in open, competitive, and collaborative innovation ecosystems. Leapfrogging particular technologies, such as landlines, may in some cases be an option. But industrialization itself, and the innovation and development it generates, cannot be skipped over.


Can We Love Nature and Let It Go?

In Davis, California, a young couple are opening the cupboards of a model home in a new development — the Cannery — built on the site of a shuttered tomato packing plant. The Cannery has everything from townhouses for downsizing retirees in the mid-$400,000s to sprawling homes well above the million-dollar mark. The various tracts have appealing names — Sage, Heirloom, Persimmon. There’s something very culinary about these brands, and that’s no accident. The Cannery is billed as a farm-to-fork lifestyle destination. All along one side of the development runs a skinny parcel of land that is actively under cultivation by the Center for Land-Based Learning, crowned by an elegant 5,200-square-foot barn that I assumed was restored, but turns out to be brand new and artfully distressed. Residents can buy the farm’s produce at a mini farmers’ market on-site or subscribe to a weekly box. The farm is not terribly active when I visit in February, but it is studded by bee boxes and bat houses and I see jackrabbits, bluebirds, ground squirrels, and other wildlife frolicking under the late winter sun.

The key to deploying this little ribbon of agriculture as a tempting residential amenity is twofold: first, the New Home Company, the developer of the Cannery, links the presence of the farm with a general mood of sustainability, supported by extra insulation in the homes and a gratis 1.5-kilowatt solar system on every roof. Second, they do not ask the residents themselves to actually do any of the farming. The houses come with token, miniature gardens. Those in the front are conveniently maintained by professional gardeners. One can live here and breathe in the smell of moist loam without ever getting one’s hands dirty.

The morning after my tour of the Cannery, I wandered through the Davis Farmers’ Market, which is held in its tastefully landscaped central park. Enormous oranges, local greens, purplish lengths of sugar cane, and heaps of appealingly matte Pink Lady apples were piled on stall tables. People of a rainbow of races, ages, orientations, and abilities drank coffee and ran into old friends. Little kids in tutus and tiny Patagonia fleece jackets chased the falling petals of early-blooming fruit trees. Signs pinned to the stalls reassured buyers that “we grow what we sell.” People chowed down on breakfast banh mis and green juices and bought flowers, happy that their purchases were not from the “industrial” agricultural system and serene in their understanding of themselves as green, right-thinking people.

The Cannery and the Davis Farmers’ Market exemplify a certain kind of Western, upper-middle-class idea of the good life. It is one that weds buying grass-fed steaks and organic oranges straight from small-scale producers with having a 200-square-foot bathroom. The problem is that this way of living doesn’t scale — it consumes monstrous amounts of land, water, and other resources that could otherwise be habitat for the millions of nonhuman animals with which we share the planet. This is the paradox of the “natural upscale” lifestyle — modes of production, distribution, and living that feel “natural” to us are often older, less efficient, and have a much larger environmental footprint.

1.

If we really want to make room for other species here on our shared planet, we must reduce our per-capita and cumulative human footprint. In most cases, this means embracing and improving upon technological advancements that have already allowed humanity to squeeze more out of less, particularly improvements in agricultural yield, food distribution, and food waste elimination. This “decoupling” has already begun in practice. After all, improvements in efficiency and reductions in the cost of inputs mean greater profit. Decoupling also owes much to government-sponsored research and international nonprofits such as the Consortium of International Agricultural Research Centers. Because of market forces, national competitiveness, and a dollop of goodwill toward humankind, our lives are getting “lighter.” It took around 25 percent less “material input” to produce a unit of GDP in 2002 as compared with 1980.1 Meanwhile, it takes half as much farmland to feed one person as it did 50 years ago.2 The 2015 Breakthrough Institute report Nature Unbound found that “nearly all forms of land use, wildlife extraction, water consumption, and pollution have been declining on a per-capita basis for decades, and in some cases for centuries.” Of course, there is much work still to do.

The United Nations Environment Programme uses the decoupling framework to tout the desirability of delinking economic growth and development from overuse of resources including freshwater, energy, and land.3 The 2015 Ecomodernist Manifesto, penned by many authors affiliated with this journal, pins its hopes for planet Earth on “committing to the real processes, already underway, that have begun to decouple human well-being from environmental destruction.”4
 

It takes half as much farmland to feed one person as it did 50 years ago.


There’s a lot to like about decoupling. Decoupling does not pit the planet’s poor people against its endangered species, nor does it rely on a sudden and unprecedented improvement in our moral character. No grand sacrifices or mandatory birth control programs will be necessary. It is a solution based on technological improvements, enlightened policy, and the marriage of human rights with environmentalism.

The best guess of United Nations demographers is that the human population on Earth will hit 11.2 billion by 2100.5 The real cost of living the Davis dream is agricultural acreage. If you want all your food, including your meat, grown using organic or traditional methods, you are signing up for a reduction in yield of about 20–30 percent for crops and up to 80 percent for beef.6 Switching everything over to organic or traditional methods even as poorer countries begin to demand a more Western diet would balloon global agricultural acreage.

It’s true that advocates for organic have long claimed that their techniques can match or even outperform industrial agriculture on yields.7 And researchers have shown that if meat demand is sharply reduced, organic agriculture could feed the world on the current agricultural footprint even if it does have lower yields.8 But organic agriculture has a big problem. Because the rules of its game prohibit synthetic nitrogen, it is highly reliant on manure sourced from organic and conventional livestock operations. And meat, especially beef, is the least efficient use of land you can imagine. Taking into account the land they occupy for grazing and the land used to grow the feed they eat, cattle gobble up 28 times as much land per calorie as dairy, chicken, pork, or eggs, and 160 times as much land per calorie as rice, potatoes, or wheat.9 Globally, 36 percent of crops are used to feed livestock, and pasture occupies 3.4 billion hectares, which is 26 percent of global ice-free land.10,11 And don’t blame these terrible statistics on “industrial beef.” From the perspective of its land footprint, grass-fed beef is worse.
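To see how these per-calorie land multipliers compound, here is a toy calculation. The only figures taken from the passage above are the ratios (cattle use roughly 28 times the land per calorie of dairy, chicken, pork, or eggs, and roughly 160 times that of staple crops); the diet shares are hypothetical, chosen purely for illustration.

```python
# Land footprint per calorie, in units of "staple-crop land" = 1.
# Ratios from the figures cited above; everything else is illustrative.
LAND_PER_CAL = {
    "beef": 160.0,                       # ~160x staples per calorie
    "poultry_eggs_dairy_pork": 160.0 / 28.0,  # beef is ~28x this group
    "staples": 1.0,
}

def relative_land(diet_shares):
    """Land footprint of a diet, relative to an all-staples diet.

    diet_shares maps food group -> fraction of calories (should sum to 1).
    """
    return sum(LAND_PER_CAL[group] * share for group, share in diet_shares.items())

# Hypothetical diet with 10% of calories from beef...
meaty = relative_land({"beef": 0.10, "poultry_eggs_dairy_pork": 0.20, "staples": 0.70})

# ...versus the same diet with those beef calories shifted to the
# poultry/eggs/dairy/pork group.
shifted = relative_land({"beef": 0.0, "poultry_eggs_dairy_pork": 0.30, "staples": 0.70})

print(round(meaty / shifted, 1))  # → 7.4: the beef diet needs ~7x the land
```

Even with only a tenth of calories coming from beef, the hypothetical diet requires several times the land of the beef-free version, which is the essay’s point: a small beef share dominates the land budget.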

Thus, organic agriculture is not only lower-yield itself in most contexts; it also relies on the continued existence of a hyper-inefficient meat production system. If we really want to freeze or shrink our agricultural land footprint as we approach 11 billion, we need to radically reform meat production and embrace synthetic fertilizer. Indeed, I am becoming more and more convinced that most meat animal production should be eliminated in favor of factories that produce cell-culture “lab meat” or vegetable protein products spiked with meaty-tasting heme molecules.

Researchers estimate that global food availability can increase 100–180 percent with rigorous efforts to eliminate waste, close yield gaps in the global South, tighten up efficiencies, eliminate biofuel subsidies, and — crucially — quit feeding such a huge fraction of our crops to animals.12

Encouraging people to adopt vegetarianism or veganism is a possible strategy to reduce the demand for meat. I myself have adopted the slogan of food activist Brian Kateman, who encourages us to be “reducetarians” — to simply consciously cut back on this environmentally expensive food.13 Cultural change can add up to something more than just a gesture made by a few (consider smoking rates and the declining social acceptability of smoking tobacco in many countries), and disapproval and disgust with the sometimes inhumane methods employed by large meat animal operations seem to be increasing. Even given all that, however, I fear that the dietary transition toward richer, meatier diets in many of the currently developing countries will swamp the efforts of the reducetarians, vegetarians, and vegans.
 

Decoupling does not pit the planet’s poor people against its endangered species, nor does it rely on a sudden and unprecedented improvement in our moral character.


More broadly, sacrifice has proved to be an ineffective environmental tool. We don’t like to sacrifice. Few of us can keep it up very long in the face of easy, cheap, and convenient alternatives. The smarter move is always to make the environmentally superior choice the cheap and easy choice. Lab meat can do that. It still has a way to go in terms of product development and public acceptance, but ultimately a public that will accept a McNugget will accept cultured meat — especially when they do not have to contemplate the animal-rights horrors of industrial agriculture as they take a bite. Lab meat may not ever represent 100 percent of global meat production; cultural factors such as the appeal of animal husbandry and religious butchering practices may leave room for a mix of majority lab meat and a minority of humanely, traditionally raised meat for special occasions.

If we go lab meat, organic farming runs out of poop and would have to rely on rotating in nitrogen-fixing cover crops, which means a lot of land not growing anything for a lot of the time — a real yield killer. Organic rules are a product of history, and there’s no reason, other than a purely ideological one, why we have to stick to them. Instead, we should do as Grist food writer Nathanael Johnson recommends and use best practices from organic, conventional, and permaculture agriculture to create a soil-building, nonpolluting, possibly genetically modified, precision-guided hybrid masterpiece that will take our yields to stratospheric heights.14 We can even fertilize it with small, micro-measured amounts of synthetic nitrogen produced using clean energy. Such an approach would weave together the respect for land and soil inherent in organic farming with a passion for innovation and technological improvement that is currently seen as suspect in organic circles.

Putting together demand- and supply-side improvements — lab meat, hyperefficient “hybrid” style agriculture, reductions in food waste, and an end to biofuels — could dramatically decouple our dinner plates from huge swaths of planet Earth. And that land could return to diverse autonomous ecosystems. Decoupling from other natural resources, including timber, firewood, wild game, and fresh water, will amplify the effect.

2.

Unfortunately, “decoupling” may have a bit of a branding problem. When many imagine a strongly decoupled future, they see a vision of humanity that has embraced technology and human well-being but cut itself off from nature — a “technofix” that rips out our hearts. One hears the term “decoupling” and one imagines sterile protein factories, massive industrial farms run by robots, and gray, hyperdense cities without gardens or places for kids to play in the dirt.

But this isn’t what most proponents of the decoupling framework actually want. While our best hope for protecting nature may be using it less as a resource base, this need not entail physical, emotional, cultural, or spiritual separation. An ideal future will feature what I call “interwoven decoupling,” in which the nature thriving by virtue of our efforts to consume less of it is easily accessible and part of our daily lives. The 2015 Ecomodernist Manifesto pauses to make this same point: “Even if a fully synthetic world were possible, many of us might still choose to continue to live more coupled with nature than human sustenance and technologies require.”15

Interestingly, it is actually more-traditional conservationists, those whose environmentalism is deeply informed by a sense of loss and grief over humanity’s destructive tendencies, who seem to advocate the most radical separation between humans and the rest of nature. Consider the work of Roderick Frazier Nash, author of the indispensable Wilderness and the American Mind, which traces one of our most complex and enduring nature concepts from prehistory to the 21st century. Nash believes that the best possible future for humans on Earth is a model he calls “Island Civilization,” in which “advanced technology permits humans to reduce their environmental impact” and, while voluntarily capping our population at about 1.5 billion, humanity retreats into 500 “100-mile closed-circle units”: dense conurbations where all food, energy production, and housing are located, leaving the bulk of the planet wild.16 Trade between these islands would be minimal. Each would be completely self-sustaining and contained. “Of course Island Civilization means the end of the idea of integrating our civilization into nature,” Nash admits.17 The sole contact these future humans would have with nature would be “minimum-impact vacations in high-quality wilderness.”18
 

Instead of building walls between people and nature, we need to envision a future where human well-being has decoupled from the destruction of nature, but not from nature itself.


Nash’s walled-off cities came to mind recently as I read eminent naturalist E. O. Wilson’s latest book, Half-Earth, where I found him waxing sanguine about the opportunity to return large areas of land to nature afforded by advancing technology and the free market shrinking the human footprint.19 I was only mildly surprised to see him advocating decoupling; he is a great believer in the power of science to work for good, after all. Wilson’s hope is that decoupling can be advanced to such an extent that half the planet would consist of “inviolable natural reserves.” The suggestion is that people are not welcome in these reserves as permanent residents. The reserves would be open only to vacationers who visit, very carefully, and then leave — something not unlike the current system of designated wilderness areas in the United States. (Designated wilderness, incidentally, represents just 2 percent of the United States at the moment.)20 Wilson also suggests that we can retain a connection by means of “a thousand or so high resolution cameras . . . that broadcast live and around the clock from inside the reserves.”21 Wilson is vague about the “non-nature” half of the planet, and I imagine he’d be happy to see it include parks and gardens. But there is the sense that humans should largely stay out of the best nature.

These visions of a separation between humans and nature are recipes for misery. Instead of building walls between people and nature, we need to envision a future where human well-being has decoupled from the destruction of nature, but not from nature itself. Nature is as necessary to us as air and water — and not just because thriving nature provides us with clean air and water.

3.

In addition to its other virtues, the decoupling framework offers us an opportunity to address our emotional, cultural, spiritual, and aesthetic connections to nature, as we can now consider them without conflating them or subordinating them to the “ecosystem services” that have been used to justify the preservation of nature on pragmatic grounds.

The “ecosystem services” approach to saving nature argues that we benefit from nature in concrete ways that we can affix a price to. From the water filtration provided by a forested watershed to the hunting opportunities in a wetland, nature’s real and measurable value must simply be quantified and accounted for in our cost-benefit analyses, and we will begin caring for fens, tropical forests, and mangrove swamps out of our own self-interest.

Natural landscapes and nonhuman species do deliver us humans incredible value; our lives simply would not be possible without “services” provided by nature like soil nutrients, pollination, and oxygen generation. And in specific situations, measuring and highlighting the economic benefits of various natural landscapes can be effective tools. But there are two problems with using ecosystem services as the only or even the primary justification for conservation.

The first is that technology will continue to develop. Services that are currently provided more cheaply by nature may be eclipsed by machines and built infrastructure in the future. A day when water filtration is more cheaply accomplished with a plant than with plants, and pollination is more economical when done by mechanical drones than by honeybee drones, may be just decades away. If we have trained the public to value nature only insofar as it acts as “natural infrastructure,”22 then we risk devaluing nature as soon as it is less efficient than a synthetic alternative.

The second is that many, if not most, advocates for an ecosystem services framework are actually personally motivated by the intrinsic value of nature — not the instrumental value. As philosopher Eugene Hargrove noted back in 1992, most people “believe that only instrumental value arguments work — but nevertheless wish it were not so.”23
 

Nature is valuable because people value it for what it is, independently of any concrete economic or practical benefit it provides.


When I say “intrinsic value” here, I really mean what Hargrove called “weak anthropocentric intrinsic value.” That is, nature is valuable because people value it for what it is, independently of any concrete economic or practical benefit it provides. Thus, we feel that people ought not cut down old-growth redwood trees, even if we ourselves are unlikely to benefit from their continued existence or even see a redwood tree in our lifetimes. It is enough that we value redwoods for what they are to argue for protecting redwoods.24

I maintain that this kind of valuation of nature, which we variously characterize as love, respect, connection, awe, sense of place, and so on, is by far the most powerful motivator for those who work to protect nature. We see this at the Cannery, where the mini-farm and bluebirds are amenities not because they will meaningfully feed the residents or save species but simply because people love them. We humans arguably love nature more now than ever before, as more and more of us lead lives that are affluent and sheltered enough that nature has ceased to pose material threats to our well-being and has become something we can admire from a range of distances, from safely snuggled on the couch in front of Planet Earth to elbow-deep in the viscera of a mule deer to riding a high of adrenaline and oxygen deprivation on the top of a mountain.

If this love and respect are central to environmentalism, they are also routinely downplayed in favor of “better” reasons to save nature. Hard-headed pragmatists talk about ecosystem services even when there is really love in their hearts. Ecologists and conservation biologists insist that biodiversity boosts “ecosystem function” and “resilience” when they might equally well insist that biodiversity is simply splendid. Restoration ecologists and conservationists are in the habit of framing their desired ecosystem states as objectively the “correct” state. They act as if culture and human preferences have nothing to do with it.25 Appeals to “historical range of variability” or “reference systems” or “ecosystem health” or “integrity” mask the fact that the species they want and the arrangement they want them in are, generally speaking, dictated by human culture. Problematically, the desired state is usually set by what the ecosystem happened to look like on the day the first white man showed up and started naming things after himself.
 

If love and respect are central to environmentalism, they are also routinely downplayed in favor of “better” reasons to save nature.


Acknowledging the fact that we value nature for what it is has two important implications. First, our deep love for nature suggests that human flourishing requires some level of interaction with nature. Second, it implies that some level of interaction with and care for nature might be a prerequisite for saving it in the future. If we wall ourselves off from nature to save it, we risk creating a generation that doesn’t really like it. If that happens, we will see all too well the importance of human valuation of nature for its own sake as the backbone of conservation.

There is a third implication as well, which is that once we dispense with the idea that there is some essential or correct arrangement of species and habitats that can be determined by science, economics, or rights, we actually have to negotiate the arrangements that we want with each other. It is perhaps to avoid this negotiation that conservation scientists tend to prefer to appeal to ecosystem services or “integrity” or something else that sounds objective. By and large trained as scientists, these men and women would rather keep values out of it — and keep the authority that flows from their expertise. However, just because restoration targets or land management practices are up for debate doesn’t mean that some outcomes are not better than others. We who love nature must now make the case that the mountaintop is better than the mine, the decadent orchard burbling with birdsong better than the superfluous retail park, the jungle better than yet another soy field.

And we won’t always win. But we have not managed to stop the destruction of nature with our current approach either. Admitting that our preferred outcome is based at least in part on our values is intellectually honest and potentially more effective. A pair of 2010 and 2011 Nature Conservancy surveys showed that among adults, 45 percent said that the best reason to conserve nature is “to preserve the benefits people can derive from it,” while 42 percent said that the best reason is “for its own sake.” But among youth, 56 percent chose “for its own sake” as compared to 44 percent who chose “to preserve benefits people can derive from it.”26 The implication I draw is either that US society is increasingly valuing nature intrinsically, or that we all start out valuing nature intrinsically and are trained as adults to move to a more instrumental rhetorical strategy. Either way, it seems that speaking about valuing nature for what it is comes closer to matching the true motivation of the generation coming up.

4.

Interwoven decoupling includes the ideal of equitable and easy access to nature — both bucolic and wild — not only by physically including both in the cities where most humans will reside, but also by preserving roles for humans in all that nature out there beyond the city, from conservation interventions to cultural or recreational sustainable use.

So — how to have our nature and love it too? While most of our food is grown on super-high-yield farms by highly trained farmers or in cultured meat factories, we will keep our urban P-Patches and demonstration farms and backyard vegetable gardens and chickens — up to and including beautiful midsize farms with some yurts out back for tourists. We will keep them not because they are making a materially meaningful contribution to our food supply, but because we like them. Mary Kimball, the executive director at the Center for Land-Based Learning, which runs the farm at the Cannery, has been farming since she was a child. She says that urban agriculture will always supply a tiny percentage of calories, but that it is valuable because people are “longing for that connection” to the soil, to food production, and to our species’ grand history of agriculture. But these urban ag areas need not be vast. As the Cannery demonstrates, many people prefer to gaze upon pole beans or henhouses as they jog or walk by rather than do any sowing, reaping, or mucking themselves. And that’s okay. But many of us do want this stuff around.

Recognizing that the role of low-yield traditional agriculture (or neotraditional, in the case of such agricultural systems as organic, biodynamic, or permaculture) is essentially cultural doesn’t make it less important or urgent. But it does clarify its role in a helpful way. If we are promoting traditional and neotraditional farming because we think it is the best way to feed the world, then it doesn’t matter so much where the farms are or who is farming or whether the farms are open to the public at all. But once we see these farms as important cultural amenities, it becomes obvious that equitable access to these farms is more important than their total acreage. Rich and poor children alike should have the opportunity to grow snap peas and pet piglets if these things interest them. Men and women of all ages and social classes should have access to a bit of land to grow food on if they so desire. Community gardens should not be allowed to spring up haphazardly in neighborhoods where hipster gardener types tend to live, but should be deployed throughout cities as a matter of urban planning.

While we nourish our spirits with this small-scale food production, we should not disdain the farms outside the city that grow the bulk of our food. Today, food activists sneer at “factory farms” and “industrial agriculture” in part because of real concerns about pollution and animal welfare, but in part because they simply aren’t considered pretty or charming. As these farms become even more efficient and less polluting, and get out of the business of raising sentient creatures, I hope that even those browsing the Davis Farmers’ Market with their basket in hand will learn to see the contemporary beauty in their careful control, closed-system design, and outstanding yields.

After all, the hyperdomesticated tomatoes and broccoli we will be growing in our rooftop community gardens are technological human inventions barely resembling their wild forebears. The high-yield hybrid agriculture that we will need to feed 11 billion people without devastating global habitat loss is no more “unnatural” than they are — it is an extension of the deep and interwoven relationship we have had with the species we eat for thousands of years. And it is that same relationship that we desire to enact and celebrate in micro-farms and gardens dotted throughout the urban matrix. The farms that actually feed us and the farms we can walk to from our dense urban housing are not opposites but cousins. Approaches, innovations, and deliciousness from each can cross over from one into the other.

In the “interwoven decoupling” scenario, the human footprint is much reduced in size, but the cities where most people live aren’t islands, separated from nature. And the nature in cities isn’t limited to the mini-farms where people can encounter agricultural nature. We need wilder nature too. So in this vision, tendrils of nature extend from the very large natural landscapes made possible by compact, efficient agricultural systems into the very heart of every city. Advances in energy technology will, one can hope, reduce the need for roads, pipelines, well pads, seismic lines, and other energy infrastructure that have cut ecosystems like the southern boreal forest and Gulf Coast wetlands into fine lace. All energy sources have some footprint, and so some trade-offs are inevitable, but the footprint for clean energy sources should shrink over time as technology improves. 
 

So — how to have our nature and love it too?


Just as the urban farms exist not primarily to produce calories but to produce experience, so will the ribbons of undeveloped land that wind throughout the city provide experience first and biodiversity second. These inclusions will increase the size of the city to some degree, but they need not be huge; planned as linear natural parks and farms, they can be within walking distance of every resident without covering too much area. Those people who desire vaster spaces or fewer other people to enjoy their natural recreation can simply follow these corridors out into the large core areas of nature outside the city. The city’s natural places will include our current style of manicured city parks, which have been included in urban planning since the 1850s design of New York’s Central Park — 843 acres of greenery right in the middle of one of the densest cities in the nation — and earlier. They’ll also include far wilder and freer spaces, places to run and play and forage and maybe even camp.

The majority of the earth will be given over to nature (though not devoid of humans — there will always be groups and individuals who choose to live outside the city, from indigenous groups with deep ties to the land to those for whom being-in-nature is a deeply felt avocation). Nature will also be invited into the urban matrix in the form of wilderness parks and mini-farms. The tricky urban planning piece will be making sure that those who want to interact with each type have equitable access.

The most obvious way to do this is simply to make sure that natural areas are within walking distance of every resident. Flooding the market with green space will also help avoid “environmental gentrification,” in which well-meaning projects designed to improve neighborhoods by welcoming in nature end up making those areas so desirable that poor people are priced out.27 One way to scale up green space quickly is to embrace the “novel ecosystems” that are already a feature of every urban, suburban, and peri-urban landscape. These are the feral, untended lots, the overlooked highway medians, the bit of woods out back of the big-box store. These places are wild and diverse but have been considered trashy because they tend to feature a robust population of nonnative species. As our aesthetics and ideologies of ecological purity change in a changing world, I hope that these fascinating spontaneous natures will be embraced as the small wildernesses that they are. They can be brought into the fold with a few minimal interventions to make sure that they are safe and accessible — paths and benches, removal of trash and poisonous plants, and so on — while leaving their beautiful spontaneous ecologies intact.

Natural areas should have a variety of rules; not all should be of the “look but don’t touch” variety — a style of interaction with nature peculiar to the Western elite. People who want to hunt, fish, swim, build forts, collect mushrooms, and otherwise interact in a more interventionist style should have areas within the city where those activities are allowed, and areas outside it for pursuits, like hunting big game, that require larger landscapes. Those who want to participate in ancient or modern cultural relationships with nature should be able to do so, whether it be indigenous food gathering, tending hedgerows, or rock climbing.

This vision does not require a radical rewiring of the landscape, at least not in the United States. Twenty-eight percent of the country is already federally owned public land,28 city parks make up an average of 8.2 percent of our cities,29 our population is 80.7 percent urban and rising, and agricultural yields continue to improve. Other countries are different stories, but urbanization is on the rise everywhere, and there is at least a global desire to increase agricultural yields, improve distribution, and reduce waste, even if the means are in some places wanting. What remains is to improve the agricultural system further and bring that highly efficient mode of agriculture to all countries, to add and expand protected areas for nature, to support small farms and parks in the cities and ensure that all residents have access to them, and to connect everything up in a vast web of green.

If we use up nature, we will be miserable. If we wall ourselves off from nature, we will be miserable. The path to joy is to allow nonhuman nature to thrive by reducing our demands upon it, while loving ourselves enough to allow ourselves to remain within it. The future won’t be quite like the Cannery. Those houses are too big and the prices too high. But there are glimmers of a future city here: dense, walkable, crisscrossed by green, buzzing with bees. As the developer of the Cannery showed me around its back end, we observed several healthy-looking jackrabbits loping up the slope of a water retention feature. I asked about coyotes, and he got nervous. He feared his customers wanted nature but not something quite that wild. But we do, in our hearts. Upon every child’s head, a buttercup crown, behind every neighborhood, a den of coyotes, and in every pot, a vat-cultured chicken.

Read more from Breakthrough Journal, No. 7
Democracy in the Anthropocene
Featuring pieces by Erle Ellis, Calestous Juma,
Jennifer Bernstein, and Siddhartha Shome

Nature for the People

Imagine a planet without wild places. A planet so covered with aquaculture, plantations, rangelands, farms, villages, and cities that wild creatures and wild places, if they still exist at all, linger only at the margins of working landscapes, cityscapes, and seascapes.

Is this the planet you want to live on? No matter who you are, I bet it’s not. Your ideal planet would sustain both people and nature, leaving plenty of room for wild creatures to live and thrive in habitats free of human interference.

Why think about this? The answer is simple. If you aspire to live on a planet where wild creatures roam unhindered across wild landscapes, this is not the planet you are making. While times have likely never been better for most people, the opposite is true for the rest of life on Earth.1

Less than one-quarter of Earth’s land remains without human populations or land use.2 Wild species, especially wild animals, are going extinct faster than they can be counted. The cause is mostly habitat conversion and loss, combined with unregulated hunting and resource use, pollution, competition with species transported from other parts of the world, and increasingly, global changes in climate.3

The case has been made many, many times that transforming the planet in this way is ecologically, economically, and ethically unsustainable.4 These claims and their supporting evidence are important. Yet facts and rational arguments are not nearly enough to change the way things are going.

To sustain Earth’s wild places and wild species into the deep future, an unprecedented level of social change will be required. The good news is that the roots of this great social transition are already evolving. By engaging with these evolutionary processes of sociocultural change, human societies might ultimately produce and sustain a far better world than the one we are creating now.

1.

This planet is the way it is because our societies made it that way. There is no control room on Earth. No one is in charge of the planet. And no one is intentionally destroying nature. People transform ecology to make a living. We humans, as individuals and societies, are shaping this planet while we are busy making other plans. Planetary change is social change, and contemporary societies are transforming the world at rates and scales unprecedented for any other species in Earth’s history.5 Such is the nature of the Anthropocene.

Human transformation of Earth is nothing new. As Earth’s “ultimate ecosystem engineers,” humans have always transformed environments to sustain themselves, using ever more complex tools, from fire and social strategies for hunting and foraging to domesticating species, grazing livestock, tilling soils, and building global supply chains to service urban supermarkets.6

Since our species began to spread across the continents more than 60,000 years ago, human societies have grown in scale from a few dozen individuals to hundreds of millions. Through processes of cooperative ecosystem engineering, the potential productivity of a single square kilometer of land has gone from sustaining fewer than ten individuals supported by foraging and hunting to sustaining thousands through intensive agricultural practices. Energy use per individual has also scaled up by a factor of more than 20, from the burning of biomass to cook food to the use of fossil fuels and nonbiological energy sources (like solar, wind, and nuclear) to do our work. This energy now powers the flow of materials, energy, biota, and information across the globe, and human societies, as a result, have emerged as a force of nature.

These unmatched capacities to shape Earth’s functioning are based on socially learned and socially enacted behaviors. Unlike beavers, ants, and other species whose abilities to alter environments by building dams and nests are biological, humans must learn from others how to hunt, farm, trade stocks, or even live together. Humans are ultrasocial — the most social species that has ever existed on Earth, unable to survive or to reproduce without learning how to live from and with others.7 It takes a social group, a village, a nation-state, or a global trading system for humans to make a living, or to transform an environment. And it takes social learning — culture — to make it all possible. Through millennia of social change and cultural evolution, human societies have accumulated cultural capacities that have enabled us to become very good at making Earth’s ecology work for us.
 

Environmental change is social change, and social change is cultural change.


Humans reshape environments through social processes that vary hugely both within and among societies and change as societies change.8 In some societies, most individuals engage directly in altering environments to sustain themselves, as with hunter-gatherers and subsistence farmers. In others, some farm, make things, or serve others, while some trade, tax, or govern. In this way, the myriad social processes that sustain humans and transform environments have come to function entirely through trust, cooperation, competition, and exchange among complete strangers through social networks that extend across an entire planet.

Levels of inequality within and among societies may never have been greater than they are today. The demands of some individuals, social groups, and societies shape Earth’s ecology far more than those of others. In this way, human transformation of environments might be chalked up to bad people doing bad things. Yet the reality is far more complex.

Human societies, even some small bands of hunter-gatherers, have long been socially differentiated, with unequal sharing of power, resources, and opportunities among individuals and social groups structured through historically and culturally determined societal processes.9 Societies characterized by relatively equitable social relations have proven just as capable of mismanaging their ecosystems as the more unequal ones — witness the mass megafaunal extinctions caused by hunter-gatherers at the end of the Pleistocene.10 Moreover, like social inequality and harmful social relations, the patterns and processes of unsustainable environmental management and resource extraction reflect the social relationships, exchanges, and institutions of the societies that produce and sustain them.

As with the societies we live in, the planet we have inherited from our ancestors, and the one we are making now, is a social construct, shaped physically and culturally by the perceptions, values, aspirations, tools, and institutions of societies past and present.11 These social structures and processes have changed across generations as the cultural practices and institutions that produced them have evolved.12 In the Anthropocene, Earth’s ecology changes with us. Environmental change is social change, and social change is cultural change.

2.

Though only about half of Earth’s land is currently used for agriculture, forestry, settlements, and other human infrastructure, most of the remainder represents lower-productivity lands left unused for economic reasons.13 Look out any airplane window and the evidence is clear. The steep slopes of hills and mountains remain the lesser-used lands almost everywhere — islands of remnant and recovering vegetation lost in a sea of agriculture and settlements.

Earth as a whole is no different. The greatest areas still unused by us are its coldest, driest, and least productive regions — the Sahara, northern Siberia, and other major deserts and frigid polar regions — with the exception of remote areas of dense tropical forest in Amazonia and the Congo.

By the numbers, existing global conservation efforts appear robust, including designated protected areas covering about 15 percent of Earth’s land. Some are quite large — a million square kilometers in Greenland (0.8 percent of Earth’s land), half a million in the Sahara (0.4 percent), and between a third and half a million in southern Africa (0.2–0.4 percent). Some marine protected areas are even larger. Total global protection has also been increasing rapidly in recent decades. There are now hundreds of thousands of terrestrial protected areas around the world.
 

We’ve taken the better half of Earth’s land and left the rest.


While these efforts to conserve habitats and wild species are vitally important, they reveal a troubling weakness. The parts of Earth we’ve protected so far have been the easy ones, the parts that mostly remained unused because they were too cold, dry, steep, or remote to develop cheaply. In contrast, Earth’s most favorable climes and most fertile and accessible lands are for the most part already in use.14 We’ve taken the better half of Earth’s land and left the rest.

Outside deserts and polar regions, the land we’ve left behind or protected is a patchwork of remnant and recovering habitats dispersed across and embedded within the engineered mosaics of used lands and infrastructure that sustain our societies. The EU has more than 120,000 protected areas, but more than two-thirds are smaller than 100 hectares.15 By fragmenting and shrinking habitats into ever smaller and less connected parcels, Earth’s wild species have been subdivided into smaller, more isolated, and more vulnerable populations exposed to the myriad pressures and disturbances that come with proximity to human societies.16 This is especially true for the vertebrate species who, like us, require extensive habitats to survive.

It’s hard enough to sustain wild habitats and wild populations in the small disturbed fragments we’ve marooned within our working landscapes. Yet we are making it even harder. With near certainty, global temperatures will rise by at least 2 degrees this century. To live within the habitats they are adapted to, some species are going to have to migrate hundreds to thousands of miles toward the poles in the coming decades. They will need to cross farmlands, cities, and highways to avoid extinction.

For these and other reasons, even if land use were kept just as it is right now, extinctions would likely continue to increase into the future.17 Yet natural habitats continue to fall to deforestation, unsustainable harvests, and conversion to agriculture, settlements, and other forms of land use. In this time of unprecedented human prosperity, ensuring that the rest of Earth’s species can make it through the Anthropocene will require figuring out how to free up and reconnect habitats across Earth’s most productive half.

3.

Thanks to increasingly productive agricultural systems, more food has been produced per person every decade since the 1960s without a major increase in the global area of land cultivated for crops since the 1970s.18 With more than 7 billion people on Earth, rates of population growth are also on the decline and should top out at 10 billion or so this century.19

With careful management, projected human populations can be sustained using even less land than today, and more room can be set aside for other species in protected areas across the continents. Agricultural intensification and urbanization continue to enable societies to produce more with less, leaving behind huge areas of marginal lands in the process. The same trends that enabled forest recoveries in the developed world almost a century ago are moving well along in China and increasingly in many other developing countries. With the right kinds of policies and development — like job creation in cities that pulls rather than pushes the rural poor to better opportunities — large swaths of land can be released for other uses, conservation included.

Enhancing the productivity of land over time is no small feat, nor is urbanizing in an equitable fashion, nor sustaining wildlife in protected areas surrounded by human development. But the greatest challenge of them all, the grand challenge of the Anthropocene, is to bring these three crucial endeavors together fruitfully across the vast tapestry of shared spaces we’ve constructed. Most of the anthropogenic biosphere is now shared space — patchworks of remnant, recovering, and less-used habitats we’ve left embedded within our producing landscapes.20 Within these mosaics, wild creatures still make their living within the human world, where sometimes they can even thrive.21
 

We’ve become very good at making Earth’s most productive lands work for us.


To expand the wild spaces needed to sustain the rest of life on Earth, it will be necessary to redesign and rebuild the anthropogenic landscapes we’ve constructed to sustain ourselves. We’ve become very good at making Earth’s most productive lands work for us. We’ve crisscrossed the continents with roads and other connective infrastructure that have made contemporary societies the most interdependent that have ever lived on Earth. It is time to put the same design and engineering prowess to work expanding and reconnecting wild habitats.

To design, engineer, and construct a planet that will sustain wild species through the Anthropocene demands a triple focus: production and protection must be advanced together, and the interface between the two must function toward both ends. At the same time that less productive lands are reclaimed for wildlife, these must be protected and reconnected through the dense webs of human infrastructure that still divide the lands we’ve left unused. Corridors of unbroken habitat, free of human pressures, must be built at scale to connect the largest protected areas across continents. Protection will only work if it forms a continental web of wildlife mobility that serves the large and the small, the slow and the fast, in the movements they must make to survive across the Anthropocene.

The good news is that multifunctional landscapes capable of producing high yields for humans while enabling species to live and move are an increasingly common management strategy in Europe, Japan, and some other developed nations. But there will always be trade-offs.22 In Iowa, for instance, one of Earth’s most productive agricultural regions, converting 10 percent of land area to linked prairie strips delivered disproportionately large conservation benefits and connectivity, but also reduced corn and soybean production.23 While conservationists and even many farmers prefer the crop/prairie mosaic to wall-to-wall crops, missed yields must be made up by production elsewhere — likely in less productive areas requiring more land. Moreover, creating multifunctional landscapes of this sort can also create and sustain local conflicts — some violent — among farmers, pastoralists, conservationists, and wildlife, especially when the trade-offs are unequal, as they often are.24

For these reasons, sustaining production and protection in the same landscape is a demanding social project. It requires intensive and ongoing negotiations and investments shared among landowners, governments, the public, businesses, and other stakeholders. Yet such efforts are expanding around the world, including collaborations among diverse land management and conservation institutions to interconnect habitats across continents. But new strategies and even new institutions will not be enough. Protection and connection at the planetary scales needed to sustain wild creatures and wild spaces through the Anthropocene will not succeed without connecting deeply with the abiding human love and concern for wild nature.

4.

Contemporary industrial societies are not the first to value and conserve wild places and wildlife.25 From the traditional tapu areas of Polynesia (the source of the word “taboo”) to the sacred groves of India, the Maasai’s eschewal of game hunting in East Africa to the royal hunting grounds of Europe — through to the millions of acres of public lands designated by Teddy Roosevelt — countless forms of conservation have cropped up for millennia, emerging from the cultural priorities of the societies that created them. It is likely that most, if not all, societies, from the days of hunter-gatherers to the present, have practiced some form of conservation, and that some of these efforts helped to sustain biodiversity for generations.26

Yet the social consequences and effectiveness of conservation practices reflect the cultural behaviors, institutions, perspectives, and aspirations of the societies that enact them. In most of the examples above, limits to habitat use or hunting served to reinforce and reproduce social inequality. The imposition of tapu areas by priests helped sustain their privileged role in Polynesian societies. Royal hunting grounds and bans on hunting specific species served as much to signify royal power as to conserve wild game. Colonial game preserves and early conservation areas imposed through the repression and removal of indigenous peoples reflect the worst forms of cultural dominance. Even Roosevelt’s more democratic conservation ethos developed from his hunting experiences in just such environments. Conservation in this sense can sometimes represent nothing more than a “green grab” imposed by elites on the less powerful.27

Yet this is clearly not the only cultural path to conservation. The Maasai’s cultural prohibitions on the consumption of game have long been reproduced through a form of identity politics; to be Maasai is to herd cattle and not to hunt for sustenance.28 To live off one’s cattle is to be an honored member of society — not to is to lose one’s social standing. The peaceful coexistence of Maasai cattle with vast herds of wild herbivores is an amazing thing to behold; from an agricultural perspective, a lot of grass is being given away for free. Yet with few exceptions, this has been the cultural norm among the Maasai for hundreds of years, and their rangelands have helped to conserve the most diverse mammalian megafauna assemblages left on Earth.29
 

The key to a global social effort to sustain Earth’s ecological heritage is to stop believing that there is any single best way to value nature.


In the developed world, conservationists have sparred for more than a century over an appropriate cultural stance for preserving the world’s natural resources, lands, and wildlife, a conflict exemplified by the debate between John Muir, staunch defender of “Nature with a capital N,” and Gifford Pinchot, “wise-use” conservationist.30 The ensuing split between ecocentric and anthropocentric philosophies has roiled environmental efforts for decades, resulting in conflicts over what nature to conserve and why.31 Even Aldo Leopold’s calls for a “harmony between men and land” failed to resolve this dispute. Debates along these lines continue in arguments about the need for explicit valuation of land or ecosystem services to justify conserving them.32

But the key to a global social effort to sustain Earth’s ecological heritage is to stop believing that there is any single best way to value nature. Humans shape their environments in accordance with their cultural beliefs, values, and practices. Some like their nature “pristine,” some like it managed productively, and some just like it on TV. But each of us, in our own individual and socialized way, wants wildlife and biodiversity out there; biophilia is basic to human psychology, perhaps as ingrained as our need to be social.33

There are so, so many ways to value nature — whether it be a pasture full of cattle, a plot of land, a city park, a protected wilderness area, a hunting preserve, or a whole world that needs saving. Scientific evidence combined with legal and economic frameworks will of course continue playing an important role in conservation. Yet these won’t be nearly enough. When it comes to hard decisions about values and sharing, people’s goals, incentives, and actions are shaped by cultural norms at the individual, group, and societal levels. Even within the increasingly globalized societies most people now live in, different people in different social groups make value-based decisions very differently.34 With such an amazing diversity of cultures of nature, how could any specific ethical or value-based approach bring people together around a common global project to conserve Earth’s ecological heritage?

5.

Given the deeply social nature of conservation and the vast diversity of our cultural preferences toward it, building a common global conservation project will not be easy. Yet three basic approaches to fostering the collective aspirations behind such a project would seem necessary. The first is to be all-inclusive — to celebrate all nature values across all value systems that exist around the world.35 The second is to seek out and appeal to those values already held in common. And the third is to disseminate and promote new or existing values with the potential to broadly support the project. All will likely be needed if human aspirations are to reshape a better world both for people and the rest of nature.

Ironically, though, stories of environmental doom have come to dominate the discourse today. Narratives warning of a “sixth mass extinction” and many other environmental crises do have a substantial scientific basis and have helped to put serious concerns about biodiversity loss on the map. Yet such negative messaging has also been shown to be not only dismal and depressing but also disempowering, off-putting, and generally unsustainable.36 Most importantly, visions of future catastrophe are unlikely to motivate the vast majority of people whose lives turn on their own daily struggles, no matter how popular such narratives might be among some educated elites in developed nations.
 

There is increasing evidence to suggest that people respond more actively to positive, aspirational messages that empower them to act toward better outcomes.


For more than a century, appealing directly to a shared love of nature has bolstered conservation efforts. Increasingly, calls for “Earth Optimism” and “Nature for All” have gained traction, directing attention toward the successes and positive aspirations of conservation movements around the world, rather than their problems and failures.37 Indeed, there is increasing evidence to suggest that people respond more actively to positive, aspirational messages that empower them to act toward better outcomes.38 Even in the midst of their daily struggles, all people aspire to a better life. These aspirations are many, but they are not infinite, and many are held in common. Aspirational natures, natures that represent what people want, might thus serve as the ultimate guide to expanding conservation into a truly universal human project.

In his 2016 book Half-Earth, Edward O. Wilson proposed what is likely the most proactive and grandly ambitious aspirational vision of conservation ever.39 While roughly anchored in science, Wilson’s vision focuses more broadly on making the biosphere great again through an unapologetically enormous project that would protect and restore half of the planet to conserve biodiversity into the deep future. The precise way forward is not made clear in the book, but his vision is radically simple, crisp, and clear — a better Earth for the rest of nature will also be a better Earth for us.

The appeal of Half-Earth lies in its simplicity. Sharing Earth half-and-half sure seems like a fair shake for both parties. While the political, economic, and other social implications of such a project are staggering,40 as a positive, easy-to-embrace vision of a way forward, it might be impossible to beat. Moreover, done right, such a plan would almost certainly facilitate the conservation of Earth’s ecological heritage into the deep future.

Half-Earth and the related Nature Needs Half project41 are almost impossibly grand visions for Earth’s future. Yet their combined appeal to human love of nature and aspirations for a better future are likely the most universal of any call for conservation in human history. Crisis narratives and scientifically defined environmental limits might engage some people, but their focus on planetary problems with complex technocratic solutions is almost impossible to grasp at a personal level.

Empowering people to join a popular social project that everyone can understand and act on in their own ways might be just what it takes to change Earth for the better. Such efforts build on a basic trust in humans and the shared values they hold in nature around the world. In these times of unprecedented global connectivity and interdependence among human societies, the time has never been riper for a global social enterprise to reshape our planet for the better.

6.

The devil is, of course, in the details. It would take deep, deep pockets to purchase conservation rights to 35 percent of Earth’s land to top up the 15 percent already protected (a wild guess: about four billion hectares at an average cost of $100 to $1,000 per hectare). Even if funds were available, what would prevent such a scheme from becoming merely the greatest green grab ever?
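The essay’s own “wild guess” can be made explicit with a bit of arithmetic. A minimal sketch, using only the figures given above (four billion hectares, $100 to $1,000 per hectare), shows the scale of the funding problem:

```python
# Back-of-the-envelope cost of topping up protected land from 15 percent
# to half of Earth's surface, using the essay's rough figures only.
hectares_needed = 4e9            # ~35% of Earth's land, per the essay's guess
cost_low, cost_high = 100, 1000  # assumed $/hectare range from the essay

total_low = hectares_needed * cost_low    # $400 billion
total_high = hectares_needed * cost_high  # $4 trillion

print(f"${total_low / 1e12:.1f} to ${total_high / 1e12:.0f} trillion")
```

Even the low end of that range, roughly $400 billion, dwarfs current global conservation spending, which is part of why a pure land-purchase strategy seems so implausible.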

The first step in reshaping half of Earth’s land into a global conservation reserve is to recognize that this would introduce the most challenging land reallocation process in history. Sharing the planet (unequally) with one another is hard enough. Sharing land equitably across ecoregions — including Earth’s most productive and densely populated regions — would demand global trade-offs in land use that are hard even to imagine. Whose half will be conserved or restored? Where will lost agricultural production be made up? Who will win and who will lose in the great global land trade-off? Who will compensate whom?

Another key question is whether land allocations should be guided by expediency — wherever land is cheap or available for political or other reasons — or by ecological priorities — where species or habitats are rarest and most in need of conservation. To date, expediency has been the rule. Yet this strategy cannot ensure that the full diversity of Earth’s habitats and species will be conserved or that habitat connectivity across continents will be possible.

A truly equitable, effective, and sustainable global conservation system will need to be more than a global land deal or a global property portfolio in the hands of a few powerful institutions. An equitable system is far more likely to emerge as a shared social project evolving from the bottom-up aspirations of the world’s people, their societies, and their dynamic environments over the very long term. It will take sustained social learning, cultural negotiation, and cooperation across societies to shape conservation and connectivity across Earth’s producing and protected lands.

This will mean multilevel, not top-down, modes of governance, defined by strong local and regional institutions, as well as novel forms of social collaboration among private and public stakeholders at all levels. Elinor Ostrom’s research into the sustainable shared management of forests and other common-pool resources illuminates many of the institutional practices that might facilitate such collective management of a shared reserve covering half the Earth.42
 

The good news is that the roots of this great social transition are already evolving.


Such coordination is already off to a promising start in many regions supported by international initiatives like the Convention on Biological Diversity, the Yellowstone to Yukon project, Natura 2000, the Landscape Connectivity call to action of the World Business Council for Sustainable Development, and many others. Emergent networks of nonprofits, philanthropic organizations, and private-sector groups are also taking on the challenge of global conservation and connectivity and finding common ground in shared values like “planetary health” and Earth stewardship.43

To continue moving forward, the selection, design, and management of protected areas and the connections between them must continue to evolve and diversify if they are to serve the needs of all people and all species.44 In particular, the notion of a dichotomy between used lands and protected areas will need to transition into a continuum of strategies for integration, from interconnected regional national parks and indigenous reserves to urban green spaces, prairie strips, hedgerows, wildlife bridges, dam removal projects, and experiments with conservation management. Diverse solutions and creativity will be essential to navigating the compromises that will make a shared planet valuable to people and viable for wildlife.45

As humans increasingly become an interconnected and interdependent global species with stabilizing populations and broadly rising welfare, it is an increasingly imaginable, if daunting, prospect that our societies might yet pool our resources to construct, connect, and sustain a global ecological niche that includes the rest of life on Earth. With 15 percent of Earth’s land already protected and another 2 percent on the way, protecting 50 percent of Earth’s land is at least in the realm of possibility.

What this shared planet might ultimately look like remains largely the domain of visionaries and science fiction. Perhaps it might evolve into a simple binary world, half urbanity and intensive agriculture, the rest protected, untouched — nature somewhere else. Yet I for one do not believe that such a world is either possible or desirable; the planet is shared now, and always will be. The question is not just how much to share, but how to share it.

Imagine a planetary dashboard, like a ballot box, in the hands of every person on Earth. What would Earth look like if we all voted to construct a globally connected reserve covering half of Earth’s land? Unfortunately, the answer is not simple at all.

Would this create a global patchwork of organic farms, green cities, and interconnected habitat? Or separate seas of dense cities and farms, with protected wilderness areas far away? If habitats are to be restored and protected in Earth’s most productive ecoregions, how will this loss of production be made up? By clearing larger areas in less productive ecoregions or by increasing yields on existing farmlands? Will shared spaces, where people and wildlife coexist in close proximity, increase, or would they disappear? Will nature feel closer or farther away? These are just some of the hard questions.

Either way, the people, together, will decide. Different people in different regions will likely do things differently. But ultimately, the call for an aspirational social movement to conserve half of Earth for the rest of nature will need to serve as a call to develop better, and to develop differently, and not as a call to end development.

Over centuries, Maasai culture shaped productive, shared, and incredibly biodiverse landscapes. This is a gift to every one of us on Earth now and in the future. The megafauna and landscapes they helped to sustain might yet outlast the Great Pyramids or New York City. As Earth’s first ultrasocial ecosystem engineers, we as a species will continue to shape the world. What will be our legacy? It is hard to imagine a greater gift to the future than a planet as richly diverse as, or richer than, the one that evolved in the millions of years before our common ancestors first walked the plains of Africa.


No. 7/Summer 2017

Untapped Potential

A biodiversity hotspot blessed with tropical rainforests and nearly 5 percent of all terrestrial species living on Earth, Costa Rica is held up today as a paragon of environmental virtue. With more than 50 percent of its land covered in some kind of forest, and as much as 30 percent officially cordoned off in the form of parks, reserves, and other protected areas, the “Green Republic”1 features abundant forest conservation in tandem with “sustainable development,” its high score on the United Nations Development Programme’s Human Development Index matched only by its spot at the top of Yale’s Environmental Performance Index. Greener still, renewable sources provide more than 80 percent of the country’s electricity.

What environmental champions might find themselves rather less enthusiastic about, however, is that around 80 percent of this renewable generation, or roughly two-thirds of Costa Rica’s total electricity, comes from hydroelectric sources. In some ways, this figure should come as little surprise; hydroelectricity provides well over half of the world’s renewable energy supply and offers distinct advantages over other renewables. Unlike solar and wind, which are available only when the sun shines or the wind blows, hydropower is usually available around the clock and on demand.2 Large hydroelectric projects also tend to generate fewer lifetime greenhouse gas emissions than other renewables and have by far the lowest average levelized cost of electricity among all renewable energy sources in OECD countries.

Figure 1: Global electricity generation by fuel, 1973–2010

Figure credit: IEA (2012)

Nevertheless, hydroelectric development has raised the ire of environmental and indigenous rights activists alike due to its considerable social and environmental costs, most notably in the form of human displacement and habitat disruption. But a closer look at Costa Rica not only demonstrates a case in which some of these impacts have been successfully mitigated, but also confirms that hydroelectric energy lies behind much of the country’s environmental preeminence — both directly, in providing energy that is abundant, clean, and cheap, and indirectly, in motivating an ethic of environmental protection and forest conservation. Indeed, the modern history of environmental protection in Costa Rica — its very identity as a “Green Republic” and its well-deserved reputation as a model of environmental sustainability — shows itself to be deeply intertwined with its history of hydroelectric development.

In light of the twin challenges of climate change and human development in the 21st century, Costa Rica’s history should prompt a more serious consideration of hydroelectricity in other rapidly developing countries, where the need for energy is the greatest, deforestation looms the largest, and hydroelectric potential remains woefully untapped. In Africa in particular, hydroelectricity has the capacity to at once foster energy access, promote environmental conservation, and fuel sustainable development. While hydroelectric development is not necessarily possible or even desirable in all contexts, the case of Costa Rica provides a preliminary model for when hydropower would in fact make good sense.

1.

That hydroelectric energy is abundant, clean, and cheap should come as no great surprise. What is often overlooked, however, is the role hydroelectric development has historically played as an early and consistent impetus for conservation. Indeed, the importance of hydroelectric energy for environmental sustainability in Costa Rica goes well beyond its simply being an emissions-free source of two-thirds of the country’s electric power.

Electricity was introduced to Costa Rica in 1884 when the Costa Rica Electric Company illuminated 25 street lamps in San Jose. The country’s first hydroelectric power plants were built shortly thereafter, around the beginning of the 20th century, and hydroelectricity has provided an important source of power ever since. The beginning of the 20th century also witnessed the first systematic attempts in Costa Rica to protect the environment, including the enactment of the 1909 Fire Law, motivated in part by hydrological considerations, which restricted the use of fire to clear forested land. This and other early-20th-century environmental protection laws, however, lacked strong enforcement measures and did little to curb deforestation.

It was only after the establishment of the Second Republic at the end of 1948 that serious attempts at environmental protection got under way, when the new government headed by José Figueres Ferrer abolished the military — an act many have credited with freeing up funds for social services and environmental protection — and established the Costa Rican Institute of Electricity (Instituto Costarricense de Electricidad, or ICE), a public utility corporation charged with supplying electricity to the country. Recognizing the country’s hydroelectric potential, ICE decided to focus on hydropower, a strategy that would provide a strong impetus for forest conservation in Costa Rica in the years to come. “Understanding the importance of forest cover for ensuring the hydrologic needs of the ICE,” as Sterling Evans writes in The Green Republic: A Conservation History of Costa Rica, “the Figueres administration issued a decree in 1949 to establish a Forest Council to inventory forest resources and to protect forested watersheds from diseases and fires.”

Figure 2: Sources of electricity in Costa Rica

Figure credit: Granoff et al. (2015)

If the initial desire to satisfy “the hydrologic needs of the ICE” motivated early forest conservation in Costa Rica, the need to maintain hydroelectric power potential would continue to influence forest protection efforts in Costa Rica in the decades that followed, especially as deforestation continued apace. In 1967, the government set up a commission, which included ICE, to study the problem of unregulated deforestation and to draft forestry legislation. In November of 1969, the Costa Rican Legislative Assembly passed the landmark Ley Forestal (Forestry Law), which launched several initiatives to establish a system of national parks and protected areas. Despite the designation of new national parks, however, deforestation in unprotected areas proceeded unhindered into the 1970s. A report to the Legislative Assembly in early 1976 found that the destruction of forests endangered watersheds and, consequently, Costa Rica’s hydropower potential, warning that the situation was “extremely critical” and that Costa Rica was in a “state of true ecological catastrophe.” Emphasizing the impact of “enormous clear-cuts” on watersheds and the prevalence of extensive forest fires, the report laid special emphasis on threats to both wildlife and hydropower potential.

Such findings in turn led to renewed and ultimately successful efforts to preserve Costa Rica’s forests. In 1977, the Costa Rican Legislative Assembly passed the National Parks Act, which turned the existing National Parks Department within the General Forestry Directorate into a separate National Parks Service.3 Between 1974 and 1978, nine new protected areas representing nearly 350,000 acres (nearly 3 percent of Costa Rica’s total land area) were added to the Costa Rican national park system. Among these was Corcovado National Park, the country’s largest at almost 105,000 acres. By 1982, approximately 1,033,000 acres of land, or 8.3 percent of the country, had been designated as conservation areas, and Costa Rica was well on its way to becoming the environmental success story it is today.4

But the desire to maintain hydroelectric power potential has not just influenced environmental conservation efforts in Costa Rica in a general way. There have also been many cases where forest areas were preserved because of their connection to specific hydroelectric projects. When development of the Arenal Hydroelectric Dam was undertaken in the late 1960s to generate electricity for northern Costa Rica, for instance, scientific studies classified the forests around the Monteverde region, including the Peñas Blancas Valley, as watersheds that merited protection. As a result, the area was included in the 35,000 hectares of national forest reserve designated in February of 1977 as, fittingly, the Arenal National Electric Energy Reserve.5 Thanks to these protections, by the 1980s, the Peñas Blancas Valley consisted almost entirely of primary closed-canopy forest.

It is clear that in Costa Rica, the perceived link between forest conservation and hydroelectric generation capacity served as a key component of the foundational narratives used to advance the cause of nature conservation in the country. Costa Rica’s history thus illustrates the ways in which hydroelectricity can provide both direct and indirect benefits, both energy provision and the development of an ethic of environmental protection, in developing countries. Even in Costa Rica, however, hydroelectric energy has not come without significant costs, and the social and environmental impacts associated with hydroelectric development today have left its future in the country uncertain.

2.

The disadvantages of hydroelectric development can be both social and environmental in nature. Environmentally, large dams permanently transform riverine ecosystems in significant ways. The construction of the Arenal Dam, for instance, turned at least four rivers into a freshwater lake ecosystem. The release of water from the reservoirs of large dams has also been shown to negatively impact downstream aquatic ecosystems, and because many of Costa Rica’s fish species migrate between fresh and salt water, or migrate long distances within freshwater, these disruptions have had a measurable impact on several species in the region.

The human cost can also be heavy, as large-scale hydroelectric projects are often accompanied by involuntary population displacements. According to a World Commission on Dams report, as of 2000, as many as 40 to 80 million people had been displaced worldwide by the creation of large dams.6 Large as they are, these numbers still don’t convey the full magnitude of the human costs associated with involuntary displacements, as development tends to disproportionately affect isolated indigenous communities who have longstanding economic and cultural ties to the land, and who often lack the contacts and institutional experience necessary to work the system and take full advantage of the new job prospects and economic opportunities that hydroelectric projects open up.

In Costa Rica, the most prominent cost of hydroelectric power projects has been the forced displacement of citizens living in the inundation areas of hydroelectric dams. Some official figures about dam-displaced people in Costa Rica are shown in Table 1.

Table 1: Official displacement figures for hydroelectric projects in Costa Rica

Data sources: Partridge (1993); Trujillo et al. (2012)

Sites for hydroelectric projects are determined largely by geographic and hydrologic considerations, and Costa Rica has been fortunate that such considerations have not yet necessitated the involuntary displacement of large numbers of people. The only project that has required substantial involuntary resettlement up until now has been the Arenal project, completed in 1980, for which resettlement was carried out relatively smoothly and painlessly. For the two communities relocated in the Arenal project — a village with a bank, primary school, and basic health facilities, and a community of people living on outlying ranches — ICE established an Inter-Institutional Task Force to aid in constructing new settlements, carrying out relocations, and fostering community building. Financial compensation packages were also distributed with the help of community leaders versed in family networks and preferences. During the first two or three years, those resettled did report difficulties, but economic development picked up soon after, and, according to anthropologist William Partridge of the Inter-American Development Bank, living standards surpassed those of the original communities within five years of the move. While it is important not to discount the hardships that accompany any involuntary resettlement, it is evident that the institutional framework established by the Costa Rican government and ICE minimized short-term burdens and provided for the development of long-term gains.

Although the social and environmental costs associated with hydroelectric energy in Costa Rica have proven manageable thus far, and the benefits, in terms of abundant clean energy, have been substantial, the future growth of hydroelectric energy in Costa Rica nevertheless remains uncertain. A large hydroelectric project known as Proyecto Hidroeléctrico El Diquís (El Diquís Hydroelectric Project), which has been in the planning stages for many years, has recently been stalled over social and environmental concerns. Designed to be the largest hydroelectric dam in Central America with a capacity of 631 megawatts, the El Diquís project, if implemented, would play a key role in Costa Rica’s clean energy future.

Figure 3: Total installed capacity in Costa Rica, with and without El Diquís

Figure credit: Carls et al. (2011)

Unlike other hydroelectric projects in Costa Rica, however, El Diquís impacts indigenous lands. While ICE has stated that indigenous people will not be displaced by the dam, a 2011 United Nations report by the special rapporteur on the rights of indigenous peoples found that “the indigenous peoples and organizations concerned generally believe that whatever consultations ICE carried out in the past were inadequate.” There are environmental concerns over El Diquís as well, particularly the potential effect of the dam on the Térraba-Sierpe National Wetlands, a site assigned international conservation importance under the Ramsar Convention, a 1971 intergovernmental treaty for the conservation and sustainable use of wetlands.

The stalemate over El Diquís illustrates some of the dilemmas, difficulties, and trade-offs involved in hydroelectric development. On one side of the equation, this project represents an abundant source of cheap, clean energy. A coal-powered plant of similar generating capacity would spew out 3.5 million tons of carbon dioxide, 7,000 tons of sulfur dioxide, and 3,300 tons of nitrogen oxides into the atmosphere every year. On the other side of the equation, El Diquís involves considerable human and environmental displacement. And though Costa Rica has historically shown that it has the institutional capacity and the procedural framework to successfully carry out dam-related resettlement, El Diquís represents a more daunting challenge, one that involves appropriation of land belonging to indigenous people who may lack the contacts and institutional experience necessary to benefit from new opportunities that open up.
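The 3.5-million-ton figure for a comparable coal plant is easy to sanity-check. A minimal sketch, assuming an illustrative capacity factor of about 60 percent and an emissions intensity of roughly one ton of CO2 per megawatt-hour for coal generation (both assumptions, not figures from this article):

```python
# Rough plausibility check on the avoided-CO2 figure for a coal plant
# of El Diquís's size (631 MW). The capacity factor and emissions
# intensity below are illustrative assumptions, not article data.
capacity_mw = 631
hours_per_year = 8760
capacity_factor = 0.6        # assumed typical for a baseload coal plant
co2_tons_per_mwh = 1.0       # assumed ~1 t CO2/MWh for coal generation

annual_mwh = capacity_mw * hours_per_year * capacity_factor
annual_co2_tons = annual_mwh * co2_tons_per_mwh

print(f"{annual_co2_tons / 1e6:.1f} million tons CO2/year")
```

The result lands around 3.3 million tons per year, in the same ballpark as the article’s cited 3.5 million, with the difference attributable to the assumed capacity factor and fuel mix.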

As Costa Rica demonstrates, the question of socially and environmentally sustainable hydropower, as with any other energy source, ultimately comes down to strong institutions and responsible governance. Effective institutions and policies have enabled Costa Rica to achieve a remarkable symbiosis between human development and nature conservation so far, with hydroelectric energy serving as catalyst. Whether Costa Rica’s institutions and policies will be able to resolve the stalemate over El Diquís, however, remains to be seen.

3.

There is a further complication to any wholesale embrace of hydroelectric development. While hydroelectric energy was once considered to be entirely free of greenhouse gas emissions, dams in tropical regions especially have recently been shown to emit substantial quantities of greenhouse gases, linked in part to the trees and other organic matter submerged when their reservoirs are filled. Specifically, research points to three main emissions pathways in hydroelectric dams: surface emissions of carbon dioxide, methane, and nitrous oxide; bubble emissions of methane from sediment, which result from the anaerobic decomposition of organic matter; and methane emissions at or near the turbine associated with turbulence in the flow of water.

While there seems to be broad scientific consensus that hydroelectric dams do contribute to anthropogenic greenhouse gas emissions in these ways, quantifying their emissions has become a matter of considerable scientific contention. Take the particularly heated and extended debate between Philip Fearnside of Brazil’s National Institute for Research in Amazonia and Luiz Rosa of the Federal University of Rio de Janeiro, for instance, which led the editors of the journal Climatic Change to issue a rather extraordinary editorial note on the declining “substrate for healthy scientific debate.”

Figure 4: Main processes for greenhouse gas emissions in dams

Figure credit: Demarty and Bastien (2011)

A review of 20 years of data on greenhouse gas emissions — compiled by various researchers from 18 hydroelectric dams (11 in Brazil, 4 in the Ivory Coast, and 1 each in French Guiana, Panama, and Australia) — found that, among Brazilian dams, greenhouse gas emissions ranged from 4.63 times those of a coal-fired power plant of comparable output (Barra Bonita) to 356 times lower (Itaipu). This kind of variation in the data led to the conclusion that “at this time, no global position can be taken regarding the importance and extent of GHG emissions in warm latitudes” from hydroelectric dams. To complicate matters further, others have argued that hydroelectric dams can also act as anthropogenic carbon sinks, permanently sequestering carbon in their reservoir beds and thereby mitigating some of the effects of anthropogenic greenhouse gas emissions. This phenomenon, however, is even less understood than greenhouse gas emissions from hydroelectric projects.

What becomes clear from these studies is the great deal of variability regarding emissions from hydroelectric projects, with factors such as location, dam and powerhouse design, reservoir characteristics, and reservoir age (emissions are highest in the first few years) playing important roles. It does appear, however, that hydroelectric power plants with high energy density (i.e., lots of energy produced per unit of reservoir area) do indeed produce far fewer emissions than comparable power plants burning fossil fuels. Although a definitive scientific consensus on the extent of greenhouse gas emissions from hydroelectric power plants has not yet been reached, available research clearly shows that while some hydroelectric projects (especially smaller ones) may have comparable emissions to fossil fuel power plants, large hydroelectric projects with deep reservoirs and high energy density — including Arenal, Reventazón, and the proposed El Diquís project in Costa Rica — are environmentally beneficial from an emissions perspective.
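The energy-density criterion can be made concrete as a simple ratio of installed capacity to flooded reservoir area. A minimal sketch — the two dam profiles below are illustrative assumptions for the sake of the calculation, not data from this article:

```python
# Power density (installed capacity per unit of reservoir area) as a
# rough screen for reservoir greenhouse gas emissions. Higher density
# means less flooded biomass per unit of energy generated.
def power_density_w_per_m2(capacity_mw: float, reservoir_km2: float) -> float:
    """Installed capacity divided by flooded area, in W per square meter."""
    return (capacity_mw * 1e6) / (reservoir_km2 * 1e6)

# A deep-reservoir, high-head project vs. a shallow, sprawling one
# (hypothetical figures chosen to illustrate the contrast).
high = power_density_w_per_m2(capacity_mw=600, reservoir_km2=25)    # 24 W/m^2
low = power_density_w_per_m2(capacity_mw=140, reservoir_km2=310)    # ~0.45 W/m^2

print(f"deep reservoir: {high:.1f} W/m^2, shallow reservoir: {low:.2f} W/m^2")
```

By this metric, the deep-reservoir project generates more than fifty times as much power per square meter flooded, which is why such projects tend to compare favorably with fossil fuels on emissions while shallow, sprawling reservoirs may not.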

4.

One part of the world with tremendous potential for building economical hydroelectric power plants with high energy density is Africa, which has a growing population with skyrocketing energy needs. Fortunately, many regions in Africa are endowed with the geographic and hydrologic characteristics necessary for such high-energy-density plants, making hydropower a serious candidate for energy development on the continent, where demand might otherwise be met by fossil fuel power plants spewing carbon dioxide into the atmosphere.

In large parts of Africa today, access to electricity remains extremely limited, with a significant proportion of energy use coming from traditional biofuels like firewood and charcoal. Besides being energy inefficient, many traditional biomass energy sources also cause indoor air pollution, leading to health problems of various kinds.7 The demand for firewood to satisfy the energy needs of a growing population also contributes to deforestation, a leading cause of biodiversity loss.

Figure 5: Percentage of population without access to electricity in Africa

Figure credit: IEA (2014)

Clearly, there are both environmental and human development arguments to be made for bringing modern energy, particularly electricity, to many millions of people in Africa. While renewables advocates have generally celebrated distributed sources, particularly solar, as remedies to the dual problem of energy access and decarbonization in the developing world, hydroelectric energy remains the only clean energy alternative that is economically competitive with coal and gas in Africa. What’s more, Africa has a substantial amount of untapped hydroelectric potential. As a result, as in Costa Rica, hydroelectricity offers Africa the promise of energy development coupled with environmental protection, a win-win for the environment and human development on a continent slated to experience rapid growth for the remainder of the century.

Some such initiatives to develop Africa’s hydroelectric potential are already in the works. The largest is the Nile Basin Initiative (NBI), formally launched in 1999 and funded by the World Bank and other international agencies, covering ten countries (Egypt, South Sudan, Sudan, Ethiopia, Uganda, Kenya, Tanzania, Burundi, Rwanda, and the Democratic Republic of Congo) with a total population of nearly 500 million people. Its stated objective is “to achieve sustainable socio-economic development through the equitable utilization of, and benefit from, the common Nile Basin water resources,” and it includes a wide range of activities — including building out irrigation systems, providing flood and desertification control, setting up a region-wide electrical grid, and constructing thermal and geothermal power plants — in addition to the development of the hydroelectric energy potential of the Nile and its tributaries. Table 2 provides a sampling of some of the hydroelectric plants involved in this vast and ambitious initiative.

Table 2: Three of the many projects featured in the Nile Basin Initiative

Data sources: Nile Basin Initiative (2012); Nile Basin Initiative (2017); Showers (2011)

It is also interesting to note that the NBI features an environmental protection component, suggesting that hydroelectric development in Africa could play a similar role in motivating an ethic of conservation as it did in Costa Rica. The NBI has declared watershed management to be an important part of its efforts, and has reported that almost 1.7 million hectares are expected to come under improved agricultural and environmental management as a result of its involvement. While it is still too early to assess whether such commitments will lead to long-term environmental protection in the Nile Basin, a look at Rwanda, one of the countries included in the NBI, indicates that hydroelectric energy does have the potential to motivate a push for environmental protection.

In Rwanda, biomass accounts for about 85 percent of primary energy use, petroleum 11 percent, and electricity the remaining 4 percent, most of which comes from hydroelectric power stations fed by the Rugezi-Bulera-Ruhondo watershed in the highlands of Rwanda’s Northern Province. When reduced water flow through the watershed led to a crippling energy crisis in 2003–2004, the Rwandan government identified deforestation and soil degradation as two of the primary causes. In response, Rwanda passed a series of laws in 2005 to protect the watershed and restore the Rugezi Wetlands, a Ramsar site and one of the headwaters of the Nile River Basin, which eventually led to an upturn in hydroelectric power generation. As Rwandan President Paul Kagame said in 2009:

In the case of the Rugezi Wetlands, resettlement of human population, removal of cattle, and tree planting [have] seen the resurgence of this national asset with multiplier effects on other socioeconomic sectors. Not only is the biodiversity recovering, so is the economic infrastructure that had previously ceased to operate. Today the hydropower plants supported by the Rugezi marshland are operating at nearly full capacity.

As in Costa Rica, Rwanda’s recognition that conservation plays an essential role in sustaining hydroelectric capacity has led to institutional efforts to promote both, motivating an environmental ethic in the process.

According to a 2013 African Development Bank report, Rwanda’s estimated hydropower potential stands at about 313 megawatts. At the end of 2012, its total installed capacity was only 64.5 megawatts.8 There is thus much potential in Rwanda for further development of cheap, clean, environmentally friendly energy in the form of hydroelectricity. The NBI is already involved in realizing some of this potential, and in March, construction began on the 80-megawatt Rusumo Falls Hydroelectric Project, one of the flagship projects of the NBI that will serve Rwanda, Tanzania, and Burundi.

But the most ambitious of the planned NBI projects is the Grand Inga Hydroelectric Project, located on the Congo River at Inga Falls. The proposed dam, with a price tag of $80 billion, is expected to have a generating capacity of 39,000 megawatts, the largest of any hydroelectric project in the world and nearly twice the capacity of the massive Three Gorges Dam in China. Notably, the Grand Inga is a “run-of-the-river” project and is expected to have only a relatively small reservoir, which avoids problems associated with large reservoirs, such as extensive inundation and large-scale involuntary relocation of residents.

Figure 6: Africa’s hydroelectric potential

Figure credit: IEA (2014)

However, whether the existing institutions and governments involved will actually be able to implement such a complex project is an open question. Indeed, doubts such as these arise not only with regard to the Grand Inga project but also more broadly with regard to the NBI itself. In a region beset with conflicts, questions remain as to how much the ten countries in the NBI will be willing to cooperate with one another. Already, Egypt has expressed opposition to some of the upstream projects, fearing a significant reduction in its share of Nile water. Other difficulties might crop up as well — NBI-sponsored projects have not yet had to deal with large numbers of people displaced by dam construction, but some hydroelectric projects in the region have. In particular, the Merowe Dam near Khartoum in Sudan has displaced many thousands of people, possibly as many as 70,000.

The hope is that the promise of cheap, clean energy and regional economic development that the NBI represents will usher in a new era of cooperation and institution building in the region, one that provides it with the capacity to manage the difficulties involved. At the same time, it is important to remember that the need for strong institutions is not peculiar to hydropower; responsible governance underpins the success and equity of all forms of energy development.

In a world where drastically reducing greenhouse gas emissions and simultaneously satisfying a growing demand for energy have emerged as two of our most pressing priorities, hydropower offers a unique opportunity. While we should take heed of the social and environmental costs that come with hydroelectric development—both the disruption of local populations and habitats and the potential for greenhouse gas emissions—it is also important to recognize that all forms of energy production come with trade-offs. How to equitably and responsibly manage these trade-offs is a question that will fall to local stakeholders, governments, and the international development community. What will be essential to recognize in that process is hydropower’s underutilized capacity to provide clean, cheap, abundant energy for the sake of both conservation and development. This potential has a considerable upside in Asia, South and Central America, and especially Africa, where the need for both energy access and environmental protection remains paramount and significant untapped hydroelectric potential abounds.

Read more from Breakthrough Journal, No. 7
Democracy in the Anthropocene
Featuring pieces by Erle Ellis, Emma Marris,
Calestous Juma, and Jennifer Bernstein

Biotech and Pharma

In their early research and development phases, the modern pharmaceutical and biotech industries look much like traditional nuclear reactor development. Both industries rely on early public-sector research and development—at research universities and, in the case of nuclear, national laboratories. And to bring a new product to market, both spend significant amounts on development and on navigating stringent licensing processes.

However, the market structure and business models for the two industries are quite different. For pharmaceuticals, while the upfront development costs are large, manufacturing costs are almost trivial, leading to significant and immediate profits once a drug is approved. For nuclear, the development costs are similar, but having a design approved is only the beginning. The real proof of concept comes in the construction and operation of reactors, a process that can take decades. Intellectual property also plays a much larger role in the pharmaceutical industry, with larger firms frequently buying out start-ups for their patents. Smaller firms, as a result, can focus on proving the science of their product without worrying about longer-term business models.

The regulators of both industries—the Food and Drug Administration (FDA) and the Nuclear Regulatory Commission (NRC)—each increased their stringency in response to major failures in their respective sectors: the thalidomide crisis for the FDA, and Three Mile Island for the NRC. Increased regulation at the FDA following the thalidomide crisis caused significant consolidation across pharmaceutical firms, as only the largest firms could afford the newly required staged clinical trials. There is, however, a significant philosophical difference between the two regulators. While pharmaceuticals must be proven safe and effective, the FDA also recognizes the large benefits to public health that new drugs bring, and it plays a secondary promotional role for the industry as a result. For nuclear reactors, the technology is regulated purely with the aim of mitigating harm, and nuclear power is treated as a commodity with no recognizable benefits to public health. If there were a single agency regulating the public health impact of coal, gas, and nuclear, this outcome might be different.

The major lessons that the pharmaceutical industry has to offer nuclear apply at the intermediate stage of development, after basic research, when small start-ups are developing their products and undergoing the first stages of licensing. For the nuclear industry, there should be more support for taking new technologies from the university lab to start-up companies. Nuclear also likely needs a staged licensing process, or at least more transparency and finite timelines for decisions. Smaller firms might need to focus more on intellectual property as well, to make them more attractive for acquisitions by large incumbents with the capital to move designs through development and licensing. But most importantly, the agencies overseeing nuclear development—the Department of Energy (DOE) and the NRC—need to more explicitly recognize and promote the benefits of nuclear power compared with other energy sources—namely, clean air and low-carbon, reliable, affordable power.


Read more from the report: 
How to Make Nuclear Innovative


Brief History of the Biotechnology and Pharmaceutical Industry

The pharmaceutical industry, not unlike the nuclear industry, emerged as a byproduct of World War II. Most of America’s large pharmaceutical companies today grew out of the postwar boom, particularly as a result of the state’s demand for penicillin.1 After experiencing rapid growth in the 1950s, the industry was upended by the thalidomide crisis in 1962. The drug, developed in Germany and marketed as a treatment for a wide range of conditions, was most widely prescribed for morning sickness and sleep problems in pregnant women. Prescribed in 46 countries, it was consumed more commonly than aspirin in some places. Unfortunately, it took several years to discover that thalidomide was the cause of severe birth defects in over 10,000 children, more than 40% of whom died before their first birthday.2 The United States was one of the only developed countries that did not approve the drug, thanks to a pharmacologist at the FDA, Frances Oldham Kelsey, who was skeptical of the drug’s safety and repeatedly asked the manufacturer for better evidence and more studies.3

Although the drug was never approved in the United States, the existing regulatory review process was incredibly lax, with no clear methodology for evaluating supporting evidence. To remedy this situation, Congress passed the Kefauver-Harris Amendment in 1962, which included such common-sense measures as a “proof of efficacy” requirement that is still the basis of the drug approval process today.

The immediate effect of this regulatory change was industry consolidation. With the massive investments and long lead times required to bring new drugs to market, only larger firms such as Pfizer, Merck, and Johnson & Johnson had the necessary capabilities to compete.

Concurrent with industry consolidation, the seeds of the biotech industry were sown. Biotechnology, at least in pharmaceuticals, is distinguished by its reliance on genetic engineering to produce new compounds. A series of breakthroughs throughout the 1970s culminated in the first genetically engineered human insulin in 1982, the fruit of a collaboration between Caltech and the first biotech firm, Genentech. This paved the way for the emergence of other small biotech start-ups in the 1980s, typically composed of just a handful of scientists.4

While the number of pharmaceutical companies remained constant during the 1970s, the 1980s was a period of rapid growth driven by the emerging biotech field.5 Though the industry itself was evolving, little changed in terms of output, not only because drug development takes a long time, but also because biotech research was still a relatively small part of total pharmaceutical R&D. Roughly 20 new molecular entities (NMEs) are approved each year (though year-to-year variation is large),6 and by 1988, only 5 had actually come out of biotech research.7 By the end of the 1990s, however, the FDA had approved more than 125 biotech drugs.8

As biotech grew, the dominance of the largest pharmaceutical firms declined. From 1950 until the early 1980s, the 15 largest pharmaceutical companies were responsible for roughly 75% of NMEs per year; today, their share has stabilized around 35%.9 Generally, large pharma has been good at generating successful follow-on approaches (70% of follow-on approaches come from large pharmaceutical firms) but much less successful at developing novel treatments. Biotech has taken up much of this slack. From 1998 to 2008, biotech companies were responsible for nearly half of new drugs with novel mechanisms (a subset of “especially innovative” NMEs) and 70% of orphan drugs in the pharmaceutical industry.10 Biotech also increased its share of blockbusters—drugs whose annual sales exceed $1 billion—from 8% to 22% by 2007.
 

The biotech-pharma networked model suggests that smaller firms can play a key role in industry innovation.


In recent years, anywhere between 5 and 10 biotech drugs have been approved annually,11 and perhaps more significantly, biotech has been growing at roughly twice the speed of the pharmaceutical industry as a whole (roughly 10% per year since 2009, versus 6%).12

The emergence of biotech also led to a steady increase in inter-firm collaboration, whether in the form of joint research projects, strategic alliances (where each firm handles a different part of the process), or mergers and acquisitions. By 2000, roughly 25% of corporate-financed pharmaceutical R&D came out of joint ventures, 3 times as much as in 1990 and 20 times as much as in 1980.13 From 1990 to 2010, mergers and acquisitions deals increased fivefold. As a subset of the industry, biotech is especially reliant on external forms of collaboration, with nearly half of biotech R&D funding coming in the form of partnerships (since 2009 at least).14

Since most biotech firms are small and rarely have the means to take their drugs all the way from preclinical trials to commercialization, collaboration is a necessity for them. For larger pharmaceutical companies, the growing interest in external collaboration can be traced back to a paradigm shift in research methodology.

Tremendous advances in genetic engineering, chemistry, and computational biology in the 1980s opened up the possibility of rationalizing the drug development process.15 Prior to that period, companies principally relied on “random screening” to identify promising compounds for drug development.16 This process involved testing a large number of chemicals to determine how they interacted with the targeted disease. By the 1980s, random screening had run into diminishing returns, and thanks to the advances in computation and genetics, the industry switched to a “guided search” model.17 As a consequence of this shift, large research labs and capital were no longer as important to drug research, and the value of genetic expertise increased, bolstering the comparative advantage of highly specialized biotech firms.

According to Gottinger and Umali (2011), this paradigm shift wasn’t in itself sufficient to push large pharma toward more collaboration. Initially at least, pharmaceutical companies were convinced they could do much of this guided research themselves.18 But the early and remarkable success of Genentech changed this perception. Its first two blockbuster drugs, Humulin and hGH, were developed and commercialized in close partnership with larger pharmaceutical companies. Genentech began working with Eli Lilly in 1978 and jointly created Humulin, approved in 1982 and a blockbuster a few years later.19 Genentech also worked with the Swedish firm Kabi, starting in 1977, on hGH, which was approved in 1985. Genentech’s success helped shake up the pharmaceutical industry and encouraged other big players to seek out partnerships with biotech firms, which they did in much larger numbers from the early 1990s onwards.

While there is no set model for how these partnerships work, some broad trends can be observed. One common pattern is for biotech to focus on the early stages of new drug development, especially preclinical trials. If the early signs are promising, they will enter a licensing agreement with a larger firm. Another option is for the larger firm to simply acquire the start-up, a pattern that is very common. Just last year, large pharmaceutical firms spent a few billion dollars buying the rights to biotech drugs or the companies themselves.20 At this stage, it is rare for any single company to have invented, tested, and commercialized an NME solely internally.

Though again the setup varies from one case to the next, many biotech firms preserve much of their independence even once they have been bought. They maintain their separate offices, research direction, and internal structure, taking on the role of “centers of excellence.”21 Generally, biotech firms boast human capital better equipped to harness the cutting edge of research, as the majority of their employees have PhDs and strong ties to leading research universities.22
 

The nuclear industry needs more support in turning new technologies into start-up companies.


From the perspective of biotech firms, there is little doubt that this collaboration has been beneficial. Apart from Genentech and a few others, very few biotech firms have actually succeeded in bringing their inventions to market. Reviewing the evidence from the 1990s, Owen-Smith and Powell (2004) find that network ties are a significant predictor of performance in biotech.23 More recently, Munos (2009) finds that acquisitions lead to a 120% increase in NME output for small companies.24 Interestingly, increased collaboration or consolidation between large firms isn’t as strongly associated with increases in NME output.

Since the approval of NMEs is actually relatively rare, most partnerships and external collaborations don’t actually lead to a new drug approval. In most cases, they are formed in the hope of achieving the next milestone on the long road toward drug approval. Indeed, the average R&D alliance in biotech lasts less than 4 years, whereas the drug development process takes closer to 10–12 years.25 The fact that large pharma has been willing to bet substantial sums on biotech ventures still far away from commercialization has played a key role in driving the growth of the latter, and has allowed smaller firms to secure funding (often from VCs) despite having a product that is still many years from commercialization.26

In sum, there has been a very clear shift toward a more networked innovation model over the last couple of decades in the pharmaceutical industry. This shift has been driven by three factors: the relative fall in the dominance of large pharma, a shift in the research paradigm, and the emergence of small research-focused biotech firms. However, this three-part explanation is incomplete, as it gives insufficient credit to one of the main drivers of this transition: the state.


Nuclear and Pharmaceuticals: An Industry Comparison

The parallels to the nuclear industry are obvious. In both sectors, development of a new product stretches over many years (or even decades) and requires significant upfront investment, as does commercialization. However, pharmaceuticals are a much larger industry than nuclear in the United States, accounting for 23% of all private R&D in 2013 and adding over $1 trillion to the US economy every year.27 And where the pharmaceutical industry spends over $40 billion on R&D annually,28 the US nuclear industry spends under $500 million. The pharmaceutical industry releases new blockbuster drugs every year, while the nuclear industry is struggling to deploy its first new designs in 30 years.


Role of the State

The pharmaceutical industry has always been closely interwoven with the state. The state’s wartime demand for penicillin created the industry, the Kefauver-Harris Amendment drove consolidation, and the DNA and genetic research conducted in university labs across the world in the 1970s kick-started the biotech revolution. While these contributions are widely recognized, the state’s role after 1980 is perhaps less well known, but it was no less critical, especially for the development of biotech.

First, the state enacted a suite of legislation to boost the industry. The widely known Bayh-Dole Act of 1980 allowed research sponsored by the National Institutes of Health (NIH) to be patented. The slightly less well known Stevenson-Wydler Act of the same year required publicly funded research institutions to form technology transfer offices and to do more to make their research available to businesses. In 1983, Congress passed the Orphan Drug Act, which aimed to encourage the development of drugs for relatively rare diseases through generous tax credits, funded research, extended intellectual property protection, and fast-track FDA review. The Orphan Drug Act played an especially important role in supporting the nascent biotech industry: by the early 2000s, 90% of the revenue of the four biggest biotech firms—Genentech, Biogen, IDEC, and Serono—came from drugs that benefited from the Act.29

Through the NIH, the state also provided direct research funding for breakthrough drugs. NIH funding grew steadily between the mid-1980s and the mid-2000s, at an average of 2.9% per year: average annual spending in the 1980s was $10 billion, compared to $35 billion today.30 By 2000, NIH funding accounted for over half of nondefense public R&D, up from 30% in the mid-1980s.31 This research proved remarkably effective for the biotech industry—Vallas et al. (2011) estimate that 13 of the 15 blockbuster biotech drugs on the market in 2007 benefited from NIH funding in their early stages. Equally influential was the Small Business Innovation Research (SBIR) program; set up in 1982, it was tasked with funneling federal dollars to R&D in small businesses, many of which were in the biotech industry.32

The combination of direct funding, subsidies, and supportive regulation was clearly a major factor in explaining the growth and success of the US biotech industry. In fact, the broad range of policy support gave the US biotechnology industry a unique advantage over its international competitors. While other countries also provided direct funding for genetic research, they lacked complementary institutions like the Orphan Drug Act or SBIR, which inhibited the growth of their own biotech industries.33


Drug Development Costs

Estimates of the cost of new drug development aren’t easy to pin down. Some drugs cost little more than a hundred million dollars to develop, whereas others exceed the billion-dollar mark. What’s more, there is no universally agreed-upon method for estimating drug development costs.34 One widely cited study out of Tufts University estimated the average cost of drug development at $1.4 billion in 2016, a figure that includes the cost of failures as well as successes. Tufts also estimates the opportunity cost of investing in drugs at $1.2 billion, which is an estimate of the return forgone during the 10–12-year investment period. Tufts’s estimate has been the subject of considerable controversy, especially since the researchers haven’t released the raw data from which these numbers are derived. Still, a review of these debates strongly suggests that most drugs cost in the hundreds of millions of dollars to develop. As a comparison, NuScale will spend about $45 million to have the NRC review its license application, a process that will take 3 years.35 However, NuScale has spent closer to $450 million and taken 10 years to get to the point of license application with the NRC.36
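The distinction between out-of-pocket cost and opportunity cost can be made concrete with a back-of-the-envelope calculation. The sketch below uses purely illustrative inputs (a flat spending profile and an assumed 10.5% cost of capital, not Tufts’s actual data) to show why capitalized cost at approval exceeds the cash outlay:

```python
# Money spent in year 1 of a 10-year development cycle is tied up for
# 9 years before approval, so its cost at approval includes the return
# it could have earned elsewhere. Figures in $M; all inputs illustrative.

def capitalized_cost(annual_spend, years, rate):
    """Future value, at approval, of a constant annual R&D spend."""
    return sum(annual_spend * (1 + rate) ** (years - t)
               for t in range(1, years + 1))

out_of_pocket = 140 * 10                  # $140M/year for 10 years = $1.4B cash
total = capitalized_cost(140, 10, 0.105)  # 10.5% cost of capital (assumed)
opportunity = total - out_of_pocket
print(f"cash outlay:      ${out_of_pocket / 1000:.2f}B")
print(f"capitalized cost: ${total / 1000:.2f}B")
print(f"opportunity cost: ${opportunity / 1000:.2f}B")
```

With these assumed inputs, the capitalized cost comes out to roughly $2.3 billion against $1.4 billion in cash, which is the same order of adjustment Tufts makes with its $1.2 billion opportunity-cost figure.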

The Tufts study is perhaps more useful to get a sense of how the cost of drug development has evolved over time. The center has been estimating the cost of drug development since 1987, and estimates have been on a consistently upward trajectory, even after adjusting for inflation. This increase in costs can be explained by two main factors: the increasing number of failures, and the increasing cost of post-approval R&D (phase IV), from complying with foreign standards or testing for previously unobserved side effects.37
 

Drug Development Timeline

In terms of timeline, the range is narrower than it is for cost, with most NMEs taking between 8 and 12 years to develop. Tufts estimates it takes roughly 128 months to get a drug from synthesis to approval, but “only” 96 months from the beginning of clinical trials to FDA approval (provided the drug gets that far). Each phase of the clinical trials typically takes one to two years (and gets a little longer with each phase). The majority of drugs go through three pre-approval phases: phase 1 tests the drug for adverse effects on humans (50–100 participants), phase 2 tests the drug for actual effectiveness (100–300 participants), and phase 3 tests its effectiveness on a much larger pool of patients (1,000–3,000). If a drug successfully passes each of these phases (and the probability of passing increases with each), the firm submits a New Drug Application (NDA) to the FDA. If the drug is classified as a priority (like an orphan drug), the roughly 100,000-page application will be reviewed in 6 months; if not, the review takes between 10 and 12 months.

However, the FDA isn’t involved solely in the final approval of the drug; it also works with drug companies to create a schedule for the trial phases, as well as the set of criteria used to evaluate the drug after each trial phase. To even begin a phase 1 trial, firms have to submit an Investigational New Drug Application (IND) to the FDA. This IND must contain detailed information on animal pharmacology (a large part of preclinical trials involves testing the drug on nonhuman species), the process of drug manufacture, and proposals for the clinical protocols to be followed. This initial FDA review typically engages a range of external experts (often at universities) to help review and refine the trial protocols. This period is also when a firm receives its patent, at which point the 20-year exclusivity period begins.

In contrast, the NRC’s review of license applications can take several years—3 years in the case of NuScale. What then follows is a 5–10-year construction process before the plant starts generating revenue.
 

Importance of Intellectual Property

If a drug successfully clears each of these hurdles, firms can finally commercialize it, using their patent protection to charge far above the marginal cost of production. While some new drugs are priced at a few hundred dollars per prescription, others can easily cost thousands, if not hundreds of thousands, of dollars. Since the marginal cost of drug production is low,38 firms stand to make a substantial profit and recoup much of their investment.

Clearly, this profit is almost entirely contingent on regulation. The majority of drugs are easy to copy, and were it not for patent protection, other firms would quickly offer cheaper alternatives. The last few years have provided a stark illustration of this fact, as many drugs lost their exclusivity. For instance, Pfizer’s Lipitor, an anti-cholesterol drug, was the world’s top-selling drug for eight years but lost market protection at the end of 2011. In 2012, its sales ranking dropped to 14th, a nearly 61% decline in revenue in one year, from $12.9 billion to $5.1 billion. Bristol-Myers and Sanofi’s Plavix, a blood thinner that ranked number 2 in sales in 2012 at $9.5 billion, dropped to number 12 the next year, at $5.2 billion; patents on Plavix expired in several European countries in 2011 and in the United States in 2012. Reviewing a range of drugs in the early 2000s, Conti and Berndt (2016) estimate that prices fall between 30% and 50% in the first year following the loss of exclusivity and even more after that.39 Though sales of the protected drug fall, total sales of the drug (i.e., branded plus generic) tend to increase, as does revenue, consistent with a typical supply-demand story.
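The scale of these patent-cliff declines can be verified directly from the sales figures quoted in the text:

```python
# Year-over-year revenue decline after loss of exclusivity,
# using the annual sales figures cited above (in $B).
def pct_decline(before, after):
    return (before - after) / before * 100

lipitor = pct_decline(12.9, 5.1)  # Lipitor, 2011 -> 2012
plavix = pct_decline(9.5, 5.2)    # Plavix, 2012 -> 2013
print(f"Lipitor: {lipitor:.1f}% decline")  # 60.5%, i.e. "nearly 61%"
print(f"Plavix:  {plavix:.1f}% decline")   # 45.3%
```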

While IP is the main profit driver, pharmaceutical companies also spend substantial sums on marketing. In fact, frequent media reports suggest they spend more on advertising than on R&D. Those figures, however, should be taken with a grain of salt, since advertising spending is often bundled with management and operational expenses.

A meta-study from Johns Hopkins found that total marketing expenditure for the industry was $31 billion in 2010.40 Total marketing expenditure actually peaked in 2004 but stayed fairly consistent at 8–10% of sales throughout 2000–2010. Most of that marketing money is directed toward physicians, but direct-to-consumer spending has increased in recent years, to around $4–5 billion per year. To put these numbers in context, FierceBiotech estimated that the industry spent $67 billion on R&D in 2010, more than twice its marketing spend. And indeed, Statista estimates that US pharmaceutical R&D as a percentage of revenue hovered around 16–20% between 1990 and 2015.41
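These figures can also be cross-checked against one another with simple arithmetic on the numbers quoted above:

```python
# Sanity-checking the marketing vs. R&D figures cited above ($B, 2010).
marketing = 31           # Johns Hopkins meta-study estimate
rnd = 67                 # FierceBiotech R&D estimate
ratio = rnd / marketing  # a bit over 2x, so "twice as much" holds

# If marketing ran at 8-10% of sales, implied industry sales were:
sales_low = marketing / 0.10
sales_high = marketing / 0.08
print(f"R&D / marketing ratio: {ratio:.1f}x")
print(f"implied sales: ${sales_low:.0f}B to ${sales_high:.0f}B")
```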
 

Regulator Comparison

The FDA’s 2014 budget was $4.7 billion: $2.6 billion came from the public purse, and the remaining $2.1 billion came from user fees. This roughly 50-50 split has been fairly consistent ever since the Prescription Drug User Fee Act of 1992, which allowed the FDA to charge fees to drug manufacturers.42 By contrast, the NRC has a budget of $1.1 billion, 90% of which is funded by user fees (the shift to fee funding was part of a wider effort by then-President George H. W. Bush to cut the government deficit).43

The FDA charges a fixed fee for regulatory review: an IND cost $449,500 in 2013, and an NDA for an NME cost $5 million (if a new drug is a “me too” drug rather than a new molecular entity, the fee is reduced to $1.5 million). As mentioned above, the FDA’s review process rarely exceeds a year and often takes only six months. By contrast, the NRC has neither a fixed fee nor a fixed timeline for new applications: it charges between $170 and $270 per hour of review. According to its own estimates, a new reactor design certification takes five years (and an early site permit review takes three), but its most recent approvals suggest the process takes much longer for innovative designs.

Two designs recently approved by the NRC illustrate the point: the AP1000 and the ESBWR. The AP1000 was submitted in 2002 and approved in 2011; similarly, the ESBWR was submitted in 2005 and only approved in 2014. Though reliable estimates are difficult to find, the process appears to have cost in the region of $500 million in both cases. NuScale has reported that it has already spent $130 million preparing its license application for the NRC.
 

The agencies overseeing nuclear development need to more explicitly recognize and promote the benefits of nuclear power.


Another key difference between the FDA and the NRC is their mission. While the NRC’s primary objective is public safety—“to ensure adequate protection of public health and safety”—the FDA has a twin mission: to protect human health by ensuring the safety of new drugs and to advance public health by speeding up the drug innovation process. In other words, part of the FDA’s mandate is to accelerate the commercialization of new technology.

In fact, some go so far as to say that the FDA is too amenable to new drug proposals; indeed, the FDA approved roughly 90% of submitted drugs last year, a rate that has grown steadily from a low of 60% in 2008. The Senate is currently debating the Republican-sponsored 21st Century Cures Act, which would both lower the bar for approval and speed up the process. Opponents of the bill worry that it will produce a lax regime and a repeat of past scandals, as when two drugs approved in the late 1990s (Vioxx and Avandia) had to be withdrawn following the post-approval discovery of serious side effects.

These debates notwithstanding, it is clear that the FDA is far more amenable to new technologies than the NRC has ever been. If striking the right balance between sufficient diligence and support for innovation is a continuous process, comparing the NRC and the FDA suggests the former has the balance wrong.


Lessons Learned for Nuclear

At a superficial level, the parallels between the nuclear industry and biotech are obvious: the timelines are long and the development costs are very high. One key difference, though, is marginal cost. Building a new nuclear plant is like building a cathedral: the second one is only marginally cheaper than the first, unlike a pill, which costs almost nothing to produce once the drug is developed. In that respect, developing a blockbuster drug more closely resembles building a plant, since once the plant is built its O&M costs are low. Because the marginal cost of new builds is far from zero for nuclear developers, the appetite for innovation is correspondingly weaker. If the reward for innovation is approximated as the difference between marginal benefit and marginal cost, then, ceteris paribus, a higher marginal cost reduces the size of the reward and thus the motivation to pursue it.
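The ceteris paribus argument can be made concrete with a toy comparison (all numbers here are hypothetical, chosen only to illustrate the shape of the incentive):

```python
# Toy illustration of the reward-for-innovation argument above.
# The numbers are hypothetical; only the comparison between cases matters.
def innovation_reward(marginal_benefit, marginal_cost):
    """Reward approximated as marginal benefit minus marginal cost."""
    return marginal_benefit - marginal_cost

# A drug: near-zero marginal cost per unit once it has been developed.
drug = innovation_reward(marginal_benefit=100, marginal_cost=1)

# A reactor: each new build still costs nearly as much as the value it yields.
reactor = innovation_reward(marginal_benefit=100, marginal_cost=80)

# Holding marginal benefit fixed, the higher marginal cost shrinks the reward,
# and with it the incentive to pursue innovation.
print(drug, reactor)  # 99 20
```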

A related difference between pharmaceuticals and nuclear is that, for nuclear, an older design’s value doesn’t collapse as soon as the IP expires. A proven design, even a fairly antiquated one, can still be a winning strategy, especially in countries with underdeveloped technical infrastructure. Rosatom building four of its VVER-1000s in Kudankulam, India, is a case in point. For drugs, by contrast, there is no significant benefit to using a brand instead of a generic, and indeed, certain drugs will only ever reach developing countries once a generic version exists. Even if marketing can extend the life and profitability of drugs, its effectiveness is limited. Again, innovation is less critical to survival for nuclear developers than for pharma companies.

Keeping this in mind, the biotech-pharma networked model does suggest that smaller firms can play a key role in industry innovation, despite being incapable of fully realizing that innovation on their own. It’s also worth noting that large corporate players initially took a lot of convincing to adopt this model. It was only when a few genetically engineered compounds gained commercial viability that larger firms realized that (a) genetic engineering would be key and (b) smaller biotech firms were best equipped to handle these new advancements.

To get to that point, it’s important to recognize the critical role of policy. Whether it was in the form of direct NIH support to genetic research, the Orphan Drug Act, or favorable regulation, Genentech and other early biotech companies benefited from a tremendous amount of public support. And even today, the FDA’s systematized, transparent, and relatively efficient process plays a critical role in facilitating collaboration: every time a biotech company successfully completes a different stage of the drug development process, it boosts its chance of getting acquired.

While the market structure of building nuclear power plants will never look like the high-profit pharmaceutical industry, its early-stage RD&D could shift to be more innovative like the pharmaceutical industry’s did in the 1970s and ’80s. Universities should create more support mechanisms to spin off research into private companies, such as incubators, seed funding, and tech transfer programs. Small nuclear firms should focus more on intellectual property to make their companies more attractive for acquisition. Following the pharmaceutical sector, nuclear companies should invest more in joint ventures and collaborative R&D to solve shared challenges and demonstrate emerging technologies. The NRC may need to develop a staged or phased licensing process. But more importantly, the NRC should incorporate more transparency and offer strict timelines for application review and decisions. Finally, DOE and the NRC should explicitly acknowledge the benefits of nuclear power as compared to the alternatives. DOE should make the case for investing in commercialization of new nuclear designs from a public health perspective, the same way the FDA does for orphan drugs.

Breakthrough Welcomes 2017 Generation Fellows

Each summer, the Breakthrough Institute welcomes a new class of Breakthrough Generation fellows to join our research team for 10 weeks. Generation fellows work to advance the ecomodern project by deepening our understanding in the fields of energy, environment, technology, and human development.

Breakthrough Generation has proven crucial to the work we do here. Past fellows' research has contributed to some of our most impactful publications, including Where Good Technologies Come From, Beyond Boom & Bust, How to Make Nuclear Cheap, Lighting, Electricity, Steel, and Nature Unbound.

Over 80 fellows have come through Breakthrough Generation since its founding in 2008. We are delighted that the following scholars are joining their ranks:

Christopher Gambino

Chris Gambino is an expert in reactive nitrogen cycling and a product of the Nitrogen Systems: Policy-oriented Integrated Research and Education (NSPIRE) program, which trains a new generation of scientists capable of spanning the boundary between scientific research and public policy decision-making. He earned a PhD in animal sciences from Washington State University.

 

Aurelia Hilary

Aurelia Hillary is a graduate of Imperial College London, where she received an MS in environmental technology. Her undergraduate research focused on biomass utilization from agricultural waste, and today her interests are in agriculture and water. She and her research partner recently published a paper on reducing nutrient leaching using biochar.

 

Emmanuella Maseli

Emmanuella Omaro Maseli graduated with a Bachelor of Laws (LLB) from the London School of Economics and is currently completing a dual master’s degree in public policy between Sciences Po Paris and the University of Tokyo. She is fascinated by the relationship between energy, resources, and sustainability, an interest sparked by the situation in her home country of Nigeria.

 

Jameson McBride

Jameson McBride recently graduated from Columbia University with a BA in political science, economics, and sustainable development. He has previously worked as a research intern at the Center on Global Energy Policy, the Council on Foreign Relations, and the Earth Institute.

 

Abigail Eyram Sah

Abigail Eyram Sah recently earned her MA from the Energy Science, Technology & Policy program at Carnegie Mellon University. A native of Ghana, she is passionate about bringing energy to poor areas and has worked on projects in her home country to design tailored energy systems aimed at increasing energy access.

 

Aishwarya Saxena

Aishwarya Saxena is pursuing a Master of Laws at the University of California, Berkeley. She is a graduate of the International School of Nuclear Law and has earned a university diploma in nuclear law from the University of Montpellier. Her research has focused on civil liability for nuclear damage, nuclear liability insurance, and the establishment of a global nuclear liability regime.


Support our work

The Breakthrough Institute is a 501(c)(3) nonprofit dedicated to the public interest, and as such accepts charitable contributions only from persons and institutions without a financial interest in our work. All contributions are tax deductible.

Plenty of Fish on the Farm

ESSAY

Plenty of Fish on the Farm
Marian Swain

RESPONSES

Plenty of Fish on the Farm
Dane Klinger
Kim Thompson
Ray Hilborn

VIDEO

The Future of Aquaculture

For more food and farming news from Breakthrough sent directly to your inbox, subscribe to our mailing list.


Food Production and Wildlife on Farmland

ESSAY

Food Production and Wildlife on Farmland
Linus Blomqvist

RESPONSES

Food Production and Wildlife on Farmland
Claire Kremen
William Price
Andrew Kniss
Ben Phalan

VIDEO

Food Production and Wildlife on Farmland

For more food and farming news from Breakthrough sent directly to your inbox, subscribe to our mailing list.

The Future of Meat

ESSAY

The Future of Meat
Marian Swain

RESPONSES

The Future of Meat
Maureen Ogle, Jayson Lusk, Judith Capper, Simon Hall, Alison Van Eenennaam, Jesse Ausubel, and Iddo Wernick

VIDEO

The Future of Meat

For more food and farming news from Breakthrough sent directly to your inbox, subscribe to our mailing list.

Is Precision Agriculture the Way to Peak Cropland?

ESSAY

Is Precision Agriculture the Way to Peak Cropland?
Linus Blomqvist and David Douglas

RESPONSES

Is Precision Agriculture the Way to Peak Cropland?
Calestous Juma and Mark Lynas

VIDEO

Visualizing Agricultural Innovation

For more food and farming news from Breakthrough sent directly to your inbox, subscribe to our mailing list.

What We Consume Matters. So Does How We Produce It.

I enjoyed reading Linus Blomqvist’s recent essay on wildlife and food production, and the response by Claire Kremen. Blomqvist provides a good overview of the reasons that high-yielding farming—even if it is organic or based on agroecological principles—is unlikely to provide habitat for more than a handful of the original species present in an area. When humans make maximum use of space, light, and water to divert as much as possible of the primary production in an area for human consumption, populations of many other species will suffer. Kremen acknowledges this, but argues that it would be more productive to focus on tackling demand-side issues including meat consumption, population growth, and food waste.

Both perspectives have considerable merit. I agree with Blomqvist’s conclusion that trade-offs between agricultural yields and biodiversity are the norm. While there are opportunities to improve both yields and biodiversity outcomes in many places where farmland currently has little value for either, maximizing both yields and conservation value on the same plots of land is likely to be unattainable in most (if not all) contexts.

I also agree with Kremen that many of the most important challenges revolve around consumption. Where I disagree is with the implication that reductions in demand will make choices about how to produce food along the land sharing-sparing continuum obsolete. Even if current trends are reversed, and global consumption of food declines, land sparing will be important in creating space for habitat restoration and rewilding,1 helping to reduce extinction debt. We must explore the value of both supply-side and demand-side interventions, as recent studies have begun to do.2,3 As part of those explorations, a continued focus on the implications of different strategies for allocating and managing land use is warranted, not least because it matters what land is spared, and where.4 If we believe that other species have intrinsic value5 (a point on which I expect Kremen, Blomqvist, and I all agree), then we conservationists have an ethical obligation to understand what we can do to minimize the harm inflicted on them by how our species uses land.

What has the land sharing-sparing framework told us about this question? We have learned that the majority of species, specialists and generalists alike, are negatively affected when their habitats are converted to agricultural use, even to apparently benign uses such as diverse landscape mosaics with agroforestry plots and fallows. A few species do well, including some not found in the original native vegetation types. We have learned that species richness—the only biodiversity metric used in many studies arguing for land sharing—is a poor way to measure these changes, because it fails to detect the replacement of restricted-range species and forest or grassland specialists by widespread generalists, and because it fails to detect dramatic declines in population abundance.

We have learned that the farming systems that hold the greatest conservation value, such as the lightly grazed semi-natural grasslands of the South American pampas,6 are so low yielding as to make little meaningful contribution to food supply. If we wish to preserve these systems, we would do better to do so primarily for their ecological and cultural values, rather than positioning them as a model of production and conservation in harmony. We have also learned that while on-farm biodiversity can make an important contribution to food production, and can in many cases support sustainable increases in yields, actions to promote such biodiversity are insufficient to conserve those species that have little direct service value.8 Such species, probably the majority of life on Earth, will benefit if we can produce the same amount of food on less land, while conserving and restoring native vegetation elsewhere.

The land sharing-sparing framework does not tell us that high-yield farming will result in land being spared for nature: it is an analytical framework for understanding the “biophysical option space,” to borrow a term,3 not a framework for predicting land-use change. The basic framework is a starting point, not the last word, and has already been modified to accommodate objectives beyond biodiversity and food production, and to incorporate more complex land-use scenarios. However, it can tell us how species might respond if we can find ways to make land sparing happen, and it exposes the substantial limitations of land-sharing strategies for reconciling conservation and production objectives. These various conclusions seem to be consistent, so far, across all studied taxa and different geographies (for a review, see endnote 8).

These observations require us to think very differently than before about the role of wildlife-friendly farming in conservation. Their implication is that the priority for conservationists must be to minimize the land area devoted to production, and to increase that devoted to conservation. The other millions of species we share the planet with need space, and if we are to limit the losses underway in what some have termed the “sixth extinction,” we must conserve and restore large areas of native vegetation around the world. There are undoubtedly sound reasons to promote some biodiversity on farmland, for its functional and cultural roles, but those are mostly about the needs of our own species. When we look at the needs of other species, especially those at greatest risk of extinction, what most of them need from us are bigger, higher-quality, better-connected areas of their natural habitats, and protection from additional threats such as hunting, logging, and invasive species.9 If proponents of alternative agriculture really want to help biodiversity, it is not enough to provide on-farm resources for a few (typically widespread and generalist) species. They too must think beyond the farm, to how their activities can support the objectives of halting and reversing habitat loss and degradation in the wider landscape.

It is also clear from studies that have applied the land sharing-sparing framework that efforts to reduce consumption and minimize the amount of land needed for food production can enlarge the “biophysical option space,” and help with this objective of making more space for wild species. Here, Kremen and I are in full agreement. For example, in our 2011 paper on land sparing and sharing in Science, one of the conclusions my co-authors and I reached was as follows:

Measures to reduce demand, including reducing meat consumption and waste, halting expansion of biofuel crops, and limiting population growth, would ameliorate the impacts of agriculture on biodiversity.10

Today, I would add reducing dairy consumption to that list.11 The production of virtually all animal products is more land demanding than their plant-based counterparts, and while there is an important role for livestock in subsistence societies, the imperative for reducing consumption of livestock products in wealthy countries could not be clearer.12 Reversing support for crop-based biofuels in Europe and North America offers another good opportunity to reduce global agricultural expansion, because it is a use of land that was largely created by government policies, with few beneficiaries and dubious environmental benefits.13

Because of what I have learned in the course of my work on land sparing and sharing, I have become mostly vegan, and have been involved in advocacy to halt the use of crops for biofuels. Shifting to more plant-based diets, and reducing the use of land for producing fuel, are two of the most promising ways for our species to increase food yields on a smaller land base.14 While influencing human behavior and changing policy are complex and take time, both can help to make more space available for other species.

Looking to the future, I have a vision that differs in some respects from Kremen’s. I would like to see more landscapes where agriculture is confined to a few productive zones surrounded by a matrix of natural vegetation types—a mosaic of forests, wetlands, grasslands, and shrublands. It is a vision in which humans see themselves as one species living alongside others, embedded in a larger ecosystem, rather than seeing nature as something permitted to continue only within island-like protected areas and in the interstices of human-dominated land use. For now, the closest thing to such landscapes might only be found in a few indigenous territories, with low population densities and often a culture of respect for nature. However, if we take the best of agroecological and agronomic knowledge, our growing ability to restore native vegetation in different parts of the world, and most importantly, our increasing recognition of humanity’s ethical responsibilities towards other species, I believe we could replicate it in other parts of the world with higher population densities. It is a vision that aligns with ideas expressed by E. O. Wilson in Half-Earth15 and George Monbiot in Feral,16 and is consistent with an ecocentric ethic that values all life, not just that of humans.17

What is encouraging is that much of what needs to be done to achieve any of these visions is the same: developing alternatives to the paradigm of constant economic growth; changing what (and how much) we consume; building on existing policies, regulations, and incentives (and developing new ones) to end agricultural expansion; expanding protection for native vegetation and wild species on public, private, and customary lands; improving and scaling up habitat restoration techniques; and developing methods for systems of high-yielding agriculture that respect both human communities and the ecosystems of which they are part.

 

Acknowledgments

I thank Andrew Balmford, David Williams, and Erasmus zu Ermgassen for comments on a draft. The opinions expressed, and any errors, are mine.

America’s Role in Global Nuclear Innovation

A new report from the Global Nexus Initiative reminds us of the serious security and geopolitical implications if America does indeed forfeit leadership in the global nuclear power market. Their report, Nuclear Power for the Next Generation, concludes a two-year project to study the intersection of climate, nuclear power, and security. GNI’s four broad conclusions are:
 

  1. Nuclear power is necessary to address climate change
  2. Nuclear governance needs significant strengthening
  3. Evolving nuclear suppliers impact geopolitics
  4. Innovative nuclear policy requires “break the mold” partnerships


One striking graphic from the report is this map of countries that are currently building new nuclear power plants, and countries that are pursuing new nuclear power programs:

Notice that the countries that have expressed interest in developing nuclear power programs tend to be in more geopolitically tense regions like the Middle East, Africa, and Southeast Asia. Maria Korsnick, CEO of the Nuclear Energy Institute, highlights why this matters for US foreign relations: when the US, or anyone else, builds a nuclear reactor in a new country, they’re establishing a 100-year or more relationship with that country, from siting and building the plant through operation, servicing, and decommissioning.
 

When the US, or anyone else, builds a nuclear reactor in a new country, they’re establishing a 100-year or more relationship with that country.


In our recent report How to Make Nuclear Innovative, we also highlight the long-standing trend of North America and Europe's declining leadership on nuclear R&D, as measured by patenting. You can see this geographic shift by looking at nuclear R&D funding by country over the last few decades as well.

A recent report from the think tank Third Way offers several concrete solutions to this downturn, including creating a senior-level position in the White House to oversee nuclear exports, reauthorizing the US Export-Import Bank and filling its vacant board seats, and increasing and sustaining funding for nuclear innovation through the Gateway for Accelerated Innovation in Nuclear (GAIN) program and the Nuclear Regulatory Commission.

Finally, the GNI report concludes that innovative nuclear policy will require novel, “break the mold” partnerships. Our 2014 report High-Energy Innovation, published with the Consortium for Science, Policy & Outcomes at Arizona State, makes a similar recommendation, calling on the US and European countries to expand their international collaborations on energy RD&D to both accelerate commercialization and to ensure that these technologies are developed where demand is highest.

If the world is to meet the twin goals of reducing poverty through significantly expanded energy access along with drastically reducing greenhouse gas emissions, we’re going to need a lot more nuclear in a lot of new countries. The US isn’t currently positioned to lead on this kind of global nuclear development, but could, with the right policies, investments, and partnerships, regain such a role.

Demons Under Every Rock


The tendrils of the conspiracy slowly seem to reach into all corners of the community, culminating with the girls announcing the interrogators themselves to be part of the cult that had abused them. As the case begins to unravel, a social psychologist from Berkeley is brought in to investigate what had gone wrong. The “false memories,” he concludes, had been manufactured through group pressure and persuasion, building an increasingly elaborate—and increasingly social—narrative far removed from the events on the ground.

This disturbing and memorable story has kept coming back to me over the last few years, as a cadre of climate activists, ideologically motivated scholars, and sympathetic journalists have started labeling an ever-expanding circle of people they disagree with as climate deniers.

Climate change, of course, is real and demons are not. But in the expanding use of the term “denier,” the view of the climate debate as a battle between pure good and pure evil, and the social dimensions of the narrative that has been constructed, some quarters of the climate movement have begun to seem similarly unhinged.

Not so long ago, the term denier was reserved for right-wing ideologues, many of them funded by fossil fuel companies, who claimed that global warming either wasn’t happening at all or wasn’t caused by humans. Then it was expanded to so-called “lukewarmists,” scientists and other analysts who believe that global warming is happening and is caused by humans, but either don’t believe it will prove terribly severe or believe that human societies will prove capable of adapting without catastrophic impacts.

As frustration grew after the failure of legislative efforts to cap US emissions in 2010, demons kept appearing wherever climate activists looked for them. In 2015, Bill McKibben argued in the New York Times that anyone who didn’t oppose the construction of the Keystone pipeline, without regard to any particular stated view about climate change, was a denier.

Then in December 2015, Harvard historian and climate activist Naomi Oreskes expanded the definition further. “There is also a new, strange form of denial that has appeared on the landscape of late,” Oreskes wrote in the Guardian, “one that says that renewable sources can’t meet our energy needs. Oddly, some of these voices include climate scientists, who insist that we must now turn to wholesale expansion of nuclear power.”

Oreskes took care not to mention the scientists in question, for that would have been awkward. They included Dr. James Hansen, who gave the first congressional testimony about the risks that climate change presented the world, and has been a leading voice for strong, immediate, and decisive global action to address climate change for almost three decades. The others—Kerry Emanuel, Ken Caldeira, and Tom Wigley—are all highly decorated climate scientists with long and well-established histories of advocating for climate action. The four of them had travelled to the COP21 meeting in Paris that December to urge the negotiators and NGOs at the meeting to embrace nuclear energy as a technology that would be necessary to achieve deep reductions in global emissions.

So it was only a matter of time before my colleagues and I at the Breakthrough Institute would be tarred with the same brush. In a new article in the New Republic, reporter Emily Atkin insists that we are “lukewarmists.” She accuses us of engaging in a sleight of hand “where climate projections are lowballed; climate change impacts, damages, and costs are underestimated” and claims that we, like other deniers, argue “that climate change is real but not urgent, and therefore it’s useless to do anything to stop it.”

None of these claims is true. For over a decade, we have argued that climate change is real, carries the risk of catastrophic impacts, and merits strong global action to mitigate carbon emissions. We have supported a tax on carbon, the Paris Agreement, and the Clean Power Plan, although we have been clear in our view that the benefits of these policies would be modest. We have supported substantial public investment in renewables, energy efficiency, nuclear energy, and carbon capture and storage.

Atkin’s story initially simply linked to our Wikipedia page. When I pointed this out to TNR executive editor Ryan Kearney and asked for a correction, he instead added further links that he claimed showed us to be “lukewarmists.” Of those, two link to criticisms of our work on energy efficiency rebound. One links to two footnotes in a book by climate scientist Michael Mann, neither of which is material to the claim. Another links to a blog post that criticizes our view that An Inconvenient Truth contributed to the polarization of public opinion about climate change. The last makes the demonstrably false claim that the George and Cynthia Mitchell Foundation is our primary funder.1

These sorts of attacks, supported by multiple layers of links that never actually materially support the claims that are being made, used to be the domain of a small set of marginal activists and blogs. Atkin herself cut her teeth at Climate Progress, where her colleague Joe Romm has spent over a decade turning ad hominem into a form of toxic performance art.2

But today, these misrepresentations are served up in glossy, big-budget magazines. Climate denial has morphed, in the eyes of the climate movement, and their handmaidens in the media, into denial of green policy preferences, not climate science.

“The ‘moral argument’ for fossil fuels has collapsed. But renewables denial has not,” McKibben wrote in Rolling Stone last January. “It’s now at least as ugly and insidious as its twin sister, Climate Denial. The same men who insist that the physicists are wrong about global warming also insist that sun and wind can’t supply our energy needs anytime soon.”

“We can transition to a decarbonized economy,” Oreskes claimed in the Guardian, “by focusing on wind, water and solar, coupled with grid integration, energy efficiency and demand management.”

This newfangled climate speak is based on newfangled energy math. Oreskes and McKibben, like much of the larger environmental community, rely heavily these days on the work of Mark Jacobson, a Stanford professor whose work purports to show that the world can be powered entirely with existing renewable energy technologies. Jacobson’s projections represent an extreme outlier. Even optimistic outfits, like the National Renewable Energy Laboratory, conclude that reaching even 80% renewable energy would be technically and economically very difficult.

Advocates, of course, will be advocates. But the fact that those claims are now uncritically repeated by journalists at once-respectable publications like the New Republic speaks to how far our public discourse has fallen, and how illiberal it has become. Fake news and alternative facts are not the sole province of the right wing. Inserting links to unhinged bloggers3 now passes for fact checking for a new generation of hyper-aggressive and hyper-partisan journalists. The righteous community of self-proclaimed climate hawks is now prepared to meet the opposition, exaggeration for exaggeration and outrage for outrage.

The continuing escalation of rhetoric by climate advocates, meanwhile, is unlikely to do much to solve climate change. After eight years of excoriating the hard-fought efforts of President Obama and candidate Clinton to make headway on the issue (McKibben in recent years labeled both deniers), we can thank provocateurs like McKibben and Oreskes for helping to put an actual climate denier in the White House.

More broadly, the expansion of the use of denier by both activists and journalists in the climate debate, a word once reserved only for Holocaust denial, mirrors a contemporary political moment in which all opposing viewpoints, whether in the eyes of the alt-right or the climate left, are increasingly viewed as illegitimate. The norms that once assured that our free press would also be a fair press have deeply eroded. Balanced reporting and fair attribution have become road kill in a world where all the incentives for both reporters and their editors are to serve up red meat for their highly segmented and polarized readerships, a dynamic that both reflects and feeds the broader polarization in our polity. It is a development that does not bode well for pluralism or democracy.

Wide-Body Aircraft




The modern commercial aircraft industry, specifically medium and large wide-body aircraft, bears a striking resemblance to the civilian nuclear power industry in both market structure and technological complexity. Both industries are split between the vendors that provide the technology—aircraft manufacturers and reactor developers—and the companies that use and operate these products—airlines and electric utilities. The operators in both industries have thin profit margins: roughly 3% for airlines1 and 10% for investor-owned utilities in the United States. Both markets are heavily concentrated, with fewer than ten major firms. Aircraft production in particular is one of the most concentrated markets in the world (Paul Krugman argues that the market is really only big enough for one firm2,3).

Both industries are also strongly affected by exogenous events. For airlines, accidents and security concerns reduce air travel, as do disease outbreaks, blizzards, and volcanic eruptions. A decline in air travel, whether from an accident or an economic recession, greatly reduces orders for new aircraft. Similarly, a major nuclear power accident leads utilities to cancel planned projects and even prematurely close existing plants. Even an unrelated event like a terrorist attack can reduce demand for nuclear power plants, as they come to be seen as more vulnerable in general.

Both nuclear power and aircraft manufacturers, finally, share large entry costs for new firms, gradual innovation, imperfect competition, and the fact that many countries consider them “strategic” industries,4 which usually coincides with substantial state support.

And yet commercial aviation appears significantly more innovative and successful than commercial nuclear power, with miles traveled increasing every year and costs per mile and per passenger falling since the 1970s. The lessons the nuclear industry can learn are simple but not easy. Market consolidation was a key factor in the success of new aircraft designs, but it worked only because firms had significant state support. And while aircraft benefited from learning-by-doing on the assembly line, it takes thousands of aircraft before firms see a return on their investments; nuclear reactors will therefore need to get a lot smaller to take advantage of similar economies of multiples. Lastly, aircraft can be built in the United States or Europe and flown by most airlines in most countries around the world. Can the global nuclear regulator, the International Atomic Energy Agency, develop a similar level of international regulation and licensing?
 


Read more from the report: 
How to Make Nuclear Innovative

 

 

Brief History of the Commercial Aircraft Industry

Commercial aviation took off after the Second World War, as surplus military aircraft were converted to transport passengers and cargo. Turbojet aircraft were independently invented in the United Kingdom and Germany in the late 1930s, but it wasn’t until 1952 that the first commercial jet service launched, flown by the state-owned British Overseas Airways Corporation (BOAC). While the introduction of jet aircraft affords a very useful study in disruptive innovation,5 this case study will focus on the more recent system of innovation for commercial wide-body jet aircraft, as it bears the most similarities to today’s nuclear reactor industry.

The British dominated the commercial turbojet market through the 1950s with their Comet jetliner. But a series of fatal crashes caused BOAC to ground the entire Comet fleet, opening space for Boeing to enter the jetliner market with its 707. The novel design of the 707 placed the jet engines underneath the wings, which remains the practice across all commercial jetliner designs today. The 707 proved significantly safer and more fuel-efficient, and Boeing and the other American manufacturers came to dominate commercial aviation for the next thirty years. By the 1970s, American aircraft manufacturers held 90% of the free world’s market.
 



Today, the market for large wide-body aircraft (over 100 seats) is effectively a duopoly between the American Boeing and the European consortium Airbus, each holding ~40-45% of the market, depending on the year.6 Brazil’s Embraer and Canada’s Bombardier are tied for third largest market share. China’s Comac, currently with the fifth largest share of the market, is starting to make gains with heavy state subsidies. This duopoly means that Boeing and Airbus are constantly fighting to gain an advantage over each other in aircraft sales.

In the regional jet market (planes with 30-90 seats), Embraer and Bombardier dominate. Embraer was originally state-owned and was partially privatized in 1994, though the state still owns 51%. New entrants in the regional jet market include Russian, Japanese, Chinese, and Indian firms, all receiving substantial state aid. Even Bombardier is receiving state aid for its CSeries.7
 

Market Trends

From the public’s perspective, the aircraft industry appears very innovative because the cost of flying has declined so dramatically over the last few decades. However, most of these cost declines have resulted from the way airlines were operated rather than the aircraft technology employed. The major factor was airline deregulation, which began in the United States in 1978 and forced airlines to streamline their businesses and compete for passengers. Deregulation also led to the bankruptcy of several major airlines. Aircraft represent a major investment for airlines, which compete to get the lowest prices and then plan to operate their aircraft for decades.

After deregulation, costs were cut sharply through improved operations and better methods for filling seats. Today, the major remaining cost to airlines is fuel, which now represents up to 50% of airline operating expenses. Therefore, upgrading an airline fleet to more fuel-efficient aircraft is the simplest way for an airline to reduce costs, although it is also the most cash intensive. Innovation in aircraft design over the last two decades, as a result, has tended to focus on fuel efficiency.

Airlines have also been able to reduce costs by fine-tuning the size of the aircraft in their fleet and on particular routes. In general, increasing the size of an airplane reduces costs through greater efficiency in terms of fuel per seat and seats per flight. On the other hand, larger planes lose out on flexibility in terms of routes, flight frequency, and passenger preference.

For a long time, the trend was toward bigger and bigger jets. In the early ’90s, Boeing and Airbus formed a consortium to explore a very large aircraft; they hoped to jointly produce it and share the very limited market. Boeing eventually pulled out, and the story of the Airbus A380 has become a cautionary tale.8 More recently, airlines (and aircraft manufacturers) have been converging on aircraft with 160 seats as the optimal size. Finding the right size for an aircraft is a strategy the nuclear industry should learn from, although there might be different optimal sizes for different markets.
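The sizing tradeoff can be made concrete with a toy calculation. The per-flight costs and seat counts below are illustrative assumptions, not industry data; the point is only that a larger aircraft wins on cost per seat when full, but loses to a smaller one when it flies half-empty:

```python
# Illustrative toy model of the aircraft-sizing tradeoff.
# All dollar figures and seat counts here are made-up assumptions.

def cost_per_passenger(trip_cost, seats, load_factor):
    """Operating cost per passenger actually carried on one flight."""
    passengers = seats * load_factor
    return trip_cost / passengers

# A hypothetical 160-seat narrow-body vs. a hypothetical 550-seat jumbo.
small = cost_per_passenger(trip_cost=40_000, seats=160, load_factor=0.90)
large = cost_per_passenger(trip_cost=110_000, seats=550, load_factor=0.90)
large_half_empty = cost_per_passenger(trip_cost=110_000, seats=550, load_factor=0.50)

print(f"160-seat plane, 90% full: ${small:,.0f} per passenger")        # ~$278
print(f"550-seat plane, 90% full: ${large:,.0f} per passenger")        # ~$222
print(f"550-seat plane, 50% full: ${large_half_empty:,.0f} per passenger")  # $400
```

The jumbo is cheaper per passenger only if the airline can reliably fill it, which is exactly the flexibility problem that pushed the market toward mid-sized aircraft.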

To give some perspective on the size of the aircraft industry, Boeing has estimated the demand for aircraft (from all manufacturers) over the next fifteen years.9 The last column shows the average catalog price of a plane in each category. Boeing estimates the total value of planes demanded over the next fifteen years will be $5,200 billion.


 

R&D Budgets

Over the last decade, Boeing’s annual R&D budget has been between $3 and $7 billion, or between 3% and 10% of its total annual revenue. From 2000 to 2004, Airbus outspent Boeing on R&D by 100% ($8 billion compared to Boeing’s $4 billion).10 More recently, though, Boeing has been outspending Airbus, with Boeing spending $3-$7 billion annually11 and Airbus spending only $2.7 billion in 2013.12 It has been suggested that Boeing draws on R&D funded as part of its defense contracts, and that Airbus relies more on R&D from universities and national labs. In 2014, revenue from Airbus’s commercial aircraft division was $46 billion,13 while Boeing’s revenue was about $86 billion.


Nuclear and Aviation: An Industry Comparison

Compared with commercial jets, the nuclear industry is less concentrated. While there are only seven reactor developers building around the world today, the largest market share is only 28%, shared by Rosatom and the China General Nuclear Power Group. Following these big two, Westinghouse Electric Company and the Korea Electric Power Corporation each have around 12% of new builds.14 Areva NP, the Nuclear Power Corporation of India, and GE-Hitachi, finally, each have around 6% of the reactors under construction globally.
 



While there are more competitors, the market for nuclear power plants is much smaller. Boeing delivered 748 jets in 2016, at a market value of $94 billion.15 In comparison, only ten nuclear reactors came online in 2016. Over the last ten years, only about five nuclear reactors have come online each year, each with a price tag of $2-$5 billion.
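A quick back-of-envelope calculation on the figures cited above shows just how lopsided the two markets are:

```python
# Back-of-envelope market comparison, using only the figures cited in the text.

# Aviation: Boeing alone delivered 748 jets in 2016, valued at $94 billion.
boeing_deliveries_2016 = 748
boeing_value_2016 = 94e9
avg_jet_value = boeing_value_2016 / boeing_deliveries_2016  # ~$126 million per jet

# Nuclear: roughly 5 reactors come online per year, at $2-5 billion each.
reactors_per_year = 5
nuclear_market_low = reactors_per_year * 2e9    # $10 billion/year
nuclear_market_high = reactors_per_year * 5e9   # $25 billion/year

print(f"Average Boeing jet value: ${avg_jet_value / 1e6:.0f} million")
print(f"Annual nuclear new-build market: "
      f"${nuclear_market_low / 1e9:.0f}-{nuclear_market_high / 1e9:.0f} billion")
print(f"Boeing alone is {boeing_value_2016 / nuclear_market_high:.1f}-"
      f"{boeing_value_2016 / nuclear_market_low:.1f}x larger")
```

By this rough estimate, one aircraft manufacturer's annual deliveries are worth roughly four to nine times the entire global market for new reactors.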

While Boeing and Airbus each spend around $3 billion annually on R&D, nuclear companies invest significantly less. Rosatom invests 4.5% of its annual revenues into research, or about $360 million in 2013. The latest figure for Areva is from 2007—they invested about $644 million into R&D, but that was across their products and services. Across all of the OECD, total spending on nuclear fission R&D was less than $1 billion in 2015 (that includes public and private R&D). However, this figure excludes Russia, China, and India, all of which are investing heavily in nuclear.
 

Major Setbacks in Innovation

While Boeing has been the global leader in jet aircraft production for almost sixty years, its business success has been a roller coaster. Aircraft production is very capital intensive, and demand follows broad economic trends in addition to responding to exogenous events like terrorist attacks, high-profile crashes, and even volcanic eruptions. Boeing had to cut payroll significantly in the 1920s, during the Great Depression, after the Second World War, in the 1960s, and in the 1970s. After 9/11, Boeing nearly halved its payroll as the entire aviation industry struggled. And while Boeing took several business risks that ultimately paid off, other innovative firms were not so successful.
 

The Concorde (and the Tupolev Tu-144)

Since the 1950s, many countries have expressed interest in supersonic transport jets. However, the costs to develop such an aircraft were expected to be huge. The major US aircraft manufacturers at the time—Boeing, McDonnell Douglas, and Lockheed—decided it was too great an investment and didn’t pursue the technology. However, a British-French alliance formed in 1961 to develop the world’s first supersonic passenger jet, and it successfully brought the plane—known as the Concorde—into commercial operation in 1976, at a joint cost of $1.3 billion. The Anglo-French team had to overcome significant technical challenges in metallurgy, structural integrity, and cooling of the heated airframe. But eventually a commercial aircraft was produced that could travel at twice the speed of sound, and at such high altitudes that passengers could see the curvature of the Earth. However, while the plane succeeded on technical grounds—it cut the flight time from London to New York almost in half—it failed on economic and political grounds.

Many accused the Americans of intentionally trying to thwart the success of the Concorde, since they had abandoned their own supersonic transport program, but the truth is that the Concorde faced many self-inflicted problems. Despite its supersonic speeds, the plane had a short flying range and poor fuel economy, meaning that it had to stop frequently for refueling. For example, the Concorde had to make two refueling stops between London and Sydney, while the subsonic Boeing 747 could make the trip nonstop, which meant the 747 actually got there faster overall. And the Boeing 747, a new plane itself, cost 70% less per passenger-mile to fly.16 Noise was a nontrivial issue, both on takeoff and from the boom created when the aircraft went supersonic. Several countries would not allow the Concorde to fly over their airspace, which limited the available routes primarily to those over oceans. Over 100 Concordes were originally ordered, but only 20 were ever built, and only 14 were actually delivered to British and French airlines. In 2000, the Concorde suffered its first and only crash, killing all 100 passengers, 9 crew members, and 4 people on the ground. Following the crash and a market-wide slump in air travel after 9/11, British Airways and Air France announced the retirement of the Concorde in 2003.

The Russians developed their own supersonic transport, the Tupolev Tu-144, which was largely a copy of the Concorde; but after it crashed at the Paris Air Show in 1973, it was flown only for cargo transport within Russia.


Role of the State

Because of aviation’s strategic importance to state militaries, the industry has always benefited from strong state support in one form or another. Boeing got a big boost from the government in 1929, when Congress passed a bill requiring the US Postal Service to fly mail on private planes between cities. Boeing’s R&D has also benefited from decades of defense and NASA contracts, where it can shift profits from its military programs to fund development in its commercial programs.

While European aviation firms started out ahead of the Americans, their diversity of small national companies had limited runs of most aircraft lines. To compete with the Americans, a consortium of British, French, and West German aviation companies formed Airbus Industrie in 1970. In the late 1990s, an even larger group of European civil and defense aerospace firms merged to form the European Aeronautic Defence and Space Company (EADS). Airbus receives research and development loans from various EU governments that have very generous terms, often not requiring repayment unless the new jet is a commercial success.

Both Boeing and Airbus accuse the other of taking illegal subsidies and violating World Trade Organization (WTO) rules. A bilateral EU-US agreement from 1992 was meant to rein in state support for aviation by laying ground rules. However, in 2010, the WTO ruled that Airbus had received improper subsidies in the form of loans at below-market rates. Just a year later, the WTO also ruled that Boeing had violated the rules by receiving direct local and federal aid, including tax breaks.


Importance of Intellectual Property

Because of the strong competition between Boeing and Airbus, intellectual property protection is very important. Boeing decides whether to patent a technology based on how visible it is and how easily it can be reverse engineered. If a technology is not visible on their aircraft and is difficult to reverse engineer, they won’t apply for a patent, instead keeping it as a trade secret.17 Boeing also doesn’t patent any technology they develop for military applications. Boeing is aggressive in licensing their patents to noncompeting industries, such as automotive manufacturing, and they consider their patents a valuable asset.

One of the major technological innovations that defines the Boeing 787 is the use of carbon composite materials. These composites are made of carbon fibers reinforced by an epoxy resin and are very difficult to manufacture. Because they have joint applications in missile fabrication, Boeing will not share the technology.18 Due to the limited number of potential suppliers for novel materials, and the large expected demand, Boeing entered into a twenty-plus-year contract with the world’s largest producer of carbon fiber. Such long-term supplier contracts are common in aviation to guarantee quality and maintain regulatory compliance. They also help protect IP, as suppliers are precluded from contracting with other firms.


Regulator Comparison

The federal body that regulates aircraft manufacturing and operations, the Federal Aviation Administration (FAA), is actually quite similar to the Nuclear Regulatory Commission (NRC) in many ways. The FAA primarily certifies aircraft designs for “airworthiness” by issuing rules on aircraft design and production. Additionally, the FAA certifies aircraft production facilities (and performs quality control over time) as well as component and materials production facilities. Unlike the NRC, the FAA also spends a lot of time certifying those who operate aircraft: airlines, pilots, aircraft mechanics, repair stations, air traffic controllers, and airports.19 While some ask how nuclear could be regulated more like aviation, such that innovation is encouraged, others argue that aviation should be regulated more like nuclear, to place a greater emphasis on reducing fatalities.20 But there are many intrinsic differences between the two industries that require different kinds of regulation.

One of the major differences is that up until 1998, the FAA was in charge of both regulating and promoting air travel. In contrast, the Atomic Energy Commission (AEC) lost this dual designation for nuclear power in 1974. Because of this dual role, the FAA was more concerned with the cost-benefit analysis of new safety regulations, and whether they would impose unnecessary financial burdens on airlines.21

In a 1997 New York Times analysis, Matt Wald argues that the main reason for these differences is consumer choice.22 You can choose which airline to fly and even which aircraft you fly on, but you can’t really choose where you get your electricity. And while utilities can choose whether or not to build a nuclear power plant, they usually can’t opt to get their power from a different nuclear plant down the road if their local reactor is underperforming. Hence, nuclear power operators have collaborated much more in setting standards and sharing best practices.
 



Since airlines are publicly traded companies, the FAA has a strong incentive not to highlight accidents, safety violations, or underperformance, as doing so would hurt the reputation—and stock price—of the specific airline; the NRC, in contrast, regularly publishes small accidents, safety violations, and performance records for all plants.23 There is also a much larger bureaucracy for regulating nuclear: for every federal employee of the FAA there are 71 regulated employees in the airline industry, whereas in the nuclear industry there are only 6 regulated employees for each federal employee at the NRC.24

Another major difference is that the airlines were deregulated in 1978. While some utilities started deregulating in the 1990s, over 20 states still have regulated energy markets. Both before and after airline deregulation, many worried that it would create a moral hazard with regard to safety. Indeed, airlines that spent less per flight on safety had a higher frequency of accidents, a problem even more noticeable among airlines in financial trouble.25 Nuclear power, on the other hand, seemed to improve in performance and safety under deregulation,26 although deregulation has made it more difficult to build new nuclear power plants. Meanwhile, competition among airlines has seemed to spur innovation among aircraft manufacturers, as airlines demand the newest and most efficient aircraft models to stay competitive.

However, this lax regulation by the FAA should in theory be offset by a certain amount of self-regulation by airlines and aircraft manufacturers, precisely because airlines are publicly traded and consumers have so much choice in air travel. If an airline has an accident, this is almost instantly reflected in a drop in its stock value. Still, both airlines and nuclear power operators see relatively muted market responses to high-publicity accidents compared with industries overseen by less rigorous regulatory bodies like the Food and Drug Administration, the Occupational Safety and Health Administration, and the Mine Safety and Health Administration. And there is a big difference in how much each industry is willing to pay to prevent fatalities. A 1986 report estimated that many of the recent FAA safety regulations would cost airlines about $700 (1986 US dollars) per life saved.27 In comparison, in 1995, the NRC set a new value of $1,000 to prevent one person-rem of radiation exposure,28 a dose that will not result in a fatality.

At the international level, air travel is regulated by a UN agency, the International Civil Aviation Organization. Flights and aircraft are governed by a set of standards and best practices referred to as ETOPS (extended operations), which were based on a mix of current FAA policy, industry best practices and recommendations, and international standards. Most interestingly, ETOPS is to some extent a performance-based standard. Originally, airliners were tested and certified to an ETOPS-180 standard, meaning they had to prove the aircraft could fly for 180 minutes and land with only one of its two engines functioning. For its first year of flight, the airliner’s routes always stayed within 180 minutes of a certified landing strip for just this purpose. But after 12 to 18 months, if the aircraft had performed as expected, it might get approval to extend to ETOPS-240 and later ETOPS-360. These standards were developed with heavy input from Boeing and Airbus, the airlines, international regulators, and even the Air Crash Victims Families Group. This broad stakeholder engagement on safety led to regulations that serve the dual purpose of protecting passengers while allowing airlines efficient long-range flights.
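The performance-based logic described above can be sketched as a simple rule: an aircraft's ETOPS rating caps how far (in single-engine flying minutes) any point on its route may be from a diversion airport, and the rating is extended stepwise as the aircraft builds a clean in-service record. This is a deliberately simplified illustration, not the actual certification procedure, and the extension rule below is a toy assumption:

```python
# Illustrative sketch of ETOPS-style route checking (simplified; not the real rules).

ETOPS_LADDER = [180, 240, 360]  # minutes; ratings extend stepwise per the text


def route_allowed(rating_minutes, max_diversion_minutes):
    """A route is allowed if every point on it stays within the aircraft's
    ETOPS rating of a usable diversion airport (single-engine flying time)."""
    return max_diversion_minutes <= rating_minutes


def next_rating(current_rating, months_in_service, incident_free):
    """Toy extension rule: after roughly a year of clean service,
    the operator may apply for the next rung on the ladder."""
    if months_in_service >= 12 and incident_free:
        higher = [r for r in ETOPS_LADDER if r > current_rating]
        return higher[0] if higher else current_rating
    return current_rating


# A new twinjet certified at ETOPS-180:
print(route_allowed(180, 170))  # route at most 170 min from diversion: allowed
print(route_allowed(180, 200))  # remote route 200 min out: not allowed
print(next_rating(180, 14, incident_free=True))  # eligible to move up to 240
```

The key design idea the sketch captures is that the regulation rewards demonstrated performance rather than prescribing aircraft design up front.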
 


2015 was the safest year on record for the airlines, with the lowest number of fatalities from accidents. (Source: The Telegraph, “2015 was the safest year in aviation history,” January 6, 2016, http://www.telegraph.co.uk/travel/news/2015-was-the-safest-year-in-aviation-history/.)


Despite the seemingly lax regulation of airline safety, the number of fatalities from aircraft accidents has been declining for decades.29 Considering the growth of commercial airline travel over this period, the relative probability of a fatality has decreased dramatically. It is worth noting, though, that the number of annual aviation fatalities is still three orders of magnitude greater than the number due to commercial nuclear power.


Major Innovation Success Stories

Most of the innovation taking place in aviation is incremental, with minor improvements in aircraft weight, fuel efficiency, and performance. While many of these incremental innovations have had significant effects on the cost and operability of commercial aviation, below are more detailed case studies of significant (and successful) major innovations in modern aviation.
 

Jet Turbines

The aircraft industry successfully managed a major technological change with the introduction of turbojets as an alternative to propeller planes. Turbojets were much preferred for commercial airlines because they were faster and came with much less turbulence, meaning more comfort for passengers. But turbojets were originally the pipe dream of aerodynamics scientists, not aviation engineers, and they remained a purely academic exercise for almost a decade, with most aircraft manufacturers predicting they would never be practical because their fuel consumption was so high. However, turbojets took a big leap forward as a result of World War II and the invention of radar. Before radar could detect incoming bombers, large prop planes had to patrol the skies for long periods, ready to attack—so fuel efficiency was a matter of life and death. With radar, planes could sit on the tarmac and take off only when a bomber was detected; as a result, take-off speed and flight speed replaced fuel efficiency as the critical concerns. The turbojet filled this niche perfectly.30 The Royal Air Force spent significant money to research and develop turbojets, which led to rapid prototyping, testing, and deployment. After the war, the American government dumped surplus aircraft into the commercial market at drastically reduced prices, which created a boom in commercial air travel.

But turbojets were still not an immediate commercial success. Military turbojets flew very little and sat stationary most of the time; commercial jets would be flown almost continuously to maximize profit. Military jets were therefore not well suited to commercial use: many components broke quickly and materials wore out within a few months, leading to accidents and expensive repairs. In 1945, the major airlines created an international cartel to keep ticket prices artificially high, which gave airlines a buffer to purchase airplanes with high upfront costs. These bigger planes often flew only half-full, and airlines were losing money; they had to introduce new market mechanisms like economy class tickets to attract passengers and turn a profit with these bigger planes (similar to how large nuclear plants need to run 24/7 to be profitable). The high speeds of turbojets required longer, concrete runways to replace grass fields. And the noise of these aircraft required them to fly at high altitudes, which meant cabins now had to be pressurized.


Fly-By-Wire

The Airbus A320 was the first airliner to fly with an all-digital fly-by-wire control system, a technology originally developed for military aircraft and used on experimental space shuttle flights. Digital fly-by-wire technology essentially enables planes to be flown by computers. The workload for pilots is simplified and reduced, and many adjustments are made automatically. This has reduced the weight and complexity of mechanical control systems and has also improved the safety and performance of airliners, as it reduces human error.
 

Boeing 787 Dreamliner

In the late 1990s, Boeing was investigating new airplanes to offset the sluggish demand for their 767 and 747-400. Initially, they were aiming for much faster airplanes (Mach 0.98), but after the 9/11 attacks most airlines were focused on reducing costs. In 2003, Boeing announced the development of a radically more fuel-efficient wide-body aircraft that would be available in 2007.

The main innovation Boeing took advantage of for the 787 was replacing aluminum with composite materials in many of the plane’s components, most importantly the wings. These composites allowed wing shapes with superior aerodynamics that were not possible with conventional materials. Together, these innovations reduce fuel use by about 20% and shrink the aircraft’s noise footprint by about 60%. Noise might not seem like a big deal, but it affects which airports planes can use and how fast they can go when taking off and landing. The improved fuel efficiency not only reduces costs for airlines, but also greatly extends the range of passenger routes and increases the load that cargo planes can carry.

The total program cost $32 billion,31 and it will take time for Boeing to realize a full return on this investment. The very first orders for the 787 were placed in 2004 by a Japanese airline, but the aircraft suffered serious production delays, and the first planes weren’t delivered until 2011, four years late. The main cause of the delay was Boeing’s global supply chain, in which subcomponents are manufactured around the world and flown to Everett, Washington, for final assembly. This process was expected to be revolutionary and to dramatically reduce assembly time, but it has proven just the opposite. The 787 is assembled from subcomponents manufactured in Japan, Italy, South Korea, the United States, France, Sweden, India, and the United Kingdom. This should serve as a warning to the nuclear industry, which has proposed a similar supply chain structure for large modular reactors like the AP1000. A single 787 costs approximately $224 million.

Below is a chart of the orders placed for Boeing 787s and actual deliveries. Boeing has delivered a total of 318 aircraft as of 2015.


Airbus A380

As mentioned above, Airbus began collaborating with Boeing on a very large aircraft in the early 1990s, before Boeing left the partnership. Airbus—a European consortium of aircraft manufacturers—continued with the development of what would become the A380, the world’s largest commercial aircraft. Airbus spent the rest of the ’90s exploring options for its next aircraft and conducting hundreds of focus groups with airlines and passengers. In 2000, Airbus officially announced the start of a $10 billion program to develop the A380, with 50 firm orders from six airlines. The A380 can accommodate 853 passengers and has 40% more floor space than the next largest aircraft, Boeing’s 747.

The A380 employs many of the same innovative materials and technologies as the Boeing 787, although they were developed independently. Unique to the A380 are a central wing box made of composite material and a smoothly contoured wing cross section.32

However, development of the A380 suffered many delays that reduced its economic viability and allowed Boeing to gain market share. Flight tests of the first A380 began in 2005, but a wing failure during structural testing meant the design wasn’t certified until 2007. Production delays resulted from the extremely complex wiring involved (over 500 kilometers of wiring in each aircraft) and from differing wiring standards among the German, Spanish, British, and French component facilities. Because of the A380’s enormous size, Airbus developed a highly specialized supply chain (shown below) to move the gigantic subcomponents by barge, whereas Boeing can fly its subcomponents on its own planes.

The first completed A380 was delivered to Singapore Airlines in 2007, but Airbus has so far delivered only 169 A380s. While the program is no longer operating at a loss, Airbus does not expect to ever recoup its full investment.33 Each A380 has a retail price of $450 million.

Lessons Learned for Nuclear

Modern wide-body aircraft embody a diversity of innovations, both in technology and in practices around manufacturing, supply sourcing, design, and pricing. The predominant theme, however, is that these innovations were driven by customer demands, whether from airlines or from passengers. In a fiercely competitive duopoly, the two major aircraft manufacturers were constantly looking for ways to gain an advantage. Innovation targeted the airlines’ largest expense: fuel. New designs allowed higher airline profits along with greater route flexibility and longer ranges.

Economies of multiples proved much more important than economies of scale. Airlines had to find the right-sized aircraft to strike a balance between economies of scale and business flexibility, and large planes came with many challenges. Most importantly, aircraft manufacturers must sell hundreds to thousands of units before they recoup the cost of investment in a new design (and, in Airbus’s case, may never recoup it at all). But Boeing and Airbus plan for this and structure the retail price of their aircraft accordingly. Airlines are willing to pay a premium for the first deliveries of a new aircraft to please customers. In addition, developing a robust supply chain takes large and consistent demand: Airbus, for example, sold 626 jetliners in 2013 and currently has over 13,000 standing orders across its four major designs.34
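The break-even logic above can be sketched with a toy calculation. The program cost and list price come from the text, but the per-unit margin is not reported anywhere, so the margin fraction below is purely an illustrative placeholder and the result is a rough order-of-magnitude figure, not a real estimate:

```python
# Hypothetical break-even sketch. Program cost and list price are taken from
# the text; the margin fraction is an ASSUMED placeholder, not a reported figure.

def units_to_break_even(program_cost: float, unit_price: float,
                        margin_fraction: float) -> float:
    """Units that must be sold before cumulative per-unit margin covers the program cost."""
    margin_per_unit = unit_price * margin_fraction
    return program_cost / margin_per_unit

# Boeing 787: ~$32 billion program cost, ~$224 million list price.
units = units_to_break_even(program_cost=32e9, unit_price=224e6, margin_fraction=0.20)
print(round(units))  # prints 714: hundreds of aircraft, consistent with the text
```

Even under a generous assumed margin, the required volume lands in the hundreds of aircraft, which is why only large, sustained order books make a new design viable.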
 

The lessons that the nuclear industry can learn are simple but not easy.


Both Boeing and Airbus had firm orders from airlines from the very early stages of aircraft development. This allowed them to receive feedback on what customers wanted and to work on a strict timeline for delivery (both failed to deliver on time, but they still delivered extremely fast compared with the nuclear industry). Airlines are willing to place orders far in advance for two reasons: they trust the manufacturers’ reputations for delivering, and, more importantly, they are competing with every other airline to offer the newest, most efficient airliners. For the nuclear industry to develop this level of advance orders, several major changes would need to occur. First, reactors, or at least their major components, would need to be factory produced and sold at a fixed price. Second, utilities would need a guaranteed delivery date for their orders. Currently, nuclear projects run consistently far behind schedule and over budget, which prevents utilities from adequately planning for future supply.

Globalizing and diversifying the supply chain proved challenging and led to delays, even for Boeing and Airbus, the two giants of the aircraft industry. Large components and novel materials required entirely new manufacturing facilities that had to be built from the ground up around the world. The nuclear industry should take particular note of this experience: China, for example, is trying to indigenize the entire supply chain for its reactor, the CAP-1000.
